CN112135092A - Image processing method - Google Patents

Image processing method

Info

Publication number
CN112135092A
Authority
CN
China
Prior art keywords
image
face
address
images
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010915959.3A
Other languages
Chinese (zh)
Other versions
CN112135092B (en)
Inventor
吕胜伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010915959.3A priority Critical patent/CN112135092B/en
Publication of CN112135092A publication Critical patent/CN112135092A/en
Application granted granted Critical
Publication of CN112135092B publication Critical patent/CN112135092B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, which comprises the following steps: acquiring n frames of images within a statistical period, and selecting, from the n frames of images, m frames of images in which the same target object exists, wherein m is less than or equal to n; selecting a target image corresponding to the target object from the m frames of images, and cutting out an object matting corresponding to the target object from the target image; determining an object background map corresponding to the target object based on the m frames of images; judging whether the object background map has been sent to a storage device within the statistical period; if yes, sending the object matting to the storage device; and if not, sending the object matting and the object background map to the storage device. With this technical scheme, the number of object background maps to be sent is reduced, thereby saving bandwidth resources and storage resources.

Description

Image processing method
Technical Field
The application relates to the technical field of video monitoring, in particular to an image processing method.
Background
Face recognition (also referred to as portrait recognition or facial recognition) is a biometric technology that performs identity recognition based on a person's facial feature information: an image acquisition device is used to acquire an image or video stream containing a face, the face is automatically detected and tracked in the image, and face recognition is then performed on the detected face.
After a face is identified, both the face matting and the face background image are generally retained. For face retrieval and comparison, only the face matting needs to be presented; however, when information such as the time, place and surroundings of the face is needed, the face background image must also be searchable.
In the related art, in order to retain face matting and a face background image, after an image acquisition device acquires an image containing a face, the image acquisition device acquires face matting (i.e. a small image containing the face) from the image, transmits the face matting and the face background image (i.e. a large image containing the face, which may be an image containing the face acquired by the image acquisition device) to a storage device, and the storage device stores the face matting and the face background image.
In the above manner, since the image acquisition device needs to transmit both the face matting and the face background image to the storage device, a large amount of the image acquisition device's bandwidth resources is occupied. Since the storage device needs to store both the face matting and the face background image, a large amount of the storage device's storage resources is occupied.
For example, if 10 faces exist in a scene, the image acquisition device needs to transmit 10 face mattes and 10 face background maps to the storage device, and the storage device needs to store all of them. As the number of faces in the scene increases, the number of images to be transmitted and stored grows: transmitting the images occupies a large amount of bandwidth resources, and storing them occupies a large amount of storage resources.
Disclosure of Invention
The application provides an image processing method, which is applied to image acquisition equipment and comprises the following steps:
acquiring n frames of images in a statistical period, and selecting m frames of images with the same target object from the n frames of images, wherein m is less than or equal to n;
selecting a target image corresponding to the target object from the m frames of images, and intercepting an object cutout corresponding to the target object from the target image;
determining an object background map corresponding to the target object based on the m-frame images;
judging whether the object background map has been sent to a storage device within the statistical period;
if yes, sending the object matting to the storage device;
and if not, sending the object matting and the object background map to the storage device.
In a possible embodiment, the determining an object background map corresponding to the target object based on the m-frame image includes: for each frame of the m frames of images, determining a score value corresponding to each object in the image, and determining a score value corresponding to the image based on the score value corresponding to each object;
selecting an image with the highest score value based on the score value corresponding to each frame of image in the m frames of images;
and determining an object background image corresponding to the target object based on the image with the highest score value.
In a possible implementation, the determining the score value corresponding to the image based on the score value corresponding to each object includes: determining a score value corresponding to the image based on the median of the score values corresponding to each object; or, the score value corresponding to the image is determined based on the average value of the score values corresponding to each object.
In a possible implementation, the selecting a target image corresponding to the target object from the m frames of images includes:
and determining a score value corresponding to the target object in each frame of the m frames of images, and determining the image with the highest score value as the target image corresponding to the target object.
In one possible implementation, after sending the object matte and the object background map to the storage device, the method further includes:
receiving a matting address and a background map address returned by the storage device; wherein the matting address is used for representing the storage address of the object matting in the storage device, and the background map address is used for representing the storage address of the object background map in the storage device;
and sending the matting address, the background map address and the alarm information of the target object to a service platform, and recording the matting address and the mapping relation between the background map address and the alarm information by the service platform.
In one possible implementation, after the sending the object matte to the storage device, the method further includes:
receiving a matting address returned by the storage device, and locally inquiring a background map address of the object background map; wherein the matting address is used for representing the storage address of the object matting in the storage device, and the background map address is used for representing the storage address of the object background map in the storage device;
and sending the matting address, the background map address and the alarm information of the target object to a service platform, and recording the matting address and the mapping relation between the background map address and the alarm information by the service platform.
In one possible embodiment, the method further comprises: in the statistical period, when the object background image is sent to the storage device every time, recording the mark of the object background image in the mapping table; when the statistical period is over, deleting the marks of all the object background images recorded in the mapping table;
judging whether the object background image is sent to a storage device in the statistical period or not, wherein the judging step comprises the following steps:
judging whether the mapping table has the mark of the object background image or not;
if yes, determining that the object background image is sent to a storage device in the statistical period;
if not, determining that the object background image has not been sent to the storage device within the statistical period.
In one possible implementation, the target object comprises a target face, the object matting comprises face matting, and the object background image comprises a face background image.
According to the technical scheme, in the embodiment of the application, for the selected object matting and object background map, it is judged whether the object background map has already been sent to the storage device; if yes, only the object matting is sent to the storage device, and if not, both the object matting and the object background map are sent to the storage device. In this way, the number of object background maps that are sent is reduced, bandwidth resources and storage resources are saved, and the reusability of the object background map is improved, so that a large amount of bandwidth resources of the image acquisition device and a large amount of storage resources of the storage device are no longer occupied. For example, if 10 objects exist in the scene, the image acquisition device may only need to transmit 10 object mattes and 1 object background map to the storage device, and the storage device only stores 10 object mattes and 1 object background map, thereby avoiding occupying a large amount of bandwidth resources and storage resources.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments of the present application or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art according to the drawings of the embodiments of the present application.
FIG. 1 is a schematic diagram of an application scenario in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an image processing method according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating an image processing method according to an embodiment of the present application;
FIG. 4 is a flow diagram illustrating an image processing method according to an embodiment of the present application;
fig. 5 is a schematic diagram of an image processing procedure in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
Referring to fig. 1, which is a schematic view of an application scenario of the embodiment of the present application, a system (e.g., a video monitoring system) of the application scenario may include an image capturing device 110, a storage device 120, and a service platform 130.
The image capturing device 110 is a terminal device having an image capturing function, such as a video camera, a snapshot camera, and the like, and the type of the image capturing device 110 is not limited as long as an image can be captured. The number of the image capturing devices 110 may be plural, and one image capturing device is illustrated as an example. The image capturing device 110 is connected to the storage device 120, and the image capturing device 110 is connected to the service platform 130.
The storage device 120 is a device having an image storage function and is configured to store the images uploaded by the image acquisition device 110, for example a cloud storage device, a storage server, an IP-SAN (Storage Area Network), and the like; the type of the storage device 120 is not limited as long as images can be stored.
The service platform 130 is a device with a service function, and is used for processing a user service.
Based on the application scenario, an embodiment of the present application provides an image processing method, which is shown in fig. 2 and is a schematic flow chart of the image processing method, where the method is applied to an image acquisition device, and the method includes:
step 201, acquiring n frames of images in a statistical period, and selecting m frames of images with the same target object from the n frames of images, wherein m is less than or equal to n, and both m and n are positive integers.
For example, the length of the statistical period may be configured; if the length of the statistical period is 1 second, the 1st second is statistical period 1, the 2nd second is statistical period 2, the 3rd second is statistical period 3, and so on.
In one possible implementation, all the images acquired by the image acquisition device in the statistical period may be buffered, and these images are taken as n frames of images in the statistical period. For example, when 100 frames of images are acquired in the statistical period 1, the 100 frames of images are taken as n frames of images in the statistical period 1, and the subsequent steps are performed based on the 100 frames of images. When 80 frames of images are acquired in the statistical period 2, the 80 frames of images are taken as n frames of images in the statistical period 2, and the subsequent steps are executed based on the 80 frames of images, and so on.
In another possible embodiment, in order to save the storage space of the image capturing device, an upper limit value of n, such as 50, may be agreed, based on which the images captured by the image capturing device in the statistical period may be buffered, and the number of buffered images does not exceed 50, and these images are taken as n frames of images in the statistical period.
For example, when 80 frames of images are acquired in any statistical period (e.g., statistical period 1), only 50 frames of images in the 80 frames of images (e.g., any 50 frames of images in the 80 frames of images, or the last 50 frames of images in the 80 frames of images, or the first 50 frames of images in the 80 frames of images, etc., without limitation) are buffered, the buffered 50 frames of images are used as n frames of images in the statistical period 1, and subsequent steps are performed based on the 50 frames of images.
When 40 frames of images are acquired in any statistical period (such as statistical period 2), the 40 frames of images can be buffered, the buffered 40 frames of images are used as the n frames of images in statistical period 2, and the subsequent steps are executed based on the 40 frames of images. Alternatively, 10 frames of images from statistical period 1 (for example, the last 10 frames of statistical period 1) are retained, the 40 frames of images collected in statistical period 2 are buffered, the 50 buffered frames of images are used as the n frames of images in statistical period 2, and the subsequent steps are executed based on the 50 frames of images.
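For illustration only, the bounded per-period buffering described above could be sketched as follows (Python; the PeriodBuffer name, the 50-frame upper limit and the carry-over option are illustrative assumptions, not requirements of the embodiment):

```python
from collections import deque

MAX_FRAMES_PER_PERIOD = 50  # assumed upper limit of n, as in the example above

class PeriodBuffer:
    def __init__(self, max_frames=MAX_FRAMES_PER_PERIOD):
        # A deque with maxlen keeps only the most recent frames, i.e. the
        # "last 50 frames" variant mentioned in the text.
        self.frames = deque(maxlen=max_frames)

    def add_frame(self, frame):
        self.frames.append(frame)

    def start_new_period(self, carry_over=0):
        # Optionally retain the last `carry_over` frames of the previous period,
        # as in the statistical period 2 example above.
        kept = list(self.frames)[-carry_over:] if carry_over else []
        self.frames.clear()
        self.frames.extend(kept)

    def n_frames(self):
        return list(self.frames)
```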
After n frames of images within the statistical period are obtained, a plurality of objects can be identified from the n frames of images, and each object is taken as a target object. For example, suppose the n frames of images are 4 frames of images, denoted image a1 to image a4. Assuming that image a1 to image a3 each include object 1 and object 2, and image a4 includes object 2 and object 3, then object 1, object 2 and object 3 can be identified from the 4 frames of images.
When the object 1 is taken as a target object, 3 frame images in which the object 1 exists, that is, an image a1, an image a2, and an image a3, are selected from the 4 frame images. When the object 2 is a target object, 4 frame images in which the object 2 exists, that is, an image a1, an image a2, an image a3, and an image a4 are selected from the 4 frame images. When the object 3 is a target object, 1 frame image in which the object 3 exists, that is, the image a4 is selected from the 4 frame images.
In summary, for each target object, m frames of images in which the target object exists may be selected from n frames of images, where m is smaller than or equal to n, and a description will be given by taking one target object as an example.
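As an illustrative sketch of step 201 (not part of the claimed method, and assuming that per-frame object detection results are already available), the m frames containing each target object could be collected as follows:

```python
from collections import defaultdict

def frames_per_object(detections):
    """detections: dict {frame_id: list of object identifiers detected in that frame}.
    Returns, for each object, the list of frames in which it appears."""
    frames_of = defaultdict(list)
    for frame_id, object_ids in detections.items():
        for obj in object_ids:
            frames_of[obj].append(frame_id)
    return dict(frames_of)

# Example matching the text: images a1-a3 contain objects 1 and 2, a4 contains 2 and 3.
detections = {"a1": [1, 2], "a2": [1, 2], "a3": [1, 2], "a4": [2, 3]}
print(frames_per_object(detections))
# {1: ['a1', 'a2', 'a3'], 2: ['a1', 'a2', 'a3', 'a4'], 3: ['a4']}
```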
Step 202, selecting a target image corresponding to the target object from m frames of images with the same target object, and intercepting an object cutout corresponding to the target object from the target image.
For example, for each of m frames of images, a score value corresponding to the target object in the image is determined, and an image with the highest score value is determined as the target image corresponding to the target object.
For example, when determining the score value corresponding to the target object, the score value corresponding to the target object may be determined based on a resolution of the target object in the image and/or a deflection angle of the target object in the image. For example, a resolution score is determined based on the resolution of the target object in the image, and the resolution score is used as a score value corresponding to the target object. Or determining an angle score based on the deflection angle of the target object in the image, and taking the angle score as a score value corresponding to the target object. Or determining a resolution score based on the resolution of the target object in the image, determining an angle score based on the deflection angle of the target object in the image, performing weighted operation on the resolution score and the angle score, and taking the score after weighted operation as a score value corresponding to the target object. Of course, the above are only a few examples of determining the score value corresponding to the target object, and the method is not limited thereto.
When determining the resolution score based on the resolution of the target object in the image, taking the target object as a target face as an example, the resolution of the target face in the image may be obtained (i.e., the target face in the image is cut out, the target face is a sub-image in the image, and the resolution of the target face is determined), and the resolution is recorded as the resolution x 1. The resolution of M × N (e.g., 64 × 64) may be agreed in advance as the optimum resolution, and the resolution score is larger when the difference between the resolution x1 and the optimum resolution is smaller, and the resolution score is smaller when the difference between the resolution x1 and the optimum resolution is larger. For example, the resolution score may be 100 if the resolution x1 is 64 x 64, 99 if the resolution x1 is 63 x 63 (or 65 x 65), 98 if the resolution x1 is 62 x 62 (or 66 x 66), and so on.
When determining the angle score based on the deflection angle of the target object in the image, taking the target object as a target face as an example, the deflection angle of the target face in the image may be obtained, and the deflection angle may be recorded as the deflection angle x 2. It may be agreed in advance to designate the yaw angle (e.g., 0 degrees, i.e., the frontal face) as the optimum yaw angle, the angle score being larger as the difference between the yaw angle x2 and the optimum yaw angle is smaller, and the angle score being smaller as the difference between the yaw angle x2 and the optimum yaw angle is larger. For example, if the deflection angle x2 is 0 degrees, the angular score may be 100, if the deflection angle x2 is 1 degree (e.g., 1 degree left, 1 degree right, 1 degree up, 1 degree down), the angular score may be 99, and so on.
For example, assuming that the object 1 is a target object and the m-frame image includes an image a1, an image a2 and an image a3, for the image a1, a resolution score may be determined based on the resolution of the object 1 in the image a1, an angle score may be determined based on the deflection angle of the object 1 in the image a1, and a corresponding score value 1 of the object 1 in the image a1 may be determined based on the resolution score and the angle score, and similarly, a corresponding score value 2 of the object 1 in the image a2 may be determined, and a corresponding score value 3 of the object 1 in the image a3 may be determined.
Then, the highest score value, such as the score value 1, is determined from the score value 1, the score value 2, and the score value 3, and the image a1 corresponding to the score value 1 is determined as the target image corresponding to the object 1.
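The scoring described above could be sketched, for illustration, as follows; the 64 x 64 optimum resolution and the 0-degree optimum deflection angle are taken from the examples in the text, while the exact scoring curves and the equal weights are assumptions:

```python
OPT_RES = 64     # assumed optimum resolution (M x N = 64 x 64)
OPT_YAW = 0.0    # assumed optimum deflection angle (frontal face)

def resolution_score(width, height):
    # Score decreases as the face resolution moves away from the optimum,
    # matching the 64x64 -> 100, 63x63 -> 99, 62x62 -> 98 example above.
    return max(0.0, 100.0 - max(abs(width - OPT_RES), abs(height - OPT_RES)))

def angle_score(yaw_degrees):
    # Score decreases as the deflection angle moves away from the frontal face.
    return max(0.0, 100.0 - abs(yaw_degrees - OPT_YAW))

def object_score(width, height, yaw_degrees, w_res=0.5, w_angle=0.5):
    # Weighted combination of the resolution score and the angle score.
    return w_res * resolution_score(width, height) + w_angle * angle_score(yaw_degrees)

def pick_target_image(per_frame_measurements):
    """per_frame_measurements: dict {frame_id: (width, height, yaw_degrees)} for one object.
    Returns the frame with the highest combined score, i.e. the target image."""
    return max(per_frame_measurements,
               key=lambda f: object_score(*per_frame_measurements[f]))

measurements = {"a1": (64, 64, 5.0), "a2": (60, 60, 0.0), "a3": (50, 50, 20.0)}
print(pick_target_image(measurements))  # prints "a2", the frame with the highest score
```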
For example, after a target image corresponding to a target object is selected from m frames of images, an object cutout corresponding to the target object may be cut out from the target image, that is, the target object in the target image is cut out, and the cut out target object is the object cutout. Taking the target object as a target face as an example, the object matting is face matting (i.e. a small image containing a face) captured in the target image.
Step 203, determining an object background image corresponding to the target object based on the m frames of images, namely, determining the object background image corresponding to the target object from the m frames of images in which the same target object exists.
In a possible implementation, the flow shown in fig. 3 may be used to determine an object background map corresponding to the target object, but the following manner is only an example of determining the object background map, and this is not limited to this, for example, any one frame image (e.g., the first frame image, the last frame image, the image in the middle position, etc.) in the m frame images may be used as the object background map corresponding to the target object.
Step 2031, for each frame of the m frames of images, determining a score value corresponding to each object in the image, and determining a score value corresponding to the image based on the score value corresponding to each object.
In one possible embodiment, determining the score value corresponding to the image based on the score value corresponding to each object may include, but is not limited to: the score value corresponding to the image is determined based on the median of the score value corresponding to each object, for example, the median is taken as the score value corresponding to the image. Alternatively, the score value corresponding to the image is determined based on an average value of the score values corresponding to each object, for example, the average value is taken as the score value corresponding to the image. Alternatively, the score value corresponding to the image is determined based on the minimum value of the score values corresponding to each object (i.e., the minimum value of the score values corresponding to all objects), for example, the minimum value is taken as the score value corresponding to the image. Alternatively, the score value corresponding to the image is determined based on the maximum value of the score value corresponding to each object (i.e., the maximum value among the score values corresponding to all objects), for example, the maximum value is taken as the score value corresponding to the image. Of course, the above are only a few examples, and there is no limitation as long as the score value corresponding to the image can be determined based on the score value corresponding to each object.
Illustratively, assuming that an object 1 is a target object and an m-frame image includes an image a1 and an image a2, assuming that an image a1 includes an object 1, an object 2, and an object 3, and an image a2 includes an object 1, an object 2, an object 3, and an object 4, on the basis of which: for image a1, the score values corresponding to object 1, object 2, and object 3 in image a1 need to be determined. In determining the score value corresponding to the object 1 in the image a1, the score value b1 corresponding to the object 1 in the image a1 is determined based on the resolution of the object 1 in the image a1 and/or the deflection angle of the object 1 in the image a1, as described in detail in step 202. Similarly, a score value b2 corresponding to object 2 in image a1 may be determined, and a score value b3 corresponding to object 3 in image a1 may be determined. Then, the median of the score value b1, the score value b2, and the score value b3 is determined as the score value corresponding to the image a 1.
For image a2, a score value b4 corresponding to object 1 in image a2 may be determined, and a score value b5 corresponding to object 2 in image a2 may be determined, and a score value b6 corresponding to object 3 in image a2 may be determined, and a score value b7 corresponding to object 4 in image a2 may be determined. Then, the median of the score value b4, the score value b5, the score value b6, and the score value b7 is determined as the score value corresponding to the image a 2.
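A minimal sketch of the frame-level score used here, assuming the per-object scores are already available (the median variant from the text is shown; mean, minimum or maximum are drop-in alternatives):

```python
from statistics import median, mean

def frame_score(object_scores, mode="median"):
    # object_scores: list of score values, one per object detected in the frame.
    reducers = {"median": median, "mean": mean, "min": min, "max": max}
    return reducers[mode](object_scores)

# Example: image a1 contains objects 1-3, image a2 contains objects 1-4.
scores_a1 = [88.0, 91.5, 79.0]          # b1, b2, b3
scores_a2 = [90.0, 85.0, 70.0, 95.0]    # b4, b5, b6, b7
print(frame_score(scores_a1), frame_score(scores_a2))  # 88.0 87.5
```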
Step 2032, based on the score value corresponding to each frame of image in the m frames of images (i.e. the m frames of images with the target object), selecting the image with the highest score value, i.e. selecting the image with the highest score value from the m frames of images.
Step 2033, an object background map corresponding to the target object is determined based on the image with the highest score value.
For example, assuming that the object 1 is a target object and the m-frame image includes the image a1 and the image a2, an image with the highest score value is selected from the image a1 and the image a2 based on the score value corresponding to the image a1 and the score value corresponding to the image a2, and assuming that the image with the highest score value is the image a1, an object background map corresponding to the object 1 is determined based on the image a1, for example, the image a1 is used as the object background map corresponding to the object 1.
For example, if the target object is a target face, the object background image may be a face background image (i.e., a large image containing a face), and the face background image may be an image with the highest score value.
Step 204, determine whether the object background map has been sent to the storage device in the statistical period.
If so, step 205 may be performed; if not, step 206 may be performed.
In step 205, the object matte is sent to the storage device, but the object background image is prohibited from being sent to the storage device, that is, the image acquisition device does not need to send the object background image to the storage device.
Step 206, the object matting and the object background image are sent to a storage device.
For example, for each statistical period, in the statistical period, each time the image acquisition device sends the object background map to the storage device, the identifier of the object background map may be recorded in the mapping table; when the statistical period expires, the identifiers of all object background maps recorded in the mapping table may be deleted. For example, in the statistical period 1, each time an object background map is sent to the storage device, an identifier of the object background map may be recorded in the mapping table; when the statistical period 1 expires, that is, when the next statistical period 2 is entered, the identifiers of all the object background maps recorded in the mapping table may be deleted, and the identifiers in the mapping table may be updated again.
On this basis, in step 204, the image capturing device may determine whether the mapping table contains the identifier of the object background map; if yes, it is determined that the object background map has been sent to the storage device within the statistical period; if not, it is determined that the object background map has not been sent to the storage device within the statistical period.
For example, when the object 1 is taken as a target object, assuming that the object background map corresponding to the object 1 is the image a1, since the mapping table does not have the identifier of the image a1, the image capturing device sends the object matting and the object background map (i.e., the image a1) corresponding to the object 1 to the storage device, and records the identifier of the image a1 in the mapping table. When the object 2 is taken as a target object, assuming that the object background image corresponding to the object 2 is the image a1, since the identifier of the image a1 already exists in the mapping table, the image acquisition device only sends the object matte corresponding to the object 2 to the storage device, and no longer sends the object background image corresponding to the object 2 (i.e., the image a1) to the storage device, so as to reduce the sending number of the object background images. Moreover, the object 1 and the object 2 share the same object background map (i.e., the image a1), which does not result in the object 2 having no object background map.
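For illustration, the per-period de-duplication of steps 204 to 206 could be sketched as follows; the class and function names are assumptions, and send_to_storage stands in for the actual transmission to the storage device:

```python
class BackgroundDedup:
    def __init__(self):
        self.sent_in_period = set()   # the "mapping table" of background map identifiers

    def on_period_end(self):
        # Clear the table when the statistical period expires, as described above.
        self.sent_in_period.clear()

    def send(self, object_matte, background_map_id, background_map, send_to_storage):
        if background_map_id in self.sent_in_period:
            # Background map already sent within this period: send the matte only.
            send_to_storage(object_matte, None)
        else:
            # First occurrence within this period: send matte and background map,
            # and record the background map identifier in the mapping table.
            send_to_storage(object_matte, background_map)
            self.sent_in_period.add(background_map_id)
```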
In one possible implementation, for step 206, after the image capture device sends the object matte and the object background map to the storage device, a matte address and a background map address returned by the storage device may also be received, where the matte address may be used to indicate a storage address (such as a URL (Uniform Resource Locator) address) of the object matte in the storage device, and the background map address may be used to indicate a storage address of the object background map in the storage device. Then, the image acquisition device can send the cutout address, the background map address and the alarm information of the target object to a service platform, and the service platform records the cutout address and the mapping relationship between the background map address and the alarm information.
For example, after the storage device receives the object matte and the object background image, the object matte and the object background image may be stored, a storage address of the object matte in the storage device is determined (the storage address is referred to as the matte address, that is, the object matte is stored at a position corresponding to the matte address, and the object matte can be found by the matte address), and a storage address of the object background image in the storage device is determined (the storage address is referred to as the background image address, that is, the object background image is stored at a position corresponding to the background image address, and the object background image can be found by the background image address). Then, the storage device returns the cutout address and the background map address to the image acquisition device, and the cutout address and the background map address are received by the image acquisition device.
After the image acquisition device obtains the object cutout and the object background image corresponding to the target object, alarm information of the target object can be acquired, such as acquisition time of the object background image (that is, acquisition time of the object background image acquired by the image acquisition device), acquisition position of the object background image (such as position coordinates of the image acquisition device) and attributes of the target object (such as sex, height, facial features, iris features and the like when the target object is a target face, and the attributes are not limited). Of course, the above are only a few examples of the alarm information, and the alarm information is not limited to this, and may be any attribute of the target object that can be acquired by the image acquisition device.
After obtaining the matting address, the background map address and the alarm information, the image acquisition device can send the matting address, the background map address and the alarm information to a service platform, and the service platform records the matting address and the mapping relationship between the background map address and the alarm information. In the subsequent process, the service platform can acquire the acquisition time of the object background image, the acquisition position of the object background image, the attribute of the target object and other contents based on the alarm information of the target object. When the service platform needs to query the object matting of the target object, the object matting of the target object can be queried from the storage device based on the matting address. When the business platform needs to inquire the object background graph of the target object, the object background graph of the target object can be inquired from the storage device based on the background graph address. Regarding the processing manner of the service platform, no limitation is made in this embodiment.
After obtaining the background map address of the object background map, the image acquisition device may also locally record the mapping relationship between the identifier of the object background map and the background map address. For example, when the object background map is the image a1, the image capture device may record a mapping relationship between the identifier of the image a1 and the address of the background map of the image a1 (indicating the storage address of the image a1 in the storage device, i.e., the URL address).
In another possible implementation, for step 205, after the image capture device sends the object matte to the storage device, the image capture device may further receive a matte address returned by the storage device, and locally query a background map address of the object background map, where the matte address is used to represent a storage address of the object matte in the storage device, and the background map address is used to represent a storage address of the object background map in the storage device. Then, the image acquisition device sends the cutout address, the background map address and the alarm information of the target object to a service platform, and the service platform records the cutout address and the mapping relation between the background map address and the alarm information.
For example, after the storage device receives the object matte, the object matte can be stored, and a storage address of the object matte in the storage device can be determined (the storage address is referred to as the matte address). Then, the storage device returns the cutout address to the image acquisition device, and the cutout address is received by the image acquisition device.
After the image acquisition device obtains the object cutout image and the object background image corresponding to the target object, alarm information of the target object can be acquired, such as the acquisition time and the acquisition position of the object background image, the attribute of the target object and the like.
For example, since the image capturing device already records the mapping relationship between the identifier of the object background map and the address of the background map locally, after the image capturing device obtains the object background map corresponding to the target object, the image capturing device may also query the address of the background map of the object background map locally when the object background map does not need to be sent to the storage device. For example, the image capturing device may query the mapping relationship through the identifier of the object background map to obtain the background map address of the object background map.
In summary, the image acquisition device may obtain a matting address, a background map address, and alarm information, and send the matting address, the background map address, and the alarm information to the service platform, and the service platform records the matting address and a mapping relationship between the background map address and the alarm information.
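As an illustrative sketch of how the image acquisition device could keep the locally recorded background map addresses and assemble the message for the service platform (all field and function names are assumptions; the text only requires that the matting address, the background map address and the alarm information are sent together):

```python
class BackgroundAddressBook:
    """Local mapping from background map identifier to its storage address (URL)."""
    def __init__(self):
        self.addresses = {}

    def record(self, background_map_id, url):
        self.addresses[background_map_id] = url

    def lookup(self, background_map_id):
        return self.addresses[background_map_id]

def build_platform_message(matting_url, background_url, capture_time, position, attributes):
    # Message sent to the service platform, which records the mapping between
    # the two addresses and the alarm information.
    return {
        "matting_address": matting_url,
        "background_map_address": background_url,
        "alarm_info": {
            "capture_time": capture_time,
            "capture_position": position,
            "object_attributes": attributes,
        },
    }
```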
In the above embodiments, the object may comprise a face, the target object may comprise a target face, the object matting may comprise face matting, and the object background map may comprise a face background map. Of course, the human face is only an example of the object, and there may be other types of objects, such as eyes, a head, a vehicle, and the like, which is not limited thereto.
For example, the face background image may be an image acquired by the image acquisition device, that is, an image of the entire scene, containing both the face and the scene. The face matting (also called the face small map) is a small image cut out from the face background image.
For example, the execution sequence is only an example given for convenience of description, and in practical applications, the execution sequence between the steps may also be changed, and the execution sequence is not limited. Moreover, in other embodiments, the steps of the respective methods do not have to be performed in the order shown and described herein, and the methods may include more or less steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
According to the technical scheme, in the embodiment of the application, for the selected object matting and object background map, it is judged whether the object background map has already been sent to the storage device; if yes, only the object matting is sent to the storage device, and if not, both the object matting and the object background map are sent to the storage device. In this way, the number of object background maps that are sent is reduced, bandwidth resources and storage resources are saved, and the reusability of the object background map is improved, so that a large amount of bandwidth resources of the image acquisition device and a large amount of storage resources of the storage device are no longer occupied. For example, if 10 objects exist in the scene, the image acquisition device may only need to transmit 10 object mattes and 1 object background map to the storage device, and the storage device only stores 10 object mattes and 1 object background map, thereby avoiding occupying a large amount of bandwidth resources and storage resources.
The following describes an image processing method according to an embodiment of the present application with reference to specific embodiments. Referring to fig. 4, a schematic flowchart of an image processing method in an embodiment of the present application is shown, where the method may include:
step 401, the image acquisition device acquires and caches n frames of images within the statistical period Δ t.
Step 402, the image acquisition device identifies a plurality of face minimaps from n frames of images, assigns the same face identifier to the face minimaps of the same user, and assigns different face identifiers to the face minimaps of different users.
For example, the image capturing device may recognize all faces in the n frames of images through a face detection algorithm, where each recognized face is a face thumbnail. If there are 3 faces in the 1st frame image, 3 faces in the 2nd frame image, and 4 faces in the 3rd frame image, then the image capturing device recognizes 10 faces in total from the 3 frames of images, obtaining 10 face thumbnails.
The image acquisition device can identify the face thumbnails of the same user from all the face thumbnails through a face tracking algorithm and distinguish the face thumbnails of different users; on this basis, the same face identifier can be assigned to the face thumbnails of the same user, and different face identifiers can be assigned to the face thumbnails of different users.
For example, face identifier 1, face identifier 2 and face identifier 3 are assigned to the 3 face thumbnails in the 1st frame image; face identifier 1, face identifier 2 and face identifier 3 are assigned to the 3 face thumbnails in the 2nd frame image; and face identifier 1, face identifier 2, face identifier 3 and face identifier 4 are assigned to the 4 face thumbnails in the 3rd frame image. The 3 face thumbnails with face identifier 1 (located in the 1st, 2nd and 3rd frame images respectively) are the face thumbnails of user 1; the 3 face thumbnails with face identifier 2 are the face thumbnails of user 2; the 3 face thumbnails with face identifier 3 are the face thumbnails of user 3; and the 1 face thumbnail with face identifier 4 (located in the 3rd frame image) is the face thumbnail of user 4.
For example, the face detection algorithm is used to identify a face thumbnail from an image, and the embodiment is not limited with respect to the identification manner of the face detection algorithm. The face tracking algorithm is used to track the face thumbnail of the same user from a plurality of face thumbnails, and the tracking manner of the face tracking algorithm is not limited in this embodiment.
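The patent does not limit the face detection or face tracking algorithm. Purely as one common illustration (a greedy IoU matching of bounding boxes between consecutive frames, which is not mandated by the text), face identifiers could be assigned as follows:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def assign_face_ids(frames, iou_threshold=0.3):
    """frames: list (one entry per frame) of lists of face boxes (x1, y1, x2, y2).
    Returns, per frame, the list of face identifiers assigned to its boxes."""
    next_id, prev, all_ids = 1, [], []        # prev: (face_id, box) pairs of last frame
    for boxes in frames:
        current = []
        for box in boxes:
            best = max(prev, key=lambda p: iou(p[1], box), default=None)
            if best and iou(best[1], box) >= iou_threshold:
                face_id = best[0]
                prev = [p for p in prev if p[0] != face_id]   # consume the match
            else:
                face_id, next_id = next_id, next_id + 1       # new user, new identifier
            current.append((face_id, box))
        all_ids.append([fid for fid, _ in current])
        prev = current
    return all_ids
```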
Referring to fig. 5, each frame of the n frames of images may be assigned an image identifier, such as image identifier 1, image identifier 2, …, and image identifier n, and serves as a background map.
Referring to fig. 5, a background large map (hereinafter referred to as the background large map 1) corresponding to the image identifier 1 may include a face small map 11, a face small map 12, …, and a face small map 1 a. The background large map corresponding to the image identifier 2 (hereinafter referred to as the background large map 2) may include a face small map 21, a face small map 22, …, a face small map 2b, and so on, and the background large map corresponding to the image identifier n (hereinafter referred to as the background large map n) may include a face small map n1, a face small map n2, …, and a face small map nc. If the face thumbnail 11, the face thumbnails 21, …, and the face thumbnail n1 are face thumbnails of the same user, the face thumbnails correspond to the same face identifier. If the face tiles 12, 22, …, n2 are face tiles of the same user, the face tiles correspond to the same face identifiers, and so on.
In step 403, for each face minimap in all face minimaps (i.e., all face minimaps in all n frames of images), the image capturing device determines a score value corresponding to the face minimap.
For example, assuming that the background large map 1 includes the face small map 11, the face small map 12 and the face small map 13, the image capturing device may determine, through a face scoring algorithm, a score value fp1(id1) corresponding to the face small map 11, a score value fp1(id2) corresponding to the face small map 12, and a score value fp1(id3) corresponding to the face small map 13, where id1 denotes the face identifier of the face small map 11, id2 denotes the face identifier of the face small map 12, and id3 denotes the face identifier of the face small map 13. In summary, the scoring matrix P1 of the background large map 1 can be obtained, where P1 = {fp1(id1), fp1(id2), fp1(id3)}.
Similarly, assuming that the background large map 2 includes the face small map 21, the face small map 22 and the face small map 23, the image capturing device may determine, through the face scoring algorithm, a score value fp2(id1) corresponding to the face small map 21, a score value fp2(id2) corresponding to the face small map 22, and a score value fp2(id3) corresponding to the face small map 23, where id1, id2 and id3 denote the face identifiers of the face small maps 21, 22 and 23 respectively. In summary, the scoring matrix P2 of the background large map 2 can be obtained, where P2 = {fp2(id1), fp2(id2), fp2(id3)}.
Assuming that n is 3, that is, there are 3 background large maps in total, and the background large map 3 includes the face small map 31, the face small map 32, the face small map 33 and the face small map 34, the image capturing device may determine, through the face scoring algorithm, a score value fp3(id1) corresponding to the face small map 31, a score value fp3(id2) corresponding to the face small map 32, a score value fp3(id3) corresponding to the face small map 33, and a score value fp3(id4) corresponding to the face small map 34, where id1, id2, id3 and id4 denote the face identifiers of the face small maps 31, 32, 33 and 34 respectively. In summary, the scoring matrix P3 of the background large map 3 can be obtained, where P3 = {fp3(id1), fp3(id2), fp3(id3), fp3(id4)}.
In summary, the face small map 11, the face small map 21 and the face small map 31 are face small maps of the same user and correspond to the face identifier id1; the face small map 12, the face small map 22 and the face small map 32 are face small maps of the same user and correspond to the face identifier id2; the face small map 13, the face small map 23 and the face small map 33 are face small maps of the same user and correspond to the face identifier id3; and the face small map 34 is a face that appears only in the background large map 3, corresponding to the face identifier id4. The background large map 1 has 3 faces (with face identifiers id1, id2 and id3), the background large map 2 has 3 faces (with face identifiers id1, id2 and id3), and the background large map 3 has 4 faces (with face identifiers id1, id2, id3 and id4).
In the above embodiment, fp denotes a face scoring algorithm, and Pn denotes a matrix composed of score values of all face thumbnails in the nth frame image. When the score value corresponding to the face small image is determined through the face scoring algorithm, the score value corresponding to the face small image can be determined based on the resolution of the face small image in the background large image and/or the deflection angle of the face small image in the background large image, and details of the determination method are omitted.
Step 404, for each face identifier, selecting the highest score value from all the score values corresponding to that face identifier, and taking the face small map with the highest score value as the face matting corresponding to that face identifier.
For example, for the face identifier id1, the corresponding score values include fp1(id1), fp2(id1) and fp3(id1), and the highest of these is selected. Assuming fp2(id1) is the highest score value, the face small map 21 corresponding to fp2(id1) is taken as the face matting corresponding to the face identifier id1; that is, the background large map 2 is taken as the target image corresponding to id1, and the face matting corresponding to id1 is cut out from the background large map 2. The face identifier id1 here corresponds to the target object of the above embodiments. As another example, for the face identifier id2, the corresponding score values include fp1(id2), fp2(id2) and fp3(id2), and the highest of these is selected. Assuming fp2(id2) is the highest score value, the face small map 22 corresponding to fp2(id2) is taken as the face matting corresponding to the face identifier id2.
For example, for the face identifier id1, when the face with identifier id1 disappears in a certain frame, the frame with the highest fp(id1) among the frames from the appearance of id1 to its disappearance is selected as the target image, and the image acquisition device cuts out the face matting corresponding to id1 from that target image.
Step 405, for each face identifier, selecting, from the n frames of images, the m frames of images corresponding to the face identifier (that is, each of the m frames of images includes a face small map corresponding to the face identifier); for each frame of the m frames of images, determining the score value corresponding to that image based on the score values corresponding to the face small maps in the image; and taking the image with the highest score value as the face background map corresponding to the face identifier.
Referring to FIG. 5, for the face identifier id1, the m frames of images corresponding to id1 include the background large map 1, the background large map 2 and the background large map 3. For the background large map 1, based on its scoring matrix P1 = {fp1(id1), fp1(id2), fp1(id3)}, the median f_median(P1) of fp1(id1), fp1(id2) and fp1(id3) can be calculated, and this median is taken as the score value corresponding to the background large map 1. For the background large map 2, based on its scoring matrix P2 = {fp2(id1), fp2(id2), fp2(id3)}, the median f_median(P2) can be calculated and taken as the score value corresponding to the background large map 2. For the background large map 3, based on its scoring matrix P3 = {fp3(id1), fp3(id2), fp3(id3), fp3(id4)}, the median f_median(P3) can be calculated and taken as the score value corresponding to the background large map 3.
If the score value corresponding to the background large map 3 is the highest, the background large map 3 is taken as the face background map corresponding to the face identifier id1, so that both the face matting and the face background map corresponding to id1 are obtained. Similarly, the face matting and the face background map corresponding to each of the other face identifiers can be obtained, which is not repeated here.
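Tying steps 403 to 405 together, a compact illustrative sketch (assuming the scoring matrices are given as a mapping from frame to per-face-identifier scores; the function name and the median reduction follow the example above):

```python
from statistics import median

def select_matting_and_background(score_matrices):
    """score_matrices: dict {frame_id: {face_id: score}}.
    Returns {face_id: (frame to crop the face matting from, frame used as face background map)}."""
    # Step 404: for each face identifier, the frame whose face thumbnail scores highest.
    best_matte_frame, best_face_score = {}, {}
    for frame, scores in score_matrices.items():
        for face_id, s in scores.items():
            if s > best_face_score.get(face_id, float("-inf")):
                best_face_score[face_id] = s
                best_matte_frame[face_id] = frame

    # Step 405: per-frame score is the median of the scores of all faces in the frame.
    frame_scores = {frame: median(scores.values())
                    for frame, scores in score_matrices.items()}

    result = {}
    for face_id, matte_frame in best_matte_frame.items():
        candidates = [f for f, scores in score_matrices.items() if face_id in scores]
        background_frame = max(candidates, key=lambda f: frame_scores[f])
        result[face_id] = (matte_frame, background_frame)
    return result
```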
Step 406, for each face identifier, after obtaining the face matting map and the face background map corresponding to the face identifier, the image acquisition device determines whether the face background map has been sent to the storage device within the statistical period. If yes, go to step 407; if not, step 408 may be performed.
Step 407, the image acquisition device sends the face matting to the storage device.
Step 408, the image capturing device sends the face matting and the face background image to a storage device.
In a possible implementation manner, if the face matting is sent to the storage device, the image acquisition device receives the matting address returned by the storage device, locally queries a background map address of a face background map, and sends the matting address, the background map address and alarm information to the service platform. Or if the face matting and the face background image are sent to the storage device, the image acquisition device receives the matting address and the background image address returned by the storage device, and sends the matting address, the background image address and the alarm information to the service platform.
It should be noted that the above execution sequence is only an example given for convenience of description; in practical applications, the execution sequence between the steps may also be changed, and the execution sequence is not limited here. Moreover, in other embodiments, the steps of the respective methods do not have to be performed in the order shown and described herein, and the methods may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments, and multiple steps described in this specification may be combined into a single step in other embodiments.
Based on the same application concept as the method, the embodiment of the present application provides an image processing apparatus, which is applied to an image capturing device, and the apparatus includes:
an acquisition module, configured to acquire n frames of images in a statistical period and select, from the n frames of images, m frames of images containing the same target object, where m is less than or equal to n; select a target image corresponding to the target object from the m frames of images, and crop an object matting corresponding to the target object from the target image; and determine an object background map corresponding to the target object based on the m frames of images;

a judging module, configured to judge whether the object background map has been sent to the storage device within the statistical period;

a sending module, configured to send the object matting to the storage device if the object background map has been sent, and to send the object matting and the object background map to the storage device if it has not.
For example, when the obtaining module determines the object background map corresponding to the target object based on the m frames of images, the obtaining module is specifically configured to: for each frame of the m frames of images, determining a score value corresponding to each object in the image, and determining a score value corresponding to the image based on the score value corresponding to each object;
selecting an image with the highest score value based on the score value corresponding to each frame of image in the m frames of images;
and determining an object background image corresponding to the target object based on the image with the highest score value.
For example, when the obtaining module determines the score value corresponding to the image based on the score value corresponding to each object, the obtaining module is specifically configured to: determining a score value corresponding to the image based on the median of the score values corresponding to each object; or, the score value corresponding to the image is determined based on the average value of the score values corresponding to each object.
The obtaining module is specifically configured to, when selecting a target image corresponding to the target object from the m-frame images: and determining a score value corresponding to the target object in each frame of the m frames of images, and determining the image with the highest score value as the target image corresponding to the target object.
Illustratively, after the sending module sends the object matting map and the object background map to the storage device, the sending module is further configured to: receiving a matting address and a background map address returned by the storage device; wherein the matting address is used for representing the storage address of the object matting in the storage device, and the background map address is used for representing the storage address of the object background map in the storage device;
and sending the matting address, the background map address and the alarm information of the target object to a service platform, so that the service platform records the mapping relation among the matting address, the background map address and the alarm information.
Illustratively, after the sending module sends the object matting to the storage device, the sending module is further configured to: receiving a matting address returned by the storage device, and locally inquiring a background map address of the object background map; wherein the matting address is used for representing the storage address of the object matting in the storage device, and the background map address is used for representing the storage address of the object background map in the storage device;
and sending the matting address, the background map address and the alarm information of the target object to a service platform, so that the service platform records the mapping relation among the matting address, the background map address and the alarm information.
Based on the same application concept as the method, the embodiment of the application provides an image acquisition device, which comprises: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine executable instructions to perform the steps of:
acquiring n frames of images in a statistical period, and selecting m frames of images with the same target object from the n frames of images, wherein m is less than or equal to n;
selecting a target image corresponding to the target object from the m frames of images, and intercepting an object cutout corresponding to the target object from the target image;
determining an object background map corresponding to the target object based on the m-frame images;
judging whether the object background image is sent to a storage device in the statistical period or not;
if yes, sending the object matting to the storage device;
and if not, sending the object matting and the object background picture to the storage equipment.
In one possible embodiment, a method for determining a transmitted face image includes:
caching multi-frame images with human faces, wherein at least 1 frame of image in the multi-frame images contains the faces of not less than 2 persons;
analyzing a plurality of frames of images by using a face recognition algorithm;
in response to the analysis, determining a score value of each face and a score function value of all faces for any frame of image, wherein the score value is used for indicating the image quality of the face, and the score function value is determined by the score values of all faces;
in response to a particular face:
determining, in the multi-frame images, the image in which the specific face has the highest score value and the image, containing the specific face, with the highest score function value;
intercepting the image with the highest score value of the specific face to generate a frame of face image with small size, wherein the face image with small size only retains the image content corresponding to the specific face; and
and sending the small-size face image together with the image that contains the specific face and has the highest score function value.
Optionally, the scoring function value is a numerical value generated by performing a mean operation on the score values of all the faces; alternatively, the scoring function value is a median determined after ranking the score values of all the faces.
wherein the multi-frame images are images generated by the camera during the period from appearance to disappearance of a specific face.
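Putting this embodiment together, an end-to-end Python sketch under assumed data structures is given below; the per-frame dictionary layout, the use of the median as the scoring function, and the sender interface are assumptions for illustration rather than the claimed implementation.

    import statistics

    def determine_and_send(buffered_frames, specific_face, sender):
        """buffered_frames: list of dicts
        {'image': array, 'faces': {face_id: {'fp': float, 'bbox': (x, y, w, h)}}},
        generated during the period from appearance to disappearance of specific_face."""
        frames = [f for f in buffered_frames if specific_face in f['faces']]

        # Image with the highest score value of the specific face -> small-size face image.
        best_face_frame = max(frames, key=lambda f: f['faces'][specific_face]['fp'])
        x, y, w, h = best_face_frame['faces'][specific_face]['bbox']
        small_face_image = best_face_frame['image'][y:y + h, x:x + w]

        # Image with the highest scoring function value (here: the median of all face scores).
        best_background = max(
            frames,
            key=lambda f: statistics.median(d['fp'] for d in f['faces'].values()))

        # Send the small-size face image together with the selected background image.
        sender.send(small_face_image, best_background['image'])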
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the image processing method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An image processing method is applied to an image acquisition device, and comprises the following steps:
acquiring n frames of images in a statistical period, and selecting m frames of images with the same target object from the n frames of images, wherein m is less than or equal to n;
selecting a target image corresponding to the target object from the m frames of images, and intercepting an object cutout corresponding to the target object from the target image;
determining an object background map corresponding to the target object based on the m-frame images;
judging whether the object background image is sent to a storage device in the statistical period or not;
if yes, sending the object matting to the storage device;
and if not, sending the object matting and the object background picture to the storage equipment.
2. The method of claim 1,
the determining an object background map corresponding to the target object based on the m-frame images includes:
for each frame of the m frames of images, determining a score value corresponding to each object in the image, and determining a score value corresponding to the image based on the score value corresponding to each object;
selecting an image with the highest score value based on the score value corresponding to each frame of image in the m frames of images;
and determining an object background image corresponding to the target object based on the image with the highest score value.
3. The method of claim 2,
the determining the score value corresponding to the image based on the score value corresponding to each object comprises:
determining a score value corresponding to the image based on the median of the score values corresponding to each object; or,
and determining the score value corresponding to the image based on the average value of the score values corresponding to each object.
4. The method of claim 1,
the selecting a target image corresponding to the target object from the m-frame images includes:
and determining a score value corresponding to the target object in each frame of the m frames of images, and determining the image with the highest score value as the target image corresponding to the target object.
5. The method of claim 1, wherein after sending the object matte and the object background map to the storage device, the method further comprises:
receiving a matting address and a background map address returned by the storage device; wherein the matting address is used for representing the storage address of the object matting in the storage device, and the background map address is used for representing the storage address of the object background map in the storage device;
and sending the matting address, the background map address and the alarm information of the target object to a service platform, so that the service platform records the mapping relation among the matting address, the background map address and the alarm information.
6. The method of claim 1,
after the sending the object matte to the storage device, the method further comprises:
receiving a matting address returned by the storage device, and locally inquiring a background map address of the object background map; wherein the matting address is used for representing the storage address of the object matting in the storage device, and the background map address is used for representing the storage address of the object background map in the storage device;
and sending the matting address, the background map address and the alarm information of the target object to a service platform, so that the service platform records the mapping relation among the matting address, the background map address and the alarm information.
7. The method of claim 1, further comprising: in the statistical period, when the object background image is sent to the storage device every time, recording the mark of the object background image in the mapping table; when the statistical period is over, deleting the marks of all the object background images recorded in the mapping table;
judging whether the object background image is sent to a storage device in the statistical period or not, wherein the judging step comprises the following steps:
judging whether the mapping table has the mark of the object background image or not;
if yes, determining that the object background image is sent to a storage device in the statistical period;
if not, determining that the object background image is not sent to the storage device in the counting period.
8. A method for determining a transmitted face image, comprising:
caching multi-frame images with human faces, wherein at least 1 frame of image in the multi-frame images contains the faces of not less than 2 persons;
analyzing the multi-frame image by using a face recognition algorithm;
in response to the analysis, determining a score value of each face and a score function value of all faces for any frame of image, wherein the score value is used for indicating the image quality of the face, and the score function value is determined by the score values of all faces;
in response to a particular face:
determining an image with the highest score value of the specific face and an image with the specific face and the highest score function value in the multi-frame images;
intercepting the image with the highest score value of the specific face to generate a frame of face image with small size, wherein the face image with small size only retains the image content corresponding to the specific face; and
and sending the face image with the small size and the image with the specific face and the highest score function value.
9. The method of claim 8, wherein the score function value is a numerical value generated by averaging the score values of all the faces, or wherein the score function value is a median determined by sorting the score values of all the faces.
10. The method of claim 9, wherein the plurality of frame images are images generated by a camera during appearance to disappearance of the particular face.
CN202010915959.3A 2020-09-03 2020-09-03 Image processing method Active CN112135092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010915959.3A CN112135092B (en) 2020-09-03 2020-09-03 Image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010915959.3A CN112135092B (en) 2020-09-03 2020-09-03 Image processing method

Publications (2)

Publication Number Publication Date
CN112135092A true CN112135092A (en) 2020-12-25
CN112135092B CN112135092B (en) 2023-05-26

Family

ID=73848896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010915959.3A Active CN112135092B (en) 2020-09-03 2020-09-03 Image processing method

Country Status (1)

Country Link
CN (1) CN112135092B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104093001A (en) * 2014-07-23 2014-10-08 山东建筑大学 Online dynamic video compression method
CN106778448A (en) * 2015-11-23 2017-05-31 江南大学 A kind of video image clustering method of view-based access control model memory models
CN107071337A (en) * 2016-11-21 2017-08-18 浙江宇视科技有限公司 The transmission method and device of a kind of video monitoring image
CN110457974A (en) * 2018-05-07 2019-11-15 浙江宇视科技有限公司 Image superimposing method, device, electronic equipment and readable storage medium storing program for executing
WO2020087922A1 (en) * 2018-10-30 2020-05-07 平安科技(深圳)有限公司 Facial attribute identification method, device, computer device and storage medium
CN109711318A (en) * 2018-12-24 2019-05-03 北京澎思智能科技有限公司 A kind of plurality of human faces detection and tracking based on video flowing
CN110874583A (en) * 2019-11-19 2020-03-10 北京精准沟通传媒科技股份有限公司 Passenger flow statistics method and device, storage medium and electronic equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023109867A1 (en) * 2021-12-16 2023-06-22 华为技术有限公司 Camera image transmission method and apparatus, and camera

Also Published As

Publication number Publication date
CN112135092B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
KR101548834B1 (en) An adaptable framework for cloud assisted augmented reality
KR20190128686A (en) Method and apparatus, equipment, and storage medium for determining the pose of an object in an image
CN112818149B (en) Face clustering method and device based on space-time track data and storage medium
CN107438173A (en) Video process apparatus, method for processing video frequency and storage medium
CN110084153A (en) For sharing the smart camera of picture automatically
US9639778B2 (en) Information processing apparatus, control method thereof, and storage medium
CN112364825B (en) Method, apparatus and computer-readable storage medium for face recognition
WO2011114668A1 (en) Data processing device and data processing method
KR20190120106A (en) Method for determining representative image of video, and electronic apparatus for processing the method
CN110889314A (en) Image processing method, device, electronic equipment, server and system
CN112651386A (en) Identity information determination method, device and equipment
CN112135092A (en) Image processing method
US8300256B2 (en) Methods, systems, and computer program products for associating an image with a communication characteristic
EP4087268A1 (en) Video processing method, apparatus, and system
JP6862596B1 (en) How to select video analysis equipment, wide area surveillance system and camera
US11074696B2 (en) Image processing device, image processing method, and recording medium storing program
CN112639870A (en) Image processing apparatus, image processing method, and image processing program
JP4279181B2 (en) Monitoring system
JP2006244424A (en) Image scene classifying method and device and program
CN111401170B (en) Face detection method and device
KR20060130647A (en) Methods and apparatuses for formatting and displaying content
CN114268730A (en) Image storage method and device, computer equipment and storage medium
CN113297889A (en) Object information processing method and device
CN112016609A (en) Image clustering method, device and equipment and computer storage medium
WO2014092553A2 (en) Method and system for splitting and combining images from steerable camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant