CN113516703A - Camera coverage detection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113516703A
CN113516703A (application CN202011157331.8A)
Authority
CN
China
Prior art keywords
target
pixel
areas
camera
determining
Prior art date
Legal status
Pending
Application number
CN202011157331.8A
Other languages
Chinese (zh)
Inventor
刘俊龙
沈旭
黄建强
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202011157331.8A
Publication of CN113516703A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention provides a method, a device, equipment and a storage medium for detecting the coverage area of a camera, wherein the method comprises the following steps: sampling a video collected by a camera to obtain multiple frames of images; detecting a target object in each of the multiple frames of images to acquire a plurality of target areas containing the target object; determining a target physical size corresponding to a target pixel according to a preset physical size of the target object and the pixel size corresponding to each of at least one target area covering the target pixel; and determining the coverage area of the camera according to the target physical size corresponding to each target pixel. The coverage area is determined solely from the images collected by the camera: it does not depend on the camera's internal and external parameters, eliminates interference from environmental occlusion, illumination and the like, and improves accuracy.

Description

Camera coverage detection method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a storage medium for detecting the coverage area of a camera.
Background
At present, cameras are deployed in many scenes to acquire videos of surrounding environments, so that the purposes of video monitoring and the like are achieved.
In practical applications, there is a need to determine the coverage area of a camera. For example, when multiple cameras need to be deployed in an area, the coverage area of each camera must be known in order to plan each camera's placement reasonably, avoiding excessive overlap between the coverage areas of adjacent cameras or large uncovered gaps.
At present, a commonly used way to determine the coverage area of a camera is as follows: the coverage area of the camera is mathematically modeled based on internal and external parameters of the camera, wherein the internal parameters comprise the focal length and the like, and the external parameters comprise the height of the camera from the ground, the angle with a horizontal plane or a vertical plane and the like.
The above mathematical modeling yields only a static analytical solution, i.e. the theoretical maximum coverage area, and its accuracy depends on the accuracy of the internal and external parameters used, which are often imprecise in practice.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a storage medium for detecting the coverage area of a camera, which can improve the accuracy of a result of determining the coverage area of the camera.
In a first aspect, an embodiment of the present invention provides a method for detecting a coverage area of a camera, where the method includes:
sampling a video collected by a camera to obtain a multi-frame image;
respectively detecting a target object for the multi-frame images to acquire a plurality of target areas containing the target object;
determining a target physical size corresponding to a target pixel according to a preset physical size of the target object and a pixel size corresponding to each of at least one target area covering the target pixel;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
In a second aspect, an embodiment of the present invention provides a device for detecting a coverage of a camera, where the device includes:
the image acquisition module is used for sampling the video acquired by the camera to obtain a multi-frame image;
the object detection module is used for respectively detecting the target objects of the multi-frame images so as to acquire a plurality of target areas containing the target objects;
the area determining module is used for determining a target physical size corresponding to the target pixel according to a preset physical size of the target object and a pixel size corresponding to each of at least one target area covering the target pixel; and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to implement at least the camera coverage detection method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to implement at least the camera coverage detection method according to the first aspect.
In a fifth aspect, an embodiment of the present invention provides a method for detecting a coverage area of a camera, where the method includes:
sampling a video collected by a camera to obtain a multi-frame image;
respectively detecting multiple types of target objects of the multiple frames of images to obtain multiple target areas corresponding to the multiple types of target objects;
determining at least one group of target areas covering target pixels from the plurality of target areas, wherein one group of target areas corresponds to one type of target object;
determining a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target areas, a pixel size of a target area contained in each of the at least one group of target areas, and a number of target areas contained in each of the at least one group of target areas;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
In a sixth aspect, an embodiment of the present invention provides a device for detecting a coverage of a camera, where the device includes:
the image acquisition module is used for sampling the video acquired by the camera to obtain a multi-frame image;
the object detection module is used for respectively detecting multiple types of target objects of the multi-frame images so as to obtain multiple target areas corresponding to the multiple types of target objects;
the area selection module is used for determining at least one group of target areas covering target pixels from the plurality of target areas, wherein one group of target areas correspond to one type of target object;
an area determination module, configured to determine a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target regions, a pixel size of a target region included in each of the at least one group of target regions, and a number of target regions included in each of the at least one group of target regions; and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
In a seventh aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to implement at least the camera coverage detection method of the fifth aspect.
In an eighth aspect, the present invention provides a non-transitory machine-readable storage medium, on which an executable code is stored, and when the executable code is executed by a processor of an electronic device, the processor is enabled to implement at least the camera coverage detection method according to the fifth aspect.
In a ninth aspect, an embodiment of the present invention provides a method for detecting a coverage area of a camera, including:
receiving a request for calling a target service, wherein the request comprises a video acquired by a camera;
and executing the following steps by utilizing the resources corresponding to the target service:
sampling a video collected by the camera to obtain a multi-frame image;
respectively detecting a target object for the multi-frame images to acquire a plurality of target areas containing the target object;
determining a target physical size corresponding to a target pixel according to a preset physical size of the target object and a pixel size corresponding to each of at least one target area covering the target pixel;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
In a tenth aspect, an embodiment of the present invention provides a method for detecting a coverage area of a camera, including:
receiving a request for calling a target service, wherein the request comprises a video acquired by a camera;
and executing the following steps by utilizing the resources corresponding to the target service:
sampling a video collected by the camera to obtain a multi-frame image;
respectively detecting multiple types of target objects of the multiple frames of images to obtain multiple target areas corresponding to the multiple types of target objects;
determining at least one group of target areas covering target pixels from the plurality of target areas, wherein one group of target areas corresponds to one type of target object;
determining a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target areas, a pixel size of a target area contained in each of the at least one group of target areas, and a number of target areas contained in each of the at least one group of target areas;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
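The multi-class variant above is not fully specified in this excerpt (its detailed embodiment accompanies fig. 6). One plausible reading of the determining step, sketched here as an explicit assumption rather than the patent's stated formula, is to average the quotient preset size / pixel size over all regions of all groups covering the pixel, so that each group's contribution is weighted by the number of target areas it contains:

```python
def multiclass_physical_size(groups):
    """groups: list of (preset_size_cm, pixel_widths) tuples, one entry per
    object class whose target areas cover the pixel.

    Returns a weighted mean over all covering regions; this weighting
    scheme is an assumed interpretation, not the patent's exact formula.
    """
    total, count = 0.0, 0
    for preset_size, widths in groups:
        total += sum(preset_size / w for w in widths)
        count += len(widths)
    return total / count

# e.g. persons (assumed 50 cm wide) in regions 50 px and 40 px wide,
# and vehicles (assumed 180 cm wide) in one region 90 px wide
s = multiclass_physical_size([(50, [50, 40]), (180, [90])])
```

With these illustrative numbers the result is the mean of 1 cm, 1.25 cm and 2 cm, i.e. roughly 1.42 cm for the pixel.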
The camera coverage detection scheme provided by the embodiment of the invention is used to determine the effective coverage area of a camera. In practice, interference factors irrelevant to the application purpose may exist within the camera's shooting range, and the coverage area determined by this scheme excludes the interference of such irrelevant factors. The application purpose refers to which target objects, or which classes of target objects, the camera is mainly deployed to capture on video.
Assuming the camera is deployed mainly to attend to a certain class of target object, its effective coverage area can be determined as follows. A video acquired by the camera over a period of time is obtained and sampled into multiple frames of images. Target object detection is performed on each frame to obtain the target areas containing the target object in that frame, and the target areas detected across all the frames are aggregated into a plurality of target areas.
Then, for a pixel at any position within the coverage of the plurality of target areas (referred to as a target pixel), at least one target area covering that pixel is determined, and a target physical size corresponding to the pixel is determined according to a preset physical size of the target object and the pixel size corresponding to each of the at least one target area. Performing this processing for every pixel covered by the plurality of target areas yields the target physical size of each such pixel. Finally, the coverage area of the camera is determined by combining the target physical sizes corresponding to the covered target pixels.
The coverage area is determined solely from the images collected by the camera, without depending on the camera's internal and external parameters, which improves accuracy. In addition, because the coverage area is determined from the target object detection results on the images, interference from occlusion, illumination and other objects of no interest in the environment is eliminated, so the finally determined coverage area is an accurate effective coverage area of the camera.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below illustrate only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for detecting a camera coverage area according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a target area according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a process of determining a target physical size of a target pixel in a single-type target object scene according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a camera coverage area detection scene according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for detecting a coverage area of a camera according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a process for determining a target physical size of a target pixel in a multi-class target object scene according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a camera coverage detection apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device corresponding to the camera coverage detection apparatus provided in the embodiment shown in fig. 7;
fig. 9 is a schematic structural diagram of a camera coverage detection apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device corresponding to the camera coverage detection apparatus provided in the embodiment shown in fig. 9.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
The word "if", as used herein, may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The camera coverage detection scheme provided by the embodiment of the invention is used for determining the effective coverage area of the camera.
First, the meaning of the effective coverage area will be explained.
It is understood that in practical applications, the purpose of deploying the camera in different application scenes may be different. For example, cameras are deployed beside roads in cities for road monitoring, that is, monitoring the condition of vehicles and pedestrians on the roads. As another example, a camera may be deployed inside or at an entrance to a public location to monitor traffic in and out of the public location.
That is, in different application scenarios, the purpose of the application deploying the cameras is often to focus on different categories of target objects. For example, in the road traffic scene, two target objects, i.e., a person and a vehicle, may be involved, and in the application scene in the public place, the target objects, i.e., the person, may be involved. In other application scenarios, for example, a human face, a certain animal, and other types of target objects may be involved.
However, for a certain camera, in addition to the class or classes of target objects that need attention, some objects of no interest may be present within its shooting range. For example, in a road traffic scene, road infrastructure and trees may exist within a certain camera's shooting range. Moreover, even if a target object is located somewhere within the shooting range, it may not be visible in the images collected by the camera owing to factors such as environmental occlusion and illumination.
In summary, the purpose of deploying a camera in any application scene is that the class or classes of target objects requiring attention can be clearly captured by the camera. Only the portion of the camera's overall shooting range that meets this requirement is effective, and the effective coverage area of the camera is the area of that portion. Position areas where clear images cannot be obtained because of occlusion, illumination and similar factors, and position areas occupied by objects of no interest as in the above examples, are invalid and are not counted in the effective coverage area.
Note that the coverage area of the camera referred to hereinafter means the effective coverage area described above. In addition, the coverage area here refers to the actual physical ground area, not the pixel area.
In practical applications, the purpose of determining the coverage area of the camera may be: when a certain camera is expected to cover a certain specific area, whether the coverage requirement of the specific area is met is found by determining the area which can be actually covered by the camera, and if not, the coverage of the specific area can be realized by adjusting the deployment position and the shooting angle of the camera.
The purpose of determining the coverage area of the camera may also be: when multiple cameras need to be deployed in an application scene, it is often desirable that the coverage areas of two adjacent cameras neither overlap excessively, causing repeated shooting, nor lie too far apart, leaving shooting blind spots. Determining each camera's coverage area therefore facilitates determining its deployment position.
Based on the foregoing examples, in practical applications, in some application scenarios, a certain type of target object needs to be focused through a camera, and in some application scenarios, a plurality of types of target objects need to be focused through the camera. Therefore, in the embodiment of the invention, solutions for determining the coverage area of the camera are provided for the two situations respectively.
The camera coverage detection method provided by the embodiment of the invention can be executed by an electronic device, and the electronic device can be a terminal device such as a PC (personal computer), a notebook computer and the like, and can also be a server at the cloud end. The server may be a physical server comprising a stand-alone host, or may be a virtual server, or may be a cloud server.
Fig. 1 is a flowchart of a method for detecting a camera coverage area according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
101. and sampling the video collected by the camera to obtain a multi-frame image.
102. And respectively detecting the target object for the multi-frame images to acquire a plurality of target areas containing the target object.
103. And determining a target physical size corresponding to the target pixel according to the preset physical size of the target object and the pixel size corresponding to at least one target area covering the target pixel.
104. And determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
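A minimal end-to-end sketch of steps 103 and 104, assuming that detection (steps 101 and 102) has already produced per-frame boxes in an (x1, y1, x2, y2) corner format whose width is x2 - x1 pixels (both conventions are assumptions, not stated in the patent):

```python
from collections import defaultdict

def detect_coverage_area(frames_regions, preset_size_cm):
    """Sketch of steps 103-104.

    frames_regions: per-frame lists of detected target areas (x1, y1, x2, y2),
    in integer pixel coordinates; object detection (step 102) is assumed done.
    Returns the coverage area in cm^2.
    """
    # Step 103: for each covered pixel, collect the widths of covering regions
    widths_per_pixel = defaultdict(list)
    for regions in frames_regions:
        for (x1, y1, x2, y2) in regions:
            w = x2 - x1  # pixel size in the width direction (assumed convention)
            for x in range(x1, x2 + 1):
                for y in range(y1, y2 + 1):
                    widths_per_pixel[(x, y)].append(w)
    # Step 104: mean quotient per pixel, then sum of squares over all pixels
    area = 0.0
    for widths in widths_per_pixel.values():
        s = sum(preset_size_cm / w for w in widths) / len(widths)
        area += s * s
    return area
```

The per-pixel loop is the naive quadratic version; a practical implementation would rasterize the boxes with array operations instead.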
For a camera, the objects captured at different moments differ. Therefore, to accurately determine the coverage area of the camera, video collected by the camera over a period of time is needed; the length of this period can be set as required.
Then, image frames are sampled from the video to obtain the multi-frame images. For example, the video may be split into image frames, and all or some of those frames serve as the multi-frame images. It is understood that these frames all have the same size.
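As a sketch of the sampling step (the frame rate and sampling interval below are illustrative assumptions; the patent does not fix them), the indices of the sampled frames might be computed as:

```python
def sample_frame_indices(total_frames, fps, interval_s):
    """Return indices of frames sampled every `interval_s` seconds.

    `total_frames`, `fps`, and `interval_s` are illustrative parameters;
    the patent does not prescribe a particular sampling interval.
    """
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 25 fps, sampled once per second -> 10 frames
indices = sample_frame_indices(total_frames=250, fps=25, interval_s=1.0)
```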
In this embodiment, it is assumed that the camera is deployed in an application scene for the purpose of focusing attention on a certain type of target object, such as a person, through the camera.
Therefore, after the multi-frame images are obtained, the target object is detected for each of the multi-frame images to detect a target area including the target object in each frame image. The target areas respectively detected in all the multi-frame images are collected to obtain a plurality of target areas. In practical applications, each target area may define a pixel position range covered by the target area by its vertex coordinates, for example, the pixel position range covered by the target area may be represented by coordinates of the upper left corner and the lower right corner.
In an alternative embodiment, when detecting the target object for each frame of image, a rectangular region including the target object, that is, a smallest rectangular region including the target object may be detected, and thus, a plurality of rectangular regions detected from a plurality of frames of images may be used as the plurality of target regions.
In another optional embodiment, after the plurality of rectangular areas are obtained, a sub-area extending from the bottom of each rectangular area up to a preset height may be further extracted, and the extracted areas are used as the plurality of target areas. The preset height may be a set proportion, such as 10%, of the full height of the corresponding rectangular area.
For convenience of understanding, for example, as shown in fig. 2, assuming that a rectangular region a containing a target object is detected in a certain frame image, a lower sub-region B having a height of 10% of the height of the rectangular region a is cut out from the rectangular region a, and at this time, the sub-region B is to be regarded as a target region.
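Cutting out the lower sub-region B (here 10% of the height of rectangle A, per the example above) can be sketched as follows; the (x1, y1, x2, y2) box format with the y axis pointing down is an assumed convention:

```python
def bottom_strip(box, ratio=0.1):
    """Keep only the bottom `ratio` of a detection box.

    `box` is (x1, y1, x2, y2) in pixel coordinates with the y axis pointing
    down, so y2 is the bottom edge (an assumed convention).
    """
    x1, y1, x2, y2 = box
    height = y2 - y1
    return (x1, y2 - height * ratio, x2, y2)

a = (100, 200, 160, 400)   # rectangular region A, 200 px tall
b = bottom_strip(a, 0.1)   # sub-region B: the bottom 20 px of A
```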
The above-described region is extracted because the camera generally shoots from an overhead view, and calculating the coverage area only requires attending to the portion of the target object closest to the ground; extracting only the lower portion of the rectangular region therefore reduces the amount of subsequent computation.
In addition, on the basis of the two alternative embodiments, optionally, the target area may be scaled in a set proportion to compress the size of the target area, and the calculation amount of the subsequent calculation may also be reduced.
For ease of understanding, the implementation of step 103 is described below with reference to the example in fig. 3.
In fig. 3, it is assumed that the above-described multi-frame image is composed of image 1, image 2, and image 3 illustrated in fig. 3. Assume that the target region Q1 and the target region Q2 are detected in image 1, the target region Q3 is detected in image 2, and the target region Q4 is detected in image 3.
As can be seen from the schematic diagram in fig. 3, pixels at some positions are covered by multiple target areas (indicating that target objects were detected multiple times at those pixel positions), such as the pixel i shown in the figure; pixels at some positions are covered by only one target area (indicating that a target object was detected there only once), such as the pixel j; and pixels at some positions are not covered by any target area (indicating that no target object was detected there), such as the pixel k.
It should be noted that a pixel in this document refers to the pixel at a given position in the pixel coordinate system, not a pixel in one particular frame image. Taking the pixel i as an example, pixel i refers to the pixel at the same position in all the images illustrated in fig. 3; in short, all of these frames contain the pixel i corresponding to the same position.
The target pixel in this embodiment refers to a pixel covered by at least one of the target regions, and for example, both the pixel i and the pixel j may be used as the target pixel.
For any target pixel, first, at least one target region covering the target pixel is determined among all the obtained target regions. In fig. 3, assuming that the target pixel is the pixel i, the at least one target region includes a target region Q1, a target region Q3, and a target region Q4. And then, determining the target physical size corresponding to the target pixel according to the preset physical size of the target object and the pixel size corresponding to the at least one target area covering the target pixel.
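Finding the target areas that cover a given target pixel is a simple containment test; again the corner-coordinate box format is an assumed convention:

```python
def regions_covering(pixel, regions):
    """Return the regions (x1, y1, x2, y2) whose box contains `pixel` (x, y)."""
    px, py = pixel
    return [r for r in regions if r[0] <= px <= r[2] and r[1] <= py <= r[3]]
```

In the fig. 3 example, calling this for pixel i over all detected target areas would return Q1, Q3 and Q4.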
Several concepts are involved here to illustrate:
preset physical size of target object: the preset physical size of the target object includes a preset physical length in a preset direction of the target object, and the preset direction includes a length direction or a width direction. In short, the length or width of the target object is obtained through statistics in advance. For example, the average body width of a human is calculated to be 50cm in advance.
Pixel size corresponding to the target area: the number of pixels the target area spans in the preset direction. For example, in fig. 3 the target area Q1 is 50 pixels wide, the target area Q2 is 60 pixels wide, the target area Q3 is 40 pixels wide, and the target area Q4 is 100 pixels wide.
Target physical size corresponding to the target pixel: the physical length corresponding to the target pixel in the preset direction. In the body-width example above, the target physical size of the target pixel is the actual physical width that pixel represents. The word "target" in "target physical size" mainly emphasizes that it is the physical size finally determined for the target pixel; in determining it, the physical size of the target pixel with respect to each individual target area may also need to be calculated, and these can be regarded as intermediate variables.
Based on the above description of the concept, the target physical size corresponding to the target pixel may be optionally determined as follows:
for any target area in at least one target area covering the target pixel, determining the quotient of the preset physical size of the target object and the pixel size corresponding to the any target area as the corresponding physical size of the target pixel in the any target area;
and determining the target physical size corresponding to the target pixel according to the physical sizes of the target pixel respectively corresponding to the at least one target area.
Optionally, the target physical size corresponding to the target pixel may be determined as the mean of the physical sizes corresponding to the target pixel in each of the at least one target area.
For convenience of explanation and understanding, the case illustrated in fig. 3 is still used as an example for explanation, and the target pixel is assumed to be the pixel i illustrated in fig. 3. The target object is a human, and the preset average body width of the human is 50 cm. As can be seen from the schematic diagram in fig. 3, the target area Q1 covering the pixel i has a pixel width of 50, the target area Q3 has a pixel width of 40, and the target area Q4 has a pixel width of 100.
For the target area Q1, the physical size s1 of the pixel i in the target area Q1 is 50cm/50 = 1cm. Similarly, for the target area Q3, the physical size s2 of the pixel i in the target area Q3 is 50cm/40 = 1.25cm. For the target area Q4, the physical size s3 of the pixel i in the target area Q4 is 50cm/100 = 0.5cm.
Finally, by averaging the three physical sizes, the target physical size of the pixel i can be obtained as: (1cm + 1.25cm + 0.5cm)/3 ≈ 0.92cm.
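This per-pixel computation can be sketched in a few lines of Python (a minimal sketch with hypothetical names, not the patent's own implementation):

```python
def target_physical_size(preset_size_cm, covering_pixel_widths):
    """Physical length represented by one target pixel, given the preset
    physical size of the target object (e.g. 50cm average body width) and
    the pixel widths of the target areas covering that pixel."""
    # Quotient of the preset physical size and each covering area's pixel size
    sizes = [preset_size_cm / w for w in covering_pixel_widths]
    # Mean over all covering target areas
    return sum(sizes) / len(sizes)

# Pixel i from Fig. 3: covered by Q1, Q3, Q4 with pixel widths 50, 40, 100
print(round(target_physical_size(50, [50, 40, 100]), 2))  # 0.92
```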
By performing the above calculation processing on other pixels covered by one or more target areas, the target physical size corresponding to each pixel covered by all the target areas can be obtained.
Finally, the coverage area of the camera can be determined according to the target physical size corresponding to all the pixels.
For example, the coverage area of the camera can be determined as: the sum of the squares of the target physical lengths corresponding to the plurality of target pixels, where the plurality of target pixels includes all pixels covered by the plurality of target areas. Taking the pixel i as an example, the square of the target physical length corresponding to the pixel i represents the physical area corresponding to the pixel i.
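The full accumulation over all covered pixels might be sketched as follows (Python; the axis-aligned box representation of a target area and all names are assumptions for illustration):

```python
from collections import defaultdict

def coverage_area(target_areas, preset_size_cm):
    """Estimate the camera's coverage area.

    target_areas: list of (x0, y0, x1, y1) pixel boxes detected as
    containing the target object.
    Returns the sum of the squares of the target physical lengths of
    all pixels covered by at least one target area."""
    per_pixel = defaultdict(list)
    for (x0, y0, x1, y1) in target_areas:
        width = x1 - x0  # pixel size in the preset (width) direction
        for x in range(x0, x1):
            for y in range(y0, y1):
                per_pixel[(x, y)].append(preset_size_cm / width)
    # square of the mean physical length == physical area of each pixel
    return sum((sum(v) / len(v)) ** 2 for v in per_pixel.values())
```

For instance, a single 2-pixel-wide box with a 50cm preset size assigns each covered pixel a 25cm length, so each pixel contributes 625 square cm.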
Assuming that the video used for determining the coverage area is the video acquired from time T1 to time T2, the coverage area obtained by the above scheme can be regarded as the effective coverage area of the camera during that period. If the coverage area of the camera from time T2 to time T3 needs to be obtained, it may be determined in the same way based on the video acquired from time T2 to time T3; the coverage area corresponding to the period T2 to T3 may differ from the coverage area corresponding to the period T1 to T2, since the effective coverage of the camera can vary over time.
In summary, when a camera is deployed for the purpose of focusing on a certain type of target object, the effective coverage area of the camera can be determined through the above scheme. The coverage area determined by this scheme excludes the influence of the above-mentioned extraneous factors: areas that cannot be clearly imaged due to, for example, occlusion or illumination, as well as objects that are not of interest, are not detected as target areas during target object detection, and therefore do not enter the coverage area calculation. In addition, the coverage area is determined directly from the images shot by the camera, independently of the intrinsic and extrinsic parameters of the camera, which helps improve the accuracy of the coverage area determination result.
The following describes an exemplary process for determining the coverage area of the camera in an application scenario with reference to fig. 4.
In fig. 4, assuming that the target object is a human, a camera C1 is deployed on the platform of a certain station to perform video monitoring of the platform. To determine whether the actual coverage area of camera C1 meets the coverage required for the station, the coverage area of camera C1 needs to be calculated.
To do so, the video captured by the camera C1 over a period of time is acquired, from which multiple frames of images are sampled. And detecting a target area containing a target object, namely a person, in each frame of image to obtain a plurality of target areas. And aiming at any pixel in the coverage range of the target areas, determining the target physical length corresponding to the pixel according to the pixel width corresponding to at least one target area covering the pixel and the preset human body average body width. And performing square sum calculation on the target physical lengths corresponding to all pixels in the coverage range of the target areas, wherein the calculation result is the coverage area of the camera C1.
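The first step, sampling frames from the captured video, is not specified further in the patent; one simple strategy is uniform sampling at a fixed interval, sketched below (the interval and the function name are illustrative assumptions):

```python
def sample_frame_indices(total_frames, interval):
    """Indices of the frames to sample from a video,
    taking one frame every `interval` frames."""
    return list(range(0, total_frames, interval))

# e.g. 10 seconds of 25 fps video, sampled once per second
print(len(sample_frame_indices(250, 25)))  # 10
```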
As described above, the camera coverage detection method provided by the present invention can be executed in the cloud, and a plurality of computing nodes may be deployed in the cloud, and each computing node has processing resources such as computation and storage. In the cloud, a plurality of computing nodes may be organized to provide a service, and of course, one computing node may also provide one or more services.
According to the scheme provided by the invention, the cloud end can provide a service for detecting the coverage of the camera, which is called a target service. When a user needs to detect the coverage of a camera, the user invokes the target service, thereby triggering a request to the cloud to call the target service, the request carrying a video acquired by the camera. The cloud determines a computing node to respond to the request, and executes the following steps by using the resources of that computing node:
sampling a video collected by the camera to obtain a multi-frame image;
respectively detecting the target object of the multi-frame images to acquire a plurality of target areas containing the target object;
determining a target physical size corresponding to a target pixel according to a preset physical size of a target object and a pixel size corresponding to each of at least one target area covering the target pixel;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
The specific implementation of the above steps may refer to the related descriptions in the foregoing other embodiments, which are not described herein again.
The above embodiment describes a case where the camera is used to focus on only a certain type of target object, and when a plurality of types of target objects need to be focused on by the camera, the scheme provided by the embodiment shown in fig. 5 may be adopted.
Fig. 5 is a flowchart of a method for detecting a camera coverage area according to an embodiment of the present invention, and as shown in fig. 5, the method includes the following steps:
501. Sampling the video collected by the camera to obtain a multi-frame image.
502. Respectively detecting multiple types of target objects in the multiple frames of images to acquire a plurality of target areas corresponding to the multiple types of target objects.
503. Determining at least one set of target areas covering a target pixel from the plurality of target areas, wherein a set of target areas corresponds to one class of target objects.
504. Determining the target physical size corresponding to the target pixel according to the preset physical size of the target object corresponding to each of the at least one group of target areas, the pixel size of the target areas contained in each of the at least one group of target areas, and the number of target areas contained in each of the at least one group of target areas.
505. Determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
The scheme provided by this embodiment is suitable for application scenarios in which attention needs to be paid to multiple types of target objects captured by the camera. For example, in a road traffic scenario, traffic management and other agencies need to pay attention to multiple types of target objects such as vehicles and people on the road.
In this embodiment, after the multi-frame images are obtained in the manner described above, a preset plurality of types of target objects are detected for each frame of image, so as to determine the target areas corresponding to the target objects in each frame of image.
Similar to the scene for detecting a single type of target object, optionally, multiple types of target objects may be detected for multiple frames of images, respectively, to obtain multiple rectangular regions containing any type of target object, and these rectangular regions are directly used as target regions.
Optionally, a lower region in each rectangular region may be further truncated as a target region. Wherein the height of the truncated lower region may be a preset proportion of the height of the corresponding rectangular region, the preset proportion being less than 1.
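As a sketch of this truncation (Python; the (x, y, w, h) box convention with y growing downward and the 0.2 proportion are assumptions for illustration):

```python
def lower_region(box, proportion=0.2):
    """Keep only the bottom part of a detected rectangle as the target area.

    box: (x, y, w, h) with (x, y) the top-left corner, y increasing downward.
    proportion: preset proportion (< 1) of the rectangle's height to keep."""
    x, y, w, h = box
    crop_h = int(h * proportion)            # height of the truncated region
    return (x, y + h - crop_h, w, crop_h)   # region adjacent to the bottom edge
```

For example, keeping the bottom 20% of a 100-pixel-tall box starting at y = 20 yields a 20-pixel-tall region starting at y = 100.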
It can be understood that, assuming the multiple types of target objects include a first type and a second type, in the process of acquiring the target areas it may be detected whether each frame of image contains either of the two types of target objects, and when an image does, the target area corresponding to each type of target object it contains is determined. The target areas corresponding to the two types of target objects across all the images are then collected to obtain all the target areas detected in all the images. That is, assume that the target areas corresponding to the first type of target object detected in each frame of image are summarized to obtain N1 target areas, and the target areas corresponding to the second type of target object detected in each frame of image are summarized to obtain N2 target areas; then all the target areas detected in all the images consist of those N1 target areas and N2 target areas.
In this embodiment, since multiple types of target objects are of interest, the determination of the target physical size corresponding to one pixel needs to consider two factors: first, the number of occurrences of each type of target object at the pixel; second, the physical size of the pixel under each type of target object.
Since the influence of the first factor needs to be considered, for the target pixel, at least one set of target areas covering the target pixel may be determined from all the detected target areas, where one set of target areas corresponds to one type of target object. That is, the target areas covering the target pixel that correspond to the same target object type are determined as one set of target areas, according to the inclusion relationship between each target area and the target pixel and the target object type corresponding to each target area.
The target pixel may be a pixel at any position in a pixel coordinate system corresponding to the multi-frame image, or may be simply referred to as a pixel at any position in the image.
Of course, it is understood that for some pixels, any type of target object covering the pixel may not be detected in all the multi-frame images, and the pixels do not contribute to the calculation process of the coverage area of the camera and therefore can be ignored. Therefore, in this embodiment, the target pixel may be a pixel covered by one or more detected target regions, and a pixel not covered by any one of the target regions may not be considered.
After obtaining at least one set of target areas covering the target pixels, the target physical size corresponding to the target pixels may be determined according to the preset physical size of the target object corresponding to each of the at least one set of target areas, the pixel size of the target area included in each of the at least one set of target areas, and the number of target areas included in each of the at least one set of target areas.
Similarly to the definition in the above embodiment, in the present embodiment, the preset physical size of the target object includes a preset physical length in a preset direction of the target object, and the preset direction includes a length direction or a width direction. For example, when a certain type of target object is a person, the preset physical length may be a human body average body width: 50 cm. For another example, when the target object is a car, the preset physical length may be an average width of the car body: 160 cm.
Similarly, the pixel size corresponding to the target area includes: the number of pixels included in the target region corresponding to the preset direction. Such as the pixel width of the target area.
The target physical size corresponding to the target pixel comprises: and the physical length of the target pixel corresponding to the preset direction.
Optionally, the determining of the target physical size corresponding to the target pixel includes:
determining physical sizes corresponding to target pixels under at least one type of target objects according to preset physical sizes of the target objects corresponding to the at least one group of target areas and pixel sizes of the target areas contained in the at least one group of target areas, wherein the at least one type of target objects correspond to the at least one group of target areas one to one;
determining the weight of the physical size corresponding to the target pixel under at least one type of target object according to the number of the target areas contained in the at least one group of target areas;
and performing weighted summation, according to the weights, on the physical sizes corresponding to the target pixel under the at least one type of target object, so as to determine the target physical size corresponding to the target pixel.
Optionally, determining, according to preset physical sizes of target objects corresponding to the at least one group of target areas and pixel sizes of target areas included in the at least one group of target areas, physical sizes corresponding to the target pixels under at least one type of target objects, respectively, may be implemented as:
for any group of target areas in the at least one group of target areas, determining the quotient of the preset physical size of the target object corresponding to that group and the pixel size of each target area contained in that group as the corresponding physical size of the target pixel in each such target area;
and determining the physical size corresponding to the target pixel under the target object type corresponding to that group of target areas according to the physical sizes corresponding to the target pixel in those target areas.
Optionally, it may be determined that the physical size corresponding to the target pixel under the target object category corresponding to any one group of target areas is: the mean of the physical sizes of the target pixel in the target areas of that group.
Optionally, determining, according to the number of target regions included in each of the at least one group of target regions, weights of physical sizes corresponding to the target pixels under at least one type of target object, may be implemented as:
determining the total number of target areas corresponding to the at least one group of target areas according to the number of the target areas contained in the at least one group of target areas;
for any group of target areas in the at least one group of target areas, determining the weight of the physical size corresponding to the target pixel under the target object category corresponding to that group as: the ratio of the number of target areas contained in that group to the total number of target areas.
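Combining the quotient, mean, and weight computations above, a minimal Python sketch (the grouping structure and all names are assumptions, not the patent's implementation):

```python
def multiclass_target_size(groups):
    """Target physical size of one pixel covered by multiple object classes.

    groups: mapping from target object class to a pair
    (preset physical size of that class, list of pixel widths of the
    target areas of that class that cover the target pixel)."""
    total = sum(len(widths) for _, widths in groups.values())
    size = 0.0
    for preset, widths in groups.values():
        # mean physical size of the pixel under this class (quotient, then mean)
        class_size = sum(preset / w for w in widths) / len(widths)
        # weight: this class's share of all target areas covering the pixel
        size += (len(widths) / total) * class_size
    return size

# Hypothetical pixel covered by one 100-px-wide "person" area (50cm preset)
# and one 200-px-wide "car" area (160cm preset): (0.5 + 0.8) / 2
print(round(multiclass_target_size({"person": (50, [100]),
                                    "car": (160, [200])}), 2))  # 0.65
```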
In order to understand the determination process of the target physical size corresponding to the target pixel, an example is described with reference to fig. 6.
In fig. 6, it is assumed that the above-described multi-frame image is composed of an image a, an image b, and an image c illustrated in fig. 6. Assume that the multi-class target objects include two classes of target objects: people and vehicles. Assume that a target area Z1 including a person and a target area Z2 including a vehicle are detected in image a, a target area Z3 including a person is detected in image b, and a target area Z4 including a vehicle is detected in image c. The preset average body width of the human body is 50cm, and the average width of the vehicle body is 160 cm. Assume that the pixel width of the target zone Z1 is 50, the pixel width of the target zone Z2 is 160, the pixel width of the target zone Z3 is 40, and the pixel width of the target zone Z4 is 200.
For pixel i, illustrated in fig. 6, it is covered by the target zone Z1, the target zone Z3 and the target zone Z4, and for pixel j, it is covered by the target zone Z2 and the target zone Z4.
Based on the above-described assumed situation, assuming that the target pixel is pixel i, for pixel i, since the target object class corresponding to the target region Z1 and the target region Z3 is "person" and the target object class corresponding to the target region Z4 is "car", the target region Z1 and the target region Z3 constitute one set of target regions corresponding to pixel i, and the target region Z4 serves as another set of target regions corresponding to pixel i.
The determination process of the corresponding physical length of the pixel i under the category of "human" is as follows: firstly, the physical lengths of the pixel i in the target area Z1 and the target area Z3 are determined according to the pixel widths of the target area Z1 and the target area Z3 and the preset average body width of the human body. Specifically, the corresponding physical length of pixel i in the target area Z1 is: 50cm/50 = 1cm, and the corresponding physical length of pixel i in the target area Z3 is: 50cm/40 = 1.25cm.
Based on the physical lengths of the pixel i in the two target areas, the physical length of the pixel i under the category of "human" is determined as the average of the two physical lengths: (1cm + 1.25cm)/2 = 1.125cm.
Similarly, the determination process of the corresponding physical length of the pixel i under the category of "vehicle" is as follows: since the only target area covering the pixel i under the category of "vehicle" is the target area Z4, the physical length of the pixel i under the category of "vehicle" is its physical length in the target area Z4, specifically: 160cm/200 = 0.8cm.
Since the target object categories covering the pixel i include "person" and "car", the set of target areas corresponding to the category "person" consists of two target areas (the target area Z1 and the target area Z3), and the set corresponding to the category "car" consists of one target area (the target area Z4), the weights of the physical lengths of the pixel i under the two target object categories "person" and "car" are determined to be 2/3 and 1/3, respectively.
Based on these weights and the physical lengths of the pixel i under the two target object categories "person" and "car" (1.125cm and 0.8cm), the target physical length corresponding to the pixel i is determined as: 1.125cm × 2/3 + 0.8cm × 1/3 ≈ 1.02cm.
The above describes the process of determining the target physical length corresponding to the target pixel when the target pixel is the pixel i.
Similarly, assuming that the target pixel is pixel j, for pixel j, the target region Z2 and the target region Z4 form a set of target regions corresponding to pixel j, and the corresponding target object category is "vehicle".
The corresponding physical length of pixel j in the target area Z2 is: 160cm/160 = 1cm. The corresponding physical length of pixel j in the target area Z4 is: 160cm/200 = 0.8cm.
Since the pixel j is covered only by target areas corresponding to a single target object type, the target physical length corresponding to the pixel j is the average of the physical lengths corresponding to the pixel j in the target areas covering it: (1cm + 0.8cm)/2 = 0.9cm.
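The two worked computations above can be checked with plain arithmetic:

```python
# Pixel i: "person" areas Z1 (pixel width 50) and Z3 (40), "car" area Z4 (200);
# preset sizes: 50cm average body width, 160cm average car body width.
person_len = (50 / 50 + 50 / 40) / 2            # 1.125 cm
car_len = 160 / 200                              # 0.8 cm
pixel_i = person_len * 2 / 3 + car_len * 1 / 3   # weights 2/3 and 1/3
print(round(pixel_i, 2))  # 1.02

# Pixel j: covered only by "car" areas Z2 (width 160) and Z4 (width 200)
pixel_j = (160 / 160 + 160 / 200) / 2
print(pixel_j)  # 0.9
```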
For each pixel covered by each detected target area, the corresponding target physical length can be determined through the above process.
Finally, the coverage area of the camera can be determined as: the sum of the squares of the target physical lengths corresponding to the plurality of target pixels, where the plurality of target pixels includes all pixels covered by all target areas.
In summary, when a camera is deployed for the purpose of focusing on multiple types of target objects, the effective coverage area of the camera can be determined through this scheme. The coverage area determined by this scheme excludes the influence of the above-mentioned extraneous factors: areas that cannot be clearly imaged due to, for example, occlusion or illumination, as well as objects that are not of interest, are not detected as target areas during target object detection, and therefore do not enter the coverage area calculation. In addition, the coverage area is determined directly from the images shot by the camera, independently of the intrinsic and extrinsic parameters of the camera, which helps improve the accuracy of the coverage area determination result.
As described above, the camera coverage detection method provided by the present invention can be executed in the cloud, and a plurality of computing nodes may be deployed in the cloud, and each computing node has processing resources such as computation and storage. In the cloud, a plurality of computing nodes may be organized to provide a service, and of course, one computing node may also provide one or more services.
According to the scheme provided by the invention, the cloud end can provide a service for detecting the coverage of the camera, which is called a target service. When a user needs to detect the coverage of a camera, the user invokes the target service, thereby triggering a request to the cloud to call the target service, the request carrying a video acquired by the camera. The cloud determines a computing node to respond to the request, and executes the following steps by using the resources of that computing node:
sampling a video collected by the camera to obtain a multi-frame image;
respectively detecting multiple types of target objects of the multiple frames of images to obtain multiple target areas corresponding to the multiple types of target objects;
determining at least one group of target areas covering target pixels from the plurality of target areas, wherein one group of target areas corresponds to one type of target object;
determining a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target areas, a pixel size of a target area contained in each of the at least one group of target areas, and a number of target areas contained in each of the at least one group of target areas;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
The specific implementation of the above steps may refer to the related descriptions in the foregoing other embodiments, which are not described herein again.
The camera coverage detection apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these means can each be constructed using commercially available hardware components and by performing the steps taught in this disclosure.
Fig. 7 is a schematic structural diagram of a camera coverage detection apparatus according to an embodiment of the present invention, and as shown in fig. 7, the apparatus includes: an image acquisition module 11, an object detection module 12, and an area determination module 13.
And the image acquisition module 11 is configured to sample a video acquired by the camera to obtain a multi-frame image.
And an object detection module 12, configured to perform target object detection on the multiple frames of images respectively to obtain multiple target areas including the target object.
An area determining module 13, configured to determine a target physical size corresponding to the target pixel according to a preset physical size of the target object and a pixel size corresponding to each of at least one target area covering the target pixel; and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
Optionally, the plurality of target pixels comprises all pixels covered by the plurality of target areas.
Optionally, the preset physical size of the target object includes: a preset physical length of the target object in a preset direction, wherein the preset direction comprises a length direction or a width direction; the pixel size corresponding to the target area includes: the number of pixels corresponding to the preset direction and contained in the target area; the target physical size corresponding to the target pixel comprises: and the physical length of the target pixel corresponding to the preset direction.
Optionally, the area determining module 13 may be specifically configured to: determine the coverage area of the camera as: the sum of the squares of the target physical lengths corresponding to the plurality of target pixels.
Optionally, the object detection module 12 may be specifically configured to: respectively detecting a target object for the multi-frame images to acquire a plurality of rectangular areas containing the target object; for each rectangular area, intercepting an area from the bottom to a preset height; determining the plurality of cut-out regions as the plurality of target regions.
Optionally, the area determining module 13 may be specifically configured to: for any target area in the at least one target area, determine the quotient of the preset physical size and the pixel size corresponding to that target area as the physical size of the target pixel in that target area; and determine the target physical size corresponding to the target pixel according to the physical sizes of the target pixel respectively corresponding to the at least one target area.
Optionally, the area determining module 13 may be specifically configured to: determine the target physical size corresponding to the target pixel as: the mean of the physical sizes of the target pixel in the at least one target area.
The apparatus shown in fig. 7 may perform the camera coverage detection method provided in the embodiments shown in fig. 1 to fig. 4, and the detailed execution process and technical effect refer to the description in the embodiments, which is not described herein again.
In one possible design, the structure of the camera coverage detection apparatus shown in fig. 7 may be implemented as an electronic device, as shown in fig. 8, where the electronic device may include: a first processor 21, a first memory 22. The first memory 22 stores executable codes thereon, and when the executable codes are executed by the first processor 21, the first processor 21 can at least implement the camera coverage detection method provided in the embodiments shown in fig. 1 to 4.
Optionally, the electronic device may further include a first communication interface 23 for communicating with other devices.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium, on which executable code is stored, and when the executable code is executed by a processor of an electronic device, the processor is enabled to implement at least the camera coverage detection method provided in the foregoing embodiments shown in fig. 1 to 4.
Fig. 9 is a schematic structural diagram of a camera coverage detection apparatus according to an embodiment of the present invention, and as shown in fig. 9, the apparatus includes: an image acquisition module 31, an object detection module 32, a region selection module 33, and an area determination module 34.
And the image acquisition module 31 is configured to sample a video acquired by the camera to obtain a multi-frame image.
And an object detection module 32, configured to perform detection on multiple types of target objects in the multiple frames of images, respectively, so as to obtain multiple target areas corresponding to the multiple types of target objects, respectively.
The area selection module 33 is configured to determine at least one group of target areas covering the target pixels from the plurality of target areas, where a group of target areas corresponds to a class of target objects.
An area determining module 34, configured to determine a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target regions, a pixel size of a target region included in each of the at least one group of target regions, and a number of target regions included in each of the at least one group of target regions; and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
Optionally, the plurality of target pixels comprises all pixels covered by the plurality of target areas.
Optionally, the preset physical size of the target object includes: a preset physical length of the target object in a preset direction, wherein the preset direction comprises a length direction or a width direction; the pixel size corresponding to the target area includes: the number of pixels corresponding to the preset direction and contained in the target area; the target physical size corresponding to the target pixel comprises: and the physical length of the target pixel corresponding to the preset direction.
Optionally, the area determination module 34 may be specifically configured to: determine the coverage area of the camera as: the sum of the squares of the target physical lengths corresponding to the plurality of target pixels.
Optionally, the object detection module 32 may be specifically configured to: respectively detecting multiple types of target objects of the multiple frames of images to obtain multiple rectangular areas containing any type of target object; for each rectangular area in the plurality of rectangular areas, intercepting an area from the bottom to a preset height; and determining the plurality of cut-out regions as a plurality of target regions corresponding to the any type of target object.
Optionally, the area determination module 34 may be specifically configured to: determining physical sizes corresponding to target pixels under at least one type of target objects according to preset physical sizes of the target objects corresponding to the at least one group of target areas and pixel sizes of the target areas contained in the at least one group of target areas, wherein the at least one type of target objects correspond to the at least one group of target areas one to one; determining the weight of the physical size corresponding to the target pixel under at least one type of target object according to the number of the target areas contained in the at least one group of target areas; and performing weighted summation on the physical sizes respectively corresponding to the target pixels according to the weights so as to determine the target physical size corresponding to the target pixel.
Optionally, the area determination module 34 may be specifically configured to: for any group of target areas in the at least one group of target areas, determine the quotient of the preset physical size of the target object corresponding to that group and the pixel size of each target area contained in that group as the corresponding physical size of the target pixel in each such target area; and determine the physical size corresponding to the target pixel under the target object type corresponding to that group of target areas according to the physical sizes corresponding to the target pixel in those target areas.
Optionally, the area determination module 34 may be specifically configured to: determine the physical size corresponding to the target pixel under the target object class corresponding to any one set of target areas as: the mean of the physical sizes corresponding to the target pixel in the respective target areas of that set.
Optionally, the area determination module 34 may be specifically configured to: determine the total number of target areas across the at least one set of target areas according to the number of target areas contained in each set; and, for any set of target areas in the at least one set, determine the weight of the physical size corresponding to the target pixel under the target object class corresponding to that set as: the ratio of the number of target areas contained in that set to the total number of target areas.
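Taken together, the per-class averaging and count-based weighting described above can be sketched as follows (a minimal illustration; the function names and the per-class grouping structure are assumptions, not the patent's implementation):

```python
def pixel_physical_size(groups):
    """groups maps a target-object class to a pair
    (preset_physical_size, [pixel sizes of the target areas of that class
    covering the pixel]). Returns the weighted target physical size for
    one target pixel."""
    total_regions = sum(len(sizes) for _, sizes in groups.values())
    weighted = 0.0
    for phys, pixel_sizes in groups.values():
        # per-class size: mean over areas of (preset physical size / pixel size)
        per_class = sum(phys / p for p in pixel_sizes) / len(pixel_sizes)
        # weight: ratio of this class's area count to the total area count
        weight = len(pixel_sizes) / total_regions
        weighted += weight * per_class
    return weighted

def coverage_area(per_pixel_lengths):
    # coverage area: sum of squares of the per-pixel physical lengths
    return sum(l * l for l in per_pixel_lengths)
```

For example, one person detection (0.5 m wide, 50 pixels) and three car detections (4.0 m wide, 100 pixels each) covering the same pixel yield 0.25 x 0.01 + 0.75 x 0.04 = 0.0325 m per pixel.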
The apparatus shown in fig. 9 may execute the camera coverage detection method provided in the embodiments shown in fig. 5 to fig. 6; for the detailed execution process and technical effects, refer to the description of those embodiments, which is not repeated here.
In one possible design, the structure of the camera coverage detection apparatus shown in fig. 9 may be implemented as an electronic device, as shown in fig. 10. The electronic device may include: a second processor 41 and a second memory 42, wherein the second memory 42 stores executable code which, when executed by the second processor 41, causes the second processor 41 to at least implement the camera coverage detection method provided in the foregoing embodiments of fig. 5 to 6.
Optionally, the electronic device may further include a second communication interface 43 for communicating with other devices.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium having executable code stored thereon; when the executable code is executed by a processor of an electronic device, the processor is caused to at least implement the camera coverage detection method provided in the foregoing embodiments of fig. 5 to 6.
The above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment, and one of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general-purpose hardware platform, or by a combination of hardware and software. Based on this understanding, the technical solutions above, or the parts thereof that contribute to the prior art, may be embodied in the form of a computer program product carried on one or more computer-usable storage media (including, without limitation, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
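As an end-to-end sketch of the single-class method described in the embodiments above (hypothetical names and a deliberately brute-force per-pixel loop; the patent does not prescribe an implementation):

```python
def estimate_coverage(detections, preset_physical_size, frame_shape):
    """detections: (x, y, w, h) target areas gathered from all sampled frames.
    preset_physical_size: assumed real-world width of the target object (meters).
    frame_shape: (height, width) of the image in pixels.
    Returns the estimated camera coverage area in square meters."""
    h_img, w_img = frame_shape
    sums = [[0.0] * w_img for _ in range(h_img)]
    counts = [[0] * w_img for _ in range(h_img)]
    for x, y, w, h in detections:
        # quotient of the preset physical size and the area's pixel width:
        # meters represented by one pixel inside this target area
        per_pixel = preset_physical_size / w
        for r in range(y, min(y + h, h_img)):
            for c in range(x, min(x + w, w_img)):
                sums[r][c] += per_pixel
                counts[r][c] += 1
    # per-pixel target size is the mean over covering areas;
    # coverage area is the sum of squares of those per-pixel lengths
    return sum(
        (sums[r][c] / counts[r][c]) ** 2
        for r in range(h_img) for c in range(w_img) if counts[r][c]
    )
```

A single 2x2-pixel detection of a 4 m wide object gives 2 m per pixel, so the four covered pixels contribute 4 x 4 = 16 square meters.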

Claims (24)

1. A camera coverage detection method is characterized by comprising the following steps:
sampling a video collected by a camera to obtain a multi-frame image;
respectively detecting a target object for the multi-frame images to acquire a plurality of target areas containing the target object;
determining a target physical size corresponding to a target pixel according to a preset physical size of the target object and a pixel size corresponding to each of at least one target area covering the target pixel;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
2. The method of claim 1, wherein the plurality of target pixels comprises all pixels covered by the plurality of target regions.
3. The method of claim 1, wherein the preset physical size of the target object comprises: a preset physical length of the target object in a preset direction, wherein the preset direction comprises a length direction or a width direction;
the pixel size corresponding to the target area comprises: the number of pixels contained in the target area in the preset direction; and
the target physical size corresponding to the target pixel comprises: a physical length of the target pixel in the preset direction.
4. The method of claim 3, wherein determining the coverage area of the camera according to the target physical size corresponding to each of the plurality of target pixels comprises:
determining the coverage area of the camera as: the sum of the squares of the target physical lengths corresponding to the plurality of target pixels.
5. The method according to claim 1, wherein the detecting of the target object for the plurality of frames of images respectively to obtain a plurality of target areas containing the target object comprises:
respectively detecting a target object for the multi-frame images to acquire a plurality of rectangular areas containing the target object;
for each rectangular area, cropping an area extending from the bottom of the rectangular area up to a preset height; and
determining the plurality of cropped areas as the plurality of target areas.
6. The method according to claim 1, wherein the determining the target physical size corresponding to the target pixel according to the preset physical size of the target object and the pixel size corresponding to each of at least one target area covering the target pixel comprises:
for any target area in the at least one target area, determining the quotient of the preset physical size and the pixel size corresponding to that target area as the physical size corresponding to the target pixel in that target area; and
and determining the target physical size corresponding to the target pixel according to the physical sizes of the target pixel respectively corresponding to the at least one target area.
7. The method according to claim 6, wherein the determining the target physical size corresponding to the target pixel according to the physical sizes respectively corresponding to the target pixel in the at least one target area comprises:
determining the target physical size corresponding to the target pixel as: the mean of the physical sizes corresponding to the target pixel in the at least one target area respectively.
8. A camera coverage detection method is characterized by comprising the following steps:
sampling a video collected by a camera to obtain a multi-frame image;
respectively detecting multiple types of target objects of the multiple frames of images to obtain multiple target areas corresponding to the multiple types of target objects;
determining at least one group of target areas covering target pixels from the plurality of target areas, wherein one group of target areas corresponds to one type of target object;
determining a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target areas, a pixel size of a target area contained in each of the at least one group of target areas, and a number of target areas contained in each of the at least one group of target areas;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
9. The method of claim 8, wherein the plurality of target pixels comprises all pixels covered by the plurality of target regions.
10. The method of claim 8, wherein the preset physical size of the target object comprises: a preset physical length of the target object in a preset direction, wherein the preset direction comprises a length direction or a width direction;
the pixel size corresponding to the target area comprises: the number of pixels contained in the target area in the preset direction; and
the target physical size corresponding to the target pixel comprises: a physical length of the target pixel in the preset direction.
11. The method of claim 10, wherein determining the coverage area of the camera according to the target physical size corresponding to each of the plurality of target pixels comprises:
determining the coverage area of the camera as: the sum of the squares of the target physical lengths corresponding to the plurality of target pixels.
12. The method according to claim 8, wherein the performing, for the multiple frames of images, detection on multiple classes of target objects respectively to obtain multiple target regions corresponding to the multiple classes of target objects respectively comprises:
detecting multiple classes of target objects in the plurality of frames of images respectively to obtain a plurality of rectangular areas containing target objects of any one class;
for each rectangular area in the plurality of rectangular areas, cropping an area extending from the bottom of the rectangular area up to a preset height; and
determining the plurality of cropped areas as the plurality of target areas corresponding to that class of target object.
13. The method of claim 8, wherein determining the target physical size corresponding to the target pixel comprises:
determining, according to the preset physical sizes of the target objects corresponding to the at least one set of target areas and the pixel sizes of the target areas contained in the at least one set of target areas, the physical sizes corresponding to the target pixel under at least one class of target object, wherein the at least one class of target object corresponds one-to-one to the at least one set of target areas;
determining the weights of the physical sizes corresponding to the target pixel under the at least one class of target object according to the number of target areas contained in each of the at least one set of target areas; and
performing a weighted summation of the physical sizes corresponding to the target pixel according to the weights, so as to determine the target physical size corresponding to the target pixel.
14. The method according to claim 13, wherein the determining the physical sizes of the target pixels respectively corresponding to at least one type of target object according to the preset physical sizes of the target objects respectively corresponding to the at least one set of target areas and the pixel sizes of the target areas respectively contained in the at least one set of target areas comprises:
for any set of target areas in the at least one set of target areas, determining the quotient of the preset physical size of the target object corresponding to that set and the pixel size of each target area contained in that set as the physical size corresponding to the target pixel in that target area; and
determining, according to the physical sizes corresponding to the target pixel in the respective target areas, the physical size corresponding to the target pixel under the target object class corresponding to that set of target areas.
15. The method according to claim 14, wherein the determining, according to the physical sizes corresponding to the target pixel in the respective target areas, the physical size corresponding to the target pixel under the target object class corresponding to any one set of target areas comprises:
determining the physical size corresponding to the target pixel under the target object class corresponding to that set of target areas as: the mean of the physical sizes corresponding to the target pixel in the respective target areas.
16. The method according to claim 13, wherein the determining the weight of the physical size corresponding to each of the target pixels under at least one type of target object according to the number of target areas included in each of the at least one set of target areas comprises:
determining the total number of target areas across the at least one set of target areas according to the number of target areas contained in each set; and
for any set of target areas in the at least one set of target areas, determining the weight of the physical size corresponding to the target pixel under the target object class corresponding to that set as: the ratio of the number of target areas contained in that set to the total number of target areas.
17. A camera coverage detection device, comprising:
the image acquisition module is used for sampling the video acquired by the camera to obtain a multi-frame image;
the object detection module is used for respectively detecting the target objects of the multi-frame images so as to acquire a plurality of target areas containing the target objects;
the area determining module is used for determining a target physical size corresponding to the target pixel according to a preset physical size of the target object and a pixel size corresponding to each of at least one target area covering the target pixel; and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
18. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the camera coverage detection method of any one of claims 1 to 7.
19. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the camera coverage detection method of any one of claims 1 to 7.
20. A camera coverage detection device, comprising:
the image acquisition module is used for sampling the video acquired by the camera to obtain a multi-frame image;
the object detection module is used for respectively detecting multiple types of target objects of the multi-frame images so as to obtain multiple target areas corresponding to the multiple types of target objects;
the area selection module is used for determining at least one group of target areas covering target pixels from the plurality of target areas, wherein one group of target areas correspond to one type of target object;
an area determination module, configured to determine a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target regions, a pixel size of a target region included in each of the at least one group of target regions, and a number of target regions included in each of the at least one group of target regions; and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
21. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the camera coverage detection method of any one of claims 8 to 16.
22. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the camera coverage detection method of any one of claims 8 to 16.
23. A camera coverage detection method is characterized by comprising the following steps:
receiving a request for calling a target service, wherein the request comprises a video acquired by a camera;
and executing the following steps by utilizing the resources corresponding to the target service:
sampling a video collected by the camera to obtain a multi-frame image;
respectively detecting a target object for the multi-frame images to acquire a plurality of target areas containing the target object;
determining a target physical size corresponding to a target pixel according to a preset physical size of the target object and a pixel size corresponding to each of at least one target area covering the target pixel;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
24. A camera coverage detection method is characterized by comprising the following steps:
receiving a request for calling a target service, wherein the request comprises a video acquired by a camera;
and executing the following steps by utilizing the resources corresponding to the target service:
sampling a video collected by the camera to obtain a multi-frame image;
respectively detecting multiple types of target objects of the multiple frames of images to obtain multiple target areas corresponding to the multiple types of target objects;
determining at least one group of target areas covering target pixels from the plurality of target areas, wherein one group of target areas corresponds to one type of target object;
determining a target physical size corresponding to the target pixel according to a preset physical size of a target object corresponding to each of the at least one group of target areas, a pixel size of a target area contained in each of the at least one group of target areas, and a number of target areas contained in each of the at least one group of target areas;
and determining the coverage area of the camera according to the target physical size corresponding to each target pixel.
CN202011157331.8A 2020-10-26 2020-10-26 Camera coverage detection method, device, equipment and storage medium Pending CN113516703A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011157331.8A CN113516703A (en) 2020-10-26 2020-10-26 Camera coverage detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011157331.8A CN113516703A (en) 2020-10-26 2020-10-26 Camera coverage detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113516703A true CN113516703A (en) 2021-10-19

Family

ID=78060892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011157331.8A Pending CN113516703A (en) 2020-10-26 2020-10-26 Camera coverage detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113516703A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115223023A (en) * 2022-09-16 2022-10-21 杭州得闻天下数字文化科技有限公司 Human body contour estimation method and device based on stereoscopic vision and deep neural network
CN115223023B (en) * 2022-09-16 2022-12-20 杭州得闻天下数字文化科技有限公司 Human body contour estimation method and device based on stereoscopic vision and deep neural network

Similar Documents

Publication Publication Date Title
CN110866480B (en) Object tracking method and device, storage medium and electronic device
CN111815707B (en) Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment
US11024052B2 (en) Stereo camera and height acquisition method thereof and height acquisition system
US20210227139A1 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN104102069B (en) A kind of focusing method of imaging system and device, imaging system
CN112733690A (en) High-altitude parabolic detection method and device and electronic equipment
CN116778094B (en) Building deformation monitoring method and device based on optimal viewing angle shooting
JP7030451B2 (en) Image processing equipment
CN111105351B (en) Video sequence image splicing method and device
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN112053397A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113505643B (en) Method and related device for detecting violation target
CN113516703A (en) Camera coverage detection method, device, equipment and storage medium
CN112037148B (en) Big data moving target detection and identification method and system
CN116912517B (en) Method and device for detecting camera view field boundary
JP2020021368A (en) Image analysis system, image analysis method and image analysis program
CN112101134A (en) Object detection method and device, electronic device and storage medium
CN114782555B (en) Map mapping method, apparatus, and storage medium
CN109242900B (en) Focal plane positioning method, processing device, focal plane positioning system and storage medium
CN112019723B (en) Big data target monitoring method and system of block chain
CN111328099B (en) Mobile network signal testing method, device, storage medium and signal testing system
JP2004208209A (en) Device and method for monitoring moving body
CN112633158A (en) Power transmission line corridor vehicle identification method, device, equipment and storage medium
CN113011445A (en) Calibration method, identification method, device and equipment
CN113538477B (en) Method and device for acquiring plane pose, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination