CN116249015A - Camera shielding detection method and device, camera equipment and storage medium - Google Patents


Info

Publication number
CN116249015A
CN116249015A (application CN202211659053.5A)
Authority
CN
China
Prior art keywords
pixel, contour, image, camera, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211659053.5A
Other languages
Chinese (zh)
Inventor
于振楠 (Yu Zhennan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hongdian Technologies Corp
Original Assignee
Shenzhen Hongdian Technologies Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hongdian Technologies Corp filed Critical Shenzhen Hongdian Technologies Corp
Priority claimed from application CN202211659053.5A
Publication of CN116249015A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/20: Circuitry for controlling amplitude response
    • H04N 5/202: Gamma control


Abstract

The application belongs to the technical field of image recognition and provides a camera occlusion detection method and device, a camera device, and a storage medium. The method comprises the following steps: acquiring an image to be detected; acquiring the number of feature points and contour information in the image to be detected, the contour information comprising the total contour area and the number of contours; and determining that the camera is occluded if the number of feature points is greater than a preset first threshold, the number of contours is less than a preset second threshold, and the total contour area is greater than a preset third threshold. The method can determine whether the camera is occluded under a variety of conditions, improving both the efficiency and the accuracy of camera occlusion detection.

Description

Camera shielding detection method and device, camera equipment and storage medium
Technical Field
The application belongs to the technical field of image recognition, and particularly relates to a camera occlusion detection method, a camera occlusion detection device, a camera device, and a storage medium.
Background
In vehicle-mounted equipment, an infrared camera is generally used to monitor the driver's state and assist driving. When the infrared camera, or any other camera, is occluded, whether maliciously or by an object, its monitoring function fails.
In the prior art, a camera collects relevant image features in order to determine whether it is occluded. The features are usually obtained by one of three methods: a frame-difference method based on a reference frame, a background-modeling method based on consecutive video frames, or a feature-point detection method based on corner points. The frame-difference method is limited by the choice of reference frame: the selected reference frame may not suit certain application scenarios, in which case occlusion cannot be determined. The background-modeling method requires multiple frames collected over a period of time, so its reaction time is long; it cannot be applied to scenarios requiring real-time detection, and it cannot recognize a moving occluder. The corner-point method fails on pictures partially illuminated by strong light: the pixel intensity values become uniformly distributed across the whole picture, too few corner points are extracted, and misjudgments result.
All three methods adopted in the prior art therefore have limitations, and under special conditions they cannot determine accurately and efficiently whether the camera is occluded.
Disclosure of Invention
The embodiments of the present application provide a camera occlusion detection method and device, a camera device, and a storage medium, which improve the efficiency and accuracy of camera occlusion detection.
In a first aspect, an embodiment of the present application provides a camera occlusion detection method, comprising the following steps:
acquiring an image to be detected;
acquiring the number of feature points and contour information in the image to be detected, wherein the contour information comprises the total contour area and the number of contours;
and determining that the camera is occluded if the number of feature points is greater than a preset first threshold, the number of contours is less than a preset second threshold, and the total contour area is greater than a preset third threshold.
In a possible implementation of the first aspect, before determining that the camera is occluded, the method further comprises:
acquiring the pixel values of the feature points and summing them to obtain a total pixel value;
obtaining a pixel mean value, the quotient of the total pixel value divided by the number of feature points;
if the pixel mean value is greater than a preset fourth threshold, acquiring a face recognition result of the image to be detected;
and if face features exist in the face recognition result, determining that the camera is not occluded; otherwise, determining that the camera is occluded.
In a possible implementation of the first aspect, acquiring the number of feature points and the contour information in the image to be detected comprises:
obtaining the image to be detected and scaling it to a preset target size;
performing color space conversion on the scaled image to convert it to a grayscale image;
acquiring all pixel points in the grayscale image and their corresponding pixel values;
acquiring the number of feature points from all the pixel points and their pixel values, in combination with a preset binarization threshold;
and binarizing the grayscale image according to all the pixel points and their pixel values, in combination with the preset binarization threshold, to obtain the contour information.
In a possible implementation of the first aspect, acquiring the number of feature points from all the pixel points and their pixel values, in combination with a preset binarization threshold, comprises:
treating any pixel point whose pixel value is greater than the preset binarization threshold as a feature point;
and counting the feature points.
In a possible implementation of the first aspect, binarizing the grayscale image to obtain the contour information comprises:
setting any pixel point in the grayscale image whose pixel value is greater than the preset binarization threshold to a white pixel point, and setting it to a black pixel point otherwise;
acquiring object contours from the black pixel points by a contour search method;
and acquiring the contour information from the object contours.
In a possible implementation of the first aspect, acquiring the object contours from the black pixel points by the contour search method comprises:
searching the black pixel points sequentially by the contour search method;
an object contour being the sequential connection of a set of black pixel points.
In a possible implementation of the first aspect, acquiring the contour information from the object contours comprises:
retaining only the outermost contours and excluding small contours and contained contours, where a small contour is a contour smaller than a preset contour size and a contained contour is a contour contained within an outermost contour;
and acquiring the contour information from the outermost contours.
In a second aspect, an embodiment of the present application provides a camera occlusion detection apparatus, comprising:
an image acquisition module for acquiring an image to be detected;
a feature point and contour acquisition module for acquiring the number of feature points and contour information in the image to be detected, the contour information comprising the total contour area and the number of contours;
and a judging module for determining that the camera is occluded if the number of feature points is greater than a preset first threshold, the number of contours is less than a preset second threshold, and the total contour area is greater than a preset third threshold.
In a third aspect, an embodiment of the present application provides a camera device comprising a lens, a memory, a processor, and a computer program stored in the memory and executable on the processor; the lens is used to obtain the image to be detected, and the processor, when executing the computer program, implements the camera occlusion detection method of any one of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements a method according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product which, when run on a device, causes the device to perform the camera occlusion detection method of any one of the first aspects.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Compared with the prior art, the embodiments of the present application have the following beneficial effects:

In the prior art, three conventional methods are used to judge camera occlusion: 1. the frame-difference method based on a reference frame is limited by the selection of the reference frame and is not applicable to certain scenes; 2. the background-modeling method based on consecutive video frames cannot recognize a moving occluder, and because it needs multiple frames it is not applicable to scenes requiring real-time feedback; 3. the corner-point feature detection method cannot obtain enough corner points when the picture is directly illuminated by intense light, because the pixel values become uniformly distributed, so misjudgments easily arise. All three methods therefore have limitations and cannot determine efficiently and accurately whether the camera is occluded.

The camera occlusion detection method, device, camera device and storage medium of the present application need only a single frame for the judgment, which improves detection efficiency. Feature points and edge contours are extracted from the image, and the occlusion conditions are tested against the first, second and third thresholds respectively: when the number of feature points is greater than the first threshold, the number of contours is less than the second threshold, and the total contour area is greater than the third threshold, the camera is judged to be occluded. When all three conditions are met, a face recognition result can further be extracted to confirm whether the camera is truly occluded, which improves detection accuracy. In conclusion, the present application improves both the efficiency and the accuracy of judging whether the camera is occluded.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a method for detecting camera occlusion according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for judging camera occlusion by face recognition according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a contour search method according to an embodiment of the present application;
fig. 4 is a schematic view of a scene of camera occlusion detection according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a camera occlusion detection device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a camera device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The present application provides a camera occlusion detection method and device, a camera device, and a storage medium, which address the low efficiency and low accuracy of camera occlusion detection. Traditional camera occlusion detection may be limited by the selected reference object or by environmental factors; the method of the present application adapts well to a variety of conditions and environments and improves both the efficiency and the accuracy of detection.
Fig. 1 shows a schematic flow chart of a method for detecting camera occlusion according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
s101, acquiring an image to be detected.
In this step, in one possible implementation, the camera captures one frame as the image to be detected and reads it. One possible type of the image to be detected is an infrared image, and capturing one frame means acquiring a still picture.
S102, acquiring the number of feature points and contour information in the image to be detected.
In this step, in one possible implementation, the image to be detected is scaled to a preset target size before its feature points are obtained; the preset target size is used to normalize the size of every image processed by the camera occlusion detection method.
In one possible implementation, after the image to be detected is scaled, color space conversion is performed to obtain a gray feature map, i.e. a grayscale image. Color space conversion converts an image from one color space to another; a color space is a range of colors defined by a coordinate system in which each color is represented by a point in one, two, three or even four dimensions. In the present application, the RGB color space (a regular color image) is converted into the GRAY color space (a grayscale image). A grayscale image divides the range between white and black into a number of levels according to a logarithmic relation; the levels are called gray levels, and the image characteristics are expressed through them.
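The conversion just described can be sketched as follows. The text does not fix a conversion formula, so the ITU-R BT.601 luma weights used below are an illustrative assumption:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to a single-channel gray feature map.

    The weights are the common ITU-R BT.601 luma coefficients; the text
    does not specify a formula, so this choice is an assumption.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)
```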
In one possible implementation, after the gray feature map is obtained, all pixel points in the image and their corresponding pixel values are collected, and the number of feature points is calculated from a preset binarization threshold by histogram statistics. A histogram represents a data distribution by a series of vertical bars or line segments of different heights; generally, the horizontal axis gives the data category and the vertical axis its frequency. In this step the feature points, i.e. the pixel points whose pixel value is greater than the preset binarization threshold, are counted from the histogram.
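The histogram-based count can be sketched as follows; summing the histogram bins above the threshold is equivalent to comparing each pixel value directly:

```python
import numpy as np

def count_feature_points(gray: np.ndarray, bin_threshold: int) -> int:
    """Count the feature points: pixels whose value is strictly greater
    than the preset binarization threshold, via a 256-bin histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    return int(hist[bin_threshold + 1:].sum())
```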
In one possible implementation, when the contour information is acquired, the gray feature map is first binarized according to the preset binarization threshold, and the contour information is collected after the edge contours have been strengthened. The contour information comprises the total contour area and the number of contours. The binarization is as follows: any pixel point in the gray feature map whose pixel value is greater than the preset binarization threshold is set to a white pixel point, and any pixel point whose value is not greater than the threshold is set to a black pixel point, which strengthens the edge contours of the image. A white pixel point has the pixel value 255 and a black pixel point the pixel value 0. In this step a pixel point is turned white or black simply by adjusting its pixel value.
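A minimal sketch of this binarization step:

```python
import numpy as np

def binarize(gray: np.ndarray, bin_threshold: int) -> np.ndarray:
    """Set pixels above the binarization threshold to white (255) and
    all others to black (0), strengthening the edge contours."""
    return np.where(gray > bin_threshold, 255, 0).astype(np.uint8)
```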
In one possible implementation, after the gray feature map has been binarized, the contour information in it is collected by a contour search method; the flow of the contour search method is shown in fig. 3 and described in a subsequent embodiment.
S103, if the number of feature points is greater than the preset first threshold, the number of contours is less than the preset second threshold, and the total contour area is greater than the preset third threshold, jump to S104 and judge that the camera is occluded; otherwise jump to S105 and judge that the camera is not occluded.
In this step, in one possible implementation, the image to be detected is judged once its number of feature points and contour information have been obtained.
In one possible implementation, the first threshold is a preset feature point count threshold, the second threshold a contour count threshold, and the third threshold a preset occlusion area threshold. When the number of pixels in the image to be detected whose value exceeds the preset binarization threshold is greater than the feature point count threshold, the number of contours is less than the contour count threshold, and the total contour area is greater than the occlusion area threshold, that is, when all three conditions are met, the camera can be judged to be occluded.
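The three-condition rule can be expressed as a small predicate; the parameter names are illustrative, and the threshold values are deployment-specific presets:

```python
def is_occluded(num_features: int, num_contours: int, total_area: float,
                t_features: int, t_contours: int, t_area: float) -> bool:
    """Three-condition occlusion rule: many bright feature points, few
    contours, and a large total contour area must all hold at once."""
    return (num_features > t_features
            and num_contours < t_contours
            and total_area > t_area)
```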
In a possible application scenario, if the camera is directly illuminated by sunlight or another light source when it captures the image to be detected, the captured image may be overexposed. Feature points and contours in the image are then partially lost, the three-condition judgment can go wrong, and no accurate decision on whether the camera is occluded can be made.
Optionally, when all three conditions are met, the brightness of the image can additionally be judged against a preset brightness threshold; when the captured image is too bright, a face recognition result can be introduced to decide whether the camera is really occluded. The process is shown in fig. 2, a schematic flow chart of judging camera occlusion by face recognition according to an embodiment of the present application. It comprises the following steps:
s201, acquiring a pixel mean value.
In this step, building on the pixel points and pixel values obtained in S102, the pixel values of all feature points are summed and divided by the number of feature points to obtain the pixel mean value, which is used to judge the brightness of the image to be detected.
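The pixel mean computation can be sketched as follows, with the feature points taken, as above, to be the pixels above the binarization threshold:

```python
import numpy as np

def feature_pixel_mean(gray: np.ndarray, bin_threshold: int) -> float:
    """Sum the pixel values of the feature points and divide by their
    number; used as a brightness measure for the image to be detected."""
    feats = gray[gray > bin_threshold]
    return float(feats.mean()) if feats.size else 0.0
```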
S202, if the pixel mean value is greater than a preset fourth threshold, jump to S203 to obtain the face recognition result and then to S205; otherwise jump to S204 and judge that the camera is occluded.
In this step, in one possible implementation, the preset fourth threshold is a preset brightness threshold, i.e. a preset pixel value. If the pixel mean value is greater than this threshold, the camera may be directly illuminated by strong light and the feature point collection may be inaccurate; the face recognition result of the image to be detected is then introduced to further judge whether the camera is occluded, improving the accuracy of the judgment.
S205, if face features exist in the face recognition result, jump to S206 and judge that the camera is not occluded; otherwise jump to S207 and judge that the camera is occluded.
In this step, in one possible implementation, the face recognition result of the image to be detected is obtained. If one or more face features, such as ear, nose, eye, eyebrow, jawbone, forehead, cheek or mouth features, exist in the result, the camera is judged not to be occluded; otherwise the camera is judged to be occluded.
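The overall flow of figs. 1 and 2 can be condensed into one function; `rule_hit` stands for the three-condition result of S103, and all names here are illustrative:

```python
def final_verdict(rule_hit: bool, pixel_mean: float,
                  bright_threshold: float, has_face: bool) -> bool:
    """Combine the three-condition rule with the brightness fallback:
    when the image is suspiciously bright, a detected face overrides
    the occlusion verdict. True means 'camera occluded'."""
    if not rule_hit:                    # S103: conditions not all met
        return False
    if pixel_mean > bright_threshold:   # S202: possible strong light
        return not has_face             # S205: a face means no occlusion
    return True                         # S204: occluded
```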
Fig. 3 shows a schematic flow chart of a contour searching method according to an embodiment of the present application. As shown in fig. 3, the method comprises the steps of:
s301, searching a black pixel point in the binarized gray feature map, and taking the first found black pixel point as a starting point.
In the above steps, one possible implementation manner is to search from the pixel point at the bottom left corner of the binarized gray feature map, and search from top to bottom and then from left to right circularly, so as to find the black pixel point with the pixel value of 0 px. When the first black pixel point is found, taking the first black pixel point as a starting point, and carrying out a subsequent searching step.
S302, starting from the starting point, search for the next black pixel point.
In this step, in one possible implementation, all pixel points in the Moore neighbourhood of the starting point are searched in clockwise order for the next black pixel point.
In one possible implementation, the Moore neighbourhood is also called the eight-neighbourhood or indirect neighbourhood: a 3×3 region contains nine pixel points, and the eight points around the central one form the Moore neighbourhood of the centre. In this step the central pixel point of the 3×3 region is the starting point; its eight neighbours are searched in clockwise order, and when the next black pixel point is found it is recorded as a boundary pixel point. The search then returns to the white pixel point passed just before, and continues clockwise through the Moore neighbourhood of that white pixel point for the next black pixel point, until no further black pixel point is found.
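The clockwise eight-neighbourhood search of this step can be sketched as follows. The starting direction (the pixel above the centre) is an assumption, since the text fixes only the clockwise order:

```python
import numpy as np

# Clockwise offsets over the Moore (eight-) neighbourhood, starting
# from the pixel directly above the centre.
MOORE_OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
                 (1, 0), (1, -1), (0, -1), (-1, -1)]

def next_black_neighbor(binary: np.ndarray, r: int, c: int):
    """Return the coordinates of the first black (0) pixel found
    clockwise in the eight-neighbourhood of (r, c), or None."""
    h, w = binary.shape
    for dr, dc in MOORE_OFFSETS:
        nr, nc = r + dr, c + dc
        if 0 <= nr < h and 0 <= nc < w and binary[nr, nc] == 0:
            return nr, nc
    return None
```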
S303, acquiring a sequence set of all the searched black pixel points, wherein the sequence set is an object contour.
In this step, in one possible implementation, the boundary pixel points recorded in sequence in S302 are collected into an ordered set; connecting all the boundary pixel points in this set in order yields the object contour.
S304, filter the object contours and calculate the total contour area and the number of contours.
In this step, in one possible implementation, all the acquired object contours are cleaned: small contours and contained contours are excluded and only the outermost contours are retained. A small contour is a contour smaller than a preset contour size; a contained contour is a contour contained within an outermost contour.
In one possible implementation, after the contours have been cleaned, the number of contours and the total contour area are obtained from the remaining outermost contours: the remaining outermost contours are counted, their areas are calculated individually, and the areas are summed to give the total contour area.
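A simplified stand-in for the cleaning and statistics steps: instead of tracing contours, it labels 4-connected regions of black pixels, drops regions smaller than a preset size (the "small contours" above), and sums the areas of the rest. `min_area` is an assumed parameter, and region area is used as a proxy for contour area:

```python
import numpy as np
from collections import deque

def contour_stats(binary: np.ndarray, min_area: int):
    """Approximate (number of contours, total contour area) by flood-
    filling 4-connected regions of black (0) pixels; a simplified
    stand-in for the traced outermost contours, not the exact method."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    areas = []
    for r in range(h):
        for c in range(w):
            if binary[r, c] == 0 and not seen[r, c]:
                seen[r, c] = True
                area, queue = 0, deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] == 0 and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if area >= min_area:   # exclude "small contours"
                    areas.append(area)
    return len(areas), sum(areas)
```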
Fig. 4 illustrates a schematic view of a camera occlusion detection scenario provided in an embodiment of the present application. As shown in fig. 4, the first scene is a normal scene in which the camera is not occluded at all; the second scene corresponds to the steps of fig. 1 described above; the third scene corresponds to the steps of fig. 2 described above.
In the first scene, the number of pixels in the image to be detected whose value exceeds the preset binarization threshold is less than the preset feature point count threshold, the number of contours is greater than the contour count threshold, and the total contour area is less than the preset occlusion area threshold; the camera is not occluded.
In the second scene, the number of pixels exceeding the preset binarization threshold is greater than the feature point count threshold, the number of contours is less than the contour count threshold, the total contour area is greater than the occlusion area threshold, and the pixel mean value of the feature points is less than the preset brightness threshold; the camera is judged to be occluded.
In the third scene, the number of pixels exceeding the preset binarization threshold is greater than the feature point count threshold, the number of contours is less than the contour count threshold, the total contour area is greater than the occlusion area threshold, and the pixel mean value of the feature points is greater than the preset brightness threshold, but face features exist in the face recognition result; the camera is not occluded.
Fig. 5 shows a schematic structural diagram of a camera occlusion detection device provided in an embodiment of the present application. As shown in fig. 5, the camera occlusion detection device includes an image acquisition module 501, a feature point and contour collection module 502, and a judging module 503.
The camera occlusion detection device acquires an image to be detected, collects the feature points and contour information in the image, judges from them whether the camera is occluded, and returns the judgment result. Wherein:
the image acquisition module 501 is used for acquiring an image to be detected.
Optionally, after the image acquisition module 501 obtains the image to be detected, it may convert the image in several steps, yielding in turn the scaled image to be detected, the gray feature map of the image, and the binarized gray feature map of the image.
The feature point and contour collection module 502 is configured to collect the number of feature points and contour information in the image to be detected.
Optionally, the feature point and contour collection module 502 obtains the pixel points in the gray feature map of the image to be detected whose pixel values exceed the preset binarization threshold, together with those pixel values.
Optionally, the feature point and contour collection module 502 also obtains all pixel points in the gray feature map of the image to be detected and their corresponding pixel values, for the subsequent face recognition judgment and contour information collection.
Optionally, the feature point and contour collection module 502 searches the binarized gray feature map of the image to be detected with a contour searching method to obtain the enhanced edge contours.
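The application does not name a specific contour searching method. As a hedged stand-in, the sketch below counts 4-connected components of above-threshold (white) pixels in the binarized map and measures their total area, which yields the contour count and total contour area used by the judgment step; `min_area` is an illustrative stand-in for the preset contour size that filters out small contours.

```python
from collections import deque

def contour_stats(binary, min_area=4):
    """Count white-pixel regions and their total area in a binarized image.

    Stand-in for the application's unspecified contour search: each
    4-connected component of non-zero pixels is treated as one contour;
    components smaller than min_area (illustrative) are discarded as
    small contours.
    """
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    num_contours, total_area = 0, 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill one component to measure its area.
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    num_contours += 1
                    total_area += area
    return num_contours, total_area
```

A production implementation would more likely trace boundary contours (e.g. a border-following algorithm) rather than fill components, but the count and area statistics feeding the judgment module are the same kind of quantity.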
The judgment module 503 is configured to judge whether the camera is occluded: if the number of pixels in the image to be detected exceeding the preset binarization threshold is greater than the preset feature-point count threshold, the number of contours is less than the contour count threshold, and the total contour area is greater than the preset occlusion area, that is, if all three conditions are satisfied, the camera is judged to be occluded.
Optionally, when the above three conditions are met, a face recognition result may be introduced for further judgment. If the pixel mean is greater than the preset brightness threshold, the face recognition result is obtained; if one or more face features such as ear, nose, eye, eyebrow, jawbone, forehead, cheek, or mouth features exist in the face recognition result, the camera is judged not to be occluded; otherwise, the camera is judged to be occluded.
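Putting the judgment module's logic together, a minimal sketch follows; the threshold names and all numeric values callers pass in are illustrative, and `has_face` stands in for the output of the face recognition step.

```python
def is_occluded(num_features, num_contours, total_area,
                pixel_mean, has_face, thresholds):
    """Occlusion decision of the judgment module described above.

    thresholds maps illustrative names to the preset values; none of the
    names or numbers come from the application itself.
    """
    if not (num_features > thresholds["feature_count"]
            and num_contours < thresholds["contour_count"]
            and total_area > thresholds["occlusion_area"]):
        return False  # the three occlusion conditions are not all met
    # All three conditions hold; a bright image in which face features are
    # recognized (e.g. a close-up of a person) is still treated as unoccluded.
    if pixel_mean > thresholds["brightness"] and has_face:
        return False
    return True
```

The two branches after the three-condition check correspond to the second and third scenarios described earlier: a dark image stays "occluded", while a bright image with a detected face is reclassified as "not occluded".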
Fig. 6 is a schematic structural diagram of a camera device according to an embodiment of the present application. As shown in fig. 6, the camera device 6 of this embodiment includes: at least one lens 60, a processor 61 (only one is shown in fig. 6), a memory 62, and a computer program 63 stored in the memory 62 and executable on the at least one processor 61. The lens 60 is used to obtain the image to be detected in any of the above camera occlusion detection method embodiments, and the processor 61 implements the steps in any of those method embodiments when executing the computer program 63.
The camera device 6 may be a wide-dynamic-range camera, a glare-suppressing camera, a dedicated road-monitoring camera, an infrared camera, an all-in-one camera, or the like. The camera device 6 may include, but is not limited to, the lens 60, the processor 61, and the memory 62. Those skilled in the art will appreciate that fig. 6 is merely an example of the camera device 6 and is not limiting; the device may include more or fewer components than shown, combine certain components, or use different components, and may, for example, also include input-output devices, network access devices, and the like.
The lens 60 may be classified by field of view into wide-angle, telephoto, zoom, variable-focus, and pinhole lenses, among others; by shape into spherical and aspherical lenses; and by focal length into short-focal-length wide-angle lenses, medium-focal-length standard lenses, long-focal-length telephoto lenses, zoom lenses, and the like. Besides the above lens types, any other conventional lens classification may be used.
The processor 61 may be a central processing unit (Central Processing Unit, CPU); it may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 62 may, in some embodiments, be an internal storage unit of the camera device 6, such as a hard disk or memory of the camera device 6. In other embodiments, the memory 62 may be an external storage device of the camera device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the camera device 6. Further, the memory 62 may include both an internal storage unit and an external storage device of the camera device 6. The memory 62 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 62 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that may be performed in the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal apparatus, a recording medium, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a U-disk, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A camera occlusion detection method, the method comprising:
acquiring an image to be detected;
acquiring the number of feature points and contour information in the image to be detected, wherein the contour information comprises total contour area and contour number;
and if the number of the feature points is greater than a preset first threshold, the number of the contours is less than a preset second threshold, and the total contour area is greater than a preset third threshold, judging that the camera is occluded.
2. The method of claim 1, further comprising, prior to determining that the camera is occluded:
acquiring pixel values of feature points, and acquiring total pixel values according to the pixel values of the feature points, wherein the total pixel values are the sum of the pixel values of all the feature points;
obtaining a pixel mean value, wherein the pixel mean value is a quotient obtained by dividing a total pixel value by the number of the feature points;
if the pixel mean value is larger than a preset fourth threshold value, acquiring a face recognition result of the image to be detected;
if face features exist in the face recognition result, judging that the camera is not occluded; otherwise, judging that the camera is occluded.
3. The method of claim 1, wherein obtaining the number of feature points and contour information in the image to be detected comprises:
obtaining the image to be detected, and scaling the image to be detected to a preset target size;
performing color space conversion on the scaled image to be detected to convert it into a gray-scale image;
acquiring all pixel points and pixel values corresponding to the pixel points from the gray level image;
acquiring the number of the feature points according to all the pixel points and the pixel values corresponding to the pixel points in combination with a preset binarization threshold;
and according to all the pixel points and the pixel values corresponding to the pixel points, combining a preset binarization threshold value, and performing binarization processing on the gray level image to obtain contour information.
4. The method of claim 3, wherein obtaining the number of feature points according to all the pixel points and the pixel values corresponding to the pixel points in combination with a preset binarization threshold comprises:
if the pixel value corresponding to any pixel point among all the pixel points is greater than the preset binarization threshold, that pixel point is a feature point;
and acquiring the number of the feature points.
5. The method of claim 3, wherein performing binarization processing on the gray-scale image according to the pixel points and the pixel values corresponding to the pixel points and in combination with a preset binarization threshold value to obtain contour information includes:
the binarization processing is to set any pixel point with a pixel value larger than the preset binarization threshold value in the gray image as a white pixel point, and otherwise set the pixel point as a black pixel point;
acquiring an object contour by combining the black pixel points through a contour searching method;
and acquiring contour information according to the object contour.
6. The method of claim 5, wherein obtaining the object contour by the contour finding method in combination with the black pixel points comprises:
sequentially searching the black pixel points through the contour searching method;
the object outline is a sequential connection set of the black pixel points.
7. The method of claim 5, wherein obtaining the profile information from the object profile comprises:
retaining the outermost peripheral contours while excluding small contours and contained contours, wherein the small contours are contours smaller than a preset contour size, and the contained contours are contours contained within the outermost peripheral contours;
and acquiring contour information according to the outermost contour.
8. A camera occlusion detection device, comprising:
the image acquisition module is used for acquiring an image to be detected;
the feature point and contour acquisition module is used for acquiring the number of feature points and contour information in the image to be detected, wherein the contour information comprises total contour area and contour number;
and the judgment module is used for judging that the camera is occluded if the number of the feature points is greater than a preset first threshold, the number of the contours is less than a preset second threshold, and the total contour area is greater than a preset third threshold.
9. A camera device comprising a lens, a processor, a memory and a computer program stored in the memory and executable on the processor, characterized in that the lens is used for acquiring an image to be detected; the processor, when executing the computer program, implements the method of any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 7.
CN202211659053.5A 2022-12-22 2022-12-22 Camera shielding detection method and device, camera equipment and storage medium Pending CN116249015A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211659053.5A CN116249015A (en) 2022-12-22 2022-12-22 Camera shielding detection method and device, camera equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116249015A true CN116249015A (en) 2023-06-09

Family

ID=86623284

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination