CN111382726A - Engineering operation detection method and related device - Google Patents

Info

Publication number
CN111382726A
Authority
CN
China
Prior art keywords
image
target
pixel value
pixel
pixel point
Prior art date
Legal status
Granted
Application number
CN202010251987.XA
Other languages
Chinese (zh)
Other versions
CN111382726B (en)
Inventor
孙玉玮
马青山
陈宇
朱建宝
施烨
俞鑫春
邓伟超
叶超
郭伟
任馨怡
王枫
Current Assignee
Nantong Huayuan Technology Development Co ltd
Zhejiang Dahua Technology Co Ltd
Nantong Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Nantong Huayuan Technology Development Co ltd
Zhejiang Dahua Technology Co Ltd
Nantong Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Nantong Huayuan Technology Development Co ltd, Zhejiang Dahua Technology Co Ltd, Nantong Power Supply Co of State Grid Jiangsu Electric Power Co Ltd filed Critical Nantong Huayuan Technology Development Co ltd
Priority to CN202010251987.XA priority Critical patent/CN111382726B/en
Publication of CN111382726A publication Critical patent/CN111382726A/en
Application granted granted Critical
Publication of CN111382726B publication Critical patent/CN111382726B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an engineering operation detection method and a related device. The engineering operation detection method includes: acquiring an original image, captured by a camera device, of a work site, the original image containing a preset detection area; performing target detection on the original image to acquire a target area corresponding to a target object in the original image, the target object being used to provide a warning; and determining, based on the positional relationship between the preset detection area and the target area, whether the work site meets the operation specification. With this scheme, the quality of engineering operation detection can be improved.

Description

Engineering operation detection method and related device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an engineering work detection method and a related apparatus.
Background
In engineering work, warning objects such as warning signs are usually placed near the dangerous areas of a work site to alert operators and prevent them from entering dangerous areas by mistake, thereby ensuring the safety of the work. Taking electric power maintenance as an example, due to factors such as incomplete five-prevention functions of switchgear or lapses in operator attention, dangerous situations may occur, such as operators mistakenly entering a live bay or mistakenly opening or closing switches. Therefore, during switching or maintenance operations, a red cloth mantle (a red warning curtain) is generally used to alert operators, so that they can clearly distinguish a de-energized cabinet under maintenance from an adjacent live cabinet that is not under maintenance. Warning objects such as the red cloth mantle and warning signs thus play an important role in ensuring operation safety.
At present, whether warning objects are correctly deployed is still determined by manual inspection, which is inefficient. In addition, because of constraints such as limited staffing and inspector attention, manual inspection inevitably leads to oversights. Both factors reduce the quality of engineering operation detection. How to improve the quality of engineering operation detection is therefore an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the application is to provide an engineering operation detection method and a related device, which can improve the engineering operation detection quality.
In order to solve the above problem, a first aspect of the present application provides an engineering operation detection method, including: acquiring an original image, captured by a camera device, of a work site, the original image containing a preset detection area; performing target detection on the original image to acquire a target area corresponding to a target object in the original image, the target object being used to provide a warning; and determining, based on the positional relationship between the preset detection area and the target area, whether the work site meets the operation specification.
In order to solve the above problem, a second aspect of the present application provides an engineering operation detection apparatus, which includes a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the engineering operation detection method of the first aspect.
In order to solve the above problem, a third aspect of the present application provides a storage device storing program instructions executable by a processor, the program instructions being for implementing the engineering work detection method of the first aspect.
According to the above scheme, an original image, captured by a camera device, of the work site is acquired, the original image containing a preset detection area; target detection is performed on the original image to acquire the target area corresponding to the target object used to provide a warning; and whether the work site meets the operation specification is determined based on the positional relationship between the preset detection area and the target area. Compliance with the operation specification can therefore be detected directly from images captured at the work site, without manual inspection, which improves detection efficiency, reduces the probability of omission, and thereby improves the quality of engineering operation detection.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of the engineering operation detection method of the present application;
FIG. 2 is a flowchart illustrating an embodiment of step S12 in FIG. 1;
FIG. 3 is a schematic diagram of an embodiment of a process for obtaining the first integral map of FIG. 2;
FIG. 4 is a schematic diagram of one embodiment of the morphological processing of FIG. 2;
FIG. 5 is a flowchart illustrating an embodiment of step S123 in FIG. 2;
FIG. 6 is a block diagram of an embodiment of the engineering operation detection apparatus of the present application;
FIG. 7 is a block diagram of another embodiment of the engineering operation detection apparatus of the present application;
FIG. 8 is a block diagram of an embodiment of a storage device according to the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of the engineering operation detection method of the present application. Specifically, the method may include the following steps:
step S11: and acquiring an original image shot by the camera device on the operation site, wherein the original image comprises a preset detection area.
In this embodiment, the image pickup device may include a monitoring camera such as a dome camera or a compact fixed camera, and this embodiment is not specifically limited herein. In practical applications, a work site may contain several areas that need to be detected. In this case, so that the image pickup device can automatically shoot all of these areas, the pose parameters set by the user for the device can be obtained and several preset positions configured, so that the device shoots the work site from these preset positions and covers the whole site. Taking electric power maintenance as an example, the image pickup device may be installed at the northeast corner of the power machine room; during detection, the southeast, northwest, and southwest regions of the room need to be shot. Three preset positions can therefore be configured so that, during engineering operation detection, the device shoots toward the southeast, southwest, and northwest regions of the room in turn, thereby covering the work site.
In this embodiment, the preset detection area is an area, preset by the user, that needs to be checked during engineering operation. Specifically, it may be set as a rectangular area, whose coordinates in the image the user specifies when configuring it. In a specific implementation scenario, when the image pickup device has multiple preset positions, multiple preset detection areas may likewise be configured, in one-to-one correspondence with the preset positions. Again taking electric power maintenance as an example, when the image pickup device is provided with the three preset positions above, so that during engineering operation detection it shoots toward the southeast, southwest, and northwest regions of the power machine room, a live (electrified) area can be configured for each of the three preset positions. Whether the power maintenance site meets the operation specification can then be determined based on the positional relationship between the subsequently detected target object and the preconfigured live area.
In addition, similar settings can be performed in other engineering applications such as building engineering, communication engineering, etc., and this embodiment is not illustrated here.
Step S12: and carrying out target detection on the original image to obtain a target area corresponding to the target object in the original image.
In this embodiment, the target object is used for realizing warning, and taking electric power overhaul as an example, the target object may be a red cloth mantle. In other engineering works, the target object can also be other warning objects, for example, in a building project, the target object can also be a warning line and the like; in the communication engineering, the target object may also be a warning slogan, and the like, which is not illustrated herein.
In this embodiment, target detection may be performed with a neural network. For example, a preset neural network is trained with training images annotated with the target object, and the trained network is then used to detect the original image and obtain the target area corresponding to the target object. Alternatively, target detection may use traditional image analysis: the color features of each pixel point in the original image are analyzed to find pixel points similar in color to the target object, the resulting pixel points are denoised, and the minimum bounding rectangle of the remaining pixel points is taken as the target area corresponding to the target object. This embodiment is not specifically limited herein.
Step S13: and determining whether the operation site meets the operation specification or not based on the position relation between the preset detection area and the target area.
In an implementation scenario, when an overlap area exists between the preset detection area and the target area, it can be considered that a target object is arranged in the preset detection area, and it is determined that the operation site meets the operation specification, otherwise, it is determined that the operation site does not meet the operation specification.
In another implementation scenario, when the preset detection area completely includes the target area, it is determined that the target object is disposed in the preset detection area, and it is determined that the operation site meets the operation specification, otherwise, it is determined that the operation site does not meet the operation specification.
In another implementation scenario, when the intersection over union (IoU) between the preset detection area and the target area is greater than a preset IoU threshold (e.g., 0.5), it is determined that the target object is disposed in the preset detection area and that the operation site meets the operation specification; otherwise, it is determined that the operation site does not meet the operation specification.
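The IoU decision rule above can be sketched in a few lines of Python; this is a minimal illustration, and the rectangle coordinates in the usage example are hypothetical rather than taken from the patent:

```python
def rect_iou(box_a, box_b):
    """Intersection over union of two axis-aligned rectangles given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical coordinates for a preset detection area and a detected target area.
detection_area = (100, 100, 300, 300)
target_area = (150, 150, 350, 350)
compliant = rect_iou(detection_area, target_area) > 0.5
```

With the example boxes the IoU is 22500 / 57500, roughly 0.39, below the 0.5 threshold, so this site would be judged non-compliant under the rule above.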
In a specific implementation scenario, when it is determined that the operation site does not meet the operation specification, preset alarm information may be output. In addition, when the work site is determined to meet the work specification, preset safety information may be output, or no information may be output. The preset alarm information and the preset safety information may include, but are not limited to: text, sound, image, etc., and the embodiment is not illustrated here. In addition, when the target detection is carried out on the original image and the target area corresponding to the target object is not detected, the preset alarm information can be directly output, so that the warning that the operation site does not conform to the operation specification is realized.
According to the above scheme, an original image, captured by a camera device, of the work site is acquired, the original image containing a preset detection area; target detection is performed on the original image to acquire the target area corresponding to the target object used to provide a warning; and whether the work site meets the operation specification is determined based on the positional relationship between the preset detection area and the target area. Compliance with the operation specification can therefore be detected directly from images captured at the work site, without manual inspection, which improves detection efficiency, reduces the probability of omission, and thereby improves the quality of engineering operation detection.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an embodiment of step S12 in fig. 1. The method specifically comprises the following steps:
step S121: and performing threshold segmentation on the original image by using a preset threshold related to the color characteristics of the target object to obtain an image to be detected.
In a specific implementation scenario, taking electric power maintenance as an example, if the target object is a red cloth mantle, a preset threshold related to the color features of the red cloth mantle can be used to threshold-segment the original image and obtain the image to be detected. Other application scenarios can be handled analogously and are not enumerated here.
In addition, since distances in the RGB (Red, Green, Blue) color space correlate poorly with human color perception, this embodiment may further map the color space of the original image to the HSV (Hue, Saturation, Value) color space before threshold segmentation, so that color distances better match human visual characteristics. In a specific implementation scenario, when the color space of the original image is the RGB color space, it may be mapped to the HSV color space by the following formulas:
V = max(R, G, B)

S = (V − min(R, G, B)) / V if V ≠ 0, and S = 0 otherwise

H = 60 × (G − B) / (V − min(R, G, B)) if V = R; H = 120 + 60 × (B − R) / (V − min(R, G, B)) if V = G; H = 240 + 60 × (R − G) / (V − min(R, G, B)) if V = B

H = H + 360 if H < 0

In the above formulas, (R, G, B) represents the R channel pixel value, G channel pixel value, and B channel pixel value of a pixel point in the original image, normalized to [0, 1].
In addition, (H, S, V) may further be mapped to the interval 0 to 255, which is not described again here. In a specific implementation scenario, the preset threshold may include a preset H channel threshold interval, a preset S channel threshold interval, and a preset V channel threshold interval; in other implementation scenarios, other values may be used, and this embodiment is not specifically limited herein. Using the preset threshold, it can be judged in turn whether each pixel point of the original image mapped to the HSV color space satisfies the following conditions: the H channel pixel value lies within the preset H channel threshold interval, the S channel pixel value lies within the preset S channel threshold interval, and the V channel pixel value lies within the preset V channel threshold interval. If so, the pixel value of the corresponding pixel point in the image to be detected is set to a first pixel value (for example, 255); otherwise, it is set to a second pixel value (for example, 0). That is, in the image to be detected, pixel points matching the color features of the target object take the first pixel value, and pixel points unrelated to the color features of the target object take the second pixel value.
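As a concrete sketch, the conversion and threshold test described above can be written in pure Python using the standard RGB-to-HSV formulas. The function names and the interval bounds in the usage example are illustrative assumptions, since the patent does not disclose its numeric thresholds:

```python
def rgb_to_hsv(r, g, b):
    """Convert RGB in [0, 1] to (H in degrees, S, V) via the standard piecewise formulas."""
    v = max(r, g, b)
    m = min(r, g, b)
    c = v - m
    s = c / v if v > 0 else 0.0
    if c == 0:
        h = 0.0                          # hue is undefined for grays; use 0 by convention
    elif v == r:
        h = 60.0 * (g - b) / c
    elif v == g:
        h = 120.0 + 60.0 * (b - r) / c
    else:
        h = 240.0 + 60.0 * (r - g) / c
    if h < 0:
        h += 360.0
    return h, s, v

def threshold_image(rgb_rows, h_lo, h_hi, s_lo, v_lo):
    """Per-pixel threshold segmentation: 255 inside the HSV intervals, else 0.
    Red hues wrap past 360 degrees, so h_lo > h_hi selects the wrapped interval."""
    out = []
    for row in rgb_rows:
        out_row = []
        for (r, g, b) in row:
            h, s, v = rgb_to_hsv(r, g, b)
            in_h = (h >= h_lo or h <= h_hi) if h_lo > h_hi else (h_lo <= h <= h_hi)
            out_row.append(255 if (in_h and s >= s_lo and v >= v_lo) else 0)
        out.append(out_row)
    return out

# A 2 x 2 image: pure red, pure green, mid gray, pure blue (RGB in [0, 1]).
img = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
       [(0.5, 0.5, 0.5), (0.0, 0.0, 1.0)]]
mask = threshold_image(img, h_lo=340, h_hi=20, s_lo=0.5, v_lo=0.5)  # red-ish hues only
```

Only the red pixel falls inside all three intervals, so the resulting mask has the first pixel value (255) at that position and the second pixel value (0) everywhere else.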
Step S122: and counting the sum of the pixel values of all the pixel points in the upper left area of each pixel point in the image to be detected, and taking the sum as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating an embodiment of the process for obtaining the first integral map in fig. 2. As shown in FIG. 3, P1 is the image to be detected, P1(i, j) is the pixel value of the pixel point located at row i, column j of the image to be detected, Q1 is the first integral image, and Q1(i, j) is the pixel value of the pixel point located at row i, column j of the first integral image. The first integral image may be obtained from the image to be detected as follows:
Q1(i, j) = Σ_{i' ≤ i, j' ≤ j} P1(i', j')
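This prefix-sum construction can be sketched in pure Python (a minimal illustration; the function name is not from the patent):

```python
def integral_image(img):
    """Build the integral image: q[i][j] = sum of img[i'][j'] over all i' <= i, j' <= j."""
    h, w = len(img), len(img[0])
    q = [[0] * w for _ in range(h)]
    for i in range(h):
        row_sum = 0                      # running sum of the current row
        for j in range(w):
            row_sum += img[i][j]
            q[i][j] = row_sum + (q[i - 1][j] if i > 0 else 0)
    return q
```

For example, `integral_image([[1, 2], [3, 4]])` yields `[[1, 3], [4, 10]]`, the bottom-right entry being the sum over the whole image.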
step S123: and performing morphological processing on the image to be detected by using a preset structural element and the first integral image to obtain a target area corresponding to the target object in the original image.
The preset structural element in this embodiment is a two-dimensional matrix whose elements are all 1; its size may be 15 × 15, 13 × 13, 11 × 11, or 9 × 9, which is not limited in this embodiment.
In one implementation scenario, the image to be processed may be morphologically processed directly with the preset structural element. For example, when erosion morphological processing is applied, the anchor (i.e., the center point) of the preset structural element traverses each target pixel point whose pixel value is the first pixel value; when all other pixel points within the size region of the structural element also have the first pixel value, the target pixel point keeps its value, and otherwise its value is set to the second pixel value. Referring to fig. 4, fig. 4 is a schematic diagram of an embodiment of the morphological processing in fig. 2, specifically of erosion with the preset structural element: the preset structural element has size k × k and the image to be detected has size w × h, and the image to be detected after erosion is shown in fig. 4. As fig. 4 illustrates, erosion eliminates burrs in the image to be processed. However, performing erosion directly with the preset structural element has algorithm complexity O(w × h × k × k), which implies a heavy processing load and high resource overhead.
In another implementation scenario, in order to reduce algorithm complexity, processing load, and resource overhead, the first integral image may be used to perform morphological processing on the image to be processed. Specifically, referring to fig. 5 in combination, fig. 5 is a schematic flowchart of an embodiment of step S123 in fig. 2, which may specifically include:
step S1231: and carrying out corrosion morphological processing on the image to be detected by utilizing a preset structural element and the first integral image to obtain a first processed image.
Specifically, a first target pixel point whose pixel value is the first pixel value is determined in the image to be detected, and the corresponding second target pixel point in the first integral image is located. Based on the second target pixel point and the size of the preset structural element, third target pixel points in the first integral image are determined, and their pixel values are used to calculate the sum of the pixel values of all pixel points within the size range around the first target pixel point. If the quotient of this sum and the size equals the first pixel value, the first target pixel point can be considered not to be a burr/noise point, and its pixel value is kept as the first pixel value; otherwise, it can be considered a burr/noise point, and its pixel value is reset to the second pixel value.
In one embodiment, with continued reference to FIG. 3, pixel point (i, j) of the image to be detected P1 is a first target pixel point; correspondingly, pixel point (i, j) of the first integral image Q1 is the second target pixel point, and the thick black frame marks the size (k × k) region of the preset structural element. To calculate the sum of the pixel values of P1 within this region, the first integral image Q1 can be used. During the calculation, the third target pixel points in Q1 are determined from the size of the preset structural element. Writing r = ⌊k/2⌋, the pixel point at the lower-right corner of the size region, (i + r, j + r), is first determined as a third target pixel point; its pixel value Q1(i + r, j + r) is the sum of the pixel values of all pixel points in its upper-left region. Three further third target pixel points are then determined: (i − r − 1, j + r), (i + r, j − r − 1), and (i − r − 1, j − r − 1). Therefore, the sum of the pixel values of all pixel points within the size (k × k) region around the first target pixel point (i, j) can be expressed as:

S = Q1(i + r, j + r) − Q1(i − r − 1, j + r) − Q1(i + r, j − r − 1) + Q1(i − r − 1, j − r − 1)

After calculating the sum S of the pixel values of all pixel points within the size (k × k) region around the first target pixel point (i, j), the quotient of the sum and the size, S / (k × k), can be further calculated. If the quotient equals the first pixel value, the pixel values of all pixel points of P1 within the size (k × k) region of the first target pixel point (i, j) are the first pixel value, so (i, j) can be considered not to be a burr/noise point, and its pixel value in P1 is kept as the first pixel value. Otherwise, (i, j) is a burr/noise point, and its pixel value in P1 is reset to the second pixel value.
By performing the erosion morphological processing with the integral image, the algorithm complexity can be reduced to O(w × h), a significant reduction.
Step S1232: and counting the sum of the pixel values of all the pixels in the upper left region of each pixel in the first processed image, and taking the sum as the pixel value of the corresponding pixel in the second integral image corresponding to the first processed image.
In an implementation scenario, after the image to be processed has undergone erosion morphological processing, dilation morphological processing may be applied next. Similarly to the steps above, the sum of the pixel values of all pixel points in the upper-left region of each pixel point of the first processed image is counted as the pixel value of the corresponding pixel point of the second integral image. For details, reference may be made to the related steps above, which are not repeated here.
Step S1233: and performing expansion morphological processing on the first processed image by using a preset structural element and the second integral image to obtain a second processed image.
Specifically, each pixel point of the first processed image is taken in turn as a fourth target pixel point, and the corresponding fifth target pixel point in the second integral image is determined. Based on the fifth target pixel point and the size of the preset structural element, sixth target pixel points in the second integral image are determined, and their pixel values are used to calculate the sum of the pixel values of all pixel points within the size range around the fourth target pixel point. If the quotient of this sum and the size equals the second pixel value, the pixel value of the fourth target pixel point is set to the second pixel value; otherwise, it is set to the first pixel value. The specific manner of determining the sixth target pixel points and of calculating the sum may refer to step S1231 of this embodiment and is not repeated here.
Different from the erosion morphological processing, the purpose of the dilation morphological processing is to expand edges and fill holes. Therefore, as long as a pixel point with the first pixel value exists within the size region of the preset structural element (that is, as long as the quotient of the sum of the pixel values and the size is not the second pixel value), the pixel value of the fourth target pixel point is set to the first pixel value; otherwise, if no such pixel point exists (that is, the quotient equals the second pixel value), it is set to the second pixel value. In this way, edges are significantly expanded and holes in them are filled.
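Dilation admits the same integral-image trick. The self-contained sketch below assumes the second pixel value is 0, so a nonzero window sum means at least one first-value pixel is present; the names are illustrative:

```python
def dilate(img, k, first=255, second=0):
    """Set a pixel to the first value when any pixel of its k x k window has it,
    detected via the window's box sum from an integral image (assumes second == 0)."""
    h, w, r = len(img), len(img[0]), k // 2
    # Integral image: q[i][j] = sum over the upper-left region i' <= i, j' <= j.
    q = [[0] * w for _ in range(h)]
    for i in range(h):
        s = 0
        for j in range(w):
            s += img[i][j]
            q[i][j] = s + (q[i - 1][j] if i > 0 else 0)

    def box_sum(i, j):
        i2, j2 = min(i + r, h - 1), min(j + r, w - 1)
        i1, j1 = i - r - 1, j - r - 1
        total = q[i2][j2]
        if i1 >= 0:
            total -= q[i1][j2]
        if j1 >= 0:
            total -= q[i2][j1]
        if i1 >= 0 and j1 >= 0:
            total += q[i1][j1]
        return total

    out = [[second] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Any first-value pixel in the window makes the sum nonzero (second == 0).
            out[i][j] = first if box_sum(i, j) != 0 else second
    return out
```

For example, dilating a single isolated 255-pixel with a 3 × 3 structural element grows it into a 3 × 3 block, illustrating the edge-expanding and hole-filling behavior described above.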
Step S1234: and determining the pixel value of each pixel point in the second processed image as a target connected domain of the first pixel value.
After the image to be processed has undergone erosion morphological processing and dilation morphological processing, the second processed image is obtained. At this point, if a pixel point of the second processed image and a neighboring pixel point both have the first pixel value, the two are assigned to the same connected domain; traversing all pixel points of the second processed image in this way yields at least one target connected domain.
Step S1235: and acquiring the minimum circumscribed rectangle of the target connected domain as a target area.
The minimum bounding rectangle of the target connected domain is taken as the target area corresponding to the target object.
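Steps S1234 and S1235 together can be sketched with a flood fill that tracks each connected domain's bounding box. This is a minimal illustration using 4-connectivity; the function name and tuple layout are not from the patent:

```python
from collections import deque

def target_regions(img, first=255):
    """Find 4-connected domains of first-value pixels and return each domain's
    minimum bounding rectangle as (top, left, bottom, right) in pixel indices."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    rects = []
    for si in range(h):
        for sj in range(w):
            if img[si][sj] != first or seen[si][sj]:
                continue
            # Flood-fill one connected domain, updating its bounding box as we go.
            top, left, bottom, right = si, sj, si, sj
            queue = deque([(si, sj)])
            seen[si][sj] = True
            while queue:
                i, j = queue.popleft()
                top, bottom = min(top, i), max(bottom, i)
                left, right = min(left, j), max(right, j)
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < h and 0 <= nj < w and img[ni][nj] == first and not seen[ni][nj]:
                        seen[ni][nj] = True
                        queue.append((ni, nj))
            rects.append((top, left, bottom, right))
    return rects
```

Each returned rectangle is a candidate target area whose positional relationship with the preset detection area can then be checked as in step S13.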
Different from the foregoing embodiment, threshold segmentation is performed on the original image using a preset threshold related to the color feature of the target object to obtain an image to be detected; the sum of the pixel values of all pixel points in the upper-left region of each pixel point in the image to be detected is counted and used as the pixel value of the corresponding pixel point in a first integral image corresponding to the image to be detected; and morphological processing is performed on the image to be detected using a preset structural element and the first integral image to obtain a target region corresponding to the target object in the original image. This can reduce algorithm complexity, shorten processing time, and speed up target detection, which facilitates detecting in real time whether the engineering operation is standard and raising a timely alarm when it is not.
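The first integral image described above, in which each entry holds the sum of the pixel values of all pixel points in the upper-left region of the corresponding pixel point, can be built in a single pass with a standard recurrence. A minimal sketch (the function name is an assumption):

```python
def integral_image(img):
    # ii[i][j] = sum of img over rows 0..i and cols 0..j (the upper-left
    # region including (i, j) itself), built in one pass with
    # ii[i][j] = img[i][j] + ii[i-1][j] + ii[i][j-1] - ii[i-1][j-1].
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ii[i][j] = (img[i][j]
                        + (ii[i - 1][j] if i > 0 else 0)
                        + (ii[i][j - 1] if j > 0 else 0)
                        - (ii[i - 1][j - 1] if i > 0 and j > 0 else 0))
    return ii
```

Once built, the sum over any rectangular region can be recovered from at most four entries, which is what the subsequent morphological steps exploit.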
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an embodiment of the engineering operation detection device 60 of the present application. The engineering operation detection device 60 comprises an acquisition module 61, a detection module 62 and a determination module 63. The acquisition module 61 is used for acquiring an original image shot by an image pickup device at an operation site, the original image comprising a preset detection area; the detection module 62 is used for performing target detection on the original image and acquiring a target area corresponding to a target object in the original image, the target object being used for realizing warning; and the determination module 63 is used for determining whether the operation site meets an operation specification based on the position relationship between the preset detection area and the target area.
According to the above scheme, an original image shot by the camera device at the operation site is obtained, the original image comprising a preset detection area; target detection is performed on the original image to obtain a target area corresponding to the target object used for realizing warning; and whether the operation site meets the operation specification is determined based on the position relationship between the preset detection area and the target area. In this way, whether the operation site meets the operation specification can be detected based on the original image shot by the camera device, without checking the operation site manually, so that detection efficiency can be improved, the probability of omission reduced, and the detection quality of engineering operation improved.
In some embodiments, the detection module 62 includes a threshold segmentation sub-module configured to perform threshold segmentation on the original image using a preset threshold related to the color feature of the target object to obtain an image to be detected; an integral statistics sub-module configured to count the sum of the pixel values of all pixel points in the upper-left region of each pixel point in the image to be detected as the pixel value of the corresponding pixel point in a first integral image corresponding to the image to be detected; and a morphology processing sub-module configured to perform morphological processing on the image to be detected using a preset structural element and the first integral image to obtain a target region corresponding to the target object in the original image.
Different from the foregoing embodiment, threshold segmentation is performed on the original image using a preset threshold related to the color feature of the target object to obtain an image to be detected; the sum of the pixel values of all pixel points in the upper-left region of each pixel point in the image to be detected is counted and used as the pixel value of the corresponding pixel point in a first integral image corresponding to the image to be detected; and morphological processing is performed on the image to be detected using a preset structural element and the first integral image to obtain a target region corresponding to the target object in the original image. This can reduce algorithm complexity, shorten processing time, and speed up target detection, which facilitates detecting in real time whether the engineering operation is standard and raising a timely alarm when it is not.
In some embodiments, the pixel value of a pixel point related to the color feature of the target object in the image to be detected is a first pixel value, and the pixel value of a pixel point unrelated to the color feature of the target object is a second pixel value. The morphology processing sub-module includes an erosion processing unit configured to perform erosion morphological processing on the image to be detected using the preset structural element and the first integral image to obtain a first processed image. The integral statistics sub-module is further configured to count the sum of the pixel values of all pixel points in the upper-left region of each pixel point in the first processed image as the pixel value of the corresponding pixel point in a second integral image corresponding to the first processed image. The morphology processing sub-module further includes a dilation processing unit configured to perform dilation morphological processing on the first processed image using the preset structural element and the second integral image to obtain a second processed image; a connected domain determining unit configured to determine a target connected domain in which the pixel value of each pixel point in the second processed image is the first pixel value; and a target area determining unit configured to obtain the minimum circumscribed rectangle of the target connected domain as the target area.
In some embodiments, the erosion processing unit is specifically configured to determine a first target pixel point whose pixel value in the image to be detected is the first pixel value, and a second target pixel point corresponding to the first target pixel point in the first integral image; determine a third target pixel point in the first integral image based on the second target pixel point and the size of the preset structural element; calculate the sum of the pixel values of all pixel points within the size range around the first target pixel point using the pixel value of the third target pixel point; keep the pixel value of the first target pixel point as the first pixel value if the quotient of the sum of the pixel values and the size is the first pixel value; and reset the pixel value of the first target pixel point to the second pixel value if the quotient is not the first pixel value.
In some embodiments, the dilation processing unit is specifically configured to sequentially use each pixel point in the first processed image as a fourth target pixel point and determine a fifth target pixel point corresponding to the fourth target pixel point in the second integral image; determine a sixth target pixel point in the second integral image based on the fifth target pixel point and the size of the preset structural element; calculate the sum of the pixel values of all pixel points within the size range around the fourth target pixel point using the pixel value of the sixth target pixel point; set the pixel value of the fourth target pixel point to the second pixel value if the quotient of the sum of the pixel values and the size is the second pixel value; and set it to the first pixel value if the quotient is not the second pixel value.
In some embodiments, the detection module 62 further comprises a color space mapping sub-module for mapping the color space of the original image to the HSV color space, and the preset threshold comprises a preset H-channel threshold interval, a preset S-channel threshold interval and a preset V-channel threshold interval. The threshold segmentation sub-module comprises a judgment unit for sequentially judging whether each pixel point in the original image mapped to the HSV color space satisfies the following conditions: the H-channel pixel value is within the preset H-channel threshold interval, the S-channel pixel value is within the preset S-channel threshold interval, and the V-channel pixel value is within the preset V-channel threshold interval. The threshold segmentation sub-module further comprises a pixel value setting unit for setting the pixel value of the corresponding pixel point in the image to be detected to the first pixel value when the judgment unit determines that the conditions are satisfied, and to the second pixel value when it determines that they are not.
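The judgment and pixel-value-setting units described above amount to a per-pixel interval test on the three HSV channels. A minimal sketch, assuming the first pixel value is 1 and the second is 0; the concrete threshold intervals used in the usage example below are illustrative, not the intervals a real deployment (e.g. for a warning cone's color) would use:

```python
def hsv_threshold(hsv_img, h_range, s_range, v_range, first=1, second=0):
    # For each pixel (h, s, v): mark it with the first pixel value only if
    # all three channel values fall inside their preset threshold intervals;
    # otherwise mark it with the second pixel value.
    out = []
    for row in hsv_img:
        out_row = []
        for hh, ss, vv in row:
            inside = (h_range[0] <= hh <= h_range[1]
                      and s_range[0] <= ss <= s_range[1]
                      and v_range[0] <= vv <= v_range[1])
            out_row.append(first if inside else second)
        out.append(out_row)
    return out
```

For example, `hsv_threshold(img, (0, 10), (100, 255), (100, 255))` keeps only strongly saturated, bright pixels with a low hue value, producing the binary image to be detected.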
In some embodiments, the determining module 63 includes a coincidence region determining submodule configured to determine whether a coincidence region exists between the preset detection region and the target region, and an operation determining submodule configured to determine that the operation site meets the operation specification when the coincidence region determining submodule determines that a coincidence region exists, and that the operation site does not meet the operation specification when it determines that no coincidence region exists.
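The coincidence-region test in the determining module 63 is an axis-aligned rectangle intersection check between the preset detection area and the detected target area(s). A minimal sketch (the rectangle format and function names are assumptions):

```python
def rects_overlap(a, b):
    # Axis-aligned rectangles given as (x0, y0, x1, y1); they have a
    # coincidence region iff neither lies entirely to one side of the other.
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def site_meets_spec(detection_area, target_boxes):
    # The operation site meets the specification if at least one detected
    # target (e.g. a warning object) overlaps the preset detection area.
    return any(rects_overlap(detection_area, t) for t in target_boxes)
```

This mirrors the rule stated in the text: a coincidence region means the warning object is placed where it should be, so the site is judged compliant.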
In some embodiments, the determination module 63 further includes an information output sub-module for outputting preset alarm information when the operation determining submodule determines that the operation site does not meet the operation specification, and for outputting preset safety information when it determines that the operation site meets the operation specification.
In some embodiments, the engineering work detection apparatus 60 further includes a setting module configured to obtain a pose parameter set by the user on the image capturing device, wherein the pose parameter is used to control the image capturing device to capture the work site.
Referring to fig. 7, fig. 7 is a schematic diagram of a framework of an embodiment of the engineering work detection apparatus 70 of the present application. The engineering work detection device 70 may include a memory 71 and a processor 72 coupled to each other; the processor 72 is configured to execute the program instructions stored in the memory 71 to implement the steps of any of the above-described embodiments of the engineering work detection method.
Specifically, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the above-described embodiments of the engineering operation detection method. The processor 72 may also be referred to as a CPU (central processing unit) and may be an integrated circuit chip having signal processing capability. The processor 72 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 72 may be jointly implemented by a plurality of integrated circuit chips.
In this embodiment, the processor 72 is configured to obtain an original image of the operation site captured by the imaging device, the original image including a preset detection area; to perform target detection on the original image and obtain a target area corresponding to a target object in the original image, the target object being used to implement warning; and to determine whether the operation site meets the operation specification based on the position relationship between the preset detection area and the target area.
According to the above scheme, an original image shot by the camera device at the operation site is obtained, the original image comprising a preset detection area; target detection is performed on the original image to obtain a target area corresponding to the target object used for realizing warning; and whether the operation site meets the operation specification is determined based on the position relationship between the preset detection area and the target area. In this way, whether the operation site meets the operation specification can be detected based on the original image shot by the camera device, without checking the operation site manually, so that detection efficiency can be improved, the probability of omission reduced, and the detection quality of engineering operation improved.
In some embodiments, the processor 72 is further configured to perform threshold segmentation on the original image using a preset threshold related to the color feature of the target object to obtain an image to be detected; to count the sum of the pixel values of all pixel points in the upper-left region of each pixel point in the image to be detected as the pixel value of the corresponding pixel point in a first integral image corresponding to the image to be detected; and to perform morphological processing on the image to be detected using a preset structural element and the first integral image to obtain a target region corresponding to the target object in the original image.
Different from the foregoing embodiment, threshold segmentation is performed on the original image using a preset threshold related to the color feature of the target object to obtain an image to be detected; the sum of the pixel values of all pixel points in the upper-left region of each pixel point in the image to be detected is counted and used as the pixel value of the corresponding pixel point in a first integral image corresponding to the image to be detected; and morphological processing is performed on the image to be detected using a preset structural element and the first integral image to obtain a target region corresponding to the target object in the original image. This can reduce algorithm complexity, shorten processing time, and speed up target detection, which facilitates detecting in real time whether the engineering operation is standard and raising a timely alarm when it is not.
In some embodiments, the pixel value of a pixel point related to the color feature of the target object in the image to be detected is a first pixel value, and the pixel value of a pixel point unrelated to the color feature of the target object is a second pixel value. The processor 72 is further configured to perform erosion morphological processing on the image to be detected using a preset structural element and a first integral image to obtain a first processed image; to count the sum of the pixel values of all pixel points in the upper-left region of each pixel point in the first processed image as the pixel value of the corresponding pixel point in a second integral image corresponding to the first processed image; to perform dilation morphological processing on the first processed image using the preset structural element and the second integral image to obtain a second processed image; to determine a target connected domain in which the pixel value of each pixel point in the second processed image is the first pixel value; and to obtain the minimum circumscribed rectangle of the target connected domain as the target region.
In some embodiments, the processor 72 is further configured to determine a first target pixel point whose pixel value in the image to be detected is the first pixel value, and a second target pixel point corresponding to the first target pixel point in the first integral image; to determine a third target pixel point in the first integral image based on the second target pixel point and the size of the preset structural element; to calculate the sum of the pixel values of all pixel points within the size range around the first target pixel point using the pixel value of the third target pixel point; to keep the pixel value of the first target pixel point as the first pixel value when the quotient of the sum of the pixel values and the size is the first pixel value; and to reset the pixel value of the first target pixel point to the second pixel value when the quotient is not the first pixel value.
In some embodiments, the processor 72 is further configured to sequentially use each pixel point in the first processed image as a fourth target pixel point and determine a fifth target pixel point corresponding to the fourth target pixel point in the second integral image; to determine a sixth target pixel point in the second integral image based on the fifth target pixel point and the size of the preset structural element; to calculate the sum of the pixel values of all pixel points within the size range around the fourth target pixel point using the pixel value of the sixth target pixel point; to set the pixel value of the fourth target pixel point to the second pixel value when the quotient of the sum of the pixel values and the size is the second pixel value; and to set it to the first pixel value when the quotient is not the second pixel value.
In some embodiments, the processor 72 is further configured to map the color space of the original image to the HSV color space, and the preset threshold includes a preset H-channel threshold interval, a preset S-channel threshold interval, and a preset V-channel threshold interval. The processor 72 is further configured to sequentially determine whether each pixel point in the original image mapped to the HSV color space satisfies the following conditions: the H-channel pixel value is within the preset H-channel threshold interval, the S-channel pixel value is within the preset S-channel threshold interval, and the V-channel pixel value is within the preset V-channel threshold interval; to set the pixel value of the corresponding pixel point in the image to be detected to the first pixel value when the conditions are satisfied; and to set it to the second pixel value when they are not.
In some embodiments, the processor 72 is further configured to determine whether a coincidence region exists between the preset detection area and the target area; to determine that the operation site meets the operation specification if it does; and to determine that the operation site does not meet the operation specification if it does not.
In some embodiments, the engineering work detection device 70 also includes a human-machine interaction circuit for outputting a preset alarm message when the processor 72 determines that the work site does not meet the work specification. The human-computer interaction circuit is also configured to output preset safety information when the processor 72 determines that the job site meets the job specification.
In some embodiments, the processor 72 is further configured to control the human-computer interaction circuit to acquire pose parameters set by the user on the image capture device, wherein the pose parameters are used for controlling the image capture device to capture the work site.
In some embodiments, the engineering work detection device 70 further includes an image pickup device for picking up an original image of the work site.
Referring to fig. 8, fig. 8 is a schematic diagram of a framework of an embodiment of a storage device 80 of the present application. The storage device 80 stores program instructions 81 executable by a processor, the program instructions 81 being used to implement the steps of any of the above-described embodiments of the engineering operation detection method.
According to the scheme, whether the operation site meets the operation specification or not can be detected based on the original image shot by the camera device at the operation site, the operation site does not need to be checked manually, so that the detection efficiency can be improved, the probability of careless omission is reduced, and the detection quality of engineering operation can be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and device may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into modules or units is merely a logical division, and an actual implementation may use another division; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection between devices or units through some interfaces, and may be electrical, mechanical or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. An engineering operation detection method is characterized by comprising the following steps:
acquiring an original image shot by a camera device on a working site, wherein the original image comprises a preset detection area;
performing target detection on the original image, and acquiring a target area corresponding to a target object in the original image, wherein the target object is used for realizing warning;
and determining whether the operation site meets the operation specification or not based on the position relation between the preset detection area and the target area.
2. The engineering operation detection method according to claim 1, wherein the performing target detection on the original image and acquiring the target area corresponding to the target object in the original image comprises:
performing threshold segmentation on the original image by using a preset threshold related to the color feature of the target object to obtain an image to be detected;
counting the sum of pixel values of all pixel points in the upper left area of each pixel point in the image to be detected, and taking the sum as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected;
and performing morphological processing on the image to be detected by using a preset structural element and the first integral image to obtain a target area corresponding to the target object in the original image.
3. The engineering operation detection method according to claim 2, wherein the pixel value of a pixel point related to the color feature of the target object in the image to be detected is a first pixel value, and the pixel value of a pixel point unrelated to the color feature of the target object is a second pixel value;
the morphological processing of the image to be detected by using a preset structural element and the first integral image to obtain a target area corresponding to the target object in the original image comprises the following steps:
performing erosion morphological processing on the image to be detected by using the preset structural element and the first integral image to obtain a first processed image;
counting the sum of pixel values of all pixel points in the upper left area of each pixel point in the first processed image, and taking the sum as the pixel value of the corresponding pixel point in the second integral image corresponding to the first processed image;
performing dilation morphological processing on the first processed image by using the preset structural element and the second integral image to obtain a second processed image;
determining a target connected domain in which the pixel value of each pixel point in the second processed image is the first pixel value;
and acquiring the minimum circumscribed rectangle of the target connected domain as the target area.
4. The engineering operation detection method according to claim 3, wherein the performing erosion morphological processing on the image to be detected by using the preset structural element and the first integral image to obtain a first processed image comprises:
determining a first target pixel point of which the pixel value in the image to be detected is the first pixel value, and determining a second target pixel point corresponding to the first target pixel point in the first integral image;
determining a third target pixel point in the first integral image based on the second target pixel point and the size of the preset structural element;
calculating the sum of the pixel values of all the pixels in the size range around the first target pixel point by using the pixel value of the third target pixel point;
if the quotient of the sum of the pixel values and the size is the first pixel value, keeping the pixel value of the first target pixel point as the first pixel value;
and if the quotient of the sum of the pixel values and the size is not the first pixel value, resetting the pixel value of the first target pixel point to the second pixel value.
5. The engineering operation detection method according to claim 3, wherein the performing dilation morphological processing on the first processed image by using the preset structural element and the second integral image to obtain a second processed image comprises:
sequentially taking each pixel point in the first processed image as a fourth target pixel point, and determining a fifth target pixel point corresponding to the fourth target pixel point in the second integral image;
determining a sixth target pixel point in the second integral image based on the fifth target pixel point and the size of the preset structural element;
calculating the sum of the pixel values of all the pixels in the size range around the fourth target pixel point by using the pixel value of the sixth target pixel point;
setting the pixel value of the fourth target pixel point to the second pixel value if the quotient of the sum of the pixel values and the size is the second pixel value;
and if the quotient of the sum of the pixel values and the size is not the second pixel value, setting the pixel value of the fourth target pixel point as the first pixel value.
6. The engineering operation detection method according to claim 2, wherein before the performing threshold segmentation on the original image by using the preset threshold related to the color feature of the target object to obtain the image to be detected, the method further comprises:
mapping the color space of the original image to an HSV color space;
the preset threshold includes: presetting an H channel threshold interval, an S channel threshold interval and a V channel threshold interval; the threshold segmentation is carried out on the original image by using a preset threshold related to the color feature of the target object to obtain an image to be detected, and the method comprises the following steps:
sequentially judging whether each pixel point in the original image mapped to the HSV color space satisfies the following conditions: the H-channel pixel value is within the preset H-channel threshold interval, the S-channel pixel value is within the preset S-channel threshold interval, and the V-channel pixel value is within the preset V-channel threshold interval;
if so, setting the pixel value of the corresponding pixel point in the image to be detected as a first pixel value;
and if not, setting the pixel value of the corresponding pixel point in the image to be detected as a second pixel value.
7. The engineering operation detection method according to claim 1, wherein the determining whether the operation site meets the operation specification based on the position relationship between the preset detection area and the target area comprises:
judging whether a coincidence region exists between the preset detection region and the target region;
if so, determining that the operation site meets the operation specification;
if not, determining that the operation site does not meet the operation specification.
8. The engineering operation detection method according to claim 7, wherein after the determining that the operation site does not meet the operation specification, the method further comprises:
outputting preset alarm information;
and/or, after determining that the job site meets a job specification, the method further comprises:
outputting preset safety information, or outputting no information.
9. The engineering operation detection method according to claim 1, wherein before the acquiring of the original image shot of the operation site by the camera device, the method further comprises:
and acquiring pose parameters set by a user on the camera device, wherein the pose parameters are used for controlling the camera device to shoot the operation site.
10. An engineering operation detection device, comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the engineering operation detection method according to any one of claims 1 to 9.
11. The engineering operation detection apparatus of claim 10, further comprising a camera device configured to capture an original image of the operation site.
12. A storage device storing program instructions executable by a processor to implement the engineering operation detection method according to any one of claims 1 to 9.
CN202010251987.XA 2020-04-01 2020-04-01 Engineering operation detection method and related device Active CN111382726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010251987.XA CN111382726B (en) 2020-04-01 2020-04-01 Engineering operation detection method and related device

Publications (2)

Publication Number Publication Date
CN111382726A true CN111382726A (en) 2020-07-07
CN111382726B CN111382726B (en) 2023-09-01

Family

ID=71217739


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070030375A1 (en) * 2005-08-05 2007-02-08 Canon Kabushiki Kaisha Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
JP2016225720A (en) * 2015-05-28 2016-12-28 住友電気工業株式会社 Monitoring device, monitoring method, and monitoring program
CN106446926A (en) * 2016-07-12 2017-02-22 重庆大学 Transformer station worker helmet wear detection method based on video analysis
CN107895138A (en) * 2017-10-13 2018-04-10 西安艾润物联网技术服务有限责任公司 Spatial obstacle object detecting method, device and computer-readable recording medium
CN108345819A (en) * 2017-01-23 2018-07-31 杭州海康威视数字技术股份有限公司 A kind of method and apparatus sending warning message
CN109034124A (en) * 2018-08-30 2018-12-18 成都考拉悠然科技有限公司 A kind of intelligent control method and system
CN109035629A (en) * 2018-07-09 2018-12-18 深圳码隆科技有限公司 A kind of shopping settlement method and device based on open automatic vending machine
CN110472623A (en) * 2019-06-29 2019-11-19 华为技术有限公司 Image detecting method, equipment and system
WO2020006907A1 (en) * 2018-07-05 2020-01-09 平安科技(深圳)有限公司 Photographing control method, terminal, and computer readable storage medium
US20200034657A1 (en) * 2017-07-27 2020-01-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for occlusion detection on target object, electronic device, and storage medium
CN110807393A (en) * 2019-10-25 2020-02-18 深圳市商汤科技有限公司 Early warning method and device based on video analysis, electronic equipment and storage medium
CN110889403A (en) * 2019-11-05 2020-03-17 浙江大华技术股份有限公司 Text detection method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant