CN116703787B - Building construction safety risk early warning method and system - Google Patents

Building construction safety risk early warning method and system Download PDF

Info

Publication number
CN116703787B
CN116703787B CN202310995004.7A CN202310995004A CN116703787B CN 116703787 B CN116703787 B CN 116703787B CN 202310995004 A CN202310995004 A CN 202310995004A CN 116703787 B CN116703787 B CN 116703787B
Authority
CN
China
Prior art keywords
pixel
image
obtaining
pixel point
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310995004.7A
Other languages
Chinese (zh)
Other versions
CN116703787A (en
Inventor
吴树涛
曹永康
宋炳贤
吴标兴
伏圣岗
李启安
高鲁凡
王宝智
王雨
王鸣凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Railway Construction Engineering Group Co Ltd
China Railway Construction Engineering Group Second Construction Co Ltd
Original Assignee
China Railway Construction Engineering Group Co Ltd
China Railway Construction Engineering Group Second Construction Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Railway Construction Engineering Group Co Ltd, China Railway Construction Engineering Group Second Construction Co Ltd filed Critical China Railway Construction Engineering Group Co Ltd
Priority to CN202310995004.7A priority Critical patent/CN116703787B/en
Publication of CN116703787A publication Critical patent/CN116703787A/en
Application granted granted Critical
Publication of CN116703787B publication Critical patent/CN116703787B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B31/00Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing, in particular to a building construction safety risk early warning method and system, comprising the following steps: collecting continuous frame building construction images, and obtaining a frame difference image; obtaining a marked pixel point according to the frame difference map; obtaining a target pixel point according to the neighborhood chain code variance of the marked pixel point; obtaining the similarity of two target pixel points according to the neighborhood pixel points of the two target pixel points; obtaining the moving distance and the moving direction of the connected domain according to the similarity of the two target pixel points; obtaining the outline of the moving object and the corresponding moving direction and moving distance according to the difference of the moving distance and the moving direction between the communicating areas; obtaining a final second segmentation result according to the moving direction and the moving distance of the moving object; obtaining the size of a sliding window of a dark channel of each pixel point according to the final second segmentation result; therefore, a defogging effect diagram is obtained, and a neural network is used for safety risk early warning. The invention obtains the optimal size of the sliding window of the dark channel and improves the defogging effect of the image.

Description

Building construction safety risk early warning method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a building construction safety risk early warning method and system.
Background
The building construction safety risk early warning method is characterized in that through effective technical means, through analysis and processing of images and videos of a building construction site, potential safety risks are found and predicted in time in the building construction process, and corresponding early warning measures are provided, so that safety of construction operators and construction sites is guaranteed. The building construction safety risk early warning method based on computer vision depends on high-quality video data and accurate target detection, and in practical application, reasonable selection and customized design are required according to specific project requirements and site conditions.
In the process of acquiring high-quality video data, a large amount of dust on a construction site floats in the air, so that the problems of blurred vision and low visibility are caused, the observation range of monitoring equipment and personnel is influenced, and the accuracy and the effectiveness of safety monitoring are reduced. For the traditional dark channel defogging algorithm, when the sliding window is used for calculating the dark channel value, a proper sliding window size and shape are not provided, and the dark channel value only uses the minimum approximation of surrounding pixel points in three channels, so that the problems of detail and structure information loss, contrast reduction, color shift or color saturation change and the like in an image can be caused. Therefore, reasonable blocking is carried out on the basis of super-pixel segmentation by utilizing the characteristics of the image, so that the concentration of fog in each block is ensured to be at the same or similar level, a better defogging effect is achieved, and the accuracy of building construction safety risk early warning is further improved.
Disclosure of Invention
The invention provides a building construction safety risk early warning method and system, which aim to solve the existing problems.
The invention discloses a building construction safety risk early warning method and a system, which adopt the following technical scheme:
the embodiment of the invention provides a building construction safety risk early warning method, which comprises the following steps:
collecting continuous frame building construction images;
super-pixel segmentation is carried out on the building construction image to obtain super-pixel blocks;
obtaining the target degree of the super pixel block according to the gray value and the gradient value of the pixel point in the super pixel block, and obtaining the target degree of each pixel point according to the target degree of the super pixel block;
obtaining a frame difference image according to building construction images of adjacent frames, and obtaining outer edge pixel points of each connected domain according to the frame difference image; obtaining a target pixel point in the frame difference image according to the gray value of the outer edge pixel point in the neighborhood of each outer edge pixel point in the frame difference image; obtaining the similarity of any two target pixel points according to the neighborhood pixel points of any two target pixel points; obtaining the moving distance and the moving direction of each connected domain in the frame difference image according to the similarity of any two target pixel points; obtaining the connected domains of the moving object and the corresponding moving direction and moving distance of the connected domains of each moving object according to the difference of the moving distance and the moving direction between each connected domain;
dividing the connected domain of each moving object according to the moving direction and the moving distance corresponding to the connected domain of each moving object to obtain image blocks; obtaining the difference degree between adjacent image blocks according to the difference of the corresponding adjacent frames between the adjacent image blocks; merging the image blocks according to the difference degree between the adjacent image blocks to obtain a plurality of second-time segmentation image blocks;
obtaining the size of a sliding window of a dark channel of each pixel point in each second divided image block according to the number of the pixel points of each second divided image block and the target degree of each second divided image block; obtaining a dark channel result diagram according to the size of a dark channel sliding window of each pixel point in each second divided image block, and defogging the building construction image according to the dark channel result diagram to obtain a final defogging effect diagram;
and finally, carrying out safety pre-warning on the defogging effect graph of the building construction image of each frame.
Further, the target degree of the super pixel block comprises the following specific steps:
the formula of the target degree of the super pixel block is:
in the method, in the process of the invention,denoted as +.>The number of pixels in each super pixel block, < +.>Denoted as the firstThe +.f in the i super pixel blocks>Gradient magnitude corresponding to each pixel, < ->Denoted as the +.f in the ith super pixel block>Gray value of each pixel, +.>Is the number of super pixel blocks, +.>Denoted as +.>Target degree of each super pixel block.
Further, the target pixel point comprises the following specific steps:
after obtaining the frame difference image, the outer edge pixel points of any one connected domain are respectively marked asWherein->Representing the ith outer edge pixel point, and n represents the number of outer edge pixel points in the connected domain; determining a neighborhood pixel point of each outer edge pixel point, and then screening target pixel points; the screening rules are as follows: if at->The variance of gray values of all outer edge pixel points in the neighborhood of (2) is smaller than a preset threshold value +.>When in use, record->For the target pixel, otherwise +.>And is not the target pixel point.
Further, the similarity of any two target pixel points comprises the following specific steps:
calculating the ratio of the number of the same marked pixels in the two target pixel neighborhood and the number of all the marked pixels in the two target pixel neighborhood, and recording the ratio as the similarity of any two target pixels.
Further, the moving distance and moving direction of each connected domain in the frame difference map comprise the following specific steps:
for a target pixel pointTaking the target pixel point with the highest similarity as +.>Target pixel point +.>And->Matching the two pixel points into one edge pixel point pair to obtain a plurality of edge pixel point pairs matched one by one in any one connected domain, wherein the connecting line between each pair of pixel points has the direction and the length of a connecting line segment, and the angle interval pair is->Equally dividing the pixel points into a plurality of sub-intervals, acquiring the sub-interval to which the direction angle of the line segment between each pair of pixel points belongs, counting the frequency of occurrence of the line segment direction in each sub-interval, and selecting the target pixel point pair in the interval with the largest occurrence frequency; regarding the selected pixel point pair, taking the distance average value of the pixel point pair as the +.>The moving distance and the direction angle average value of the connected domains are used as the connectionThe direction of movement of the pass-through region.
Further, the moving direction and the moving distance corresponding to the communicating domain of the moving object and the communicating domain of each moving object include the following specific steps:
analyzing a plurality of connected domains in a frame difference graph to obtain the corresponding moving distance and direction of each connected domain, and matching in a plurality of connected domains, if the first connected domain isThe communicating domain and->The angle of the moving direction among the two communicating domains is smaller than a preset angle, and the moving distance is smaller than a preset number of pixel points in the image, so that the two communicating domains correspond to the same moving object; selecting a plurality of connected domains belonging to the same object to perform convex hull detection of the plurality of connected domains, wherein the obtained region formed by the convex hulls is used as a connected domain of a mobile object; and finally, taking the moving direction and the moving distance of the same moving object as the average value of the moving directions and the moving distances of a plurality of communicating domains in the communicating domain of the moving object.
Further, the method for dividing the connected domain of the moving object according to the moving direction and the moving distance corresponding to the connected domain of each moving object to obtain the image block includes the following specific steps:
the first isThe connected domain corresponding to the individual moving object is marked +.>The corresponding direction of movement is marked +.>The corresponding movement distance is marked as +.>The method comprises the steps of carrying out a first treatment on the surface of the Edge->Direction is from->One end starts at +.>For the division spacing, use is made of a coding perpendicular to +.>Dividing the outline of the moving object by the straight line of (2) and marking each image block after division as +.>WhereinIndicate->And the n-th image block after the connected domain corresponding to the mobile object is segmented, wherein n represents the number of the segmented image blocks.
Further, the difference degree between the adjacent image blocks comprises the following specific steps:
the formula of the degree of difference between adjacent image blocks is:
in the method, in the process of the invention,representation->The number of corresponding pixels, +.>Representation->The number of corresponding pixel points is determined,representation->Appear at +.>Frame time->Target degree corresponding to each pixel point, < +.>Representation->Appear at +.>Under frame->Target degree corresponding to each pixel point, < +.>Representation->Appear at +.>Frame time->Gray value corresponding to each pixel point, < >>Representation->Appear at +.>Under frame->Gray value corresponding to each pixel point, < >>Indicating the degree of difference between the j-th block and the j+1-th block.
Further, the size of the sliding window of the dark channel of each pixel point in each second divided image block comprises the following specific steps:
the formula of the dark channel sliding window size of each pixel point in each second divided image block is as follows:
in the method, in the process of the invention,for the +.>The number of pixels in each block, < >>For the +.>Target degree of individual blocks,/->The size of the sliding window of the pixel point in the ith block.
On the other hand, the embodiment of the invention provides a building construction safety risk early warning system, which comprises the following modules:
and an image acquisition module: collecting continuous frame building construction images;
an image processing module: super-pixel segmentation is carried out on the building construction image to obtain super-pixel blocks, and gray values of initial dark channel pixel points are obtained;
obtaining the target degree of the super pixel block according to the gray value and the gradient value of the pixel point in the super pixel block, and obtaining the target degree of each pixel point according to the target degree of the super pixel block;
obtaining a frame difference image according to building construction images of adjacent frames, and obtaining outer edge pixel points of each connected domain according to the frame difference image; obtaining a target pixel point in the frame difference image according to the gray value of the outer edge pixel point in the neighborhood of each outer edge pixel point in the frame difference image; obtaining the similarity of any two target pixel points according to the neighborhood pixel points of any two target pixel points; obtaining the moving distance and the moving direction of each connected domain in the frame difference image according to the similarity of any two target pixel points; obtaining the connected domains of the moving object and the corresponding moving direction and moving distance of the connected domains of each moving object according to the difference of the moving distance and the moving direction between each connected domain;
dividing the connected domain of each moving object according to the moving direction and the moving distance corresponding to the connected domain of each moving object to obtain image blocks; obtaining the difference degree between adjacent image blocks according to the difference of the corresponding adjacent frames between the adjacent image blocks; merging the image blocks according to the difference degree between the adjacent image blocks to obtain a plurality of second-time segmentation image blocks;
defogging processing module: obtaining the size of a sliding window of a dark channel of each pixel point in each second divided image block according to the number of the pixel points of each second divided image block and the target degree of each second divided image block; obtaining a dark channel result diagram according to the size of a dark channel sliding window of each pixel point in each second divided image block, and defogging the building construction image according to the dark channel result diagram to obtain a final defogging effect diagram;
and the early warning module is used for: and finally, carrying out safety pre-warning on the defogging effect graph of the building construction image of each frame.
The technical scheme of the invention has the beneficial effects that: aiming at the phenomenon that dust emission often occurs in a building construction video image, the invention obtains the minimum value of RGB three channels and the information of a moving object under an adjacent frame, more finely segments each block of an image on the basis of super-pixel segmentation, and then adaptively selects the dark channel value of pixel points in each block by utilizing the target degree of each block so as to ensure that each pixel point has similar background and fog concentration according to the channel value, and the selection of the dark channel value avoids the problem of poor defogging effect caused by overlarge value of part sliding windows and simultaneously eliminates the problem of color distortion caused by overlarge value of part sliding windows. The defogging result is favorable for target detection in the construction process of subsequent neural network training, and the accuracy of the early warning effect is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of a method for early warning of safety risk of construction according to the present invention;
fig. 2 is a block flow diagram of a construction safety risk early warning system according to the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of a construction safety risk early warning method and system according to the invention, which are specific embodiments, structures, features and effects thereof, with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a building construction safety risk early warning method and a system specific scheme by combining the drawings.
Referring to fig. 1, a flowchart of steps of a construction safety risk early warning method according to an embodiment of the present invention is shown, where the method includes the following steps:
step S001: and collecting the building construction video image for preprocessing.
In this embodiment, the images of the construction sites are collected by analyzing the specific construction sites by using the images of the continuous frames, so that the video images of the construction sites are collected.
Specifically, after the building construction video image is collected, an image of each frame is obtained according to the building construction video image, and a building construction image is obtained.
Step S002: and performing super-pixel segmentation on the image, obtaining the target degree of each block, and performing secondary segmentation on the area occupied by the moving object by utilizing the change condition of the continuous frame to obtain the dark channel value of each pixel point in the block.
It should be noted that, the traditional dark channel value is the minimum value of three channels in the neighborhood, and the neighborhood size is consistent, which is not suitable for selecting dark channel values of different areas, so on the basis of super-pixel segmentation, the image is finely segmented by combining the gray scale change conditions of moving objects under different frames, so as to ensure that the background and fog concentration of the moving objects in each block are in a stable range, and further, the dark channel values of the moving objects corresponding to all pixel points are selected in a self-adaptive manner, and the dark channel values of the moving objects which do not move are acquired in a self-adaptive manner according to the corresponding target degrees.
(1) And performing super-pixel segmentation on the gray level image to obtain super-pixel segmentation blocks, and obtaining the target degree of each super-pixel segmentation block.
It should be noted that, by using the idea of SLIC super-pixel segmentation, the gray value of each pixel point in the block can be kept in a smaller gray level interval to a certain extent, and then, according to the idea of defogging the dark channels, for the portion without fog in the image, even if the image is well exposed, some pixel values of one channel usually exist in three channels of RGB are very low and close to 0; whereas for the portion of the image where fog is present, the minimum value in all three channels of RGB is much greater than 0, giving off-white. The minimum value of each block in three RGB channels is utilized to represent the target degree of the block.
The gray image of any one frame is recorded as the current frame.
Specifically, an initial block threshold is presetWherein the present embodiment is +.>To describe the example, the present embodiment is not particularly limited, wherein +.>Depending on the particular implementation. Initial block size for current frame image using SLIC super-pel segmentation>And performing segmentation to obtain a super-pixel segmented image.
Firstly, acquiring the minimum gray value of each pixel point in an image in three RGB channels to obtain an initial dark channel diagram, wherein the pixel points in the initial dark channel diagramCorresponding gray value +.>The method comprises the following steps:
in the method, in the process of the invention,represented as pixel +.>At->Corresponding gray values in the channel +.>Represented as pixel +.>Gray values in the initial dark channel map, < >>Any one of the R, G and B channels is designated as the c channel, and the +.>Representing taking a minimum function.
The above operation is equivalent to taking 1 in the conventional dark channel sliding window size, and although the dark channel map window thus obtained is small, the global atmospheric illumination and transmittance change cannot be captured, and the direct obtaining of the dark channel map by such a method results in distortion of the estimated result, but the result reflects the fog concentration to some extent.
When the original color of the object is white, the gray value of the original dark channel map will be larger, but compared with the area with higher fog concentration, the edge gradient of the area with white original color will be larger, so the gradient size of each pixel point in the block in the original dark channel map is utilized, the inverse of the gradient information is adopted as the weight, and the corresponding original dark channel value is weighted and averaged. The target level is obtained to initially measure the concentration of mist in the block.
Then after superpixel segmentation, record the firstTarget degree of each super pixel block +.>The method comprises the following steps:
in the method, in the process of the invention,denoted as +.>The number of pixels in each super pixel block, < +.>Denoted as the +.f in the ith super pixel block>Gradient magnitude corresponding to each pixel, < ->Denoted as the +.f in the ith super pixel block>Gray value of each pixel, +.>Is the number of super pixel blocks, +.>Denoted as +.>Target degree of each super pixel block.
When the gray value of the pixel point is larger, the gradient value is smaller, and the corresponding target degree of the super pixel is larger, the super pixel block is more toward fog.
Since there are multiple pixels in the super pixel block, the target degree of the multiple pixels in the super pixel block is equal to the target degree of the super pixel block.
So far, the target degree of all the super pixel blocks and the target degree of the pixel points are obtained.
It should be noted that, the dark channel value is not obtained precisely enough only according to the result of the super-pixel segmentation, and further division needs to be performed on each pixel point in the block on the basis of the dark channel value.
(2) And obtaining the position information of the moving object by utilizing a frame difference method.
It should be noted that, for the frame difference method, a frame difference image is obtained by making a difference between a current frame and a previous frame image, the frame difference image can reflect specific position information of an object with a large position change, and for fog in an image, it is considered that the position of a pixel point with a small change, i.e. a high gray level in the frame difference image, in a time period of two adjacent frames is the outer edge contour of a moving object.
Specifically, the frame difference map is subjected to binarization threshold segmentation by using an Ojin method, so that a communicating region of the outer edge profile of the moving object can be obtained, the communicating region is in a strip-shaped communicating region along the edge profile of the moving object, and the strip width of the communicating region can represent the moving distance of the object in two adjacent frame time periods. The following is a process for acquiring the running direction and distance of a moving object:
A. and selecting target pixel points in the outer edge contour connected domain of the moving object.
Preset threshold valueAnd->Wherein the present embodiment is +.>And->To describe the example, the present embodiment is not particularly limited, wherein +.>And->Depending on the particular implementation.
After obtaining the frame difference image, judging the edge distribution condition of each pixel point at the outer edge of the connected domain, and recording the firstThe outer edge of each connected domainEach pixel point at the edge is +.>Wherein->The ith outer edge pixel point is represented, and n represents the number of all outer edge pixel points; taking the +.>Judging the distribution condition of each outer edge pixel point in the neighborhood, and screening target pixel points: if at->The distribution of the pixel points at the outer edge of the neighborhood is approximately in a straight line, when the variance of gray values of all the pixel points at the outer edge in the neighborhood is smaller than the threshold value +.>When in use, record->For the target pixel, otherwise +.>And is not the target pixel point.
B. And calculating the similarity of any two target pixel points in the same connected domain.
For the firstA plurality of target pixel points in the connected domain are calculated to be +.>And->Similarity of (c): by->Is->Neighborhood of (2), and->Is->Is compared with the neighborhood of (2), is (are) added>And->The similarity of the two adjacent areas is the ratio of the number of the points of the intersection of the edge points in the two adjacent areas to the number of the points of the union of the edge points in the two adjacent areas, and the similarity is recorded as the similarity between any two target pixel points.
C. The connected domain corresponds to the determination of the moving distance and moving direction of the object.
In the kth connected domain, for the target pixel pointTaking the target pixel point with the highest similarity as +.>Target pixel point +.>And->Matching the pixel points into an edge pixel point pair to obtain the +.>A plurality of edge pixel point pairs which are matched one by one in the connected domain, wherein the connecting line between each pair of pixel points has the direction and the length of a connecting line segment, and the angle interval pair is->Equally dividing into 12 sub-intervals to obtain each pair of pixel pointsThe direction angle of the line segment in each subinterval is belonged to, the frequency of the line segment direction occurrence in each subinterval is counted, and the target pixel point pair in the interval with the largest occurrence frequency is selected. Regarding the selected pixel point pairs, taking the distance average value of the pixel point pairs as the first +.>The direction angle average value is used as the moving direction of the object corresponding to the connected domain.
D. And determining the outline of the moving object.
Analyzing a plurality of connected domains in a frame difference graph to obtain the corresponding moving distance and direction of each connected domain, and matching in a plurality of connected domains, if the first connected domain isThe communicating domain and->The angle of the direction of movement between the communicating areas is smaller than +.>And the moving distance is smaller than 2 pixel points in the image, the two connected domains are considered to correspond to the same moving object. And selecting a plurality of connected domains belonging to the same object to perform convex hull detection of the plurality of connected domains, wherein the obtained region formed by the convex hulls is used as the connected domain of the mobile object. And finally, taking the moving direction and the moving distance of the same moving object as the average value of the moving directions and the moving distances of a plurality of communicating domains in the communicating domain of the moving object.
Thus, the connected domain of each moving object in the image is obtained.
(3) And obtaining the difference degree of the corresponding blocks of the moving object, and performing second segmentation.
In the first time, super-pixel segmentation is used, and when dark channel processing is performed on a building construction image, the recognition and detection of a moving object in the image are unclear because the influence of haze is not considered, so that the recognition and detection of the moving object are performed again through second segmentation. In the adjacent frame difference image, the fog concentration is considered to be unchanged, but in the same position of the obtained image due to the movement of the object, the gray value is changed due to the change of the background; similarly, after determining the contour of a moving object, the same position of the object changes in gray value in different images due to the change in the fog concentration. And acquiring the fog concentration difference of each pixel point in the corresponding outline of the moving object by utilizing the difference of the gray values, so as to acquire the difference degree of different areas in the outline of the moving object.
Specifically, the firstFrame and->The frame images are differenced to obtain a frame difference image, and the first part of the frame difference image is added>The connected domain corresponding to the individual moving object is marked +.>The corresponding direction of movement is marked +.>The corresponding movement distance is marked as +.>. Edge->Direction fromOne end starts at +.>For the partition interval, will be perpendicular to +.>Is used as a connecting domain of a parting line to a moving objectDividing into blocks of +.>. Wherein (1)>An image representing the nth block of the kth moving object after being divided is +.>The corresponding region should be a region where the moving object disappears in the previous frame, which is +.>The corresponding areas are matched; i.e. in +.>Moving object under frame->When (1) occurs, at the present +.>The same position of the object under the frame will be +.>Is present in->Under frame->Gray value and->Under frame->The difference between gray values of (2) can be characterized by +.>And->Is a concentration difference of mist.
The outline of the moving object occupies part of the image blocks in the super-pixel segmentation result diagram, and the edge of the outline is used for segmenting the image blocks occupied by the edge of the outline to judgeAnd->Whether or not further division is needed, record->And->Degree of difference->The method comprises the following steps:
in the method, in the process of the invention,representation->The number of corresponding pixels, +.>Representation->The number of corresponding pixel points is determined,representation->Appear at +.>Frame time->Target degree corresponding to each pixel point, < +.>Representation->Appear at +.>Under frame->Target degree corresponding to each pixel point, < +.>Representation->Appear at +.>Frame time->Gray value corresponding to each pixel point, < >>Representation->Appear at +.>Under frame->Gray value corresponding to each pixel point, < >>Indicating the degree of difference between the j-th block and the j+1-th block.
Wherein the target degree of each pixel point is the target degree of the corresponding block of the pixel point,and->The corresponding region occupies a plurality of super-pixel blocks, each super-pixel block having a different target level, using +.>Under frame->Gray value and->Under frame->The difference of gray values of the super pixel blocks to which each point belongs is weighted and averaged, and the gray values of the super pixel blocks are further weighted and averaged>And->Correcting the concentration difference of mist to obtain +.>And->Degree of difference between whenThe smaller the value of (c), the more similar the corresponding two images are to the same moving object.
A difference threshold T is preset, where the embodiment is described by taking t=20 as an example, and the embodiment is not specifically limited, where T may be determined according to the specific implementation situation. Degree of variationIn the case of->And->The concentration difference of the mist is large, and the mist is divided into two parts, wherein the difference is +.>In the case of->And->The difference in the density of the mist is small, and the mist should not be divided, and should be analyzed as one image block. In summary, on the basis of super-pixel segmentation, the image block is finally segmented for the second time by utilizing the external contour of the moving object and each segmentation line which is respectively perpendicular to the moving direction in the moving object.
Thus, an image after the second segmentation is obtained.
(4) And according to the second segmentation result, calculating the dark channel value of each pixel point by combining the target degree.
After the second division, the first divisionAnd (3) carrying out linear normalization on the target degree of each block, and then weighting the size of the block where each pixel point is located to obtain the size of the self-adaptive sliding window of each pixel point. However, the value range of the sliding window with the point of the pixel in each block cannot exceed the edge of the segmentation result, namely, the value range of the dark channel of each pixel near the edge of the block should be +.>Intersection of the sliding window with the secondary partition to which it belongs.
Specifically, after the secondary segmentation result is obtained, the target program of each newly obtained segmentation block is calculatedDegree ofThen->Dark channel sliding window size of pixel points in each block>The method comprises the following steps:
in the method, in the process of the invention,for the second division after the +.>The number of pixels in each block, < >>For the second division after the +.>And finally, obtaining the size of the sliding window of each pixel point by utilizing a downward rounding mode.
Continuing to use the dark channel formula:
in the method, in the process of the invention,representing all pixels in the neighborhood centered on the ith pixel, +.>Expressed as the set->All pixels in (1)/>Corresponding gray values in the channel +.>Expressed as a set of pixels +.>Gray values in dark channel map, +.>Any one of the R, G and B channels is designated as the c channel, and the +.>Representing taking a minimum function.
Here, the minimum value in R, G, B channels in the neighborhood of each pixel point is taken as the dark channel value of each pixel point.
Thus, a final dark channel result graph is obtained.
Step S003: and defogging the dark channel to obtain a defogging result diagram, and acquiring a security risk early warning result by using a neural network.
Using the foggy weather degradation model to the first according to the final dark channel result graphAnd defogging the frame image to obtain a final defogging effect diagram. The fog degradation model is a known technology, and is not specifically described herein, but the present embodiment is not specifically limited thereto, where the defogging operation may be determined according to the specific implementation.
The preset time interval tt is taken as an example of tt=5 seconds in this embodiment, and this embodiment is not particularly limited, where tt may be determined according to the specific implementation.
Manually marking the boundary frame position information of the potential safety hazard target by utilizing a site monitoring video, and inputting the obtained video as a training set into a YOLO neural network; and (3) inputting the obtained defogging result graph into a YOLO neural network as an input set, measuring the difference between the output of the model and the real label by using a cross entropy loss function, dividing the feature graph into grids with different scales at an output layer in the YOLO network, predicting the category and the position of each grid, predicting the running speed and the track of each target according to the position change of each target under continuous frames, and carrying out safety early warning if the running speed and the track are intersected in the running track in a follow-up shorter tt time interval.
The embodiment provides a building construction safety risk early warning system, as shown in fig. 2, which comprises the following modules:
the image acquisition module 101: collecting continuous frame building construction images;
the image processing module 102: super-pixel segmentation is carried out on the building construction image to obtain super-pixel blocks, and gray values of initial dark channel pixel points are obtained;
obtaining the target degree of the super pixel block according to the gray value and the gradient value of the pixel point in the super pixel block, and obtaining the target degree of each pixel point according to the target degree of the super pixel block;
obtaining a frame difference image according to building construction images of adjacent frames, and obtaining outer edge pixel points of each connected domain according to the frame difference image; obtaining a target pixel point in the frame difference image according to the gray value of the outer edge pixel point in the neighborhood of each outer edge pixel point in the frame difference image; obtaining the similarity of any two target pixel points according to the neighborhood pixel points of any two target pixel points; obtaining the moving distance and the moving direction of each connected domain in the frame difference image according to the similarity of any two target pixel points; obtaining the connected domains of the moving object and the corresponding moving direction and moving distance of the connected domains of each moving object according to the difference of the moving distance and the moving direction between each connected domain;
dividing the connected domain of each moving object according to the moving direction and the moving distance corresponding to the connected domain of each moving object to obtain image blocks; obtaining the difference degree between adjacent image blocks according to the difference of the corresponding adjacent frames between the adjacent image blocks; merging the image blocks according to the difference degree between the adjacent image blocks to obtain a plurality of second-time segmentation image blocks;
defogging processing module 103: obtaining the size of a sliding window of a dark channel of each pixel point in each second divided image block according to the number of the pixel points of each second divided image block and the target degree of each second divided image block; obtaining a dark channel result diagram according to the size of a dark channel sliding window of each pixel point in each second divided image block, and defogging the building construction image according to the dark channel result diagram to obtain a final defogging effect diagram;
the early warning module 104: and finally, carrying out safety pre-warning on the defogging effect graph of the building construction image of each frame.
This embodiment is completed.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (10)

1. The building construction safety risk early warning method is characterized by comprising the following steps of:
collecting continuous frame building construction images;
super-pixel segmentation is carried out on the building construction image to obtain super-pixel blocks;
obtaining the target degree of the super pixel block according to the gray value and the gradient value of the pixel point in the super pixel block, and obtaining the target degree of each pixel point according to the target degree of the super pixel block;
obtaining a frame difference image according to building construction images of adjacent frames, and obtaining outer edge pixel points of each connected domain according to the frame difference image; obtaining a target pixel point in the frame difference image according to the gray value of the outer edge pixel point in the neighborhood of each outer edge pixel point in the frame difference image; obtaining the similarity of any two target pixel points according to the neighborhood pixel points of any two target pixel points; obtaining the moving distance and the moving direction of each connected domain in the frame difference image according to the similarity of any two target pixel points; obtaining the connected domains of the moving object and the corresponding moving direction and moving distance of the connected domains of each moving object according to the difference of the moving distance and the moving direction between each connected domain;
dividing the connected domain of each moving object according to the moving direction and the moving distance corresponding to the connected domain of each moving object to obtain image blocks; obtaining the difference degree between adjacent image blocks according to the difference of the corresponding adjacent frames between the adjacent image blocks; merging the image blocks according to the difference degree between the adjacent image blocks to obtain a plurality of second-time segmentation image blocks;
obtaining the size of a sliding window of a dark channel of each pixel point in each second divided image block according to the number of the pixel points of each second divided image block and the target degree of each second divided image block; obtaining a dark channel result diagram according to the size of a dark channel sliding window of each pixel point in each second divided image block, and defogging the building construction image according to the dark channel result diagram to obtain a final defogging effect diagram;
and finally, carrying out safety pre-warning on the defogging effect graph of the building construction image of each frame.
2. The building construction safety risk early warning method according to claim 1, wherein the target degree of the super pixel block comprises the following specific steps:
the formula of the target degree of the super pixel block is:
in the method, in the process of the invention,denoted as +.>The number of pixels in each super pixel block, < +.>Denoted as the +.f in the ith super pixel block>Gradient corresponding to each pixel point is largeSmall (I)>Denoted as the +.f in the ith super pixel block>Gray value of each pixel, +.>Is the number of super pixel blocks, +.>Denoted as +.>Target degree of each super pixel block.
3. The construction safety risk early warning method according to claim 1, wherein the target pixel comprises the following specific steps:
after obtaining the frame difference image, the outer edge pixel points of any one connected domain are respectively marked asWherein->Representing the ith outer edge pixel point, and n represents the number of outer edge pixel points in the connected domain; determining a neighborhood pixel point of each outer edge pixel point, and then screening target pixel points; the screening rules are as follows: if at->The variance of gray values of all outer edge pixel points in the neighborhood of (2) is smaller than a preset threshold value +.>When in use, record->For the target pixel, otherwise +.>And is not the target pixel point.
4. The building construction safety risk early warning method according to claim 1, wherein the similarity of any two target pixel points comprises the following specific steps:
calculating the ratio of the number of the same marked pixels in the two target pixel neighborhood and the number of all the marked pixels in the two target pixel neighborhood, and recording the ratio as the similarity of any two target pixels.
5. The method for early warning of safety risk of building construction according to claim 1, wherein the moving distance and moving direction of each connected domain in the frame difference map comprises the following specific steps:
for a target pixel pointTaking the target pixel point with the highest similarity as +.>Target pixel point +.>And->Matching the two pixel points into one edge pixel point pair to obtain a plurality of edge pixel point pairs matched one by one in any one connected domain, wherein the connecting line between each pair of pixel points has the direction and the length of a connecting line segment, and the angle interval pair is->Equally divided into a plurality of sub-partsThe method comprises the steps of obtaining subintervals of the direction angles of the line segments between each pair of pixel points, counting the occurrence frequency of the line segment directions in each subinterval, and selecting the target pixel point pair in the interval with the largest occurrence frequency; regarding the selected pixel point pair, taking the distance average value of the pixel point pair as the +.>The moving distance of each connected domain and the direction angle average value are taken as the moving direction of the connected domain.
6. The construction safety risk early warning method according to claim 1, wherein the communicating domain of the moving object and the moving direction and the moving distance corresponding to the communicating domain of each moving object comprise the following specific steps:
analyzing a plurality of connected domains in a frame difference graph to obtain the corresponding moving distance and direction of each connected domain, and matching in a plurality of connected domains, if the first connected domain isThe communicating domain and->The angle of the moving direction among the two communicating domains is smaller than a preset angle, and the moving distance is smaller than a preset number of pixel points in the image, so that the two communicating domains correspond to the same moving object; selecting a plurality of connected domains belonging to the same object to perform convex hull detection of the plurality of connected domains, wherein the obtained region formed by the convex hulls is used as a connected domain of a mobile object; and finally, taking the moving direction and the moving distance of the same moving object as the average value of the moving directions and the moving distances of a plurality of communicating domains in the communicating domain of the moving object.
7. The method for early warning of risk of building construction according to claim 1, wherein the step of dividing the connected domain of each moving object according to the moving direction and the moving distance corresponding to the connected domain of each moving object to obtain the image block comprises the following specific steps:
the first isThe connected domain corresponding to the individual moving object is marked +.>The corresponding direction of movement is marked +.>The corresponding movement distance is marked as +.>The method comprises the steps of carrying out a first treatment on the surface of the Edge->Direction is from->One end starts at +.>For the division spacing, use is made of a coding perpendicular to +.>Dividing the outline of the moving object by the straight line of (2) and marking each image block after division as +.>Wherein->Indicate->And the n-th image block after the connected domain corresponding to the mobile object is segmented, wherein n represents the number of the segmented image blocks.
8. The method for early warning of safety risk of building construction according to claim 1, wherein the degree of difference between the adjacent image blocks comprises the following specific steps:
the formula of the degree of difference between adjacent image blocks is:
in the method, in the process of the invention,representation->The number of corresponding pixels, +.>Representation->The number of corresponding pixels, +.>Representation->Appear at +.>Frame time->Target degree corresponding to each pixel point, < +.>Representation->Appear at +.>Under frame->Target degree corresponding to each pixel point, < +.>Representation->Appear at +.>Frame time->Gray value corresponding to each pixel point, < >>Representation->Appear at +.>Under frame->Gray value corresponding to each pixel point, < >>Indicating the degree of difference between the j-th block and the j+1-th block.
9. The method for early warning of risk of building construction according to claim 1, wherein the size of the sliding window of the dark channel of each pixel point in each second divided image block comprises the following specific steps:
the formula of the dark channel sliding window size of each pixel point in each second divided image block is as follows:
in the method, in the process of the invention,for the +.>The number of pixels in each block, < >>For the +.>Target degree of individual blocks,/->The size of the sliding window of the pixel point in the ith block.
10. The building construction safety risk early warning system is characterized by comprising the following modules:
and an image acquisition module: collecting continuous frame building construction images;
an image processing module: super-pixel segmentation is carried out on the building construction image to obtain super-pixel blocks, and gray values of initial dark channel pixel points are obtained;
obtaining the target degree of the super pixel block according to the gray value and the gradient value of the pixel point in the super pixel block, and obtaining the target degree of each pixel point according to the target degree of the super pixel block;
obtaining a frame difference image according to building construction images of adjacent frames, and obtaining outer edge pixel points of each connected domain according to the frame difference image; obtaining a target pixel point in the frame difference image according to the gray value of the outer edge pixel point in the neighborhood of each outer edge pixel point in the frame difference image; obtaining the similarity of any two target pixel points according to the neighborhood pixel points of any two target pixel points; obtaining the moving distance and the moving direction of each connected domain in the frame difference image according to the similarity of any two target pixel points; obtaining the connected domains of the moving object and the corresponding moving direction and moving distance of the connected domains of each moving object according to the difference of the moving distance and the moving direction between each connected domain;
dividing the connected domain of each moving object according to the moving direction and the moving distance corresponding to the connected domain of each moving object to obtain image blocks; obtaining the difference degree between adjacent image blocks according to the difference of the corresponding adjacent frames between the adjacent image blocks; merging the image blocks according to the difference degree between the adjacent image blocks to obtain a plurality of second-time segmentation image blocks;
defogging processing module: obtaining the size of a sliding window of a dark channel of each pixel point in each second divided image block according to the number of the pixel points of each second divided image block and the target degree of each second divided image block; obtaining a dark channel result diagram according to the size of a dark channel sliding window of each pixel point in each second divided image block, and defogging the building construction image according to the dark channel result diagram to obtain a final defogging effect diagram;
and the early warning module is used for: and finally, carrying out safety pre-warning on the defogging effect graph of the building construction image of each frame.
CN202310995004.7A 2023-08-09 2023-08-09 Building construction safety risk early warning method and system Active CN116703787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310995004.7A CN116703787B (en) 2023-08-09 2023-08-09 Building construction safety risk early warning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310995004.7A CN116703787B (en) 2023-08-09 2023-08-09 Building construction safety risk early warning method and system

Publications (2)

Publication Number Publication Date
CN116703787A CN116703787A (en) 2023-09-05
CN116703787B true CN116703787B (en) 2023-10-31

Family

ID=87834321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310995004.7A Active CN116703787B (en) 2023-08-09 2023-08-09 Building construction safety risk early warning method and system

Country Status (1)

Country Link
CN (1) CN116703787B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116980659B (en) * 2023-09-22 2024-01-30 深圳市雅源光电科技有限公司 Intelligent encryption method for optical lens image

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982513A (en) * 2012-12-04 2013-03-20 电子科技大学 Adaptive image defogging method based on textures
WO2019205707A1 (en) * 2018-04-26 2019-10-31 长安大学 Dark channel based image defogging method for linear self-adaptive improvement of global atmospheric light
CN110428371A (en) * 2019-07-03 2019-11-08 深圳大学 Image defogging method, system, storage medium and electronic equipment based on super-pixel segmentation
CN111696123A (en) * 2020-06-15 2020-09-22 荆门汇易佳信息科技有限公司 Remote sensing image water area segmentation and extraction method based on super-pixel classification and identification
CN113344796A (en) * 2020-02-18 2021-09-03 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN113554702A (en) * 2021-09-22 2021-10-26 南通林德安全设备科技有限公司 Infusion progress evaluation method and system based on artificial intelligence
CN114155173A (en) * 2022-02-10 2022-03-08 山东信通电子股份有限公司 Image defogging method and device and nonvolatile storage medium
CN114842033A (en) * 2022-06-29 2022-08-02 江西财经大学 Image processing method for intelligent AR equipment
CN115063404A (en) * 2022-07-27 2022-09-16 建首(山东)钢材加工有限公司 Weathering resistant steel weld joint quality detection method based on X-ray flaw detection
CN115439494A (en) * 2022-11-08 2022-12-06 山东大拇指喷雾设备有限公司 Spray image processing method for quality inspection of sprayer
CN115914634A (en) * 2022-12-16 2023-04-04 苏州迈创信息技术有限公司 Environmental security engineering monitoring data management method and system
CN116188331A (en) * 2023-04-28 2023-05-30 淄博市淄川区市政环卫服务中心 Construction engineering construction state change monitoring method and system
CN116416577A (en) * 2023-05-06 2023-07-11 苏州开普岩土工程有限公司 Abnormality identification method for construction monitoring system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982513A (en) * 2012-12-04 2013-03-20 电子科技大学 Adaptive image defogging method based on textures
WO2019205707A1 (en) * 2018-04-26 2019-10-31 长安大学 Dark channel based image defogging method for linear self-adaptive improvement of global atmospheric light
CN110428371A (en) * 2019-07-03 2019-11-08 深圳大学 Image defogging method, system, storage medium and electronic equipment based on super-pixel segmentation
CN113344796A (en) * 2020-02-18 2021-09-03 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN111696123A (en) * 2020-06-15 2020-09-22 荆门汇易佳信息科技有限公司 Remote sensing image water area segmentation and extraction method based on super-pixel classification and identification
CN113554702A (en) * 2021-09-22 2021-10-26 南通林德安全设备科技有限公司 Infusion progress evaluation method and system based on artificial intelligence
CN114155173A (en) * 2022-02-10 2022-03-08 山东信通电子股份有限公司 Image defogging method and device and nonvolatile storage medium
CN114842033A (en) * 2022-06-29 2022-08-02 江西财经大学 Image processing method for intelligent AR equipment
CN115063404A (en) * 2022-07-27 2022-09-16 建首(山东)钢材加工有限公司 Weathering resistant steel weld joint quality detection method based on X-ray flaw detection
CN115439494A (en) * 2022-11-08 2022-12-06 山东大拇指喷雾设备有限公司 Spray image processing method for quality inspection of sprayer
CN115914634A (en) * 2022-12-16 2023-04-04 苏州迈创信息技术有限公司 Environmental security engineering monitoring data management method and system
CN116188331A (en) * 2023-04-28 2023-05-30 淄博市淄川区市政环卫服务中心 Construction engineering construction state change monitoring method and system
CN116416577A (en) * 2023-05-06 2023-07-11 苏州开普岩土工程有限公司 Abnormality identification method for construction monitoring system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Haze Removal Algorithm for Optical Remote Sensing Image Based on Multi-Scale Model and Histogram Characteristic;Shiqi Huang等;ieee;全文 *
基于超像素和暗通道先验的图像去雾复原方法;徐浩;谭一博;刘博文;王国宇;;中国海洋大学学报(自然科学版)(10期);全文 *

Also Published As

Publication number Publication date
CN116703787A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN105678811A (en) Motion-detection-based human body abnormal behavior detection method
CN112036254B (en) Moving vehicle foreground detection method based on video image
CN116703787B (en) Building construction safety risk early warning method and system
CN104978567B (en) Vehicle checking method based on scene classification
US20180268556A1 (en) Method for detecting moving objects in a video having non-stationary background
CN103630496B (en) Based on the traffic video visibility detecting method of road surface apparent brightness and least square method
CN114332650B (en) Remote sensing image road identification method and system
CN103020906B (en) A kind of preprocess method of star sensor measuring star in daytime image
CN115797641B (en) Electronic equipment gas leakage detection method
CN105957356B (en) A kind of traffic control system and method based on pedestrian&#39;s quantity
CN117132510B (en) Monitoring image enhancement method and system based on image processing
CN116630813B (en) Highway road surface construction quality intelligent detection system
CN112258525B (en) Image abundance statistics and population identification algorithm based on bird high-frame frequency sequence
CN116758081B (en) Unmanned aerial vehicle road and bridge inspection image processing method
CN111709964B (en) PCBA target edge detection method
CN110321855A (en) A kind of greasy weather detection prior-warning device
CN111460917A (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN117036346B (en) Silica gel sewage treatment intelligent monitoring method based on computer vision
CN114155493A (en) Dam flow early warning system and method based on video analysis technology
CN113689399B (en) Remote sensing image processing method and system for power grid identification
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN115376106A (en) Vehicle type identification method, device, equipment and medium based on radar map
CN111860289B (en) Time sequence action detection method and device and computer equipment
CN111127515B (en) Method and system for predicting sand and dust moving path and electronic equipment
CN114332144A (en) Sample granularity detection method and system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant