CN112529853A - Method and device for detecting damage of netting of underwater aquaculture net cage - Google Patents


Publication number
CN112529853A
CN112529853A
Authority
CN
China
Prior art keywords
image
netting
damage
processing
underwater
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011372756.0A
Other languages
Chinese (zh)
Inventor
陈巍
高天宇
郭铁铮
陈国军
许鑫
金俊
郝笑
王杰
贺晨煜
郑亦峰
杨刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202011372756.0A priority Critical patent/CN112529853A/en
Publication of CN112529853A publication Critical patent/CN112529853A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06T 5/30 — Erosion or dilatation, e.g. thinning
    • G06T 5/70 — Denoising; smoothing
    • G06T 5/94 — Dynamic range modification based on local image properties, e.g. local contrast enhancement
    • G06T 7/11 — Region-based segmentation
    • G06T 7/13 — Edge detection
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 2207/10024 — Image acquisition modality: color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for detecting damage to the netting of an underwater aquaculture net cage. The method comprises the following steps: preprocessing an original netting image of a target net cage, where the preprocessing includes color space conversion and image enhancement; performing edge detection on the preprocessed netting image; segmenting the netting image; applying morphological processing and noise removal to the segmented image to obtain a netting feature image; and extracting the skeleton of the netting image to obtain netting damage information. The method can quickly and accurately extract the features of netting damage and detect the degree of damage to the netting.

Description

Method and device for detecting damage of netting of underwater aquaculture net cage
Technical Field
The invention relates to a method for detecting damage to the netting of an underwater net cage, and to a corresponding detection device, belonging to the technical field of image recognition for underwater aquaculture net cages.
Background
The world's oceans cover about 3.6 × 10⁸ km², and the potential for developing and utilizing marine biological resources is enormous. With the growing demand for marine products, underwater cage culture has gradually become the main direction of development for China's mariculture industry as a path to sustainable fisheries. According to statistics, about two thirds of the losses in cage culture are caused by damaged netting. Damage to the netting not only causes unpredictable economic losses but can also create severe ecological problems. At present, to reduce such losses, the integrity of the netting is checked by periodically replacing the net or by having divers frequently inspect the cage, check the mooring system, detect sludge accumulation, and so on; this is inefficient and poses considerable safety hazards. A more reliable and safer cage inspection solution is therefore needed to address this key problem affecting the healthy development of underwater cage culture.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method and a device for detecting damage to the netting of an underwater net cage that can quickly and accurately extract the features of netting breakage.
In order to solve the technical problem, the invention provides a method for detecting the damage of a netting of an underwater net cage, which comprises the following steps:
preprocessing an original netting image of a target net cage, wherein the preprocessing comprises color space conversion processing and image enhancement processing;
carrying out edge detection on the preprocessed netting image;
carrying out image segmentation on the netting image;
performing morphological processing and noise point elimination processing on the segmented image to obtain a netting characteristic image;
and carrying out skeleton extraction on the netting image to obtain netting damage information.
Further, the color space conversion process includes:
the original netting image is converted from RGB color space to HSV color space and the V value is increased to enhance the image brightness.
Further, the image enhancement processing includes:
and performing image enhancement processing by adopting a Retinex algorithm of guided filtering.
Further, the performing image enhancement processing by using the Retinex algorithm with guided filtering includes:
estimating illumination components from the original image based on the guide filtering, and removing the illumination components to obtain a reflection image;
the estimate of the illumination component is corrected based on the actual light intensity and re-mapped into the reflected image.
Further, estimating the illumination component from the original image based on guided filtering includes:
the guided filtering can be represented as a local linear model:

q_i = a_k · I_v,i + b_k,  for every pixel i in window ω_k

where q_i is the linearly transformed gray value at pixel i of window ω_k of the image I_v; k denotes the center pixel of window ω_k; and within ω_k the coefficients a_k and b_k are constants. The local linear coefficients a_k and b_k can be solved as:

a_k = σ_k² / (σ_k² + δ)

b_k = (1 − a_k) · u_k

where I_v,j is the pixel value at point j in window ω_k of the image I_v; u_k is the mean of the pixels in window ω_k; σ_k is the standard deviation of the pixels in window ω_k; |ω| is the number of pixels in window ω_k; and δ is a regularization parameter.
To obtain stable q_i values, the linear model is averaged over all windows containing the pixel and applied to the entire image, giving the guided filter function:

q_i = (1/|ω|) · Σ_{k: i ∈ ω_k} (a_k · I_v,i + b_k)

where f_j(I_v(x, y)) denotes applying the guided filter function f to pixel j at each (x, y) coordinate of the image I_v.
Further, the edge detection of the preprocessed netting image includes:
and (5) performing edge extraction on the preprocessed netting image by using a Canny edge detection operator.
Further, the image segmentation of the netting image includes:
and (4) carrying out image segmentation on the netting image by adopting an adaptive maximum between-class variance threshold segmentation algorithm.
Correspondingly, the invention also provides a netting damage detection device for the underwater net cage, which comprises:
the preprocessing module is used for preprocessing an original netting image of a target net cage, the preprocessing comprising color space conversion processing and image enhancement processing;
the edge detection module is used for carrying out edge detection on the preprocessed netting image;
the image segmentation module is used for carrying out image segmentation on the netting image;
the image denoising module is used for performing morphological processing and noise point elimination processing on the segmented image to obtain a netting characteristic image;
and the netting damage judgment module is used for carrying out skeleton extraction on the netting image to obtain netting damage information.
Compared with the prior art, the invention has the following beneficial effects:
1. To address the heavy noise interference, blurred edges and low contrast of underwater netting images, a Retinex image enhancement algorithm based on multi-scale guided filtering is proposed for preprocessing. The algorithm first converts the underwater netting image into the HSV color space, then smooths the brightness image with guided filtering to estimate the illumination component of the image.
2. To address the low contrast, uneven illumination and strong background interference of underwater netting images, Canny edge detection combined with adaptive maximum between-class variance threshold segmentation is applied to the enhanced image. Edge extraction is thereby converted into the problem of finding the extremum of a multidimensional function, so boundary positions can be reflected more accurately and in more detail.
3. To address the many small noise points and edge burrs in the image, segmentation is combined with morphological processing and noise removal. Clear binary and skeleton images of the underwater netting are obtained, and through feature parameter extraction the netting image is thinned to a single-pixel width.
4. The method can quickly and accurately extract the features of netting damage and detect both the position and the degree of the damage. It not only alleviates the color cast and "halo artifact" problems of Retinex enhancement in high-contrast edge regions, but also, through the feature parameter extraction algorithm, effectively detects the integrity of the netting in complex underwater environments.
Drawings
FIG. 1 is a schematic structural view of detection of an underwater aquaculture net cage;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is an HSV color space model;
FIG. 4 is a color channel of an original netting image and HSV color model segmentation;
FIG. 5 is a pixel distribution histogram;
FIG. 6 is an algorithm flow chart;
FIG. 7 is a diagram of a Canny edge detection process;
fig. 8 is a comparison of edge detection effects, where a) is an original gray image b) is a Laplacian edge detection result c) is a Canny edge detection result;
FIG. 9 is a diagram of binarization results;
FIG. 10 is a diagram of morphological processing and noise rejection effects;
fig. 11 is a diagram showing the effect of feature parameter extraction.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1
Taking the practical application of underwater net cage inspection as its background, the invention provides a method for detecting the integrity of underwater netting based on image processing technology. The method is non-contact and intelligent. Extensive analysis of netting images of underwater aquaculture cages shows that: 1) because the images are acquired underwater, the optical effects of the water body and the limitations of the imaging hardware make the lighting dark and uneven, and reflective spots may appear in the image; 2) the netting remains underwater for long periods and its surface corrodes easily, so the texture is complex, impurities lodge in the meshes, and noise interference appears on the surface; 3) the underwater netting is a weak target and the collected images are blurred; in particular, the edge information of the mesh is disturbed by impurities, and color cast and other problems may even arise under the influence of the underwater environment.
In view of these characteristics of underwater netting images, the invention provides a netting damage detection method for underwater aquaculture net cages; the processing flow is shown in FIG. 2 and comprises the following steps:
step 1, preprocessing an original netting image of a target net cage, wherein the preprocessing comprises color space conversion and image denoising and enhancement by adopting a Retinex algorithm of guided filtering.
The image preprocessing process comprises the steps of firstly converting an image into an HSV color space image, improving the problem of image color distortion, then estimating the illumination component of the image by adopting guide filtering, keeping the edge information of the image, effectively solving the phenomenon of 'halo artifact' of the underwater netting image, and verifying that the algorithm can better realize the preprocessing of the underwater netting image through experiments.
1.1 Conversion of color space
Because of the limitations of the shooting conditions, underwater netting images often lack illumination and their overall brightness is dark, so the brightness must be enhanced first when enhancing a low-illumination image. In the RGB color space, the three-dimensional vector [R, G, B] carries brightness information as well as color information: if a grayscale enhancement method is applied directly to a color image, the color information also changes to some extent whenever the brightness changes. Enhancing a low-illumination underwater image while keeping its colors unchanged therefore hinges on isolating the brightness information. In the HSV color space, whose model is shown in fig. 3, hue (H), saturation (S) and brightness (V, for Value) correspond directly to human visual perception and are only weakly coupled. The method therefore selects the HSV color space model to reduce the adverse effect of color distortion on later feature extraction. Only the brightness component V is operated on in HSV space (brightness is raised by increasing V, which avoids color cast); the three primary colors are not manipulated directly, and the relationship between hue and saturation is unchanged.
The original netting image of the target net cage is converted from RGB space to HSV space. In HSV, V represents brightness and takes values in [0, 255]; its value is adjusted as needed, and in this embodiment the value of V is increased to enhance the brightness of the image.
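As a minimal sketch of this per-pixel V-channel boost (the patent names no implementation; the function name, the gain parameter and the use of Python's stdlib `colorsys` are our own assumptions):

```python
import colorsys

def brighten_rgb(r, g, b, gain=1.4):
    """Convert an RGB pixel to HSV, scale only the V (brightness) channel,
    and convert back: hue and saturation are untouched, so the brightness
    boost introduces no color cast (hypothetical helper for illustration)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    v = min(1.0, v * gain)              # clamp V to its valid range
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

dark_pixel = (60, 80, 40)               # a dim greenish underwater pixel
print(brighten_rgb(*dark_pixel, 1.4))   # → (84, 112, 56)
```

Because HSV-to-RGB is linear in V, scaling V scales all three primaries by the same factor, which is exactly why the hue/saturation relationship is preserved.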
Histogram equalization is then applied so that the image is distributed uniformly over the whole gray range, improving the overall contrast; the binarization threshold can be determined preliminarily from the histogram shown in fig. 5.
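The equalization step can be sketched as follows; this is the textbook CDF-remapping form, not code from the patent, and the variable names are our own:

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image: map each gray level
    through the normalized cumulative histogram so the occupied levels
    are spread over the full [0, 255] range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first nonzero CDF value
    lut = np.clip(np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

# a low-contrast image occupying only levels 100..120
img = np.random.default_rng(0).integers(100, 121, (64, 64)).astype(np.uint8)
out = equalize_histogram(img)
print(out.min(), out.max())                    # → 0 255
```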
FIG. 4 shows the color channels obtained by splitting the original netting image with the HSV color model. It can be seen that increasing the value of V in HSV space achieves the goal of brightening the image, with a better result than adjusting H or S. Compared with the original image on the left, adjusting the H value (image a) blurs the transition between background and netting; adjusting the S value (image b) over-enhances the background and increases edge interference; image c not only brightens the image but also shows no color cast, essentially retaining the characteristics of the original image.
1.2 guided filtering based illumination component estimation
On the basis of the Retinex algorithm, the invention proposes an underwater image enhancement algorithm based on guided filtering, which addresses the "halo artifact" and over-enhancement problems of the traditional Retinex algorithm in high-contrast edge regions, chiefly improving color fidelity and the estimation of the image's illumination component. A flow chart of this guided-filtering-based enhancement algorithm is shown in fig. 6. The image can be understood as decomposed into three layers: the original image I, the reflection image R and the luminance image L, where the reflection component in R determines the intrinsic properties of the image and the dynamic range of the result depends on the luminance component in L.
The core idea of the algorithm is to estimate the illumination component and remove it from the original image, partially correcting the influence of the lighting conditions and yielding the reflection image; the estimate of the illumination component is then corrected according to the actual light intensity and mapped back into the reflection image, restoring and improving the image to a certain degree. During this process the image is usually converted into the logarithmic domain to reduce computational complexity, and afterwards restored by an exponential function.
The invention estimates the illumination component with linear guided filtering, which both smooths and preserves edges. The filter follows a least-squares formulation and is computed with box filtering and the integral image technique, which lowers the time complexity while preserving gradients; its execution speed is independent of the filter window size, so it estimates the illumination component more efficiently than bilateral filtering.
The guided filtering algorithm computes the output image using the guided image, which may be the input image itself or another image. The algorithm is not only applied to image smoothing processing, but also widely applied to image defogging and underwater image processing.
The reflection component is solved from the reflection image; the core idea is that the original image with the illumination component removed equals the reflection component. The specific formula is:

R_v(x, y) = log I_v(x, y) − log[f(I_v(x, y))]

where R_v(x, y) is the reflection component in logarithmic form; f denotes the guided filter function; and I_v(x, y) is a pixel of the original image. log I_v(x, y) takes the logarithm of the original image pixels, and log[f(I_v(x, y))] takes the logarithm after the original image has been processed by the guided filter function.
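The log-domain subtraction R = log I − log L can be sketched as below; this is a generic single-scale Retinex step under our own naming, with a known illumination used in the demo in place of the guided-filter estimate:

```python
import numpy as np

def retinex_reflectance(image, illumination, eps=1e-6):
    """Retinex in the log domain: subtract the log of the illumination
    estimate from the log of the original, leaving the reflectance
    R = log I - log L, then rescale to a displayable [0, 255] range."""
    log_r = np.log(image + eps) - np.log(illumination + eps)
    log_r -= log_r.min()
    if log_r.max() > 0:
        log_r = log_r / log_r.max() * 255.0
    return log_r.astype(np.uint8)

# toy example: a flat-reflectance scene under a left-to-right light gradient
light = np.linspace(0.2, 1.0, 8)[None, :] * np.ones((8, 8))
scene = 100.0 * light                    # observed image = reflectance * light
r = retinex_reflectance(scene, 100.0 * light)
print(r.min(), r.max())                  # → 0 0 (gradient fully removed)
```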
The guided filtering can be represented as a local linear model:

q_i = a_k · I_v,i + b_k,  for every pixel i in window ω_k

where q_i is the linearly transformed gray value at pixel i of window ω_k of the image I_v; k denotes the center pixel of window ω_k; and within ω_k the coefficients a_k and b_k are constants. The local linear coefficients a_k and b_k can be solved as:

a_k = σ_k² / (σ_k² + δ)

b_k = (1 − a_k) · u_k

where I_v,j is the pixel value at point j in window ω_k of the image I_v; u_k is the mean of the pixels in window ω_k; σ_k is the standard deviation of the pixels in window ω_k; |ω| is the number of pixels in window ω_k; and δ is a regularization parameter that balances the degree of smoothing against edge retention: larger values give stronger smoothing but poorer edge preservation.
To obtain a stable q_i, it is averaged over all windows containing the pixel; applying the linear model to the entire image gives the guided filter function:

q_i = (1/|ω|) · Σ_{k: i ∈ ω_k} (a_k · I_v,i + b_k)

where f_j(I_v(x, y)) denotes applying the guided filter function f to pixel j at each (x, y) coordinate of the image I_v.
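A minimal NumPy sketch of the self-guided filter described above (guide = input image), assuming the reconstructed coefficients a_k = σ_k²/(σ_k² + δ) and b_k = (1 − a_k)u_k; the function names and the integral-image box mean are our own:

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window; edges handled by 'edge' padding.
    The integral image (2-D cumulative sum) makes every window mean O(1)."""
    pad = np.pad(a, r, mode='edge')
    s = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    w = 2 * r + 1
    total = s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]
    return total / (w * w)

def guided_filter_self(I, r=2, delta=0.01):
    """Edge-preserving self-guided filter used to estimate the
    illumination component: per-window coefficients
      a_k = var_k / (var_k + delta),  b_k = (1 - a_k) * u_k
    are averaged over all windows covering each pixel."""
    u = box_mean(I, r)
    var = box_mean(I * I, r) - u * u
    a = var / (var + delta)
    b = (1 - a) * u
    return box_mean(a, r) * I + box_mean(b, r)
```

On a constant region var ≈ 0, so a ≈ 0 and the output equals the local mean (maximal smoothing); on a strong edge var ≫ δ, so a ≈ 1 and the input passes through, which is the edge-preserving behavior the text relies on.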
And 2, carrying out edge detection on the preprocessed netting image to extract the edges of the netting.
Changes in the physical characteristics of an object, including the ambient brightness at imaging time, the shape of the object and the reflection coefficient of the medium, produce changes at the edges of the image. The method uses edge detection technology to extract the edges of the netting so that the integrity of the netting can be further analyzed and computed.
Herein, the Canny edge detection operator is used to extract edges from the experimental original image; the algorithm steps are shown in fig. 7 and comprise:
1) The image is first preprocessed with a Gaussian filter to reduce interference from high-frequency noise.
2) The gradient magnitude and direction of each pixel are computed by convolving 4-by-4 template kernels with each pixel of the image.
3) Non-maximum suppression is applied to the computed gradient magnitudes, removing interference and retaining the edge parts.
4) Edges are detected and connected with a dual-threshold algorithm.
The two thresholds are a high threshold and a low threshold. If a pixel's gradient magnitude is below the low threshold, no edge is assumed and the pixel is excluded; if it is above the high threshold, an edge is assumed and the pixel is retained; if it lies between the two, the neighboring pixels are examined, and the pixel is retained as an edge only if a neighbor exceeds the high threshold.
The ratio of high to low threshold in this experiment was set to 3.2:1.
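The double-threshold step can be sketched in isolation as follows; this is a generic hysteresis classification (one neighbor pass rather than full iterative edge tracking), with names and the toy gradient map our own:

```python
import numpy as np

def double_threshold(grad, high, ratio=3.2):
    """Canny-style double thresholding: gradients above `high` are strong
    edges, those below high/ratio are discarded, and the weak ones in
    between are kept only if an 8-neighbour is a strong edge."""
    low = high / ratio                       # the text uses a 3.2:1 ratio
    strong = grad >= high
    weak = (grad >= low) & ~strong
    # does any 8-neighbour of each pixel hold a strong edge?
    p = np.pad(strong, 1)
    near_strong = np.zeros_like(strong)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                near_strong |= p[1 + dy:p.shape[0] - 1 + dy,
                                 1 + dx:p.shape[1] - 1 + dx]
    return strong | (weak & near_strong)

g = np.array([[90., 40., 5.],
              [10., 35., 2.],
              [ 3.,  1., 0.]])
print(double_threshold(g, high=80).astype(int))
```

With high = 80 (low = 25), the pixel at 90 is a strong edge, and the weak pixels at 40 and 35 are kept because they border it; everything else is discarded.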
The result is compared with Laplacian edge extraction in fig. 8. Unlike other edge detection operators, the Canny operator converts edge extraction into the problem of finding the extremum of a function: when deciding whether a pixel is an edge point, it does not simply take the gradient at that pixel but considers it jointly with the surrounding pixels. In terms of extraction quality, Canny edge detection localizes edges more accurately and reflects boundary information in more detail.
And 3, carrying out image segmentation on the netting image, and separating the target from the background.
Image segmentation divides an image into several specific, non-overlapping regions according to features such as gray level, color, texture and shape, so that an object of interest can be extracted from them; it is the key step from image processing to image analysis. To cope with the low contrast, uneven illumination and strong background interference of underwater netting images, the method uses an adaptive maximum between-class variance threshold segmentation algorithm, which divides the image into a background part and a target part according to their gray-level characteristics; the threshold obtained when the between-class variance is largest is the optimal binarization threshold.
The formula for calculating the variance G:
G=W0×W1×(U0-U1)2
where W0 is the proportion of all image pixels belonging to the background; W1 is the proportion belonging to the target; U0 is the mean gray level of the background pixels; U1 is the mean gray level of the target pixels; and G is the computed between-class variance. When G is largest, the variance between foreground and background is largest, meaning the two parts differ most and the probability of misclassification is lowest, so the corresponding t is the optimal segmentation threshold.
The binarization results for different t are shown in fig. 9, with t = 10, 50, 100, 150 and 200. Relative to the original image, the result should avoid both over-processing and under-processing, determine the contour boundary accurately, and yield closed, continuous edges. In fig. 9, a) is the input to the segmentation; b) and c) are over-processed, labeling too many pixels as target; d) and e) are well processed; f) is under-processed, labeling too few pixels as target. The optimal threshold therefore lies between 100 and 150.
And 4, performing morphological processing and noise point elimination processing on the segmented image to obtain a netting damage characteristic image.
After the adaptive threshold segmentation experiment on the underwater netting image, the results show that many small noise points and edge burrs still remain, so the image is further processed with a morphological opening operation.
The opening operation is erosion followed by dilation. Erosion shrinks the target region, contracting the image boundary and eliminating small, meaningless objects. Dilation enlarges the target region, merging touching background points into the objects of interest so that the target boundary moves outward; it can fill holes in the target region and remove small particle noise contained within it. The advantage of opening is that it weakens narrow parts, removes elongated protrusions, edge burrs and isolated spots, smooths the boundaries of larger objects, and breaks the noise adhesion between objects, while leaving object dimensions essentially unchanged. The implementing function is:
dst=open(src,element)=dilate(erode(src,element),element)
where src is the input image, dst the output image and element the structuring element; open, dilate and erode are function names.
To erode A with structuring element B, an origin must be defined in B. B moves over A just as a convolution kernel moves over an image, with the computation carried out where the kernel overlaps the image. When the origin of B is translated to pixel (x, y) of image A, the output pixel (x, y) is assigned 1 if B is completely contained in the overlap with A at (x, y) (i.e., every position where B is 1 also has value 1 in the corresponding position of A), and 0 otherwise. B moves over A in sequence, performing the morphological operation over the area it covers: only when the area of A covered by the 1-entries of B, e.g. [1, 1; 1, 1] or [1, 0; 1, 1], is entirely foreground (i.e., the '1's in B are a subset of the covered area) does the corresponding output pixel become 1.
To dilate A with structuring element B, the origin of B is translated to image pixel (x, y). If the intersection of B with A at (x, y) is not empty (i.e., at least one position where B is 1 also has value 1 in the corresponding position of A), the output pixel (x, y) is assigned 1, and 0 otherwise.
For both erosion and dilation, the structuring element B is translated over the image like a convolution kernel: the origin of B corresponds to the kernel center, and the result is stored at the element under the kernel center. The difference is that erosion requires B to be completely contained in the region it covers, while dilation only requires B to intersect it.
In matlab image processing, a bwaeeaopen () function can delete objects in a small-area region, and the basic idea is to delete objects with the area smaller than P in a binary image and control the size of the objects in the region to be deleted by setting the size of a threshold value P, so that the removal of isolated noise points of the underwater netting image is realized. The results of the morphological processing and noise rejection experiments are shown in fig. 10, where (a) is the input image with the edge detection completed; (b) for morphological processing of the images, the case is used to cover the netting target with an expanded line bundle; (c) the effect graph, namely the netting characteristic graph, is removed for the interference noise, and the next grid damage information identification is facilitated.
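A bwareaopen-style area filter can be sketched with a flood fill over 8-connected components; this NumPy analogue is our own illustration, not MATLAB's implementation:

```python
import numpy as np
from collections import deque

def area_open(binary, min_area):
    """Analogue of MATLAB's bwareaopen: delete every 8-connected
    foreground component whose pixel count is below min_area."""
    out = binary.copy()
    seen = np.zeros(binary.shape, dtype=bool)
    H, W = binary.shape
    for sy in range(H):
        for sx in range(W):
            if binary[sy, sx] and not seen[sy, sx]:
                # breadth-first flood fill to collect one component
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < H and 0 <= nx < W \
                                    and binary[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if len(comp) < min_area:
                    for y, x in comp:
                        out[y, x] = 0
    return out

img = np.zeros((8, 8), dtype=int)
img[1:4, 1:4] = 1          # a 9-pixel netting fragment
img[6, 6] = 1              # an isolated noise point
print(area_open(img, 5).sum())   # → 9: the noise point is removed
```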
Step 5: perform skeleton extraction on the netting image after interference removal to obtain the netting damage information.
The feature image of the netting is obtained through the above image processing; its lines carry the damage information of the underwater netting image. If the netting is intact, the lines are continuous and without bends; breakpoints and bends appearing in the image indicate damage of different degrees. The black part is background information.
Therefore, skeleton extraction (also called image thinning, i.e. refining a continuous region down to a width of one pixel) is performed on the line-shaped netting, and the length of a linear break can then be estimated by counting the number of skeleton pixels.
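The patent does not name a specific thinning algorithm; the classic Zhang–Suen method is one common way to reduce the netting lines to a one-pixel-wide skeleton whose pixel count can then serve as the break-length estimate. A minimal sketch on a list-of-lists binary image:

```python
# Zhang–Suen thinning: iteratively peel boundary pixels in two sub-steps
# until the region is one pixel wide. Foreground = 1, background = 0.

def zhang_suen(img):
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbours(x, y):
        # P2..P9, clockwise starting from the pixel above (x-1, y)
        return [img[x-1][y], img[x-1][y+1], img[x][y+1], img[x+1][y+1],
                img[x+1][y], img[x+1][y-1], img[x][y-1], img[x-1][y-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for x in range(1, h - 1):
                for y in range(1, w - 1):
                    if img[x][y] != 1:
                        continue
                    n = neighbours(x, y)
                    B = sum(n)                       # non-zero neighbours
                    A = sum(1 for i in range(8)      # 0->1 transitions
                            if n[i] == 0 and n[(i + 1) % 8] == 1)
                    P2, P4, P6, P8 = n[0], n[2], n[4], n[6]
                    if step == 0:
                        cond = P2 * P4 * P6 == 0 and P4 * P6 * P8 == 0
                    else:
                        cond = P2 * P4 * P8 == 0 and P2 * P6 * P8 == 0
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((x, y))
            for x, y in to_delete:
                img[x][y] = 0
            if to_delete:
                changed = True
    return img
```

Applied to a 3-pixel-thick bar, the result is a thin skeleton contained in the original region, with far fewer pixels to count.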
On the ideal netting profile shown in Fig. 11, the actual length of a break can be estimated by calibrating the resolution of the camera used to capture the image.
E(u,v) = \sum_{x,y} \omega(x,y)\,\bigl[I(x+u,\,y+v) - I(x,y)\bigr]^2
In the formula, E (u, v) represents the length autocorrelation function; ω (x, y) represents the measurement window function, I (x, y) represents the original image gray scale, and I (x + u, y + v) represents the image gray scale after the window translation. The idea of the evaluation is to determine the breakage length by determining the starting point and the emphasis of the discontinuity. Specifically, the window gray scale at the end point can be obviously changed when the local measurement window moves along each direction.
The mesh breakage features are thereby acquired, and the breakage position and degree are detected.
Finally, the method was verified in a wave-current water tank using an ROV carrying a camera; its effectiveness was confirmed, and it can effectively reduce the losses caused by netting damage in net cage culture.
The technical conception of the invention is as follows: aiming at detecting the integrity of underwater net cages, the invention uses an underwater robot to collect video images. Preprocessing methods such as color-space conversion and a Retinex algorithm fused with guided filtering correct the color distortion of the images while preserving their edge information, so that the net cage images can be better recognized and computed by a computer. The Canny algorithm locates image edges accurately, so that boundary information is reflected in more detail. The underwater net cage image is segmented with an adaptive threshold segmentation algorithm combined with a mathematical-morphology opening operation, and finally skeleton extraction is performed on the processed image, realizing the detection of the integrity of the underwater netting.
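The HSV brightening step of the preprocessing can be illustrated per pixel with Python's standard colorsys module. The gain of 1.4 is an illustrative value, not taken from the patent.

```python
import colorsys

# Convert an RGB pixel to HSV, raise the V (brightness) channel, convert back.
# Scaling V scales R, G, B proportionally, so hue and saturation are preserved.

def brighten(rgb, gain=1.4):
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    v = min(1.0, v * gain)                       # boost brightness, clip at 1
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r2, g2, b2))

dark = (40, 60, 80)        # a dim bluish underwater pixel
bright = brighten(dark)    # (56, 84, 112): same hue, 1.4x brighter
```

In a real pipeline the same V-channel scaling would be applied to the whole frame (e.g. via OpenCV's cvtColor to HSV) rather than pixel by pixel.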
Example 2
Correspondingly, the invention provides a netting damage detection device for an underwater net cage, which comprises:
the preprocessing module is used for preprocessing an original netting image of a target net cage, the preprocessing comprising color space conversion processing and image enhancement processing;
the edge detection module is used for carrying out edge detection on the preprocessed netting image;
the image segmentation module is used for carrying out image segmentation on the netting image;
the image denoising module is used for performing morphological processing and noise point elimination processing on the segmented image to obtain a netting characteristic image;
and the netting damage judgment module is used for carrying out skeleton extraction on the netting image to obtain netting damage information.
For the implementation of each module in the device of the invention, refer to the specific implementation steps of the method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (8)

1. A method for detecting the damage of a netting of an underwater net cage is characterized by comprising the following steps:
preprocessing an original netting image of a target net cage, wherein the preprocessing comprises color space conversion processing and image enhancement processing;
carrying out edge detection on the preprocessed netting image;
carrying out image segmentation on the netting image;
performing morphological processing and noise point elimination processing on the segmented image to obtain a netting characteristic image;
and carrying out skeleton extraction on the netting image to obtain netting damage information.
2. The method for detecting the damage of the netting of the underwater cage as claimed in claim 1, wherein the converting process of the color space comprises:
the original netting image is converted from RGB color space to HSV color space and the V value is increased to enhance the image brightness.
3. The method for detecting the damage of the netting of the underwater cage as claimed in claim 1, wherein the image enhancement process comprises:
and performing image enhancement processing by adopting a Retinex algorithm of guided filtering.
4. The method for detecting the damage of the netting of the underwater net cage according to claim 3, wherein the image enhancement processing by the Retinex algorithm of the guided filtering comprises the following steps:
estimating illumination components from the original image based on the guide filtering, and removing the illumination components to obtain a reflection image;
the estimate of the illumination component is corrected based on the actual light intensity and re-mapped into the reflected image.
5. The method of claim 4, wherein the estimating the illumination component from the original image based on guided filtering comprises:
the guided filtering can be represented as a local linear model:
q_i = a_k I_{v,i} + b_k, \quad \forall i \in \omega_k
in the formula: q_i is the linearly transformed gray value at pixel i within window ω_k of image I_v; k denotes the center pixel of window ω_k; within ω_k the coefficients a_k and b_k are constants. The local linear coefficients a_k and b_k can be solved as:
a_k = \left(\frac{1}{|\omega|}\sum_{j\in\omega_k} I_{v,j}^2 - u_k^2\right) \Big/ \left(\sigma_k^2 + \delta\right)
bk=(1-ak)uk
in the formula: i isv,jIs represented byvOmega of the imagekPixel values of j points in the window; u. ofkAs an image window omegakMean of the middle pixels; sigmakAs an image window omegakStandard deviation of medium pixels;
Figure FDA0002807238780000023
is window omegakThe number of pixels in (a); delta is a regularization parameter;
in order to obtain a stable q_i, it is averaged; applying the linear model to the whole image yields the guided filtering function:
f_j(I_v(x,y)) = \bar{a}_j I_v(x,y) + \bar{b}_j, \quad \bar{a}_j = \frac{1}{|\omega|}\sum_{k:\,j\in\omega_k} a_k, \quad \bar{b}_j = \frac{1}{|\omega|}\sum_{k:\,j\in\omega_k} b_k
wherein f_j(I_v(x, y)) denotes the processing of pixel j at each (x, y) coordinate of image I_v by the guided filter function f.
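As a sketch only (not part of the claim), the self-guided case of these formulas — guide equal to input, so a_k reduces to σ_k²/(σ_k² + δ) and b_k to (1 − a_k)u_k — can be written in one dimension. The window radius r and δ are illustrative parameters.

```python
# 1-D self-guided filter: per-window linear coefficients, then averaging of
# a_k and b_k over all windows containing each pixel, as in the claim.

def guided_filter_1d(I, r=2, delta=0.01):
    n = len(I)
    a, b = [0.0] * n, [0.0] * n
    for k in range(n):                        # coefficients per window omega_k
        lo, hi = max(0, k - r), min(n, k + r + 1)
        win = I[lo:hi]
        u = sum(win) / len(win)               # window mean u_k
        var = sum((x - u) ** 2 for x in win) / len(win)   # sigma_k^2
        a[k] = var / (var + delta)
        b[k] = (1 - a[k]) * u
    out = []
    for i in range(n):                        # average over windows covering i
        lo, hi = max(0, i - r), min(n, i + r + 1)
        abar = sum(a[lo:hi]) / (hi - lo)
        bbar = sum(b[lo:hi]) / (hi - lo)
        out.append(abar * I[i] + bbar)
    return out
```

On a constant signal the variance is zero, so a_k = 0 and b_k = u_k, and the filter returns the input unchanged; near strong edges the large local variance pushes a_k toward 1, which is what makes guided filtering edge-preserving.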
6. The method for detecting the damage of the netting of the underwater cage as claimed in claim 1, wherein the edge detection of the preprocessed netting image comprises:
and (5) performing edge extraction on the preprocessed netting image by using a Canny edge detection operator.
7. The method for detecting the mesh damage of the underwater cage as claimed in claim 1, wherein the image segmentation of the mesh image comprises:
and (4) carrying out image segmentation on the netting image by adopting an adaptive maximum between-class variance threshold segmentation algorithm.
8. A netting damage detection device for an underwater net cage, characterized by comprising:
the preprocessing module is used for preprocessing an original netting image of a target net cage, the preprocessing comprising color space conversion processing and image enhancement processing;
the edge detection module is used for carrying out edge detection on the preprocessed netting image;
the image segmentation module is used for carrying out image segmentation on the netting image;
the image denoising module is used for performing morphological processing and noise point elimination processing on the segmented image to obtain a netting characteristic image;
and the netting damage judgment module is used for carrying out skeleton extraction on the netting image to obtain netting damage information.
CN202011372756.0A 2020-11-30 2020-11-30 Method and device for detecting damage of netting of underwater aquaculture net cage Pending CN112529853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011372756.0A CN112529853A (en) 2020-11-30 2020-11-30 Method and device for detecting damage of netting of underwater aquaculture net cage


Publications (1)

Publication Number Publication Date
CN112529853A true CN112529853A (en) 2021-03-19

Family

ID=74995093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011372756.0A Pending CN112529853A (en) 2020-11-30 2020-11-30 Method and device for detecting damage of netting of underwater aquaculture net cage

Country Status (1)

Country Link
CN (1) CN112529853A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416764A (en) * 2018-01-25 2018-08-17 北京农业信息技术研究中心 Etting damage detection device in a kind of cultivation of underwater net cage and method
CN110163798A (en) * 2019-04-18 2019-08-23 中国农业大学 Fishing ground purse seine damage testing method and system
CN111047583A (en) * 2019-12-23 2020-04-21 大连理工大学 Underwater netting system damage detection method based on machine vision
CN111882555A (en) * 2020-08-07 2020-11-03 中国农业大学 Net detection method, device, equipment and storage medium based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN WENJING: "Research on image detection methods for underwater dam cracks", China Masters' Theses Full-text Database (Agricultural Science and Technology Volume), pages 3 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114852289A (en) * 2022-04-06 2022-08-05 五邑大学 Method, device and system for inspecting net cage of deep sea fishing ground and storage medium
CN114852289B (en) * 2022-04-06 2024-03-26 五邑大学 Deep sea fishing ground net cage inspection method, device and system and storage medium
CN114840889A (en) * 2022-07-04 2022-08-02 海南浙江大学研究院 System for detecting vulnerable part of net cage under stress
CN114840889B (en) * 2022-07-04 2022-10-28 海南浙江大学研究院 System for detecting stress-vulnerable part of net cage netting
CN116228757A (en) * 2023-05-08 2023-06-06 山东省海洋科学研究院(青岛国家海洋科学研究中心) Deep sea cage and netting detection method based on image processing algorithm
CN116228757B (en) * 2023-05-08 2023-08-29 山东省海洋科学研究院(青岛国家海洋科学研究中心) Deep sea cage and netting detection method based on image processing algorithm

Similar Documents

Publication Publication Date Title
CN109596634B (en) Cable defect detection method and device, storage medium and processor
CN108629343B (en) License plate positioning method and system based on edge detection and improved Harris corner detection
CN112529853A (en) Method and device for detecting damage of netting of underwater aquaculture net cage
CN110163219B (en) Target detection method based on image edge recognition
CN111260616A (en) Insulator crack detection method based on Canny operator two-dimensional threshold segmentation optimization
CN110415208B (en) Self-adaptive target detection method and device, equipment and storage medium thereof
CN109472788B (en) Method for detecting flaw on surface of airplane rivet
CN109118466B (en) Processing method for fusing infrared image and visible light image
CN113034399A (en) Binocular vision based autonomous underwater robot recovery and guide pseudo light source removing method
CN113592782B (en) Method and system for extracting X-ray image defects of composite material carbon fiber core rod
CN112614062A (en) Bacterial colony counting method and device and computer storage medium
CN109781737B (en) Detection method and detection system for surface defects of hose
CN109850518B (en) Real-time mining adhesive tape early warning tearing detection method based on infrared image
CN102609903B (en) A kind of method of the movable contour model Iamge Segmentation based on marginal flow
CN115272362A (en) Method and device for segmenting effective area of digital pathology full-field image
Wang et al. An efficient method for image dehazing
CN110348442B (en) Shipborne radar image offshore oil film identification method based on support vector machine
CN117541582B (en) IGBT insulation quality detection method for high-frequency converter
CN116843581B (en) Image enhancement method, system, device and storage medium for multi-scene graph
CN117830134A (en) Infrared image enhancement method and system based on mixed filtering decomposition and image fusion
CN110298816B (en) Bridge crack detection method based on image regeneration
Yu et al. MSER based shadow detection in high resolution remote sensing image
CN116739943A (en) Image smoothing method and target contour extraction method
CN114862889A (en) Road edge extraction method and device based on remote sensing image
CN111127450B (en) Bridge crack detection method and system based on image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination