CN115880573A - Method, device and equipment for obtaining seaweed area based on neural network


Info

Publication number
CN115880573A
CN115880573A
Authority
CN
China
Prior art keywords
seaweed
segmentation
image
processing
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310182014.9A
Other languages
Chinese (zh)
Inventor
李少文
刘兆伟
苏海霞
陈建强
李凡
王占宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yantai University
Shandong Marine Resource and Environment Research Institute
Original Assignee
Yantai University
Shandong Marine Resource and Environment Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yantai University, Shandong Marine Resource and Environment Research Institute filed Critical Yantai University
Priority to CN202310182014.9A
Publication of CN115880573A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method, a device and equipment for obtaining the area of seaweed based on a neural network relate to the technical field of marine image processing. The method comprises the following operations. Step one: obtain a seaweed image data set, and perform initial segmentation on it using a fusion network of a short-term dense connection network and a bidirectional segmentation network to obtain an initial seaweed bed segmentation image. Step two: subject the initial seaweed bed segmentation image in sequence to segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing to obtain a refined seaweed bed segmentation image. Step three: perform detail guidance processing on the refined seaweed bed segmentation image to obtain a fine seaweed bed segmentation image. Step four: obtain the area of the seaweed based on the fine seaweed bed segmentation image. The method yields the seaweed area with high accuracy and, compared with manual on-site surveying and mapping calculation, saves time and labor, so it can be widely popularized and used.

Description

Method, device and equipment for obtaining seaweed area based on neural network
Technical Field
The invention relates to the technical field of marine image processing, and in particular to a method, a device and equipment for obtaining the area of seaweed based on a neural network.
Background
Image processing is widely applied to data processing and extraction in various industries to acquire rich information. Semantic segmentation is one means of image processing: a computer segments an image according to its semantics (the content itself) so as to assign pixel-level labels within the image. The rise of neural networks has greatly advanced the performance of semantic segmentation, which has developed rapidly in many application fields.
The ocean has been an important field of research in recent years, and seaweed is an important research subject, not only for the marine ecosystem but also for the atmospheric environment as a carbon sink. In the past, the area of a seaweed bed was determined by field survey, from which the carbon sink amount of the seaweed bed was further derived.
Ocean image acquisition is a mature technology, and ocean image processing can greatly facilitate research such as marine resource investigation and marine environment monitoring.
Disclosure of Invention
The invention aims to provide a method, a device and equipment for obtaining the area of seaweed based on a neural network.
The technical scheme of the invention is as follows:
the invention provides a method for acquiring a seaweed area based on a neural network, which comprises the following operations:
the method comprises the following steps: obtaining a seaweed image data set, and initially segmenting the seaweed image data set by using a fusion network of a short-term dense connection network and a bidirectional segmentation network to obtain an initial seaweed bed segmentation image;
step two: the initial seaweed bed segmentation image is sequentially subjected to segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing to obtain a refined seaweed bed segmentation image;
step three: carrying out detail guidance processing on the refined seaweed bed segmentation image to obtain a fine seaweed bed segmentation image;
step four: and obtaining the area of the seaweed based on the fine seaweed bed segmentation image.
In the above method, the seaweed area S can be obtained by the following calculation formula:
$$S = \frac{b-a}{n}\sum_{i=1}^{n} f(x_i)$$

where S is the area of the seaweed, $f(x_i)$ is the basic value corresponding to the mapped coordinate point, $x_i$ is the mapped coordinate point, a is the minimum abscissa over the pixel points, b is the maximum abscissa over the pixel points, n is the total number of sampled points, and i indexes the coordinate points.
In the second step of the method, the segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing applied in sequence to the initial seaweed bed segmentation image are realized by a segmentation head function, a segmentation loss function, a detail extraction head function and a detail loss function, respectively.
In the method, in the third step, the detail guidance processing specifically generates binary details from the refined seaweed bed segmentation image through semantic segmentation and converts the binary details into an image with edge and corner information, thereby obtaining the fine seaweed bed segmentation image.
In the method, a post-sampling processing module is arranged in the fusion network of the short-term dense connection network and the bidirectional segmentation network, and the sampling ratio of the post-sampling processing module is 8.
The invention provides a device for obtaining the area of seaweed based on a neural network, which comprises:
a fusion network module: carrying out initial segmentation on the seaweed image data set to obtain an initial seaweed bed segmentation image;
a re-refinement segmentation module: sequentially carrying out segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing on the initial seaweed bed segmentation image to obtain a refined seaweed bed segmentation image;
a detail guidance module: carrying out detail guidance processing on the refined seaweed bed segmentation image to obtain a fine seaweed bed segmentation image;
a seaweed area generation module: obtaining the area of the seaweed based on the fine seaweed bed segmentation image.
In the above apparatus, the fusion network module is constructed by:
connecting the UNET + + network with the short-term dense connection module to obtain a short-term dense connection network, training it to obtain a seaweed identification model, and fusing the seaweed identification model with the bidirectional segmentation network to obtain the fusion network module.
The apparatus as described above, wherein the re-refinement segmentation module comprises:
a segmentation head module: for processing the initial seaweed bed segmentation image to obtain a first re-segmented image;
a segmentation loss module: for processing the first re-segmented image to obtain a second re-segmented image;
a detail extraction head module: for processing the second re-segmented image to obtain a first re-refined image;
a detail extraction loss module: for processing the first re-refined image to obtain the refined seaweed bed segmentation image.
The invention provides equipment for obtaining the seaweed area based on image processing, which comprises a processor and a memory, wherein the processor executes a computer program stored in the memory to implement the method for obtaining the seaweed area based on a neural network.
The present invention provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements a method for obtaining seaweed area based on a neural network as described above.
The invention has the beneficial effects that:
the method for obtaining the seaweed area based on the neural network obtains the fusion network with high processing speed and low segmentation performance loss by using the short-term dense connection network and the bidirectional segmentation network, processes a seaweed image data set through the fusion network, then processes the seaweed image data set through the segmentation head, the segmentation loss, the detail extraction head and the detail extraction loss, and finally processes the seaweed image data set through detail guiding, so that more characteristic detail information in the seaweed image data set can be captured, the seaweed bed segmentation image with rich details can be obtained, the high-precision seaweed segmentation image can be obtained, the seaweed area can be obtained based on the high-precision seaweed segmentation image, and the accuracy of a seaweed area result can be improved;
according to the method, the high-accuracy seaweed area is obtained from the seaweed segmentation image and the seaweed image data set is computed in software; compared with manual field surveying and mapping, this saves time and labor while the obtained seaweed area has high accuracy, so the method can be widely popularized and used.
Drawings
The aspects and advantages of the present application will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
In the drawings:
FIG. 1 is a schematic flow chart of the method in the example;
fig. 2 is a schematic diagram of the short-term dense connection module in the embodiment, wherein a is a schematic diagram of the overall structure of the short-term dense connection module, b is a schematic diagram of a general STDC module, and c is a schematic diagram of an STDC module with a stride of 3;
FIG. 3 is a schematic structural diagram of the seaweed segmentation model in the embodiment, wherein a is a schematic structural diagram of the seaweed segmentation network, b is a schematic structural diagram of the re-segmentation refinement module, and c is a schematic structural diagram of the detail guidance module;
FIG. 4 is a schematic structural diagram of the device in the embodiment;
FIG. 5 is a schematic structural diagram of the equipment in the embodiment.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings.
The embodiment provides a method for obtaining a seaweed area based on a neural network, and with reference to fig. 1, the method includes the following operations:
the method comprises the following steps: obtaining a seaweed image data set, and initially segmenting the seaweed image data set by using a fusion network of a short-term dense connection network and a bidirectional segmentation network to obtain an initial seaweed bed segmentation image;
step two: the initial seaweed bed segmentation image is sequentially subjected to segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing to obtain a refined seaweed bed segmentation image;
step three: carrying out detail guidance processing on the refined seaweed bed segmentation image to obtain a fine seaweed bed segmentation image;
step four: and obtaining the area of the seaweed based on the fine seaweed bed segmentation image.
The method comprises the following specific steps:
1. Obtaining the fusion network
In this embodiment, the UNET + + network is connected to the short-term dense connection module to obtain a short-term dense connection network with a faster image information processing speed and a small segmentation performance loss; this network is trained to obtain a seaweed identification model, and the seaweed identification model is then fused with a bidirectional segmentation network (BiSeNet) to obtain the fusion network.
1.1 UNET + + network
The UNET + + network is an upgrade of the UNET network. The UNET network has the problem that its optimal depth is unknown, and its processing efficiency is low. The UNET + + network optimizes this as follows: it searches for the optimal depth by supervised learning over an efficient ensemble of UNETs of different depths (the UNETs share one encoder); it redesigns the skip connections so that the decoder sub-networks can aggregate features of different scales, making the skip connections more flexible; and it uses pruning to improve the inference speed of the network. In brief, the UNET + + network integrates long and short connections, can capture features of different levels and integrate them by feature superposition, and adds shallower U-Net structures so that the difference in feature map size during fusion is smaller.
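The nested dense-skip pattern just described can be made concrete. The following is a minimal sketch, assuming PyTorch, of a depth-2 UNET + +-style network with a shared encoder and redesigned skip connections; the channel widths, block layout and single output head are illustrative assumptions rather than the patent's exact configuration.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 Conv-BN-ReLU layers, the usual U-Net building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class UNetPlusPlusDepth2(nn.Module):
    """Nodes X^{i,j}: node (i, j) aggregates all same-level predecessors
    X^{i,0..j-1} plus the upsampled node X^{i+1,j-1} from one level below."""
    def __init__(self, in_ch=3, n_classes=2, c=(32, 64, 128)):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.x00 = ConvBlock(in_ch, c[0])          # shared encoder column
        self.x10 = ConvBlock(c[0], c[1])
        self.x20 = ConvBlock(c[1], c[2])
        self.x01 = ConvBlock(c[0] + c[1], c[0])    # nested decoder nodes
        self.x11 = ConvBlock(c[1] + c[2], c[1])
        self.x02 = ConvBlock(c[0] * 2 + c[1], c[0])
        self.head = nn.Conv2d(c[0], n_classes, 1)

    def forward(self, x):
        x00 = self.x00(x)
        x10 = self.x10(self.pool(x00))
        x20 = self.x20(self.pool(x10))
        x01 = self.x01(torch.cat([x00, self.up(x10)], dim=1))
        x11 = self.x11(torch.cat([x10, self.up(x20)], dim=1))
        x02 = self.x02(torch.cat([x00, x01, self.up(x11)], dim=1))
        return self.head(x02)
```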
1.2 Short-term dense connection module
Referring to fig. 2, the short-term dense connection module can connect the image features of several consecutive corresponding layers, thereby realizing a multi-scale feature representation of the image, which helps improve the information processing speed and effectively reduces the segmentation performance loss.
Diagram a in fig. 2 shows the overall structure of the short-term dense connection module, diagram b shows the general short-term dense cascade module (STDC) within that structure, and diagram c shows an STDC module with a stride of 3, i.e. the STDC module used in this embodiment.
M is the number of input feature map channels, and N is the number of output feature map channels. The ConvX operation, also referred to as Conv-BN-ReLU, comprises three operations: a convolution, a batch normalization (BN), and a ReLU activation. Each block currently being processed is a ConvX with a different kernel size, so the output of the i-th block is defined as:

$$x_i = \mathrm{ConvX}_i(x_{i-1}, k_i)$$

where $x_{i-1}$ and $x_i$ are the input and output of the i-th block, respectively, and $k_i$ is the kernel size of its convolutional layer. The BN operation is a deep neural network training operation that normalizes each batch of data: for the data $\{x_1, x_2, \ldots, x_n\}$ of a training batch, which may be the input or output of any layer in the module, BN can be applied at any layer. BN accelerates the convergence of the processing and alleviates, to a certain extent, the problem of gradient dispersion (scattered feature distributions), making the processing easier and more stable.
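As a concrete reading of the ConvX definition above, the following is a minimal sketch assuming PyTorch; kernel size and stride are the only knobs the STDC blocks vary.

```python
import torch.nn as nn

class ConvX(nn.Module):
    """ConvX = convolution + batch normalization (BN) + ReLU activation."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))
```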
Referring to diagram a in fig. 2, the overall structure of the short-term dense connection module comprises 6 stages (hereinafter Stage): Stage 1 and Stage 2 perform surface-level feature extraction, Stage 3, Stage 4 and Stage 5 perform the feature map downsampling work, and Stage 6 is the output.
Regarding the feature map, it should be explained that, since the data in each convolutional layer exists in three dimensions, it can be viewed as a stack of two-dimensional pictures, each of which is called a feature map. In the input layer there is only one feature map for a grayscale image and 3 feature maps (red, green and blue) for a color image. Between convolutional layers there are several convolution kernels; convolving the feature maps of the previous layer with each convolution kernel generates a feature map of the next layer.
Referring to diagram c in fig. 2, in the STDC module the convolution kernel of the first block has size 1 × 1 and the convolution kernels of the remaining blocks have size 3 × 3; with this arrangement the feature map processing effect is best, the efficiency is highest, and the extracted data is more accurate.
Meanwhile, the final number of output channels of the STDC module is N. Except for the last block, the number of output channels of the i-th block is $N/2^{i}$; the number of output feature channels of the last block is kept consistent with that of the previous (second-to-last) block, so that rich feature detail information of the image can be extracted for the subsequent segmentation processing.
To integrate image features more efficiently, so that images can be outlined and marked with high quality, and on the premise of computational efficiency, the fusion computations of all the blocks are connected in series. The parameter quantity of the short-term dense connection module is therefore derived as a function $S_{param}(M, N)$ used in the feature capture process of an image, where $S_{param}$ is the parameter quantity, M is the number of input channels, and N is the number of output channels.

The final output of the short-term dense connection module is

$$x_{output} = F(x_1, x_2, \ldots, x_n)$$

where $x_{output}$ is the output of the short-term dense connection module, F is the fusion computation function, and $x_1, \ldots, x_n$ are the feature maps of the n blocks. The deep layers of the STDC module have few feature channels and the shallow layers have many; the shallow layers therefore need more channels to encode feature detail information, which yields richer feature information. To obtain this richer feature information, this embodiment concatenates the feature maps $x_1$ through $x_n$ via skip paths as the output of the short-term dense connection module.
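The block-wise channel scheme and concatenation fusion described above can be sketched as follows, again assuming PyTorch and reusing the ConvX class from the previous sketch; the number of blocks and the omission of stride handling are simplifying assumptions.

```python
import torch
import torch.nn as nn

class STDCBlock(nn.Module):
    """Block i outputs N / 2**i channels; the last block repeats the previous
    width; all block outputs are fused by concatenation back to N channels."""
    def __init__(self, in_ch, out_ch, num_blocks=4):
        super().__init__()
        widths = [out_ch // (2 ** i) for i in range(1, num_blocks)]
        widths.append(widths[-1])  # last block matches the second-to-last
        assert sum(widths) == out_ch, "widths must concatenate back to out_ch"
        blocks = [ConvX(in_ch, widths[0], kernel_size=1)]  # first block is 1x1
        for i in range(1, num_blocks):
            blocks.append(ConvX(widths[i - 1], widths[i], kernel_size=3))
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        outs = []
        for block in self.blocks:
            x = block(x)
            outs.append(x)
        return torch.cat(outs, dim=1)  # concatenation fusion of all blocks
```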
1.3 Obtaining a short-term dense connection network
Coupling the short-term dense connection module with the UNet + + network yields the short-term dense connection network, which greatly improves performance on the image recognition and area calculation tasks.
Specifically, a short-term dense connection module is connected between the up- and down-sampling paths of the UNet + + network: after the upsampling of the UNet + + network finishes, the module is seamlessly connected in at the positions where the downsampled feature maps are fused. The UNET + + network and the short-term dense connection module influence and constrain each other.
1.4 Obtaining the seaweed identification model
A seaweed image data set of the target research area is obtained by means such as remote sensing satellites; the short-term dense connection network is trained with this data set to obtain training parameters, and the training parameters are imported into the short-term dense connection network to obtain the seaweed identification model.
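A brief training sketch for this step, assuming PyTorch, is given below; the optimizer, loss, batch size, epoch count and file name are illustrative assumptions rather than values taken from the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_identification_model(model, dataset, epochs=50, lr=1e-3, device="cuda"):
    """Train the short-term dense connection network on the seaweed data set."""
    model = model.to(device)
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # pixel-wise labels: seaweed / background
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks.long())
            loss.backward()
            optimizer.step()
    # "training parameters ... imported into the network": persist the weights
    torch.save(model.state_dict(), "seaweed_identification_model.pt")
    return model
```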
1.5 Fusing the seaweed identification model with the bidirectional segmentation network
The seaweed identification model is fused with a bidirectional segmentation network (BiSeNet) to obtain a fusion network with a good segmentation effect, namely the seaweed segmentation network.
Specifically, the seaweed identification model and the bidirectional segmentation network are fused, with the BiSeNet network serving as the underlying framework of the seaweed identification model, to form the seaweed segmentation network shown in diagram a of fig. 3. During segmentation the seaweed identification model acts as an encoder that segments the seaweed image data set, while the BiSeNet network encodes the context information in the data set. The BiSeNet network comprises a spatial path and a context path, decoupling spatial information preservation and receptive field provision into two paths, and it is equipped with a Feature Fusion Module (FFM) and an Attention Refinement Module (ARM), which further improve the segmentation precision.
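Under the BiSeNet-style design just described, the FFM can be sketched as follows, assuming PyTorch and reusing the earlier ConvX class: the two paths are concatenated, projected, and reweighted by a channel attention vector obtained from global average pooling. The channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class FeatureFusionModule(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.project = ConvX(in_ch, out_ch, kernel_size=1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),               # global average pooling
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, spatial_feat, context_feat):
        fused = self.project(torch.cat([spatial_feat, context_feat], dim=1))
        weight = self.attn(fused)                  # channel attention vector
        return fused + fused * weight              # attention-reweighted fusion
```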
2. Obtaining an initial seaweed segmentation image
In order to acquire a seaweed bed image with mark information (different colors) for the subsequent seaweed area calculation, the seaweed image data set is input into the seaweed segmentation network. The short-term dense connection module in the segmentation model starts the segmentation processing of the input seaweed image data set: Stage 1 and Stage 2 extract surface-level features, and then Stage 3, Stage 4 and Stage 5 perform downsampling operations with downsampling rates of 1/8, 1/16 and 1/32, respectively.
In the present embodiment, in order to reduce the amount of computation and improve the segmentation efficiency, 1 convolutional layer is used in each of Stage 1 and Stage 2, and 2 convolutional blocks are used in each of Stage 3, Stage 4 and Stage 5.
The downsampled information processed by Stage 3 and part of the downsampled information processed by Stage 5 enter the FFM module directly for further feature information extraction; the downsampled information processed by Stage 4 and the remaining downsampled information from Stage 5 first enter the ARM module for attention refinement and then enter the FFM module for full fusion.
The ARM module adopts global average pooling (GAP) to capture the global context and computes an attention vector to guide feature learning. This design refines the output features of each stage in the context path without any upsampling operation, so the global context information can be integrated easily and richer information is obtained while the refinement processing is improved.
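A minimal sketch of such an ARM, assuming PyTorch and following the BiSeNet-style design just described (global average pooling producing a channel attention vector, with no upsampling), is given below.

```python
import torch.nn as nn

class AttentionRefinementModule(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # GAP captures the global context
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),              # attention vector in (0, 1)
        )

    def forward(self, x):
        return x * self.attn(x)        # guide feature learning by reweighting
```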
The information fused by the FFM module is output and, after the post-sampling processing of the post-sampling module, the initial seaweed bed segmentation image is obtained.
In this embodiment, to obtain an initial seaweed bed segmentation image with a better segmentation effect and richer details, the sampling ratio of the post-sampling processing is set to 8.
3. Obtaining refined seaweed segmented images
In order to capture more spatial feature detail information in the seaweed bed data set, the initial seaweed bed segmentation image needs to undergo re-segmentation refinement processing so as to obtain the refined seaweed bed segmentation image.
In this embodiment, to achieve this, referring to diagram b in fig. 3, a re-segmentation refinement module is added on the basis of the seaweed segmentation network; specifically, the re-segmentation refinement module is connected after the seaweed segmentation network and processes the initial seaweed bed segmentation image, yielding data richer in information than the initial seaweed bed segmentation image, namely the refined seaweed bed segmentation data.
The re-segmentation refinement processing comprises re-segmentation processing and re-refinement processing: the re-segmentation processing consists of segmentation head processing and segmentation loss processing, and the re-refinement processing consists of detail extraction head processing and detail extraction loss processing.
The initial seaweed bed segmentation image is subjected in sequence to segmentation head processing and segmentation loss processing, and then in sequence to detail extraction head processing and detail extraction loss processing, so as to capture more spatial feature detail information and obtain data with richer details, namely the refined seaweed bed segmentation image.
The segmentation head processing can be realized by a segmentation head function (Seg Head function) $\mathrm{SegHead}(u, n, i)$, where u is a seaweed bed segmentation block, n is the number of segmentation blocks, and i is the segmentation convolutional layer.

The segmentation loss processing can be realized by a segmentation loss function (Seg Loss function) $\mathrm{SegLoss}(L, l, t, i)$, where L is the segmentation head cross-product operator, l is the number of stages of the STDC module, t is the spatial feature extraction operator, and i is the seaweed bed segmentation pre-processing layer.
The detail extraction head processing can be realized by a detail extraction head function (Detail Head function) of

$$L_{dice}(p_d, g_d) = 1 - \frac{2\sum_{m=1}^{H \times W} p_d^{m} g_d^{m} + \epsilon}{\sum_{m=1}^{H \times W} (p_d^{m})^{2} + \sum_{m=1}^{H \times W} (g_d^{m})^{2} + \epsilon}$$

where $p_d \in \mathbb{R}^{H \times W}$ is the spatial detail feature map and $g_d \in \mathbb{R}^{H \times W}$ the corresponding detail feature extraction target ($\mathbb{R}$ is the real number set, H is the height, W is the width, and m is the pixel index), and $\epsilon$ is the Laplace smoothing term. In this embodiment $\epsilon$ is set so as to improve the refinement efficiency and extract a more detailed seaweed bed image.

The detail extraction loss processing can be realized by a detail loss function (Detail Loss function) of

$$L_{detail}(p_d, g_d) = L_{bce}(p_d, g_d) + L_{dice}(p_d, g_d)$$

where $L_{bce}$ is the binary cross-entropy term and $L_{dice}$ is the dice term defined above.
In order to eliminate unnecessary information that may exist in the refined seaweed bed segmentation image, obtain useful information, and improve the subsequent processing speed, in this embodiment both the segmentation head processing and the detail extraction head processing adopt a 3 × 3 convolutional layer, batch normalization, ReLU activation and a 1 × 1 convolutional layer.
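A minimal sketch of this head structure, assuming PyTorch, is given below, together with a detail loss under the dice plus binary cross-entropy reading adopted above; the channel widths and the sigmoid placement are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Head(nn.Module):
    """Shared structure of the segmentation head and the detail extraction head:
    3x3 conv + BN + ReLU followed by a 1x1 conv."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1),  # 1x1 conv discards superfluous info
        )
    def forward(self, x):
        return self.body(x)

def detail_loss(pred, gt, eps=1.0):
    """Detail loss = dice term + binary cross-entropy term (Laplace smoothing eps)."""
    gt = gt.float()
    p = torch.sigmoid(pred).flatten(1)
    g = gt.flatten(1)
    dice = 1 - (2 * (p * g).sum(1) + eps) / ((p * p).sum(1) + (g * g).sum(1) + eps)
    bce = F.binary_cross_entropy_with_logits(pred, gt, reduction="none").flatten(1).mean(1)
    return (dice + bce).mean()
```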
4. Obtaining fine seaweed segmentation images
In order to further enrich the information in the refined seaweed bed segmentation image so that it clearly reflects the characteristics of the seaweed, in this embodiment a detail guidance module is added on the basis of the re-segmentation refinement module (see diagram c in fig. 3). Processing the refined seaweed segmentation image with the detail guidance module yields a seaweed image with richer information than the refined seaweed bed segmentation data, namely the fine seaweed bed segmentation image.
Specifically, the detail guidance module is connected after the re-segmentation refinement module, so that the detail guidance processing of the refined seaweed bed segmentation image can be realized. During detail guidance processing, binary details are generated from the refined seaweed bed segmentation image by semantic segmentation, and the binary details are then converted into an image with edge and corner information, giving the fine seaweed bed segmentation image.
Specifically, the detail guidance processing semantically segments the ground truth in the refined seaweed bed segmentation image through its internal Detail Aggregation module to generate binary details. This operation can be realized by convolution with a Laplacian two-dimensional convolution kernel (surrounding coefficients 1, center coefficient -8) and a trainable 1 × 1 convolution; finally, a threshold of 0.1 converts the binary details into an image with edge and corner information, which is output as the fine seaweed bed segmentation image.
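The detail aggregation step can be sketched as follows, assuming PyTorch; the fixed Laplacian kernel (surrounding coefficients 1, center -8), the trainable 1 × 1 convolution and the 0.1 threshold follow the description above, while the single-scale simplification and the sigmoid are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[1., 1., 1.],
                          [1., -8., 1.],
                          [1., 1., 1.]]).view(1, 1, 3, 3)

class DetailAggregation(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("kernel", LAPLACIAN)   # fixed, not trained
        self.fuse = nn.Conv2d(1, 1, kernel_size=1)  # trainable 1x1 fusion

    def forward(self, gt_mask, threshold=0.1):
        # gt_mask: (B, 1, H, W) binary ground-truth segmentation
        edges = F.conv2d(gt_mask.float(), self.kernel, padding=1)
        detail = torch.sigmoid(self.fuse(edges.abs()))
        return (detail > threshold).float()  # binary detail map (edges/corners)
```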
5. Obtaining the area of the seaweed
In this embodiment, the area of the fine seaweed bed segmentation image is calculated with the Monte Carlo algorithm in Matlab, thereby obtaining the seaweed area.
A large number of pixel points are randomly selected in a specific region of the image, the frequency with which these points fall in the Monte Carlo function region is counted, and the seaweed area is quickly calculated from this frequency.
The formula for calculating the seaweed area S is as follows:
$$S = \frac{b-a}{n}\sum_{i=1}^{n} f(x_i)$$

where f is the Monte Carlo mapping whose value $f(x_i)$ is the basic value corresponding to the mapped coordinate point, $x_i$ is the mapped coordinate point, a is the minimum abscissa over the pixel points, b is the maximum abscissa over the pixel points, n is the total number of sampled points, and i indexes the coordinate points.
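The embodiment computes this in Matlab; the following is a hypothetical Python sketch of the same Monte Carlo idea, in which the mask argument, sample count and pixel-to-area scale factor are assumptions.

```python
import numpy as np

def seaweed_area(mask, n_samples=100_000, area_per_pixel=1.0, seed=None):
    """mask: 2D boolean array, True where the fine segmentation marks seaweed."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    ys = rng.integers(0, h, n_samples)          # random pixel points
    xs = rng.integers(0, w, n_samples)
    hit_frequency = mask[ys, xs].mean()         # fraction falling on seaweed
    return hit_frequency * h * w * area_per_pixel  # Monte Carlo area estimate

# Example: a toy 4x4 mask with 8 seaweed pixels -> estimate near 8.0
toy = np.zeros((4, 4), dtype=bool)
toy[:2, :] = True
print(seaweed_area(toy, n_samples=50_000))
```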
The embodiment provides a device for obtaining the seaweed area based on a neural network, referring to fig. 4, comprising:
a fusion network module: carrying out initial segmentation on the seaweed image data set to obtain an initial seaweed bed segmentation image;
a re-refinement segmentation module: sequentially carrying out segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing on the initial seaweed bed segmentation image to obtain a refined seaweed bed segmentation image;
a detail guidance module: carrying out detail guidance processing on the refined seaweed bed segmentation image to obtain a fine seaweed bed segmentation image;
a seaweed area generation module: obtaining the area of the seaweed based on the fine seaweed bed segmentation image.
The fusion network module is constructed in the following way: the UNET + + network is connected with the short-term dense connection module to obtain a short-term dense connection network, which is trained to obtain a seaweed identification model; the seaweed identification model is then fused with the bidirectional segmentation network to obtain the fusion network module.
In addition, the re-refinement segmentation module includes:
a segmentation head module: for processing the initial seaweed bed segmentation image to obtain a first re-segmented image;
a segmentation loss module: for processing the first re-segmented image to obtain a second re-segmented image;
a detail extraction head module: for processing the second re-segmented image to obtain a first re-refined image;
a detail extraction loss module: for processing the first re-refined image to obtain the refined seaweed bed segmentation image.
The embodiment provides equipment for obtaining the seaweed area based on image processing, shown in fig. 5, comprising a processor and a memory, wherein the processor executes a computer program stored in the memory to implement the above method for obtaining the seaweed area based on a neural network.
The present invention provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements a method for obtaining seaweed area based on a neural network as described above.
According to the method for obtaining the seaweed area based on the neural network, the short-term dense connection network and the bidirectional segmentation network are used to obtain a fusion network with high processing speed and low segmentation performance loss. The seaweed image data set is processed by the fusion network, then by segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing, and finally by detail guidance processing, so that more feature detail information in the seaweed image data set is captured, a seaweed bed segmentation image rich in details is obtained, and a high-precision seaweed segmentation image is produced; obtaining the seaweed area from this high-precision segmentation image improves the accuracy of the seaweed area result.
The method obtains the high-accuracy seaweed area from the seaweed segmentation image and computes over the seaweed image data set in software; compared with manual on-site surveying and mapping, this saves time and labor while the obtained seaweed area has high accuracy, so the method can be widely popularized and used.

Claims (10)

1. A method for obtaining seaweed area based on a neural network is characterized by comprising the following operations:
the method comprises the following steps: obtaining a seaweed image data set, and initially segmenting the seaweed image data set by using a fusion network of a short-term dense connection network and a bidirectional segmentation network to obtain an initial seaweed bed segmentation image;
step two: the initial seaweed bed segmentation image is sequentially subjected to segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing to obtain a refined seaweed bed segmentation image;
step three: carrying out detail guidance processing on the refined seaweed bed segmentation image to obtain a fine seaweed bed segmentation image;
step four: and obtaining the area of the seaweed based on the fine seaweed bed segmentation image.
2. The method of claim 1, wherein the seaweed area S is obtained by the following calculation:
$$S = \frac{b-a}{n}\sum_{i=1}^{n} f(x_i)$$

where S is the area of the seaweed, $f(x_i)$ is the basic value corresponding to the mapped coordinate point, $x_i$ is the mapped coordinate point, a is the minimum abscissa over the pixel points, b is the maximum abscissa over the pixel points, n is the total number of sampled points, and i indexes the coordinate points.
3. The method according to claim 1, wherein in the second step, the segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing applied in sequence to the initial seaweed bed segmentation image are realized by a segmentation head function, a segmentation loss function, a detail extraction head function and a detail loss function, respectively.
4. The method according to claim 1, wherein in step three, the detail guidance processing specifically generates binary details from the refined seaweed bed segmentation image through semantic segmentation and converts the binary details into an image with edge and corner information, i.e. obtains the fine seaweed bed segmentation image.
5. The method according to claim 1, wherein a post-sampling processing module is provided in the fusion network of the short-term dense connection network and the bidirectional segmentation network, and the sampling ratio of the post-sampling processing module is 8.
6. An apparatus for obtaining seaweed area based on neural network, comprising:
a fusion network module: carrying out initial segmentation on the seaweed image data set to obtain an initial seaweed bed segmentation image;
a re-refinement segmentation module: sequentially carrying out segmentation head processing, segmentation loss processing, detail extraction head processing and detail extraction loss processing on the initial seaweed bed segmentation image to obtain a refined seaweed bed segmentation image;
a detail guidance module: carrying out detail guidance processing on the refined seaweed bed segmentation image to obtain a fine seaweed bed segmentation image;
a seaweed area generation module: obtaining the area of the seaweed based on the fine seaweed bed segmentation image.
7. The apparatus of claim 6, wherein the fusion network module is constructed by:
connecting the UNET + + network with the short-term dense connection module to obtain a short-term dense connection network, training it to obtain a seaweed identification model, and fusing the seaweed identification model with the bidirectional segmentation network to obtain the fusion network module.
8. The apparatus of claim 6, wherein the re-refinement segmentation module comprises:
a segmentation head module: for processing the initial seaweed bed segmentation image to obtain a first re-segmented image;
a segmentation loss module: for processing the first re-segmented image to obtain a second re-segmented image;
a detail extraction head module: for processing the second re-segmented image to obtain a first re-refined image;
a detail extraction loss module: for processing the first re-refined image to obtain the refined seaweed bed segmentation image.
9. Equipment for obtaining the seaweed area based on a neural network, comprising a processor and a memory, wherein the processor, when executing the computer program stored in the memory, implements the method for obtaining the seaweed area based on a neural network according to any one of claims 1-5.
10. A computer-readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements a method for obtaining seaweed area based on a neural network according to any one of claims 1 to 5.
CN202310182014.9A 2023-03-01 2023-03-01 Method, device and equipment for obtaining seaweed area based on neural network Pending CN115880573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310182014.9A CN115880573A (en) 2023-03-01 2023-03-01 Method, device and equipment for obtaining seaweed area based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310182014.9A CN115880573A (en) 2023-03-01 2023-03-01 Method, device and equipment for obtaining seaweed area based on neural network

Publications (1)

Publication Number Publication Date
CN115880573A 2023-03-31

Family

ID=85761736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310182014.9A Pending CN115880573A (en) 2023-03-01 2023-03-01 Method, device and equipment for obtaining seaweed area based on neural network

Country Status (1)

Country Link
CN (1) CN115880573A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020078269A1 (en) * 2018-10-16 2020-04-23 腾讯科技(深圳)有限公司 Method and device for three-dimensional image semantic segmentation, terminal and storage medium
US20220309674A1 (en) * 2021-03-26 2022-09-29 Nanjing University Of Posts And Telecommunications Medical image segmentation method based on u-net
CN114005024A (en) * 2021-10-20 2022-02-01 青岛浩海网络科技股份有限公司 Seaweed bed identification method based on multi-source multi-temporal data fusion
CN114240961A (en) * 2021-11-15 2022-03-25 西安电子科技大学 U-Net + + cell division network system, method, equipment and terminal
CN115272370A (en) * 2022-07-29 2022-11-01 中国银行股份有限公司 Image segmentation method and device, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MINGYUAN FAN et al.: "Rethinking BiSeNet For Real-time Semantic Segmentation", 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ZHANG Lina et al.: "Estimation of Figure Area Based on the Monte Carlo Method", Software Engineering

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128956A (en) * 2023-04-04 2023-05-16 山东省海洋资源与环境研究院(山东省海洋环境监测中心、山东省水产品质量检验中心) Method, device and equipment for obtaining seaweed bed carbon sink based on remote sensing image
CN116128956B (en) * 2023-04-04 2024-06-07 山东省海洋资源与环境研究院(山东省海洋环境监测中心、山东省水产品质量检验中心) Method, device and equipment for obtaining seaweed bed carbon sink based on remote sensing image

Similar Documents

Publication Publication Date Title
CN111179324B (en) Object six-degree-of-freedom pose estimation method based on color and depth information fusion
CN110738207B (en) Character detection method for fusing character area edge information in character image
CN107330439B (en) Method for determining posture of object in image, client and server
CN109685768B (en) Pulmonary nodule automatic detection method and system based on pulmonary CT sequence
CN110659582A (en) Image conversion model training method, heterogeneous face recognition method, device and equipment
CN113240691A (en) Medical image segmentation method based on U-shaped network
CN111666921A (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
US20140254922A1 (en) Salient Object Detection in Images via Saliency
US9002071B2 (en) Image search system, image search apparatus, image search method and computer-readable storage medium
AU2018202767B2 (en) Data structure and algorithm for tag less search and svg retrieval
CN111429460A (en) Image segmentation method, image segmentation model training method, device and storage medium
CN113988147B (en) Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device
CN114283162A (en) Real scene image segmentation method based on contrast self-supervision learning
CN103632153A (en) Region-based image saliency map extracting method
CN115880573A (en) Method, device and equipment for obtaining seaweed area based on neural network
CN111932577A (en) Text detection method, electronic device and computer readable medium
CN112907569A (en) Head image area segmentation method and device, electronic equipment and storage medium
CN112396036A (en) Method for re-identifying blocked pedestrians by combining space transformation network and multi-scale feature extraction
CN113592015B (en) Method and device for positioning and training feature matching network
CN115830375A (en) Point cloud classification method and device
CN113706562A (en) Image segmentation method, device and system and cell segmentation method
CN112712066B (en) Image recognition method and device, computer equipment and storage medium
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN108460383A (en) Saliency refined method based on neural network and image segmentation
CN115471901A (en) Multi-pose face frontization method and system based on generation of confrontation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230331