CN114359387A - Bag cultivation mushroom detection method based on improved YOLOV4 algorithm

Info

Publication number
CN114359387A
CN114359387A
Authority
CN
China
Prior art keywords
feature map
mushroom
network
dimension
algorithm
Prior art date
Legal status
Pending
Application number
CN202210009676.1A
Other languages
Chinese (zh)
Inventor
黄英来
李大明
白家瀛
李宁
李超
侯畅
Current Assignee
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date
Filing date
Publication date
Application filed by Northeast Forestry University
Priority to CN202210009676.1A
Publication of CN114359387A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting mushrooms cultivated in bags based on an improved YOLO v4 algorithm, which comprises the following steps: constructing an improved YOLO v4 network, training the improved YOLO v4 network, and detecting mushroom images. The improved YOLO v4 algorithm uses depthwise separable convolutions, removes the classification loss, and reconstructs the loss function; a predicted feature map transmission path is added to the PANet structure part, and an attention mechanism module R_cbam with residual edges is embedded in the path, so that key feature regions can be quickly found within a group of feature maps and given greater weight for prediction. Through these improvements, the algorithm improves detection precision while greatly reducing the number of parameters, thereby better providing visual algorithm support for mechanical picking.

Description

Bag cultivation mushroom detection method based on improved YOLOV4 algorithm
Technical Field
The invention relates to the field of machine vision target detection, in particular to a machine vision detection method for mushrooms cultivated in bags based on an improved YOLO v4 algorithm.
Background
At present, machine vision detection of mushrooms mostly relies on traditional digital image processing: features of the mushrooms such as cap color, shape, and thickness are manually designed and extracted. This approach is strongly affected by illumination and brightness, and recognizes mushrooms poorly when they adhere to or occlude one another in real environments. Machine vision target detection, which has emerged in recent years, provides a new approach to mushroom detection: a neural network is built and trained with supervision on mushroom pictures, features are learned automatically by the network, and accuracy improves. Compared with traditional detection methods, detection with the YOLO v4 algorithm can extract more complete identification features.
For mushroom detection and identification in a bag-cultivation environment, especially for mushrooms that occlude one another, adhere to one another, or appear as small distant targets due to natural growth, the detection performance of traditional digital image processing cannot meet the requirements of practical application.
Therefore, a problem urgently needing to be solved by those skilled in the art is how to improve the detection accuracy of mushrooms in the bag-cultivation environment, and how to improve the detection of mushrooms under varied image brightness and under the interference of mutual occlusion and adhesion.
Disclosure of Invention
In view of the above, the invention provides a machine vision detection method for mushrooms cultivated in bags based on an improved YOLO v4 algorithm. By improving the YOLO v4 algorithm, the problems of existing methods, whose detection precision and robustness to varied image brightness fall short of practical requirements, are effectively solved.
In order to achieve the purpose, the invention adopts the following technical scheme:
A machine vision detection method for mushrooms cultivated in bags based on an improved YOLO v4 algorithm comprises the following steps:
S1, constructing an improved YOLO v4 network: the network comprises a backbone network part, an SPP structure part, a PANet structure part, and a YOLO_three_head prediction part; a predicted feature map transmission path is added in the PANet structure part, starting from the backbone network and spliced and fused with a feature map in the PANet structure, and an attention mechanism module R_cbam with residual edges is embedded in the path;
S2, training the improved YOLO v4 network: inputting a training set made of mushroom pictures shot in the bag-cultivation environment, together with mushroom position annotation information, into the constructed improved YOLO v4 network for weight parameter training, to obtain trained YOLO v4 network parameters;
S3, mushroom image detection: loading the trained parameters, inputting a color image of mushrooms into the trained YOLO v4 network to obtain final feature maps, generating detection frames, and automatically locating the positions of mushrooms in the image.
Preferably, depthwise separable convolutions are used in the SPP structure part, the PANet structure part, and the YOLO_three_head prediction part.
Preferably, the predicted feature map transmission path starts at the 1st CSP-8 structure of the backbone network, and the feature map size at the 1st CSP-8 structure is 2 times that at the next CSP-8 structure.
Preferably, an attention mechanism module R_cbam with residual edges is added at the 2 input ends and the 1 output end of the predicted feature map transmission path. The 1st input end receives the feature map generated after the 1st CSP-8 of the backbone network; the 2nd input end receives the maximum-size feature map generated in the PANet of the original algorithm after upsampling (UPsam) and splicing (concat) up to the DBL layer; the feature maps at the output end and the 2nd input end are spliced and fused, and fused into the PANet.
Preferably, the attention mechanism module R_cbam with residual edges comprises two sub-parts, Res_cam and Res_sam, connected in series so that feature maps pass through them successively; the dimension of the feature map output by each sub-module is the same as its input dimension;
Res_cam is implemented as follows:
S111, performing dimensionality reduction on the initial feature map of the Res_cam module through a maximum pooling layer and an average pooling layer respectively, to obtain a first pair of feature maps, each of dimension c×1×1;
S112, passing the two feature maps through a weight-sharing network to output a second pair of feature maps of dimension c×1×1, adding them to obtain a feature map of dimension c×1×1, and multiplying the result with the initial feature map of the Res_cam module to obtain a feature map of dimension c×w×h;
S113, finally adding the initial feature map of the Res_cam module, through the residual edge, to the feature map obtained in step S112;
Res_sam is implemented as follows:
S121, performing dimensionality reduction on the initial feature map of the Res_sam module through a maximum pooling layer and an average pooling layer respectively, to obtain two feature maps, each of dimension 1×w×h, and then splicing them to obtain a feature map of dimension 2×w×h;
S122, reducing the spliced feature map of dimension 2×w×h by convolution to obtain a feature map of dimension 1×w×h, activating it with a sigmoid function, and multiplying it with the initial feature map of the Res_sam module to obtain a feature map of dimension c×w×h;
S123, finally adding the initial feature map of the Res_sam module, through the residual edge, to the feature map obtained in step S122.
Preferably, the overall loss function is:
Loss_object = c1·loss_predict + c2·loss_conf
where loss_conf is the confidence loss, loss_predict is the regression loss of the predicted frame position, and c1, c2 are balance coefficients of the two.
Preferably, the mushroom pictures are subjected to a gamma_two transformation, and the mushroom pictures before and after transformation serve as the mushroom picture training set.
Preferably, noise is added to part of the gamma_two-transformed training set, and the noisy pictures are trained together with the noise-free gamma_two-transformed training set.
Preferably, during training, pre-trained weights from the COCO-Train2017 data set are loaded into the backbone network; the loaded training parameters are then frozen and unfrozen after the Nth epoch, after which they are trained together with the parameters of the other modules in the improved YOLO v4 network; the loss function is calculated after each batch, the parameters are updated by backpropagation, and this cycle repeats until the loss function meets the requirements, at which point training stops.
Compared with the prior art, the invention has the following beneficial effects:
According to the invention, by improving the YOLO v4 algorithm, the detection precision of mushrooms in the bag-cultivation environment is improved, detection under varied image brightness and under the interference of mutual occlusion and adhesion of mushrooms is improved, and the number of algorithm parameters is greatly reduced, which facilitates equipment deployment and better provides visual algorithm support for mechanical picking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only embodiments of the invention, and that for a person skilled in the art, other drawings can be obtained from the provided drawings without inventive effort.
FIG. 1 is a flow chart of the machine vision detection method for bag-cultivated mushrooms based on the improved YOLO v4 algorithm provided by the present invention;
FIG. 2 is a sample of the gamma_two transformation provided by the present invention;
FIG. 3 is a schematic diagram of the improved YOLO v4 algorithm provided by the present invention;
FIG. 4 is a structural diagram of the dimensional changes of the algorithm's feature maps provided by the present invention;
FIG. 5 is a schematic diagram of the internal structure of Res_cam provided by the present invention;
FIG. 6 is a schematic diagram of the internal structure of Res_sam provided by the present invention;
FIG. 7 is a diagram illustrating the decline of the loss function on the training set and the validation set provided by the present invention;
FIG. 8 is a sample of the detection effect of the present invention on occluded, adhered, blurry, and small distant targets.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Specifically, the embodiment of the invention discloses a machine vision detection method for shiitake mushrooms cultivated in bags based on an improved YOLO v4 algorithm, which comprises the following steps:
S1, constructing an improved YOLO v4 network: the network comprises a backbone network part, an SPP structure part, a PANet structure part, and a YOLO_three_head prediction part. A predicted feature map transmission path is added in the PANet structure part; it starts from the first CSP-8 of the backbone network (see FIG. 3) and is spliced and fused with a feature map in the PANet structure, and an attention mechanism module R_cbam with residual edges is embedded in the path. The feature map at the first CSP-8 of the backbone network is 2 times the size of that at the next CSP-8 structure, and this larger size prevents small-target features from being easily lost.
In the SPP structure part, the PANet structure part, and the YOLO_three_head prediction part, depthwise separable convolutions (DSC) are used;
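As an illustration of this building block, the following is a minimal PyTorch sketch of a depthwise separable convolution layer; the BatchNorm and LeakyReLU tail is an assumption borrowed from common DBL-style YOLO v4 implementations, not something the patent spells out:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a pointwise 1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)      # assumed normalization, as in DBL blocks
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

Compared with a standard k×k convolution, this factorization cuts the parameter count by roughly a factor of k², which is the main source of the parameter reduction claimed above.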
An attention mechanism module R_cbam with residual edges is added at the 2 input ends and the 1 output end of the predicted feature map transmission path. The 1st input end receives the feature map generated after the 1st CSP-8 of the backbone network; the 2nd input end receives the maximum-size feature map generated in the PANet of the original algorithm after upsampling (UPsam) and splicing (concat) up to the DBL layer. After the 2 input feature maps pass through their respective R_cbam modules, DBL, UPsam, and concat are carried out in sequence to generate a feature map; this feature map passes through DBL and downsampling (Dsam), then through the R_cbam added at the output end, whose output serves as the output of the path. This output is spliced and fused with the feature map of the 2nd input end and fused into the PANet. The dimensional changes of the algorithm's feature maps are shown in FIG. 4;
The attention mechanism module R_cbam with residual edges comprises two sub-parts, Res_cam and Res_sam, connected in series so that feature maps pass through them successively; the dimension of the feature map output by each sub-module is the same as its input dimension. Each part is implemented as follows (hedged PyTorch sketches follow each step list below):
Res_cam is implemented (as shown in FIG. 5) as follows:
S111, performing dimensionality reduction on the initial feature map of the Res_cam module through a maximum pooling layer and an average pooling layer respectively, to obtain a first pair of feature maps, each of dimension c×1×1;
S112, passing the two feature maps obtained in step S111 through a weight-sharing network to obtain two feature maps of dimension c×1×1, adding them to obtain a feature map of dimension c×1×1, and multiplying the result with the initial feature map of the Res_cam module to obtain a feature map of dimension c×w×h;
S113, finally adding the initial feature map of the Res_cam module, through the residual edge, to the feature map obtained in step S112;
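A minimal PyTorch sketch of Res_cam under the description above follows. The reduction ratio of the weight-sharing network and the sigmoid before the multiplication are assumptions borrowed from the standard CBAM channel attention; the patent text does not specify them:

```python
import torch
import torch.nn as nn

class ResCAM(nn.Module):
    """Channel attention with a residual edge (sketch of Res_cam, steps S111-S113)."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is assumed
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)   # S111: reduce to c x 1 x 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)   # S111: reduce to c x 1 x 1
        hidden = max(channels // reduction, 1)
        # S112: weight-sharing network, applied to both pooled maps
        self.shared_mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()               # assumed, as in standard CBAM

    def forward(self, x):
        attn = self.sigmoid(self.shared_mlp(self.max_pool(x)) +
                            self.shared_mlp(self.avg_pool(x)))  # S112: add, c x 1 x 1
        return x * attn + x                       # S112 multiply, S113 residual add
```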
Res_sam is implemented (as shown in FIG. 6) as follows:
S121, performing dimensionality reduction on the initial feature map of the Res_sam module through a maximum pooling layer and an average pooling layer respectively, to obtain two feature maps, each of dimension 1×w×h, and then splicing them to obtain a feature map of dimension 2×w×h;
S122, reducing the feature map of dimension 2×w×h through a convolutional dimension-reduction layer to obtain a feature map of dimension 1×w×h, activating this feature map with a sigmoid function, and multiplying it with the initial feature map of the Res_sam module to obtain a feature map of dimension c×w×h;
S123, finally adding the initial feature map of the Res_sam module, through the residual edge, to the feature map obtained in step S122.
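Likewise, a hedged PyTorch sketch of Res_sam and of the series connection forming R_cbam, reusing the ResCAM class sketched above; the 7×7 kernel of the dimension-reducing convolution is an assumption taken from common CBAM implementations:

```python
import torch
import torch.nn as nn

class ResSAM(nn.Module):
    """Spatial attention with a residual edge (sketch of Res_sam, steps S121-S123)."""
    def __init__(self, kernel_size: int = 7):     # kernel size is assumed
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        max_map, _ = torch.max(x, dim=1, keepdim=True)   # S121: 1 x w x h
        avg_map = torch.mean(x, dim=1, keepdim=True)     # S121: 1 x w x h
        stacked = torch.cat([max_map, avg_map], dim=1)   # S121: splice to 2 x w x h
        attn = self.sigmoid(self.conv(stacked))          # S122: reduce to 1 x w x h
        return x * attn + x                              # S122 multiply, S123 residual add

class RCBAM(nn.Module):
    """R_cbam: Res_cam followed by Res_sam; output dimension equals input dimension."""
    def __init__(self, channels: int):
        super().__init__()
        self.res_cam = ResCAM(channels)   # ResCAM from the sketch above
        self.res_sam = ResSAM()

    def forward(self, x):
        return self.res_sam(self.res_cam(x))
```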
The algorithm detects only one class of target, namely mushrooms. To simplify the algorithm's structure and make it easier to maintain, the classification loss is removed. The overall loss function then becomes:
Loss_object = c1·loss_predict + c2·loss_conf
where loss_conf is the confidence loss, loss_predict is the regression loss of the predicted frame position, and c1, c2 are balance coefficients of the two.
S2, training the improved YOLO v4 network: inputting a training set made of shiitake mushroom pictures shot in the bag-cultivation environment, together with annotation information, into the constructed improved YOLO v4 network for weight parameter training, to obtain trained YOLO v4 network parameters;
First, regarding data and image processing: shiitake mushrooms in the bag-cultivation environment are photographed at multiple angles and distances, and a proportion of the photographs is randomly extracted for the gamma_two transformation (the effect is shown in FIG. 2), where the gamma_two expression is:
k_x = x^σ
where x is the pixel value of the original image, k_x is the transformed value of pixel x, and σ is the transformation factor (σ takes the values 2 and 0.5 respectively).
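A hedged sketch of this transformation follows; the normalization of pixel values to [0, 1] before exponentiation is an assumption, since applying the power directly to raw 0-255 values would not preserve the value range:

```python
import numpy as np

def gamma_two(img: np.ndarray, sigma: float) -> np.ndarray:
    """Apply k_x = x ** sigma per pixel, on values normalized to [0, 1]."""
    x = img.astype(np.float32) / 255.0
    k = np.power(x, sigma)
    return (k * 255.0).clip(0, 255).astype(np.uint8)

photo = (np.random.rand(416, 416, 3) * 255).astype(np.uint8)  # stand-in image
darker = gamma_two(photo, 2.0)    # sigma = 2 simulates lower brightness
brighter = gamma_two(photo, 0.5)  # sigma = 0.5 simulates higher brightness
```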
the transformed photographs comprise a training set, and in one embodiment, the created dataset is randomly extracted 4/5 for gamma _ two transformation, leaving 1/5 parts as 1: the scale of 1 is divided into a validation set and a prediction set. When 1000 pictures containing mushrooms are obtained, marking the positions of target frames of the mushrooms, randomly extracting 800 of the pictures, performing gamma _ two conversion, expanding the converted pictures into 2400 pictures to simulate different brightness, and leaving 200 pictures with different brightness according to the ratio of 1: 1, dividing the picture into a verification set and a prediction set, and setting 2400 expanded pictures as a training set;
Second, noise is added to the expanded training set. In one embodiment, 1/4 of the training set pictures are extracted and Gaussian noise is added, where the expression of the Gaussian noise is:
p_y = y + G[rand(y), u, σ]
G(x, u, σ) = (1/(√(2π)·σ))·exp(−(x − u)²/(2σ²))
where y is the pixel value of the picture before noise is added, p_y is the noisy pixel value, the rand function generates random numbers, G(x, u, σ) is the Gaussian distribution with mean u and standard deviation σ, and x is the input value of the Gaussian distribution. In one example, u takes the value 0 and σ takes the value 25;
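A minimal numpy sketch of this noise step, under the assumption that σ = 25 acts as the standard deviation of the added noise and that results are clipped back to the valid 8-bit range:

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, u: float = 0.0, sigma: float = 25.0) -> np.ndarray:
    """p_y = y + G(rand, u, sigma): additive Gaussian noise, clipped to [0, 255]."""
    noise = np.random.normal(u, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```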
During training, the pre-trained weights from the COCO-Train2017 data set are first loaded into the backbone network part; the loaded training parameters are then frozen and unfrozen after the Nth epoch, after which they are trained together with the parameters of the other modules in the improved YOLO v4 network. The loss function is calculated after each batch and the parameters are updated by backpropagation, and this cycle repeats until the loss function meets the requirements, at which point training stops. In one embodiment, unfreezing after 70 epochs is taken as an example; the loss curves from this test are shown in FIG. 7, with the epoch number on the abscissa and the loss value on the ordinate. As the figure shows, by 100 epochs the curve is essentially stable and the loss value no longer decreases; the training parameters at this point are saved as a .pth file.
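The freeze/unfreeze schedule can be sketched as follows in PyTorch. The constructor build_improved_yolov4, the data loader train_loader, the compute_loss method, the weight file names, and the optimizer settings are all hypothetical placeholders, since the patent does not disclose them:

```python
import torch

model = build_improved_yolov4()                     # hypothetical network constructor
state = torch.load("coco_train2017_pretrain.pth")   # assumed pre-trained weight file
model.backbone.load_state_dict(state, strict=False)

for p in model.backbone.parameters():               # freeze loaded backbone parameters
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)

N, total_epochs = 70, 100                           # unfreeze after the Nth epoch
for epoch in range(total_epochs):
    if epoch == N:                                  # unfreeze: train all parameters jointly
        for p in model.backbone.parameters():
            p.requires_grad = True
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for images, targets in train_loader:            # hypothetical DataLoader
        loss = model.compute_loss(images, targets)  # hypothetical loss API (Loss_object)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "trained.pth")       # save parameters as a .pth file
```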
S3, mushroom image detection: the .pth file obtained from training is loaded into the constructed network, and a color image of mushrooms is input into the trained YOLO v4 network. In testing, the captured color image is converted into RGB format, and the resulting three-channel R, G, B feature maps are taken as input; they pass in sequence through the backbone network, the SPP structure part, the PANet part, and the YOLO_three_head prediction part to obtain the final feature maps, and detection frames are generated, thereby realizing mushroom detection and automatically locating the positions of mushrooms in the image. The detection effect is shown in FIG. 8.
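Inference could then run along the following lines; again, build_improved_yolov4 and decode_boxes (anchor decoding plus non-maximum suppression) are hypothetical names, and the 416×416 input size is an assumption taken from typical YOLO v4 configurations:

```python
import numpy as np
import torch
from PIL import Image

model = build_improved_yolov4()                          # hypothetical constructor
model.load_state_dict(torch.load("trained.pth", map_location="cpu"))
model.eval()

img = Image.open("mushroom.jpg").convert("RGB")          # ensure RGB format
x = torch.from_numpy(np.asarray(img.resize((416, 416)), dtype=np.float32))
x = x.permute(2, 0, 1).unsqueeze(0) / 255.0              # 1 x 3 x 416 x 416

with torch.no_grad():
    feature_maps = model(x)                              # three prediction heads
boxes = decode_boxes(feature_maps)                       # hypothetical post-processing
```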
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method for detecting mushrooms cultivated in bags based on an improved YOLOV4 algorithm, characterized by comprising the following steps:
S1, constructing an improved YOLO v4 network: the network comprises a backbone network part, an SPP structure part, a PANet structure part, and a YOLO_three_head prediction part; a predicted feature map transmission path is added in the PANet structure part, starting from the backbone network and spliced and fused with a feature map in the PANet structure, and an attention mechanism module R_cbam with residual edges is embedded in the path;
S2, training the improved YOLO v4 network: inputting a training set made of mushroom pictures shot in the bag-cultivation environment, together with mushroom position annotation information, into the constructed improved YOLO v4 network for weight parameter training, to obtain trained YOLO v4 network parameters;
S3, mushroom image detection: loading the trained parameters, inputting a color image of mushrooms into the trained YOLO v4 network to obtain final feature maps, generating detection frames, and automatically locating the positions of mushrooms in the image.
2. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that depthwise separable convolutions are used in the SPP structure part, the PANet structure part, and the YOLO_three_head prediction part.
3. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that the predicted feature map transmission path starts from the 1st CSP-8 structure of the backbone network, and the feature map size at the 1st CSP-8 structure is 2 times that at the next CSP-8 structure.
4. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that an attention mechanism module R_cbam with residual edges is added at the 2 input ends and the 1 output end of the predicted feature map transmission path; the 1st input end receives the feature map generated after the 1st CSP-8 of the backbone network; the 2nd input end receives the maximum-size feature map generated in the PANet of the original algorithm after upsampling and splicing up to the DBL layer; and the feature maps at the output end and the 2nd input end are spliced and fused, and fused into the PANet.
5. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that the attention mechanism module R_cbam with residual edges comprises two sub-parts, Res_cam and Res_sam, connected in series so that feature maps pass through them successively, and the dimension of the feature map output by each sub-module is the same as its input dimension;
Res_cam is implemented as follows:
S111, performing dimensionality reduction on the initial feature map of the Res_cam module through a maximum pooling layer and an average pooling layer respectively, to obtain a first pair of feature maps, each of dimension c×1×1;
S112, passing the two feature maps through a weight-sharing network to output a second pair of feature maps of dimension c×1×1, adding them to obtain a feature map of dimension c×1×1, and multiplying the result with the initial feature map of the Res_cam module to obtain a feature map of dimension c×w×h;
S113, finally adding the initial feature map of the Res_cam module, through the residual edge, to the feature map obtained in step S112;
Res_sam is implemented as follows:
S121, performing dimensionality reduction on the initial feature map of the Res_sam module through a maximum pooling layer and an average pooling layer respectively, to obtain two feature maps, each of dimension 1×w×h, and then splicing them to obtain a feature map of dimension 2×w×h;
S122, reducing the spliced feature map of dimension 2×w×h by convolution to obtain a feature map of dimension 1×w×h, activating it with a sigmoid function, and multiplying it with the initial feature map of the Res_sam module to obtain a feature map of dimension c×w×h;
S123, finally adding the initial feature map of the Res_sam module, through the residual edge, to the feature map obtained in step S122.
6. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that the overall loss function is:
Loss_object = c1·loss_predict + c2·loss_conf
where loss_conf is the confidence loss, loss_predict is the regression loss of the predicted frame position, and c1, c2 are balance coefficients of the two.
7. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that the mushroom pictures are subjected to a gamma_two transformation, and the mushroom pictures before and after transformation serve as the mushroom picture training set.
8. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that noise is added to part of the gamma_two-transformed training set, and the noisy pictures are trained together with the noise-free gamma_two-transformed training set.
9. The method for detecting mushrooms cultivated in bags based on the improved YOLOV4 algorithm according to claim 1, characterized in that during training, pre-trained weights from the COCO-Train2017 data set are loaded into the backbone network; the loaded training parameters are then frozen and unfrozen after the Nth epoch, after which they are trained together with the parameters of the other modules in the improved YOLO v4 network; the loss function is calculated after each batch, the parameters are updated by backpropagation, and this cycle repeats until the loss function meets the requirements, at which point training stops.
CN202210009676.1A 2022-01-06 2022-01-06 Bag cultivation mushroom detection method based on improved YOLOV4 algorithm Pending CN114359387A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210009676.1A CN114359387A (en) 2022-01-06 2022-01-06 Bag cultivation mushroom detection method based on improved YOLOV4 algorithm

Publications (1)

Publication Number Publication Date
CN114359387A (en) 2022-04-15

Family

ID=81108017

Country Status (1)

Country Link
CN (1) CN114359387A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310785A (en) * 2022-12-23 2023-06-23 兰州交通大学 Unmanned aerial vehicle image pavement disease detection method based on YOLO v4
CN116310785B (en) * 2022-12-23 2023-11-24 兰州交通大学 Unmanned aerial vehicle image pavement disease detection method based on YOLO v4
CN116740651A (en) * 2023-08-11 2023-09-12 南京吾悦农业科技有限公司 Edible fungus cultivation monitoring method and system based on intelligent decision
CN116740651B (en) * 2023-08-11 2023-10-17 南京吾悦农业科技有限公司 Edible fungus cultivation monitoring method and system based on intelligent decision

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination