CN110008947B - Granary grain quantity monitoring method and device based on convolutional neural network - Google Patents

Granary grain quantity monitoring method and device based on convolutional neural network

Info

Publication number
CN110008947B
CN110008947B (application CN201910295466.1A)
Authority
CN
China
Prior art keywords
grain
image
grain surface
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910295466.1A
Other languages
Chinese (zh)
Other versions
CN110008947A (en)
Inventor
李磊
李智
董卓莉
费选
石帅锋
李铮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN201910295466.1A priority Critical patent/CN110008947B/en
Publication of CN110008947A publication Critical patent/CN110008947A/en
Application granted granted Critical
Publication of CN110008947B publication Critical patent/CN110008947B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths

Abstract

The invention relates to a method and device for monitoring the quantity of grain in a granary based on a convolutional neural network. The method uses image processing to monitor changes in grain quantity automatically. It achieves high monitoring precision and reliability and can effectively advance the intelligent upgrading and retrofitting of grain depots. It can also be combined with detection based on infrared laser scanning: only when the grain surface changes is the infrared laser scanner used to scan the whole bin for an accurate grain volume, which extends the service life of the infrared laser.

Description

Granary grain quantity monitoring method and device based on convolutional neural network
Technical Field
The invention relates to a method and a device for monitoring grain quantity of a granary based on a convolutional neural network.
Background
At present, grain inventory inspection usually relies on direct measurement, which is time-consuming, labor-intensive, and easily affected by subjective factors. In recent years, researchers have proposed more advanced measurement methods to improve the accuracy and automation of grain quantity detection, including methods based on pressure sensors, infrared laser scanning, radar detection, and ultrasonic waves. Compared with traditional manual inspection, these methods improve the efficiency and precision of warehouse checking and reduce its cost. However, they also have drawbacks. Pressure-sensor-based methods impose strict requirements on sensor placement, and sensor sensitivity degrades over time. Infrared-laser-scanning methods are limited by the service life of the laser and cannot run continuously; moreover, an accurate scan of a whole bin with a common laser scanner takes considerable time. Radar- and ultrasonic-based methods are limited to cylindrical bins, are unsuitable for other bin types such as large flat warehouses, and have poor detection reliability. In addition, these methods are expensive to deploy in a grain depot and difficult to maintain.
Disclosure of Invention
The invention aims to provide a convolutional-neural-network-based method for monitoring the quantity of grain in a granary, to address the poor reliability of existing monitoring methods. The invention also provides a corresponding convolutional-neural-network-based monitoring device for the same purpose.
To achieve the above object, the scheme of the invention comprises:
A granary grain quantity monitoring method based on a convolutional neural network, comprising the following steps:
(1) collecting a grain surface image, which contains the grain surface and a reference line above it;
(2) inputting the grain surface image into a pre-built convolutional-neural-network-based in-bin image segmentation model to identify the grain surface and the reference line;
(3) calculating the area between the grain surface and the reference line in the image, computing the error between this area and a preset area, and, if the error value exceeds a set error threshold, judging that the grain quantity in the granary has changed.
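The three steps above can be sketched as a single monitoring cycle. The callables below (`capture_image`, `segment`, `measure_area`) are hypothetical stand-ins for the components described in the text, not names from the patent:

```python
def monitor(capture_image, segment, measure_area, preset_area, threshold):
    """One monitoring cycle following steps (1)-(3)."""
    image = capture_image()                        # (1) collect the grain surface image
    surface, reference_line = segment(image)       # (2) CNN segmentation model
    area = measure_area(surface, reference_line)   # (3) area between surface and line
    return abs(area - preset_area) > threshold     # True -> grain quantity changed
```

In a deployment, `segment` would wrap the trained in-bin segmentation model and `measure_area` the vertical-distance area computation described later.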
The method automatically monitors changes in grain quantity by combining grain surface images with convolutional-neural-network-based image processing, realizing a computer-vision grain safety monitoring function. Compared with existing monitoring methods, it requires no specialized equipment beyond image acquisition and processing hardware, achieves automatic monitoring purely by processing images, and is therefore inexpensive and simple to deploy. Whether the grain quantity has changed is determined solely by capturing and processing a grain surface image, so detection is fast and energy consumption is low. The method thus offers high monitoring precision and reliability and can effectively advance the intelligent upgrading of grain depots. It can also be combined with detection based on infrared laser scanning: only when the grain surface changes is the infrared laser scanner used to scan the whole bin for an accurate grain volume, which extends the service life of the infrared laser.
Further, to improve monitoring reliability, the in-bin image segmentation model is built as follows: collect grain surface sample images, label the grain surface and reference line in each, and generate a training set; then build a convolutional neural network model and train it on the training set to obtain the in-bin image segmentation model.
Further, to improve the reliability of the in-bin image segmentation model, after the grain surface and reference line in the sample images are labeled, a larger training set is generated through image set augmentation.
Further, to improve the accuracy of identifying the grain surface and reference line, in step (2) they are identified as follows: the grain surface image is scaled to the size of the training images and segmented with the in-bin image segmentation model, yielding an initial grain surface and reference line; both are then dilated; finally, both are refined, giving the identified grain surface and reference line.
Further, to improve recognition accuracy, after the initial grain surface and reference line are dilated, each is covered with a rectangular frame whose width equals that of the training images, and each is refined with the GrabCut algorithm or a fully connected CRF algorithm.
Further, to reduce recognition error, after the grain surface image is segmented with the in-bin image segmentation model, misclassified regions are removed using the prior that the grain surface lies below the grain loading line.
Further, the reference line is a grain loading line, and its upper boundary is obtained after refinement. Using the upper boundary as the reference prevents the situation where part of the loading line is covered by grain and the change of the grain surface cannot be monitored accurately.
Further, to improve the accuracy of the area calculation, the area between the grain surface and the reference line in the image is computed by calculating the vertical distance from each pixel of the grain surface to the upper boundary of the grain loading line and deriving the area from these vertical distances.
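The vertical-distance area computation can be sketched as follows. The two boundary arrays are hypothetical inputs (the row index of each upper boundary, per image column), not structures named in the patent:

```python
import numpy as np

def area_between_surface_and_line(surface_top, line_top):
    """Sum the per-column vertical distances between the grain surface's
    upper boundary and the loading line's upper boundary.
    surface_top[c] / line_top[c] give the row index of each boundary in
    column c; rows grow downward, so the surface lies below the line."""
    surface_top = np.asarray(surface_top, dtype=float)
    line_top = np.asarray(line_top, dtype=float)
    distances = np.clip(surface_top - line_top, 0.0, None)  # clamp negatives
    return float(distances.sum())  # area in pixel units
```

Summing one distance per column gives the area of the band between the two boundaries, which the text argues is more sensitive than tracking an average distance.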
The invention also provides a device for monitoring the quantity of grain in a granary based on a convolutional neural network, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the program, it performs the following steps:
(1) collecting a grain surface image, which contains the grain surface and a reference line above it;
(2) inputting the grain surface image into a pre-built convolutional-neural-network-based in-bin image segmentation model to identify the grain surface and the reference line;
(3) calculating the area between the grain surface and the reference line in the image, computing the error between this area and a preset area, and, if the error value exceeds a set error threshold, judging that the grain quantity in the granary has changed.
The method implemented by the device automatically monitors changes in grain quantity by combining grain surface images with convolutional-neural-network-based image processing, realizing a computer-vision grain safety monitoring function. Compared with existing methods, it requires no specialized equipment beyond image acquisition and processing hardware, achieves automatic monitoring by processing images, and is inexpensive and convenient to deploy. It determines whether the grain quantity has changed solely by capturing and processing a grain surface image, so detection is fast and energy consumption is low. The method therefore offers high monitoring precision and reliability and can effectively advance the intelligent upgrading of grain depots. It can also be combined with detection based on infrared laser scanning: when a grain surface change is found, the infrared laser scanner scans the whole bin to obtain an accurate grain volume, extending the service life of the infrared laser.
Further, to improve monitoring reliability, the in-bin image segmentation model is built as follows: collect grain surface sample images, label the grain surface and reference line in each, and generate a training set; then build a convolutional neural network model and train it on the training set to obtain the in-bin image segmentation model.
Drawings
FIG. 1 is a schematic view of a monitoring framework of a method for monitoring the quantity of grain in a granary based on a convolutional neural network provided by the present invention;
FIG. 2 is a monitoring flow chart of the granary grain quantity monitoring method based on the convolutional neural network provided by the present invention;
FIG. 3 is a schematic diagram of upper-boundary fitting of the grain loading line and calculation of the area between the loading line and the grain surface;
wherein (I) is the grain loading line, (II) is the grain surface, and (III) is the grain outlet.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic view of the monitoring framework of the convolutional-neural-network-based granary grain quantity monitoring method. As shown in Fig. 1, the framework comprises a camera C, a grain loading line (I), a grain surface (II), and a grain outlet (III). The grain loading line runs around the inside of the granary. Camera C captures images of the grain loading line (I) and the grain surface (II) at the grain outlet. In other embodiments, the grain loading line may be replaced by another reference line (or marker), for example a marking line placed above or below the loading line (generally not far from it), or a marking line placed at another suitable position according to the monitoring requirements. If a marking line is used, "grain loading line" in the following description is replaced by "marking line"; the methods and steps are otherwise identical. Both the grain loading line and a marking line are reference lines that provide a datum for computing the overall height of the grain surface.
So that the camera captures an image of the same designated position every time, the camera pan-tilt parameters are configured: the horizontal angle, vertical angle, aperture, and similar shooting parameters are preset, and the camera is aimed at the grain outlet so that the captured image covers as much grain surface information near the outlet as possible. If four grain outlets are to be monitored, four sets of parameters are configured, and the captured images can then be considered to cover the whole granary. To automate the whole process, camera presets (position, aperture, and so on) for full, half-full, and empty grain surfaces are also configured, so that the system can automatically select the appropriate preset according to the current detection result. The back end sends commands to the pan-tilt to transmit parameters and execute the corresponding operations; parameters such as the detection interval are set in the back-end system, which starts the camera at scheduled times and applies the corresponding preset to capture the in-bin grain surface image. For ease of description, this embodiment monitors one grain outlet, so only one set of camera parameters is needed, and whether the stored grain quantity has changed is judged from the grain surface change at that outlet. In other embodiments, the judgment can be based on grain surface changes at several outlets in the granary. Moreover, although the grain surface is monitored at the grain outlet here, the method is not limited to that position; the grain surface can be monitored elsewhere in the bin.
Overall, the method uses the grain loading line as a reference and the in-bin camera to automatically monitor whether the grain surface changes relative to that line, and thus whether the grain in the bin is abnormal. If the entire grain surface near the walls must be monitored, images covering the full circumference of the bin are required. When acquiring images at the grain outlet, the camera's horizontal angle, vertical angle, aperture, and other parameters are fixed so that the camera photographs the same place each time.
Based on the monitoring framework shown in fig. 1, a specific process of the grain quantity monitoring method for the granary is given below as shown in fig. 2, but the invention is not limited to the monitoring framework shown in fig. 1.
Taking a full bin as an example (half-full and empty bins can be handled by configuring several camera presets), a number of images containing the grain loading line and grain surface at the grain outlet (i.e., grain surface sample images) are acquired in advance as the training image set. The labelme tool is used to annotate the training images into 4 classes: grain surface, grain loading line, window, and background, and the annotation results (i.e., label images) are saved; that is, the pixels of all training images are divided into four classes: grain loading line, grain surface, window, and other background information. This is an optimized labeling scheme; since the method monitors grain quantity from the grain surface and the loading line, as a general implementation it suffices to label only the grain surface and the grain loading line.
The annotated training image set is then augmented, mainly by cropping regions of interest of a specified size with a certain stride, adjusting the image's gamma-correction parameter, scaling, flipping, rotating by no more than ±10° relative to the original, and adding Gaussian noise, to generate a training set that meets the requirements of the subsequent deep convolutional neural network training. During augmentation, the original training image and its label image are transformed together, and any interpolation of label images uses nearest-neighbor interpolation. Because the grain surface and loading line have a semantic context relationship, the training images must not be rotated through large angles. The desired size of the target training set is set, and the augmentation operations are chosen accordingly; here the target is 5000 training images. For each training image and its label image, regions of interest of a specified size are cropped from the top-left corner with a certain stride, and each region is flipped, gamma-adjusted, rotated by less than ±10°, and corrupted with Gaussian noise, each operation with its own parameter set, finally producing a training set close to the target size. Augmenting the image set in this way enlarges the training sample set.
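A minimal sketch of such paired augmentation, assuming 8-bit grayscale arrays; the function name and parameter values are illustrative, not from the patent, and rotation and ROI cropping are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_pair(image, label):
    """Yield augmented (image, label) pairs. Geometric ops (flips) are
    applied to image and label together; photometric ops (gamma, noise)
    only to the image, leaving the label untouched."""
    yield image, label                                   # original
    yield image[:, ::-1], label[:, ::-1]                 # horizontal flip
    for gamma in (0.8, 1.2):                             # gamma correction
        yield ((image / 255.0) ** gamma * 255.0), label
    noisy = image + rng.normal(0.0, 5.0, image.shape)    # Gaussian noise
    yield np.clip(noisy, 0.0, 255.0), label
```

Note that only the geometric transform touches the label image, matching the text's rule that label resampling, when needed at all, uses nearest-neighbor interpolation.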
Although a specific image enhancement process is given above, the present invention is not limited to the above specific process, and a suitable image enhancement mode may be selected according to actual needs, or if the training image set meets the requirements, the training image may not be enhanced.
A convolutional neural network model is then built, and the training image set is fed into it for training, yielding the in-bin image segmentation model. Specifically: a deep neural network architecture is chosen; the training parameters are set, including the learning rate, dropout rate, number of epochs, batch size, and so on; the training image set is fed to the model for training in a GPU environment; and the trained deep neural network model is saved for testing. The fully convolutional network can be a model such as DeepLab, SegNet, or U-Net. Taking DeepLab v2 as an example, the Caffe framework is used to train the deep fully convolutional network, as follows:
first, an ID file train_id.txt of the training images and a path file train.txt mapping the training images to their annotation results are generated;
the train.prototxt file required for training is configured: the crop_size parameter is set to 417, the mean value of the training images is set, the model output is set to 4 classes, and so on; a VGG16 network is used as the base model, with the upsampling layers required by DeepLab v2 added; the initial model parameters are taken from VGG16 weights trained on the VOC2012 database;
then, the model's loss function is built from cross entropy plus a boundary loss;
further, during training the learning rate is set to 1e-3 with a decay power of 0.9, the loss is averaged over every 20 images, the maximum number of iterations is 20000, and the weight decay is set to 0.0005;
further, each training image is fed through the model in a forward pass, the prediction is obtained from the final softmax layer, the loss function is computed against the manual annotations, and the parameters are updated from the current network values using the update rule obtained by gradient descent;
finally, when the network reaches the maximum number of iterations or a preset stopping condition, training ends and the in-bin image segmentation model is obtained.
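The forward-softmax-cross-entropy-update cycle can be illustrated with a toy stand-in: a per-pixel linear classifier trained with the same hyperparameter shapes (base learning rate 1e-3, poly decay power 0.9, weight decay 5e-4). This replaces the full DeepLab v2/Caffe setup purely for illustration, and all data here is synthetic:

```python
import numpy as np

# Settings mirroring the text: base LR 1e-3, poly power 0.9, weight decay
# 5e-4; iterations shortened from 20000 to 300 for the demo.
BASE_LR, POWER, DECAY, MAX_ITER, NUM_CLASSES = 1e-3, 0.9, 5e-4, 300, 4

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.random((256, 3))                                    # synthetic RGB features
y = (X @ rng.normal(size=(3, NUM_CLASSES))).argmax(axis=1)  # synthetic labels
W = np.zeros((3, NUM_CLASSES))                              # toy linear "network"

losses = []
for it in range(MAX_ITER):
    lr = BASE_LR * (1.0 - it / MAX_ITER) ** POWER           # poly LR schedule
    p = softmax(X @ W)                                      # forward pass + softmax
    losses.append(-np.log(p[np.arange(len(y)), y]).mean())  # cross-entropy loss
    grad_logits = p
    grad_logits[np.arange(len(y)), y] -= 1.0                # dL/dlogits
    grad_W = X.T @ grad_logits / len(y) + DECAY * W         # add weight decay
    W -= lr * grad_W                                        # gradient-descent update
```

The boundary-loss term from the text is omitted here; the point is only the shape of the training loop (schedule, forward pass, loss against labels, parameter update).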
Alternatively, the deep neural network can be implemented with U-Net and TensorFlow for training and testing. Since U-Net suits binary segmentation, three binary segmentation models are trained separately for the grain surface, the grain loading line, and the window, and the trained models are saved; other architectures can instead be trained as a single multi-class segmentation model. The key point of the invention is thus to build an in-bin image segmentation model on a convolutional neural network framework; the method is not limited to any particular fully convolutional architecture, and the build process follows whichever framework is chosen.
During monitoring, a real grain surface image at the grain outlet (i.e., a test image), containing the grain surface and the grain loading line, is acquired first. The image is scaled to the training image size and input to the trained in-bin image segmentation model, which segments it to obtain the initial grain loading line, grain surface, window, and so on. The trained model performs only a forward pass to obtain the prediction, yielding the segmentation result, which is then scaled back to the original image size.
After the initial grain loading line, grain surface, and window are obtained, misclassified regions (noise regions) are removed using the band-shaped prior information relating the classes: the grain surface must lie below the loading line, and the windows must lie above it.
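These band priors can be applied per pixel as a simple sketch; the class IDs and the single `line_row` (a representative row of the detected loading line) are illustrative assumptions, not values from the patent:

```python
import numpy as np

BACKGROUND, SURFACE, LINE, WINDOW = 0, 1, 2, 3   # hypothetical class ids

def apply_band_priors(labels, line_row):
    """Reset pixels that violate the priors: grain surface above the
    loading line, or window below it, becomes background."""
    cleaned = labels.copy()
    rows = np.arange(labels.shape[0])[:, None]   # row index of every pixel
    cleaned[(cleaned == SURFACE) & (rows < line_row)] = BACKGROUND
    cleaned[(cleaned == WINDOW) & (rows > line_row)] = BACKGROUND
    return cleaned
```

A full implementation would operate on connected regions rather than single pixels, and (as the experiments section notes) could reassign misclassified window pixels by neighboring-pixel similarity instead of always resetting to background.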
The boundaries of the initial grain loading line and grain surface obtained from the in-bin image segmentation model are coarse, so both must be refined, as follows: first the loading line and grain surface are dilated; then, using the pixel coordinates of the loading line and grain surface from the segmentation, each is covered with a rectangular frame whose width equals the training image width, and each is refined at the pixel level with the GrabCut algorithm. When refining the loading line, the region above it is removed, a larger rectangular frame containing the loading line is generated, this larger frame is treated as an image, and the loading line is refined within it to obtain the final segmentation result. This step corrects the boundary information of the loading line and the grain surface.
Specifically, for the grain loading line: the loading line is first dilated; a rectangular frame with the same width as the test image covers the dilated loading line as prior knowledge, i.e., the loading line is known to lie inside the frame; a further rectangle is then constructed outside this frame so that the numbers of loading-line pixels and other pixels inside the outer rectangle are balanced; Gaussian mixture models are built for the foreground and background, with the data term of the energy taken from the probabilities output by the deep neural network model and the smoothness term computed from the RGB and Lab color features of the pixels; the GrabCut algorithm then refines the loading line, giving its final segmentation, i.e., identifying its upper boundary. Using the upper boundary as the reference prevents the situation where part of the loading line is covered by grain and the grain surface change cannot be monitored accurately. The grain surface in the segmentation result is processed with the same method to obtain its final segmentation. The GrabCut algorithm is not mandatory; other algorithms, such as a fully connected CRF, can also refine the segmentation result.
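The dilate-then-cover step before GrabCut can be sketched with plain NumPy. `dilate` here is a minimal cross-shaped stand-in for a morphological dilation (e.g. `cv2.dilate`), and the returned rectangle is where a GrabCut call such as `cv2.grabCut` would then be constrained; both library names are assumptions, not code from the patent:

```python
import numpy as np

def dilate(mask, rounds=1):
    """Binary dilation with a 3x3 cross, applied `rounds` times
    (a minimal stand-in for cv2.dilate)."""
    out = mask.astype(bool)
    for _ in range(rounds):
        p = np.pad(out, 1)
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:])
    return out

def covering_rect(mask, image_width, rounds=2):
    """Dilate the loading-line mask, then return the full-width rectangle
    (top_row, bottom_row, left_col, right_col) covering it, inside which
    GrabCut refinement would run."""
    d = dilate(mask, rounds)
    rows = np.nonzero(d.any(axis=1))[0]
    return int(rows.min()), int(rows.max()), 0, image_width - 1
```

The full-width rectangle matches the text's instruction that the frame width equal the test image width, so the refinement region always spans the whole loading line.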
The coordinates of the boundary pixels on the grain loading line are then obtained, to guard against grain covering the line. The upper boundary of the loading line is fitted with a segmented least-squares method, and the fitted straight lines replace the upper boundary so as to remove outliers and noise points; the maximum number of segments is 2, so that the lines along one wall and around a corner of the granary can be modeled and used as the upper boundary of the loading line. This straight-line fitting is an optimization; in other embodiments it may be omitted. Then the vertical distance from each pixel of the loading line's upper boundary to the grain surface (i.e., the upper boundary of the grain surface) is computed, or equivalently from each pixel of the grain surface's upper boundary to the loading line's upper boundary, as shown in fig. 3, and the area between the loading line and the grain surface is computed from these vertical distances; a change in this area is more sensitive than a change in the average distance between the loading line and the grain surface.
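A sketch of the two-segment least-squares fit follows; the `min_pts` parameter and the exhaustive split search are illustrative choices, not details given in the patent:

```python
import numpy as np

def lstsq_line(x, y):
    """Least-squares line fit; returns (slope, intercept, sse)."""
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), res, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(res[0]) if res.size else float(((a * x + b - y) ** 2).sum())
    return a, b, sse

def fit_upper_boundary(x, y, min_pts=3):
    """Fit the loading line's upper boundary with at most two straight
    segments (one wall plus a corner), choosing the split point that
    minimises the total squared error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b, sse = lstsq_line(x, y)
    best_sse, best_segs = sse, [(a, b, x[0], x[-1])]
    for k in range(min_pts, len(x) - min_pts + 1):   # try every split point
        aL, bL, sL = lstsq_line(x[:k], y[:k])
        aR, bR, sR = lstsq_line(x[k:], y[k:])
        if sL + sR < best_sse:
            best_sse = sL + sR
            best_segs = [(aL, bL, x[0], x[k - 1]), (aR, bR, x[k], x[-1])]
    return best_segs
```

Each returned segment is (slope, intercept, x_start, x_end); a single segment is kept when splitting does not reduce the error, matching the "at most 2 segments" rule.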
Let the area of the region between the current grain loading line and the grain surface be A_c, and let the preset area be the area A_h of the historical region. The difference R = A_c − A_h is calculated; if R is larger than the set error threshold, the grain surface is considered to have changed significantly, i.e. the grain quantity of the granary is judged to have changed and an alarm is sent to the administrator; otherwise the current data is recorded.
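The area computation and threshold test reduce to a few lines. The per-column boundary representation passed to `band_area` is an assumed encoding, and the default threshold is taken from the experiments below.

```python
import numpy as np

def band_area(line_top, surface_top):
    """Area (in pixels) of the region between the grain loading line's
    upper boundary and the grain surface's upper boundary, summed column
    by column from the vertical distances (image rows grow downward)."""
    line_top = np.asarray(line_top)
    surface_top = np.asarray(surface_top)
    return int(np.maximum(surface_top - line_top, 0).sum())

def quantity_changed(area_current, area_history, threshold=39000):
    """R = A_c - A_h; alarm when the band has grown past the threshold
    (39000 px corresponds to roughly a 0.1 m drop in the test geometry)."""
    return (area_current - area_history) > threshold
```

With the areas measured in Table 2, the first area difference (45071 px) exceeds the 39000 px threshold and would trigger an alarm.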
To verify the effectiveness of the proposed method, a series of quantitative experiments is designed.
Ten in-bin images of size 1080 × 1920 are collected, shot under different illumination conditions and preset angles. Six of them form the training image set, which is manually annotated to obtain the manual annotation result (Ground Truth) used to evaluate the precision of the segmentation results.
The training images are annotated and the image set is enhanced to obtain the annotated images.
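The image set enhancement can be sketched in numpy as follows; the parameter values are illustrative, and the rotations by less than ±10 degrees (which would need e.g. `scipy.ndimage.rotate`) are omitted here. Geometric operations are applied to the image and its annotation mask alike, while photometric operations leave the mask untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, mask):
    """Yield augmented (image, mask) pairs: horizontal flip, Gamma
    adjustment and additive Gaussian noise. img is float in [0, 1]."""
    yield img[:, ::-1].copy(), mask[:, ::-1].copy()        # horizontal flip
    for gamma in (0.8, 1.2):                               # Gamma adjustment
        yield np.clip(img, 0.0, 1.0) ** gamma, mask.copy()
    noisy = img + rng.normal(0.0, 0.02, size=img.shape)    # Gaussian noise
    yield np.clip(noisy, 0.0, 1.0), mask.copy()
```

Each source image thus yields several derived training pairs; combined with sliding-window cropping of regions of interest, this generates the larger-scale training set.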
The annotated images are scaled to 417 × 417 for deeplab v2 model training (or to 512 × 512 for U-Net model training), yielding the trained deep neural network model.
The image to be segmented is scaled to 417 × 417 and input into the trained deep neural network model for segmentation, and the resulting segmentation is scaled back to the original image size. Then, regions above the grain loading line that were wrongly classified as grain surface are reset to background, and regions below the line that were wrongly classified as window are reset to grain surface or background according to the similarity of neighbouring pixels, giving the final segmentation result.
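The first of these corrections (resetting grain-surface pixels found above the grain loading line) can be sketched as follows. The class ids and the per-column boundary representation are assumptions for illustration.

```python
import numpy as np

BACKGROUND, SURFACE, LINE, WINDOW = 0, 1, 2, 3   # illustrative class ids

def apply_line_prior(labels, line_top):
    """Reset pixels wrongly classified as grain surface above the grain
    loading line to background. line_top[c] is the row of the line's
    upper boundary in column c (image rows grow downward)."""
    out = labels.copy()
    h, w = labels.shape
    above = np.arange(h)[:, None] < np.asarray(line_top)[None, :]
    out[above & (out == SURFACE)] = BACKGROUND
    return out
```

The symmetric correction below the line (window pixels reassigned by neighbour similarity) would follow the same masking pattern with a per-pixel similarity test.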
When GrabCut is used to refine the grain loading line and the grain surface, the weight of the smoothness term is set to 50.
The final segmentation result is evaluated first. After the final segmentation result is obtained, the three segmentation results are evaluated with four criteria from the image segmentation field, as shown in Table 1: PRI (Probabilistic Rand Index), VOI (Variation of Information), GCE (Global Consistency Error) and BDE (Boundary Displacement Error). PRI lies in the interval [0, 1], and a larger value indicates higher segmentation accuracy; VOI lies in [0, +∞), and a smaller value indicates a better segmentation; smaller GCE and BDE values likewise indicate a better segmentation, i.e. a result closer to the Ground Truth.
TABLE 1
Method PRI VOI GCE BDE
Ground Truth 0.98 1.20 0.11 0.56
The method of the invention 0.97 1.22 0.12 0.58
As can be seen from Table 1, the segmentation results of the proposed method are very close to the manual annotations on all four evaluation indexes, which demonstrates the effectiveness of the method.
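Of the four criteria, the PRI against a single ground truth reduces to the classical Rand index, which can be computed from the label contingency counts. This is a small sketch of the metric, not the authors' evaluation code.

```python
import numpy as np
from math import comb

def rand_index(a, b):
    """Rand index between two label maps: the fraction of pixel pairs that
    the two segmentations treat consistently (same-same or diff-diff)."""
    a = np.asarray(a).ravel()
    b = np.asarray(b).ravel()
    n = a.size
    joint = a.astype(np.int64) * (int(b.max()) + 1) + b   # joint label code
    same_ab = sum(comb(int(c), 2) for c in np.bincount(joint))
    same_a = sum(comb(int(c), 2) for c in np.bincount(a))
    same_b = sum(comb(int(c), 2) for c in np.bincount(b))
    total = comb(n, 2)
    return (total + 2 * same_ab - same_a - same_b) / total
```

Identical labelings score 1.0; a labeling that disagrees on most pixel pairs scores close to 0.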
In grain surface change monitoring, a camera with a pan-tilt head is fixed on a wall and the corresponding parameters are set; a liftable wooden board is placed 15 m from the camera, wheat is spread uniformly on the board to a thickness of 2 cm, a white baffle with a red marking line simulates the inner wall of the bin, and raising or lowering the board on which the grain is placed simulates grain surface change. Three tests are carried out, with the grain surface lowered by about 12 cm each time; the detection results are shown in Table 2.
TABLE 2
Measurement number Grain surface height (m) Area (pixels²) Area difference
1 0.385 122737
2 0.26 167808 45071
3 0.14 214584 46776
As can be seen from Table 2, in the three tests the grain surface height is measured with the surface lowered by about 0.12 m each time; the height differences are those between the second and first measurements and between the third and second. With an image width of 640 pixels, the area differences between adjacent measurements are 45071 and 46776 pixels respectively. A threshold of 39000 pixels is set here, which corresponds approximately to an actual height change of 0.1 m as estimated from the experiments. The experimental results show that an actual drop of the grain surface corresponds to a change in the number of pixels, which satisfies the requirement of monitoring grain quantity change; the method is therefore feasible.
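The calibration implied by Table 2 — converting a band-area change into an average boundary drop in pixels, and from there into metres — can be checked with a few lines. The metres-per-pixel figure in the comment is derived here from the reported measurements; it is not stated explicitly in the text.

```python
def avg_drop_px(area_diff, width_px=640):
    """Average vertical drop of the grain-surface boundary, in pixels,
    implied by a band-area change spanning the full image width."""
    return area_diff / width_px

# The two measured differences give 45071/640 ~ 70.4 px and
# 46776/640 ~ 73.1 px for ~0.12 m real drops, i.e. roughly 1.6-1.7 mm
# per pixel; the 39000 px threshold then maps to ~61 px, about 0.1 m.
```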
Specific embodiments are given above, but the present invention is not limited to the described embodiments. The basic idea of the invention lies in the scheme above, and those skilled in the art, following the teaching of the invention, can design various modified models, formulas and parameters without creative effort. Variations, modifications, substitutions and alterations may be made to the embodiments without departing from the principle and spirit of the invention, and these still fall within the scope of protection of the invention.
The convolutional-neural-network-based granary grain quantity monitoring method described above can be implemented as a computer program, stored in the memory of the convolutional-neural-network-based granary grain quantity monitoring device, and executed by the processor of that device.

Claims (4)

1. A granary grain quantity monitoring method based on a convolutional neural network is characterized by comprising the following steps:
(1) collecting a grain surface image; the grain surface image comprises the grain surface and a reference line above the grain surface;
(2) inputting the grain surface image into a constructed in-bin image segmentation model based on a convolutional neural network, and identifying the grain surface and a reference line;
(3) calculating the area of the region between the grain surface and the reference line in the grain surface image, calculating the error between this area and a preset area to obtain an error value, and judging that the grain quantity of the granary has changed if the error value is greater than a set error threshold;
the building process of the in-bin image segmentation model comprises: collecting grain surface sample images, marking the grain surface and the reference line in the grain surface sample images, and generating a training set; building a convolutional neural network model and inputting the training set into it for training to obtain the in-bin image segmentation model;
in the building process of the in-bin image segmentation model, after the grain surface and the reference line are marked in the grain surface sample images, a larger-scale training set is generated by means of image set enhancement, the process of which comprises: setting the target number of training images, and setting the image enhancement modes according to that number; for any training image and its annotated image, regions of interest of a specified size are cropped starting from the upper-left corner of the image with a certain step length, and flipping, Gamma parameter adjustment, rotation by less than ±10 degrees and addition of Gaussian noise are applied to each region respectively, each operation having a corresponding parameter set, finally generating the target number of training images;
the deep fully convolutional neural network model training is implemented with the deeplab v2 model and the caffe framework, and the specific steps are as follows: first, generating the ID file train_id.txt of the training images and the path file train.txt of the training images and their annotation results; then setting the train.prototxt file required for model training, changing the crop_size parameter to 417, setting the mean value of the training images, and setting the model output to 4 classes; adopting the vgg16 neural network model as the base model, and adding the corresponding upsampling layers according to the requirements of the deeplab v2 model; the initial model parameters are the result of training VGG16 on the VOC2012 database; then constructing the loss function of the model from cross entropy and boundary loss; finally, when the network reaches the maximum number of iterations or a preset stopping condition, training ends and the in-bin image segmentation model is obtained; during training, the learning rate is set to 1e-3, the learning-rate decay parameter to 0.9, the loss is averaged over every 20 images, the maximum number of iterations is 20000, and the parameter decay rate is 0.0005; a training image is input into the model for forward computation, the prediction result is obtained through the softmax of the last layer, the loss function is calculated by combining the manual annotation result, and the parameters are updated from the values in the current network through the parameter iteration formula obtained by gradient descent;
in the step (2), the process of identifying the grain surface and the reference line in the grain surface image comprises: scaling the grain surface image to the image size corresponding to the training set, and segmenting it with the obtained in-bin image segmentation model to obtain the initial grain surface and reference line; then dilating the obtained initial grain surface and reference line respectively; finally refining the grain surface and the reference line respectively, thereby identifying the grain surface and the reference line;
the reference line is a grain loading line, and the upper boundary of the grain loading line is obtained after the reference line is refined;
the process of calculating the area between the grain surface and the reference line in the grain surface image comprises: calculating the vertical distance from each pixel point of the grain surface to the upper boundary of the grain loading line, and calculating the area between the grain surface and the upper boundary of the grain loading line from these vertical distances.
2. The method for monitoring the grain quantity of the granary based on the convolutional neural network according to claim 1, wherein after the obtained initial grain surface and reference line are respectively dilated, a rectangular frame whose width equals the width of the corresponding image in the training set is used to cover the dilated grain surface and reference line, and the GrabCut algorithm or the fully-connected CRF algorithm is used to refine the grain surface and the reference line respectively.
3. The method for monitoring the grain quantity of the granary based on the convolutional neural network according to claim 1, wherein after the grain surface image is segmented with the obtained in-bin image segmentation model, the prior information that the grain surface lies below the grain loading line is used to remove misclassified regions.
4. A granary grain quantity monitoring device based on a convolutional neural network, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements a process comprising:
(1) collecting a grain surface image; the grain surface image comprises the grain surface and a reference line above the grain surface;
(2) inputting the grain surface image into a constructed in-bin image segmentation model based on a convolutional neural network, and identifying the grain surface and a reference line;
(3) calculating the area of the region between the grain surface and the reference line in the grain surface image, calculating the error between this area and a preset area to obtain an error value, and judging that the grain quantity of the granary has changed if the error value is greater than a set error threshold;
the building process of the in-bin image segmentation model comprises: collecting grain surface sample images, marking the grain surface and the reference line in the grain surface sample images, and generating a training set; building a convolutional neural network model and inputting the training set into it for training to obtain the in-bin image segmentation model;
in the building process of the in-bin image segmentation model, after the grain surface and the reference line are marked in the grain surface sample images, a larger-scale training set is generated by means of image set enhancement, the process of which comprises: setting the target number of training images, and setting the image enhancement modes according to that number; for any training image and its annotated image, regions of interest of a specified size are cropped starting from the upper-left corner of the image with a certain step length, and flipping, Gamma parameter adjustment, rotation by less than ±10 degrees and addition of Gaussian noise are applied to each region respectively, each operation having a corresponding parameter set, finally generating the target number of training images;
the deep fully convolutional neural network model training is implemented with the deeplab v2 model and the caffe framework, and the specific steps are as follows: first, generating the ID file train_id.txt of the training images and the path file train.txt of the training images and their annotation results; then setting the train.prototxt file required for model training, changing the crop_size parameter to 417, setting the mean value of the training images, and setting the model output to 4 classes; adopting the vgg16 neural network model as the base model, and adding the corresponding upsampling layers according to the requirements of the deeplab v2 model; the initial model parameters are the result of training VGG16 on the VOC2012 database; then constructing the loss function of the model from cross entropy and boundary loss; finally, when the network reaches the maximum number of iterations or a preset stopping condition, training ends and the in-bin image segmentation model is obtained; during training, the learning rate is set to 1e-3, the learning-rate decay parameter to 0.9, the loss is averaged over every 20 images, the maximum number of iterations is 20000, and the parameter decay rate is 0.0005; a training image is input into the model for forward computation, the prediction result is obtained through the softmax of the last layer, the loss function is calculated by combining the manual annotation result, and the parameters are updated from the values in the current network through the parameter iteration formula obtained by gradient descent;
in the step (2), the process of identifying the grain surface and the reference line in the grain surface image comprises: scaling the grain surface image to the image size corresponding to the training set, and segmenting it with the obtained in-bin image segmentation model to obtain the initial grain surface and reference line; then dilating the obtained initial grain surface and reference line respectively; finally refining the grain surface and the reference line respectively, thereby identifying the grain surface and the reference line;
the reference line is a grain loading line, and the upper boundary of the grain loading line is obtained after the reference line is refined;
the process of calculating the area between the grain surface and the reference line in the grain surface image comprises: calculating the vertical distance from each pixel point of the grain surface to the upper boundary of the grain loading line, and calculating the area between the grain surface and the upper boundary of the grain loading line from these vertical distances.
CN201910295466.1A 2019-04-12 2019-04-12 Granary grain quantity monitoring method and device based on convolutional neural network Active CN110008947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910295466.1A CN110008947B (en) 2019-04-12 2019-04-12 Granary grain quantity monitoring method and device based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN110008947A CN110008947A (en) 2019-07-12
CN110008947B true CN110008947B (en) 2021-06-29

Family

ID=67171558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910295466.1A Active CN110008947B (en) 2019-04-12 2019-04-12 Granary grain quantity monitoring method and device based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110008947B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507148B (en) * 2019-12-31 2023-10-24 浙江苏泊尔家电制造有限公司 Control system and control method of rice storage device
CN111200723B (en) * 2020-01-07 2021-07-23 苏州恒志汇智能科技有限公司 Progressive die arch material monitoring method, device, equipment and storage medium
CN111582778B (en) * 2020-04-17 2024-04-12 上海中通吉网络技术有限公司 Method, device, equipment and storage medium for measuring accumulation of cargos in operation site
CN111967473B (en) * 2020-06-29 2023-01-20 山东浪潮通软信息科技有限公司 Grain depot storage condition monitoring method, equipment and medium based on image segmentation and template matching
CN112734826B (en) * 2020-12-29 2022-05-31 华信咨询设计研究院有限公司 Grain quantity estimation method based on deep learning and LSD (least squares-based) linear detection algorithm
EP4273517A1 (en) * 2022-05-06 2023-11-08 Insylo Technologies, S.L. Method and system for measuring the quantity of a bulk product stored in a silo
CN116755088B (en) * 2023-08-09 2023-11-17 中国科学院空天信息创新研究院 Granary depth and foreign matter detection and imaging method based on radar

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103063136A (en) * 2012-12-28 2013-04-24 大连工大(泗阳)光源与照明工程技术研究院有限公司 Detecting system for granary reserves
CN105674908A (en) * 2015-12-29 2016-06-15 中国科学院遥感与数字地球研究所 Measuring device, and volume measuring and monitoring system
CN108833313A (en) * 2018-07-12 2018-11-16 北京邮电大学 A kind of radio channel estimation method and device based on convolutional neural networks
CN109472261A (en) * 2018-06-15 2019-03-15 河南工业大学 A kind of quantity of stored grains in granary variation automatic monitoring method based on computer vision

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170119658A1 (en) * 2015-11-01 2017-05-04 Justin Turvey Preparation for enhanced fingerprint image formation on a transparent surface of a live scan device
US20170173262A1 (en) * 2017-03-01 2017-06-22 François Paul VELTZ Medical systems, devices and methods
CN108090628A (en) * 2018-01-16 2018-05-29 浙江大学 A kind of grain feelings security detection and analysis method based on PSO-LSSVM algorithms


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Environmental Monitoring in Grain Granary Based on Embedded System;Zhang Xiaodong 等;《IEEE》;20170323;第1051-1054页 *
基于自适应区域限制FCM的图像分割方法;李磊 等;《电子学报》;20180630;第46卷(第6期);第1312-1318页 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant