CN118247618A - Monitoring image analysis method and system applied to intelligent agricultural park


Info

Publication number
CN118247618A
Authority
CN
China
Prior art keywords
monitoring image
growth
target
image
monitoring
Legal status
Pending
Application number
CN202410622775.6A
Other languages
Chinese (zh)
Inventor
叶师瞳
吴铭基
周阳阳
彭文斌
Current Assignee
Guangdong Bangsheng Beidou Agricultural Technology Co ltd
Original Assignee
Guangdong Bangsheng Beidou Agricultural Technology Co ltd
Application filed by Guangdong Bangsheng Beidou Agricultural Technology Co ltd
Priority to CN202410622775.6A
Publication of CN118247618A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, and in particular to a monitoring image analysis method and system applied to an intelligent agricultural park. The method first obtains a target crop growth monitoring image and a target growth condition category analysis network of a target intelligent agricultural park. The monitoring image is then analyzed in depth by the target growth condition category analysis network to extract visual description knowledge that reflects the growth condition of the crop. This knowledge covers not only the surface characteristics of the image but also fuses multi-dimensional information such as the blocking features, distribution features and region features of the image. Finally, the growth condition category corresponding to the monitoring image is determined on the basis of this visual description knowledge. The method aims to improve the accuracy and comprehensiveness of crop growth condition assessment and provides strong support for the development of intelligent agriculture.

Description

Monitoring image analysis method and system applied to intelligent agricultural park
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a monitoring image analysis method and a monitoring image analysis system applied to an intelligent agricultural park.
Background
With the rapid development of modern agriculture, smart agriculture is receiving more and more attention as an emerging form of agriculture. In smart agriculture, real-time monitoring and accurate assessment of crop growth conditions are key to ensuring efficient, high-quality agricultural production. However, conventional crop growth monitoring methods often depend on manual inspection and empirical judgment; they are inefficient, easily influenced by subjective factors, and struggle to guarantee the accuracy and timeliness of the assessment.
In order to solve the above problems, crop growth monitoring techniques based on image analysis have become a research hot spot in recent years. These techniques evaluate the growth condition of crops by acquiring crop growth monitoring images and extracting characteristic information from them with image processing and analysis algorithms. However, existing image analysis methods still have shortcomings when processing complex crop growth monitoring images. On the one hand, they can only extract surface features of the images and find it difficult to mine the inherent rules and related information of crop growth in depth; on the other hand, their capabilities for image blocking and feature fusion are limited, which easily leads to one-sided and inaccurate evaluation results.
Disclosure of Invention
To address the technical problems in the related art, the invention provides a monitoring image analysis method and a monitoring image analysis system applied to an intelligent agricultural park.
In a first aspect, an embodiment of the present invention provides a method for analyzing a monitoring image applied to an intelligent agricultural park, the method being applied to a monitoring image analysis system and including:
Acquiring a target crop growth monitoring image and a target growth condition category analysis network of a target intelligent agricultural park;
acquiring target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image through the target growth condition type analysis network, wherein the target growth monitoring visual description knowledge is used for reflecting the growth condition type of the target crop growth monitoring image, and the target growth monitoring visual description knowledge is determined according to the monitoring image blocking characteristics of each monitoring image blocking included in the target crop growth monitoring image, the distribution characteristics of each monitoring image blocking included in the target crop growth monitoring image and the regional characteristics of the target crop growth monitoring image;
and determining the growth condition category corresponding to the target crop growth monitoring image based on the target growth monitoring visual description knowledge.
With reference to the first aspect, in a possible implementation manner of the first aspect, the obtaining, by the target growth condition category analysis network, target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image includes:
Acquiring target crop growth monitoring image growth monitoring visual description knowledge corresponding to the target crop growth monitoring image through the target growth condition type analysis network, wherein the target crop growth monitoring image growth monitoring visual description knowledge comprises target area characteristics of the target crop growth monitoring image, target monitoring image blocking characteristics corresponding to each monitoring image blocking in the target crop growth monitoring image and target distribution characteristics corresponding to each monitoring image blocking in the target crop growth monitoring image;
And determining target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image based on the target region characteristics, the target monitoring image blocking characteristics corresponding to each monitoring image blocking and the target distribution characteristics corresponding to each monitoring image blocking.
With reference to the first aspect, in a possible implementation manner of the first aspect, the method for debugging the target growth condition category analysis network includes:
Acquiring a first crop growth monitoring image, a second crop growth monitoring image, a third crop growth monitoring image and a growth condition type analysis network to be debugged, wherein the growth condition types of the first crop growth monitoring image and the second crop growth monitoring image are the same, and the growth condition types of the first crop growth monitoring image and the third crop growth monitoring image are different;
acquiring first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image, second growth monitoring visual description knowledge corresponding to the second crop growth monitoring image and third growth monitoring visual description knowledge corresponding to the third crop growth monitoring image through the to-be-debugged growth condition type analysis network, wherein the first growth monitoring visual description knowledge, the second growth monitoring visual description knowledge and the third growth monitoring visual description knowledge respectively reflect the growth condition types of the first crop growth monitoring image, the second crop growth monitoring image and the third crop growth monitoring image, and the growth monitoring visual description knowledge corresponding to each crop growth monitoring image is determined according to the monitoring image blocking characteristics of each monitoring image blocking included by each crop growth monitoring image, the distribution characteristics of each monitoring image blocking included by each crop growth monitoring image and the area characteristics of each crop growth monitoring image;
Determining a first network training error value based on the first growth monitoring visual description knowledge, the second growth monitoring visual description knowledge and the third growth monitoring visual description knowledge, the first network training error value being used to reflect a relationship between a first common metric value and a second common metric value, the first common metric value being a common metric value between growth condition categories of the first crop growth monitoring image and the second crop growth monitoring image, the second common metric value being a common metric value between growth condition categories of the first crop growth monitoring image and the third crop growth monitoring image;
And optimizing the to-be-debugged growth condition type analysis network according to the fact that the first network training error value is larger than an error threshold to obtain a target growth condition type analysis network, wherein the target growth condition type analysis network is used for analyzing a growth condition type corresponding to the crop growth monitoring image.
With reference to the first aspect, in a possible implementation manner of the first aspect, the obtaining, through the to-be-debugged growth condition category analysis network, first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image includes:
Acquiring first crop growth monitoring image growth monitoring visual description knowledge corresponding to the first crop growth monitoring image through the to-be-debugged growth condition type analysis network, wherein the first crop growth monitoring image growth monitoring visual description knowledge comprises first region characteristics of the first crop growth monitoring image, first monitoring image blocking characteristics corresponding to each monitoring image blocking in the first crop growth monitoring image and first distribution characteristics corresponding to each monitoring image blocking in the first crop growth monitoring image;
And determining first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the first region characteristics, the first monitoring image blocking characteristics corresponding to each monitoring image blocking and the first distribution characteristics corresponding to each monitoring image blocking.
With reference to the first aspect, in a possible implementation manner of the first aspect, the determining, based on the first region feature, a first monitoring image blocking feature corresponding to the each monitoring image blocking, and a first distribution feature corresponding to the each monitoring image blocking, first growth monitoring vision description knowledge corresponding to the first crop growth monitoring image includes:
acquiring a confidence coefficient array based on the first region feature and the first monitoring image block feature corresponding to each monitoring image block, wherein the confidence coefficient array comprises confidence coefficients corresponding to each monitoring image block, and the confidence coefficients corresponding to each monitoring image block are used for reflecting the growth discrimination decision factors of each monitoring image block;
And determining first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the confidence coefficient array, the first monitoring image blocking features corresponding to each monitoring image blocking and the first distribution features corresponding to each monitoring image blocking.
With reference to the first aspect, in a possible implementation manner of the first aspect, the obtaining a confidence array based on the first region feature and a first monitored image block feature corresponding to the each monitored image block includes:
Carrying out knowledge embedding on the first regional characteristics to obtain first growth monitoring visual description embedded knowledge;
And weighting the first growth monitoring visual description embedding knowledge and the first monitoring image blocking characteristics corresponding to each monitoring image blocking respectively to obtain the confidence coefficient array.
With reference to the first aspect, in a possible implementation manner of the first aspect, the determining, based on the confidence array, the first monitoring image blocking feature corresponding to each monitoring image blocking, and the first distribution feature corresponding to each monitoring image blocking, first growth monitoring vision description knowledge corresponding to the first crop growth monitoring image includes:
Determining second growth monitoring visual description embedding knowledge based on first monitoring image blocking features corresponding to the monitoring image blocking and first distribution features corresponding to the monitoring image blocking, wherein the second growth monitoring visual description embedding knowledge is used for reflecting the growth condition category of the first crop growth monitoring image;
And performing a feature multiplication operation on knowledge elements of the same dimension in the confidence coefficient array and the second growth monitoring visual description embedding knowledge, to obtain the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image.
With reference to the first aspect, in a possible implementation manner of the first aspect, the determining the second growth monitoring visual description embedding knowledge based on the first monitoring image blocking feature corresponding to the each monitoring image blocking and the first distribution feature corresponding to the each monitoring image blocking includes:
Determining target quantized growth characteristics corresponding to each monitoring image block based on first monitoring image block characteristics corresponding to each monitoring image block and first distribution characteristics corresponding to each monitoring image block, wherein the target quantized growth characteristics corresponding to each monitoring image block are used for reflecting each monitoring image block;
And forming the second growth monitoring visual description embedding knowledge by the target quantized growth characteristics corresponding to the monitoring image blocks.
With reference to the first aspect, in a possible implementation manner of the first aspect, the determining, based on the first monitored image block feature corresponding to the each monitored image block and the first distribution feature corresponding to the each monitored image block, the target quantized growth feature corresponding to the each monitored image block includes:
For any one of the monitoring image blocks, summing the characteristic of the first monitoring image block corresponding to the any one of the monitoring image blocks and the characteristic member corresponding to the same dimension in the first distribution characteristic corresponding to the any one of the monitoring image blocks to obtain the associated block characteristic corresponding to the any one of the monitoring image blocks;
And acquiring target quantized growth characteristics corresponding to any one of the monitoring image blocks based on the associated block characteristics corresponding to any one of the monitoring image blocks.
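To make the above implementations easier to follow, the following is a minimal Python/PyTorch sketch of how a confidence array and the resulting growth monitoring visual description knowledge could be computed. The linear knowledge embedding, the softmax weighting, the tanh quantization and all tensor shapes are illustrative assumptions, not the specific operations claimed by the invention.

```python
import torch
import torch.nn as nn

class VisualDescriptionHead(nn.Module):
    """Illustrative sketch of the knowledge-construction steps described above
    for N monitoring image blocks with d-dimensional features (shapes assumed)."""

    def __init__(self, dim: int):
        super().__init__()
        # Knowledge embedding of the region feature (the "first growth monitoring
        # visual description embedding knowledge"); a linear layer is an assumption.
        self.region_embed = nn.Linear(dim, dim)

    def forward(self, region_feat, block_feats, dist_feats):
        # region_feat: (d,)   block_feats, dist_feats: (N, d)
        q = self.region_embed(region_feat)                 # embed the region feature
        # Confidence array: weight the embedded region feature against each block
        # feature; each entry acts as a growth-discrimination decision factor.
        confidence = torch.softmax(block_feats @ q, dim=0)        # (N,)
        # Associated block features: element-wise sum of the block feature and the
        # distribution feature of the same block (same-dimension feature members).
        associated = block_feats + dist_feats                     # (N, d)
        # Target quantized growth features, stacked into the "second growth
        # monitoring visual description embedding knowledge" (tanh is a stand-in).
        second_embedding = torch.tanh(associated)                 # (N, d)
        # Feature multiplication of same-dimension elements of the confidence
        # array and the second embedding knowledge yields the final knowledge.
        return confidence.unsqueeze(-1) * second_embedding        # (N, d)
```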
With reference to the first aspect, in a possible implementation manner of the first aspect, the determining a first network training error value based on the first growth monitor visual description knowledge, the second growth monitor visual description knowledge, and the third growth monitor visual description knowledge includes:
Determining a first common metric value based on the first and second growth monitoring visual description knowledge, the first common metric value being used to reflect a common metric value between a growth condition category of the first crop growth monitoring image and a growth condition category of the second crop growth monitoring image;
Determining a second common metric value based on the first and third growth monitoring visual description knowledge, the second common metric value being used to reflect a common metric value between a growth condition category of the first crop growth monitoring image and a growth condition category of the third crop growth monitoring image;
and determining the first network training error value through a target training error index based on the first commonality measurement value and the second commonality measurement value.
With reference to the first aspect, in a possible implementation manner of the first aspect, the optimizing the to-be-debugged growth condition category analysis network according to the first network training error value being greater than an error threshold to obtain a target growth condition category analysis network includes:
Optimizing the to-be-debugged growth condition type analysis network according to the fact that the first network training error value is larger than the error threshold to obtain a debugged growth condition type analysis network;
Acquiring a first associated blocking feature corresponding to the first crop growth monitoring image, a second associated blocking feature corresponding to the second crop growth monitoring image and a third associated blocking feature corresponding to the third crop growth monitoring image through the debugged growth condition type analysis network, wherein the first associated blocking feature, the second associated blocking feature and the third associated blocking feature respectively reflect the growth condition types of the first crop growth monitoring image, the second crop growth monitoring image and the third crop growth monitoring image;
Determining a second network training error value based on the first associated blocking feature, the second associated blocking feature, and the third associated blocking feature;
And taking the debugged growth condition type analysis network as the target growth condition type analysis network according to the fact that the second network training error value does not exceed the error threshold.
In a second aspect, the present invention also provides a monitoring image analysis system, including: a memory for storing program instructions and data; and a processor coupled to the memory for executing instructions in the memory to implement the method as described above.
In a third aspect, the present invention also provides a computer storage medium containing instructions which, when executed on a processor, implement the above-described method.
The invention provides a crop growth condition category analysis method based on deep learning. The method first obtains a target crop growth monitoring image and a target growth condition category analysis network of a target intelligent agricultural park. The monitoring image is then analyzed in depth by the target growth condition category analysis network to extract visual description knowledge that reflects the growth condition of the crop. This knowledge covers not only the surface characteristics of the image but also fuses multi-dimensional information such as the blocking features, distribution features and region features of the image. Finally, the growth condition category corresponding to the monitoring image is determined on the basis of this visual description knowledge. The method aims to improve the accuracy and comprehensiveness of crop growth condition assessment and provides strong support for the development of intelligent agriculture.
In summary, the invention provides a crop growth condition category analysis method based on deep learning aiming at the defects of the existing crop growth monitoring technology. By acquiring and analyzing the growth monitoring image of the crop, comprehensive and accurate growth information is extracted, a scientific and efficient decision basis is provided for an agricultural manager, and the rapid development of intelligent agriculture is promoted.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a monitoring image analysis method applied to an intelligent agricultural park according to an embodiment of the present invention.
Fig. 2 is a block diagram of a monitoring image analysis system according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention will be described below with reference to the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention.
It should be noted that the terms "first," "second," and the like in the description of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided by the embodiments of the present invention may be performed in a monitoring image analysis system, a computer device, or similar computing means. Taking operation on a monitoring image analysis system as an example, the monitoring image analysis system may comprise one or more processors (which may include, but are not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory for storing data, and optionally the monitoring image analysis system may further comprise transmission means for communication functions. It will be appreciated by those of ordinary skill in the art that the above-described configuration is merely illustrative and is not intended to limit the configuration of the monitoring image analysis system. For example, the monitoring image analysis system may also include more or fewer components than those shown above, or have a different configuration from that shown above.
The memory may be used to store a computer program, for example, a software program of application software and a module, for example, a computer program corresponding to a method for analyzing a monitoring image applied to an intelligent agricultural park in an embodiment of the present invention, and the processor executes various functional applications and data processing by running the computer program stored in the memory, that is, implements the method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory may further include memory remotely located with respect to the processor, the remote memory being connectable to the monitoring image analysis system through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. Specific examples of such a network may include a wireless network provided by a communication provider of the monitoring image analysis system. In one example, the transmission means comprises a network adapter (Network Interface Controller, NIC) that can be connected to other network devices via a base station to communicate with the internet. In another example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
Referring to fig. 1, fig. 1 is a flow chart of a method for analyzing a monitoring image applied to an intelligent agricultural park according to an embodiment of the invention. The method is applied to a monitoring image analysis system and may include steps S110 to S130.
S110, acquiring a target crop growth monitoring image and a target growth condition category analysis network of the target intelligent agricultural park.
S120, acquiring target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image through the target growth condition type analysis network, wherein the target growth monitoring visual description knowledge is used for reflecting the growth condition type of the target crop growth monitoring image, and the target growth monitoring visual description knowledge is determined according to the monitoring image blocking characteristics of each monitoring image blocking included in the target crop growth monitoring image, the distribution characteristics of each monitoring image blocking included in the target crop growth monitoring image and the regional characteristics of the target crop growth monitoring image.
S130, determining the growth condition category corresponding to the target crop growth monitoring image based on the target growth monitoring visual description knowledge.
In an intelligent agricultural park, a monitoring image analysis system is used as a core component and is responsible for acquiring and analyzing the growth condition of crops in real time. The following is a detailed procedure for the system to execute the technical solutions S110 to S130. The monitoring image analysis system captures a growth image of a target crop through a high-definition camera deployed in a park. These images are transmitted to the image processing unit of the system and marked as "target crop growth monitoring image". Meanwhile, the system loads a pre-trained target growth condition type analysis network, which is a deep learning model and is specially used for analyzing a crop growth image and identifying the growth condition type of the crop. Next, the system uses a target growth condition class resolution network to perform a depth analysis on the acquired target crop growth monitoring image. This process involves extracting features (e.g., color, texture, etc.) of individual tiles in the image, analyzing the distribution features (e.g., location, size, interrelationships, etc.) of the tiles in the image, and the regional features (e.g., overall brightness, contrast, etc.) of the entire image. These features are combined to form a set of 'visual description knowledge of target growth monitoring', which can comprehensively and accurately reflect the growth condition of target crops in the monitoring image. And finally, the system determines the growth condition type corresponding to the target crop growth monitoring image by comparing the preset growth condition type standard according to the extracted visual description knowledge of the target growth monitoring. For example, the system may identify that a crop in a certain area is deficient in water, pest, or nutrient, and feed this information back to the campus manager in real time. The manager can quickly take corresponding measures such as adjusting irrigation plans, applying pesticides or adding fertilizers according to the information so as to ensure healthy growth of crops. Through the application scene example, the actual application process of the monitoring image analysis technology in the intelligent agricultural park can be clearly seen, and how the monitoring image analysis technology helps the park to realize accurate and efficient agricultural production management.
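To make the S110-S130 flow concrete, the following minimal Python/PyTorch sketch shows the three steps end to end. The category labels, the preprocessing size and the two network modules (a pretrained analysis network and a classification head) are illustrative assumptions rather than details fixed by the invention.

```python
import torch
from PIL import Image
from torchvision import transforms

# Illustrative preset growth condition categories (assumed labels).
GROWTH_CATEGORIES = ["healthy growth", "water shortage", "nutrient deficiency", "pest or disease"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed input size
    transforms.ToTensor(),
])

def analyze_monitoring_image(image_path: str,
                             analysis_net: torch.nn.Module,
                             classifier_head: torch.nn.Module) -> str:
    """S110-S130 in miniature: acquire the monitoring image, extract its visual
    description knowledge with the (pretrained) analysis network, and map that
    knowledge to a preset growth condition category."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)  # S110
    with torch.no_grad():
        knowledge = analysis_net(image)        # S120: visual description knowledge
        logits = classifier_head(knowledge)    # S130: score against category standards
    return GROWTH_CATEGORIES[int(logits.argmax(dim=-1))]
```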
The technical solutions of S110 to S130 are illustrated below with a more detailed example.
In a wide intelligent agricultural park, a monitoring image analysis system is quietly and efficiently operating. It continuously captures every moment of crop growth through carefully arranged high-definition cameras in the park. For an agricultural specialist, carefully analyzing and interpreting this image data could take a great deal of time and effort; this advanced system, however, can deliver accurate judgments in a very short time.
When the system acquires a new target crop growth monitoring image, the system can quickly call a pre-trained target growth condition category analysis network. The network is like an experienced agricultural expert, and can accurately identify various features in the image. It not only focuses on the basic information of the overall hue, brightness, etc. of the image, but also goes deep into every detail of the image, such as the color, texture of the leaves, height, density, etc. of the plants. At the same time, it also considers the distribution and interrelationship of these features in the image, such as whether the plants in a certain area are too dense, whether illumination and ventilation are affected, etc.
By a comprehensive analysis of these features, the system is able to develop a complete set of visual descriptive knowledge of the target growth monitoring. The set of knowledge not only contains the current growth state of crops, but also reflects possible problems and hidden troubles of the crops. For example, the system may find that plant leaves in an area show yellowing, which generally means that the soil in the area may lack certain nutrient elements; or the system may detect that the plant growth rate in one area is significantly slower than that in other areas, possibly due to a failure of the irrigation system in that area, resulting in an insufficient supply of water.
Once the system acquires the set of visual descriptive knowledge of the target growth monitoring, the visual descriptive knowledge is immediately compared with the preset growth condition category standard. These standards are established based on actual demands and experience of agricultural production, covering a variety of possible growth conditions and problems. By comparison, the system can rapidly determine the growth condition types corresponding to the growth monitoring images of the target crops, such as healthy growth, water shortage, fertilizer shortage, plant diseases and insect pests and the like.
Once the growth status category is determined, the system presents the information in an intuitive and understandable manner, such as by generating a detailed report or by directly annotating the monitored image. Thus, the manager of the park can know the growth condition of crops at the first time and take corresponding management measures according to the information. For example, if crops in a certain area are found to be lack of water, a manager can adjust an irrigation plan in time; if signs of pest damage are found, the control work can be quickly organized.
With this monitoring image analysis system, the intelligent agricultural park can not only monitor and accurately judge the crop growth situation in real time, but also greatly improve the efficiency and quality of agricultural production. At the same time, the workload of managers is greatly reduced, allowing them to concentrate on innovation and improvement in agricultural production.
In S110, the target smart agricultural park refers to a modern agricultural park that uses advanced information technology, internet of things technology, big data analysis, artificial intelligence and other modern technological means to achieve the goal of intellectualization, precision, high efficiency and sustainability of agricultural production. For example, a large agricultural campus in a region integrates multiple intelligent technologies such as intelligent irrigation systems, climate monitoring systems, soil analysis systems, and crop growth monitoring systems. The crop growth environment in the park is monitored and intelligently controlled in real time so as to ensure the optimal condition for crop growth. Meanwhile, the garden optimizes the planting structure through data analysis, improves the yield and the quality, reduces the production cost and realizes agricultural sustainable development.
Explanation: the target crop growth monitoring image refers to a real-time or timed image which reflects the growth state and growth environment of a specific crop, captured by a monitoring camera installed in the intelligent agricultural park. These images are used to analyze the growth of the crop in order to find problems in time and take corresponding measures. For example, in a target smart agricultural park, a set of high-definition cameras are set for a rice growing area to capture a rice growing image. These images clearly show the growth information of the plant height, leaf color, density, etc. of the rice, and the surrounding environmental conditions such as light, soil humidity, etc. By analyzing the images, whether the rice grows normally or not can be judged, and whether management measures need to be adjusted in time or not can be judged.
The target growth condition category analysis network is a deep learning model or algorithm and is specially used for processing and analyzing crop growth monitoring images and automatically identifying and classifying the growth condition of crops in the images. The network can accurately identify various growth conditions and problems by training and learning a large amount of crop growth image data. For example, a deep learning model constructed based on Convolutional Neural Network (CNN) is trained on a large number of crop growth monitoring images, and has the capability of automatically identifying and classifying the growth conditions of crops. When a new crop growth image is input, the model can rapidly analyze the characteristics in the image and output the current growth status category of the crop, such as healthy growth, water shortage, insect disease and the like. The automatic identification and classification greatly improves the efficiency and accuracy of agricultural production management.
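As a concrete illustration of the CNN-based model mentioned above, a minimal PyTorch classifier might look as follows; the layer sizes and the number of categories are assumptions, and a production network would be considerably deeper and trained on a large labelled image set.

```python
import torch.nn as nn

class GrowthConditionCNN(nn.Module):
    """Illustrative CNN-based growth condition classifier of the kind the text
    refers to; layer sizes and the four example categories are assumptions."""

    def __init__(self, num_categories: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_categories)

    def forward(self, x):                      # x: (batch, 3, H, W) monitoring images
        feats = self.features(x).flatten(1)
        return self.classifier(feats)          # scores for healthy, water shortage, ...
```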
Based on this, for S110, the monitoring image analysis system plays a role in real-time monitoring and analysis of the growth state of crops as a core component of the intelligent agricultural park. During its operation, the primary task is to acquire target crop growth monitoring images of the target intelligent agricultural park and to load a target growth condition category resolution network for resolving these images. In a vast intelligent agricultural park, countless cameras look like intelligent eyes, focusing on every moment of crop growth. These high definition cameras are carefully placed at the corners of the campus to ensure that every detail of crop growth can be captured. The monitoring image analysis system continuously receives the image data transmitted by the cameras by establishing close connection with the cameras. When the system receives the image data, it will first perform preliminary screening and processing on the image to ensure that the quality and sharpness of the image meets the requirements of subsequent analysis. In the process, the system can utilize advanced image processing technology to perform operations such as denoising, enhancing and the like on the image, so that clearer and more accurate crop growth information is extracted. Meanwhile, the monitoring image analysis system also loads a pre-trained target growth condition category analysis network. The network is a core tool for analyzing and judging the system, has strong feature extraction and classification capability, and can accurately identify various growth conditions of crops in the image. The training process of this network is subject to extensive data learning and optimization, enabling it to accurately classify and judge various growth conditions. After loading the parsing network, the monitoring image analysis system may begin to perform in-depth analysis and processing of the received image data. The method can input image data into an analysis network, extract characteristic information in the image by utilizing the strong computing capacity of the network, and classify and judge the growth condition of crops according to the characteristic information. In the process, the system comprehensively considers various factors in the image, such as the appearance characteristics of the colors, textures, shapes and the like of crops, the environmental conditions of the crops and the like, so that a comprehensive and accurate judgment result is obtained. Through this series of operations and processes, the monitoring image analysis system successfully acquires target crop growth monitoring images of the target intelligent agricultural park and loads a target growth condition category analysis network for analyzing these images. The method lays a solid foundation for subsequent image analysis and crop growth condition judgment, and provides powerful support for efficient and accurate management of the intelligent agricultural park.
In S120, the target growth monitoring visual descriptive knowledge (target growth monitoring visual descriptive feature) refers to a comprehensive description of a series of visual features regarding the growth state of the crop extracted by the image analysis technique. These characteristics may include color, texture, shape, size, etc. that together form a comprehensive, detailed picture of the growth status of the crop. For example, in analyzing the rice growth monitoring image, the extracted visual descriptive knowledge of the target growth monitoring may include: the green degree of the blade, the fineness of the texture of the blade, the shape and density of the rice ears and the like. These characteristics are combined to describe the current growth state of the rice, such as health, water shortage, diseases and insect pests, etc.
The growth status category refers to a series of categories which are artificially divided according to different characteristics and states of crops in the growth process. These categories are commonly used to describe the growth of crops for targeted management and intervention. For example, common growth condition categories include "healthy growth", "water deficient", "fertilizer deficient", "pest" and the like. By analyzing the crop growth monitoring images, crops can be classified into corresponding growth condition categories, so that decision basis is provided for agricultural management.
The monitoring image blocking refers to the process of dividing the complete crop growth monitoring image into a plurality of small blocks or areas. The blocking process is helpful for more finely analyzing different parts in the image, and improves the accuracy and efficiency of analysis. For example, a rice growth monitor image may be divided into a plurality of segments, each segment containing a different portion of the rice or a different region of the growing environment. By analyzing each partition individually, more detailed growth information can be obtained.
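A minimal sketch of the blocking step follows; the 4x4 grid is an illustrative assumption, and the invention does not prescribe a particular partitioning scheme.

```python
import numpy as np

def block_monitoring_image(image: np.ndarray, rows: int = 4, cols: int = 4):
    """Divide a (H, W, C) monitoring image into rows x cols blocks; any remainder
    pixels at the right/bottom edges are simply dropped in this sketch."""
    h, w = image.shape[:2]
    bh, bw = h // rows, w // cols
    blocks, positions = [], []
    for r in range(rows):
        for c in range(cols):
            blocks.append(image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw])
            positions.append((r, c))           # kept for later distribution features
    return blocks, positions
```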
The monitoring image blocking feature refers to a feature extracted from each monitoring image block for describing the growth state of crops within that block. These features may be color, texture, shape, etc., and reflect the specific growth of the crop in the block. For example, in a certain monitoring image block, the extracted features may include the average green level of the rice leaves within the block, the roughness of the texture, and so on. These features help to determine whether the rice growth within the block is normal.
Distribution characteristics refer to characteristics that describe the spatial distribution relationship between different segments or elements in a monitored image. These features may reflect the overall layout and growth trend of the crop in the image. For example, in a rice growth monitor image, the distribution characteristics may describe the density, height distribution, etc. of rice plants in different sections. By analyzing these distribution characteristics, the uniformity and consistency of the growth of rice in the whole growth area can be known.
The region feature refers to a feature describing the overall attribute of a specific region in the monitoring image. These features are typically used to characterize the overall differences and similarities of different regions in an image. For example, in the rice growth monitoring image, the regional characteristics may include the overall brightness, color distribution, and the like of the different growth regions. By analyzing the regional characteristics, abnormal regions or problem regions possibly existing in the image can be identified, and guidance is provided for subsequent agricultural management.
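The hand-crafted statistics below are intended only to make the three kinds of features tangible (block features, distribution features, region features); in the patented method these are high-level representations produced by the analysis network rather than fixed formulas.

```python
import numpy as np

def block_features(block: np.ndarray) -> np.ndarray:
    """Stand-in block features: mean colour per channel plus a coarse texture
    measure (standard deviation of the grey intensity)."""
    mean_color = block.reshape(-1, block.shape[-1]).mean(axis=0)
    texture = block.mean(axis=-1).std()
    return np.append(mean_color, texture)

def distribution_features(position, grid=(4, 4)) -> np.ndarray:
    """Stand-in distribution features: the block's normalised position in the grid."""
    r, c = position
    return np.array([r / grid[0], c / grid[1]])

def region_features(image: np.ndarray) -> np.ndarray:
    """Stand-in region features: overall brightness and contrast of the image."""
    gray = image.mean(axis=-1)
    return np.array([gray.mean(), gray.std()])
```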
Based on this, for S120, the monitoring image analysis system is used as an execution subject, and after the target crop growth monitoring image is acquired, the image is further analyzed by using the preloaded target growth condition category analysis network, so as to acquire the target growth monitoring visual description knowledge capable of reflecting the growth condition of the target crop. In the process, the system firstly carries out integral scanning on the received growth monitoring image, identifies key elements and structures in the image, and lays a foundation for subsequent feature extraction and analysis. The system then employs advanced image processing techniques to divide the image into a number of monitored image segments. Each tile contains a portion of the information of the image, and such processing helps the system analyze the various portions of the image more carefully. The system then extracts for each monitored image patch its characteristic monitored image patch feature. These characteristics may include color, texture, shape, etc. that together constitute a detailed description of the state of crop growth within the segment. At the same time, the system also analyzes the distribution characteristics among different blocks, namely the spatial relationship and the layout mode of the blocks in the image. These distribution characteristics reflect the overall trend and regularity of the crop during growth. In addition, the system captures a regional signature of the target crop growth monitoring image. These region features describe the overall properties and differences of the different regions in the image, helping the system to identify areas of anomalies or problems that may exist. After extracting the abundant characteristic information, the monitoring image analysis system inputs the characteristic information into a target growth condition category analysis network. The network integrates the characteristic information into a set of comprehensive and fine target growth monitoring visual description knowledge through complex calculation and reasoning processes. The set of knowledge not only contains the current growth state information of crops, but also reflects possible problems and hidden dangers. Finally, the set of visual descriptive knowledge for target growth monitoring will be used to determine the type of growth condition to which the target crop growth monitoring image corresponds. By comparing the preset growth condition category standards, the system can rapidly identify the growth condition of crops in the image, thereby providing timely and accurate decision support for agricultural management personnel. In the whole process, the monitoring image analysis system comprehensively utilizes advanced image processing technology and deep learning algorithm to realize comprehensive analysis and accurate judgment of the crop growth monitoring image. The intelligent level of agricultural production is improved, and powerful technical support is provided for realizing sustainable development of agriculture.
For S130, after acquiring the visual description knowledge of the target growth monitoring, the monitoring image analysis system enters the key stage of determining the growth status category corresponding to the target crop growth monitoring image. This process is not a simple classification task; rather, it requires the system to use its powerful computing power and the rich experience of previous learning to parse and match the visual description knowledge in depth. The system first carries out a comprehensive examination of the visual description knowledge of target growth monitoring and identifies the key features and patterns within it. These characteristics may include the shade of the leaves, the fineness of the texture, variations in morphology, etc., each of which may be an important basis for determining the growth condition. At the same time, the system may also focus on the correlations and trends of change between these features in order to capture possible anomalies or signs of problems. The monitoring image analysis system then compares these features to predefined growth condition categories. These categories are carefully designed based on the expertise of agricultural specialists and domain knowledge, each corresponding to a unique combination of features and performance patterns. The system searches for the most consistent growth condition category by calculating the similarity or degree of match between the target growth monitoring visual description knowledge and each category. In this process, the system may apply complex algorithms and models, such as deep learning classifiers or support vector machines, to ensure the accuracy and reliability of the matching. These algorithms remain efficient and stable when processing large amounts of data and complex patterns, and are powerful tools for the system to achieve accurate classification. Finally, through a series of calculations and comparisons, the monitoring image analysis system can determine the growth condition category corresponding to the target crop growth monitoring image. This category reflects not only the current growth status of the crop but may also be predictive of its future trends and possible problems. Based on this category information, agricultural management personnel can take corresponding management and intervention measures in a timely manner to ensure healthy growth and high yield and quality of crops. Throughout the process, the monitoring image analysis system provides timely and accurate decision support for agricultural production by using its powerful computing capacity and accurate matching algorithms. This improves the intelligence and precision of agricultural production and lays a solid foundation for the green and sustainable development of agriculture.
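As a simplified illustration of the matching step, the sketch below compares the extracted visual description knowledge with one prototype vector per preset growth condition category using cosine similarity; representing the category standards as prototype vectors is an assumption made for illustration only.

```python
import numpy as np

def match_growth_category(knowledge: np.ndarray, prototypes: dict) -> str:
    """Return the preset growth condition category whose prototype vector is most
    similar (cosine similarity) to the extracted visual description knowledge."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(prototypes, key=lambda name: cosine(knowledge, prototypes[name]))

# Example with purely illustrative vectors:
# prototypes = {"healthy growth": np.ones(8), "water shortage": -np.ones(8)}
# match_growth_category(np.ones(8), prototypes)   # -> "healthy growth"
```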
The beneficial effects of the present invention are mainly embodied in the following aspects.
Firstly, by acquiring the target crop growth monitoring image of the target intelligent agricultural park, the method can grasp the growth state of crops in real time, and provide visual and accurate growth information for agricultural managers. The real-time monitoring mode greatly improves timeliness and pertinence of crop management, and is helpful for timely finding and solving the problems in the growth process of crops.
Secondly, the invention utilizes the target growth condition category analysis network to carry out deep analysis on the obtained crop growth monitoring image, and extracts the visual description knowledge of the target growth monitoring. The knowledge comprehensively reflects various characteristics and conditions of crop growth, and provides abundant and accurate data support for subsequent growth condition type judgment.
In addition, when the growth condition type is determined, the characteristics of the monitored image blocks, the distribution characteristics and the regional characteristics are comprehensively considered, so that the judgment result is more comprehensive and careful. The multidimensional feature fusion method improves accuracy and reliability of category judgment, and is beneficial to an agricultural manager to make more scientific and reasonable decisions.
Finally, by determining the growth condition category corresponding to the target crop growth monitoring image, the invention provides agricultural managers with a convenient and efficient crop growth condition assessment tool. This not only helps to raise the intelligence level of agricultural production, but also promotes the rational use of agricultural resources and the protection of the ecological environment, driving the sustainable development of agriculture.
In summary, the invention provides comprehensive and accurate growth information and assessment tools for agricultural managers by acquiring and analyzing the crop growth monitoring images in real time, thereby being beneficial to improving the efficiency and quality of agricultural production and promoting the development of intelligent agriculture.
In some possible embodiments, the step of obtaining, through the target growth condition category analysis network, the target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image described in step S120 includes steps S121-S122.
S121, acquiring target crop growth monitoring image growth monitoring vision description knowledge corresponding to the target crop growth monitoring image through the target growth condition type analysis network, wherein the target crop growth monitoring image growth monitoring vision description knowledge comprises target area characteristics of the target crop growth monitoring image, target monitoring image blocking characteristics corresponding to each monitoring image blocking in the target crop growth monitoring image and target distribution characteristics corresponding to each monitoring image blocking in the target crop growth monitoring image.
S122, determining target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image based on the target region characteristics, the target monitoring image blocking characteristics corresponding to each monitoring image blocking and the target distribution characteristics corresponding to each monitoring image blocking.
In some possible embodiments, for step S120 of the technical solution, in which the target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image is obtained through the target growth condition category analysis network, steps S121 and S122 can be further refined and explained in detail.
First, when step S121 is performed, the monitoring image analysis system uses a preloaded target growth status category analysis network to perform depth analysis on the obtained target crop growth monitoring image. In this process, the network will extract various feature information for different areas and details in the image. In particular, the system identifies and extracts target area features of a target crop growth monitoring image, which may include color, texture, shape, etc., for describing the general properties and status of a particular area in the image. Meanwhile, the system divides the image into a plurality of monitoring image blocks, and extracts the characteristic target monitoring image block characteristics of each block. These blocking features can reflect detailed information and growth conditions of different local areas in the image. In addition, the system also analyzes the distribution and arrangement modes of the monitoring image blocks in the images, and extracts the corresponding target distribution characteristics. These distribution features help reveal the overall trend and spatial relationship of crop growth.
Next, step S122 is entered, where the monitoring image analysis system fuses and integrates the target region feature extracted in the previous step, the target monitoring image block feature corresponding to each monitoring image block, and the target distribution feature corresponding to each monitoring image block. This process may involve complex feature fusion algorithms and models to ensure efficient binding and complementation between different features. Finally, the system can generate a set of comprehensive and detailed target growth monitoring visual description knowledge, and the set of knowledge integrates various characteristic information in the image, so that the growth condition and the category of the target crop growth monitoring image can be accurately reflected.
It is noted that the "target region feature", "target monitoring image blocking feature" and "target distribution feature" here are high-level feature representations automatically extracted from an image by a deep learning algorithm. They differ from the low-level features of conventional image processing (e.g., pixel values, color histograms) in that they are high-level feature descriptions carrying more semantic information and abstraction. These high-level features enable the system to understand and interpret the image content more accurately, thereby enabling effective monitoring and assessment of crop growth conditions.
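A minimal fusion sketch for S121-S122 is given below; mean-pooling the per-block features and concatenating them with the region feature is one plausible fusion rule, assumed here for illustration, not the specific fusion claimed by the invention.

```python
import torch

def fuse_description_knowledge(region_feat: torch.Tensor,
                               block_feats: torch.Tensor,
                               dist_feats: torch.Tensor) -> torch.Tensor:
    """Combine the target region feature with the per-block features and their
    distribution features into one visual description knowledge vector."""
    per_block = torch.cat([block_feats, dist_feats], dim=-1)   # (N, d1 + d2)
    pooled = per_block.mean(dim=0)                             # summarise the N blocks
    return torch.cat([region_feat, pooled], dim=-1)            # fused descriptor
```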
In summary, through the detailed implementation and explanation of the two sub-steps S121 and S122, it can be more clearly understood how to obtain the process and method of the visual description knowledge of target growth monitoring corresponding to the target crop growth monitoring image through the target growth condition category analysis network in the technical scheme. The process not only relates to the key technical points such as extraction and fusion of image features, but also embodies great potential and value of deep learning in the application of the agricultural field.
The debugging steps of the target growth condition category analysis network are described below through steps S210 to S240.
S210, acquiring a first crop growth monitoring image, a second crop growth monitoring image, a third crop growth monitoring image and a growth condition type analysis network to be debugged, wherein the growth condition types of the first crop growth monitoring image and the second crop growth monitoring image are the same, and the growth condition types of the first crop growth monitoring image and the third crop growth monitoring image are different.
S220, acquiring first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image, second growth monitoring visual description knowledge corresponding to the second crop growth monitoring image and third growth monitoring visual description knowledge corresponding to the third crop growth monitoring image through the to-be-debugged growth condition type analysis network, wherein the first growth monitoring visual description knowledge, the second growth monitoring visual description knowledge and the third growth monitoring visual description knowledge reflect the growth condition types of the first crop growth monitoring image, the second crop growth monitoring image and the third crop growth monitoring image respectively, and the growth monitoring visual description knowledge corresponding to each crop growth monitoring image is determined according to the monitoring image blocking characteristics of each monitoring image blocking included in each crop growth monitoring image, the distribution characteristics of each monitoring image blocking included in each crop growth monitoring image and the area characteristics of each crop growth monitoring image.
S230, determining a first network training error value based on the first growth monitoring visual description knowledge, the second growth monitoring visual description knowledge and the third growth monitoring visual description knowledge, wherein the first network training error value is used for reflecting the relation between a first common measurement value and a second common measurement value, the first common measurement value is a common measurement value between growth condition categories of the first crop growth monitoring image and the second crop growth monitoring image, and the second common measurement value is a common measurement value between growth condition categories of the first crop growth monitoring image and the third crop growth monitoring image.
And S240, optimizing the to-be-debugged growth condition type analysis network according to the fact that the first network training error value is larger than an error threshold to obtain a target growth condition type analysis network, wherein the target growth condition type analysis network is used for analyzing a growth condition type corresponding to a crop growth monitoring image.
In the area of smart agriculture, the system requires a complex series of processing and analysis steps in order to accurately identify and classify the growth conditions of crops. The implementation of this solution will be explained in detail below.
First, in step S210, the system acquires three crop growth monitoring images: the first crop growth monitoring image, the second crop growth monitoring image and the third crop growth monitoring image, together with a growth condition category analysis network to be debugged. The selection of these three images is deliberate: the growth condition categories of the first and second images are the same, while the growth condition categories of the first and third images are different. This arrangement is intended to allow the subsequent network training to effectively distinguish the feature differences between different growth condition categories.
Then, step S220 is performed, where the system processes the three images by using the growth status type analysis network to be debugged. Through the deep learning and feature extraction capability of the network, the system can acquire the growth monitoring visual description knowledge corresponding to the three images respectively: first growth monitoring visual description knowledge, second growth monitoring visual description knowledge, and third growth monitoring visual description knowledge. The knowledge is obtained based on the combination of the characteristics of each monitoring image block, the distribution characteristics and the overall area characteristics in the image, and can comprehensively reflect the growth condition and the category information of crops in the image.
Then, in step S230, the system performs further calculations and analysis based on the three previously acquired knowledge of the visual description of growth monitoring. The system will evaluate the accuracy of the network to be commissioned in identifying the same and different growth condition categories by calculating a first network training error value. This error value reflects the link between the network's commonality metric value identifying two images of the same class (first and second images) and the commonality metric value identifying two images of different classes (first and third images). If the network is able to accurately identify images of the same and different categories, then this error value will be small; conversely, if the network is confused or otherwise erroneous in the identification process, the error value will be greater.
Finally, in step S240, the system determines whether the first network training error value is greater than a preset error threshold. If the error value is greater than the threshold, the recognition capability of the network is still to be improved, and the system optimizes and adjusts the network to improve the recognition accuracy. This optimization process may include adjusting parameters, structure, or training strategies of the network, etc. After optimization, the system can obtain a more accurate and reliable target growth condition type analysis network, and the network can be better used for analyzing the growth condition type corresponding to the crop growth monitoring image, so that powerful support and guidance are provided for agricultural production.
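As a hedged illustration of how S210 to S240 could fit together, the Python sketch below wires the three-image acquisition, description extraction, error computation and conditional optimization into one loop. The `network`, `loader` and `error_fn` objects are hypothetical placeholders (a description-extraction model, a source of image triplets, and a training-error index such as the one sketched later in this description), not components named by the patent.

```python
import torch

def debug_network(network, loader, error_fn, error_threshold, lr=1e-4, max_rounds=100):
    """Illustrative S210-S240 loop: 'loader' yields (first, second, third) image
    triplets where first/second share a growth-condition category and third differs."""
    optimizer = torch.optim.Adam(network.parameters(), lr=lr)
    for _ in range(max_rounds):
        worst_error = 0.0
        for img1, img2, img3 in loader:                               # S210
            k1, k2, k3 = network(img1), network(img2), network(img3)  # S220
            error = error_fn(k1, k2, k3)                              # S230: training error value
            if error.item() > error_threshold:                        # S240: optimize only if too large
                optimizer.zero_grad()
                error.backward()
                optimizer.step()
            worst_error = max(worst_error, error.item())
        if worst_error <= error_threshold:
            return network                                            # target analysis network
    return network
```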
To facilitate an understanding of the overall scheme, a detailed description of the sub-steps of S210-S240 is continued below.
For S210, in the application scenario of smart agriculture, the monitoring image analysis system plays a core role: it is responsible for collecting and analyzing crop growth monitoring images in order to accurately evaluate the growth condition of the crops. To achieve this, the system needs to acquire a plurality of crop images covering different growth condition categories, together with a growth condition category analysis network to be debugged. The system first acquires three key crop growth monitoring images: a first crop growth monitoring image, a second crop growth monitoring image and a third crop growth monitoring image. The selection of these three images is not arbitrary but carefully made. The first crop growth monitoring image and the second crop growth monitoring image show crops of the same growth condition category, meaning that they share similar visual characteristics and growth patterns. Such images are very important to the system, because they can be used to verify the accuracy and stability of the network to be debugged when identifying the same growth condition category. Meanwhile, the first crop growth monitoring image and the third crop growth monitoring image show different growth condition categories. This difference may be caused by differences in moisture, nutrients, pests and diseases, or other environmental factors. These two images are included in order to test the sensitivity and discriminative ability of the network to be debugged when distinguishing between different growth condition categories. The system needs to ensure that the network is able to capture these subtle but critical differences in order to accurately classify the growth conditions of the crop. In addition to the image data, the system also requires the growth condition category analysis network to be debugged. This network is the system's core tool for analysis and identification, and it requires extensive training and learning to continuously optimize its performance. After the three key images are acquired, the system inputs them into the network to be debugged, and uses the deep learning and feature extraction capabilities of the network to extract key information from the images, such as the color, texture and shape of the crops, as well as the distribution and arrangement of these characteristics within the images. In summary, by acquiring crop images having the same and different growth condition categories together with a growth condition category analysis network to be debugged, the monitoring image analysis system can evaluate the growth condition of the crop more comprehensively and accurately. This not only helps to improve the efficiency and quality of agricultural production, but also provides related personnel and agricultural specialists with a more powerful and convenient tool for monitoring and managing the growth process of crops.
For S220, after receiving the first, second and third crop growth monitoring images, the monitoring image analysis system uses the growth condition category analysis network to be debugged to perform deep analysis and processing on these images. The network is designed to extract key information about the growth condition of crops from complex image data and convert it into visual description knowledge that is easy to understand and process. During processing, the system first divides each crop growth monitoring image into a plurality of monitoring image blocks. The blocks may be a regular grid or regions adaptively divided according to the image content. Each monitoring image block contains a portion of the detailed information in the image, such as the color, texture and shape of the crop and its relationship with other objects. Using the deep learning capability of the network to be debugged, the system extracts features for each monitoring image block; these features are called monitoring image block features, and they reflect the growth status, health status and possible problems of the crop within each block. In addition to the monitoring image block features, the system also analyzes the distribution and arrangement of the blocks in the whole image. Such distribution features can reveal the overall trend of crop growth, spatial relationships, and environmental influences the crop may be experiencing. For example, if the crop distribution is sparse in an area, it may mean that the area is under-fertilized or that other growth-limiting factors are present. In addition, the system extracts the region features of each crop growth monitoring image. These features describe the overall properties and states of the different regions in the image, such as brightness, contrast and color balance, and they provide macroscopic information about the crop growth environment, such as lighting conditions and soil type. By combining these three kinds of features (the monitoring image block features, the distribution features and the region features), the system can generate comprehensive and detailed crop growth monitoring visual description knowledge. For the first, second and third crop growth monitoring images, the system generates first growth monitoring visual description knowledge, second growth monitoring visual description knowledge and third growth monitoring visual description knowledge, respectively. This knowledge not only contains the visual information in the images, but also integrates the deep learning and reasoning capabilities of the network, so that the system can understand and interpret the image content more accurately. In this way, the monitoring image analysis system can provide powerful support for agricultural production: related personnel and agricultural specialists can quickly and accurately evaluate the growth condition of crops according to the visual description knowledge, and discover and resolve potential problems in time, thereby ensuring healthy growth, high yield and high quality of the crops. Meanwhile, this knowledge can also provide data support for agricultural production decisions and promote the development of agriculture in a more intelligent and precise direction.
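The patent leaves the blocking scheme and the feature extractor unspecified; the following Python sketch (PyTorch, assuming a regular grid blocking and a small convolutional backbone) shows one way a single image could be split into monitoring image blocks, with one feature vector per block plus a crude region-level feature. All layer sizes and the mean-based region feature are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BlockFeatureExtractor(nn.Module):
    """Split an image into a regular grid of monitoring image blocks and produce
    one feature vector per block plus a simple global (region-level) feature."""
    def __init__(self, block_size=64, feat_dim=128):
        super().__init__()
        self.block_size = block_size
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim),
        )

    def forward(self, image):                                 # image: (3, H, W)
        s = self.block_size
        blocks = image.unfold(1, s, s).unfold(2, s, s)        # (3, nH, nW, s, s); remainder is dropped
        blocks = blocks.permute(1, 2, 0, 3, 4).reshape(-1, 3, s, s)
        block_feats = self.backbone(blocks)                   # (N, feat_dim), one vector per block
        region_feat = block_feats.mean(dim=0)                 # crude stand-in for the region feature
        return block_feats, region_feat
```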
For S230, the monitoring image analysis system further uses the growth monitoring visual description knowledge corresponding to the first, second and third crop growth monitoring images to determine the network training error value. This error value is critical for evaluating the performance of the network and for subsequent optimization adjustments. The system first calculates a commonality metric value between the first crop growth monitoring image and the second crop growth monitoring image, i.e., the first commonality metric value. Since the growth condition categories of the two images are the same, the commonality metric value between them should be relatively high; it reflects the accuracy and consistency of the network in identifying the same growth condition category. The system then calculates a commonality metric value between the first crop growth monitoring image and the third crop growth monitoring image, i.e., the second commonality metric value. Since the growth condition categories of these two images are different, the commonality metric value between them should be relatively low; it reveals the sensitivity and discriminative ability of the network in distinguishing between different growth condition categories. After the two commonality metric values are obtained, the system further analyzes the relationship between them. This relationship effectively reflects the performance difference of the network in identifying the same and different growth condition categories: if the network performs well both when identifying the same category and when distinguishing between different categories, the first commonality metric value will be clearly higher than the second; otherwise, the gap between them narrows or even reverses. To quantify this performance difference, the system calculates the first network training error value. This error value is determined based on the relative magnitudes of the first and second commonality metric values and the degree of difference between them. If the first commonality metric value is clearly higher than the second commonality metric value, the first network training error value is smaller, indicating better network performance; conversely, if the difference between the two is small, or the second commonality metric value is higher than the first, the first network training error value is larger, indicating that the performance of the network still needs to be improved. Through this first network training error value, the monitoring image analysis system can objectively evaluate the accuracy and stability of the current network in identifying crop growth condition categories. This provides a solid basis and guidance for subsequent network optimization and tuning: the system can adjust the parameters, structure or training strategy of the network in a targeted manner according to the magnitude of the error value, so as to improve its performance and accuracy in identifying crop growth condition categories.
Under some preferred design ideas, the step S220 of obtaining, through the to-be-debugged growth condition category analysis network, first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image includes steps S221-S222.
S221, acquiring first crop growth monitoring image growth monitoring visual description knowledge corresponding to the first crop growth monitoring image through the to-be-debugged growth condition type analysis network, wherein the first crop growth monitoring image growth monitoring visual description knowledge comprises first region characteristics of the first crop growth monitoring image, first monitoring image blocking characteristics corresponding to each monitoring image blocking in the first crop growth monitoring image and first distribution characteristics corresponding to each monitoring image blocking in the first crop growth monitoring image.
S222, determining first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the first region characteristics, the first monitoring image blocking characteristics corresponding to each monitoring image blocking and the first distribution characteristics corresponding to each monitoring image blocking.
Under some preferred design considerations, the monitoring image analysis system further refines its processing when executing the steps described in S220, which includes two sub-steps S221 and S222. How these two sub-steps are implemented will be explained in detail below.
Firstly, the monitoring image analysis system can utilize a growth condition type analysis network to be debugged to deeply analyze the first crop growth monitoring image. The network is specially trained to identify and extract key information related to crop growth conditions in the image. By analysis, the system will acquire a series of detailed features of the first crop growth monitoring image, which together constitute a visual descriptive knowledge of the first crop growth monitoring image growth monitoring.
Specifically, this knowledge includes three main aspects: the first region features, the first monitoring image blocking features, and the first distribution features. The first region features describe the general properties and state of the different regions in the image, such as brightness, contrast and color balance, and reflect macroscopic information about the crop growth environment. The first monitoring image blocking features focus on the detailed information of each block in the image, such as the color, texture and shape of the crops, and can reveal the specific growth condition and health state of the crops. The first distribution features describe the arrangement and distribution pattern of the blocks in the image, revealing the spatial relationships and overall trend of crop growth.
After the first regional features, the first monitoring image blocking features and the first distribution features are obtained, the monitoring image analysis system fuses and processes the features to generate first growth monitoring vision description knowledge corresponding to the first crop growth monitoring image. The process is based on deep learning and pattern recognition technology, and the system weights, screens and combines the features according to preset algorithms and models so as to generate a comprehensive and accurate visual description knowledge graph.
The first growth monitoring visual description knowledge not only contains visual information in the image, but also integrates deep learning and reasoning capabilities of the system, so that the system can more accurately understand and explain the image content. It provides powerful data support for subsequent crop growth condition evaluation, problem diagnosis and decision support.
In summary, through the detailed execution of the two sub-steps S221 and S222, the monitoring image analysis system can obtain the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image, and provides important technical support for the intelligent management of agricultural production. The technical scheme not only improves the accuracy and efficiency of crop growth monitoring, but also provides more convenient and reliable analysis tools for related personnel and agricultural specialists, and promotes the modernization and intelligent development of agricultural production.
In some optional solutions, the determining, based on the first region feature, the first monitoring image partition feature corresponding to the each monitoring image partition, and the first distribution feature corresponding to the each monitoring image partition, the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image described in S222 includes: acquiring a confidence coefficient array based on the first region feature and the first monitoring image block feature corresponding to each monitoring image block, wherein the confidence coefficient array comprises confidence coefficients corresponding to each monitoring image block, and the confidence coefficients corresponding to each monitoring image block are used for reflecting the growth discrimination decision factors of each monitoring image block; and determining first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the confidence coefficient array, the first monitoring image blocking features corresponding to each monitoring image blocking and the first distribution features corresponding to each monitoring image blocking.
In some alternative embodiments, the implementation of step S222 may be further refined into two main phases.
First, the monitoring image analysis system obtains a confidence array based on the first region feature and the first monitoring image block feature corresponding to each monitoring image block. Each element in the confidence coefficient array corresponds to a monitoring image block in the image and reflects the reliability of the block in judging the growth condition of crops. These confidence levels are calculated by a deep learning algorithm and represent the confidence level of the growth decision made by the system for each monitored image patch.
Specifically, the system analyzes each monitored image block in detail to extract its unique visual features such as color, texture, shape, etc. These features are then matched and compared to pre-trained models to calculate the confidence level for each segment. The first region feature also plays an important role in this process, and it provides context information for the image as a whole, helping the system to more accurately understand and interpret the features of each tile.
Next, the system uses the confidence coefficient array to combine the first monitoring image blocking feature and the first distribution feature corresponding to each monitoring image blocking to determine the first growth monitoring vision description knowledge corresponding to the first crop growth monitoring image. In this process, the system will weight the features of each segment according to the values in the confidence array to reflect their importance in determining the crop growth condition. At the same time, the first distribution feature is also taken into consideration, which reveals the spatial relationship and distribution pattern of each block in the image, helping the system to more fully understand the growth condition of the crops.
Finally, by comprehensively considering the features and confidence information, the system will generate a comprehensive and accurate first growth monitoring visual description knowledge. The knowledge not only contains detailed information and visual characteristics in the image, but also integrates deep learning and reasoning capabilities of the system, so that the system can more accurately understand and explain the growth condition of crops. The method provides powerful data support for subsequent agricultural production decisions, and is beneficial to realizing the intellectualization and the precision of agricultural production.
Further, the obtaining a confidence coefficient array based on the first region feature and the first monitoring image blocking feature corresponding to each monitoring image blocking includes: carrying out knowledge embedding on the first regional characteristics to obtain first growth monitoring visual description embedded knowledge; and weighting the first growth monitoring visual description embedding knowledge and the first monitoring image blocking characteristics corresponding to each monitoring image blocking respectively to obtain the confidence coefficient array.
Further, in the process of obtaining the confidence array, the monitoring image analysis system performs the following detailed steps.
First, the system performs knowledge embedding processing on the first region feature. Knowledge embedding is a technique that converts raw data or features into a low-dimensional vector representation that retains the critical information and structure of the raw data. In this step, the system converts the first region features into a vector representation called first growth monitor visual description embedded knowledge using a pre-trained embedded model or algorithm. This embedded knowledge vector contains compressed and abstract information of the first region features, which helps the system to process and understand the image data more efficiently.
Next, the system uses the first growth monitoring visual description embedded knowledge to weight the first monitoring image block features corresponding to each monitoring image block. The purpose of the weighting is to adjust the weight or influence of each block feature in subsequent analysis based on the importance of the region feature. Specifically, the system assigns a weight value to each of the partitioned features based on the similarity or correlation between the first growth monitor visual description embedded knowledge and each of the partitioned features. This weight reflects the importance or reliability of the blocking feature in determining the growth status of the crop.
Through the weighting process, the system obtains a confidence array. Each element in the array corresponds to a monitored image patch in the image, the value of which represents the confidence or reliability of the patch in determining the condition of crop growth. The high confidence segmented features will play a greater role in subsequent analysis and decision making, while the low confidence segmented features may be ignored or considered with less weight.
In general, by performing knowledge embedding on the first regional features and weighting the first monitored image block features corresponding to each monitored image block, the monitored image analysis system can obtain a confidence coefficient array reflecting the importance or reliability of each block in judging the growth condition of crops. This array provides an important basis and support for the generation of subsequent growth monitoring visual description knowledge.
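One natural reading of this confidence mechanism is an attention-style weighting: embed the region feature, score every block feature against it, and normalise the scores into the confidence array. The module below is a Python sketch under that assumption; the layer sizes, the scaled dot-product scoring and the softmax normalisation are illustrative choices, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceArray(nn.Module):
    """Embed the region feature and score each monitoring-image-block feature
    against it; the normalised scores serve as the confidence coefficient array."""
    def __init__(self, feat_dim=128, embed_dim=64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)   # knowledge embedding of the region feature
        self.proj = nn.Linear(feat_dim, embed_dim)    # projection for the block features

    def forward(self, region_feat, block_feats):      # shapes: (D,), (N, D)
        query = self.embed(region_feat)               # embedded visual description of the region
        keys = self.proj(block_feats)                 # (N, E)
        scores = keys @ query / keys.shape[-1] ** 0.5 # weighted comparison, one score per block
        return F.softmax(scores, dim=0)               # (N,) confidence per monitoring image block
```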
Further, the determining, based on the confidence coefficient array, the first monitoring image blocking feature corresponding to each monitoring image blocking and the first distribution feature corresponding to each monitoring image blocking, first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image includes: determining second growth monitoring visual description embedding knowledge based on first monitoring image blocking features corresponding to the monitoring image blocking and first distribution features corresponding to the monitoring image blocking, wherein the second growth monitoring visual description embedding knowledge is used for reflecting the growth condition category of the first crop growth monitoring image; and performing a feature multiplication operation on the confidence coefficient array and the knowledge elements of the corresponding dimensions in the second growth monitoring visual description embedding knowledge, to obtain first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image.
Further, upon determining the first crop growth monitoring visual description knowledge corresponding to the first crop growth monitoring image, the monitoring image analysis system performs the following detailed steps.
First, the system determines second growth monitoring visual description embedding knowledge based on the first monitoring image blocking features corresponding to each monitoring image block and the first distribution features corresponding to each monitoring image block. This embedded knowledge is calculated by a deep learning algorithm; it fuses the key information of the blocking features and the distribution features, and can therefore comprehensively and accurately reflect the growth condition category of the first crop growth monitoring image.
Specifically, the system firstly analyzes and processes the first monitoring image blocking feature of each monitoring image blocking in detail, and extracts key information related to the growth condition. The system then combines the first distribution characteristics to further enrich and refine the content of the embedded knowledge, taking into account the spatial relationship and distribution pattern of the tiles in the image. The resulting second growth monitoring visual description embedding knowledge is a comprehensive representation containing information of the growth conditions of the individual segments in the image.
Next, the system performs a feature multiplication operation on the confidence array and the knowledge elements of the corresponding dimensions in the second growth monitoring visual description embedding knowledge. The purpose of this operation is to combine the weight information in the confidence array with the growth condition information carried by the embedded knowledge, thereby obtaining more accurate and reliable first growth monitoring visual description knowledge.
In the feature multiplication operation, the system multiplies each element in the confidence array by the element of the corresponding dimension in the embedded knowledge. In this way, blocking features with high confidence carry more weight in the result, while blocking features with low confidence are correspondingly weakened. By this means, the system can make full use of the weight information provided by the confidence array to adjust and optimize, in a targeted manner, the growth condition information carried by the embedded knowledge.
Finally, the first growth monitoring visual description knowledge obtained through the characteristic multiplication operation is a comprehensive representation which comprehensively considers the growth condition and weight information of each block in the image. The knowledge provides powerful data support and analysis basis for subsequent crop growth condition evaluation, problem diagnosis and decision support.
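A minimal Python sketch of the feature multiplication step follows, assuming the confidence array holds one scalar per block and the second embedding knowledge is a matrix of per-block target quantized growth features; the final pooling into a single vector is an added assumption rather than something the patent states.

```python
import torch

def apply_confidence(confidence, embedded_knowledge):
    """confidence:         (N,)   one confidence value per monitoring image block
    embedded_knowledge:    (N, D) target quantized growth feature per block"""
    weighted = confidence.unsqueeze(-1) * embedded_knowledge  # same-dimension multiplication
    return weighted.sum(dim=0)  # assumed aggregation into the first description knowledge vector
```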
Still further, the determining the second growth monitoring visual description embedding knowledge based on the first monitoring image segmentation feature corresponding to the each monitoring image segmentation and the first distribution feature corresponding to the each monitoring image segmentation includes: determining target quantized growth characteristics corresponding to each monitoring image block based on first monitoring image block characteristics corresponding to each monitoring image block and first distribution characteristics corresponding to each monitoring image block, wherein the target quantized growth characteristics corresponding to each monitoring image block are used for reflecting the growth condition of each monitoring image block; and forming the second growth monitoring visual description embedding knowledge from the target quantized growth characteristics corresponding to the monitoring image blocks.
Further, in determining the second growth monitor visual description embedding knowledge, the monitor image analysis system performs the following detailed steps.
First, the system determines a target quantized growth feature corresponding to each monitored image patch based on a first monitored image patch feature corresponding to each monitored image patch and a first distribution feature corresponding to each monitored image patch. The purpose of this step is to translate the visual features and distribution patterns in the image into quantifiable growth features in order to more accurately describe and compare the growth conditions of the different tiles.
Specifically, the system analyzes and processes the first monitoring image block feature of each monitoring image block in detail, and extracts key information related to the growth condition, such as color, texture and shape. This information reflects the growth status and behavior of the crop in the different blocks. At the same time, the system also considers the first distribution features of the individual blocks in the image, namely the spatial relationships and distribution patterns among them. This information helps the system understand the overall trends and patterns of crop growth.
By comprehensively considering the first monitoring image blocking feature and the first distribution feature, the system calculates target quantized growth features corresponding to each monitoring image blocking. These characteristics are represented in numerical form, and can accurately reflect the growth conditions of each block, including growth rate, health degree, pest and disease conditions, etc.
Next, the system combines the target quantized growth features corresponding to each monitored image patch to form a second growth monitoring visual description embedding knowledge. The embedded knowledge is a comprehensive representation containing all the block growth status information in the image, and the embedded knowledge exists in the form of vectors, so that the embedded knowledge can be conveniently stored, processed and compared.
By determining the target quantitative growth characteristics and constructing the second growth monitoring visual description embedded knowledge, the system can convert complex image information into a simple and easily understood numerical representation, and provides powerful data support and analysis basis for subsequent growth condition assessment, problem diagnosis and decision support. Meanwhile, the capability and efficiency of processing large-scale image data of the system are improved, and important technical support is provided for intelligent management of agricultural production.
Still further, the determining, based on the first monitored image block feature corresponding to the each monitored image block and the first distribution feature corresponding to the each monitored image block, the target quantized growth feature corresponding to the each monitored image block includes: for any one of the monitoring image blocks, summing the characteristic of the first monitoring image block corresponding to the any one of the monitoring image blocks and the characteristic member corresponding to the same dimension in the first distribution characteristic corresponding to the any one of the monitoring image blocks to obtain the associated block characteristic corresponding to the any one of the monitoring image blocks; and acquiring target quantized growth characteristics corresponding to any one of the monitoring image blocks based on the associated block characteristics corresponding to any one of the monitoring image blocks.
Still further, the monitoring image analysis system performs the following detailed steps in determining the target quantized growth features corresponding to each of the monitoring image segments.
Firstly, the system traverses each monitoring image block, and for any monitoring image block, the system performs summation operation on the corresponding first monitoring image block feature and the feature members corresponding to the same dimension in the first distribution feature. The purpose of this summation operation is to fuse the information related to the two features together to obtain a more comprehensive and accurate description of the segmented features.
Specifically, the first monitored image tile feature may include visual information such as color, texture, shape, etc. of the tile, and the first distribution feature describes the location, size, and relationship of the tile to surrounding tiles in the image. When performing the summation operation, the system will add the members of the same dimension in the two features correspondingly, for example, add the feature member describing the color and the feature member describing the position separately. The associated blocking feature obtained in this way not only contains the visual information of the blocking, but also integrates the distribution information of the blocking in the image.
Then, the system acquires the target quantized growth characteristics corresponding to the blocks based on the associated block characteristics corresponding to any one of the monitored image blocks. This step typically involves complex computational and conversion processes, such as learning and training associated patch features using machine learning algorithms, to obtain target quantized growth features that accurately reflect the condition of the patch growth. These features may be represented in numerical form and can be used directly to compare and analyze the growth conditions of the different segments.
By traversing all the monitoring image blocks and executing the operation, the system can finally obtain the target quantized growth characteristics corresponding to each block. These features provide an important data basis and support for the generation of subsequent growth monitoring visual descriptive knowledge.
It should be noted that, in practical application, the summation operation is just an exemplary feature fusion manner, and other more complex feature fusion methods, such as feature stitching, feature weighting, etc., may also be adopted by the system according to specific requirements and situations. Meanwhile, the specific method of obtaining the target quantized growth features will also vary depending on the machine learning algorithm and model used.
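Under the element-wise summation reading described above, the per-block computation might look like the following Python sketch; the small MLP that turns the associated block feature into a target quantized growth feature is a hypothetical stand-in for whatever learned mapping the network actually uses, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class QuantizedGrowthHead(nn.Module):
    """Element-wise sum of block feature and distribution feature gives the
    'associated block feature'; a small MLP then yields the target quantized
    growth feature for that block (assumed mapping)."""
    def __init__(self, feat_dim=128, out_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, block_feats, dist_feats):   # both (N, feat_dim), aligned by dimension
        associated = block_feats + dist_feats     # same-dimension summation per block
        return self.mlp(associated)               # (N, out_dim) target quantized growth features
```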
In some preferred embodiments, the determining a first network training error value based on the first growth monitor visual description knowledge, the second growth monitor visual description knowledge, and the third growth monitor visual description knowledge comprises: determining a first common metric value based on the first and second growth monitoring visual description knowledge, the first common metric value being used to reflect a common metric value between a growth condition category of the first crop growth monitoring image and a growth condition category of the second crop growth monitoring image; determining a second common metric value based on the first and third growth monitoring visual description knowledge, the second common metric value being used to reflect a common metric value between a growth condition category of the first crop growth monitoring image and a growth condition category of the third crop growth monitoring image; and determining the first network training error value through a target training error index based on the first commonality measurement value and the second commonality measurement value.
In some preferred embodiments, the monitoring image analysis system performs the following detailed steps in determining the first network training error value.
First, the system determines an index, referred to as a first commonality metric, based on the first growth monitor visual descriptive knowledge and the second growth monitor visual descriptive knowledge. This first common metric value is intended to reflect the commonality or similarity between the category of growth status of the first crop growth monitor image and the category of growth status of the second crop growth monitor image. In other words, it measures the common characteristics or trends in the growth conditions of the two images.
To calculate the first commonality metric value, the system may employ various similarity measures, such as cosine similarity or Euclidean distance. These methods are able to quantify the degree of similarity between two vectors (here, the visual description knowledge). The calculated first commonality metric value is a numerical value whose magnitude indicates the degree of similarity between the growth conditions of the two images.
Next, the system will determine a second commonality metric value based on the first and third growth monitoring visual description knowledge in a similar manner. This second commonality measure is intended to reflect the commonality or similarity between the category of growth conditions of the first crop growth monitor image and the category of growth conditions of the third crop growth monitor image.
Likewise, the system calculates a second common metric value using a similarity metric method. The purpose of this step is to evaluate the consistency and stability of the growth status features of the first image from another perspective, i.e. a comparison with the third image.
Finally, the system combines the first common metric value and the second common metric value to determine a first network training error value via a predefined target training error indicator. The target training error index may be a complex mathematical formula or algorithm which comprehensively considers the common metric value among different images and other related factors (such as the complexity of a network model, the scale of training data and the like) so as to obtain an error value capable of comprehensively reflecting the network training effect.
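Reading the commonality metric as cosine similarity and the target training error index as a margin-style comparison of the two metrics (both are assumptions consistent with, but not mandated by, the description), the first network training error value could be computed as in the Python sketch below.

```python
import torch
import torch.nn.functional as F

def first_network_training_error(k1, k2, k3, margin=0.2):
    """Cosine similarity serves as the commonality metric; the error index rewards a
    first metric (same category: k1 vs k2) that exceeds the second (k1 vs k3) by a margin."""
    first_metric = F.cosine_similarity(k1, k2, dim=-1)    # same growth-condition category
    second_metric = F.cosine_similarity(k1, k3, dim=-1)   # different growth-condition categories
    # The error shrinks as the same-category commonality pulls ahead of the cross-category one
    return torch.clamp(second_metric - first_metric + margin, min=0.0).mean()
```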
In other preferred embodiments, the optimizing the to-be-debugged growth condition category analysis network according to the first network training error value being greater than an error threshold to obtain a target growth condition category analysis network includes: optimizing the to-be-debugged growth condition type analysis network according to the fact that the first network training error value is larger than the error threshold to obtain a debugged growth condition type analysis network; acquiring a first associated blocking feature corresponding to the first crop growth monitoring image, a second associated blocking feature corresponding to the second crop growth monitoring image and a third associated blocking feature corresponding to the third crop growth monitoring image through the debugged growth condition type analysis network, wherein the first associated blocking feature, the second associated blocking feature and the third associated blocking feature respectively reflect the growth condition types of the first crop growth monitoring image, the second crop growth monitoring image and the third crop growth monitoring image; determining a second network training error value based on the first associated blocking feature, the second associated blocking feature, and the third associated blocking feature; and taking the debugged growth condition type analysis network as the target growth condition type analysis network according to the fact that the second network training error value does not exceed the error threshold.
In other preferred embodiments, when the monitoring image analysis system finds that the first network training error value is greater than the error threshold, it performs a series of steps to optimize the growth condition class resolution network to be debugged, and finally obtains the target growth condition class resolution network. The following is a detailed explanation of these steps.
Firstly, the system optimizes the growth condition category analysis network to be debugged according to the judgment result that the first network training error value is larger than the error threshold. This optimization process may involve operations of adjusting parameters of the network, changing the network structure, increasing or decreasing the number of layers of the network, etc., in order to enable the network to better learn and identify the growth status categories of the crop growth monitor image. After optimization, the system obtains a debugged growth condition category analysis network.
Next, the system will process the first crop growth monitor image, the second crop growth monitor image and the third crop growth monitor image, respectively, through the commissioned growth status category resolution network. During processing, the network extracts features of the images and generates corresponding first, second, and third associated block features, respectively. The associated block features reflect growth condition category information of different blocks in the image, and are the result of deep understanding and analysis of the image by the network.
The system then calculates a second network training error value based on these associated blocking features. This calculation process may involve comparing the associated blocking features with the true growth condition category labels, quantifying the difference between the network's predicted results and the true results.
Finally, the system checks whether the second network training error value exceeds the error threshold. If it does not, the performance of the debugged growth condition class resolution network has reached the expected requirement, and the system takes it as the target growth condition class resolution network. If the error value still exceeds the threshold, the system may continue to optimize the network or take other measures to improve its performance.
In this way, the monitoring image analysis system can gradually optimize the growth condition category analysis network, and improve the recognition and analysis capability of the monitoring image analysis system on the crop growth. This helps to more accurately assess the growth status of crops, providing a more powerful support for agricultural production.
When referring to the target growth condition class resolution network, a convolutional neural network (Convolutional Neural Network, CNN) may be taken as an example to describe its working principle and application. A convolutional neural network is a type of deep learning algorithm that is particularly suited to processing image data. In the field of crop growth monitoring, a CNN can effectively extract features from images and identify different growth condition categories. First, a crop growth monitoring image is input into the CNN. The CNN consists of multiple convolutional layers, pooling layers and fully connected layers. The convolutional layers are responsible for extracting local features from the image, converting the image into feature maps through convolution operations. The pooling layers downsample the feature maps, reducing the dimensionality of the data and the amount of computation. The fully connected layers convert the output of the last convolutional or pooling layer into a vector representing the global features of the image. During training, the CNN continuously optimizes its network parameters through the back-propagation algorithm, so that the network learns to identify the growth condition categories in the images. Given a large amount of training data, the CNN gradually extracts features related to the growth condition, such as color, texture and shape, and thereby establishes a mapping from images to growth condition categories. Once training is complete, the CNN can be applied to actual crop growth monitoring images. For an input image, the CNN extracts its features and outputs a probability distribution representing the likelihood that the image belongs to each growth condition category; the category with the highest probability may be selected as the prediction result for the image. The target growth condition class resolution network can be constructed on this basis. By selecting an appropriate network structure and parameters and training with a large amount of data, a CNN model that can accurately identify the crop growth condition category can be obtained. This model can be applied in an actual monitoring system to help farmers understand the growth condition of crops in time and make corresponding management decisions.
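A minimal PyTorch sketch of such a CNN classifier is shown below; the layer counts, channel widths and the number of growth-condition categories are arbitrary illustrative choices rather than values specified by the patent.

```python
import torch
import torch.nn as nn

class GrowthConditionCNN(nn.Module):
    """Convolution -> pooling -> fully connected classifier over growth-condition categories."""
    def __init__(self, num_categories=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_categories)

    def forward(self, x):                          # x: (B, 3, H, W)
        logits = self.classifier(torch.flatten(self.features(x), 1))
        return logits.softmax(dim=-1)              # probability per growth-condition category
```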
When referring to the target growth condition class resolution network, a deep residual network (ResNet) may also be taken as an example to introduce its working principle and application. The deep residual network is a highly successful neural network architecture in the field of deep learning and is particularly suitable for complex tasks such as image classification. In a crop growth monitoring scenario, ResNet's powerful feature extraction capability makes it an ideal target growth condition class resolution network. For example, suppose there is a series of crop growth monitoring images, each of which records the condition of the crop at a different growth stage. These images contain rich information such as leaf color, texture, shape and possible disease symptoms. The aim is to construct a neural network that can automatically resolve these images and accurately identify the crop growth condition categories. ResNet addresses the vanishing-gradient and performance-degradation problems that deep neural networks encounter during training by introducing residual connections. This means the network can learn the "residual", i.e., the difference between the input and the output, rather than directly learning the complete mapping from input to output, and this design allows ResNet to keep learning effectively even in very deep network structures. When constructing the target growth condition class resolution network, an appropriate ResNet architecture is first selected, such as ResNet-50 or ResNet-101, both of which have proven their effectiveness in a large number of image classification tasks. The network is then trained using many crop growth monitoring images and the corresponding growth condition category labels. During training, ResNet gradually extracts the features in the images through its deep convolutional layers; these features progress from low-level edge and texture information to high-level abstractions. Through the residual connections, the network retains detailed feature information and passes information effectively through the deep network. Finally, a fully trained ResNet can accurately map an input crop growth monitoring image onto the corresponding growth condition category. For a new input image, the network can quickly extract its features and give a classification result, helping farmers or agricultural specialists understand the growth condition of crops in time and make corresponding management decisions. In general, the deep residual network, as a powerful neural network algorithm, exhibits excellent performance in the crop growth condition category resolution task. Its unique residual-connection design enables the network to maintain effective learning capability as the depth increases, so that the growth condition category information in the images can be identified accurately.
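If ResNet-50 is used as the backbone, the adaptation described above reduces to swapping the final fully connected layer, as in this sketch (torchvision is assumed to be available; whether pretrained weights are loaded is left open, and the category count is illustrative).

```python
import torch.nn as nn
from torchvision import models

def build_resnet_classifier(num_categories=4):
    """ResNet-50 backbone with its final fully connected layer replaced so that it
    outputs scores over the growth-condition categories."""
    net = models.resnet50(weights=None)            # or load pretrained weights if available
    net.fc = nn.Linear(net.fc.in_features, num_categories)
    return net
```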
When referring to the target growth condition class resolution network, another popular neural network architecture, a variant of long short term memory network (LSTM), may also be used in combination with Convolutional Neural Networks (CNN) to construct a target growth condition class resolution network. The network combines the characteristic extraction capability of CNN and the time sequence modeling capability of LSTM, and is particularly suitable for processing a crop growth monitoring image sequence with time sequence property. For example, there are a series of crop growth monitoring images arranged in time series. These images not only contain static features of the crop, such as leaf shape, color, etc., but also their dynamic growth process. To take full advantage of this information, a CNN-LSTM hybrid network can be constructed to identify growth status categories. First, a single image is processed using CNN. The convolutional layer of CNN is capable of capturing local features in the image, such as edges, textures, etc., and gradually reducing the dimensions of the features through the pooling layer. In this way, a compact representation of the characteristics can be extracted from each image, which encodes the critical growth information of the crop. These feature representations are then entered into the LSTM network in chronological order. LSTM is a special Recurrent Neural Network (RNN) that can effectively handle dependencies in long-time series data by introducing memory units and gating mechanisms. In a crop growth monitoring scenario, the LSTM may help capture temporal variations in the image sequence, such as gradual changes in leaf color, changes in growth rate, etc. The training process of the CNN-LSTM hybrid network is an end-to-end optimization process. The network is trained using a large number of image sequences with growing condition category labels. Through the back propagation algorithm, the network continually adjusts its parameters to minimize the difference between the predicted and actual categories. After training is completed, the CNN-LSTM hybrid network can be applied to a new crop growth monitoring image sequence. For each input sequence, the network will first extract image features using the CNN, and then input these features into the LSTM for timing modeling. Eventually, the network will output a probability distribution indicating the likelihood that the sequence belongs to different growth condition categories. The class with the highest probability may be selected as the prediction result of the sequence. In general, by combining the feature extraction capability of CNN and the time-series modeling capability of LSTM, a powerful target growth condition class resolution network can be constructed. The network can fully utilize static and dynamic information in the crop growth monitoring image sequence to realize accurate and efficient growth condition type identification. This is of great importance for accurate management and decision making in agricultural production.
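A compact sketch of such a CNN-LSTM hybrid is given below; the per-frame CNN, the hidden size and the category count are illustrative assumptions rather than a specification from the patent.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM over the time axis; the last hidden
    state is classified into growth-condition categories."""
    def __init__(self, feat_dim=128, hidden=64, num_categories=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_categories)

    def forward(self, frames):                     # frames: (B, T, 3, H, W), time-ordered
        b, t = frames.shape[:2]
        feats = self.cnn(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # h_n: (1, B, hidden)
        return self.head(h_n[-1]).softmax(dim=-1)  # per-category probabilities
```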
Fig. 2 shows a block diagram of a monitoring image analysis system 300, comprising: memory 310 for storing program instructions and data; a processor 320, coupled to the memory 310, executes instructions in the memory 310 to implement the methods described above.
Further, a computer storage medium is provided containing instructions which, when executed on a processor, implement the above-described method.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A monitoring image analysis method applied to an intelligent agricultural park, characterized in that the method is applied to a monitoring image analysis system and comprises the following steps:
acquiring a target crop growth monitoring image of a target intelligent agricultural park and a target growth condition type analysis network;
acquiring, through the target growth condition type analysis network, target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image, wherein the target growth monitoring visual description knowledge is used to reflect the growth condition category of the target crop growth monitoring image and is determined according to the monitoring image block features of the monitoring image blocks included in the target crop growth monitoring image, the distribution features of those monitoring image blocks, and the region features of the target crop growth monitoring image;
and determining the growth condition category corresponding to the target crop growth monitoring image based on the target growth monitoring visual description knowledge.
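A minimal sketch of the inference flow of claim 1 is given below in Python/NumPy. All function and variable names (extract_visual_description, classify_growth_condition, attn_w, class_prototypes, and so on) are illustrative assumptions rather than identifiers from the patent; the sketch only shows one plausible way in which per-block features, distribution features and a region feature could be fused into a single description vector and then mapped to a growth condition category.

```python
import numpy as np

def extract_visual_description(block_features, distribution_features, region_features, attn_w):
    """Illustrative aggregation: per-block content features plus their distribution
    (layout) features are weighted by block confidences driven by the whole-image
    (region) features, then pooled into one description vector."""
    scores = block_features @ (attn_w @ region_features)      # (num_blocks,)
    scores = np.exp(scores - scores.max())
    confidences = scores / scores.sum()                       # softmax over blocks
    fused = block_features + distribution_features            # content + layout per block
    return confidences @ fused                                # (feat_dim,) description

def classify_growth_condition(description, class_prototypes):
    """Assign the growth condition category whose prototype vector is closest (cosine)."""
    sims = [description @ p / (np.linalg.norm(description) * np.linalg.norm(p) + 1e-8)
            for p in class_prototypes]
    return int(np.argmax(sims))

# toy usage: 16 monitoring image blocks, 64-dim features, 3 growth condition categories
rng = np.random.default_rng(0)
block_features        = rng.normal(size=(16, 64))
distribution_features = rng.normal(size=(16, 64))
region_features       = rng.normal(size=64)
attn_w                = rng.normal(size=(64, 64))
class_prototypes      = [rng.normal(size=64) for _ in range(3)]

description = extract_visual_description(block_features, distribution_features,
                                         region_features, attn_w)
print("predicted growth condition category:", classify_growth_condition(description, class_prototypes))
```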
2. The method of claim 1, wherein the acquiring, through the target growth condition type analysis network, of the target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image comprises:
acquiring, through the target growth condition type analysis network, growth monitoring visual description knowledge of the target crop growth monitoring image, the knowledge comprising target region features of the target crop growth monitoring image, target monitoring image block features corresponding to the monitoring image blocks in the target crop growth monitoring image, and target distribution features corresponding to those monitoring image blocks;
and determining the target growth monitoring visual description knowledge corresponding to the target crop growth monitoring image based on the target region features, the target monitoring image block features corresponding to the monitoring image blocks, and the target distribution features corresponding to the monitoring image blocks.
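The three feature groups named in claim 2 travel together through the later steps. One plausible in-memory container for them, with assumed field names that do not come from the patent, is sketched below.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GrowthMonitoringDescription:
    """Assumed container for the three feature groups named in claim 2."""
    region_features: np.ndarray        # (feat_dim,) whole-image feature
    block_features: np.ndarray         # (num_blocks, feat_dim) per-block content features
    distribution_features: np.ndarray  # (num_blocks, feat_dim) per-block layout/position features

    def fused_blocks(self) -> np.ndarray:
        """Content plus layout per block; reused by the sketches after claims 5 and 6."""
        return self.block_features + self.distribution_features
```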
3. The method of claim 1, wherein the debugging of the target growth condition type analysis network comprises:
acquiring a first crop growth monitoring image, a second crop growth monitoring image, a third crop growth monitoring image and a to-be-debugged growth condition type analysis network, wherein the first crop growth monitoring image and the second crop growth monitoring image have the same growth condition category, and the first crop growth monitoring image and the third crop growth monitoring image have different growth condition categories;
acquiring, through the to-be-debugged growth condition type analysis network, first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image, second growth monitoring visual description knowledge corresponding to the second crop growth monitoring image, and third growth monitoring visual description knowledge corresponding to the third crop growth monitoring image, wherein the first, second and third growth monitoring visual description knowledge respectively reflect the growth condition categories of the first, second and third crop growth monitoring images, and the growth monitoring visual description knowledge corresponding to each crop growth monitoring image is determined according to the monitoring image block features of the monitoring image blocks included in that crop growth monitoring image, the distribution features of those monitoring image blocks, and the region features of that crop growth monitoring image;
determining a first network training error value based on the first growth monitoring visual description knowledge, the second growth monitoring visual description knowledge and the third growth monitoring visual description knowledge, wherein the first network training error value is used to reflect a relationship between a first commonality metric value and a second commonality metric value, the first commonality metric value being a commonality metric value between the growth condition categories of the first crop growth monitoring image and the second crop growth monitoring image, and the second commonality metric value being a commonality metric value between the growth condition categories of the first crop growth monitoring image and the third crop growth monitoring image;
and optimizing the to-be-debugged growth condition type analysis network in response to the first network training error value being greater than an error threshold, to obtain a target growth condition type analysis network, wherein the target growth condition type analysis network is used to analyze the growth condition category corresponding to a crop growth monitoring image.
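Claim 3 trains the network so that images sharing a growth condition category end up with a higher commonality than images of different categories, which is the structure of a triplet-style objective over the (first, second, third) images. The sketch below only illustrates that control flow; resolve_description and update_parameters are hypothetical stand-ins for the network's forward pass and its optimisation step, and the margin, threshold and round limit are assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def triplet_error(desc_first, desc_second, desc_third, margin=0.2):
    """Large when the same-category pair is not clearly more similar than the
    different-category pair (hinge on the gap between the two commonality values)."""
    same_cat = cosine(desc_first, desc_second)   # commonality of matching categories
    diff_cat = cosine(desc_first, desc_third)    # commonality of differing categories
    return max(0.0, diff_cat - same_cat + margin)

def debug_network(network, triplets, error_threshold=0.05, max_rounds=100):
    """Keep optimising while the mean training error exceeds the threshold."""
    for _ in range(max_rounds):
        errors = []
        for img_same_a, img_same_b, img_diff in triplets:
            d1 = network.resolve_description(img_same_a)   # hypothetical forward pass
            d2 = network.resolve_description(img_same_b)
            d3 = network.resolve_description(img_diff)
            errors.append(triplet_error(d1, d2, d3))
        mean_error = float(np.mean(errors))
        if mean_error <= error_threshold:
            break
        network.update_parameters(mean_error)              # hypothetical optimisation step
    return network
```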
4. The method of claim 3, wherein the acquiring, through the to-be-debugged growth condition type analysis network, of the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image comprises:
acquiring, through the to-be-debugged growth condition type analysis network, growth monitoring visual description knowledge of the first crop growth monitoring image, the knowledge comprising first region features of the first crop growth monitoring image, first monitoring image block features corresponding to the monitoring image blocks in the first crop growth monitoring image, and first distribution features corresponding to those monitoring image blocks;
and determining the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the first region features, the first monitoring image block features corresponding to the monitoring image blocks, and the first distribution features corresponding to the monitoring image blocks;
wherein the determining of the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the first region features, the first monitoring image block features corresponding to the monitoring image blocks and the first distribution features corresponding to the monitoring image blocks comprises: acquiring a confidence coefficient array based on the first region features and the first monitoring image block features corresponding to the monitoring image blocks, wherein the confidence coefficient array comprises a confidence coefficient corresponding to each monitoring image block, and the confidence coefficient corresponding to each monitoring image block is used to reflect the growth discrimination decision factor of that monitoring image block; and determining the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the confidence coefficient array, the first monitoring image block features corresponding to the monitoring image blocks, and the first distribution features corresponding to the monitoring image blocks.
5. The method of claim 4, wherein the acquiring of the confidence coefficient array based on the first region features and the first monitoring image block features corresponding to the monitoring image blocks comprises:
performing knowledge embedding on the first region features to obtain first growth monitoring visual description embedding knowledge;
and weighting the first growth monitoring visual description embedding knowledge against the first monitoring image block features corresponding to each monitoring image block to obtain the confidence coefficient array.
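Claim 5 derives the confidence coefficient array by embedding the region features and then weighting that embedding against each block's features. Under the assumption that "weighting" means a scaled dot product followed by a softmax over the blocks (the claim does not fix the operation), a compact reading is:

```python
import numpy as np

def confidence_coefficient_array(region_features, block_features, embed_matrix):
    """Assumed realisation of claim 5."""
    # step 1: knowledge embedding of the region features
    query = embed_matrix @ region_features                  # (feat_dim,)
    # step 2: weight the embedding against every monitoring image block feature
    scores = block_features @ query / np.sqrt(len(query))   # (num_blocks,)
    scores = np.exp(scores - scores.max())
    return scores / scores.sum()                            # one confidence per block
```

The softmax is a design choice: it keeps the confidence coefficients non-negative and summing to one, so a block's weight can be read directly as its share in the growth discrimination decision.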
6. The method of claim 4, wherein the determining of the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image based on the confidence coefficient array, the first monitoring image block features corresponding to the monitoring image blocks, and the first distribution features corresponding to the monitoring image blocks comprises:
determining second growth monitoring visual description embedding knowledge based on the first monitoring image block features corresponding to the monitoring image blocks and the first distribution features corresponding to the monitoring image blocks, wherein the second growth monitoring visual description embedding knowledge is used to reflect the growth condition category of the first crop growth monitoring image;
and performing a feature multiplication operation on elements of the same dimension in the confidence coefficient array and the second growth monitoring visual description embedding knowledge to obtain the first growth monitoring visual description knowledge corresponding to the first crop growth monitoring image;
wherein the determining of the second growth monitoring visual description embedding knowledge based on the first monitoring image block features corresponding to the monitoring image blocks and the first distribution features corresponding to the monitoring image blocks comprises: determining a target quantized growth feature corresponding to each monitoring image block based on the first monitoring image block feature corresponding to that monitoring image block and the first distribution feature corresponding to that monitoring image block, wherein the target quantized growth feature corresponding to each monitoring image block is used to characterize that monitoring image block; and forming the second growth monitoring visual description embedding knowledge from the target quantized growth features corresponding to the monitoring image blocks;
and the determining of the target quantized growth feature corresponding to each monitoring image block based on the first monitoring image block feature corresponding to that monitoring image block and the first distribution feature corresponding to that monitoring image block comprises: for any one monitoring image block, summing feature members of the same dimension in the first monitoring image block feature corresponding to that monitoring image block and the first distribution feature corresponding to that monitoring image block to obtain an associated block feature corresponding to that monitoring image block; and acquiring the target quantized growth feature corresponding to that monitoring image block based on the associated block feature corresponding to that monitoring image block.
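Claim 6 sums, dimension by dimension, each block's content feature with its distribution feature to form an associated block feature, derives a quantized growth feature from it, and multiplies the result element-wise with the confidence coefficient array. The sketch below follows that reading; the tanh non-linearity standing in for "acquiring the target quantized growth feature" and the final pooling over blocks are assumptions, since the claim leaves both unspecified.

```python
import numpy as np

def first_description_knowledge(confidences, block_features, distribution_features):
    """Assumed reading of claim 6 (NumPy arrays throughout)."""
    # associated block feature: same-dimension sum of content and distribution features
    associated = block_features + distribution_features          # (num_blocks, feat_dim)
    # target quantized growth feature per block; tanh is a placeholder transform
    quantized = np.tanh(associated)
    # second growth monitoring visual description embedding knowledge: one row per block
    second_embedding = quantized
    # feature multiplication with the confidence coefficient array, block by block
    weighted = confidences[:, None] * second_embedding
    # pool over blocks into the first growth monitoring visual description knowledge
    # (the reduction is an assumption; the claim does not specify one)
    return weighted.sum(axis=0)                                   # (feat_dim,)
```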
7. The method of any one of claims 3 to 6, wherein the determining of the first network training error value based on the first growth monitoring visual description knowledge, the second growth monitoring visual description knowledge, and the third growth monitoring visual description knowledge comprises:
determining a first commonality metric value based on the first growth monitoring visual description knowledge and the second growth monitoring visual description knowledge, the first commonality metric value being used to reflect the commonality between the growth condition category of the first crop growth monitoring image and the growth condition category of the second crop growth monitoring image;
determining a second commonality metric value based on the first growth monitoring visual description knowledge and the third growth monitoring visual description knowledge, the second commonality metric value being used to reflect the commonality between the growth condition category of the first crop growth monitoring image and the growth condition category of the third crop growth monitoring image;
and determining the first network training error value through a target training error index based on the first commonality metric value and the second commonality metric value.
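Claim 7 does not fix the form of the commonality metric value or the target training error index. One metric consistent with the claim is cosine similarity between description vectors, and one possible error index is a contrastive (softmax-style) index that shrinks as the same-category commonality comes to dominate the different-category commonality; both choices, and the temperature value, are assumptions rather than details from the patent.

```python
import numpy as np

def commonality(a, b):
    """Assumed commonality metric value: cosine similarity of two description vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def first_network_training_error(first_desc, second_desc, third_desc, temperature=0.1):
    """Assumed target training error index: contrastive form over the two commonality
    metric values; it is small when the same-category commonality dominates."""
    s_same = commonality(first_desc, second_desc) / temperature   # first commonality metric value
    s_diff = commonality(first_desc, third_desc) / temperature    # second commonality metric value
    return float(-np.log(np.exp(s_same) / (np.exp(s_same) + np.exp(s_diff))))
```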
8. The method of any one of claims 3 to 6, wherein the optimizing of the to-be-debugged growth condition type analysis network in response to the first network training error value being greater than the error threshold, to obtain the target growth condition type analysis network, comprises:
optimizing the to-be-debugged growth condition type analysis network in response to the first network training error value being greater than the error threshold, to obtain a debugged growth condition type analysis network;
acquiring, through the debugged growth condition type analysis network, a first associated block feature corresponding to the first crop growth monitoring image, a second associated block feature corresponding to the second crop growth monitoring image, and a third associated block feature corresponding to the third crop growth monitoring image, wherein the first, second and third associated block features respectively reflect the growth condition categories of the first, second and third crop growth monitoring images;
determining a second network training error value based on the first associated block feature, the second associated block feature and the third associated block feature;
and taking the debugged growth condition type analysis network as the target growth condition type analysis network in response to the second network training error value not exceeding the error threshold.
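Claim 8 adds a second check: after optimisation, the error is recomputed from the associated block features of the three images, and the debugged network is only accepted as the target network once that second error stays within the threshold. The control-flow sketch below assumes a network object exposing resolve_description, associated_block_features and update_parameters, none of which are named in the patent, and accepts any training error index of the kind sketched after claim 7.

```python
def finalize_network(network, first_img, second_img, third_img, error_fn, error_threshold):
    """Assumed control flow for claim 8. `network` is a hypothetical object exposing
    resolve_description, associated_block_features and update_parameters; `error_fn`
    is any training error index over three vectors (e.g. the claim 7 sketch)."""
    while True:
        # stage 1: error from the growth monitoring visual description knowledge
        d1, d2, d3 = (network.resolve_description(x) for x in (first_img, second_img, third_img))
        first_error = error_fn(d1, d2, d3)
        if first_error > error_threshold:
            network.update_parameters(first_error)    # optimise -> debugged network
            continue
        # stage 2: error recomputed from the associated block features of the three images
        a1, a2, a3 = (network.associated_block_features(x).mean(axis=0)
                      for x in (first_img, second_img, third_img))
        second_error = error_fn(a1, a2, a3)
        if second_error <= error_threshold:
            return network                            # accepted as the target network
        network.update_parameters(second_error)
```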
9. A monitoring image analysis system, comprising: a memory for storing program instructions and data; and a processor coupled to the memory and configured to execute the instructions in the memory to implement the method of any one of claims 1 to 8.
10. A computer storage medium containing instructions which, when executed on a processor, implement the method of any one of claims 1 to 8.
CN202410622775.6A 2024-05-20 2024-05-20 Monitoring image analysis method and system applied to intelligent agricultural park Pending CN118247618A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410622775.6A CN118247618A (en) 2024-05-20 2024-05-20 Monitoring image analysis method and system applied to intelligent agricultural park

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410622775.6A CN118247618A (en) 2024-05-20 2024-05-20 Monitoring image analysis method and system applied to intelligent agricultural park

Publications (1)

Publication Number Publication Date
CN118247618A true CN118247618A (en) 2024-06-25

Family

ID=91560645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410622775.6A Pending CN118247618A (en) 2024-05-20 2024-05-20 Monitoring image analysis method and system applied to intelligent agricultural park

Country Status (1)

Country Link
CN (1) CN118247618A (en)

Similar Documents

Publication Publication Date Title
CN105718945B (en) Apple picking robot night image recognition method based on watershed and neural network
EP4154214A1 (en) Systems and methods for automatically grading cannabis plants and adjusting control parameters
CN114359727A (en) Tea disease identification method and system based on lightweight optimization Yolo v4
CN117251700B (en) Artificial intelligence-based environmental monitoring sensor data analysis method and system
CN117271683A (en) Intelligent analysis and evaluation method for mapping data
CN117291444B (en) Digital rural business management method and system
CN115376008A (en) Method and device for identifying plant diseases and insect pests, electronic equipment and storage medium
CN114627467A (en) Rice growth period identification method and system based on improved neural network
KR20190136774A (en) Prediction system for harvesting time of crop and the method thereof
CN117422938B (en) Dam slope concrete structure anomaly analysis method based on three-dimensional analysis platform
CN111027436A (en) Northeast black fungus disease and pest image recognition system based on deep learning
CN116863403B (en) Crop big data environment monitoring method and device and electronic equipment
CN116739868B (en) Afforestation management system and method based on artificial intelligence
CN117253192A (en) Intelligent system and method for silkworm breeding
CN116612386A (en) Pepper disease and pest identification method and system based on hierarchical detection double-task model
Suwarningsih et al. Ide-cabe: chili varieties identification and classification system based leaf
CN118247618A (en) Monitoring image analysis method and system applied to intelligent agricultural park
CN116668473A (en) Edge recognition method, system, equipment and medium for agricultural abnormal data
CN115187878A (en) Unmanned aerial vehicle image analysis-based blade defect detection method for wind power generation device
Zakiyyah et al. Characterization and Classification of Citrus reticulata var. Keprok Batu 55 Using Image Processing and Artificial Intelligence
CN114663652A (en) Image processing method, image processing apparatus, management system, electronic device, and storage medium
CN113837039A (en) Fruit growth form visual identification method based on convolutional neural network
CN117949397B (en) Hyperspectral remote sensing geological mapping control system and hyperspectral remote sensing geological mapping control method
CN115147835B (en) Pineapple maturity detection method based on improved RETINANET natural orchard scene
CN116935235B (en) Fresh tea leaf identification method and related device based on unmanned tea picking machine

Legal Events

Date Code Title Description
PB01 Publication