CN110213376B - Information processing system and method for insect pest prevention - Google Patents

Information processing system and method for insect pest prevention

Info

Publication number
CN110213376B
Authority
CN
China
Prior art keywords
video
candidate
detected
remote sensing
pest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910486919.9A
Other languages
Chinese (zh)
Other versions
CN110213376A (en)
Inventor
彭荣君
赵光明
唐庆刚
孟庆民
张少波
周宇
张曦晖
肖孔军
仇永奇
吕亭宇
曲明伟
张华贵
吴东洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang Beidahuang Agriculture Co ltd
Original Assignee
Heilongjiang Beidahuang Agriculture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heilongjiang Beidahuang Agriculture Co ltd filed Critical Heilongjiang Beidahuang Agriculture Co ltd
Priority to CN201910486919.9A priority Critical patent/CN110213376B/en
Publication of CN110213376A publication Critical patent/CN110213376A/en
Application granted granted Critical
Publication of CN110213376B publication Critical patent/CN110213376B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an information processing system and method for pest prevention. The information processing system and method collect low-altitude remote sensing images and high-altitude remote sensing images of a preset planting area of the agricultural Internet of things, train a predetermined pest prediction model using the planting information and actual pest information of the crops planted in the preset planting area together with features extracted from the low-altitude and high-altitude remote sensing images, and then use the trained model to predict the occurrence of insect pests. The planting information comprises sowing time, sowing quantity, fertilizing time, fertilizing quantity each time, water supply quantity each time, deinsectization time and leaf area index per ten-day period. The information processing system and method for pest prevention can accurately predict the occurrence of insect pests and overcome the defects of the prior art.

Description

Information processing system and method for insect pest prevention
Technical Field
The present invention relates to information processing technologies, and in particular, to an information processing system and method for pest control.
Background
Satellite remote sensing belongs to high-altitude remote sensing technology and is already used in meteorological satellites and many other applications. For example, satellite remote sensing can monitor crop growth vigor, plant diseases, insect pests and freezing damage, estimate disaster areas, estimate crop harvests, and even detect resources such as fishery resources, demonstrating its unique capability.
Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
In view of the above, the invention provides a crop monitoring system and method for an agricultural internet of things, so as to at least solve the problem of inaccurate insect pest prediction in the existing agricultural internet of things technology.
The invention provides an information processing system for insect pest prevention, which comprises a remote sensing end and an agricultural Internet of things ground control center, wherein the remote sensing end is connected with the agricultural Internet of things ground control center. The remote sensing end is used for acquiring a low-altitude remote sensing image and a high-altitude remote sensing image of a preset planting area of the agricultural Internet of things and sending the low-altitude remote sensing image and the high-altitude remote sensing image to the agricultural Internet of things ground control center in real time. The agricultural Internet of things ground control center comprises: a first feature extraction unit, used for extracting features of the low-altitude remote sensing image to obtain first image features; a second feature extraction unit, used for extracting features of the high-altitude remote sensing image to obtain second image features; a crop growth information acquisition unit, used for acquiring planting information of the planted crops corresponding to the preset planting area of the agricultural Internet of things and acquiring actual insect pest information of the planted crops corresponding to the preset planting area, wherein the planting information comprises sowing time, sowing quantity, fertilizing time, fertilizing quantity each time, water supply quantity each time, deinsectization time and leaf area index per ten-day period; a prediction model training unit, used for taking the first image features, the second image features, the planting information of the planted crops corresponding to the preset planting area of the agricultural Internet of things and the actual insect pest information as training samples to train a predetermined insect pest prediction model; and a prediction unit, used for obtaining predicted insect pest information of a crop to be predicted according to the planting information of the crop to be predicted and the trained insect pest prediction model.
The information processing system and the method for insect pest prevention can effectively and accurately predict insect pests.
Drawings
The invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like reference numerals are used throughout the figures to indicate like or similar parts. Wherein:
FIG. 1 is a schematic diagram showing the construction of an information processing system for pest prevention of the present invention;
fig. 2 is a schematic view showing an exemplary flow of the information processing method for pest prevention of the present invention.
FIG. 3 is a schematic diagram showing one arrangement of first sensors;
FIG. 4 is a schematic view showing replacement of the unreasonable positions shown in FIG. 3;
FIG. 5 is a schematic diagram showing one arrangement of second sensors;
FIG. 6 is a schematic view showing replacement of the unreasonable positions shown in FIG. 5;
FIG. 7 is a diagram illustrating the plurality of first candidate positions selected in FIG. 4 and the plurality of second candidate positions selected in FIG. 6 placed together.
Detailed Description
Exemplary embodiments of the present invention will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the device structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
Fig. 1 shows a schematic view of an information processing system for pest prevention according to the present invention.
As shown in fig. 1, the information processing system for pest prevention includes a remote sensing terminal 1 and an agricultural internet of things ground control center 2.
The remote sensing terminal 1 is used for collecting low-altitude remote sensing images and high-altitude remote sensing images of a preset planting area of the agricultural Internet of things and sending the low-altitude remote sensing images and the high-altitude remote sensing images to the agricultural Internet of things ground control center 2 in real time.
The remote sensing terminal 1 can comprise two parts, namely an unmanned aerial vehicle low-altitude remote sensing terminal and a satellite high-altitude remote sensing terminal.
The agricultural Internet of things ground control center 2 comprises a first feature extraction unit 2-1, a second feature extraction unit 2-2, a crop growth information acquisition unit 2-3, a prediction model training unit 2-4 and a prediction unit 2-5.
The first feature extraction unit 2-1 is used for performing feature extraction on the low-altitude remote sensing image to obtain a first image feature. The first image feature is, for example, any one or more existing image features, such as a color feature, a texture feature, and the like.
And the second feature extraction unit 2-2 is used for performing feature extraction on the high-altitude remote sensing image to obtain a second image feature. The second image feature is, for example, any one or more existing image features, such as a color feature, a texture feature, and the like.
The crop growth information obtaining unit 2-3 is used for obtaining planting information of the planted crops corresponding to the preset planting area of the agricultural internet of things and obtaining actual insect pest information of the planted crops corresponding to the preset planting area, wherein the planting information comprises sowing time, sowing quantity, fertilizing time, fertilizing quantity each time, water supply quantity each time, deinsectization time and leaf area index per ten-day period.
And the prediction model training unit 2-4 is used for training a preset insect pest prediction model by taking the first image characteristic, the second image characteristic, planting information of planted crops corresponding to a preset planting area of the agricultural Internet of things and actual insect pest information as training samples.
The pest prediction model can adopt a spectrum composite prediction model, for example.
As an example, when the prediction model training unit 2-4 trains the pest prediction model, the training criterion is, for example, that the difference between the predicted pest information obtained by the pest prediction model for the planted crops corresponding to the preset planting area of the agricultural internet of things and the actual pest information is smaller than a predetermined threshold value. The predetermined threshold value may be set based on an empirical value or determined experimentally, for example.
And the prediction unit 2-5 is used for obtaining the predicted insect pest information of the crop to be predicted according to the planting information of the crop to be predicted and the trained insect pest prediction model.
The pest information described above includes, for example, the number of times of pest occurrence and the pest occurrence area. For example, a vector may be formed according to the number of pest occurrences and the area of each pest occurrence, and the vector may be used to represent corresponding pest information (such as "pest information" mentioned in the above-mentioned actual pest information or predicted pest information).
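Purely as an illustration (not something specified in the patent), the following Python sketch shows one way such a pest-information vector could be assembled; the function name and the convention of placing the occurrence count first are assumptions made for the example.

```python
# Hypothetical sketch: encode pest information as a vector whose first element
# is the number of pest occurrences, followed by the area of each occurrence.
from typing import List

def pest_info_vector(occurrence_areas: List[float]) -> List[float]:
    """Build [count, area_1, area_2, ...] from the per-occurrence areas."""
    return [float(len(occurrence_areas))] + [float(a) for a in occurrence_areas]

# Example: three pest occurrences covering 1.2, 0.5 and 2.0 hectares.
print(pest_info_vector([1.2, 0.5, 2.0]))  # [3.0, 1.2, 0.5, 2.0]
```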
As an example, the information processing system further comprises a monitoring subsystem, a meteorological subsystem, an underground water level monitoring subsystem and a control center subsystem.

The monitoring subsystem comprises a plurality of monitoring points, wherein each monitoring point is provided with at least one video device, at least one first sensor and a first communication device; the at least one video device is used for capturing video data of a corresponding area, the at least one first sensor is used for acquiring soil environment data corresponding to the monitoring point, and the first communication device is used for sending the video data and the soil environment data acquired by the corresponding monitoring point to the control center subsystem.

The meteorological subsystem comprises a plurality of weather monitoring stations, wherein each weather monitoring station is provided with a plurality of second sensors and a second communication device; the second sensors are used for acquiring air environment data at the corresponding weather monitoring station, and the second communication device is used for sending the air environment data of the corresponding weather monitoring station to the control center subsystem.

The underground water level monitoring subsystem comprises a plurality of underground water level monitoring points, wherein each underground water level monitoring point is provided with an underground water level monitoring device and a third communication device; the underground water level monitoring device is used for acquiring underground water level data of the corresponding position in real time and transmitting the acquired underground water level data to the control center subsystem through the third communication device.

The control center subsystem is used for: obtaining a first sensing range of the first sensors; obtaining a second sensing range of the second sensors; selecting a plurality of first candidate positions as possible positions of a plurality of first sensors to be reselected; selecting a plurality of second candidate positions as possible positions of a plurality of second sensors to be reselected; randomly selecting K position points in a predetermined monitoring area, wherein K is a positive integer; determining a first candidate positions and b second candidate positions from among the plurality of first candidate positions and the plurality of second candidate positions, wherein a and b are positive integers, so that the following conditions are satisfied: (1) the sum of a and b is as small as possible; and (2) each of the K position points is within the first sensing range of a first sensor at at least one of the a first candidate positions and within the second sensing range of a second sensor at at least one of the b second candidate positions; and rearranging the first sensors according to the determined a first candidate positions and the second sensors according to the determined b second candidate positions.
As an example, the remote sensing end comprises an unmanned aerial vehicle end and a satellite communication end, and the information processing system further comprises a server end. The unmanned aerial vehicle end is adapted to collect low-altitude remote sensing images of the preset planting area of the agricultural Internet of things multiple times and send the low-altitude remote sensing images to the server end in real time; the satellite communication end is adapted to collect a high-altitude remote sensing image of the preset planting area of the agricultural Internet of things and send the high-altitude remote sensing image to the server end of the crop monitoring system in real time.

The server end groups the received low-altitude remote sensing images and/or high-altitude remote sensing images and generates a video to be detected from each group of images, thereby obtaining a plurality of videos to be detected. A target video is received through the server end, and a plurality of scene switching moments in the target video are determined. For each scene switching moment in the target video, the server end obtains the switched video frame corresponding to that scene switching moment. The first frame image of the target video and the switched video frames corresponding to all scene switching moments in the target video are taken as a plurality of target frame images, and the total number of all target frame images is recorded as N, where N is a non-negative integer.

For each video to be detected in a predetermined video database, the server end determines a plurality of scene switching moments in the video to be detected, obtains the switched video frame corresponding to each scene switching moment in the video to be detected, and takes the first frame image of the video to be detected and the switched video frames corresponding to all scene switching moments in the video to be detected as the frame images to be detected.

For each target frame image, the server end calculates the similarity between each frame image to be detected of each video to be detected and the target frame image, and determines the frame images to be detected whose similarity with the target frame image is higher than a first threshold as candidate frame images corresponding to the video to be detected.

For each video to be detected, the server end calculates the number of candidate frame images corresponding to the video to be detected, recorded as a1 (a non-negative integer), calculates the number of all target frame images related to the candidate frame images corresponding to the video to be detected, recorded as a2 (a non-negative integer), and calculates a first score of the video to be detected according to the following formula: S1 = q1 × a1 + q2 × a2, where S1 is the first score of the video to be detected, q1 is the weight corresponding to the number of candidate frame images corresponding to the video to be detected, and q2 is the weight corresponding to the number of all target frame images related to the candidate frame images corresponding to the video to be detected; q1 is equal to a preset first weight value, q2 is equal to a preset second weight value when a2 = N, and q2 is equal to a preset third weight value when a2 < N, the second weight value being greater than the third weight value.

The server end then determines similar videos of the target video among the videos to be detected according to the first score of each video to be detected.
In addition, an embodiment of the present invention also provides an information processing method for pest prevention, the information processing method comprising: collecting a low-altitude remote sensing image of a preset planting area of the agricultural Internet of things; collecting a high-altitude remote sensing image of the preset planting area of the agricultural Internet of things; performing feature extraction on the low-altitude remote sensing image to obtain first image features; performing feature extraction on the high-altitude remote sensing image to obtain second image features; obtaining planting information of the planted crops corresponding to the preset planting area of the agricultural Internet of things and obtaining actual insect pest information of the planted crops corresponding to the preset planting area, wherein the planting information comprises sowing time, sowing quantity, fertilizing time, fertilizing quantity each time, water supply quantity each time, deinsectization time and leaf area index per ten-day period; taking the first image features, the second image features, the planting information of the planted crops corresponding to the preset planting area of the agricultural Internet of things and the actual pest information as training samples to train a predetermined pest prediction model; and obtaining predicted pest information of a crop to be predicted according to the planting information of the crop to be predicted and the trained pest prediction model.
As shown in fig. 2, in step 201, a low-altitude remote sensing image of a preset planting area of the agricultural internet of things is acquired.
Next, in step 202, a high-altitude remote sensing image of a preset planting area of the agricultural internet of things is collected.
Next, in step 203, feature extraction is performed on the low-altitude remote sensing image to obtain a first image feature.
Then, in step 204, feature extraction is performed on the high-altitude remote sensing image to obtain a second image feature.
Next, in step 205, planting information of the planted crops corresponding to the preset planting area of the agricultural internet of things is obtained, and actual pest information of the planted crops corresponding to the preset planting area is obtained, wherein the planting information includes sowing time, sowing quantity, fertilizing time, fertilizing quantity each time, water supply quantity each time, deinsectization time and leaf area index per ten-day period.
In this way, then, in step 206, the first image feature, the second image feature, planting information of a planted crop corresponding to the preset planting area of the agricultural internet of things, and actual pest information are used as training samples to train a predetermined pest prediction model.
Then, in step 207, the predicted pest information of the crop to be predicted is obtained according to the planting information of the crop to be predicted and the trained pest prediction model.
The pest information includes, for example, pest occurrence frequency and pest occurrence area.
For example, the pest prediction model may employ a spectral composite prediction model.
In addition, when the predetermined pest prediction model is trained, it may, for example, be trained until the following condition is satisfied: the difference between the predicted pest information obtained by the pest prediction model for the planted crops corresponding to the preset planting area of the agricultural Internet of things and the actual pest information is smaller than a predetermined threshold value.
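The patent names a "spectrum composite prediction model" but does not give its form, so the sketch below uses a plain linear model trained by gradient descent purely as a stand-in to illustrate the stopping criterion described above (train until the difference between predicted and actual pest information is below a predetermined threshold). The function name, threshold value and toy data are assumptions.

```python
import numpy as np

def train_until_threshold(features, actual, threshold=0.1, lr=0.05, max_steps=20000):
    """Train a stand-in linear pest-prediction model by gradient descent until the
    mean absolute difference between predicted and actual pest information is
    below `threshold`, or until `max_steps` is reached."""
    X = np.asarray(features, dtype=float)   # rows: samples; columns: image + planting features
    y = np.asarray(actual, dtype=float)     # actual pest information (e.g. occurrence counts)
    w = np.zeros(X.shape[1])
    for _ in range(max_steps):
        pred = X @ w
        if np.mean(np.abs(pred - y)) < threshold:   # the training criterion above
            break
        w -= lr * (X.T @ (pred - y)) / len(y)       # gradient step on squared error
    return w

# Toy usage with made-up feature rows and pest counts.
X = np.array([[1.0, 0.2, 3.0, 0.5],
              [0.8, 0.1, 2.5, 0.7],
              [1.2, 0.3, 3.5, 0.4]])
y = np.array([2.0, 1.5, 2.5])
w = train_until_threshold(X, y)
print(np.mean(np.abs(X @ w - y)))   # mean absolute error after training
```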
As an example, the agricultural internet of things further comprises a monitoring subsystem, a meteorological subsystem, an underground water level monitoring subsystem and a control center subsystem.

The monitoring subsystem comprises a plurality of monitoring points, wherein each monitoring point is provided with at least one video device, at least one first sensor and a first communication device; the at least one video device is used for capturing video data of a corresponding area, the at least one first sensor is used for acquiring soil environment data corresponding to the monitoring point, and the first communication device is used for sending the video data and the soil environment data acquired by the corresponding monitoring point to the control center subsystem.

The meteorological subsystem comprises a plurality of weather monitoring stations, wherein each weather monitoring station is provided with a plurality of second sensors and a second communication device; the second sensors are used for acquiring air environment data at the corresponding weather monitoring station, and the second communication device is used for sending the air environment data of the corresponding weather monitoring station to the control center subsystem.

The underground water level monitoring subsystem comprises a plurality of underground water level monitoring points, wherein each underground water level monitoring point is provided with an underground water level monitoring device and a third communication device; the underground water level monitoring device is used for acquiring underground water level data of the corresponding position in real time and transmitting the acquired underground water level data to the control center subsystem through the third communication device.

The control center subsystem is used for: obtaining a first sensing range of the first sensors; obtaining a second sensing range of the second sensors; selecting a plurality of first candidate positions as possible positions of a plurality of first sensors to be reselected; selecting a plurality of second candidate positions as possible positions of a plurality of second sensors to be reselected; randomly selecting K position points in a predetermined monitoring area, wherein K is a positive integer; determining a first candidate positions and b second candidate positions from among the plurality of first candidate positions and the plurality of second candidate positions, wherein a and b are positive integers, so that the following conditions are satisfied: (1) the sum of a and b is as small as possible; and (2) each of the K position points is within the first sensing range of a first sensor at at least one of the a first candidate positions and within the second sensing range of a second sensor at at least one of the b second candidate positions; and rearranging the first sensors according to the determined a first candidate positions and the second sensors according to the determined b second candidate positions.
As an example, the method further comprises: collecting low-altitude remote sensing images of the preset planting area of the agricultural Internet of things multiple times and sending the low-altitude remote sensing images to a server side in real time; and collecting high-altitude remote sensing images of the preset planting area of the agricultural Internet of things and sending the high-altitude remote sensing images to the server side in real time.

The server side groups the received low-altitude remote sensing images and/or high-altitude remote sensing images and generates a video to be detected from each group of images, thereby obtaining a plurality of videos to be detected. A target video is received through the server side, and a plurality of scene switching moments in the target video are determined. For each scene switching moment in the target video, the server side obtains the switched video frame corresponding to that scene switching moment. The first frame image of the target video and the switched video frames corresponding to all scene switching moments in the target video are taken as a plurality of target frame images, and the total number of all target frame images is recorded as N, where N is a non-negative integer.

For each video to be detected in a predetermined video database, the server side determines a plurality of scene switching moments in the video to be detected, obtains the switched video frame corresponding to each scene switching moment in the video to be detected, and takes the first frame image of the video to be detected and the switched video frames corresponding to all scene switching moments in the video to be detected as the frame images to be detected.

For each target frame image, the server side calculates the similarity between each frame image to be detected of each video to be detected and the target frame image, and determines the frame images to be detected whose similarity with the target frame image is higher than a first threshold as candidate frame images corresponding to the video to be detected.

For each video to be detected, the server side calculates the number of candidate frame images corresponding to the video to be detected, recorded as a1 (a non-negative integer), calculates the number of all target frame images related to the candidate frame images corresponding to the video to be detected, recorded as a2 (a non-negative integer), and calculates a first score of the video to be detected according to the following formula: S1 = q1 × a1 + q2 × a2, where S1 is the first score of the video to be detected, q1 is the weight corresponding to the number of candidate frame images corresponding to the video to be detected, and q2 is the weight corresponding to the number of all target frame images related to the candidate frame images corresponding to the video to be detected; q1 is equal to a preset first weight value, q2 is equal to a preset second weight value when a2 = N, and q2 is equal to a preset third weight value when a2 < N, the second weight value being greater than the third weight value.

The server side then determines similar videos of the target video among the videos to be detected according to the first score of each video to be detected.
According to embodiments of the present invention, in the above systems and methods, the control center subsystem may, for example, obtain a first sensing range of the first sensor. The first sensing range is known in advance or can be obtained experimentally, and may be, for example, a circle, a sector, a semicircle, etc., or may be a range of three-dimensional shapes, etc.
The control center subsystem may then, for example, obtain a second sensing range of the second sensor. The second sensing range is known in advance or can be obtained experimentally, and may be, for example, a circle, a sector, a semicircle, etc., or may be a range of three-dimensional shapes, etc.
Further, it should be noted that the first or second sensing range may also be a virtual sensing range. For a sensor such as a temperature, humidity or air pressure sensor, the sensing range itself does not extend far: only the temperature, humidity or air pressure at the detection point itself can be detected. In actual operation, however, conditions such as temperature, humidity or air pressure may be considered the same within a certain area; for example, the air pressure may be assumed to be the same within a radius of one kilometer, or the temperature within a radius of 10 kilometers. The sensing range (the first or second sensing range) of such a sensor may therefore be taken to be, for example, a circular area with a radius R (R being, for example, 500 meters), and so on.
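As an illustration only, the following sketch models the virtual sensing range described above as a circular area of radius R around a sensor and tests whether a location point falls inside it; the planar coordinates in meters and the function names are assumptions, not details from the patent.

```python
import math

def within_virtual_range(sensor_xy, point_xy, radius_m=500.0):
    """True if point_xy lies within radius_m meters of sensor_xy (circular range)."""
    dx = point_xy[0] - sensor_xy[0]
    dy = point_xy[1] - sensor_xy[1]
    return math.hypot(dx, dy) <= radius_m

print(within_virtual_range((0.0, 0.0), (300.0, 400.0)))  # True: distance is exactly 500 m
print(within_virtual_range((0.0, 0.0), (600.0, 0.0)))    # False: distance is 600 m
```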
The control center subsystem may then, for example, select a plurality of first candidate positions as possible positions of a plurality of first sensors to be reselected. For example, a plurality of first candidate positions may be randomly selected so that, when first sensors are arranged at these positions, the whole monitored area can be covered according to the first sensing range of each first sensor. For example, one air pressure sensor (as an example of a first sensor) may be placed every 500 meters, as shown in fig. 3, where each solid circle represents a possible position of a first sensor.
Optionally, the control center subsystem may then, for example, judge whether unreasonable positions exist among the currently selected possible positions of the first sensors, and if so, reject each unreasonable position and set at least one candidate position near the rejected position to replace it. As shown in fig. 4, the two dotted circles indicate that the corresponding positions are unreasonable; the reason may differ according to the actual situation, for example, if the first sensor needs to be placed in the soil to measure soil moisture and the position corresponding to a dotted circle happens to be water or rock, that position is determined to be unreasonable. It should be understood that unreasonable positions are not limited to areas of water or rock, and may include other types of unreasonable positions, such as uncultivated land.
As shown in fig. 4, the solid triangles beside each dotted circle indicate at least one candidate position replacing the corresponding possible position (in this example, two candidate positions replace each unreasonable position; in other examples, one or another number may be used).
The control center subsystem may then, for example, select a plurality of second candidate positions as possible positions of a plurality of second sensors to be reselected. For example, a plurality of second candidate positions may be randomly selected so that, when second sensors are arranged at these positions, the whole monitored area can be covered according to the second sensing range of each second sensor. For example, the second sensors may be arranged in a random manner, as shown in fig. 5, where each solid square represents a possible position of a second sensor.
Optionally, the control center subsystem may then, for example, judge whether unreasonable positions exist among the currently selected possible positions of the second sensors, and if so, reject each unreasonable position and set at least one candidate position near the rejected position to replace it. As shown in fig. 6, the two dotted squares indicate that the corresponding positions are unreasonable; the reason may differ according to the actual situation, for example, if the second sensor needs to be exposed to the open air and the position corresponding to a dotted square happens to be an indoor environment such as the inside of a house, that position is determined to be unreasonable. It should be understood that unreasonable positions are not limited to the above situation and may include other types of unreasonable positions.
It should be understood that relatively many first candidate positions and second candidate positions may be selected. That is, the first candidate positions may be chosen such that the sensing ranges of the first sensors arranged at them overlap one another, as long as those sensing ranges completely cover the area to be monitored; similarly, relatively many second candidate positions may be chosen, and the sensing ranges of the second sensors arranged at them may overlap, as long as those sensing ranges completely cover the area to be monitored.
As shown in fig. 6, the solid stars beside each dotted square indicate at least one candidate position replacing the corresponding possible position (in this example, two or three candidate positions replace each unreasonable position; in other examples, one or another number may be used).
It should be understood that in other embodiments of the present invention, more than the two types of sensors described above (the first and second sensors) may be included, such as a third sensor (e.g., the underground water level monitoring device described above), a fourth sensor, and so on. In a similar manner, a third sensing range of the third sensor and a fourth sensing range of the fourth sensor may be obtained, and candidate positions, possible positions, etc. corresponding to the third, fourth and further sensors may be selected.
In an embodiment of the invention, the control center subsystem may then, for example, determine whether the different types of sensors influence one another, for example whether their respective action ranges (sensing ranges) are affected. In addition, in practice the sensing range of a sensor (for example an ultrasonic sensor) may vary with environmental conditions such as terrain, landform and weather, so a sensing range matching the current situation should be obtained for the prevailing environmental conditions. If there is an influence, the affected sensing range may be corrected and the corrected sensing range used in the calculation. Whether different types of sensors affect one another, and the sensing ranges after such influence, may be determined experimentally, for example. Therefore, when calculating the possible positions of the various sensors, the calculation process of the embodiment of the invention is more accurate than approaches that consider a single sensor in isolation or that do not adjust the sensing range of the sensor for environmental factors such as terrain, landform and weather.
FIG. 7 is a diagram illustrating the first candidate locations selected in FIG. 4 and the second candidate locations selected in FIG. 6 being placed together.
Then, the control center subsystem may randomly choose K location points in a predetermined monitoring area, for example, where K is a positive integer.
For example, K may be equal to or greater than 100.
Then, the control center subsystem may determine, for example, a first candidate positions and b second candidate positions among the plurality of first candidate positions and the plurality of second candidate positions, where a and b are positive integers, so that the following first condition and second condition are satisfied.
The first condition is: the sum of a and b is as small as possible.
The second condition is: each of the K position points is within the first sensing range of a first sensor at at least one of the a first candidate positions and within the second sensing range of a second sensor at at least one of the b second candidate positions.
Thus, the values of a and b, and the respective positions of the a first candidate positions and the b second candidate positions may be determined.
The process of solving for a and b above is described below by way of example.
After the plurality of first candidate positions and the plurality of second candidate positions have been obtained, the objective of the subsequent processing is to further reduce their number, so that as few first sensors and second sensors as possible are finally arranged.
For example, assume that 10 first candidate positions have been selected as the possible positions of the first sensors to be reselected (in practice more may be used, for example 50, 100 or 1000; 10 is used here for convenience of description). Likewise, assume that 10 second candidate positions have been selected as the possible positions of the second sensors to be reselected (again, in practice there may be more).
Thus, taking one of the K position points randomly selected in the predetermined monitoring area as an example, assume that position point l(1) lies within the sensing ranges of the first sensors at the 6th and 9th of the 10 (pre-numbered) first candidate positions, but not within the sensing ranges of the first sensors at the other first candidate positions, and that l(1) lies within the sensing ranges of the second sensors at the 2nd and 3rd of the 10 (pre-numbered) second candidate positions, but not within the sensing ranges of the second sensors at the other second candidate positions. Then the first reception variable of position point l(1) corresponding to the first sensors can be recorded as sig1(l(1)) = (0,0,0,0,0,1,0,0,1,0), and the second reception variable of position point l(1) corresponding to the second sensors as sig2(l(1)) = (0,1,1,0,0,0,0,0,0,0).
For the first reception variable sig1(l(1)), each element in the vector indicates whether position point l(1) is in the sensing range of the first sensor at the corresponding first candidate position; for example, an element value of 0 indicates that it is not in that sensing range, and an element value of 1 indicates that it is.
Similarly, for the second reception variable sig2(l(1)), each element in the vector indicates whether position point l(1) is in the sensing range of the second sensor at the corresponding second candidate position; an element value of 0 indicates that it is not in that sensing range, and an element value of 1 indicates that it is.
Assume that, in the current iteration, a = 9 of the 10 first candidate positions are selected, namely the 1st to 9th first candidate positions. The first sensor variable can then be recorded as c1 = (1,1,1,1,1,1,1,1,1,0), where 1 indicates that the corresponding position is selected into the a first candidate positions and 0 indicates that it is not selected.
According to the second condition, for the position point l(1), it can for example be determined whether the following expressions hold:
(0,0,0,0,0,1,0,0,1,0)(1,1,1,1,1,1,1,1,1,0)^T ≥ 1, and
(0,1,1,0,0,0,0,0,0,0)(1,1,1,1,1,1,1,1,1,0)^T ≥ 1
if any of the two formulas is not true, the current selection mode is unreasonable.
If the two formulas are both true, the current selection mode is retained and iteration is continued. For example, all the selection modes may be traversed, each of the selection modes satisfying the second condition is retained, and then the calculation is iterated until the first condition is satisfied.
Similarly, each of the randomly selected K location points in the predetermined monitoring area may be processed separately.
It should be noted that in other examples, for sensors with different requirements, for example, when it is required to receive sensing signals of at least 2 sensors of a certain type at the same time, the right "1" in the above equation may be changed to 2.
Furthermore, it should be noted that, in the embodiment of the present invention, the values of a and b may be found by, for example, a decreasing iterative calculation: the initial value of a may be equal to the number of first candidate positions (e.g., 10), and the initial value of b may be equal to the number of second candidate positions (e.g., 10); after all combinations with a = 10 have been evaluated, the combinations with a = 9 are evaluated, noting that there may be several combinations with a = 9 (10 combinations in this example), and so on.
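The patent does not prescribe a specific solving algorithm, so the following sketch illustrates the selection problem described above with a brute-force search: it keeps only selections under which every position point is covered (the second condition) and returns a combination minimizing a + b (the first condition). Because the coverage constraints for the first and second sensors are independent, minimizing a + b here reduces to minimizing a and b separately; the input format mirrors the reception variables sig1 and sig2 above, and all names and the toy data are assumptions.

```python
from itertools import combinations

def covers_all(points_sig, selected):
    """Second condition: every position point is within range of at least one
    sensor placed at a selected candidate position."""
    return all(any(sig[i] for i in selected) for sig in points_sig)

def minimal_selection(points_sig):
    """Smallest set of candidate positions covering all points (brute force)."""
    n = len(points_sig[0])
    for size in range(1, n + 1):               # try sizes 1, 2, ... in increasing order
        for selected in combinations(range(n), size):
            if covers_all(points_sig, selected):
                return selected
    return None                                 # full coverage is not possible

def choose_positions(sig1, sig2):
    """Return (a first candidate positions, b second candidate positions) with
    a + b as small as possible while every point is covered by both sensor types."""
    return minimal_selection(sig1), minimal_selection(sig2)

# Toy example: 2 position points, 3 candidate positions of each sensor type.
sig1 = [(1, 0, 0), (0, 1, 1)]                  # reception variables per point
sig2 = [(0, 1, 0), (0, 1, 0)]
print(choose_positions(sig1, sig2))            # ((0, 1), (1,))
```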
The control center subsystem may then, for example, rearrange a first sensors according to the determined a first candidate positions and rearrange b second sensors according to the determined b second candidate positions.
For example, the growth of the corresponding crops can be predicted, and information on the soil elements affecting crop growth can be acquired, based at least on the video data and soil environment data corresponding to each monitoring point received from the monitoring subsystem.
For example, the information of the environmental elements in the air influencing the growth of the crops can be obtained at least based on the corresponding air environment data at each weather monitoring station received from the weather subsystem.
In addition, for example, the underground water level change condition of each underground water level monitoring point can be monitored at least based on the underground water level data corresponding to each underground water level monitoring point received from the underground water level monitoring subsystem.
In the above example, the case of only one kind of first sensor and one kind of second sensor is described. When there are multiple kinds of first sensors and multiple kinds of second sensors, the first condition becomes: a value a is determined for each kind of first sensor and a value b for each kind of second sensor, such that the sum of all the values a and all the values b is as small as possible. The second condition becomes: each of the K position points is within the first sensing range of a first sensor at at least one of the a first candidate positions corresponding to each kind of first sensor, and within the second sensing range of a second sensor at at least one of the b second candidate positions corresponding to each kind of second sensor. The calculation process is similar and is not described in detail here.
Further, the first, second, third and fourth communication devices may each be, for example, a Wi-Fi communication module, or a module such as Bluetooth.
In one example, the agricultural internet of things based system may further include a geographic information subsystem and an agricultural drone and satellite remote sensing subsystem.
The geographic information subsystem comprises an electronic map of a preset farm, and marking information is arranged at a plurality of preset positions on the electronic map.
The agricultural unmanned aerial vehicle and satellite remote sensing subsystem comprises an unmanned aerial vehicle end, a satellite communication end and a server end.
The unmanned aerial vehicle end is suitable for collecting low-altitude remote sensing images of preset planting areas of the agricultural Internet of things for multiple times and sending the low-altitude remote sensing images to the server end in real time;
the satellite communication terminal is suitable for acquiring a high-altitude remote sensing image of a preset planting area of the agricultural Internet of things and sending the high-altitude remote sensing image to the server terminal in real time;
the server side is suitable for at least one function of crop growth prediction, insect pest detection and flood disaster analysis and early warning based on a low-altitude remote sensing image from the unmanned aerial vehicle side and/or a high-altitude remote sensing image from the satellite communication side.
For example, the annotation information includes one or more of land information, water conservancy information, and forestry information.
For example, in a greenhouse control system, temperature sensors, humidity sensors, pH value sensors, light intensity sensors and CO2 sensors of the Internet of things system are used to detect physical quantity parameters such as ambient temperature, relative humidity, pH, illumination intensity, soil nutrients and CO2 concentration, ensuring that the crops have a good and suitable growing environment. Remote control allows technical staff to monitor and control the environment of multiple greenhouses from the office. Wireless networks are used to measure and achieve the optimal conditions for crop growth.
In unmanned aerial vehicle remote sensing, a small digital camera (or scanner) is usually used as the airborne remote sensing device. Compared with traditional aerial photographs, the resulting images are small in size and large in number, so corresponding software is developed to process them interactively according to the characteristics of the remote sensing images, the camera calibration parameters, the attitude data at the time of shooting (or scanning) and the relevant geometric models. In addition, the system also includes automatic image recognition and rapid stitching software, enabling rapid inspection of image quality and flight quality as well as rapid data processing, and meeting the real-time and rapid technical requirements of the whole system.
For example, the server side groups the received low-altitude remote sensing images and/or high-altitude remote sensing images, and generates a video to be detected from each group of images, thereby obtaining a plurality of videos to be detected (this step is not shown in the figures).
Then, the target video is received. The target video is received from outside, such as a user terminal. The target video can be a video file in any format, and can also be a video file conforming to one of preset formats. The preset format includes, for example, video formats such as MPEG-4, AVI, MOV, ASF, 3GP, MKV, and FLV.
Next, a plurality of scene cut times in the target video is determined. For example, the scene switching time in the target video may be detected by using the prior art, which is not described herein again.
Then, for each scene switching time in the target video, the switched video frame corresponding to that scene switching time is obtained. That is, at each scene switching point (i.e., scene switching time), the frame before the switch is referred to as the pre-switch video frame, and the frame after the switch is referred to as the switched video frame. Thus, for a target video, one or more switched video frames can be obtained (or zero switched video frames, meaning the video never switches scene and always shows the same scene).
Then, the first frame image of the target video and the switched video frames corresponding to all scene switching times in the target video are taken as a plurality of target frame images (if there is no switched video frame in the target video, there is only one target frame image, that is, the first frame image of the target video), and the total number of all target frame images is recorded as N, where N is a non-negative integer. Generally, N is 2 or more. When there is no switched video frame in the target video, N is equal to 1.
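The text above leaves the scene-switch detection method open ("the prior art may be used"), so the sketch below shows just one possible approach, not the patent's own: flag a scene switch whenever the mean absolute difference between consecutive grayscale frames exceeds a threshold, and collect the first frame plus each switched video frame as the target frame images. OpenCV is assumed to be available, and the threshold value is arbitrary.

```python
import cv2
import numpy as np

def extract_target_frames(video_path, diff_threshold=30.0):
    """Return the first frame and every frame that directly follows a detected scene switch."""
    cap = cv2.VideoCapture(video_path)
    target_frames, prev_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            target_frames.append(frame)                       # first frame image
        elif np.mean(cv2.absdiff(gray, prev_gray)) > diff_threshold:
            target_frames.append(frame)                       # switched video frame
        prev_gray = gray
    cap.release()
    return target_frames                                      # N = len(target_frames)
```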
Then, for each video to be detected in a preset video database, determining a plurality of scene switching moments in the video to be detected, obtaining a switched video frame corresponding to each scene switching moment in the video to be detected, and taking a first frame image of the video to be detected and the switched video frames corresponding to all the scene switching moments in the video to be detected as frame images to be detected.
The preset video database stores a plurality of videos serving as the videos to be detected in advance. For example, the predetermined video database may be a database stored in a video playing platform, or a database stored in a memory such as a network cloud disk.
In this way, for each target frame image, the similarity between each frame image to be detected of each video to be detected and the target frame image is calculated, and the frame image to be detected, the similarity between which and the target frame image is higher than the first threshold value, is determined as the candidate frame image corresponding to the video to be detected. The first threshold may be set according to an empirical value, for example, the first threshold may be 80% or 70%, or the like.
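The similarity measure is likewise not fixed by the text, so the following sketch uses a normalized color-histogram correlation, one common choice, to compare a frame image to be detected with a target frame image and keep those above the first threshold as candidate frame images; the 0.8 default mirrors the 80% example above, and OpenCV is assumed.

```python
import cv2

def frame_similarity(img_a, img_b, bins=32):
    """Histogram-correlation similarity; higher means more similar (maximum 1.0)."""
    hists = []
    for img in (img_a, img_b):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        hists.append(hist)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

def candidate_frames(frames_to_detect, target_frame, first_threshold=0.8):
    """Frames whose similarity to the target frame exceeds the first threshold."""
    return [f for f in frames_to_detect
            if frame_similarity(f, target_frame) > first_threshold]
```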
Then, for each video to be detected, a first score of the video to be detected is calculated.
For example, for each video to be detected, a first score of the video to be detected may be obtained by performing processing as will be described below.
The number of candidate frame images corresponding to the video to be detected is calculated and recorded as a1, where a1 is a non-negative integer.
Then, the number of all target frame images related to each candidate frame image corresponding to the video to be detected is calculated and recorded as a2, and a2 is a non-negative integer.
Then, a first score of the video to be detected is calculated according to the following formula: S1 = q1 × a1 + q2 × a2.
S1 is the first score of the video to be detected, q1 represents the weight corresponding to the number of candidate frame images corresponding to the video to be detected, q2 represents the weight corresponding to the number of all target frame images related to each candidate frame image corresponding to the video to be detected, wherein q1 is equal to the preset first weight value.
Alternatively, the first weight value is, for example, equal to 0.5, which may also be set empirically.
When a2 is equal to N, q2 is equal to a preset second weight value.
When a2 < N, q2 is equal to a preset third weight value.
Wherein the second weight value is greater than the third weight value.
Alternatively, the second weight value is equal to 1, for example, and the third weight value is equal to 0.5, for example, or the second weight value and the third weight value may be set empirically.
Alternatively, the second weight value may be equal to d times the third weight value, d being a real number greater than 1. Where d can be an integer or a decimal number, for example, d can be an integer or a decimal number greater than or equal to 2, such as 2, 3, or 5, and so on.
Similar videos of the target video are then determined among the videos to be detected according to the first score of each video to be detected.
Optionally, the step of determining similar videos of the target video in the to-be-detected videos according to the first score of each to-be-detected video may include: and selecting the video to be detected with the first score higher than the second threshold value from all the videos to be detected as the similar video of the target video. The second threshold may be set according to an empirical value, for example, the second threshold may be equal to 5, and different values may be set according to different application conditions.
In this way, similar videos similar to the target video can be determined in the predetermined video database.
Thus, a plurality of target frame images in the target video are obtained based on scene switching points (i.e., scene switching times), and a plurality of frame images to be detected in each video to be detected are likewise obtained based on scene switching points: the target frame images are the switched video frames corresponding to the scene switching points in the target video, and the frame images to be detected are the switched video frames corresponding to the scene switching points in each video to be detected. By comparing the similarity between each target frame image of the target video and each frame image to be detected of each video to be detected, two kinds of information are obtained: one is the number of frame images to be detected in each video to be detected that are related to the target frame images (i.e., the number of candidate frame images in that video to be detected), and the other is the number of target frame images related to each video to be detected (i.e., the number of all target frame images that are similar to some frame image to be detected in that video). Whether a video to be detected is similar to the target video is determined based on the combination of these two kinds of information. On the one hand, similar videos of the target video can thus be obtained more efficiently; on the other hand, the range to be searched in subsequent, further similarity judgments can be narrowed, greatly reducing the workload.
In a preferred example (hereinafter referred to as example 1), assume that the target video has 3 scene switching points, so that the target video has 4 switched video frames (including the first frame), i.e. 4 target frame images, denoted p1, p2, p3 and p4; the total number N of all target frame images is therefore 4. Assume that a certain video to be detected (denoted v1) has 5 scene switching points, so that the video to be detected v1 has 6 switched video frames, i.e. 6 frame images to be detected, denoted p1', p2', p3', p4', p5' and p6'. Similarity calculation is performed between each of the 6 frame images to be detected and each of the 4 target frame images: the similarity between p1' and p1 is x11, between p1' and p2 is x12, between p1' and p3 is x13, and between p1' and p4 is x14; the similarity between p2' and p1 is x21, between p2' and p2 is x22, between p2' and p3 is x23, and between p2' and p4 is x24; the similarity between p3' and p1 is x31, between p3' and p2 is x32, between p3' and p3 is x33, and between p3' and p4 is x34; the similarity between p4' and p1 is x41, between p4' and p2 is x42, between p4' and p3 is x43, and between p4' and p4 is x44; the similarity between p5' and p1 is x51, between p5' and p2 is x52, between p5' and p3 is x53, and between p5' and p4 is x54; the similarity between p6' and p1 is x61, between p6' and p2 is x62, between p6' and p3 is x63, and between p6' and p4 is x64. If, among all these similarities, only x11, x21, x23, x31, x33 and x43 are higher than the first threshold of 80%, then the number a1 of candidate frame images corresponding to the video to be detected v1 is 4 (namely p1', p2', p3' and p4'), and the number a2 of all target frame images related to those candidate frame images is 2 (namely p1 and p3). Since N is 4, a2 is obviously smaller than N, so q2 is equal to the preset third weight value. Assuming that the first weight value equals 0.5, the second weight value equals 1 and the third weight value equals 0.5, then q1 = 0.5 and q2 = 0.5, and the first score of the video to be detected v1 is S1 = q1 × a1 + q2 × a2 = 0.5 × 4 + 0.5 × 2 = 3 points.
Assume another video to be detected, v2, for which the number a1 of candidate frame images is 4 and the number a2 of all target frame images related to those candidate frame images is 4. Then a2 = N, so q2 = 1, and the first score of the video to be detected v2 is S1 = q1 × a1 + q2 × a2 = 0.5 × 4 + 1 × 4 = 6 points.
Thus, in example 1, the first score of the video to be detected v2 is much higher than that of the video to be detected v1. Assuming that the second threshold is 5 points (different values may be set in other examples), the video to be detected v2 is determined as a similar video of the target video, while the video to be detected v1 is not.
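As a purely illustrative sketch (not part of the claimed system), the following Python code shows how the first score of example 1 can be computed from a similarity matrix between frame images to be detected and target frame images. The function name, the concrete similarity values and the default weights are assumptions introduced only for illustration; the 80% threshold and the weight values 0.5/1/0.5 are the example values from the text.

```python
# Illustrative sketch: computing the first score S1 = q1*a1 + q2*a2 from a similarity
# matrix sim, where sim[i][j] is the similarity between frame image to be detected i
# and target frame image j.

def first_score(sim, first_threshold=0.8, q1=0.5, second_weight=1.0, third_weight=0.5):
    n_targets = len(sim[0])                      # N: total number of target frame images
    # a1: frame images to be detected that are similar to at least one target frame image
    a1 = sum(1 for row in sim if any(s > first_threshold for s in row))
    # a2: target frame images related to at least one candidate frame image
    a2 = sum(1 for j in range(n_targets)
             if any(row[j] > first_threshold for row in sim))
    q2 = second_weight if a2 == n_targets else third_weight
    return q1 * a1 + q2 * a2

# Example 1: v1 has 6 frame images to be detected, the target video has 4 target frame
# images, and only the pairs corresponding to x11, x21, x23, x31, x33 and x43 exceed 80%.
sim_v1 = [
    [0.9, 0.1, 0.2, 0.1],   # p1'
    [0.9, 0.1, 0.9, 0.1],   # p2'
    [0.9, 0.1, 0.9, 0.1],   # p3'
    [0.1, 0.1, 0.9, 0.1],   # p4'
    [0.1, 0.1, 0.1, 0.1],   # p5'
    [0.1, 0.1, 0.1, 0.1],   # p6'
]
print(first_score(sim_v1))   # 0.5*4 + 0.5*2 = 3.0
```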
In one example, among all videos to be detected, videos to be detected in which the first score is higher than the second threshold may be selected as candidate videos.
Then, the target video is divided based on a plurality of scene switching moments of the target video to obtain a plurality of first video clips corresponding to the target video, the total number of all the first video clips in the target video is recorded as M, and M is a non-negative integer.
Then, for each candidate video, the candidate video is segmented based on a plurality of scene switching moments of the candidate video, and a plurality of second video segments corresponding to the candidate video are obtained.
Then, for a second video segment corresponding to each candidate frame image of each candidate video, selecting a first video segment related to a target frame image corresponding to the candidate frame image from a plurality of first video segments, performing similarity calculation on the selected first video segment and the selected second video segment, and if the similarity between the first video segment and the second video segment is higher than a third threshold, determining the second video segment as a similar segment corresponding to the first video segment. Wherein the third threshold value may be set according to an empirical value, for example, the third threshold value may be equal to 60% or 70% or 80% or 90%, etc.
For example, the similarity calculation between two video segments can be implemented by using the prior art, and is not described herein again.
Then, for each candidate video, the number of similar segments contained in the candidate video is calculated and denoted b1 (b1 is a non-negative integer), the number of all first video segments related to the similar segments contained in the candidate video is calculated and denoted b2 (b2 is a non-negative integer), and a second score of the candidate video is calculated according to the following formula: S2 = q3 × b1 + q4 × b2, where S2 is the second score of the candidate video, q3 represents the weight corresponding to the number of similar segments contained in the candidate video, and q4 represents the weight corresponding to the number of all first video segments related to the similar segments contained in the candidate video; q3 is equal to a preset fourth weight value, q4 is equal to a preset fifth weight value when b2 = M, and q4 is equal to a preset sixth weight value when b2 < M, wherein the fifth weight value is greater than the sixth weight value. The fourth weight value, the fifth weight value and the sixth weight value may also be set empirically.
Then, similar videos of the target video are determined among the candidate videos according to the second score of each candidate video.
Optionally, among all the candidate videos, a candidate video in which the second score is higher than a fourth threshold is selected as the similar video of the target video. The fourth threshold may be set according to an empirical value, for example, the fourth threshold may be equal to 5, and different values may be set according to different application conditions.
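As an illustrative sketch of this secondary, segment-level screening (not the patent's implementation), the following Python code computes the second score of one candidate video. `segment_similarity` stands in for any existing video-segment similarity measure, `related_pairs` encodes which first video segment is related to which second video segment via the matching frame images, and the default threshold and weight values are assumed example values.

```python
# Illustrative sketch: second score S2 = q3*b1 + q4*b2 for one candidate video.
def second_score(target_segments, candidate_segments, related_pairs,
                 segment_similarity, third_threshold=0.8,
                 q3=0.5, fifth_weight=1.0, sixth_weight=0.5):
    """related_pairs: list of (i, j) pairs meaning first video segment i (in the target
    video) is related to second video segment j (in the candidate video)."""
    M = len(target_segments)                 # total number of first video segments
    similar_segments = set()                 # indices j of similar second segments -> b1
    related_first_segments = set()           # indices i of related first segments  -> b2
    for i, j in related_pairs:
        if segment_similarity(target_segments[i], candidate_segments[j]) > third_threshold:
            similar_segments.add(j)
            related_first_segments.add(i)
    b1, b2 = len(similar_segments), len(related_first_segments)
    q4 = fifth_weight if b2 == M else sixth_weight
    return q3 * b1 + q4 * b2

# Candidate videos whose second score exceeds the fourth threshold are kept as similar videos:
# similar = [v for v in candidate_videos if second_score(...) > fourth_threshold]
```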
Thus, in one implementation, a plurality of target frame images in the target video are first obtained based on the scene switching points (i.e. scene switching moments), and a plurality of frame images to be detected in each video to be detected are obtained based on its scene switching points, wherein the target frame images are the switched video frames corresponding to the scene switching points in the target video, and the frame images to be detected are the switched video frames corresponding to the scene switching points in each video to be detected. By comparing the similarity between each target frame image of the target video and each frame image to be detected of each video to be detected, two kinds of information are obtained: one is the number of candidate frame images of each video to be detected (a1), and the other is the number of target frame images related to those candidate frame images (a2). A first score of each video to be detected is determined based on the combination of these two kinds of information, part of the videos to be detected are screened out as candidate videos based on the first score, and a secondary screening is then performed on the candidate videos so as to finally obtain the similar videos of the target video; the secondary screening is realized by calculating a second score for each candidate video. When calculating the second score, the target video and each candidate video are first segmented based on their scene switching points, yielding a plurality of first video segments corresponding to the target video and a plurality of second video segments corresponding to each candidate video. By comparing the similarity between the first video segments of the target video and the second video segments of a candidate video, another two kinds of information are obtained: one is the number of second video segments in the candidate video that are related to the target video (i.e. the number of similar segments contained in the candidate video, b1), and the other is the number of first video segments related to those similar segments (b2). The second score of each candidate video is determined based on the combination of these two kinds of information, and the candidate videos are then screened according to their second scores to determine which of them are similar videos of the target video. Therefore, the first score and the second score of a video to be detected (or candidate video) are obtained by combining the four kinds of information, and the videos to be detected are screened twice by combining the first score and the second score, so that the similar videos obtained by screening are more accurate.
Compared with the prior art of directly calculating the similarity of two whole videos, this can greatly reduce the workload and improve the processing efficiency. A primary screening is first performed by calculating the first score; this calculation is based only on the frame images after scene switching, so its computation amount is much smaller than a similarity calculation over the whole video. A secondary screening is then performed on the result of the primary screening; the secondary screening does not perform similarity calculations on all videos to be detected, and even for a single candidate video it does not compute the similarity of the whole video, but segments the candidate video based on its scene switching points and performs similarity calculation only between part of the segmented video segments (namely the similar segments mentioned above) and the corresponding segments in the target video. Compared with the prior art of calculating the similarity between every pair of whole videos, the computation amount is thus greatly reduced and the efficiency is improved.
In one example, similar videos of the target video are determined among the videos to be detected according to the first score of each video to be detected as follows: selecting, among all the videos to be detected, the videos to be detected whose first score is higher than a second threshold as candidate videos; dividing the target video based on a plurality of scene switching moments of the target video to obtain a plurality of first video segments corresponding to the target video, and recording the total number of all the first video segments in the target video as M, where M is a non-negative integer; for each candidate video, segmenting the candidate video based on a plurality of scene switching moments of the candidate video to obtain a plurality of second video segments corresponding to the candidate video; for a second video segment corresponding to each candidate frame image of each candidate video, selecting, among the plurality of first video segments, a first video segment related to the target frame image corresponding to the candidate frame image, performing similarity calculation between the selected first video segment and the second video segment, and, if the similarity between the first video segment and the second video segment is higher than a third threshold, determining the second video segment as a similar segment corresponding to the first video segment; for each candidate video, calculating the number of similar segments contained in the candidate video, denoted b1 (b1 is a non-negative integer), calculating the number of all first video segments related to the similar segments contained in the candidate video, denoted b2 (b2 is a non-negative integer), and calculating a second score of the candidate video according to the following formula: S2 = q3 × b1 + q4 × b2, where S2 is the second score of the candidate video, q3 represents the weight corresponding to the number of similar segments contained in the candidate video, q4 represents the weight corresponding to the number of all first video segments related to the similar segments contained in the candidate video, q3 is equal to a preset fourth weight value, q4 is equal to a preset fifth weight value when b2 = M, and q4 is equal to a preset sixth weight value when b2 < M, wherein the fifth weight value is greater than the sixth weight value; and determining similar videos of the target video among the candidate videos according to the second score of each candidate video.
In one example, similar videos of the target video are determined among the candidate videos according to the second score of each candidate video as follows: among all the candidate videos, a candidate video in which the second score is higher than the fourth threshold is selected as a similar video of the target video.
In one example, the method further comprises: taking each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as input, taking the real yield grades corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as output, training a preset convolutional neural network model, and taking the trained preset convolutional neural network model as a first prediction model; the historical data comprises a plurality of groups of low-altitude remote sensing images and high-altitude remote sensing images, and real yield grades, corresponding weather data and corresponding pest data corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images; obtaining a first predicted yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in historical data by using a first prediction model, taking the first predicted yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, corresponding weather data and corresponding pest damage data as input, taking the real yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as output, training a predetermined BP neural network model, and taking the trained predetermined BP neural network model as a second prediction model; inputting a low-altitude remote sensing image and a high-altitude remote sensing image to be predicted currently into a first prediction model, and obtaining a first prediction yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently; inputting a first predicted yield grade corresponding to a low-altitude remote sensing image and a high-altitude remote sensing image to be predicted at present, weather data and pest damage data corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted at present into a second prediction model, and obtaining a second predicted yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted at present; and determining a corresponding similar case by using the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently, and calculating a prediction yield value corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently based on the real yield of the similar case and the obtained second prediction yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently.
In one example, the step of determining corresponding similar cases by using the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted currently (hereinafter, the images to be predicted), and calculating the predicted yield value corresponding to the images to be predicted based on the real yields of the similar cases and the obtained second predicted yield grade, comprises the following steps: for each image in each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, calculating the similarity between that image and each image in the images to be predicted, and determining the number of images to be predicted whose similarity with that image is higher than a fifth threshold as the first score of that image; for each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, taking the sum of the first scores of all images in the group as the first score of the group, taking the similarity between the weather data corresponding to the group and the weather data corresponding to the images to be predicted as the second score of the group, taking the similarity between the pest data corresponding to the group and the pest data corresponding to the images to be predicted as the third score of the group, and calculating the weighted sum of the first score, the second score and the third score of the group as the total score of the group; taking the T historical cases corresponding to the T groups of low-altitude remote sensing images and high-altitude remote sensing images with the highest total scores as the similar cases corresponding to the images to be predicted, wherein T is 1, 2 or 3; determining the weight of each similar case according to the total score corresponding to each similar case, and calculating the weighted sum of the real yields of the T similar cases according to the determined weights, wherein the sum of the weights of the T similar cases is 1; if the yield grade corresponding to the calculated weighted sum of the real yields of the T similar cases is the same as the second predicted yield grade corresponding to the images to be predicted, taking the weighted sum of the real yields of the T similar cases as the predicted yield value corresponding to the images to be predicted; if the yield grade corresponding to the calculated weighted sum is higher than the second predicted yield grade corresponding to the images to be predicted, taking the maximum value of the yield numerical range corresponding to that second predicted yield grade as the predicted yield value corresponding to the images to be predicted; and if the yield grade corresponding to the calculated weighted sum is lower than the second predicted yield grade corresponding to the images to be predicted, taking the minimum value of the yield numerical range corresponding to that second predicted yield grade as the predicted yield value corresponding to the images to be predicted.
In one example, the method further comprises: storing picture data and character data of a plurality of stored agricultural products, wherein the picture data of each stored agricultural product comprises one or more pictures; receiving a picture to be searched and/or characters to be retrieved of a product to be searched from a user side, calculating the similarity between each stored agricultural product and the product to be searched, carrying out object detection on the picture to be searched of the product to be searched, and obtaining all identified first article images in the picture to be searched; for each stored agricultural product, calculating the similarity between the stored agricultural product and the product to be searched in the following mode: performing object detection on each picture in the picture data of the stored agricultural products to obtain all identified second item images in the picture data of the stored agricultural products, performing contour retrieval on all identified second item images in the picture data of the stored agricultural products respectively to determine whether the contour of the second item of each second item image is complete, calculating the similarity between each second item image and each first item image in all identified second item images in the picture data of the stored agricultural products, determining the number of first item images with the similarity higher than a seventh threshold value with each second item image for each second item image of the stored agricultural products, taking the number as the first correlation between the second item image and the product to be searched, and accumulating and calculating the sum of the first correlations corresponding to each second item image of the stored agricultural products, determining the number of first item images with similarity higher than a seventh threshold value with respect to each second item image with complete outline of the stored agricultural product, taking the number as a second correlation degree of the second item image and the product to be searched, calculating the sum of the second correlation degrees corresponding to each second item image of the stored agricultural product in an accumulated manner, calculating the text similarity between text data of the stored agricultural product and the text to be retrieved of the product to be searched, and determining the total similarity of the stored agricultural product and the product to be searched according to the sum of the first correlation degrees, the sum of the second correlation degrees and the text similarity corresponding to the stored agricultural product; and displaying the stored agricultural products with the total similarity to the product to be searched higher than an eighth threshold value to the user as search results.
According to an embodiment, the method may further include: and taking each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as input, taking the real yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data as output, training a preset convolutional neural network model, and taking the trained preset convolutional neural network model as a first prediction model.
The yield grade referred to herein (e.g., the yield grade in "real yield grade", or the yield grade in "predicted yield grade" described below) is one of a plurality of preset grades. For example, a number of yield grades may be preset empirically or experimentally, such as 3 grades (or 2, 4, 5, 8, 10 grades, etc.), wherein the first grade corresponds to a yield range of x1 to x2 (e.g., 1.0 to 1.2 thousand kilograms), the second grade corresponds to a yield range of x2 to x3 (e.g., 1.2 to 1.4 thousand kilograms), and the third grade corresponds to a yield range of x3 to x4 (e.g., 1.4 to 1.6 thousand kilograms).
For example, if the yield is 1.5 thousand kilograms, the corresponding yield grade is the third grade.
If the yield is exactly equal to a boundary value, the lower grade may be taken. For example, a yield of 1.2 thousand kilograms corresponds to the first grade.
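The following minimal Python sketch (an illustration only, with assumed grade boundaries taken from the example above) shows one way of mapping a yield value to a yield grade, with boundary values assigned to the lower grade.

```python
# Illustrative sketch: mapping a yield value (in thousand kilograms) to a preset yield grade.
def yield_to_grade(yield_value, boundaries=(1.0, 1.2, 1.4, 1.6)):
    # boundaries = (x1, x2, x3, x4); grade i covers the range (x_i, x_{i+1}],
    # so a yield exactly on a boundary falls into the lower grade.
    for grade, upper in enumerate(boundaries[1:], start=1):
        if yield_value <= upper:
            return grade
    return len(boundaries) - 1   # clamp values above x4 to the highest grade

print(yield_to_grade(1.5))   # 3: third grade (1.4 to 1.6)
print(yield_to_grade(1.2))   # 1: boundary value falls into the lower, first grade
```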
It should be noted that each set of the low-altitude remote sensing image and the high-altitude remote sensing image may include more than one low-altitude remote sensing image, and may also include more than one high-altitude remote sensing image.
The historical data comprises a plurality of groups of low-altitude remote sensing images and high-altitude remote sensing images, and real yield grades, corresponding weather data and corresponding pest data corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images; in addition, the historical data can also comprise the real yield corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images. Each set of low-altitude and high-altitude remote sensing images (and corresponding real yield grade, real yield, corresponding weather data, corresponding pest data and the like) corresponds to a historical case.
The weather data may be in the form of a vector; for example, the weather data is represented by (t1, t2) (or more dimensions), where t1 and t2 each take the value 0 or 1, 0 meaning that the corresponding item is no and 1 meaning that it is yes. For example, t1 indicates whether there is drought, t2 indicates whether there is flooding, and so on. Thus weather data (0,1) indicates no drought but flooding, while weather data (0,0) indicates neither drought nor flooding.
Similarly, the pest data may be in the form of a vector; for example, the pest data is represented by (h1, h2, h3, h4, h5) (or fewer or more dimensions), where h1 to h5 each take the value 0 or 1, 0 meaning that the corresponding item is no and 1 meaning that it is yes. For example, h1 indicates whether the number of pest occurrences is 0, h2 indicates whether the number of pest occurrences is 1-3, h3 indicates whether it is 3-5, h4 indicates whether it is more than 5, and h5 indicates whether the total area of pest occurrence exceeds a predetermined area (which may, for example, be set empirically or determined by experiment), and so on. For example, pest data (1,0,0,0,0) indicates that no pest has occurred, while pest data (0,0,1,0,1) indicates that pests have occurred 3-5 times and that the total area of pest occurrence exceeds the predetermined area.
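As a purely illustrative sketch, the following Python code encodes such weather and pest vectors and computes a similarity between two vectors. Cosine similarity is used here as one possible choice; the text only requires some vector similarity calculation method, so this concrete measure is an assumption.

```python
# Illustrative sketch: 0/1 weather and pest vectors and a simple vector similarity.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else (1.0 if u == v else 0.0)

weather_a = (0, 1)              # no drought, flooding
weather_b = (0, 0)              # neither drought nor flooding
pest_a = (0, 0, 1, 0, 1)        # 3-5 pest occurrences, total area above the predetermined area
pest_b = (1, 0, 0, 0, 0)        # no pest occurrence

print(cosine_similarity(pest_a, pest_b))            # 0.0: completely different pest situations
print(cosine_similarity(pest_a, (0, 0, 1, 0, 0)))   # ~0.71: partially matching pest situations
```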
Then, a first prediction yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data can be obtained by using the first prediction model, namely, after the first prediction model is trained, each group of low-altitude remote sensing images and high-altitude remote sensing images are input into the first prediction model, and the output result at the moment is used as the first prediction yield grade corresponding to the group of low-altitude remote sensing images and high-altitude remote sensing images.
In this way, the first predicted yield grade, the corresponding weather data and the corresponding pest data corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data can be used as input, the real yield grade corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data is used as output, the predetermined BP neural network model is trained, and the trained predetermined BP neural network model is used as the second prediction model.
It should be noted that, in the process of training the predetermined BP neural network model, the "first predicted yield grade" corresponding to each group of low-altitude remote sensing images and high-altitude remote sensing images is chosen as one of the input quantities, rather than the corresponding real yield grade (although both the real yield and the real yield grade are known for the historical data), because in the testing stage the real yield grade (or real yield) of the images to be tested is unknown; training in this way allows the second prediction model to classify (i.e. predict) the images to be tested more accurately.
Therefore, the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted at present can be input into the first prediction model, and the first prediction yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted at present can be obtained.
Then, the first predicted yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted at present, the weather data and the pest damage data corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted at present can be input into the second prediction model, and the output result of the second prediction model at this moment is used as the second predicted yield grade corresponding to the low-altitude remote sensing image and the high-altitude remote sensing image to be predicted at present.
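The following Python sketch is an illustration of how the two prediction stages can be wired together; it is not the patent's implementation. The function `predict_grade_from_images` merely stands in for the trained convolutional first prediction model, the BP neural network of the second stage is approximated with scikit-learn's `MLPClassifier` (a multilayer perceptron trained by backpropagation), and the historical cases, hidden-layer size and feature layout are all assumed example values.

```python
# Illustrative sketch: two-stage yield-grade prediction (first model -> second model).
import numpy as np
from sklearn.neural_network import MLPClassifier

def predict_grade_from_images(image_group):
    # Placeholder for the first prediction model (CNN): returns a first predicted yield grade.
    return 2

# Historical cases: the first-model grade, weather vector and pest vector are the inputs,
# and the real yield grade is the output label.
historical_cases = [
    {"grade1": 2, "weather": [0, 0], "pest": [1, 0, 0, 0, 0], "real_grade": 2},
    {"grade1": 1, "weather": [1, 0], "pest": [0, 0, 1, 0, 1], "real_grade": 1},
    {"grade1": 3, "weather": [0, 1], "pest": [0, 1, 0, 0, 0], "real_grade": 3},
    {"grade1": 2, "weather": [0, 0], "pest": [0, 1, 0, 0, 0], "real_grade": 3},
]
X = np.array([[c["grade1"], *c["weather"], *c["pest"]] for c in historical_cases])
y = np.array([c["real_grade"] for c in historical_cases])

second_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# Prediction for the images currently to be predicted:
grade1 = predict_grade_from_images("current_image_group")    # first predicted yield grade
features = np.array([[grade1, 0, 0, 0, 1, 0, 0, 0]])          # grade1 + weather + pest vectors
second_grade = second_model.predict(features)[0]              # second predicted yield grade
print(second_grade)
```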
In this way, similar cases corresponding to the images to be predicted can be determined in a plurality of historical cases by using the low-altitude remote sensing images and the high-altitude remote sensing images to be predicted currently (hereinafter referred to as images to be predicted), and the prediction yield values corresponding to the low-altitude remote sensing images and the high-altitude remote sensing images to be predicted currently are calculated based on the real yield of the similar cases and the second prediction yield level corresponding to the images to be predicted.
As an example, the following processing may be performed: for each image in each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, the similarity between that image and each image in the images to be predicted is calculated, and the number of images to be predicted whose similarity with that image is higher than a fifth threshold is determined as the first score of that image.
For example, for a certain image px in a certain group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, assume that the images to be predicted comprise 10 images pd1, pd2, …, pd10. The similarity between the image px and each of these 10 images is calculated, that is, the similarity xs1 between px and pd1, the similarity xs2 between px and pd2, …, and the similarity xs10 between px and pd10. Assuming that only xs1, xs3 and xs8 among xs1 to xs10 are greater than the above fifth threshold, the number of images to be predicted whose similarity with the image px is higher than the fifth threshold is 3, i.e. the first score of the image px is 3.
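A minimal Python sketch of this counting step, with assumed similarity values chosen to match the example above:

```python
# Illustrative sketch: first score of a historical image px = number of images to be
# predicted whose similarity with px exceeds the fifth threshold.
xs = [0.92, 0.40, 0.85, 0.10, 0.55, 0.30, 0.20, 0.90, 0.60, 0.15]   # xs1..xs10 (assumed)
fifth_threshold = 0.8
first_score_px = sum(1 for s in xs if s > fifth_threshold)
print(first_score_px)   # 3 (xs1, xs3 and xs8 exceed the threshold)
```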
Then, the similar case determination module may, for each group of low-altitude remote sensing images and high-altitude remote sensing images in the historical data, take the sum of the first scores of the images in the group as the first score of the group (i.e. the first score of the corresponding historical case). Preferably, the first score of each historical case may be normalized, for example by multiplying all first scores by a predetermined coefficient (e.g. 0.01 or 0.05), so that the resulting scores lie between 0 and 1.
For example, for one historical case, assume that the corresponding group of low-altitude remote sensing images and high-altitude remote sensing images comprises 5 low-altitude remote sensing images and 5 high-altitude remote sensing images (or other numbers), and denote these 10 images as pl1 to pl10. When calculating the first score of this historical case, assuming that the first scores of the images pl1 to pl10 are spl1 to spl10 (already normalized), the first score of the historical case is spl1 + spl2 + spl3 + … + spl10, i.e. the sum of spl1 to spl10.
Then, the similarity between the weather data corresponding to the group of low-altitude remote sensing images and high-altitude remote sensing images and the weather data corresponding to the images to be predicted currently may be taken as the second score of the group. The weather data are, for example, in vector form, and the similarity between weather data may be calculated by a vector similarity calculation method, which is not described again here.
Then, the similarity between the pest data corresponding to the group of low-altitude remote sensing images and the high-altitude remote sensing images and the pest data corresponding to the current low-altitude remote sensing images and the high-altitude remote sensing images to be predicted can be used as a third score of the group of low-altitude remote sensing images and the high-altitude remote sensing images, wherein the pest data are in a vector form, and the similarity between the pest data can be calculated by adopting a vector similarity calculation method, which is not repeated here.
Then, the weighted sum of the first score, the second score and the third score corresponding to the group of low-altitude remote sensing images and high-altitude remote sensing images can be calculated as the total score of the group. The respective weights of the first score, the second score and the third score may be set empirically or determined experimentally; for example, the weights of the first score, the second score and the third score may each be 1/3, or the three scores may have different weights.
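A minimal Python sketch of the total score of one group (illustration only; the equal weights of 1/3 and the example score values are assumptions):

```python
# Illustrative sketch: total score of one group of historical remote sensing images.
def total_score(first_score, second_score, third_score, weights=(1/3, 1/3, 1/3)):
    return (weights[0] * first_score
            + weights[1] * second_score
            + weights[2] * third_score)

# e.g. normalized first score 0.6, weather similarity 1.0, pest similarity 0.71
print(total_score(0.6, 1.0, 0.71))   # ~0.77
```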
Therefore, the T historical cases corresponding to the front T groups of low-altitude remote sensing images and high-altitude remote sensing images with the highest total score can be used as similar cases corresponding to the low-altitude remote sensing images and the high-altitude remote sensing images to be predicted currently, wherein T is 1, 2 or 3 or other positive integers.
After determining T similar cases of the image to be predicted, the following process can be performed: and determining the weight of each similar case according to the total score corresponding to each similar case, and calculating the weighted sum of the real yields of the T similar cases according to the determined weights, wherein the sum of the weights of the T similar cases is 1.
For example, assuming T is 3, 3 similar cases of the images to be predicted are obtained; assume their total scores are sz1, sz2 and sz3, respectively, where sz1 is smaller than sz2 and sz2 is smaller than sz3. The weights corresponding to the 3 similar cases may then be set to qsz1, qsz2 and qsz3 in order, such that qsz1 : qsz2 : qsz3 (the ratio of the three) is equal to sz1 : sz2 : sz3 (the ratio of the three).
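An illustrative Python sketch of this weighting step, using the total scores and real yields from the worked example further below (the helper name is an assumption):

```python
# Illustrative sketch: weights of the T similar cases proportional to their total scores
# (so that they sum to 1), and the weighted sum of their real yields.
def case_weights(total_scores):
    s = sum(total_scores)
    return [t / s for t in total_scores]

total_scores = [1, 2, 2]                 # sz1, sz2, sz3
real_yields = [1.1, 1.3, 1.18]           # thousand kilograms
weights = case_weights(total_scores)     # [0.2, 0.4, 0.4]
weighted_yield = sum(w * y for w, y in zip(weights, real_yields))
print(weights, round(weighted_yield, 3))   # [0.2, 0.4, 0.4] 1.212
```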
If the yield grade corresponding to the calculated weighted sum of the real yields of the T similar cases is the same as the second predicted yield grade corresponding to the images to be predicted, the weighted sum of the real yields of the T similar cases can be used as the predicted yield value corresponding to the images to be predicted.
If the yield level corresponding to the weighted sum of the real yields of the T similar cases obtained by calculation is higher than the second prediction yield level corresponding to the image to be predicted, the maximum value in the yield numerical range corresponding to the second prediction yield level corresponding to the image to be predicted can be used as the prediction yield numerical value corresponding to the image to be predicted.
If the yield grade corresponding to the calculated weighted sum of the real yields of the T similar cases is lower than the second predicted yield grade corresponding to the images to be predicted, the minimum value of the yield numerical range corresponding to that second predicted yield grade can be used as the predicted yield value corresponding to the images to be predicted.
For example, assume that the total scores of the 3 similar cases of the images to be predicted (whose real yields are assumed to be 1.1, 1.3 and 1.18 thousand kilograms, respectively) are 1, 2 and 2 (and that the total scores of the other historical cases are all less than 1). The weights corresponding to the 3 similar cases may then be set to 0.2, 0.4 and 0.4 in sequence, so that the weighted sum of the real yields of the T similar cases is 0.2 × 1.1 + 0.4 × 1.3 + 0.4 × 1.18 = 0.22 + 0.52 + 0.472 = 1.212 thousand kilograms, and the corresponding yield grade is the second grade x2 to x3 (e.g., 1.2 to 1.4 thousand kilograms).
Assuming that the second predicted yield grade corresponding to the images to be predicted is the first grade x1 to x2 (e.g., 1.0 to 1.2 thousand kilograms), the upper boundary of the yield range corresponding to the first grade (i.e., 1.2 thousand kilograms) is used as the predicted yield value corresponding to the images to be predicted.
Assuming that the second predicted yield grade corresponding to the images to be predicted is the second grade x2 to x3 (e.g., 1.2 to 1.4 thousand kilograms), 1.212 thousand kilograms is used as the predicted yield value corresponding to the images to be predicted.
Assuming that the second predicted yield grade corresponding to the images to be predicted is the third grade x3 to x4 (e.g., 1.4 to 1.6 thousand kilograms), the lower boundary of the yield range corresponding to the third grade (i.e., 1.4 thousand kilograms) is used as the predicted yield value corresponding to the images to be predicted.
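The decision logic of the example above can be summarized in the following illustrative Python sketch (assumptions only: the grade ranges reuse the example boundaries, and `yield_to_grade` refers to the helper from the earlier sketch):

```python
# Illustrative sketch: choosing the final predicted yield value by comparing the grade of
# the weighted real-yield sum with the second predicted yield grade.
GRADE_RANGES = {1: (1.0, 1.2), 2: (1.2, 1.4), 3: (1.4, 1.6)}   # thousand kilograms

def predicted_yield(weighted_yield, second_grade, yield_to_grade):
    grade_of_sum = yield_to_grade(weighted_yield)
    low, high = GRADE_RANGES[second_grade]
    if grade_of_sum == second_grade:
        return weighted_yield          # grades agree: use the weighted sum directly
    if grade_of_sum > second_grade:
        return high                    # computed grade higher: take the range maximum
    return low                         # computed grade lower: take the range minimum

# With the 1.212 thousand-kilogram weighted sum (second grade) from the example:
# predicted_yield(1.212, 1, yield_to_grade) -> 1.2
# predicted_yield(1.212, 2, yield_to_grade) -> 1.212
# predicted_yield(1.212, 3, yield_to_grade) -> 1.4
```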
Through the mode, not only the prediction result (namely the second prediction yield level) of the image to be predicted is utilized, but also the prediction result obtained by utilizing the information of the similar cases (namely the weighted sum of the real yields of the T similar cases) is utilized, so that the obtained final yield prediction result is more in line with the actual situation and is more accurate.
According to an embodiment of the present invention, the above system and method may further include an agricultural product search process (subsystem), wherein in the agricultural product search process (subsystem), the database may be used to store the picture data and the text data of a plurality of stored agricultural products, wherein the picture data of each stored agricultural product includes one or more pictures.
In the agricultural product search processing (subsystem), a picture to be searched for and/or a text to be retrieved of a product to be searched for from a user side may be received, for example, object detection may be performed on the picture to be searched for the product to be searched for first to obtain images of all identified first objects in the picture to be searched for, for example, the picture to be searched for input by the user may be a picture taken by a handheld terminal device, or may be other pictures obtained by a device in a stored or downloaded manner, and the picture may include a plurality of objects, for example, may be a picture including two objects, namely, a desk and a teacup. By utilizing the existing object detection technology, two first object images of a desk and a teacup in a picture can be identified.
In the agricultural product search process, a similarity between each stored agricultural product stored in the database unit and a product to be searched may be calculated. For each stored agricultural product, the similarity between the stored agricultural product and the product to be searched can be calculated, for example, as follows: for each picture in the picture data of the stored agricultural product, performing object detection on the picture to obtain all identified second item images in the picture data of the stored agricultural product (which may be implemented by using a technology similar to the above-mentioned detection of the first item image, and is not described here again).
Then, in the agricultural product search processing (subsystem), contour retrieval may be performed on all identified second item images in the picture data of the stored agricultural product, respectively, to determine whether the second item contour of each second item image is complete.
Then, in all the identified second item images (including complete and incomplete outlines) in the picture data of the stored agricultural products, the similarity between each second item image and each first item image may be calculated (for example, the existing image similarity calculation method may be adopted).
Then, for each second item image of the stored agricultural products, the number of first item images with the similarity higher than a seventh threshold value with the second item image may be determined as the first correlation between the second item image and the product to be searched, and the sum of the first correlations corresponding to the respective second item images of the stored agricultural products is calculated in an accumulated manner.
Then, for each second item image with complete outline of the stored agricultural product, the number of first item images with similarity higher than a seventh threshold value with the second item image is determined as a second correlation degree of the second item image and the product to be searched, and the sum of the second correlation degrees corresponding to the second item images of the stored agricultural product is calculated in an accumulated mode.
Then, the literal similarity between the literal data of the stored agricultural product and the literal to be retrieved of the product to be searched can be calculated, for example, the existing method for calculating the similarity of character strings can be used.
In this way, the total similarity between the stored agricultural product and the product to be searched can be determined according to the sum of the first correlations (denoted f1), the sum of the second correlations (denoted f2) and the text similarity (denoted f3); for example, the total similarity may be equal to f1 + f2 + f3, or may be equal to a weighted sum of the three, such as qq1 × f1 + qq2 × f2 + qq3 × f3, where qq1 to qq3 are preset weights of f1 to f3 and may be set empirically.
In this way, stored agricultural products having a total similarity to the product to be searched that is higher than the eighth threshold value may be presented to the user as search results.
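As a purely illustrative sketch of this search scoring (not the patent's implementation), the following Python code computes the total similarity of one stored agricultural product. The item-similarity routine, the seventh-threshold value and the weights qq1 to qq3 are assumptions; the structure of f1, f2 and f3 follows the description above.

```python
# Illustrative sketch: total similarity of one stored agricultural product.
def product_total_similarity(second_items, first_items, item_similarity, text_similarity,
                             seventh_threshold=0.8, weights=(1.0, 1.0, 1.0)):
    """second_items: list of (image, outline_complete) detected in the stored product's pictures;
    first_items: item images detected in the picture to be searched."""
    f1 = f2 = 0
    for image, outline_complete in second_items:
        matches = sum(1 for q in first_items if item_similarity(image, q) > seventh_threshold)
        f1 += matches                   # first correlation: counted for every second item image
        if outline_complete:
            f2 += matches               # second correlation: only for complete-outline items
    qq1, qq2, qq3 = weights
    return qq1 * f1 + qq2 * f2 + qq3 * text_similarity

# Stored agricultural products whose total similarity exceeds the eighth threshold
# are shown to the user as search results.
```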
It should be noted that the first to eighth thresholds may be set according to empirical values or determined through experiments, and are not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention and the advantageous effects thereof have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (9)

1. An information processing system for pest prevention, characterized in that the information processing system for pest prevention comprises a remote sensing end and an agricultural internet of things ground control center:
the remote sensing end is used for acquiring a low-altitude remote sensing image and a high-altitude remote sensing image of a preset planting area of the agricultural Internet of things and sending the low-altitude remote sensing image and the high-altitude remote sensing image to the ground control center of the agricultural Internet of things in real time;
the agricultural internet of things ground control center comprises:
the first feature extraction unit is used for extracting features of the low-altitude remote sensing image to obtain first image features;
the second feature extraction unit is used for extracting features of the high-altitude remote sensing image to obtain second image features;
the crop growth information acquisition unit is used for acquiring planting information of planted crops corresponding to the preset planting area of the agricultural internet of things and acquiring actual insect pest information of the planted crops corresponding to the preset planting area of the agricultural internet of things, wherein the planting information comprises sowing time, sowing quantity, fertilizing time, fertilizing quantity each time, water supply quantity each time, insect killing time and leaf area index per ten days;
the prediction model training unit is used for taking the first image characteristic, the second image characteristic, planting information of planted crops corresponding to the preset planting area of the agricultural Internet of things and actual insect pest information as training samples to train a preset insect pest prediction model;
the prediction unit is used for obtaining the predicted insect pest information of the crop to be predicted according to the planting information of the crop to be predicted and the trained insect pest prediction model;
the remote sensing end comprises an unmanned aerial vehicle end and a satellite communication end, and the information processing system further comprises a server end; the unmanned aerial vehicle end is suitable for collecting low-altitude remote sensing images of preset planting areas of the agricultural Internet of things for multiple times and sending the low-altitude remote sensing images to the server end in real time; the satellite communication terminal is suitable for acquiring a high-altitude remote sensing image of a preset planting area of the agricultural Internet of things and sending the high-altitude remote sensing image to the server terminal in real time;
the server side groups the received low-altitude remote sensing images and/or high-altitude remote sensing images, and generates a video to be detected by using each group of images to obtain a plurality of videos to be detected; receiving a target video through a server end; determining a plurality of scene switching moments in the target video;
the server side obtains a switched video frame corresponding to each scene switching moment in the target video aiming at each scene switching moment in the target video; taking a first frame image of the target video and switched video frames corresponding to all scene switching moments in the target video as a plurality of target frame images, and recording the total number of all the target frame images as N, wherein N is a non-negative integer;
the method comprises the steps that a server side determines a plurality of scene switching moments in a video to be detected aiming at each video to be detected in a preset video database, obtains a switched video frame corresponding to each scene switching moment in the video to be detected, and takes a first frame image of the video to be detected and the switched video frames corresponding to all the scene switching moments in the video to be detected as frame images to be detected;
the server side calculates the similarity between each frame image to be detected of each video to be detected and the target frame image aiming at each target frame image, and determines the frame image to be detected with the similarity higher than a first threshold value with the target frame image as a candidate frame image corresponding to the video to be detected;
for each video to be detected, the server side,
calculating the number of candidate frame images corresponding to the video to be detected, recording as a1, wherein a1 is a non-negative integer,
calculating the number of all target frame images related to each candidate frame image corresponding to the video to be detected, recording as a2, wherein a2 is a non-negative integer,
calculating a first score of the video to be detected according to the following formula: S1 = q1 × a1 + q2 × a2, where S1 is the first score of the video to be detected, q1 represents the weight corresponding to the number of candidate frame images corresponding to the video to be detected, q2 represents the weight corresponding to the number of all target frame images related to each candidate frame image corresponding to the video to be detected, where q1 is equal to the preset first weight value,
q2 is equal to a preset second weight value when a2 is equal to N, and q2 is equal to a preset third weight value when a2 is less than N, wherein the second weight value is greater than the third weight value;
the server side determines similar videos of the target video in the videos to be detected according to the first score of each video to be detected; determining similar videos of the target video in the videos to be detected according to the first score of each video to be detected as follows:
selecting the video to be detected with the first score higher than a second threshold value from all the videos to be detected as candidate videos;
dividing the target video based on a plurality of scene switching moments of the target video to obtain a plurality of first video clips corresponding to the target video, and recording the total number of all the first video clips in the target video as M, wherein M is a non-negative integer;
for each candidate video, segmenting the candidate video based on a plurality of scene switching moments of the candidate video to obtain a plurality of second video segments corresponding to the candidate video;
for a second video segment corresponding to each candidate frame image of each candidate video,
selecting a first video segment related to a target frame image corresponding to the candidate frame image among the plurality of first video segments,
performing similarity calculation between the selected first video segment and the selected second video segment,
if the similarity between the first video clip and the second video clip is higher than a third threshold, determining the second video clip as a similar clip corresponding to the first video clip;
for each of the candidate videos,
calculating the number of similar segments contained in the candidate video, and marking as b1, wherein b1 is a non-negative integer,
calculating the number of all first video segments related to similar segments contained in the candidate video, which is marked as b2, b2 is a non-negative integer,
calculating a second score for the candidate video according to: S2 = q3 × b1 + q4 × b2, where S2 is the second score of the candidate video, q3 represents a weight corresponding to the number of similar segments included in the candidate video, q4 represents a weight corresponding to the number of all first video segments related to each similar segment included in the candidate video, where q3 is equal to a preset fourth weight value,
q4 is equal to a preset fifth weight value when b2 is equal to M, and q4 is equal to a preset sixth weight value when b2 is less than M, wherein the fifth weight value is greater than the sixth weight value;
determining similar videos of the target video in the candidate videos according to the second score of each candidate video.
2. The information processing system for pest prevention according to claim 1, wherein the pest information includes a number of pest occurrences and a pest occurrence area.
3. The information processing system for pest prevention according to claim 1, wherein the pest prediction model employs a spectral composite prediction model.
4. The information processing system for pest prevention according to claim 1, wherein in the step of training the predetermined pest prediction model, a difference between predicted pest information of a planted crop corresponding to the preset planting area of the agricultural internet of things obtained by the pest prediction model and actual pest information of the planted crop is smaller than a predetermined threshold value.
5. An information processing system for pest prevention according to any one of claims 1 to 4, wherein the information processing system further comprises a monitoring subsystem, a weather subsystem, a ground water level monitoring subsystem and a control center subsystem;
the monitoring subsystem comprises a plurality of monitoring points, wherein each monitoring point is provided with at least one video device, at least one first sensor and a first communication device, the at least one video device is used for capturing video data of a corresponding area, the at least one first sensor is used for acquiring soil environment data corresponding to the monitoring point, and the first communication device is used for sending the video data and the soil environment data acquired by the corresponding monitoring point to the control center subsystem;
the weather subsystem comprises a plurality of weather monitoring stations, wherein each weather monitoring station is provided with a plurality of second sensors and a second communication device, the second sensors are used for acquiring air environment data corresponding to the weather monitoring station, and the second communication device is used for sending the air environment data corresponding to the weather monitoring station to the control center subsystem;
the underground water level monitoring subsystem comprises a plurality of underground water level monitoring points, wherein each underground water level monitoring point is provided with an underground water level monitoring device and a third communication device, the underground water level monitoring device is used for acquiring underground water level data at a corresponding position in real time and transmitting the acquired underground water level data to the control center subsystem through the third communication device; and
the control center subsystem is configured to:
obtaining a first sensing range of a first sensor; obtaining a second sensing range of a second sensor; selecting a plurality of first candidate positions as possible positions of a plurality of first sensors to be reselected; selecting a plurality of second candidate positions as possible positions of a plurality of second sensors to be reselected; randomly selecting K position points in a preset monitoring area, wherein K is a positive integer; determining a first candidate positions and b second candidate positions from among the first candidate positions and the second candidate positions, wherein a and b are positive integers, so that the following conditions are satisfied: the sum of a and b is as small as possible; and each of the K position points lies within the first sensing range of a first sensor at at least one of the a first candidate positions and within the second sensing range of a second sensor at at least one of the b second candidate positions; and rearranging the first sensors according to the determined a first candidate positions, and rearranging the second sensors according to the determined b second candidate positions.
6. An information processing method for pest prevention, characterized by comprising:
collecting a low-altitude remote sensing image of a preset planting area of the agricultural Internet of things;
collecting a high-altitude remote sensing image of a preset planting area of the agricultural Internet of things;
carrying out feature extraction on the low-altitude remote sensing image to obtain a first image feature;
carrying out feature extraction on the high-altitude remote sensing image to obtain a second image feature;
obtaining planting information of planted crops corresponding to the preset planting area of the agricultural Internet of things and obtaining actual insect pest information of the planted crops corresponding to the preset planting area of the agricultural Internet of things, wherein the planting information comprises sowing time, sowing amount, fertilizing time, fertilizing amount each time, water supply amount each time, pest control time and leaf area index per ten-day period;
taking the first image feature, the second image feature, planting information of planted crops corresponding to the preset planting area of the agricultural Internet of things and actual insect pest information as training samples, and training a preset insect pest prediction model;
obtaining predicted pest information of the crop to be predicted according to planting information of the crop to be predicted and the trained pest prediction model;
grouping the received low-altitude remote sensing images and/or high-altitude remote sensing images, and generating a video to be detected by using each group of images to obtain a plurality of videos to be detected; receiving a target video; determining a plurality of scene switching moments in the target video;
for each scene switching moment in the target video, obtaining the switched video frame corresponding to that scene switching moment in the target video; taking the first frame image of the target video and the switched video frames corresponding to all scene switching moments in the target video as a plurality of target frame images, and recording the total number of all the target frame images as N, wherein N is a non-negative integer;
for each video to be detected in a preset video database, determining a plurality of scene switching moments in the video to be detected, obtaining the switched video frame corresponding to each scene switching moment in the video to be detected, and taking the first frame image of the video to be detected and the switched video frames corresponding to all the scene switching moments in the video to be detected as frame images to be detected;
for each target frame image, calculating the similarity between each frame image to be detected of each video to be detected and the target frame image, and determining any frame image to be detected whose similarity with the target frame image is higher than a first threshold value as a candidate frame image corresponding to that video to be detected;
for each video to be detected,
calculating the number of candidate frame images corresponding to the video to be detected, denoted as a1, wherein a1 is a non-negative integer,
calculating the number of all target frame images related to the candidate frame images corresponding to the video to be detected, denoted as a2, wherein a2 is a non-negative integer, and
calculating a first score of the video to be detected according to the following formula: S1 = q1 × a1 + q2 × a2, where S1 is the first score of the video to be detected, q1 represents the weight corresponding to the number of candidate frame images corresponding to the video to be detected, and q2 represents the weight corresponding to the number of all target frame images related to the candidate frame images corresponding to the video to be detected, wherein q1 is equal to a preset first weight value, q2 is equal to a preset second weight value when a2 is equal to N, and q2 is equal to a preset third weight value when a2 is less than N, and the second weight value is greater than the third weight value;
determining similar videos of the target video in the videos to be detected according to the first score of each video to be detected;
wherein the determining of similar videos of the target video in the videos to be detected according to the first score of each video to be detected comprises:
selecting the video to be detected with the first score higher than a second threshold value from all the videos to be detected as candidate videos;
dividing the target video based on a plurality of scene switching moments of the target video to obtain a plurality of first video clips corresponding to the target video, and recording the total number of all the first video clips in the target video as M, wherein M is a non-negative integer;
for each candidate video, segmenting the candidate video based on a plurality of scene switching moments of the candidate video to obtain a plurality of second video segments corresponding to the candidate video;
for each second video segment corresponding to a candidate frame image of each candidate video,
selecting, from among the plurality of first video segments, the first video segment related to the target frame image corresponding to that candidate frame image,
calculating the similarity between the selected first video segment and the second video segment, and
if the similarity between the first video segment and the second video segment is higher than a third threshold, determining the second video segment as a similar segment corresponding to the first video segment;
for each of the candidate videos,
calculating the number of similar segments contained in the candidate video, denoted as b1, wherein b1 is a non-negative integer,
calculating the number of all first video segments related to the similar segments contained in the candidate video, denoted as b2, wherein b2 is a non-negative integer, and
calculating a second score of the candidate video according to the following formula: S2 = q3 × b1 + q4 × b2, where S2 is the second score of the candidate video, q3 represents the weight corresponding to the number of similar segments contained in the candidate video, and q4 represents the weight corresponding to the number of all first video segments related to the similar segments contained in the candidate video, wherein q3 is equal to a preset fourth weight value, q4 is equal to a preset fifth weight value when b2 is equal to M, and q4 is equal to a preset sixth weight value when b2 is less than M, and the fifth weight value is greater than the sixth weight value;
determining similar videos of the target video in the candidate videos according to the second score of each candidate video.
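Claim 6 above defines two structurally identical scores, S1 = q1 × a1 + q2 × a2 over candidate frame images and S2 = q3 × b1 + q4 × b2 over similar segments, where the second weight is boosted only when all N target frame images (or all M first video segments) are related. The sketch below is a minimal illustration of that scoring rule; the concrete weight values, class layout and function names are assumptions, since the claim fixes only the inequality between the "full match" and "partial match" weights.

```python
from dataclasses import dataclass
from typing import Set


@dataclass
class ScoreWeights:
    """Illustrative weights; the claim only fixes that w_full > w_partial."""
    base: float       # q1 (or q3): weight on the raw match count a1 / b1
    w_full: float     # q2 (or q4) when all N target frames / all M first segments are related
    w_partial: float  # q2 (or q4) otherwise


def claim6_score(match_count: int, related_targets: Set[int],
                 total_targets: int, w: ScoreWeights) -> float:
    """S = q * match_count + q' * |related_targets|, following the S1/S2 formulas.

    match_count     -> a1 or b1 (candidate frame images / similar segments in this video)
    related_targets -> the distinct target frame images / first video segments they relate to
    total_targets   -> N or M
    """
    q_prime = w.w_full if len(related_targets) == total_targets else w.w_partial
    return w.base * match_count + q_prime * len(related_targets)


# Hypothetical first score S1 of one video to be detected (assumed weight values):
weights = ScoreWeights(base=1.0, w_full=2.0, w_partial=1.0)
s1 = claim6_score(match_count=5, related_targets={0, 1, 2}, total_targets=4, w=weights)
print(s1)  # 5*1.0 + 3*1.0 = 8.0 -- only 3 of the N = 4 target frame images are related
```

The same function can be reused for S2 by passing the similar-segment count as match_count, the related first video segments as related_targets, and total_targets = M.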
7. The information processing method for pest prevention according to claim 6, wherein the pest information includes a number of pest occurrences and a pest occurrence area.
8. The information processing method for pest prevention according to claim 6, wherein the pest prediction model employs a spectral composite prediction model.
9. The information processing method for pest prevention according to claim 6, wherein in the step of training the predetermined pest prediction model, a difference between predicted pest information of a planted crop corresponding to the preset planting area of the agricultural internet of things obtained by the pest prediction model and actual pest information of the planted crop is smaller than a predetermined threshold value.
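Claims 4 and 9 specify only a stopping condition for training: the difference between the model's predicted pest information and the actual pest information must fall below a predetermined threshold. The spectral composite prediction model itself is not disclosed, so the sketch below uses a placeholder linear regressor and mean absolute error purely to illustrate a train-until-threshold loop; every name, value and the choice of error measure is an assumption.

```python
import numpy as np


class LinearPestModel:
    """Placeholder regressor standing in for the undisclosed spectral composite prediction model."""

    def __init__(self, n_features: int):
        self.w = np.zeros(n_features)

    def step(self, X: np.ndarray, y: np.ndarray, lr: float = 0.05) -> np.ndarray:
        """One gradient-descent update on squared error; returns the current predictions."""
        grad = X.T @ (X @ self.w - y) / len(y)
        self.w -= lr * grad
        return X @ self.w


def train_until_threshold(model, X, y, threshold: float, max_epochs: int = 5000) -> int:
    """Train until the difference between predicted and actual pest information
    (read here as mean absolute error) drops below the predetermined threshold,
    mirroring the stopping condition of claims 4 and 9."""
    for epoch in range(max_epochs):
        predicted = model.step(X, y)
        if np.mean(np.abs(predicted - y)) < threshold:
            return epoch
    return max_epochs


# Synthetic stand-ins for the image features, planting information and actual pest data.
rng = np.random.default_rng(0)
X = rng.random((50, 6))            # first/second image features + planting information
y = X @ rng.random(6)              # "actual pest information" (e.g. occurrence area)
model = LinearPestModel(n_features=6)
print("stopped after", train_until_threshold(model, X, y, threshold=0.05), "epochs")
```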
CN201910486919.9A 2019-06-05 2019-06-05 Information processing system and method for insect pest prevention Expired - Fee Related CN110213376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910486919.9A CN110213376B (en) 2019-06-05 2019-06-05 Information processing system and method for insect pest prevention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910486919.9A CN110213376B (en) 2019-06-05 2019-06-05 Information processing system and method for insect pest prevention

Publications (2)

Publication Number Publication Date
CN110213376A (en) 2019-09-06
CN110213376B (en) 2021-03-09

Family

ID=67790969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910486919.9A Expired - Fee Related CN110213376B (en) 2019-06-05 2019-06-05 Information processing system and method for insect pest prevention

Country Status (1)

Country Link
CN (1) CN110213376B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111121862A (en) * 2019-09-29 2020-05-08 广西中遥空间信息技术有限公司 Air-space-ground integrated atmospheric environment monitoring system and method
WO2021196062A1 (en) * 2020-04-01 2021-10-07 唐山哈船科技有限公司 Agricultural pest-killing device and method based on unmanned aerial vehicle
CN111539372B (en) * 2020-05-06 2023-05-19 中南民族大学 Method, equipment, storage medium and device for monitoring pest and disease damage distribution
WO2021226976A1 (en) * 2020-05-15 2021-11-18 安徽中科智能感知产业技术研究院有限责任公司 Soil available nutrient inversion method based on deep neural network
CN111814866A (en) * 2020-07-02 2020-10-23 深圳市万物云科技有限公司 Disease and pest early warning method and device, computer equipment and storage medium
CN111898590A (en) * 2020-08-26 2020-11-06 龙川县林业科学研究所 Camellia oleifera pest and disease monitoring method
CN112907547A (en) * 2021-02-26 2021-06-04 海南金垦赛博信息科技有限公司 Tropical crop pest risk assessment method and system
CN115291541A (en) * 2022-01-18 2022-11-04 聊城市农业技术推广服务中心(聊城市绿色农业发展服务中心) Crop pest and disease monitoring system and method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107783522A (en) * 2017-10-18 2018-03-09 来安县威光绿园生态农业专业合作社 A kind of diseases and pests of agronomic crop Intelligent prevention and cure system based on Internet of Things

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10667456B2 (en) * 2014-09-12 2020-06-02 The Climate Corporation Methods and systems for managing agricultural activities
CN106915462A (en) * 2017-02-14 2017-07-04 福建兴宇信息科技有限公司 Forestry pests & diseases intelligent identifying system based on multi-source image information
CN109242717A (en) * 2018-10-16 2019-01-18 首都师范大学 Agricultural Information processing method, device and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107783522A (en) * 2017-10-18 2018-03-09 来安县威光绿园生态农业专业合作社 A kind of diseases and pests of agronomic crop Intelligent prevention and cure system based on Internet of Things

Also Published As

Publication number Publication date
CN110213376A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110213376B (en) Information processing system and method for insect pest prevention
CN110197308B (en) Crop monitoring system and method for agricultural Internet of things
CN110188962B (en) Rice supply chain information processing method based on agricultural Internet of things
CN110210408B (en) Crop growth prediction system and method based on satellite and unmanned aerial vehicle remote sensing combination
Apolo-Apolo et al. A cloud-based environment for generating yield estimation maps from apple orchards using UAV imagery and a deep learning technique
Sumesh et al. Integration of RGB-based vegetation index, crop surface model and object-based image analysis approach for sugarcane yield estimation using unmanned aerial vehicle
CN106971167B (en) Crop growth analysis method and system based on unmanned aerial vehicle platform
BR112020026356A2 (en) SYSTEMS, DEVICES AND METHODS FOR DIAGNOSIS IN GROWTH STAGE FIELD AND CULTURE YIELD ESTIMATE IN A PLANT AREA
GB2598012A (en) System and method for crop monitoring
CN108195767B (en) Estuary wetland foreign species monitoring method
Hu et al. Coupling of machine learning methods to improve estimation of ground coverage from unmanned aerial vehicle (UAV) imagery for high-throughput phenotyping of crops
Solvin et al. Use of UAV photogrammetric data in forest genetic trials: measuring tree height, growth, and phenology in Norway spruce (Picea abies L. Karst.)
CN110197381B (en) Traceable information processing method based on agricultural Internet of things integrated service management system
CN108776106A (en) A kind of crop condition monitoring method and system based on unmanned plane low-altitude remote sensing
Gómez‐Sapiens et al. Improving the efficiency and accuracy of evaluating aridland riparian habitat restoration using unmanned aerial vehicles
Ouyang et al. Assessment of canopy size using UAV-based point cloud analysis to detect the severity and spatial distribution of canopy decline
CN110161970B (en) Agricultural Internet of things integrated service management system
CN110138879B (en) Processing method for agricultural Internet of things
WO2023131949A1 (en) A versatile crop yield estimator
CN110175267B (en) Agricultural Internet of things control processing method based on unmanned aerial vehicle remote sensing technology
Escobar-Silva et al. A general grass growth model for urban green spaces management in tropical regions: A case study with bahiagrass in southeastern Brazil
Al Rawashdeh Evaluation of the differencing pixel-by-pixel change detection method in mapping irrigated areas in dry zones
Yusof et al. Land clearing, preparation and drone monitoring using Red-Green-Blue (RGB) and thermal imagery for Smart Durian Orchard Management project
Sadiq et al. A review on the imaging approaches in agriculture with crop and soil sensing methodologies
Rilwani et al. Geoinformatics in agricultural development: challenges and prospects in Nigeria

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20210219
Address after: No.263, Hanshui Road, Nangang District, Harbin City, Heilongjiang Province
Applicant after: Heilongjiang Beidahuang Agriculture Co.,Ltd.
Address before: 154000 Qixing farm, Sanjiang Administration Bureau of agricultural reclamation, Jiamusi City, Heilongjiang Province
Applicant before: Qixing Farm in Heilongjiang Province
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20210309