CN114998744A - Agricultural machinery track field segmentation method based on motion and vision dual-feature fusion


Info

Publication number
CN114998744A
Authority
CN
China
Prior art keywords
track
agricultural machinery
data
dimensional
motion
Prior art date
Legal status
Granted
Application number
CN202210839109.9A
Other languages
Chinese (zh)
Other versions
CN114998744B (en)
Inventor
陈瑛
权雷
张晓强
吴才聪
Current Assignee
China Agricultural University
Original Assignee
China Agricultural University
Priority date
Filing date
Publication date
Application filed by China Agricultural University filed Critical China Agricultural University
Priority to CN202210839109.9A priority Critical patent/CN114998744B/en
Publication of CN114998744A publication Critical patent/CN114998744A/en
Application granted granted Critical
Publication of CN114998744B publication Critical patent/CN114998744B/en
Legal status: Active

Classifications

    • G06V 20/182: Scenes; terrestrial scenes; network patterns, e.g. roads or rivers
    • G06N 3/049: Computing arrangements based on biological models; neural networks; temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/084: Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
    • G06V 10/764: Recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
    • G06V 10/82: Recognition or understanding using neural networks
    • G06V 20/188: Terrestrial scenes; vegetation


Abstract

The invention provides an agricultural machinery track field segmentation method based on motion and vision dual-feature fusion, belonging to the technical field of computers. The method comprises the following steps: acquiring agricultural machinery trajectory data to be segmented, the trajectory data indicating the driving trajectory of an agricultural machine within a target time; performing feature extraction on the trajectory data to obtain its one-dimensional trajectory sequence motion features and two-dimensional trajectory map visual features; and determining the position classification of the trajectory data based on these one-dimensional motion features and two-dimensional visual features. By fully exploiting the spatio-temporal motion feature information of the trajectory data, the method significantly improves the accuracy of field segmentation of agricultural machinery driving trajectories.

Description

Agricultural machinery track field segmentation method based on motion and vision dual-feature fusion
Technical Field
The invention relates to the technical field of computers, in particular to an agricultural machinery track field segmentation method based on motion and vision dual-feature fusion.
Background
With the progress and development of agricultural modernization, new intelligent equipment and new data transmission and management modes have been widely applied to agricultural production, greatly advancing intelligent and precision agriculture. In particular, agricultural machines equipped with BeiDou positioning terminals can provide all-weather geographical position information of field operations, which brings great convenience for raising the level of agricultural informatization and intelligence and for reducing agricultural production costs.
During operation of the agricultural machine, the BeiDou positioning terminal acquires in real time the longitude, latitude, speed, azimuth angle, altitude and other attribute information of the track point at each moment of the driving process, which together form a complete driving trajectory. With these driving trajectories as data support, detailed information such as the operation area and operation efficiency of the agricultural machine can be analyzed in real time, which is important for operation subsidies, social services, operation planning and the like.
The first step in analyzing agricultural machinery trajectory data is accurate field trajectory segmentation, i.e., determining the position category of each track point ('farmland' or 'road'). Once the categories of all track points in a trajectory are known, the effective operation track length can be calculated, and operation indexes such as the accumulated operation area (the track length within farmland multiplied by the working width of the machine), the accumulated operation time and the operation efficiency can be analyzed. The quality of the trajectory segmentation algorithm directly affects the subsequent operation analysis. Therefore, the main research objective here is to provide a field segmentation method for agricultural machinery driving trajectories.
At present, field segmentation is mainly performed by the following methods:
(1) Segmentation based on farmland boundaries: when the agricultural machine drives into the boundary area of a field, whether it has entered field operation is judged automatically from its real-time geographical position, thereby segmenting the operation trajectory. Collecting field boundary information manually is time-consuming and labor-intensive and is prone to false alarms and missed reports, which hinders the statistics and supervision of effective operation trajectories.
(2) Segmentation based on remote sensing images: given a remote sensing image of the driving area, field segmentation is performed with image segmentation methods. This approach demands high-quality remote sensing imagery, its segmentation effect is directly limited by the data, and data acquisition incurs economic cost, so it is difficult to apply at large scale.
(3) Segmentation based on density clustering: exploiting the different densities of track points on farmland and on roads, a density-based clustering method segments the driving trajectory. This approach depends strongly on parameters describing point density, and its performance is very unstable.
(4) Segmentation based on machine learning: track points are classified ('farmland' or 'road') with traditional fully supervised machine learning methods. This approach depends heavily on hand-designed feature extraction schemes and struggles to capture the spatio-temporal correlations among track points, so the final field segmentation accuracy is unsatisfactory.
(5) Segmentation based on deep learning: mainstream deep learning models extract the spatio-temporal feature information of track points and classify them ('farmland' or 'road'). Deep learning is widely applied and well studied in urban traffic trajectory segmentation, but experiments on field segmentation of agricultural machinery driving trajectories remain scarce. Moreover, common deep learning models struggle to extract temporal and spatial correlation information among track points simultaneously, so the field segmentation accuracy is difficult to improve significantly.
Disclosure of Invention
The invention provides an agricultural machinery track field segmentation method based on motion and vision dual-feature fusion, which is used for solving the technical problem that the field segmentation accuracy rate is difficult to obviously improve in the prior art.
The invention provides an agricultural machinery track field segmentation method based on motion and vision dual-feature fusion, comprising:
acquiring agricultural machinery trajectory data to be segmented, the trajectory data indicating the driving trajectory of an agricultural machine within a target time;
performing feature extraction on the trajectory data to be segmented to obtain one-dimensional trajectory sequence motion features and two-dimensional trajectory map visual features of the trajectory data;
and determining the position classification of the trajectory data to be segmented based on its one-dimensional trajectory sequence motion features and two-dimensional trajectory map visual features.
In some embodiments, determining the position classification of the trajectory data to be segmented based on its one-dimensional trajectory sequence motion features and two-dimensional trajectory map visual features includes:
performing feature fusion on the one-dimensional trajectory sequence motion features and the two-dimensional trajectory map visual features to obtain a fused feature vector;
and determining the position classification of the trajectory data to be segmented based on the fused feature vector.
In some embodiments, performing the feature fusion to obtain the fused feature vector includes:
inputting the one-dimensional trajectory sequence motion features and the two-dimensional trajectory map visual features into the feature fusion layer of a target trajectory segmentation model for feature fusion to obtain a fused representation vector;
determining the position classification based on the fused feature vector then includes:
inputting the fused representation vector into the linear classification layer of the target trajectory segmentation model for classification to obtain the position classification of the trajectory data to be segmented output by the model;
the target trajectory segmentation model is trained as follows:
performing data cleaning on the acquired sample trajectory data to obtain target sample trajectory data;
performing feature extraction on the target sample trajectory data to obtain its one-dimensional trajectory sequence motion features;
obtaining a sample trajectory image dataset based on the target sample trajectory data;
obtaining the two-dimensional trajectory map visual features of the target sample trajectory data based on the sample trajectory image dataset;
and training an initial trajectory segmentation model based on the one-dimensional trajectory sequence motion features and the two-dimensional trajectory map visual features of the target sample trajectory data to obtain the target trajectory segmentation model.
In some embodiments, obtaining the two-dimensional trajectory map visual features of the target sample trajectory data based on the sample trajectory image dataset includes:
inputting the sample trajectory image dataset into a target image segmentation model to obtain the two-dimensional trajectory map visual features of the target sample trajectory data output by the model;
the target image segmentation model is trained based on the sample trajectory image dataset and trajectory map labels, the trajectory map labels being generated from the mapping relation between the longitude/latitude coordinates and the pixel coordinates of the track points in the sample trajectory image dataset.
In some embodiments, performing data cleaning on the acquired sample trajectory data to obtain the target sample trajectory data includes:
deleting a track point of the sample trajectory data when it falls outside the target range;
when an abnormal track segment exists in the sample trajectory data, cleaning the abnormal track segment while retaining its first track point;
the abnormal track segment includes at least one of:
a track segment whose consecutive track points share the same timestamp;
a track segment whose consecutive track points share the same longitude and latitude coordinates;
a track segment whose consecutive track points have different longitude and latitude coordinates but zero speed.
In some embodiments, performing feature extraction on the target sample trajectory data to obtain its one-dimensional trajectory sequence motion features includes:
obtaining motion features and difference features corresponding to the attribute features of the target sample trajectory data;
obtaining the one-dimensional trajectory sequence motion features of the target sample trajectory data based on these motion features and difference features;
wherein the attribute features include at least one of: time, longitude, latitude, speed, and azimuth angle.
The invention also provides an agricultural machinery track field segmentation device based on motion and vision dual-feature fusion, comprising:
an acquisition module, used for acquiring agricultural machinery trajectory data to be segmented, the trajectory data indicating the driving trajectory of an agricultural machine within a target time;
a first determining module, used for performing feature extraction on the trajectory data to be segmented to obtain its one-dimensional trajectory sequence motion features and two-dimensional trajectory map visual features;
and a second determining module, used for determining the position classification of the trajectory data to be segmented based on its one-dimensional trajectory sequence motion features and two-dimensional trajectory map visual features.
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the agricultural machinery track field segmentation method based on motion and vision dual-feature fusion described above.
The invention also provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the agricultural machinery track field segmentation method based on motion and vision dual-feature fusion described above.
With the method provided by the invention, all track points in an agricultural machine's driving trajectory are classified automatically, so that all farmland track points and road track points in the trajectory sequence can be effectively identified.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic flow chart of an agricultural machinery track field segmentation method based on motion and vision dual-feature fusion, provided by the invention;
FIG. 2 is a schematic overall flow chart of an agricultural machinery track field segmentation method based on the fusion of motion and vision dual features provided by the invention;
FIG. 3 is a schematic diagram of trajectory map and label generation in the agricultural machinery track field segmentation method based on motion and vision dual-feature fusion provided by the invention;
FIG. 4 is a schematic diagram of an Attention U-Net model architecture of an agricultural machinery track field segmentation method based on the fusion of motion and vision dual features provided by the invention;
FIG. 5 is a schematic diagram of the BiLSTM-based feature fusion model architecture in the agricultural machinery track field segmentation method based on motion and vision dual-feature fusion provided by the invention;
FIG. 6 is a schematic structural diagram of an agricultural machinery track field dividing device based on the fusion of motion and vision dual features provided by the invention;
fig. 7 is a schematic physical structure diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flow chart of the agricultural machinery track field segmentation method based on motion and vision dual-feature fusion provided by the invention. Referring to fig. 1, the method comprises: step 110, step 120 and step 130.
Step 110, obtaining agricultural machinery track data to be segmented, wherein the agricultural machinery track data to be segmented is used for indicating the running track of any agricultural machinery in target time;
Step 120, performing feature extraction on the agricultural machinery trajectory data to be segmented to obtain its one-dimensional trajectory sequence motion features and two-dimensional trajectory map visual features;
and step 130, determining the position classification of the agricultural machinery track data to be segmented based on the one-dimensional track sequence motion characteristics and the two-dimensional track graph visual characteristics of the agricultural machinery track data to be segmented.
The execution body of the agricultural machinery track field segmentation method based on motion and vision dual-feature fusion may be an electronic device, a component in an electronic device, an integrated circuit, or a chip. The electronic device may be mobile or non-mobile. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the present invention is not specifically limited in this respect.
The following describes the technical solution of the present invention in detail by taking a computer to execute the method for dividing the agricultural machinery locus field based on the fusion of the motion and the vision dual features provided by the present invention as an example.
According to the invention, the agricultural machinery track data to be segmented can be acquired through the Beidou positioning terminal.
The agricultural machinery trajectory data to be segmented may be used to indicate a driving trajectory of any agricultural machinery within a target time, where the target time may be a period of time such as a few hours, a day, or a week, and is not limited specifically herein.
This embodiment takes a certain day as the target time. Within the one-day driving trajectory of an agricultural machine, the speed and the azimuth-angle change rate differ between driving in farmland and driving on a road. For example, when working in farmland, the machine tends to drive straight because of the field layout, so the azimuth angle fluctuates little; when driving on a road, the machine's direction of motion is often irregular owing to complex road and traffic conditions, so the azimuth angle changes greatly. The acceleration change rate not only reflects how the operator accelerates the machine, but is also an important index of driving safety.
The present embodiment mainly comprises the following three main steps:
(1) One-dimensional trajectory sequence motion feature extraction
From the basic attribute features of the trajectory data to be segmented returned by the BeiDou positioning terminal, including time, longitude, latitude, speed and azimuth angle, motion features such as acceleration, acceleration change rate and azimuth-angle change rate are calculated.
To effectively exploit the motion-feature correlation between adjacent track points, difference features are further computed between the track point at the current moment and the track point at the previous moment, such as the longitude difference, latitude difference and azimuth difference.
The extraction of the one-dimensional trajectory sequence motion features of the agricultural machinery trajectory data to be segmented is completed based on these motion features and difference features.
(2) Two-dimensional trajectory map visual feature extraction
A trajectory map is constructed from the longitude and latitude information of the track points in the one-dimensional trajectory sequence; a definite mapping relation exists between the pixel coordinates and the longitude/latitude coordinates of each track point. The trajectory map is a two-dimensional spatial rendering of the one-dimensional trajectory sequence, from which important two-dimensional visual features such as farmland shape and road distribution can be obtained.
(3) Multi-angle fusion sequence feature training
In some embodiments, step 130 comprises: performing feature fusion on the one-dimensional trajectory sequence motion features and the two-dimensional trajectory map visual features of the agricultural machinery trajectory data to be segmented to obtain a fused feature vector;
and determining the position classification of the trajectory data to be segmented based on the fused feature vector.
In actual implementation, the one-dimensional trajectory sequence motion features and the two-dimensional trajectory map visual features of each track point in the data to be segmented are concatenated and then input to the feature fusion layer of the target trajectory segmentation model for multi-angle feature fusion, so that the feature information of the original trajectory data in different dimensions is fully utilized.
Spatio-temporal feature extraction is then performed, so that the information of the track point at each moment fully fuses the spatio-temporal features from different angles, yielding a fused representation vector.
The fused representation vectors are input to the linear classification layer of the target trajectory segmentation model, which finally outputs the position category of each track point in the data to be segmented, namely 'farmland' or 'road'.
In this way, by automatically classifying all track points in the agricultural machine's driving trajectory, the method effectively identifies all farmland track points and road track points in the trajectory sequence.
In some embodiments, the target trajectory segmentation model is trained by:
carrying out data cleaning on the acquired sample track data to obtain target sample track data;
performing characteristic extraction on the target sample track data to obtain one-dimensional track sequence motion characteristics of the target sample track data;
obtaining a sample track image dataset based on target sample track data;
obtaining two-dimensional trajectory chart visual characteristics of target sample trajectory data based on the sample trajectory image dataset;
training the initial track segmentation model based on the one-dimensional track sequence motion characteristics and the two-dimensional track chart visual characteristics of the target sample track data to obtain a target track segmentation model.
As shown in fig. 2, the overall process of training the target trajectory segmentation model is as follows:
First, data acquisition
In practical implementation, the sample trajectory data used by the present invention is mainly of the following two types:
the first type is rice harvester track data, the data source is 100 track data of rice harvester operation in the period of 10 months to 11 months in 2021, 120136 track points are counted, and the data acquisition frequency is about 30 s;
the second type of data is wheat harvester track data, the data source is 150 track data of wheat harvester operation in the period from 6 months to 7 months in 2021, the total number of the track points is 885279, and the data acquisition frequency is about 5 s.
The positioning accuracy of the two types of sample track data is poor (the accuracy is about 2m-5 m), the operating places of the agricultural machinery are distributed in a plurality of provinces, the field operation conditions of different regions are different, and the effects of the traditional field track segmentation method are poor under the conditions.
Second, data cleaning
In some embodiments, performing data cleaning on the acquired sample trajectory data to obtain target sample trajectory data includes:
deleting the track points under the condition that the track points of the sample track data exceed the target range;
under the condition that the abnormal track segment exists in the sample track data, cleaning the abnormal track segment, and reserving a first track point in the abnormal track segment;
the abnormal track segment includes at least one of:
track segments corresponding to track points with the same time in continuous time;
corresponding track segments with the same longitude and latitude coordinates in continuous time;
track segments corresponding to the track segments with different longitude and latitude coordinates and zero speed in continuous time.
In practical implementation, in the process of acquiring the sample trajectory data of the agricultural machinery driving trajectory, due to signal loss, a Global Navigation Satellite System (GNSS) positioning terminal often has a sampling error.
In order to avoid the influence of sampling errors on subsequent classification effects, corresponding cleaning needs to be performed in advance for abnormal point types existing in sample trajectory data, and the data cleaning mode includes the following steps:
(1) Repeated-sampling type: track segments whose consecutive track points share the same timestamp are cleaned, retaining the first track point of the abnormal segment.
(2) Stationary-trajectory type: track segments whose consecutive track points share the same longitude and latitude coordinates are cleaned, retaining the first track point of the abnormal segment.
(3) Static-drift type: track segments whose consecutive track points have different longitude and latitude coordinates but zero speed are cleaned, retaining the first track point of the abnormal segment.
(4) Longitude/latitude-anomaly type: because the data acquisition area spans multiple provinces and cities, track points whose longitude and latitude fall outside the target range are abnormal; they are cleaned by direct deletion. The target range is determined by actual requirements and is not specifically limited here.
After data cleaning, the target sample trajectory data are obtained.
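For illustration, the four cleaning rules can be sketched as follows (a minimal Python/pandas sketch; the column names t, lon, lat and speed and the DataFrame layout are assumptions, not taken from the patent):

    import pandas as pd

    def clean_trajectory(df: pd.DataFrame, lon_range, lat_range) -> pd.DataFrame:
        """Apply the four cleaning rules to one trajectory (columns: t, lon, lat, speed)."""
        # (4) Longitude/latitude anomaly: directly delete points outside the target range.
        df = df[df["lon"].between(*lon_range) & df["lat"].between(*lat_range)]

        # (1) Repeated sampling: among consecutive points with the same timestamp,
        # keep only the first one.
        df = df[df["t"].ne(df["t"].shift())]

        # (2) Stationary trajectory: among consecutive points with identical
        # longitude/latitude, keep only the first one.
        same_pos = df["lon"].eq(df["lon"].shift()) & df["lat"].eq(df["lat"].shift())
        df = df[~same_pos]

        # (3) Static drift: among consecutive zero-speed points (whose coordinates
        # nevertheless change), keep only the first one.
        zero_run = df["speed"].eq(0) & df["speed"].shift().eq(0)
        df = df[~zero_run]

        return df.reset_index(drop=True)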
Third, one-dimensional trajectory motion feature extraction
In some embodiments, performing feature extraction on the target sample trajectory data to obtain a one-dimensional trajectory sequence motion feature of the target sample trajectory data includes:
obtaining a motion characteristic and a difference characteristic corresponding to the attribute characteristic based on the attribute characteristic of the target sample track data;
obtaining a one-dimensional track sequence motion characteristic of the target sample track data based on the motion characteristic and the difference characteristic corresponding to the attribute characteristic;
the attribute characteristics include at least one of: time, longitude, latitude, speed, and azimuth.
In actual implementation, motion features such as acceleration, acceleration change rate and azimuth-angle change rate are calculated from the attribute features of the target sample trajectory data; these motion features sufficiently reflect the motion state of the agricultural machine.
In addition, to incorporate the motion features of historical moments, difference features such as the longitude difference, latitude difference and azimuth difference between the track point at the current moment and the track point at the previous moment are calculated.
Assume the driving trajectory of an agricultural machine is expressed as

$$T = \{p_1, p_2, \dots, p_n\}$$

where $p_i$ denotes the track point at time $i$ and $n$ is the number of track points in the trajectory sequence. Taking the longitude $lon_i$ of the track point at time $i$ as an example, the longitude difference is computed as $\Delta lon_i = lon_i - lon_{i-1}$, i.e., the longitude of the track point at time $i-1$ is subtracted from the longitude of the track point at time $i$; the longitude difference at the starting time defaults to 0. The latitude difference and the azimuth difference are computed in the same way.
After the motion characteristics of the one-dimensional track sequence of the target sample track data are extracted, each track point can be characterized by the motion characteristics and the difference characteristics.
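A sketch of this step in Python/pandas (column names and the exact feature list are assumptions consistent with the description above):

    import pandas as pd

    def extract_motion_features(df: pd.DataFrame) -> pd.DataFrame:
        """Derive motion and difference features from columns t, lon, lat, speed, azimuth."""
        out = df.copy()
        dt = out["t"].diff()  # time step between consecutive points, in seconds

        # Motion features: acceleration, acceleration change rate, azimuth change rate.
        out["accel"] = out["speed"].diff() / dt
        out["accel_rate"] = out["accel"].diff() / dt
        out["azimuth_rate"] = out["azimuth"].diff() / dt

        # Difference features between the current and the previous track point;
        # the difference at the starting time defaults to 0 (via fillna below).
        for col in ("lon", "lat", "azimuth"):
            out["d_" + col] = out[col].diff()

        return out.fillna(0.0)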
Fourth, trajectory map construction
In some embodiments, obtaining the two-dimensional trajectory map visual features of the target sample trajectory data based on the sample trajectory image dataset includes:
inputting the sample trajectory image dataset into a target image segmentation model to obtain the two-dimensional trajectory map visual features of the target sample trajectory data output by the model;
the target image segmentation model is trained based on the sample trajectory image dataset and trajectory map labels, the trajectory map labels being generated from the mapping relation between the longitude/latitude coordinates and the pixel coordinates of the track points in the sample trajectory image dataset.
In actual implementation, in order to extract space-time motion characteristics contained in the trajectory data from different dimensions, for each target sample trajectory data, a trajectory graph corresponding to the trajectory data is drawn by using longitude and latitude coordinate information of the target sample trajectory data, so as to obtain a sample trajectory image data set.
The color channels of a traditional image consist of red, green and blue. To fuse the motion features of the trajectory data with the temporal correlation of adjacent track points in the trajectory sequence, the invention defines a new channel representation. Specifically, for each track point, the pixel values of the three channels are calculated by the following three formulas:
$$R_i = \frac{v_i}{\max\limits_{1\le j\le n} v_j},\qquad G_i = \frac{\lvert\Delta\theta_i\rvert}{\max\limits_{1\le j\le n}\lvert\Delta\theta_j\rvert},\qquad B_i = \frac{i}{n}$$

given a trajectory sequence $T=\{p_1, p_2, \dots, p_n\}$, where $n$ is the number of track points in the sequence, $v_i$ denotes the speed of the track point at time $i$, and $\Delta\theta_i$ denotes the azimuth difference between the track point at time $i$ and the track point at time $i-1$.
When working in farmland, the agricultural machine drives slowly with little fluctuation, whereas on the road it drives fast with unpredictable fluctuation. Using the speed of the track points in place of the pixels of the traditional red channel therefore helps analyze the position of the agricultural machine. To meet the input requirements of the drawing function, the speed values of all track points are normalized by dividing by the maximum speed.
Turning around in the field, irregular field layout and similar situations strongly affect the change of the driving azimuth angle, so the field operation state can be analyzed from the azimuth-angle change pattern. The important azimuth-difference feature therefore replaces the pixels of the green channel when generating the trajectory map. Likewise, to meet the input requirements of the drawing function, the azimuth differences are normalized after taking absolute values; the depth of green of a track point thus encodes the semantic information of azimuth change.
Timing information in the target sample trajectory data is also crucial. To embed timing features in the image data, time information replaces the pixels of the blue channel; after normalization, the depth of blue represents the temporal order of the trajectory data.
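A sketch of the channel encoding and plotting (the 640 x 480 image size is taken from the description of step seven; the matplotlib usage and other plotting details are assumptions):

    import numpy as np
    import matplotlib.pyplot as plt

    def draw_trajectory_map(lon, lat, speed, azimuth, path="traj.png"):
        """Render track points with R = speed, G = |azimuth difference|, B = time order."""
        n = len(lon)
        d_theta = np.abs(np.diff(azimuth, prepend=azimuth[0]))

        r = speed / max(speed.max(), 1e-9)       # red channel: normalized speed
        g = d_theta / max(d_theta.max(), 1e-9)   # green channel: normalized |azimuth diff|
        b = np.arange(1, n + 1) / n              # blue channel: normalized time index

        fig, ax = plt.subplots(figsize=(6.4, 4.8), dpi=100)  # 640 x 480 pixels
        ax.scatter(lon, lat, c=np.stack([r, g, b], axis=1), s=1)
        ax.axis("off")
        fig.savefig(path, dpi=100)
        plt.close(fig)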
Fifth, establishing the mapping relation between position coordinates and pixel coordinates
A picture is stored in the computer as a multidimensional matrix, and the pixel information at a given position can be located by the row and column indices of the matrix; the matrix index is essentially the pixel coordinate of the picture, as shown in fig. 3.
Because the trajectory sequence of the target sample trajectory data consists of consecutive GNSS track points, and the trajectory map generated in step four is essentially a scatter plot, a track point of interest can be conveniently located in the trajectory map through the mapping relation between its longitude/latitude coordinates and the pixel coordinates, so that the two-dimensional trajectory map visual features of the target area can be extracted in a targeted manner in the subsequent visual feature extraction stage.
In actual implementation, the longitude and latitude coordinates of all track points in the target sample trajectory data are taken as the raw data before mapping, expressed as $\{(lon_i, lat_i)\}_{i=1}^{n}$, where $lon_i$ and $lat_i$ respectively denote the longitude and latitude of the track point at time $i$. The coordinate mapping transformation is then implemented with the scatter-plot functions of the matplotlib library: first, $\min(\cdot)$ and $\max(\cdot)$ are used to obtain the value ranges of longitude and latitude in the geographic coordinate system, and the longitude/latitude coordinates obtained above are then passed to the mapping function, which finally generates the mapped pixel coordinates of each track point. For an image of width $W$ and height $H$, the mappings for longitude and latitude take the form of the linear normalizations

$$u_i = \left\lfloor (W-1)\cdot\frac{lon_i - lon_{\min}}{lon_{\max}-lon_{\min}} \right\rfloor,\qquad v_i = \left\lfloor (H-1)\cdot\frac{lat_i - lat_{\min}}{lat_{\max}-lat_{\min}} \right\rfloor$$
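The same linear mapping, written out in NumPy (a sketch that ignores matplotlib's axis margins; the patent itself obtains pixel coordinates through the library's own transform functions):

    import numpy as np

    def lonlat_to_pixels(lon, lat, width=640, height=480):
        """Map longitude/latitude arrays to integer pixel coordinates."""
        u = (width - 1) * (lon - lon.min()) / (lon.max() - lon.min())
        # Image rows grow downward, so the latitude axis is flipped.
        v = (height - 1) * (lat.max() - lat) / (lat.max() - lat.min())
        return u.astype(int), v.astype(int)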
sixth, track icon label construction
In order to effectively extract pixel features in the sample trajectory image dataset, in one embodiment, the sample trajectory image dataset is input to a target image segmentation model to obtain two-dimensional trajectory map visual features of the target sample trajectory data.
It is understood that the target image segmentation model may be any deep learning model with an image semantic segmentation function, and is not limited in particular.
In the embodiment of the invention, the initial image segmentation model is a U-Net model, and the target image segmentation model is an optimal U-Net model obtained after training is finished.
The invention trains the pixel-level semantic segmentation model U-Net on the sample trajectory image dataset, using its characteristic combination of down-sampling and up-sampling to extract the two-dimensional visual features of each track point in the trajectory map.
In addition, the quality of labels matching the model input directly influences the final training effect. Since the traditional semantic segmentation annotation tool LabelMe can hardly label every track point and manual annotation costs considerable effort, the invention automatically generates trajectory map labels from the mapping relation between the track points' longitude/latitude coordinates and pixel coordinates, greatly reducing annotation cost and improving annotation accuracy.
The trajectory map labels are drawn in the palette (indexed-color) picture format. In palette mode the pixel value range is 0-255, i.e., at most 256 colors can be represented, and the pixel requirement for semantic segmentation labels is that the pixel value of each category in the picture equals the label of that category. Using the mapping relation between the track points' longitude/latitude coordinates and pixel coordinates obtained in the previous embodiment, the pixels of all farmland track points are set to 1, the pixels of all road track points are set to 2, and the pixels of the remaining background region are set to 0. Then, using the palette's pixel-to-color index relation, pixel value 0 is assigned the color index (0, 0, 0) and appears black, pixel value 1 is assigned a red color index and appears red, and pixel value 2 is assigned a green color index and appears green.
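Such palette-mode labels can be produced, for example, with Pillow (a sketch; the specific red and green shades are assumptions):

    import numpy as np
    from PIL import Image

    def save_palette_label(label: np.ndarray, path: str):
        """label: (H, W) array with 0 = background, 1 = farmland, 2 = road."""
        img = Image.fromarray(label.astype(np.uint8), mode="P")
        # Palette index -> RGB color: 0 black, 1 red, 2 green.
        img.putpalette([0, 0, 0, 255, 0, 0, 0, 255, 0])
        img.save(path)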
Seventh, target image segmentation model training
In practical implementation, the invention trains on sample track image datasets of rice and wheat harvesters respectively to obtain optimal U-Net models respectively, as shown in FIG. 4.
Taking the wheat harvester trajectory images as an example, 100 trajectory images are used for training and 20 as the verification set, with the same data division scheme as for the rice harvester trajectory data; the optimal target image segmentation model is selected on the verification set according to mean pixel accuracy (MPA).
Because one sample trajectory datum records only a few thousand GNSS track points while the generated trajectory image measures 640 × 480, only a few thousand of the 307200 pixels belong to the foreground classes. To address this imbalanced distribution between foreground and background classes during training, the focal loss (FL) is introduced. By dynamically adjusting a scaling factor during training, the focal loss reduces the weight of easily distinguished samples, allowing the model to quickly focus more attention on samples that are difficult to distinguish.
The focal loss formula is:

$$FL(p_t) = -\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t)$$

As the formula shows, the focal loss not only introduces the balancing factor $\alpha_t$ to mitigate the imbalance between positive and negative samples, but also uses the modulating factor $(1-p_t)^{\gamma}$ to control the contribution of easily predicted samples to the final loss.
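A common PyTorch realization of this loss (a sketch; the patent does not specify the values of alpha and gamma):

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        """Focal loss for per-pixel classification; targets holds integer class labels."""
        log_p = F.log_softmax(logits, dim=1)
        ce = F.nll_loss(log_p, targets, reduction="none")  # equals -log(p_t)
        p_t = torch.exp(-ce)
        return (alpha * (1.0 - p_t) ** gamma * ce).mean()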
Eighth, two-dimensional trajectory map visual feature extraction
And after the training of the target image segmentation model is finished, inputting the sample track image data set into the target image segmentation model to obtain the two-dimensional track map visual characteristics of the target sample track data output by the target image segmentation model.
In actual implementation, the semantic segmentation model U-Net is essentially a pixel-level classifier that judges the class of each pixel in an image, so the two-dimensional trajectory map visual features of each track point can be extracted using the constructed coordinate mapping relation and the trained U-Net model.
The invention adopts a standard U-Net architecture integrated with the Attention Gate mechanism, which concentrates more attention on the salient features of the target area, reduces attention to irrelevant areas, and extracts both global and detailed features of the track points through down-sampling and up-sampling.
The feature map output by the last up-sampling layer is concatenated with the Attention-Gate-encoded feature map to obtain the final feature map. Each feature map has the same size as the original input trajectory map. After the pixel features of each track point are located on the feature map via the coordinate mapping relation, every track point obtains a 64-dimensional visual feature vector. To improve feature-fusion efficiency without losing visual information, a fully connected (Linear) network layer transforms the 64-dimensional feature vector into a 30-dimensional feature vector, yielding the two-dimensional trajectory map visual features of the target sample trajectory data.
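Given the coordinate mapping, extracting the per-point visual features reduces to indexing the final feature map and projecting it, roughly as follows (a sketch with assumed tensor shapes):

    import torch
    import torch.nn as nn

    project = nn.Linear(64, 30)  # 64-dim pixel feature -> 30-dim visual feature

    def point_visual_features(feature_map: torch.Tensor, u: torch.Tensor, v: torch.Tensor):
        """feature_map: (64, H, W) U-Net output; u, v: pixel coordinates of n track points."""
        feats = feature_map[:, v, u].T  # advanced indexing -> (n, 64)
        return project(feats)           # (n, 30) visual feature per track point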
Ninth, multi-angle feature fusion
To make full use of the one-dimensional trajectory sequence motion features and the two-dimensional trajectory map visual features of the target sample trajectory data, multi-angle feature fusion is performed.
First, the one-dimensional motion features and the two-dimensional visual features obtained above are concatenated; after concatenation each track point is represented by a 38-dimensional vector, of which 8 dimensions are the motion features extracted from the one-dimensional trajectory data and the remaining 30 dimensions are the visual features extracted from the two-dimensional trajectory map.
For each track point, the feature fusion formula is:

$$f_i = \mathrm{concat}(m_i,\; v_i)$$

where $m_i$ denotes the one-dimensional motion feature representation of the track point and $v_i$ denotes its visual feature representation.
In this way, feature representations from different angles complement each other, so the concatenated representation carries richer semantic information. Feature fusion is then realized with a bidirectional long short-term memory network (BiLSTM).
A long short-term memory network (LSTM) consists of three gate units: the memory (input) gate and the forget gate respectively control which information is memorized or forgotten and passed to the next time step, while the output gate produces the output from the current cell state. A forward LSTM and a backward LSTM combine into a BiLSTM, which captures bidirectional semantic dependencies and exploits the timing information in trajectory data well to extract its spatio-temporal features.
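For reference, a standard formulation of the LSTM gate equations (a textbook form; the patent does not write them out) is:

$$\begin{aligned} f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\ i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\ o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\ \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\ c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\ h_t &= o_t \odot \tanh(c_t) \end{aligned}$$

where $x_t$ is the input at time step $t$, $h_t$ the hidden state, $c_t$ the cell state, and $\sigma$ the sigmoid function.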
As shown in fig. 5, for a trajectory sequence from time $1$ to time $n$, the forward LSTM and the backward LSTM obtain hidden states $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$, respectively, at each time step $i$. $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$ are concatenated, and after concatenation each track point is finally represented by a 512-dimensional vector, so that the information of the track point at each moment fully fuses the spatio-temporal features from different angles.
Tenth, linear classification
A linear transformation is realized by the fully connected network at the last layer of the architecture, so that the output dimension of each track point equals the number of prediction categories. Finally, the Softmax function is used to compute the track point's predicted probability $\hat{y}$ over the two position categories ('farmland' or 'road'):

$$\hat{y} = \mathrm{Softmax}(W h_i + b)$$

where $W$ and $b$ are parameters to be learned by the fully connected network layer and $h_i$ is the fused representation vector of the track point.
The above steps are repeated until the convergence of the target trajectory segmentation model reaches the expected target, completing the training.
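Steps nine and ten together can be sketched as the following PyTorch module (dimensions follow the description: 38-dim concatenated input, 512-dim bidirectional representation, two output classes; the hidden size and single-layer choice are assumptions):

    import torch
    import torch.nn as nn

    class FusionSegmenter(nn.Module):
        """BiLSTM feature fusion over a trajectory sequence plus a per-point classifier."""

        def __init__(self, in_dim=38, hidden=256, num_classes=2):
            super().__init__()
            # Bidirectional: 256 forward + 256 backward = 512 dims per track point.
            self.bilstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden, num_classes)

        def forward(self, motion_feats, visual_feats):
            # motion_feats: (B, n, 8); visual_feats: (B, n, 30)
            fused = torch.cat([motion_feats, visual_feats], dim=-1)  # (B, n, 38)
            h, _ = self.bilstm(fused)                                # (B, n, 512)
            return self.classifier(h).softmax(dim=-1)                # (B, n, 2)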
The agricultural machinery track field segmentation method based on motion and vision dual-feature fusion has the following technical effects:
(1) To remedy the lack of comprehensive spatio-temporal feature information in traditional representations of agricultural machinery trajectory data, this technique proposes a novel representation that fuses the one-dimensional trajectory sequence motion feature representation with the two-dimensional trajectory map visual feature representation, and then predicts the position category ('farmland' or 'road') of each track point in the trajectory sequence from the fused representation.
(2) The invention automatically generates the trajectory map and the trajectory map labels from the mapping relation between the track points' longitude/latitude coordinates and the image pixel coordinates, greatly reducing the cost of image annotation while greatly improving its accuracy.
(3) Exploiting the temporal correlation among track points within the same field, this technique uses a bidirectional long short-term memory network (BiLSTM) to capture the temporal dependencies among track points in both forward and backward directions, so that the spatio-temporal motion feature information of the trajectory data is fully utilized and the field segmentation accuracy of agricultural machinery driving trajectories is finally improved significantly.
In actual implementation, four classical field segmentation methods can be adopted as baseline models: Random Forest (RF), Decision Tree (DT), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and BiLSTM.
The input to each baseline model is the motion features. The experimental data are the self-collected rice and wheat harvester trajectory datasets. For the rice harvester data, the 100 trajectories are randomly divided into training, verification and test sets at a ratio of 8:1:1. For the wheat harvester data, the 150 trajectories are randomly divided in the same way.
On both trajectory datasets, the baseline models are compared with the field segmentation method based on the fusion of motion features (MF) and visual features (VF), i.e., BiLSTM + MF + VF; the final experimental results are shown in Tables 1 and 2.
According to the analysis of the tables 1 and 2, compared with the experimental result of a baseline model, the field division model of the agricultural machinery driving track based on the fusion of the motion characteristics of the one-dimensional track sequence and the visual characteristics of the two-dimensional track graph is improved in all evaluation indexes, which shows that the visual characteristics of the track points can well make up the defects of the traditional motion characteristic characterization.
On the rice harvester trajectory data set, compared with the BiLSTM baseline (BiLSTM + MF), the BiLSTM fused with visual features (BiLSTM + MF + VF) improves field segmentation accuracy by 3.65%, recall by 4.67%, and F1-score by 4.45%.
On the wheat harvester trajectory data, compared with the BiLSTM baseline (BiLSTM + MF), the BiLSTM fused with visual features (BiLSTM + MF + VF) improves field segmentation accuracy by 4.46%, recall by 9.6%, and F1-score by 9.4%.
Compared with existing agricultural machinery driving track field segmentation models, the model based on the fusion of one-dimensional track sequence motion features and two-dimensional track graph visual features is significantly improved in every respect.
Comparing Table 1 with Table 2 shows that field segmentation based on wheat harvester tracks is generally better than that based on rice harvester tracks. For example, for the BiLSTM + MF model, the F1-scores are 79.90% and 72.08% respectively, a difference of 7.82%. This is most likely due to data imbalance.
TABLE 1 different segmentation model experiment comparison results based on rice harvester trajectory data
(Table 1 is rendered as an image in the original publication and is not reproduced here.)
TABLE 2 comparison results of different segmentation model experiments based on wheat harvester trajectory data
(Table 2 is rendered as an image in the original publication and is not reproduced here.)
Specifically, in the wheat harvester trajectory data the ratio of field track points to road track points is 4:1, while in the rice harvester trajectory data the ratio is 1.4:1. Nevertheless, compared with the baseline models, the agricultural machinery driving track field segmentation model based on the fusion of one-dimensional track sequence motion features and two-dimensional track graph visual features performs similarly on the two data sets (the difference in F1-score is 2.87%), which indicates the reliability of the characterization that fuses motion features with visual features.
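For reference, the evaluation indexes above (accuracy/precision, recall, F1-score) can be computed in the usual way; this sketch uses scikit-learn and assumes the label convention field = 1, road = 0 (an assumption, not stated in the patent):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# y_true / y_pred: per-track-point labels, field = 1, road = 0 (assumed)
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
```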
The agricultural machinery track field dividing device based on the fusion of motion and vision dual features is described below; the device described below and the method described above may be referred to correspondingly.
Fig. 6 is a schematic structural diagram of the agricultural machinery track field dividing device based on the fusion of motion and vision dual features provided by the invention. Referring to fig. 6, the agricultural machinery track field dividing device based on the fusion of the motion and the vision dual features provided by the invention comprises:
the acquisition module 610 is used for acquiring agricultural machinery track data to be segmented, wherein the agricultural machinery track data to be segmented is used for indicating a running track of any agricultural machinery in a target time;
the first determining module 620 is configured to perform feature extraction on the agricultural machinery trajectory data to be segmented to obtain a one-dimensional trajectory sequence motion feature and a two-dimensional trajectory graph visual feature of the agricultural machinery trajectory data to be segmented;
a second determining module 630, configured to determine a position classification of the agricultural machinery trajectory data to be segmented based on the one-dimensional trajectory sequence motion feature and the two-dimensional trajectory diagram visual feature of the agricultural machinery trajectory data to be segmented.
The agricultural machine track field dividing device based on the fusion of the motion and vision dual features can effectively identify all farmland track points and road track points in a track sequence by automatically classifying all track points in the agricultural machine running track.
In some embodiments, the second determining module 630 is further configured to:
performing feature fusion on the motion features of the one-dimensional track sequence of the agricultural machinery track data to be segmented and the visual features of the two-dimensional track map to obtain fusion feature vectors;
and determining the position classification of the agricultural machinery track data to be segmented based on the fusion feature vector.
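One plausible realization of this fusion step, under the assumption that the fusion is a simple concatenation of the per-point motion and visual vectors (the text here does not bind it to this exact operation), is:

```python
import torch

def fuse_features(h_motion: torch.Tensor, h_visual: torch.Tensor) -> torch.Tensor:
    """Concatenate per-point motion and visual features into one fused vector.

    h_motion: (batch, seq_len, d_m)  one-dimensional track sequence motion features
    h_visual: (batch, seq_len, d_v)  two-dimensional track graph visual features
    returns:  (batch, seq_len, d_m + d_v) fused characterization vectors
    """
    return torch.cat([h_motion, h_visual], dim=-1)
```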
In some embodiments, the performing feature fusion on the motion feature of the one-dimensional trajectory sequence of the agricultural machinery trajectory data to be segmented and the visual feature of the two-dimensional trajectory graph to obtain a fusion feature vector includes:
inputting the motion characteristics of the one-dimensional track sequence of the agricultural machinery track data to be segmented and the visual characteristics of the two-dimensional track graph into a characteristic fusion layer of a target track segmentation model for characteristic fusion to obtain a fusion characterization vector;
the determining the position classification of the agricultural machinery trajectory data to be segmented based on the fusion feature vector comprises:
inputting the fusion characterization vector to a linear classification layer of the target track segmentation model for classification to obtain the position classification of the agricultural machinery track data to be segmented output by the target track segmentation model;
the target trajectory segmentation model is trained in the following way:
carrying out data cleaning on the acquired sample track data to obtain target sample track data;
performing feature extraction on the target sample trajectory data to obtain one-dimensional trajectory sequence motion features of the target sample trajectory data;
obtaining a sample track image dataset based on the target sample track data;
obtaining a two-dimensional locus diagram visual characteristic of the target sample locus data based on the sample locus image data set;
training an initial track segmentation model based on the one-dimensional track sequence motion characteristics and the two-dimensional track chart visual characteristics of the target sample track data to obtain the target track segmentation model.
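A condensed sketch of this training step (optimizer choice, learning rate, and epoch count are assumptions; `model` stands for any per-point sequence classifier that outputs unnormalized class scores):

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    """Minimal supervised training loop: per-point cross-entropy loss,
    with gradients propagated by backpropagation."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, labels in loader:     # feats: (B, T, D); labels: (B, T) long
            logits = model(feats)        # (B, T, 2) unnormalized class scores
            loss = loss_fn(logits.reshape(-1, 2), labels.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
```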
In some embodiments, said deriving a two-dimensional trajectory map visual feature of said target sample trajectory data based on said sample trajectory image dataset comprises:
inputting the sample track image data set into a target image segmentation model to obtain two-dimensional track map visual characteristics of the target sample track data output by the target image segmentation model;
the target image segmentation model is obtained based on the sample track image data set and the track icon label training, and the track icon label is generated based on the mapping relation between the longitude and latitude coordinates and the pixel coordinates of the track points in the sample track image data set.
In some embodiments, performing data cleaning on the acquired sample track data to obtain the target sample track data includes:
deleting a track point when the track point of the sample track data falls outside the target range;
when an abnormal track segment exists in the sample track data, cleaning the abnormal track segment while retaining the first track point of the abnormal track segment (a sketch of these rules follows this list);
the abnormal track segment includes at least one of:
a track segment whose consecutive track points share the same timestamp;
a track segment whose consecutive track points share the same longitude and latitude coordinates;
a track segment whose consecutive track points have different longitude and latitude coordinates but zero speed.
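A hedged sketch of these cleaning rules (field names such as `t`, `lon`, `lat`, `speed` and the `in_range` predicate are illustrative assumptions):

```python
def clean_track(points, in_range):
    """Drop out-of-range points, then collapse each abnormal segment
    (same timestamp, same position, or zero-speed drift over consecutive
    points) down to its first point."""
    points = [p for p in points if in_range(p)]      # rule 1: range check
    cleaned = []
    for p in points:
        if cleaned:
            q = cleaned[-1]
            same_time = p["t"] == q["t"]
            same_pos = (p["lon"], p["lat"]) == (q["lon"], q["lat"])
            drift = not same_pos and p["speed"] == 0 and q["speed"] == 0
            if same_time or same_pos or drift:
                continue                             # keep only the first point
        cleaned.append(p)
    return cleaned
```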
In some embodiments, the performing feature extraction on the target sample trajectory data to obtain a one-dimensional trajectory sequence motion feature of the target sample trajectory data includes:
obtaining a motion characteristic and a difference characteristic corresponding to the attribute characteristic based on the attribute characteristic of the target sample track data;
obtaining a one-dimensional track sequence motion characteristic of the target sample track data based on the motion characteristic and the difference characteristic corresponding to the attribute characteristic;
wherein the attribute characteristics include at least one of: time, longitude, latitude, speed, and azimuth.
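An illustrative sketch of extracting per-point motion and first-order difference features from the attribute features (time, longitude, latitude, speed, azimuth); the exact feature set is an assumption:

```python
def motion_features(points):
    """Build per-point feature vectors: the raw attributes plus first-order
    differences between consecutive points (delta time, position, speed,
    heading), which capture the local motion pattern."""
    feats = []
    prev = None
    for p in points:  # p = dict with keys t, lon, lat, speed, azimuth
        base = [p["t"], p["lon"], p["lat"], p["speed"], p["azimuth"]]
        if prev is None:
            diff = [0.0] * 5
        else:
            diff = [p[k] - prev[k] for k in ("t", "lon", "lat", "speed", "azimuth")]
        feats.append(base + diff)
        prev = p
    return feats
```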
Fig. 7 illustrates a physical structure diagram of an electronic device, and as shown in fig. 7, the electronic device may include: a processor (processor) 710, a communication Interface (Communications Interface) 720, a memory (memory) 730, and a communication bus 740, wherein the processor 710, the communication Interface 720, and the memory 730 communicate with each other via the communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform a method for agricultural track field segmentation based on motion and visual dual feature fusion, the method comprising:
acquiring agricultural machinery track data to be segmented, wherein the agricultural machinery track data to be segmented is used for indicating the running track of any agricultural machinery in target time;
performing characteristic extraction on the agricultural machinery track data to be segmented to obtain one-dimensional track sequence motion characteristics and two-dimensional track chart visual characteristics of the agricultural machinery track data to be segmented;
and determining the position classification of the agricultural machinery track data to be segmented based on the one-dimensional track sequence motion characteristics and the two-dimensional track graph visual characteristics of the agricultural machinery track data to be segmented.
In addition, when implemented in the form of software functional units and sold or used as an independent product, the logic instructions in the memory 730 may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product includes a computer program, the computer program can be stored on a non-transitory computer readable storage medium, when the computer program is executed by a processor, a computer can execute the method for dividing an agricultural track field based on the fusion of motion and visual dual features provided by the above methods, the method includes:
obtaining agricultural machinery track data to be segmented, wherein the agricultural machinery track data to be segmented is used for indicating the running track of any agricultural machinery in target time;
performing characteristic extraction on the agricultural machinery track data to be segmented to obtain one-dimensional track sequence motion characteristics and two-dimensional track chart visual characteristics of the agricultural machinery track data to be segmented;
and determining the position classification of the agricultural machinery track data to be segmented based on the one-dimensional track sequence motion characteristics and the two-dimensional track graph visual characteristics of the agricultural machinery track data to be segmented.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program being implemented by a processor to perform the method for agricultural track field segmentation based on motion and visual dual feature fusion provided by the above methods, the method including:
acquiring agricultural machinery track data to be segmented, wherein the agricultural machinery track data to be segmented is used for indicating the running track of any agricultural machinery in target time;
performing characteristic extraction on the agricultural machinery track data to be segmented to obtain one-dimensional track sequence motion characteristics and two-dimensional track chart visual characteristics of the agricultural machinery track data to be segmented;
and determining the position classification of the agricultural machinery track data to be segmented based on the one-dimensional track sequence motion characteristics and the two-dimensional track graph visual characteristics of the agricultural machinery track data to be segmented.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. An agricultural machinery track field segmentation method based on motion and vision dual-feature fusion is characterized by comprising the following steps:
acquiring agricultural machinery track data to be segmented, wherein the agricultural machinery track data to be segmented is used for indicating the running track of any agricultural machinery in target time;
performing characteristic extraction on the agricultural machinery track data to be segmented to obtain one-dimensional track sequence motion characteristics and two-dimensional track chart visual characteristics of the agricultural machinery track data to be segmented;
and determining the position classification of the agricultural machinery track data to be segmented based on the one-dimensional track sequence motion characteristics and the two-dimensional track chart visual characteristics of the agricultural machinery track data to be segmented.
2. The agricultural machinery track field segmentation method based on the fusion of motion and vision dual features as claimed in claim 1, wherein the determining the position classification of the agricultural machinery track data to be segmented based on the one-dimensional track sequence motion features and the two-dimensional track map vision features of the agricultural machinery track data to be segmented comprises:
performing feature fusion on the motion features of the one-dimensional track sequence of the agricultural machinery track data to be segmented and the visual features of the two-dimensional track map to obtain fusion feature vectors;
and determining the position classification of the agricultural machinery track data to be segmented based on the fusion feature vector.
3. The agricultural machinery track field segmentation method based on the fusion of motion and vision dual features as claimed in claim 2, wherein the feature fusion of the motion features of the one-dimensional track sequence and the vision features of the two-dimensional track map of the agricultural machinery track data to be segmented to obtain a fusion feature vector comprises:
inputting the motion characteristics of the one-dimensional track sequence of the agricultural machinery track data to be segmented and the visual characteristics of the two-dimensional track graph into a characteristic fusion layer of a target track segmentation model for characteristic fusion to obtain a fusion characterization vector;
the determining the position classification of the agricultural machinery trajectory data to be segmented based on the fusion feature vector comprises:
inputting the fusion characterization vector to a linear classification layer of the target track segmentation model for classification to obtain position classification of the agricultural machinery track data to be segmented output by the target track segmentation model;
the target trajectory segmentation model is trained in the following way:
carrying out data cleaning on the acquired sample track data to obtain target sample track data;
performing characteristic extraction on the target sample trajectory data to obtain one-dimensional trajectory sequence motion characteristics of the target sample trajectory data;
obtaining a sample track image dataset based on the target sample track data;
obtaining a two-dimensional trajectory chart visual feature of the target sample trajectory data based on the sample trajectory image dataset;
training an initial track segmentation model based on the one-dimensional track sequence motion characteristics and the two-dimensional track chart visual characteristics of the target sample track data to obtain the target track segmentation model.
4. The method for segmenting the agricultural machinery track field based on the fusion of the motion and the visual dual features, as claimed in claim 3, wherein the obtaining the two-dimensional track graph visual features of the target sample track data based on the sample track image dataset comprises:
inputting the sample track image data set into a target image segmentation model to obtain two-dimensional track map visual characteristics of the target sample track data output by the target image segmentation model;
wherein the target image segmentation model is trained based on the sample track image data set and track map labels, and the track map labels are generated based on the mapping relation between the longitude and latitude coordinates of the track points in the sample track image data set and the pixel coordinates.
5. The agricultural machinery track field segmentation method based on the fusion of the motion and the vision dual features as claimed in claim 3, wherein the step of performing data cleaning on the acquired sample track data to obtain target sample track data comprises:
deleting the track points under the condition that the track points of the sample track data exceed the target range;
under the condition that an abnormal track segment exists in the sample track data, cleaning the abnormal track segment, and reserving a first track point in the abnormal track segment;
wherein the abnormal track segment includes at least one of:
a track segment whose consecutive track points share the same timestamp;
a track segment whose consecutive track points share the same longitude and latitude coordinates;
a track segment whose consecutive track points have different longitude and latitude coordinates but zero speed.
6. The agricultural machinery track field segmentation method based on the fusion of motion and vision dual features as claimed in claim 3, wherein the feature extraction of the target sample track data to obtain the one-dimensional track sequence motion features of the target sample track data comprises:
obtaining a motion characteristic and a difference characteristic corresponding to the attribute characteristic based on the attribute characteristic of the target sample track data;
obtaining a one-dimensional track sequence motion characteristic of the target sample track data based on the motion characteristic and the difference characteristic corresponding to the attribute characteristic;
wherein the attribute characteristics include at least one of: time, longitude, latitude, speed, and azimuth.
7. An agricultural machinery orbit field dividing device based on motion and vision dual-feature fusion is characterized by comprising:
the system comprises an acquisition module, a judgment module and a display module, wherein the acquisition module is used for acquiring agricultural machinery track data to be segmented, and the agricultural machinery track data to be segmented is used for indicating the running track of any agricultural machinery in target time;
the first determining module is used for extracting the characteristics of the agricultural machinery track data to be segmented to obtain the one-dimensional track sequence motion characteristics and the two-dimensional track map visual characteristics of the agricultural machinery track data to be segmented;
and the second determination module is used for determining the position classification of the agricultural machinery track data to be segmented based on the one-dimensional track sequence motion characteristics and the two-dimensional track map visual characteristics of the agricultural machinery track data to be segmented.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method for agricultural track field segmentation based on fusion of motion and visual dual features according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for segmenting an agricultural track field based on motion and visual dual feature fusion according to any one of claims 1 to 6.
CN202210839109.9A 2022-07-18 2022-07-18 Agricultural machinery track field dividing method and device based on motion and vision dual-feature fusion Active CN114998744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210839109.9A CN114998744B (en) 2022-07-18 2022-07-18 Agricultural machinery track field dividing method and device based on motion and vision dual-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210839109.9A CN114998744B (en) 2022-07-18 2022-07-18 Agricultural machinery track field dividing method and device based on motion and vision dual-feature fusion

Publications (2)

Publication Number Publication Date
CN114998744A true CN114998744A (en) 2022-09-02
CN114998744B CN114998744B (en) 2022-10-25

Family

ID=83022384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210839109.9A Active CN114998744B (en) 2022-07-18 2022-07-18 Agricultural machinery track field dividing method and device based on motion and vision dual-feature fusion

Country Status (1)

Country Link
CN (1) CN114998744B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106095104A (en) * 2016-06-20 2016-11-09 电子科技大学 Continuous gesture path dividing method based on target model information and system
CN111436216A (en) * 2018-11-13 2020-07-21 北京嘀嘀无限科技发展有限公司 Method and system for color point cloud generation
WO2020191642A1 (en) * 2019-03-27 2020-10-01 深圳市大疆创新科技有限公司 Trajectory prediction method and apparatus, storage medium, driving system and vehicle
CN110163888A (en) * 2019-05-30 2019-08-23 闽江学院 A kind of novel motion segmentation model quantity detection method
CN111665861A (en) * 2020-05-19 2020-09-15 中国农业大学 Trajectory tracking control method, apparatus, device and storage medium
US20220058403A1 (en) * 2020-12-03 2022-02-24 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus of estimating road condition, and method and apparatus of establishing road condition estimation model
CN112905576A (en) * 2021-03-02 2021-06-04 中国农业大学 Method and system for determining farmland and road based on agricultural machinery operation track
CN113641773A (en) * 2021-08-13 2021-11-12 中国农业大学 Agricultural machinery behavior visualization marking method for driving track
CN114021627A (en) * 2021-10-25 2022-02-08 国家计算机网络与信息安全管理中心 Abnormal track detection method and device fusing LSTM and scene rule knowledge
CN114442623A (en) * 2022-01-20 2022-05-06 中国农业大学 Agricultural machinery operation track field segmentation method based on space-time diagram neural network
CN114707567A (en) * 2022-02-08 2022-07-05 高德软件有限公司 Trajectory classification method, trajectory classification model training method and computer program product
CN114677507A (en) * 2022-03-11 2022-06-28 吉林化工学院 Street view image segmentation method and system based on bidirectional attention network
CN114463724A (en) * 2022-04-11 2022-05-10 南京慧筑信息技术研究院有限公司 Lane extraction and recognition method based on machine vision
CN114758252A (en) * 2022-06-16 2022-07-15 南开大学 Image-based distributed photovoltaic roof resource segmentation and extraction method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
冀福华: "Research on Key Technologies of Big Data Processing for Agricultural Machinery Field Operations and Platform Construction", China Masters' Theses Full-text Database *
吴才聪: "Construction of a BeiDou-based Big Data *** for Agricultural Machinery Operations", Agricultural Equipment Engineering and Mechanization *
辛德奎: "A BeiDou/GPS Dual-mode Working-Condition Monitoring *** for Field Operation Vehicles", China Masters' Theses Full-text Database *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115876184A (en) * 2023-02-22 2023-03-31 武汉依迅北斗时空技术股份有限公司 Agricultural machinery operation field distinguishing method and device based on trajectory analysis and electronic equipment
CN116533529A (en) * 2023-05-12 2023-08-04 湖州东尼新能源有限公司 Intelligent control method and system for ultrasonic welding PC (polycarbonate) sheet
CN116533529B (en) * 2023-05-12 2023-09-29 湖州东尼新能源有限公司 Intelligent control method and system for ultrasonic welding PC (polycarbonate) sheet
CN116797897A (en) * 2023-07-07 2023-09-22 中国人民解放军国防科技大学 Detection model generation and infrared small target detection method based on space-time feature fusion
CN116797897B (en) * 2023-07-07 2024-03-12 中国人民解放军国防科技大学 Detection model generation and infrared small target detection method based on space-time feature fusion

Also Published As

Publication number Publication date
CN114998744B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN114998744B (en) Agricultural machinery track field dividing method and device based on motion and vision dual-feature fusion
CN108846835B (en) Image change detection method based on depth separable convolutional network
CN106897681B (en) Remote sensing image contrast analysis method and system
Ren et al. YOLOv5s-M: A deep learning network model for road pavement damage detection from urban street-view imagery
CN110956207B (en) Method for detecting full-element change of optical remote sensing image
CN113887515A (en) Remote sensing landslide identification method and system based on convolutional neural network
CN115035361A (en) Target detection method and system based on attention mechanism and feature cross fusion
CN112464766A (en) Farmland automatic identification method and system
CN110659601A (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
CN115496891A (en) Wheat lodging degree grading method and device
CN110909656B (en) Pedestrian detection method and system integrating radar and camera
CN114943902A (en) Urban vegetation unmanned aerial vehicle remote sensing classification method based on multi-scale feature perception network
CN114998251A (en) Air multi-vision platform ground anomaly detection method based on federal learning
CN113989287A (en) Urban road remote sensing image segmentation method and device, electronic equipment and storage medium
CN117830788B (en) Image target detection method for multi-source information fusion
CN115830469A (en) Multi-mode feature fusion based landslide and surrounding ground object identification method and system
Cheng et al. Multi-scale Feature Fusion and Transformer Network for urban green space segmentation from high-resolution remote sensing images
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
Zhou et al. ASSD-YOLO: a small object detection method based on improved YOLOv7 for airport surface surveillance
Li et al. Learning to holistically detect bridges from large-size vhr remote sensing imagery
CN113569911A (en) Vehicle identification method and device, electronic equipment and storage medium
CN116625388A (en) Unstructured road map generation method, device, equipment and medium
CN115311867B (en) Tunnel scene positioning method and device, computer equipment and storage medium
CN114419338B (en) Image processing method, image processing device, computer equipment and storage medium
CN116052110A (en) Intelligent positioning method and system for pavement marking defects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant