CN115280960B - Combine harvester steering control method based on field vision SLAM
- Publication number
- CN115280960B CN115280960B CN202210798209.1A CN202210798209A CN115280960B CN 115280960 B CN115280960 B CN 115280960B CN 202210798209 A CN202210798209 A CN 202210798209A CN 115280960 B CN115280960 B CN 115280960B
- Authority
- CN
- China
- Prior art keywords
- combine harvester
- pose
- points
- image
- steering control
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01D—HARVESTING; MOWING
- A01D41/00—Combines, i.e. harvesters or mowers combined with threshing devices
- A01D41/12—Details of combines
- A01D41/127—Control or measuring arrangements specially adapted for combines
- A01D41/1278—Control or measuring arrangements specially adapted for combines for automatic steering
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01B—SOIL WORKING IN AGRICULTURE OR FORESTRY; PARTS, DETAILS, OR ACCESSORIES OF AGRICULTURAL MACHINES OR IMPLEMENTS, IN GENERAL
- A01B69/00—Steering of agricultural machines or implements; Guiding agricultural machines or implements on a desired track
- A01B69/007—Steering or guiding of agricultural vehicles, e.g. steering of the tractor to keep the plough in the furrow
- A01B69/008—Steering or guiding of agricultural vehicles, e.g. steering of the tractor to keep the plough in the furrow automatic
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention provides a combine harvester steering control method based on field vision SLAM. Based on the images acquired by a binocular camera, an industrial personal computer divides the field image into a harvested area and an unharvested area along a navigation datum line. A front-end visual odometer performs targeted feature extraction on field stubble clusters and analyses crop information to estimate the pose of the combine harvester, and a Ceres-based back-end optimization refines the poses received at different moments to obtain the optimized pose of the combine harvester. The steering adjustment angle of the combine harvester is then obtained from the deviation between the optimized pose and the navigation datum line, realizing steering control of the combine harvester. The invention makes the image easier for the industrial personal computer to resolve and makes the pose estimate of the combine harvester body more accurate, thereby improving the steering control precision of the combine harvester.
Description
Technical Field
The invention belongs to the technical field of intelligent agricultural machinery, and particularly relates to a combine harvester steering control method based on field vision SLAM.
Background
With the advent of precision agriculture, agriculture in China is gradually developing in an intelligent direction, and automatic navigation of agricultural vehicles has become an important research branch. The dominant navigation technologies currently include satellite navigation, machine vision, inertial navigation, and laser and radar systems.
Visual sensors are comparatively cheap, and with the continued exploration of unmanned driving in recent years, high-precision visual navigation technology has steadily matured. Using vision alone in an agricultural machinery navigation system has limitations, however: the field operating environment is complex and changeable, weather conditions have a strong influence, and the navigation datum line and motion trajectory carry a certain error. A purely visual scheme for field operation can extract the feature information of the current crop row in real time but is easily affected by lighting, so an inertial navigation module is combined for compensation to improve positioning accuracy.
The prior art discloses a steering control method for a wheeled combine harvester that uses a binocular camera and inertial navigation for simultaneous localization and real-time pose output. An image-processing and pose-estimation SLAM algorithm, designed for the complex and changeable farmland environment with its uniform colour and strong illumination effects, estimates the harvester state, while a nonlinear Kalman filter builds and optimizes a nonlinear Gaussian system model for the small frame-to-frame changes in the image ahead of the harvester and outputs an accurate pose. The method requires no manual line-walking before operation and can assist the driving of the harvester, but frames are dropped at higher travel speeds, which affects real-time pose output and hence navigation accuracy.
Disclosure of Invention
To address the defects of the prior art, the invention provides a combine harvester steering control method based on field vision SLAM, which improves the steering control precision of the combine harvester.
The present invention achieves the above technical object by the following means.
A combine harvester steering control method based on field vision SLAM specifically comprises the following steps:
The industrial personal computer estimates the pose of the combine harvester from the images acquired by the binocular camera using the front-end visual odometer; a Ceres-based back-end optimization refines the harvester poses received at different moments to obtain the optimized pose of the combine harvester;
The steering adjustment angle of the combine harvester is obtained from the deviation between the optimized pose and the navigation datum line, realizing steering control of the combine harvester.
In a further technical scheme, the front-end visual odometer estimates the pose of the combine harvester by dividing the field image into a harvested region and an unharvested region along the navigation datum line, extracting feature points from each region separately, and solving the pose from the extracted image feature points.
In a further technical scheme, for the harvested region:
The image of the harvested region is divided into 24 blocks using a quadtree; a region-of-interest (ROI) search region is set in each block for stubble clusters, and adaptive FAST corner extraction is performed on all ROI search regions to obtain the stubble-cluster FAST key points, thereby screening the stubble-cluster feature points.
In a further technical scheme, the stubble cluster is determined as follows:
A number of marked pixel points are selected in each ROI search region; a pixel point P is chosen, and the pixels on the circle of adaptive radius R centred on P are extracted; if T marked pixel points lie on this circle, a stubble cluster is judged to exist in the ROI search region.
Still further, for the unharvested area:
A division depth limit is set:
C = 4^x
wherein C is the number of feature points to be extracted and x is the division depth;
The image of the unharvested region is divided into 24 blocks using the quadtree, an ROI search region is set in each block for the crops, and adaptive FAST corner extraction is performed on all ROI search regions to obtain the crop FAST key points, thereby screening the crop feature points.
In a further technical scheme, when C is lower than a set minimum value, a point P′ with pixel brightness I_p is selected and a detection circle of radius 5.5 pixels is drawn around P′ as centre; the 28 pixel points on the detection circle are extracted, and when any one of the pixel pairs (11, 26), (12, 25), (13, 24), (4, 19), (5, 18) and (6, 17) satisfies the constraint
|I_1 − I_2| > t,
P′ is judged to be a crop feature point;
wherein I_1 and I_2 are the brightnesses of the pixel pair, and t is a set threshold.
In a further technical scheme, the Ceres-based back-end optimization refines the combine harvester poses received at different moments: the error of the pre-integration result of the information acquired by the inertial measurement unit and the 3D-to-2D reprojection error of the image are constrained and solved, yielding the optimized pose of the combine harvester relative to the world coordinate system.
In a further technical scheme, the back-end optimization further comprises loop-closure processing: if the current frame is similar to a key frame in the map and the similarity exceeds a threshold, it is judged that the combine harvester has returned to the position of that key frame in the map.
In a further technical scheme, the estimated pose of the combine harvester is tracked as follows:
It is first judged whether the input frame and the previous frame satisfy a constant-velocity assumption. If so, the pose is estimated with the constant-velocity motion model and feature point matching between the input frame and the previous frame is computed; if the number of matched points exceeds a set value, tracking is judged successful, otherwise tracking fails;
If the constant-velocity assumption does not hold, the pose is estimated from the reference frame, i.e. the current frame is matched against a reference frame in the gallery by minimising the 3D-to-2D reprojection error of the image; if the number of matched points exceeds the set value, tracking is judged successful. Otherwise tracking fails and relocalisation is used to estimate the pose, i.e. the feature point matching between the two frames is recomputed; if tracking still fails, the current information is judged lost.
The beneficial effects of the invention are as follows:
(1) The optimized pose of the combine harvester, obtained from the information acquired by the binocular camera and the inertial measurement unit, is used as the input of steering control, so the method has strong anti-interference capability and works accurately in complex environments;
(2) The SLAM algorithm performs targeted feature extraction on field stubble clusters and analyses crop information. The field image is divided into a harvested region and an unharvested region along the navigation datum line. For the harvested region, the image is divided into 24 blocks by a quadtree, an ROI search region is set in each block for stubble clusters, and adaptive FAST corner extraction over all ROI search regions yields the stubble-cluster FAST key points, screening the stubble-cluster feature points. For the unharvested region, a division depth limit is first set when quadtree division is used; when quadtree division is not used, a point P′ with pixel brightness I_p is selected, a detection circle of radius 5.5 pixels is drawn around P′, the 28 pixels on the circle are extracted, and P′ is judged to be a crop feature point when one of the pixel pairs (11, 26), (12, 25), (13, 24), (4, 19), (5, 18) or (6, 17) satisfies the constraint condition. This feature extraction method makes the image easier for the industrial personal computer to resolve and the pose estimate of the combine harvester body more accurate, improving measurement precision;
(3) The steering adjustment angle of the combine harvester is obtained from the deviation between the optimized pose and the navigation datum line, improving the reliability and interference-suppression capability of the navigation system.
Drawings
FIG. 1 is a flow chart of the steering control of the combine harvester based on field vision SLAM according to the present invention;
FIG. 2 is a flow chart of a field vision SLAM algorithm according to the present invention;
FIG. 3 is a flow chart of feature point extraction of the front-end visual odometer of the present invention;
Fig. 4 is a schematic diagram of crop FAST feature point extraction according to the present invention;
FIG. 5 is a schematic diagram of the coordinate transformation according to the present invention;
FIG. 6 is a flow chart of the tracking of the pose of the combine harvester according to the invention;
FIG. 7 is a block diagram of a combine steering control system according to the present invention.
Detailed Description
The invention will be further described with reference to the drawings and the specific embodiments, but the scope of the invention is not limited thereto.
As shown in fig. 1, in the combine harvester steering control method based on field vision SLAM, the information acquired by the binocular camera and the inertial measurement unit is transmitted to the industrial personal computer, which obtains the optimized pose of the combine harvester with the SLAM algorithm and then adjusts the steering according to the pose deviation information, so that the edge of the header moves closer to the crop boundary line and the effect of full-width harvesting is achieved.
FIG. 2 is the SLAM algorithm flow chart, aimed at autonomous positioning and navigation of the combine harvester moving in an unknown environment (realized by the binocular camera and the inertial measurement unit). The field front-end visual odometer estimates the pose of the combine harvester from the images shot by the binocular camera; the Ceres-based back end receives the poses acquired by the front-end visual odometer at different moments together with the loop detection information and optimizes them, obtaining the optimized pose of the combine harvester. Loop detection mainly judges, through the binocular camera, whether a previously visited position has been reached, and performs loop-closure processing.
FIG. 3 is a flow chart of feature point extraction by the front-end visual odometer. The images acquired by the left and right cameras are processed separately, and the field image is divided into a harvested area and an unharvested area along the navigation datum line. The image of the harvested region is divided into 24 blocks by a quadtree, an ROI search region is set in each block for stubble clusters, and the contrast D of the image is obtained as
D = Σ δ(i, j)² · P_δ(i, j)
wherein δ(i, j) is the gray-level difference between adjacent pixels of the image and P_δ(i, j) is the distribution probability of adjacent pixels whose gray-level difference is δ;
Adaptive FAST corner extraction is performed on the ROI search regions selected in the 24 image blocks, with the adaptive radius designed as
R = αD + β
wherein α and β are adaptive parameters set according to the stubble;
Because the stubble clusters are highly self-similar, a number of pixel points are selected and marked in the ROI search region of each block. A pixel point P is chosen, and the pixels on the circle of adaptive radius R centred on P are extracted; if T of the marked pixel points lie on this circle (the value of T is empirical), a stubble cluster is judged to exist in the ROI search region. The FAST key points of the stubble clusters are then obtained by the FAST corner extraction algorithm, screening the stubble-cluster feature points.
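The marked-pixel test above can be sketched as follows. The circle rasterization and the helper names (`adaptive_radius`, `circle_pixels`, `is_stubble_cluster`) are illustrative assumptions, since the patent does not specify how the circle of radius R is sampled:

```python
import math

def adaptive_radius(D, alpha, beta):
    """R = alpha * D + beta: adaptive radius from the image contrast D."""
    return alpha * D + beta

def circle_pixels(cx, cy, r):
    """Integer pixel coordinates approximately on a circle of radius r
    around (cx, cy), via an oversampled angular sweep."""
    pts = set()
    n = max(8, int(4 * math.pi * r))  # oversample so no circle pixel is missed
    for k in range(n):
        a = 2 * math.pi * k / n
        pts.add((cx + round(r * math.cos(a)), cy + round(r * math.sin(a))))
    return pts

def is_stubble_cluster(marked, center, r, T):
    """Judge a stubble cluster when at least T marked pixels lie on the
    adaptive-radius circle around the candidate pixel P."""
    return len(circle_pixels(center[0], center[1], r) & marked) >= T
```

Given a set of marked pixel coordinates, `is_stubble_cluster` implements the "T marked points on the circle" criterion directly; T remains an empirical tuning value as stated in the text.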
Because feature point extraction in the unharvested region is difficult, an excessive quadtree division level would leave too few feature points and cause tracking loss, so the invention sets a division depth limit:
C = 4^x
wherein C is the number of feature points to be extracted and x is the division depth;
Crop feature points are then extracted following the method used for the harvested region.
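The depth limit C = 4^x (each quadtree division multiplies the number of blocks by four) can be sketched as a small helper; `min_depth_for` is a hypothetical utility name illustrating how a required feature count bounds the division depth:

```python
def features_at_depth(x):
    """C = 4**x: number of quadtree leaf blocks (hence candidate feature
    points) available at division depth x."""
    return 4 ** x

def min_depth_for(required):
    """Smallest division depth x whose capacity 4**x reaches the required
    feature count (a hypothetical helper, not named in the patent)."""
    x = 0
    while features_at_depth(x) < required:
        x += 1
    return x
```

For example, extracting at least 24 feature points needs depth 3 (4^2 = 16 is insufficient, 4^3 = 64 suffices), and capping the depth at that value avoids over-dividing the unharvested region.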
When the number of feature points C extracted in the unharvested region falls below the set minimum (an empirical value), the quadtree division is cancelled; a point P′ with pixel brightness I_p is selected, a detection circle of radius 5.5 pixels is drawn around P′ as centre, a threshold t is set, and the 28 pixel points on the detection circle are extracted, as shown in fig. 4. When any one of the pixel pairs (11, 26), (12, 25), (13, 24), (4, 19), (5, 18) and (6, 17) satisfies the constraint
|I_1 − I_2| > t,
P′ is judged to be a crop feature point, wherein I_1 and I_2 are the brightnesses of the pixel pair.
After the stubble-cluster feature points and crop feature points are screened, the main direction of each screened feature point is calculated by the gray centroid method, and the pixels around the feature point are described and stored with a BRIEF descriptor; binocular matching (an existing technique) is then performed to obtain uniformly distributed feature points for the current frame.
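The gray centroid step can be sketched as below, using ORB-style image moments over a square brightness patch centred on the key point; the patch size and layout are assumptions, as the patent does not detail them:

```python
import math

def gray_centroid_direction(patch):
    """Main direction of a key point from the image moments m10 and m01
    of a square brightness patch centred on the key point (gray centroid
    method): the angle of the offset from the patch centre to its
    brightness centroid."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = sum(patch[y][x] * (x - cx) for y in range(h) for x in range(w))
    m01 = sum(patch[y][x] * (y - cy) for y in range(h) for x in range(w))
    return math.atan2(m01, m10)
```

A patch brighter on its right edge yields a direction of 0 radians; one brighter along its bottom edge yields π/2, matching the centroid-offset intuition.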
After the front-end visual odometer extracts the image feature points, the industrial personal computer preprocesses the information acquired by the inertial measurement unit, including denoising and filtering; because the sensors sample at different frequencies, the inertial measurement data and the images acquired by the binocular camera must be aligned in time and space.
The extracted image feature points are initialized and the pose is solved. Pose solving is motion estimation between frames of the binocular camera. Fig. 5 is a coordinate transformation schematic: the extracted image feature points (3D points) are A, B and C, with corresponding points a, b and c in the binocular camera image (2D points), where the lower-case points are the projections of the upper-case points on the camera imaging plane; the verification point pair is recorded as D–d, and the optical centre of the binocular camera as O. The coordinates of A, B and C in the world coordinate system are known. Since the triangle correspondences ΔOab ∼ ΔOAB, ΔObc ∼ ΔOBC and ΔOac ∼ ΔOAC hold, the law of cosines gives:
OA² + OB² − 2·OA·OB·cos⟨a, b⟩ = AB² (1)
There are similar relations for the other two triangles:
OB² + OC² − 2·OB·OC·cos⟨b, c⟩ = BC² (2)
OA² + OC² − 2·OA·OC·cos⟨a, c⟩ = AC² (3)
Dividing each of formulas (1), (2) and (3) by OC², and writing x = OA/OC, y = OB/OC:
x² + y² − 2xy·cos⟨a, b⟩ = AB²/OC² (4)
y² + 1 − 2y·cos⟨b, c⟩ = BC²/OC² (5)
x² + 1 − 2x·cos⟨a, c⟩ = AC²/OC² (6)
Writing v = AB²/OC², uv = BC²/OC², wv = AC²/OC²:
x² + y² − 2xy·cos⟨a, b⟩ − v = 0 (7)
y² + 1 − 2y·cos⟨b, c⟩ − uv = 0 (8)
x² + 1 − 2x·cos⟨a, c⟩ − wv = 0 (9)
Moving v in equation (7) to the right-hand side and substituting into equations (8) and (9):
(1 − u)y² − ux² − 2y·cos⟨b, c⟩ + 2uxy·cos⟨a, b⟩ + 1 = 0 (10)
(1 − w)x² − wy² − 2x·cos⟨a, c⟩ + 2wxy·cos⟨a, b⟩ + 1 = 0 (11)
Knowing the image positions of the 2D points, the three cosines cos⟨a, b⟩, cos⟨b, c⟩ and cos⟨a, c⟩ are known; meanwhile u = BC²/AB² and w = AC²/AB², so u and w can be calculated from the coordinates of A, B and C in the world coordinate system, and their values are unchanged after transformation into the camera coordinate system. The unknowns x and y in the formulas change as the binocular camera moves; the most probable solution is selected by checking against the verification point, after which the 3D coordinates of A, B and C in the camera coordinate system follow from x = OA/OC and y = OB/OC. The rotation and translation for one frame of the binocular image are then solved from the point correspondences in the two coordinate systems, and the pose of the combine harvester is obtained once all the binocular images have been solved.
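The cosine-law relations above can be checked numerically for any synthetic configuration; the sample points below are arbitrary, with the optical centre O at the origin of the camera frame:

```python
import math

def norm(p):
    """Distance from the optical centre O (origin) to point p."""
    return math.sqrt(sum(c * c for c in p))

def cos_angle(p, q):
    """Cosine of the angle at O between the rays Op and Oq."""
    return sum(a * b for a, b in zip(p, q)) / (norm(p) * norm(q))

def residual_eq1(A, B):
    """OA^2 + OB^2 - 2*OA*OB*cos<a,b> - AB^2: zero for any A, B by the
    law of cosines, i.e. equation (1)."""
    return (norm(A) ** 2 + norm(B) ** 2
            - 2 * norm(A) * norm(B) * cos_angle(A, B)
            - math.dist(A, B) ** 2)

def residual_eq7(A, B, C):
    """x^2 + y^2 - 2xy*cos<a,b> - v with x = OA/OC, y = OB/OC and
    v = AB^2/OC^2: equation (7), i.e. equation (1) divided by OC^2."""
    x, y = norm(A) / norm(C), norm(B) / norm(C)
    v = math.dist(A, B) ** 2 / norm(C) ** 2
    return x * x + y * y - 2 * x * y * cos_angle(A, B) - v
```

Both residuals vanish (up to floating-point rounding) for arbitrary point triples, confirming that equations (4)–(9) are just the normalized law-of-cosines relations the P3P-style solver exploits.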
The back-end optimization is essentially a state estimation problem: upon receiving the combine harvester pose from the front-end visual odometer and the pre-integration result of the inertial measurement unit data, the pre-integration error and the 3D-to-2D reprojection error of the image are constrained and solved by nonlinear least squares, yielding the optimized pose of the combine harvester relative to the world coordinate system.
Loop-closure processing in the back-end optimization mainly searches the key frame library to judge whether the current frame is similar to a key frame in the map (stored in the industrial personal computer); if the similarity exceeds a threshold, the combine harvester is judged to have returned to the position of that key frame. When a closed loop is detected, the current state is passed to the back end for optimization.
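The keyframe similarity search can be sketched as below. The descriptor form and the cosine-similarity measure are assumptions, since the patent only states that the similarity must exceed a threshold:

```python
import math

def similarity(d1, d2):
    """Cosine similarity between two frame descriptors (an assumed measure)."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def detect_loop(current, keyframes, threshold):
    """Index of the first stored key frame similar enough to the current
    frame, else None; a detected loop is passed to the back end."""
    for i, kf in enumerate(keyframes):
        if similarity(current, kf) >= threshold:
            return i
    return None
```

In practice the descriptors would be bag-of-words vectors built from the BRIEF features described earlier; the sketch only shows the threshold comparison.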
During image feature point extraction by the front-end visual odometer, field feature points are difficult to extract and the vision system easily loses tracking, so the combine harvester pose tracking method of FIG. 6 is adopted to improve the accuracy of pose acquisition. For each image processed by the industrial personal computer, it is first judged whether the input frame (the currently processed image) and the previous frame (stored in the key frame library) satisfy a constant-velocity assumption. If so, the pose is estimated with the constant-velocity motion model and feature point matching between the input frame and the previous frame is computed; if the number of matched points exceeds a set value, tracking is judged successful, otherwise it fails. If the constant-velocity assumption does not hold, the pose is estimated from the reference frame, i.e. the current frame is matched against a reference frame in the gallery by minimising the 3D-to-2D reprojection error of the image; if the number of matched points exceeds the set value, tracking succeeds. Otherwise relocalisation is attempted next, recomputing the feature point matches between the two frames; if tracking still fails (the matched-point count does not exceed the set value), the current information is judged lost and a restart is required. If any of the three pose estimates tracks successfully, the pose information is recorded, the three pose estimates are updated, and invalid feature points are eliminated. The pose tracking information is used to judge the motion trajectory and pose transformation of the harvester from that moment up to the current moment.
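The three-stage fallback of FIG. 6 can be sketched as a small decision routine; the match counts are assumed to be computed elsewhere, and the cascade order (constant-velocity, then reference-frame, then relocalisation) is one plausible reading of the flow chart:

```python
def track_pose(has_const_velocity, n_const, n_ref, n_reloc, min_matches):
    """Return which pose-estimation stage succeeded for the input frame,
    or 'lost' (restart required) when all three stages fail."""
    if has_const_velocity and n_const > min_matches:
        return "const_velocity"   # constant-velocity motion model
    if n_ref > min_matches:
        return "reference_frame"  # match against a gallery reference frame
    if n_reloc > min_matches:
        return "relocalised"      # recompute matches between the two frames
    return "lost"
```

On success the caller would record the pose, refresh the three estimators, and cull invalid feature points, as the text describes.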
The processes of estimating the pose from the constant-velocity motion model, estimating the pose from the reference frame, and relocalising the pose are existing techniques.
Fig. 7 is a block diagram of the combine harvester steering control system. First, the image information acquired by the binocular camera is preprocessed to obtain the navigation datum line; next, the navigation datum line and the optimized pose output of the combine harvester are fed to the navigation controller, which computes the deviation to obtain the steering adjustment angle of the combine harvester; the steering-wheel rotation angle output by the navigation controller is input to the steering controller of the wheeled harvester, realizing steering control of the rear wheels of the combine harvester; finally, the rear-wheel steering angle measured by an angle sensor is used as feedback, closing the loop of the overall steering control.
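The deviation-to-angle step can be sketched as follows. The patent only states that the steering adjustment angle is derived from the deviation between the optimized pose and the navigation datum line; the proportional law, the gains and the actuator limit below are illustrative assumptions:

```python
import math

def steering_adjustment(lateral_dev, heading_dev,
                        k_lat=0.5, k_head=1.0,
                        max_angle=math.radians(30)):
    """Steering adjustment angle (rad) from the lateral and heading
    deviation of the optimised pose relative to the navigation datum
    line, clamped to an assumed actuator range.  All gains and the
    proportional form are illustrative, not taken from the patent."""
    angle = k_lat * lateral_dev + k_head * heading_dev
    return max(-max_angle, min(max_angle, angle))
```

The measured rear-wheel angle would then be compared against this command to close the feedback loop shown in Fig. 7.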
The examples are preferred embodiments of the present invention, but the present invention is not limited to the above-described embodiments, and any obvious modifications, substitutions or variations that can be made by one skilled in the art without departing from the spirit of the present invention are within the scope of the present invention.
Claims (5)
1. A combine harvester steering control method based on field vision SLAM is characterized in that:
The industrial personal computer estimates the pose of the combine harvester from the images acquired by the binocular camera using the front-end visual odometer; a Ceres-based back-end optimization refines the harvester poses received at different moments to obtain the optimized pose of the combine harvester;
The steering adjustment angle of the combine harvester is obtained from the deviation between the optimized pose and the navigation datum line, realizing steering control of the combine harvester;
Estimating the pose of the combine harvester by a front-end visual odometer, namely dividing a field image into a harvested region and a non-harvested region according to a navigation datum line, respectively extracting characteristic points of the harvested region and the non-harvested region, and carrying out pose resolving on the extracted image characteristic points;
For the harvested region: dividing the image of the harvested region into 24 blocks by using a quadtree, setting an interested search region of each block of image for a stubble cutting cluster, carrying out self-adaptive FAST corner extraction on all the interested search regions to obtain FAST key points of the stubble cutting cluster, and realizing screening of the characteristic points of the stubble cutting cluster;
For the unharvested areas: setting a division depth limit:
C=4x
Wherein: c is the number of feature points required to be extracted, and x is the depth of division;
Dividing the image of the unharvested region into 24 blocks using a quadtree, setting a search region of interest for crops in each image block, and performing adaptive FAST corner extraction on all search regions of interest to obtain FAST key points of the crops, thereby screening the crop feature points;
When C falls below a set minimum value, the quadtree division is cancelled; a point P' with pixel brightness I_p is selected, a detection circle is formed with P' as the centre and a radius of 5.5 pixels, 28 pixel points on the detection circle are extracted, and when the pixel pairs 11 and 26, 12 and 25, 13 and 24, 4 and 19, 5 and 18, and 6 and 17 satisfy the constraint condition:
|I_1 − I_2| > t
P' is judged to be a crop feature point;
wherein: I_1 and I_2 are the brightnesses of a pixel pair, and t is a set threshold.
2. The combine harvester steering control method according to claim 1, wherein the stubble-cluster determination method comprises:
marking a plurality of pixel points in each search region of interest, selecting a pixel point P, extracting the pixel points within an adaptive radius R centred on P, and, if T marked pixel points exist among them, judging that a stubble cluster lies within the range of the search region of interest.
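The counting test of claim 2 can be sketched as below. The marking criterion itself is outside the claim, so `marked` is taken as a given boolean grid; the function name is illustrative.

```python
# Hedged sketch of the stubble-cluster test of claim 2: count marked pixels
# inside the adaptive radius R around P and compare against the threshold T.

def is_stubble_cluster(marked, px, py, R, T):
    """marked: 2-D boolean grid of labelled pixels; (px, py): sampled point P.
    True when at least T marked pixels lie within radius R of P."""
    h, w = len(marked), len(marked[0])
    count = 0
    for y in range(max(0, py - R), min(h, py + R + 1)):
        for x in range(max(0, px - R), min(w, px + R + 1)):
            if (x - px) ** 2 + (y - py) ** 2 <= R * R and marked[y][x]:
                count += 1
    return count >= T
```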
3. The combine harvester steering control method according to claim 1, wherein the poses of the combine harvester received at different moments are optimized by the Ceres-based back-end optimization as follows: the error of the pre-integration result of the information acquired by the inertial measurement unit and the 3D-to-2D reprojection error of the image are taken as constraints and jointly calculated, so as to obtain the optimized pose of the combine harvester relative to the world coordinate system.
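In practice the joint cost of claim 3 would be handed to Ceres (a C++ solver) as residual blocks; the Python sketch below only illustrates the two residual types the claim names, with weights, robust losses, and Jacobians omitted. Function names are illustrative.

```python
import numpy as np

# Minimal sketch of the combined back-end cost of claim 3: a pinhole
# 3D-to-2D reprojection residual plus IMU pre-integration residuals.

def reprojection_residual(K, pose_R, pose_t, point_3d, obs_2d):
    """Reprojection error of one 3-D landmark against its 2-D observation.
    K: 3x3 intrinsics; (pose_R, pose_t): world-to-camera transform."""
    p_cam = pose_R @ point_3d + pose_t        # world -> camera frame
    uv = (K @ p_cam)[:2] / p_cam[2]           # pinhole projection
    return uv - obs_2d

def combined_cost(reproj_residuals, imu_residuals):
    """Unweighted sum of squared reprojection and IMU residuals."""
    return sum(float(r @ r) for r in reproj_residuals) + \
           sum(float(r @ r) for r in imu_residuals)
```

A solver would minimize `combined_cost` over the poses; at the optimum the reprojection residuals of a consistent trajectory approach zero.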
4. The combine harvester steering control method according to claim 3, wherein the back-end optimization further comprises a loop-closure process: if the current frame is similar to a key frame in the map and the similarity exceeds a threshold, it is judged that the combine harvester has returned to the position of that key frame in the map.
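The loop-closure check of claim 4 can be sketched as a similarity search over keyframes. The patent does not specify the similarity measure, so cosine similarity between bag-of-words-style descriptor histograms is assumed here; the function name is illustrative.

```python
import numpy as np

# Hedged sketch of the loop-closure test of claim 4: find the keyframe most
# similar to the current frame and accept it only above a threshold.

def find_loop_closure(current_hist, keyframe_hists, threshold=0.9):
    """Return the index of the matching keyframe, or None if no score passes."""
    def cosine(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else 0.0
    scores = [cosine(current_hist, kf) for kf in keyframe_hists]
    if not scores:
        return None
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```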
5. The combine harvester steering control method according to claim 1, wherein the estimated pose of the combine harvester is tracked as follows:
judging whether the motion between the input frame and the previous frame is at constant velocity; if so, estimating the pose with a constant-velocity motion model and matching feature points between the input frame and the previous frame; if the number of matched points is larger than a set value, tracking is judged successful, otherwise tracking fails;
if the motion is not at constant velocity, estimating the pose from a reference frame, namely matching the current frame with a reference frame in the gallery by minimizing the 3D-to-2D reprojection error of the image; if the number of matched points is larger than a set value, tracking is judged successful; otherwise tracking fails and the pose is estimated by relocalization, namely recalculating the feature-point matching relation between the two frames; if tracking still fails, the current information is judged lost.
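The three-stage fallback of claim 5 can be sketched as a control-flow skeleton. The matchers are hypothetical callables standing in for the actual feature-matching steps; only the cascade order follows the claim.

```python
# Sketch of the tracking fallback of claim 5: constant-velocity model,
# then reference-frame matching, then relocalization, then loss.

def track_frame(match_const_velocity, match_reference, relocalize, min_matches):
    """Each argument is a callable returning a match count, or None when its
    precondition does not hold. Returns the stage that succeeded, or 'lost'."""
    n = match_const_velocity()
    if n is not None and n > min_matches:
        return 'constant_velocity'
    n = match_reference()
    if n is not None and n > min_matches:
        return 'reference_frame'
    n = relocalize()
    if n is not None and n > min_matches:
        return 'relocalized'
    return 'lost'
```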
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210798209.1A CN115280960B (en) | 2022-07-08 | 2022-07-08 | Combined harvester steering control method based on field vision SLAM |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115280960A (en) | 2022-11-04
CN115280960B (en) | 2024-06-07
Family
ID=83823061
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116918593B (en) * | 2023-09-14 | 2023-12-01 | 众芯汉创(江苏)科技有限公司 | Binocular vision unmanned image-based power transmission line channel tree obstacle monitoring system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6285930B1 (en) * | 2000-02-28 | 2001-09-04 | Case Corporation | Tracking improvement for a vision guidance system |
US6721453B1 (en) * | 2000-07-10 | 2004-04-13 | The Board Of Trustees Of The University Of Illinois | Method and apparatus for processing an image of an agricultural field |
CN107264621A (en) * | 2017-06-15 | 2017-10-20 | 驭势科技(北京)有限公司 | Vehicle preview distance computational methods, device, medium and rotating direction control method |
CN108364344A (en) * | 2018-02-08 | 2018-08-03 | 重庆邮电大学 | A kind of monocular real-time three-dimensional method for reconstructing based on loopback test |
CN110308733A (en) * | 2019-08-07 | 2019-10-08 | 四川省众望科希盟科技有限公司 | A kind of micro robot kinetic control system, method, storage medium and terminal |
CN111983639A (en) * | 2020-08-25 | 2020-11-24 | 浙江光珀智能科技有限公司 | Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU |
Non-Patent Citations (1)
Title |
---|
Research on Visual-Inertial SLAM Based on ORB-GMS; Yuan Junpeng; China Master's Theses Full-text Database, Information Science and Technology; pp. I138-1081 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110703777B (en) | Combined navigation method and system of combine harvester based on Beidou and vision | |
CN110243372B (en) | Intelligent agricultural machinery navigation system and method based on machine vision | |
CN102368158B (en) | Navigation positioning method of orchard machine | |
CN111739063A (en) | Electric power inspection robot positioning method based on multi-sensor fusion | |
CN109460029A (en) | Livestock and poultry cultivation place inspection mobile platform and its control method | |
CN115407357B (en) | Low-harness laser radar-IMU-RTK positioning mapping algorithm based on large scene | |
CN111521195B (en) | Intelligent robot | |
CN113778081B (en) | Orchard path identification method and robot based on laser radar and vision | |
CN115731268A (en) | Unmanned aerial vehicle multi-target tracking method based on visual/millimeter wave radar information fusion | |
CN115280960B (en) | Combined harvester steering control method based on field vision SLAM | |
CN112729318A (en) | AGV fork truck is from moving SLAM navigation of fixed position | |
WO2024120269A1 (en) | Position recognition method for fusing point cloud map, motion model and local feature | |
CN113947616A (en) | Intelligent target tracking and loss rechecking method based on hierarchical perceptron | |
CN116823812B (en) | Silage corn field life detection method | |
CN115451965B (en) | Relative heading information detection method for transplanting system of transplanting machine based on binocular vision | |
CN112432653A (en) | Monocular vision inertial odometer method based on point-line characteristics | |
CN116117807A (en) | Chilli picking robot and control method | |
CN115773747A (en) | High-precision map generation method, device, equipment and storage medium | |
CN115294562A (en) | Intelligent sensing method for operation environment of plant protection robot | |
CN113554705A (en) | Robust positioning method for laser radar in changing scene | |
CN111191513A (en) | Method for estimating position of mobile robot based on scene size analysis | |
CN114485612B (en) | Route generation method and device, unmanned operation vehicle, electronic equipment and storage medium | |
Zhu et al. | Development of a combined harvester navigation control system based on visual simultaneous localization and mapping-inertial guidance fusion | |
CN118168545A (en) | Positioning navigation system and method for weeding robot based on multi-source sensor fusion | |
Guo et al. | Advanced Stereo Vision-Based Solutions for Header and Crop Height Detection in Combine Harvesters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |