CN111144406B - Adaptive target ROI positioning method for solar panel cleaning robot - Google Patents

Adaptive target ROI positioning method for solar panel cleaning robot

Info

Publication number
CN111144406B
CN111144406B (application CN201911332440.6A)
Authority
CN
China
Prior art keywords
target
image
robot
frame
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911332440.6A
Other languages
Chinese (zh)
Other versions
CN111144406A (en)
Inventor
杨大卫
张文强
张传法
李馨蕾
陶玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201911332440.6A priority Critical patent/CN111144406B/en
Publication of CN111144406A publication Critical patent/CN111144406A/en
Application granted granted Critical
Publication of CN111144406B publication Critical patent/CN111144406B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3453: Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3476: Special cost functions, i.e. other than distance or default speed limit of road segments, using point of interest [POI] information, e.g. a route passing visible POIs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00: Energy generation through renewable energy sources
    • Y02E10/50: Photovoltaic [PV] energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the technical field of machine vision and image processing, and in particular relates to an adaptive target ROI (region of interest) positioning method for a solar panel cleaning robot. The invention exploits the fact that the position of a target can change only a limited amount between two consecutive frames: the detection result of the previous frame is fused with the motion information from the sensor to compensate for the position change of the target, and the region in which the target is likely to appear in the current image is estimated. This narrows the detection range, avoids both the heavy computation of scanning the whole image for the target and the interference introduced by useless background regions, concentrates processing on the effective region, and allows the target to be detected in real time, efficiently and accurately. The invention solves the problems of heavy computation, poor real-time performance, frequent interference and easy target loss that a cleaning robot faces on a solar panel due to the wide detection range, complex background and continuous motion; it greatly improves detection efficiency and stability, so that the cleaning robot can complete fully automatic cleaning of solar panels rapidly, efficiently and accurately.

Description

Adaptive target ROI positioning method for solar panel cleaning robot
Technical Field
The invention belongs to the technical field of machine vision image processing, and particularly relates to a method for adaptively positioning a target ROI by a solar panel cleaning robot.
Background
With the rapid development of technology and the economy, the worldwide demand for energy keeps growing, and the energy problem is one that every country is concerned about and eager to solve. Solar energy has the advantages of high efficiency, environmental friendliness and low cost, and has developed rapidly and been widely applied in recent years. Because a photovoltaic power generation system is exposed outdoors for long periods, dust and foreign matter accumulated on the solar panels greatly reduce the transmittance of sunlight and thus lower the power generation efficiency, so periodic dust removal and cleaning of solar panels is an important task.
Current solar panel dust removal and cleaning methods fall into three categories. The first is manual cleaning. It requires hiring and training a large number of professionals; the restrictions of the working environment combined with the specificity of the task lead to low personnel utilization, high maintenance cost, dead corners in the cleaning area and incomplete cleaning. The second is the rail-mounted cleaning robot. Such machines require additional rails to be installed to assist the robot's operation, which adds significant equipment and maintenance cost and offers poor flexibility. The third is the autonomous cleaning robot, which can identify the working area, plan a driving path and complete the cleaning task by itself.
Traditional autonomous robot navigation schemes rely mainly on radar sensors. These schemes are reliable and mature, but the cost is high (lidar prices range from tens of thousands to hundreds of thousands), the structure is complex, installation requirements are strict, and some radars are heavy and unsuitable for use on solar panels made of special materials. By contrast, cameras are cheap, low-power, small and easy to install. However, because of power consumption and cost constraints, the processor used on a solar panel cleaning robot has limited performance, while the working area has a wide detection range, a complex background and continuous motion, making it difficult to process large amounts of image data in real time. Realizing autonomous navigation with vision technology is therefore an urgent problem to be solved for solar panel cleaning robots.
Disclosure of Invention
The invention aims to provide an adaptive target ROI (region of interest) positioning method for a solar panel cleaning robot that can detect targets in real time, efficiently and accurately, and thereby realize autonomous navigation.
During operation, a vision-based solar panel cleaning robot must detect targets in the image in real time (such as panel edges, sign boards and two-dimensional codes), judge the drivable area, adjust its pose, and then analyze and execute path planning, completing autonomous navigation for solar panel cleaning. The target may appear at any position in the frame, so the robot would have to scan the full view to detect it. However, full-image scanning requires a huge amount of computation, and the background is complex and constantly moving, so the real-time and accuracy requirements are difficult to meet. The adaptive target ROI positioning method provided by the invention exploits the fact that the position of the target can change only a limited amount between two consecutive frames: the detection result of the previous frame is fused with the sensor motion information to compensate for the position change of the target, and the region in which the target is likely to appear in the current image is estimated. This narrows the detection range, avoids the heavy computation of scanning the whole image and the interference introduced by useless background regions, focuses on the effective region, and allows the target to be detected in real time, efficiently and accurately, thereby realizing autonomous navigation.
The invention provides an adaptive target ROI positioning method for a solar panel cleaning robot, which comprises the following specific steps.
Firstly, carrying out full-image detection by using a target detection algorithm, and screening a result to obtain the position of a target; the method comprises the following steps:
step 101: the system captures a frame of image at a fixed interval or on command; if it is not the first frame image of the system and a detection result already exists, jump to step 201;
step 102: call the algorithm corresponding to the target to be detected, perform full-image detection, and screen the results to obtain the target position. For example, the position of a detected target B0 is recorded as curt_B0 = (x0, y0, x1, y1), where (x0, y0) and (x1, y1) are the coordinates of the upper-left and lower-right corners of the target. If no target is detected in this frame, return to step 101 and wait to detect the next frame image;
step 103: after the system has processed the current image, update pre_B0 = curt_B0 and return to step 101 to wait for the next frame image;
secondly, estimating the region of interest of the target in the current image from the target position in the previous frame image, as follows:
step 201: the position of target B0 in the previous frame image is pre_B0 = (x0, y0, x1, y1). Expanding pre_B0 by 1.2 times gives ROI_B'0 = (x'0, y'0, x'1, y'1) = (x0 - δx, y0 - δy, x1 + δx, y1 + δy), where δx = 0.6×(x1 - x0) and δy = 0.6×(y1 - y0) are intermediate variables. Compared with pre_B0, the expanded ROI_B'0 increases the probability that B0 is contained in the current frame picture, which improves the success rate and speed of detection (a minimal sketch of this expansion follows below);
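For illustration, the expansion in step 201 can be written as a short Python helper (a minimal sketch; the function name and the tuple box representation are illustrative assumptions, not part of the original text):

```python
def expand_roi(pre_box, ratio=0.6):
    """Step 201: expand the previous detection box pre_B0.

    pre_box is (x0, y0, x1, y1); dx = 0.6*(x1-x0), dy = 0.6*(y1-y0),
    giving ROI_B'0 = (x0-dx, y0-dy, x1+dx, y1+dy)."""
    x0, y0, x1, y1 = pre_box
    dx = ratio * (x1 - x0)
    dy = ratio * (y1 - y0)
    return (x0 - dx, y0 - dy, x1 + dx, y1 + dy)

# e.g. expand_roi((100, 100, 200, 180)) -> (40.0, 52.0, 260.0, 228.0)
```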
thirdly, modeling the motion state of the robot and calculating the position change of the robot during the interval between the two frame images, specifically:
step 301: although the target detected by the solar panel cleaning robot is stationary, the robot's own motion causes the position of the target in the image to change. As shown in FIG. 2, assume the center point P of target B0 maps to point p1 on image1 at time t1; after the robot rotates by R and translates by t, it maps to point p2 on image2 at time t2.
From pre_B0 = (x0, y0, x1, y1), compute p1 = ((x0+x1)/2, (y0+y1)/2); p2, the center of the location box of target B0 on image2, is the quantity to be solved;
step 302: the position change of target B0 between the two images depends on the rotation R and translation t of the robot during the interval [t1 → t2]. The IMU (inertial measurement unit) is sampled at a fixed interval Δt to obtain the robot's linear accelerations a_x, a_y, a_z along the three axes and its angular velocities w_x, w_y, w_z about the three axes, from which the rotation matrix R and translation vector t are solved; this imposes a constraint on the motion so that the image motion can be compensated and the target ROI accurately located. The invention describes the robot motion with only a few motion parameters, whose storage and computation cost is essentially negligible. Each IMU measurement is the sum of the true value, a drift value b and a Gaussian error η; the measured instantaneous angular velocity ω̃(t) and linear acceleration ã(t) at time t are, respectively:

ω̃(t) = ω(t) + b^g(t) + η^g(t)

ã(t) = R(t)^T (wa(t) - wg) + b^a(t) + η^a(t)

where the superscripts g and a denote the angular-velocity and linear-acceleration terms respectively, the left label w denotes the world coordinate system (the left label w appearing hereinafter has the same meaning, e.g. wa(t), wv(t), wp(t)), and R represents the rotation.
Step 303: assuming Δt is the IMU sampling time interval, since the sampling frequency of the IMU is relatively high, above 100HZ, Δt is very short, so it can be assumed that the angular velocity and linear acceleration within Δt time remain unchanged. Combining a physical motion model formula:
Figure BDA0002330018550000037
Figure BDA0002330018550000038
(superscript. Represents derivative), the rotation angle R at time t+Δt, the speed v and the position p are calculated:
R(t+Δt)=R(t)Exp(ω(t)Δt)
wv(t+Δt)=wv(t)+wa(t)Δt
wp(t+Δt)=wp(t)+wa(t)Δt+0.5×wa(t)Δt 2
step 304: combining the angular velocity and acceleration measurement formulas described in step 302, the complete formulas for the rotation R, velocity v and position p at time t+Δt are (the superscript d denotes a discrete value):

R(t+Δt) = R(t)·Exp((ω̃(t) - b^g(t) - η^gd(t))·Δt)

wv(t+Δt) = wv(t) + wg·Δt + R(t)·(ã(t) - b^a(t) - η^ad(t))·Δt

wp(t+Δt) = wp(t) + wv(t)·Δt + 0.5·wg·Δt² + 0.5·R(t)·(ã(t) - b^a(t) - η^ad(t))·Δt²

where wg is the gravitational acceleration in the world coordinate system.
step 305: calculate the motion state of the robot at the time of the second frame image. Since the IMU has a higher sampling frequency than the camera, IMU data from multiple instants must be accumulated. Assume that j - i IMU samples are collected between the first frame image at time t1 and the second frame at time t2; accumulating these j - i samples from time t1 gives the state values at time t2:

R_j = R_i · ∏_{k=i}^{j-1} Exp((ω̃_k - b^g_k - η^gd_k)·Δt)

wv_j = wv_i + wg·Δt_ij + Σ_{k=i}^{j-1} R_k·(ã_k - b^a_k - η^ad_k)·Δt

wp_j = wp_i + Σ_{k=i}^{j-1} [ wv_k·Δt + 0.5·wg·Δt² + 0.5·R_k·(ã_k - b^a_k - η^ad_k)·Δt² ]

where the subscript j corresponds to time t2, since both refer to the same instant, and Δt_ij = (j - i)·Δt;
step 306: calculate the change in the robot motion state within the interval [t1 → t2] between the two frame images. The difference between the state quantities at t1 and t2, i.e. the total rotation ΔR_ij, velocity change Δv_ij and displacement Δp_ij accumulated by the IMU between i and j, is calculated as follows (a sketch of this accumulation is given after this step):

ΔR_ij = R_i^T · R_j

Δv_ij = R_i^T · (wv_j - wv_i - wg·Δt_ij)

Δp_ij = R_i^T · (wp_j - wp_i - wv_i·Δt_ij - 0.5·wg·Δt_ij²)
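Steps 302-306 can be illustrated with the following Python sketch, which accumulates one gyro/accelerometer sample at a time (numpy only; the function names, the default bias values and the convention that gravity and the initial velocity are re-introduced only when the deltas are applied to the world-frame state are assumptions of this sketch, not prescribed by the original text):

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues' formula: Exp(phi) maps a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate_imu(samples, dt, bg=np.zeros(3), ba=np.zeros(3)):
    """Accumulate the j - i IMU samples taken between t1 and t2.

    samples: iterable of (gyro, accel) pairs, each a length-3 sequence
             (w_x, w_y, w_z) and (a_x, a_y, a_z), bias-uncorrected.
    dt:      fixed sampling interval Δt; bg, ba: gyro/accel drift estimates.
    Returns (dR, dv, dp): the accumulated rotation, velocity change and
    displacement expressed in the frame of the first sample (cf. ΔR_ij,
    Δv_ij, Δp_ij); gravity and the initial velocity are added back when
    the deltas are applied to the world-frame state, as in steps 305-306."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for gyro, accel in samples:
        a = dR @ (np.asarray(accel, dtype=float) - ba)
        dp = dp + dv * dt + 0.5 * a * dt ** 2                          # position update
        dv = dv + a * dt                                               # velocity update
        dR = dR @ exp_so3((np.asarray(gyro, dtype=float) - bg) * dt)   # rotation update
    return dR, dv, dp
```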
fourthly, performing motion estimation and compensation on the region of interest according to the position change of the robot; the method comprises the following steps:
step 401: establish the conversion relation between the world three-dimensional coordinate system and the image coordinate system. The real world is a three-dimensional space; suppose a point P is expressed as (x_w, y_w, z_w) in the world coordinate system and as (u, v) in the corresponding image coordinate system. The conversion passes through "world coordinate system => camera coordinate system => image physical coordinate system => image pixel coordinate system", where the positional relationship between the camera coordinate system and the world coordinate system is described by a rotation matrix R and a translation vector t, f denotes the camera focal length, d_x and d_y denote the physical size of a single pixel along the two image axes, and Z_c denotes the depth information. The calculation formula is:

Z_c·[u, v, 1]^T = K·[R | t]·[x_w, y_w, z_w, 1]^T,  with  K = [[f/d_x, 0, u_0], [0, f/d_y, v_0], [0, 0, 1]]

where (u_0, v_0) is the principal point of the image.
step 402: merge the robot pose change (ΔR_ij, Δv_ij, Δp_ij) calculated in step 305 into the camera model to estimate the position p2 = (u2, v2) of the target in the second-frame image coordinate system (K is the intrinsic matrix calibrated in advance and is not described in detail here); the target center is back-projected with its depth, transformed by the inter-frame motion and re-projected (a sketch follows below):

Z_c2·[u2, v2, 1]^T = K·ΔR_ij^T·(Z_c1·K^{-1}·[u1, v1, 1]^T - Δp_ij)
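As an illustration of step 402, the following Python sketch back-projects the previous box center, applies the relative motion and re-projects it; it assumes the IMU frame coincides with the camera frame and that the target depth Z_c is known or roughly approximated, neither of which is spelled out in the original text:

```python
import numpy as np

def predict_center(p1_uv, depth, K, dR, dp):
    """Predict the box center p2 = (u2, v2) in the second frame.

    p1_uv:  (u1, v1), the box center from the previous frame (step 301).
    depth:  assumed depth Z_c of the target along the optical axis.
    K:      3x3 camera intrinsic matrix calibrated in advance.
    dR, dp: relative rotation/translation between the two frames (step 306),
            treated here as the camera motion expressed in the first frame."""
    u1, v1 = p1_uv
    X_c1 = depth * (np.linalg.inv(K) @ np.array([u1, v1, 1.0]))  # back-project
    X_c2 = dR.T @ (X_c1 - dp)          # same point, second camera frame
    uvw = K @ X_c2                     # pinhole re-projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```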
fifthly, correcting the region of interest and detecting the accurate position of the target, wherein the method comprises the following steps:
step 501: calculate the pixel offset Δu = u2 - u1, Δv = v2 - v1 and apply it to ROI_B'0 = (x'0, y'0, x'1, y'1), obtaining the new ROI estimate ROI_B″0 = (x″0, y″0, x″1, y″1) = (x'0+Δu, y'0+Δv, x'1+Δu, y'1+Δv);
step 502: determine whether the new region of interest ROI_B″0 exceeds the image boundary, and adjust it if it does. With the image size h×w, the judgment and adjustment are: x″0 = max(x″0, 0); y″0 = max(y″0, 0); x″1 = min(x″1, w); y″1 = min(y″1, h), as sketched in code below;
step 503: execute the corresponding detection algorithm within the ROI_B″0 range of the current picture to obtain the accurate position curt_B0 = (x0, y0, x1, y1) of target B0; if no target is detected, clear pre_B0 and jump to step 102 to perform full-image detection;
step 504: after the system has processed the current image, update pre_B0 = curt_B0 and return to step 101.
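A minimal Python sketch of the offset compensation and boundary clamping in steps 501-502 (the function name and argument layout are assumptions of this sketch):

```python
def correct_roi(roi, p1, p2, img_w, img_h):
    """Shift the expanded ROI by the predicted pixel offset and clamp it
    to the image (steps 501-502): Δu = u2 - u1, Δv = v2 - v1."""
    x0, y0, x1, y1 = roi
    du, dv = p2[0] - p1[0], p2[1] - p1[1]
    x0, y0, x1, y1 = x0 + du, y0 + dv, x1 + du, y1 + dv
    # keep the box inside the w x h image
    x0, y0 = max(x0, 0), max(y0, 0)
    x1, y1 = min(x1, img_w), min(y1, img_h)
    return (x0, y0, x1, y1)
```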
The adaptive dynamic ROI positioning method designed by the invention uses the historical detection result and the sensor information to compensate for the motion of the target on the image. It avoids the heavy computation of scanning the whole image and the detection of useless regions, solves the problems of low computation speed, poor real-time performance and easy target loss while driving that the cleaning robot faces on a solar panel because of the wide detection range and complex background, and greatly improves detection efficiency and stability.
Drawings
FIG. 1 is a schematic diagram of the algorithm of the present invention for locating a target ROI.
FIG. 2 is a schematic diagram showing the change of an object in an image during the movement process of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the following is a specific implementation procedure, and the present invention will be further described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic illustration of an embodiment. Sub-figure A is the image captured by the camera at time t1; a target detection algorithm scans the full image and the target position is obtained by screening. At time t2 a second frame image is captured, and the possible position of the target is estimated in sub-figure B using the result from the first frame. In sub-figure C, the motion state of the robot is calculated, motion estimation and compensation of the image are performed, and the position from sub-figure B is corrected. Finally, the target detection algorithm is called within the corrected region to trace out the target, as shown in sub-figure D.
The first step, carrying out full-image detection by using a target detection algorithm, screening the result to obtain the position of the target, and implementing the following steps:
step 101: the robot system captures a frame of image and records its time stamp t, either at a fixed interval or on command; if it is not the first frame image of the system and a detection result exists for the previous frame, go to step 201.
Step 102: call the algorithm corresponding to the target to be detected to perform full-image detection, and screen the results to obtain the position-box information of the target to be detected. In this embodiment, a sign detection algorithm is called and the target sign B0 is screened out according to a preset threshold (the box height and width lie between 100 and 400 pixels, and the box center lies within (100, w) × (100, h)), where h and w are the image height and width; its position is recorded as curt_B0 = (x0, y0, x1, y1), with (x0, y0) and (x1, y1) the coordinates of the upper-left and lower-right corners of the target. If nothing is detected in this frame or all results are filtered out, go back to step 101 and wait to detect the next frame image (a screening sketch is given below).
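A small Python sketch of the screening in step 102 of this embodiment (the helper name and the exact reading of the size and center thresholds are assumptions of this sketch):

```python
def screen_detections(boxes, img_w, img_h,
                      min_side=100, max_side=400, margin=100):
    """Keep detection boxes whose sides lie in [min_side, max_side] and whose
    center lies inside (margin, img_w) x (margin, img_h).
    boxes: list of (x0, y0, x1, y1) candidates from the detector."""
    kept = []
    for x0, y0, x1, y1 in boxes:
        bw, bh = x1 - x0, y1 - y0
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        if min_side <= bw <= max_side and min_side <= bh <= max_side \
                and margin < cx < img_w and margin < cy < img_h:
            kept.append((x0, y0, x1, y1))
    return kept
```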
Step 103: after the system processes the current image information and related programs (such as executing path planning and position moving), updating
Figure BDA0002330018550000051
And then returns to step 101 to await detection of the next frame image.
The second step: estimate the region of interest of the target in the current image from the target position in the previous frame image. The implementation steps are as follows:
step 201: the position of target B0 in the previous frame image is pre_B0 = (x0, y0, x1, y1). Expanding pre_B0 by 1.2 times gives ROI_B'0 = (x'0, y'0, x'1, y'1) = (x0 - δx, y0 - δy, x1 + δx, y1 + δy), where δx = 0.6×(x1 - x0) and δy = 0.6×(y1 - y0). Compared with pre_B0, the expanded ROI_B'0 increases the probability that B0 is contained in the current frame picture, which improves the success rate and speed of detection.
Thirdly, modeling the motion state of the robot, calculating the position change of the robot in the interval time of two frames of images, and implementing the following steps:
step 301: although the target detected by the solar panel cleaning robot is stationary, the robot's own motion causes the position of the target in the image to change. Motion in three-dimensional space is composed of three axes, so the robot motion is described by translation along the three axes and rotation about the three axes, six degrees of freedom in total. As shown in FIG. 2, the center point P of target B0 maps to point p1 on image1 at time t1; after the robot rotates by R and translates by t, it maps to point p2 on image2 at time t2. From pre_B0 = (x0, y0, x1, y1), compute p1 = ((x0+x1)/2, (y0+y1)/2);
Step 302: target B 0 The position change in the two images depends on t 1 →t 2 ]The rotation R and translation t of the robot in the gap. In the present embodiment, linear accelerations a of the robot in three axis directions are output by using the IMU inertial measurement unit x ,a y ,a z And angular velocities w in three directions x ,w y ,w z And solving a rotation matrix R and a translation vector t, so that a constraint is applied to the motion to compensate the local motion of the image, and the accurate positioning of the target ROI is achieved. The invention describes the motion of the robot by using only a few motion parameters, and the storage space and the operation time occupied by the parameters are basically negligible. The IMU is sampled at a timing interval delta t, and the obtained data is the sum of a true value, a drift value b and a Gaussian error eta:
Figure BDA0002330018550000061
the formula of the instantaneous angular velocity and linear acceleration of the movement is as follows:
Figure BDA0002330018550000062
Figure BDA0002330018550000063
step 303: this embodiment sets the IMU sampling interval to Δt = 10 ms. Combined with the physical motion model

wv̇(t) = wa(t)

wṗ(t) = wv(t)

the rotation R, velocity v and position p at time t+Δt are calculated as:

R(t+Δt) = R(t)·Exp(ω(t)·Δt)

wv(t+Δt) = wv(t) + wa(t)·Δt

wp(t+Δt) = wp(t) + wv(t)·Δt + 0.5·wa(t)·Δt²
step 304: the complete rotation R, velocity v and position p at time t+Δt are calculated in combination with the angular velocity and acceleration measurement formulas of step 302, where the superscript d denotes a discrete value:

R(t+Δt) = R(t)·Exp((ω̃(t) - b^g(t) - η^gd(t))·Δt)

wv(t+Δt) = wv(t) + wg·Δt + R(t)·(ã(t) - b^a(t) - η^ad(t))·Δt

wp(t+Δt) = wp(t) + wv(t)·Δt + 0.5·wg·Δt² + 0.5·R(t)·(ã(t) - b^a(t) - η^ad(t))·Δt²
step 305: calculate the motion state of the robot at the time of the second frame image. Since the IMU has a higher sampling frequency than the camera, IMU data from multiple instants must be accumulated. Assume that j - i IMU samples are collected between the first frame image at time t1 and the second frame at time t2; accumulating these j - i samples from time t1 gives the state values at time t2:

R_j = R_i · ∏_{k=i}^{j-1} Exp((ω̃_k - b^g_k - η^gd_k)·Δt)

wv_j = wv_i + wg·Δt_ij + Σ_{k=i}^{j-1} R_k·(ã_k - b^a_k - η^ad_k)·Δt

wp_j = wp_i + Σ_{k=i}^{j-1} [ wv_k·Δt + 0.5·wg·Δt² + 0.5·R_k·(ã_k - b^a_k - η^ad_k)·Δt² ]
step 306: calculate the change in the robot motion state within the interval [t1 → t2] between the two frame images. The difference between the state quantities at t1 and t2, i.e. the total rotation ΔR_ij, velocity change Δv_ij and displacement Δp_ij accumulated by the IMU between i and j, is calculated as follows:

ΔR_ij = R_i^T · R_j

Δv_ij = R_i^T · (wv_j - wv_i - wg·Δt_ij)

Δp_ij = R_i^T · (wp_j - wp_i - wv_i·Δt_ij - 0.5·wg·Δt_ij²)
and fourthly, performing motion estimation and compensation on the region of interest according to the position change of the robot, wherein the implementation steps are as follows:
step 401: establish the conversion relation between the world three-dimensional coordinate system and the image coordinate system. The real world is a three-dimensional space; suppose a point P is expressed as (x_w, y_w, z_w) in the world coordinate system and as (u, v) in the corresponding image coordinate system. The conversion passes through "world coordinate system => camera coordinate system => image physical coordinate system => image pixel coordinate system", where the positional relationship between the camera coordinate system and the world coordinate system is described by a rotation matrix R and a translation vector t, f denotes the camera focal length, d_x and d_y denote the physical size of a single pixel along the two image axes, and Z_c denotes the depth information. The calculation formula is:

Z_c·[u, v, 1]^T = K·[R | t]·[x_w, y_w, z_w, 1]^T,  with  K = [[f/d_x, 0, u_0], [0, f/d_y, v_0], [0, 0, 1]]

where (u_0, v_0) is the principal point of the image.
step 402: merge the robot pose change (ΔR_ij, Δv_ij, Δp_ij) calculated in step 305 into the camera model to estimate the position p2 = (u2, v2) of the target in the second-frame image coordinate system (K is the intrinsic matrix calibrated in advance and is not described in detail here):

Z_c2·[u2, v2, 1]^T = K·ΔR_ij^T·(Z_c1·K^{-1}·[u1, v1, 1]^T - Δp_ij)
fifthly, correcting the region of interest and detecting the accurate position of the target, wherein the implementation steps are as follows:
step 501: calculate the pixel offset Δu = u2 - u1, Δv = v2 - v1 and apply it to the region of interest ROI_B'0 described in step 201, obtaining the new region of interest ROI_B″0 = (x″0, y″0, x″1, y″1) = (x'0+Δu, y'0+Δv, x'1+Δu, y'1+Δv);
step 502: judge whether the region of interest ROI_B″0 exceeds the image boundary, and adjust it if it does. With the image size h×w, the judgment and adjustment are: x″0 = max(x″0, 0); y″0 = max(y″0, 0); x″1 = min(x″1, w); y″1 = min(y″1, h);
step 503: execute the corresponding detection algorithm within the ROI_B″0 range of the current picture to obtain the accurate position curt_B0 = (x0, y0, x1, y1) of target B0; if no target is detected, clear pre_B0 and jump to step 102 to perform full-image detection;
step 504: after the system has processed the current image, update pre_B0 = curt_B0 and return to step 101 (the complete per-frame flow is sketched below).
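For orientation, the per-frame flow of the embodiment can be tied together in a short Python sketch; the detector interface, the fixed depth value and the helper functions (expand_roi, preintegrate_imu, predict_center, correct_roi, sketched above) are assumptions made here for illustration:

```python
def process_frame(image, state, detector, imu_buffer, K, dt, img_w, img_h):
    """One pass of the adaptive-ROI loop (steps 101, 201, 301-306, 401-402, 501-504).

    state:      dict holding the previous box under key "pre_B0" (or None).
    detector:   callable(image, roi=None) -> box (x0, y0, x1, y1) or None.
    imu_buffer: list of (gyro, accel) samples collected since the last frame."""
    pre_box = state.get("pre_B0")
    if pre_box is None:
        box = detector(image)                               # step 102: full-image scan
    else:
        roi = expand_roi(pre_box)                           # step 201
        dR, dv, dp = preintegrate_imu(imu_buffer, dt)       # steps 302-306
        p1 = ((pre_box[0] + pre_box[2]) / 2.0,
              (pre_box[1] + pre_box[3]) / 2.0)              # step 301
        p2 = predict_center(p1, depth=3.0, K=K, dR=dR, dp=dp)   # step 402 (depth assumed)
        roi = correct_roi(roi, p1, p2, img_w, img_h)        # steps 501-502
        box = detector(image, roi=roi)                      # step 503: detect inside ROI
        if box is None:                                     # fall back to a full scan
            box = detector(image)
    state["pre_B0"] = box                                   # steps 103 / 504
    return box
```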
The test comparison results are shown in the table below; the method of the invention yields a clear improvement in both accuracy and computational efficiency.
Table 1: Recognition rate and time consumption comparison

Category      Precision   Recall    Correct   Errors   Average time per frame
Before        85.00%      70.12%    12992     2292     81 ms
Now           98.24%      93.32%    15000     268      23 ms
Improvement   15.58%      33.09%    15.46%    88.30%   71.60%
The adaptive target ROI positioning method disclosed by the invention uses the fact that the position of the target can change only a limited amount between two frames: the detection result of the previous frame is fused with the sensor motion information to compensate for the position change of the target, and the region in which the target is likely to appear in the current image is estimated. As a result the detection region can be reduced by more than 50% (720P camera, 1080×720 resolution), the average speed is improved by 70%, and, because background interference is reduced, the detection precision is also improved by about 15%. The invention effectively solves the problems of low computation speed, poor real-time performance and easy target loss during motion that the cleaning robot faces on a solar panel because of the wide detection range and complex background.

Claims (4)

1. A self-adaptive target ROI positioning method of a solar panel cleaning robot is characterized by comprising the following specific steps:
S01: collecting a first frame image, and detecting and screening the whole image to obtain the target position;
S02: acquiring the next frame image, and estimating the region of interest of the target in the current image according to the target position in the previous frame image;
S03: modeling the motion state of the robot, and calculating the position change of the robot between the two images;
S04: performing motion estimation and compensation on the region of interest according to the position change of the robot;
S05: correcting the region of interest and detecting the accurate position of the target;
the target position is obtained by detecting and screening the whole image, and the specific steps are as follows:
S11: the system captures a frame of image at a fixed interval or on command; if it is not the first frame image of the system and a detection result already exists, jumping to S21;
S12: invoking the algorithm model corresponding to the target to be detected to perform full-image detection, and screening to obtain the target position; the position of a detected target B0 is recorded as curt_B0 = (x0, y0, x1, y1), where (x0, y0) and (x1, y1) are the coordinates of the upper-left and lower-right corners of the target; if no target is detected in this frame, returning to S11 and waiting to detect the next frame image;
S13: after the system has processed the current image, updating pre_B0 = curt_B0 and returning to S11 to wait for the next frame image, where pre_B0 denotes the position of the target in the previous frame image;
the region of interest of the target in the current image is estimated from the target position in the previous frame image by the following specific step:
S21: the position of target B0 in the previous frame image is pre_B0 = (x0, y0, x1, y1); expanding pre_B0 by 1.2 times gives the region of interest ROI_B'0 = (x'0, y'0, x'1, y'1) = (x0 - δx, y0 - δy, x1 + δx, y1 + δy), where δx = 0.6×(x1 - x0) and δy = 0.6×(y1 - y0) are intermediate variables.
2. The method for adaptively positioning the target ROI of the solar panel cleaning robot according to claim 1, wherein the modeling of the robot motion state and the calculation of the position change of the robot between the two images comprise the following steps:
S31: from pre_B0 = (x0, y0, x1, y1), calculating the center p1 = ((x0+x1)/2, (y0+y1)/2) of the position of target B0 on the previous frame image;
S32: sampling the IMU (inertial measurement unit) at a fixed interval Δt to obtain the robot's linear accelerations a_x, a_y, a_z along the three axes and angular velocities w_x, w_y, w_z about the three axes, and then solving the rotation matrix R and translation vector t; since Δt is very short, the angular velocity and linear acceleration are assumed to remain unchanged within Δt; combined with the physical motion model

wv̇(t) = wa(t)

wṗ(t) = wv(t)

where the superscript dot denotes the derivative, the rotation R, velocity v and position p at time t+Δt are calculated as:

R(t+Δt) = R(t)·Exp((ω̃(t) - b^g(t) - η^gd(t))·Δt)

wv(t+Δt) = wv(t) + wg·Δt + R(t)·(ã(t) - b^a(t) - η^ad(t))·Δt

wp(t+Δt) = wp(t) + wv(t)·Δt + 0.5·wg·Δt² + 0.5·R(t)·(ã(t) - b^a(t) - η^ad(t))·Δt²

where the superscript d denotes a discrete value, wg denotes the gravitational acceleration in the world coordinate system, b is the drift value, and η is the Gaussian error;
S33: calculating the motion state of the robot corresponding to the second frame image; since the sampling frequency of the IMU is higher than that of the camera, the IMU data at multiple instants are accumulated; assuming that j - i IMU samples are collected between the first frame image at time t1 and the second frame at time t2, accumulating these j - i samples from time t1 gives the state values at time t2 as follows:

R_j = R_i · ∏_{k=i}^{j-1} Exp((ω̃_k - b^g_k - η^gd_k)·Δt)

wv_j = wv_i + wg·Δt_ij + Σ_{k=i}^{j-1} R_k·(ã_k - b^a_k - η^ad_k)·Δt

wp_j = wp_i + Σ_{k=i}^{j-1} [ wv_k·Δt + 0.5·wg·Δt² + 0.5·R_k·(ã_k - b^a_k - η^ad_k)·Δt² ]

where the subscript j corresponds to time t2, since both refer to the same instant;
S34: calculating the change of the robot motion state within the interval [t1 → t2] between the two images; solving the difference between the state quantities at t1 and t2, i.e. the total rotation ΔR_ij, velocity change Δv_ij and displacement Δp_ij accumulated by the IMU between i and j, as follows:

ΔR_ij = R_i^T · R_j

Δv_ij = R_i^T · (wv_j - wv_i - wg·Δt_ij)

Δp_ij = R_i^T · (wp_j - wp_i - wv_i·Δt_ij - 0.5·wg·Δt_ij²)
3. The method for adaptively positioning the target ROI of the solar panel cleaning robot according to claim 1, wherein the motion estimation and compensation are performed on the region of interest according to the position change of the robot, comprising the following steps:
S41: establishing the conversion relation between the world three-dimensional coordinate system and the image coordinate system; suppose a point P is expressed as (x_w, y_w, z_w) in the world coordinate system and as (u, v) in the corresponding image coordinate system; the conversion goes from the world coordinate system to the camera coordinate system, then to the image physical coordinate system, and then to the image pixel coordinate system, with the formula:

Z_c·[u, v, 1]^T = K·[R | t]·[x_w, y_w, z_w, 1]^T,  with  K = [[f/d_x, 0, u_0], [0, f/d_y, v_0], [0, 0, 1]]

where the positional change between the camera coordinate system and the world coordinate system is described by a rotation matrix R and a translation vector t, f denotes the camera focal length, d_x and d_y denote the physical size of a single pixel along the two image axes, (u_0, v_0) is the principal point, and Z_c denotes the depth information;
S42: merging the robot pose change (ΔR_ij, Δv_ij, Δp_ij) calculated in S34 into the camera model to estimate the position p2 = (u2, v2) of the target in the second-frame image coordinate system, with the formula:

Z_c2·[u2, v2, 1]^T = K·ΔR_ij^T·(Z_c1·K^{-1}·[u1, v1, 1]^T - Δp_ij)

where K is the camera intrinsic matrix calibrated in advance.
4. The method for adaptively positioning a target ROI of a solar panel cleaning robot according to claim 1, wherein said correcting the region of interest and detecting the accurate position of the target comprises the steps of:
S51: calculating the pixel offset Δu = u2 - u1, Δv = v2 - v1 and applying it to the region of interest ROI_B'0 described in S21, obtaining the new region of interest ROI_B″0 = (x″0, y″0, x″1, y″1) = (x'0+Δu, y'0+Δv, x'1+Δu, y'1+Δv);
S52: judging whether the region of interest ROI_B″0 exceeds the image boundary, and adjusting it if it does; with the image size h×w, the judgment and adjustment are: x″0 = max(x″0, 0); y″0 = max(y″0, 0); x″1 = min(x″1, w); y″1 = min(y″1, h);
S53: executing the corresponding detection algorithm within the ROI_B″0 range of the current picture to obtain the accurate position curt_B0 = (x0, y0, x1, y1) of target B0; if no target is detected, clearing pre_B0 and jumping to S12 to perform full-image detection;
S54: after the system has processed the current image, updating pre_B0 = curt_B0 and returning to S11.
CN201911332440.6A 2019-12-22 2019-12-22 Adaptive target ROI positioning method for solar panel cleaning robot Active CN111144406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911332440.6A CN111144406B (en) 2019-12-22 2019-12-22 Adaptive target ROI positioning method for solar panel cleaning robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911332440.6A CN111144406B (en) 2019-12-22 2019-12-22 Adaptive target ROI positioning method for solar panel cleaning robot

Publications (2)

Publication Number Publication Date
CN111144406A CN111144406A (en) 2020-05-12
CN111144406B true CN111144406B (en) 2023-05-02

Family

ID=70519292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911332440.6A Active CN111144406B (en) 2019-12-22 2019-12-22 Adaptive target ROI positioning method for solar panel cleaning robot

Country Status (1)

Country Link
CN (1) CN111144406B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011234314A (en) * 2010-04-30 2011-11-17 Canon Inc Image processing apparatus, image processing method and program
CN107093188A (en) * 2017-04-12 2017-08-25 湖南源信光电科技股份有限公司 A kind of intelligent linkage and tracking based on panoramic camera and high-speed ball-forming machine
CN108230328A (en) * 2016-12-22 2018-06-29 深圳光启合众科技有限公司 Obtain the method, apparatus and robot of target object

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3603737B2 (en) * 2000-03-30 2004-12-22 日本電気株式会社 Moving object tracking method and device
TWI537580B (en) * 2013-11-26 2016-06-11 財團法人資訊工業策進會 Positioning control method
CN105825524B (en) * 2016-03-10 2018-07-24 浙江生辉照明有限公司 Method for tracking target and device
CN105741325B (en) * 2016-03-15 2019-09-03 上海电气集团股份有限公司 A kind of method and movable object tracking equipment of tracked mobile target
CN107230219B (en) * 2017-05-04 2021-06-04 复旦大学 Target person finding and following method on monocular robot
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
US11023761B2 (en) * 2017-11-06 2021-06-01 EagleSens Systems Corporation Accurate ROI extraction aided by object tracking
TWI701609B (en) * 2018-01-04 2020-08-11 緯創資通股份有限公司 Method, system, and computer-readable recording medium for image object tracking
JP7050509B2 (en) * 2018-01-31 2022-04-08 キヤノン株式会社 Image processing equipment, image processing methods, and programs
CN109018591A (en) * 2018-08-09 2018-12-18 沈阳建筑大学 A kind of automatic labeling localization method based on computer vision
CN110516620B (en) * 2019-08-29 2023-07-28 腾讯科技(深圳)有限公司 Target tracking method and device, storage medium and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011234314A (en) * 2010-04-30 2011-11-17 Canon Inc Image processing apparatus, image processing method and program
CN108230328A (en) * 2016-12-22 2018-06-29 深圳光启合众科技有限公司 Obtain the method, apparatus and robot of target object
CN107093188A (en) * 2017-04-12 2017-08-25 湖南源信光电科技股份有限公司 A kind of intelligent linkage and tracking based on panoramic camera and high-speed ball-forming machine

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘乐元. Research and Implementation of the Vision System for Centralized-Control Soccer Robots. (Issue 11), full text. *

Also Published As

Publication number Publication date
CN111144406A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN110261870B (en) Synchronous positioning and mapping method for vision-inertia-laser fusion
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
Zou et al. A seam tracking system based on a laser vision sensor
US6470271B2 (en) Obstacle detecting apparatus and method, and storage medium which stores program for implementing the method
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
CN108907526A (en) A kind of weld image characteristic recognition method with high robust
CN112017248A (en) 2D laser radar camera multi-frame single-step calibration method based on dotted line characteristics
CN114353690B (en) On-line detection device and detection method for roundness of large aluminum alloy annular forging
CN101865656B (en) Method for accurately positioning position of multi-camera system by using small number of coplanar points
CN110648354B (en) Slam method in dynamic environment
CN116977628A (en) SLAM method and system applied to dynamic environment and based on multi-mode semantic framework
CN115984766A (en) Rapid monocular vision three-dimensional target detection method for underground coal mine
CN111144406B (en) Adaptive target ROI positioning method for solar panel cleaning robot
Sheng et al. Mobile robot localization and map building based on laser ranging and PTAM
CN114067210A (en) Mobile robot intelligent grabbing method based on monocular vision guidance
CN111696155A (en) Monocular vision-based multi-sensing fusion robot positioning method
CN109815812B (en) Vehicle bottom edge positioning method based on horizontal edge information accumulation
CN116117800B (en) Machine vision processing method for compensating height difference, electronic device and storage medium
Liu et al. A new measurement method of real-time pose estimation for an automatic hydraulic excavator
CN107248171B (en) Triangulation-based monocular vision odometer scale recovery method
CN116388669A (en) Photovoltaic panel foreign matter detection and cleaning method based on Swin transducer
CN111239761B (en) Method for indoor real-time establishment of two-dimensional map
CN111062907B (en) Homography transformation method based on geometric transformation
CN117611762B (en) Multi-level map construction method, system and electronic equipment
CN117649619B (en) Unmanned aerial vehicle visual navigation positioning recovery method, system, device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant