
CN103716399A - Remote interaction fruit picking cooperative asynchronous control system and method based on wireless network

Info

Publication number
CN103716399A
Authority
CN
China
Prior art keywords
module
robot
fruit
coordinate
timestamp
Prior art date
Legal status
Granted
Application number
CN201310746643.6A
Other languages
Chinese (zh)
Other versions
CN103716399B (en)
Inventor
刘成良
刘佰鑫
贡亮
赵源深
陈冉
牛庆良
黄丹枫
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201310746643.6A
Publication of CN103716399A
Application granted
Publication of CN103716399B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a remote interactive fruit-picking cooperative asynchronous control system based on a wireless network, belonging to the field of data recognition. The system comprises a visual information acquisition module, a robot pose and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module. Through cooperative asynchronous control over the wireless network, the fruit recognition, localization, and picking operations of the fruit-picking robot run asynchronously, which improves efficiency. At the same time, the system exploits both the human ability to recognize fruit in unstructured environments and the robot's ability to localize precisely: the recognition range is wide, the kinds of fruit that can be recognized are unrestricted, and fruit maturity can be judged.

Description

Remote interactive fruit-picking cooperative asynchronous control system and method based on a wireless network
Technical field
The present invention relates to a system and method in the field of data recognition, specifically a remote interactive fruit-picking cooperative asynchronous control system and method based on a wireless network.
Background art
The development of wireless networks, in particular of the IEEE 802.11 family of standards, has steadily increased transmission rates and communication ranges, making wireless control of robots practical. Wireless local area networks (WLANs) based on the 802.11 standards are the most widely used wireless LAN technology; they are cost-effective and flexible to deploy, which gives them significant value in agricultural mechanization and intelligent agriculture.
A fruit-picking robot is a robot designed specifically for the task of harvesting fruit. At present, the fruit recognition and localization modules of the picking robots that have been researched and developed are all mounted on the robot body. The Silsoe Research Institute in Britain developed a mushroom-harvesting robot (1994) with a picking success rate of about 75% at about 6.7 s per mushroom; tilted growth was the main cause of failed picks. The agricultural and environmental engineering institute in the Netherlands developed a cucumber-harvesting robot (1996): cucumber detection exceeded 90% and the picking success rate was about 80%, but at roughly 54 s per cucumber the cycle time could not meet commercial requirements. Kyungpook National University in Korea developed an apple-picking machine (1998) whose recognition rate for apples visible from outside the canopy reached 85% at about 5 s per apple, but the success rate could not meet application requirements. Cao Qixin et al. of Shanghai Jiao Tong University developed a strawberry recognition algorithm (2008) with an average recognition time of 1 s in a laboratory environment, a stem misdetection rate of 7%, and damage to 5% of fruit during picking. Ji Chao, Li Wei, and colleagues at China Agricultural University developed a cucumber-picking robot with a harvesting success rate of 85% at 28.6 s per cucumber, giving it relatively high practicality. Constrained by the computing power of the robots, the recognition and localization stages remain inaccurate and time-consuming, so these systems cannot yet be put into production use.
A search of the prior art found Chinese patent document CN102682286A, published on 2012-09-19, which discloses a fruit recognition method for picking robots based on a laser vision system. In summary: a laser range finder and a linear-translation mechanism are combined into a laser vision system that acquires local distance information of the fruit tree, and a three-dimensional image marking the near and far features of the scene is generated from the mapping between distance values and gray values. After smoothing this three-dimensional image, chain-code tracking and a random circle detection method are used to compute the centroid coordinates and radius of each fruit in the image.
Compared with the present invention, however, that technique has the following defects and shortcomings: its recognition range is limited, since it can only process a local region and cannot effectively recognize all the fruit on a tree; the kinds of fruit it can recognize are limited, since it can only handle round fruit; and it cannot judge fruit maturity, so a subsequent harvesting operation may pick green fruit and cause economic loss.
Summary of the invention
Aiming at the above shortcomings of the prior art, the present invention provides a remote interactive fruit-picking cooperative asynchronous control system and method based on a wireless network, which makes the fruit recognition, localization, and picking operations of a fruit-picking robot asynchronous and thereby improves efficiency.
The present invention is achieved by the following technical solutions:
The present invention relates to a remote interactive fruit-picking cooperative asynchronous control system based on a wireless network, comprising: a visual information acquisition module, a robot pose and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module, wherein:
The visual information acquisition module, robot pose and position information acquisition module, robot motion control module, and robot task list module are arranged on the robot; the human-computer interaction module and the three-dimensional coordinate resolving module are arranged on the master control computer.
The robot task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module. The visual information acquisition module collects image information of the fruit and transfers it together with the timestamp to the network transmission module, while the pose and position information acquisition module establishes a message breakpoint for the timestamp. The network transmission module sends the image information and timestamp to the human-computer interaction module of the master control computer, where the operator marks the fruit regions in the image; the three-dimensional coordinate resolving module then determines, for each fruit in these regions, a three-dimensional world coordinate sequence relative to the robot's cameras, associated with the timestamp. Each three-dimensional world coordinate sequence and its timestamp are returned to the robot task list module through the network transmission module. The task list module reads the coordinate sequence corresponding to each timestamp in first-in-first-out order, a transformation matrix converts the coordinates of the fruit to be picked into the current binocular camera coordinate system, and the motion control module performs inverse kinematics on this coordinate sequence and sends control signals that drive the robot's picking actions.
The visual information acquisition module comprises: two cameras arranged on the robot and an image compression module for compressing the image information.
The network transmission module comprises an application layer, a transport layer, a network layer, and a physical layer, wherein the application layer adopts a custom protocol, the transport layer adopts the TCP protocol, and the network layer adopts the IP protocol family. The physical layer comprises a group of wireless omnidirectional access points, a hub, and a wireless directional relay in mutual communication; the wireless omnidirectional access point group communicates with the robot, and the wireless directional relay communicates with the master control computer.
The custom protocol specifically comprises: a timestamp, an information type, a header checksum section, and an instruction section.
The robot task list module comprises: a new-task unit for sending timestamps; a task management unit for reading coordinate sequences, transforming them, and sequencing unfinished tasks; and a resource release unit for releasing the hardware resources occupied by completed tasks.
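The division of the task list module into these three units can be pictured with a short sketch. The following Python fragment is a minimal illustration under assumed interfaces, not the patent's implementation; all names (TaskList, new_task, add_result, and so on) are hypothetical:

```python
import time
from collections import deque

class TaskList:
    """Minimal sketch of the robot task list module (hypothetical names)."""

    def __init__(self):
        self.queue = deque()  # FIFO of (timestamp, world coordinate sequence)

    def new_task(self):
        # New-task unit: issue a timestamp tagging one image acquisition
        stamp = time.time()
        return stamp  # sent to the vision and pose/position modules

    def add_result(self, stamp, world_coords):
        # Coordinate sequence returned from the master control computer
        self.queue.append((stamp, world_coords))

    def next_task(self):
        # Task management unit: read in first-in-first-out order
        return self.queue.popleft() if self.queue else None

    def release(self, stamp):
        # Resource release unit: free resources held by a finished task,
        # e.g. drop the pose snapshot stored for this timestamp
        pass
```

The deque directly gives the first-in-first-out reading order required of the task management unit.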
The present invention also relates to a control method for the above system. The method creates timestamps; collects fruit image information and robot pose and position information corresponding to each timestamp; builds coordinate sequences from the image information to serve as the robot's task list; and has the robot execute picking actions according to this task list. It comprises the following steps:
Step 1: the robot task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module; the visual information acquisition module collects image information of the fruit and transfers it together with the timestamp to the network transmission module; the pose and position information acquisition module establishes a message breakpoint for the timestamp; and the network transmission module sends the image information and timestamp to the master control computer.
Step 2: the human-computer interaction module of the master control computer determines the fruit regions in the image information, and the three-dimensional coordinate resolving module determines, for each fruit in these regions, a three-dimensional world coordinate sequence relative to the robot's cameras, associated with the timestamp.
Step 3: each three-dimensional world coordinate sequence and its timestamp are returned to the robot task list module through the network transmission module; the task list module reads the coordinate sequence corresponding to each timestamp in first-in-first-out order; a transformation matrix converts the coordinates of the fruit to be picked into the current coordinate system of the two cameras; and the motion control module performs inverse kinematics on this coordinate sequence and sends control signals that drive the robot's picking actions.
Step 1 specifically comprises the following steps:
1) The robot's task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module;
2) The visual acquisition module uses its two cameras, arranged at the robot's binocular position, to capture and compress image frames of the two-channel image information, which are transferred together with the corresponding timestamp to the network transmission module;
3) The network transmission module packages the timestamp and image frames and transmits them to the master control computer;
4) The robot pose and position information acquisition module collects pose and position information and establishes a breakpoint flag for the received timestamp.
The pose and position information comprises: the compass output describing the robot's heading, the drive motor encoder outputs, and the height and pitch angle of the pan-tilt unit carrying the binocular cameras.
The breakpoint flag is a pointer to the memory address where the pose and position information at the time the timestamp was received is stored.
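Such a breakpoint flag can be pictured as a timestamp-keyed snapshot of the pose and position data. The sketch below is an assumed structure for illustration only; the dictionary stands in for the pointer-to-memory-address described above:

```python
# Hypothetical sketch: message breakpoints as timestamp-keyed pose snapshots.
pose_breakpoints = {}  # timestamp -> pose/position record

def on_timestamp(stamp, compass, encoders, pan_tilt_height, pan_tilt_pitch):
    # Establish the breakpoint: remember the robot state at receipt time
    pose_breakpoints[stamp] = {
        "compass": compass,                              # heading information
        "encoders": encoders,                            # drive motor encoder outputs
        "pan_tilt": (pan_tilt_height, pan_tilt_pitch),   # camera platform pose
    }
```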
Step 2 specifically comprises the following steps:
1) The application layer of the network transmission module unpacks the received information, obtaining the timestamp and the compressed image information;
2) The decompressed image information is displayed in the visual interface of the human-computer interaction module; the fruit to be picked is identified in the two images captured by the two cameras, the fruit positions in the two images of the same frame are matched, and the three-dimensional coordinate resolving module calculates the three-dimensional world coordinate (x_w, y_w, z_w) of each fruit to be picked; matching and calculation are repeated until all the fruit in the image have been processed.
The computation of the three-dimensional world coordinate of each fruit to be picked specifically comprises the following steps:
2.1) The two-dimensional coordinate (x, y) of the fruit relative to a single camera:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & s & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where (x, y) is the two-dimensional coordinate of the fruit relative to a single camera in the image coordinate system; Z_c is the coordinate conversion factor describing the transformation from the image coordinate system to the world coordinate system; M_1 is the camera intrinsic parameter matrix; M_2 is the camera extrinsic parameter matrix; M = M_1 M_2; f is the camera's effective focal length; dx and dy are the horizontal and vertical pixel unit lengths; (u_0, v_0) is the origin of the image coordinate system expressed in the computer coordinate system, in millimeters; and s is the distortion parameter of the nonlinear camera model.
The camera intrinsic parameter matrix M_1 describes the transformation from the computer coordinate system to the image coordinate system:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & s & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
The camera extrinsic parameter matrix M_2 describes the transformation between the world coordinate system and the computer coordinate system, where R is a fixed orthogonal rotation matrix and T is a fixed translation matrix:

$$\begin{bmatrix} u \\ v \\ w \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
2.2) Obtaining the depth z of the fruit relative to the binocular cameras: let C_1 and C_2 be the optical centers of the two cameras and b the horizontal distance between them; f is the camera focal length, and P_1 and P_2 are the projections of the spatial point P onto the two camera imaging planes. Perpendiculars dropped from C_1 and C_2 to the camera coordinate plane have feet A_1 and A_2; the perpendicular dropped from P to the camera coordinate plane has foot B and meets the imaging plane at a point E. Let A_1C_1 = i_a and A_2C_2 = i_b; then

$$z = \frac{bf}{i_a - i_b}$$
2.3) The projections of the spatial point (x_w, y_w, z_w)^T onto the images of the two cameras are expressed as:

$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^1 & m_{12}^1 & m_{13}^1 & m_{14}^1 \\ m_{21}^1 & m_{22}^1 & m_{23}^1 & m_{24}^1 \\ m_{31}^1 & m_{32}^1 & m_{33}^1 & m_{34}^1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

$$Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M'' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^2 & m_{12}^2 & m_{13}^2 & m_{14}^2 \\ m_{21}^2 & m_{22}^2 & m_{23}^2 & m_{24}^2 \\ m_{31}^2 & m_{32}^2 & m_{33}^2 & m_{34}^2 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
where Z_{c1} and Z_{c2} are the coordinate conversion factors from the image coordinate systems of the two cameras to the world coordinate system, M' = (m_{ij}^1) and M'' = (m_{ij}^2) are the products of the intrinsic and extrinsic parameter matrices of the respective cameras, and m_{ij}^k (k = 1, 2; i = 1, 2, 3; j = 1, 2, 3, 4) denotes the matrix element in row i, column j.
2.4) Let the x_w O_w y_w plane of the world coordinate system coincide with the image coordinate system of one of the cameras, with the x_w axis along the optical axis. The spatial point (x_w, y_w, z_w)^T is then calculated from the positional relationship of the two cameras, where R_c is the orthogonal rotation matrix and T_c the translation matrix expressing that relationship:

$$\begin{bmatrix} x_{c2} \\ y_{c2} \\ z_{c2} \\ 1 \end{bmatrix} = \begin{bmatrix} R_c & T_c \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \\ 1 \end{bmatrix}$$
The transformation matrix of step 3 is obtained as follows: the pose and position information acquisition module has established a message breakpoint for the timestamp of the pending task; from the pose and position description matrix at the message breakpoint and the current pose and position description matrix, the coordinate sequence of the fruit to be picked relative to the coordinate system of the two cameras is obtained.
Step 3 specifically comprises the steps:
1) The three-dimensional world coordinate of each fruit to be picked and its timestamp are returned to the robot task list module through the network transmission module;
2) Transformation matrix: let the pose and position description matrix of the robot at the message breakpoint be S_0 and the pose and position description matrix of the current robot be S_p; the transformation matrix S_c satisfies

$$S_p = S_c S_0$$

With (x_w, y_w, z_w) denoting the three-dimensional world coordinate of each fruit to be picked, the coordinate (x_p, y_p, z_p) of the fruit relative to the current coordinate system of the two cameras is

$$\begin{bmatrix} x_p \\ y_p \\ z_p \end{bmatrix} = S_c \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = S_p S_0^{-1} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix}$$
The pose and position description matrix S_0 of the robot at the breakpoint and the pose and position description matrix S_p of the current robot are built from: the compass output, the drive-wheel motor encoder outputs, and the height and pitch angle of the pan-tilt unit carrying the binocular cameras.
Technical effect
Based on cooperative asynchronous control over a wireless network, the present invention makes the fruit recognition, localization, and picking operations of a fruit-picking robot asynchronous and thereby improves efficiency. At the same time, the system exploits the respective advantages of human recognition ability in unstructured environments and of precise robot localization: the recognition range is wide, the kinds of fruit that can be recognized are unrestricted, and fruit maturity can be judged. Compared with conventional machine-vision fruit recognition algorithms, it effectively avoids the problems caused by unstable illumination and occlusion.
Brief description of the drawings
Fig. 1 is the system connection diagram of embodiment 1;
Fig. 2 shows the custom protocol of the application layer;
Fig. 3 is a schematic diagram of the physical-layer topology;
Fig. 4 is a schematic diagram of the method steps of embodiment 2;
Fig. 5 is a schematic diagram of the coordinate systems of embodiment 2;
Fig. 6 is a schematic diagram of the visual ranging principle of embodiment 2.
Embodiments
The embodiments of the present invention are described in detail below. Each embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and specific operating procedures are given, but the protection scope of the present invention is not limited to the following embodiments.
Embodiment 1
As shown in Figure 1, the present embodiment comprises: a visual information acquisition module, a robot pose and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module, wherein:
The visual information acquisition module, robot pose and position information acquisition module, robot motion control module, and robot task list module are arranged on the robot; the human-computer interaction module and the three-dimensional coordinate resolving module are arranged on the master control computer.
The robot task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module. The visual information acquisition module collects image information of the fruit and transfers it together with the timestamp to the network transmission module, while the pose and position information acquisition module establishes a message breakpoint for the timestamp. The network transmission module sends the image information and timestamp to the human-computer interaction module of the master control computer, whose interface is used to mark the fruit regions in the image; the three-dimensional coordinate resolving module then determines, for each fruit in these regions, a three-dimensional world coordinate sequence relative to the robot's cameras, associated with the timestamp. Each three-dimensional world coordinate sequence and its timestamp are returned to the robot task list module through the network transmission module. The task list module reads the coordinate sequence corresponding to each timestamp in first-in-first-out order, a transformation matrix converts the coordinates of the fruit to be picked into the current binocular camera coordinate system, and the motion control module performs inverse kinematics on this coordinate sequence and sends control signals that drive the robot's picking actions.
The visual information acquisition module comprises: a binocular camera arranged on the robot and an image compression module for compressing the image information.
As shown in Figure 2, the custom protocol specifically comprises: timestamp fields DD, HH, MM, SS, and MS; an information type field TYPE; a header checksum section; and an instruction section carrying the data.
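As an illustration of how such a header might be laid out, the following sketch packs the fields of Figure 2 with Python's struct module. The field widths and the XOR checksum are assumptions; the patent does not fix them:

```python
import struct

# Hypothetical layout: DD, HH, MM, SS one byte each, MS two bytes,
# TYPE one byte, then a one-byte header checksum and the instruction data.
def pack_packet(dd, hh, mm, ss, ms, msg_type, data: bytes) -> bytes:
    header = struct.pack(">BBBBHB", dd, hh, mm, ss, ms, msg_type)
    checksum = 0
    for byte in header:          # simple XOR checksum over the header bytes
        checksum ^= byte
    return header + bytes([checksum]) + data

packet = pack_packet(30, 14, 25, 7, 512, 0x01, b"coordinate payload")
```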
The network transmission module comprises an application layer, a transport layer, a network layer, and a physical layer, wherein the application layer adopts the custom protocol, the transport layer adopts the TCP protocol, and the network layer adopts the IP protocol family.
As shown in Figure 3, the physical layer comprises a group of wireless omnidirectional access points, a hub, and a wireless directional relay in mutual communication; the wireless omnidirectional access point group communicates with the robot, and the wireless directional relay communicates with the master control computer.
The robot task list module comprises: a new-task unit for sending timestamps; a task management unit for reading coordinate sequences, transforming them, and sequencing unfinished tasks; and a resource release unit for releasing the hardware resources occupied by completed tasks.
Embodiment 2
As shown in Figure 4, the present embodiment comprises the following steps:
1. The robot task list module sends a timestamp; after receiving the timestamp, the visual information acquisition module transfers the collected image information to the network transmission module. At the same time, the robot pose and position information acquisition module establishes a message breakpoint for the timestamp. The network transmission module packages the image and the timestamp and sends them to the network transmission module of the master control computer.
1.1. The robot's task list module sends the timestamp to the visual information acquisition module and the robot pose and position information acquisition module.
1.2. The visual acquisition module uses its two cameras, arranged at the robot's binocular position, to capture and compress image frames of the two-channel image information, which are handed together with the corresponding timestamp to the network transmission module.
1.3. The network transmission module packages the timestamp and image frames and transmits them to the master control computer.
1.4. The robot pose and position information acquisition module collects pose and position information and establishes a breakpoint flag for the received timestamp.
The pose and position information comprises: the compass output describing the robot's heading, the drive motor encoder outputs, and the height and pitch angle of the pan-tilt unit carrying the binocular cameras.
The breakpoint flag is a pointer to the memory address where the pose and position information at the time the timestamp was received is stored.
2. The application layer of the network transmission module unpacks the received information, obtaining the timestamp and the compressed image information. The decompressed image information is displayed in the visual interface of the human-computer interaction module, where the user clicks in turn, using the template-box selection method, the fruit to be picked in the two images. The three-dimensional coordinate resolving module determines the three-dimensional coordinate point sequence of the fruit in the two images relative to the cameras, and the application layer of the network transmission module packages the timestamp and the three-dimensional coordinate sequence and returns them to the robot.
2.1. The application layer of the master control computer's network transmission module unpacks the received information, obtaining the timestamp and the compressed image information.
2.2. The decompressed images are displayed in the visual interface of the human-computer interaction module. Using the template-box selection method, the user clicks in turn the fruit to be picked in the two images captured by the binocular cameras; the fruit positions in the two images of the same frame are matched; and the three-dimensional coordinate resolving module, running in a separate thread, calculates the coordinate (x, y, z) of each fruit to be picked relative to the binocular cameras. Matching and calculation are repeated until all the fruit in the image have been processed.
The computation proceeds as follows: as shown in Figure 5, the two-dimensional coordinate (x, y) of the fruit relative to a single camera is obtained from the camera's intrinsic and extrinsic parameters; the depth z of the fruit relative to the binocular cameras is obtained by the visual ranging principle; and the three-dimensional world coordinate (x, y, z) of the fruit to be picked is obtained from the geometric relationship between the two cameras.
The template-box selection method is as follows: pre-installed circular, rectangular, and arc templates can be selected as the mouse pointer; the mouse wheel scales the template proportionally; the right button combined with the mouse wheel changes the aspect ratio; and a long press and drag of the left mouse button changes the curvature of the arc. This lets the operator mark the fruit region, including any region that may be occluded. A minimal interaction sketch follows.
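The sketch below models the template-box state driven by the mouse events just described; the class name, scaling factor, and increments are all assumptions made for illustration:

```python
# Hypothetical sketch of the template-box state driven by mouse input.
class TemplateBox:
    def __init__(self, shape="circle"):
        self.shape = shape          # "circle", "rectangle", or "arc"
        self.scale = 1.0            # proportional template size
        self.aspect = 1.0           # width-to-height ratio
        self.arc_curvature = 0.0    # only meaningful for the arc template

    def on_wheel(self, steps, right_button_down=False):
        if right_button_down:
            self.aspect *= 1.1 ** steps   # right button + wheel: aspect ratio
        else:
            self.scale *= 1.1 ** steps    # wheel alone: proportional scaling

    def on_left_drag(self, dy):
        if self.shape == "arc":
            self.arc_curvature += 0.01 * dy  # long left press and drag: curvature
```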
The two-dimensional coordinate (x, y) of the fruit relative to a single camera is given by formula 1:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & s & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where (x, y) is the two-dimensional coordinate of the fruit relative to a single camera in the image coordinate system; Z_c is the coordinate conversion factor describing the transformation from the image coordinate system to the world coordinate system; M_1 is the camera intrinsic parameter matrix; M_2 is the camera extrinsic parameter matrix; M = M_1 M_2; f is the camera's effective focal length; dx and dy are the horizontal and vertical pixel unit lengths; (u_0, v_0) is the origin of the image coordinate system expressed in the computer coordinate system, in millimeters; and s is the distortion parameter of the nonlinear camera model.
The camera intrinsic parameter matrix M_1 is obtained by calibration with the two-step method based on the radial alignment constraint.
The computer coordinate system is the Cartesian coordinate system of the image pixel array of u columns and v rows; its origin is defined at the first pixel in the upper-left corner of the image, and its unit is the pixel. The image coordinate system is a Cartesian coordinate system coplanar with the computer coordinate system but with a different origin, located at (u_0, v_0); its unit is the millimeter.
As shown in Figure 5, the binocular camera coordinate systems are two three-dimensional coordinate systems, in which x_{c1}O_{c1}y_{c1} and x_{c2}O_{c2}y_{c2} denote two planes; the plane x_{c1}O_{c1}y_{c1} is parallel to the image coordinate plane xO_1y. Ideally the z_c axis is the optical axis of the camera and meets the computer coordinate plane uO_0v at O_1, the focal length f being the distance between the two coordinate systems; in practice the optical axis is not orthogonal to the x-y plane, which is described by the distortion parameter s of the nonlinear camera model.
Taking the distortion parameter into account, the transformation from the computer coordinate system to the image coordinate system is described by:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & s & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
The camera extrinsic parameter matrix M_2 describes the transformation between the world coordinate system and the computer coordinate system and is likewise obtained by calibration with the two-step method based on the radial alignment constraint. For a chosen world coordinate system, the transformation is given by a fixed orthogonal rotation matrix R and a fixed translation matrix T:

$$\begin{bmatrix} u \\ v \\ w \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
As shown in Figure 6, the visual ranging principle yields the depth z of the fruit relative to the binocular cameras. C_1 and C_2 are the optical centers of the binocular cameras and b is the horizontal distance between them; f is the camera focal length, and P_1 and P_2 are the projections of the world point P onto the two camera imaging planes. Perpendiculars dropped from C_1 and C_2 to the camera coordinate plane have feet A_1 and A_2; the perpendicular dropped from P to the camera coordinate plane has foot B and meets the imaging plane at a point E. Let A_1C_1 = i_a and A_2C_2 = i_b, and let z be the distance from the world point P to the camera plane. From the similar triangles ΔPEP_2 ∼ ΔPBC_2 and ΔPEP_1 ∼ ΔPBC_1 it follows that

$$\frac{z-f}{z} = \frac{a}{a+i_b}, \qquad \frac{z-f}{z} = \frac{a+b-i_a+i_b}{a+b+i_b}$$

and eliminating the intermediate variable a gives

$$z = \frac{bf}{i_a - i_b}$$
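In code, this depth recovery is a one-line application of the formula. The sketch below assumes the offsets i_a and i_b are measured in the same metric units as the baseline b and focal length f:

```python
def stereo_depth(b: float, f: float, i_a: float, i_b: float) -> float:
    """Depth from the binocular geometry: z = b*f / (i_a - i_b)."""
    disparity = i_a - i_b
    if disparity == 0:
        raise ValueError("point at infinity: zero disparity")
    return b * f / disparity

# e.g. baseline b = 0.12 m, f = 0.008 m, offsets 0.0031 m and 0.0025 m
z = stereo_depth(0.12, 0.008, 0.0031, 0.0025)  # -> 1.6 m
```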
From formula 1, the projection of the spatial point (x_w, y_w, z_w)^T onto the image of camera 1 is given by formula 2:

$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^1 & m_{12}^1 & m_{13}^1 & m_{14}^1 \\ m_{21}^1 & m_{22}^1 & m_{23}^1 & m_{24}^1 \\ m_{31}^1 & m_{32}^1 & m_{33}^1 & m_{34}^1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
and its projection onto the image of camera 2 is given by formula 3:

$$Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M'' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^2 & m_{12}^2 & m_{13}^2 & m_{14}^2 \\ m_{21}^2 & m_{22}^2 & m_{23}^2 & m_{24}^2 \\ m_{31}^2 & m_{32}^2 & m_{33}^2 & m_{34}^2 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
Z_{c1} and Z_{c2} are the coordinate conversion factors from the image coordinate systems of the two cameras to the world coordinate system, and M' = (m_{ij}^1) and M'' = (m_{ij}^2) are the products of the intrinsic and extrinsic parameter matrices of the respective cameras, with m_{ij}^k (k = 1, 2; i = 1, 2, 3; j = 1, 2, 3, 4) denoting the matrix elements. Let the x_w O_w y_w plane of the world coordinate system coincide with the image coordinate system of camera 2, with the x_w axis along the optical axis, as shown in Figure 5; the spatial point (x_w, y_w, z_w)^T can then be calculated from the positional relationship of the two cameras.
The positional relationship of the two cameras is likewise given by an orthogonal rotation matrix R_c and a translation matrix T_c, as shown in formula 4:

$$\begin{bmatrix} x_{c2} \\ y_{c2} \\ z_{c2} \\ 1 \end{bmatrix} = \begin{bmatrix} R_c & T_c \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \\ 1 \end{bmatrix}$$
Combining formulas 2, 3, and 4 and eliminating Z_{c1} and Z_{c2} yields a linear system in x_w, y_w, and z_w, from which the three-dimensional world coordinate is solved; a least-squares sketch follows.
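One conventional way to solve that linear system is least squares over the four equations obtained from formulas 2 and 3; the following DLT-style sketch is consistent with those formulas but is not code from the patent:

```python
import numpy as np

def triangulate(M1: np.ndarray, M2: np.ndarray, uv1, uv2) -> np.ndarray:
    """Solve for (x_w, y_w, z_w) from matched pixel observations.

    M1 and M2 are the 3x4 products of intrinsic and extrinsic parameters
    (M' and M'' in the text); uv1 and uv2 are matched pixel coordinates.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each camera contributes two rows: u*(row 3).P - (row 1).P = 0, etc.
    A = np.array([
        u1 * M1[2] - M1[0],
        v1 * M1[2] - M1[1],
        u2 * M2[2] - M2[0],
        v2 * M2[2] - M2[1],
    ])
    # With a homogeneous point (x_w, y_w, z_w, 1), split into A3 * X = -a4
    X, *_ = np.linalg.lstsq(A[:, :3], -A[:, 3], rcond=None)
    return X
```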
3. The coordinate sequence and the timestamp of the original image are transferred to the network transmission module and returned to the robot task list module. The task list reads the three-dimensional coordinate sequence corresponding to this timestamp in first-in-first-out order and determines, through the camera transformation matrix, the three-dimensional coordinate sequence of the fruit to be picked relative to the current binocular camera coordinate system. The motion control module performs inverse kinematics on this three-dimensional coordinate sequence and sends control signals to the drives, producing the robot's picking actions. While the picking operation is running, the acquisition of the spatial coordinates of fruit to be picked described above is repeated, so that fruit recognition, localization, and picking execution proceed asynchronously and picking efficiency improves.
3.1. The coordinate sequence and the timestamp of the original image are transferred to the network transmission module and returned to the robot's task list module.
3.2. The camera transformation matrix is obtained as follows: the timestamp of the pending task has a message breakpoint in the pose and position information acquisition module; from the pose and position description matrix at this breakpoint and the current pose and position description matrix, the transformation matrix of the binocular camera coordinate system is derived.
Let the pose and position description matrix of the robot at the message breakpoint be S_0 and the pose and position description matrix of the current robot be S_pre; the camera transformation matrix S_c then satisfies

$$S_{pre} = S_c S_0, \qquad S_c = S_{pre} S_0^{-1}$$

With (x_w, y_w, z_w) denoting the three-dimensional world coordinate of each fruit to be picked at the corresponding timestamp, the three-dimensional coordinate (x_p, y_p, z_p) of the fruit relative to the current binocular camera coordinate system is

$$\begin{bmatrix} x_p \\ y_p \\ z_p \end{bmatrix} = S_c \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = S_{pre} S_0^{-1} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix}$$
The pose and position description matrices comprise the compass output, the drive-wheel motor encoder outputs, and the height and pitch angle of the pan-tilt unit carrying the binocular cameras.
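As a worked illustration of step 3.2, the sketch below applies S_c = S_pre S_0^{-1} as a 4x4 homogeneous transform. Treating the pose and position description matrices as homogeneous transforms is an assumption made for the example:

```python
import numpy as np

def to_current_camera(S0: np.ndarray, S_pre: np.ndarray, p_w: np.ndarray) -> np.ndarray:
    """Map a fruit's world coordinate into the current binocular camera frame.

    S0 and S_pre are assumed 4x4 homogeneous pose/position matrices at the
    message breakpoint and at the present moment; p_w is (x_w, y_w, z_w).
    """
    S_c = S_pre @ np.linalg.inv(S0)   # camera transformation matrix
    p = S_c @ np.append(p_w, 1.0)     # apply as a homogeneous transform
    return p[:3]                      # (x_p, y_p, z_p)

# Example: pose descriptions differing by a 0.2 unit translation along x
S0 = np.eye(4)
S_pre = np.eye(4)
S_pre[0, 3] = 0.2
p = to_current_camera(S0, S_pre, np.array([1.0, 0.5, 1.6]))  # -> [1.2, 0.5, 1.6]
```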

Claims (9)

1. A remote interactive fruit-picking cooperative asynchronous control system based on a wireless network, characterized in that it comprises: a visual information acquisition module, a robot pose and position information acquisition module, a robot motion control module, a network transmission module, a human-computer interaction module, a three-dimensional coordinate resolving module, and a robot task list module, wherein:
the visual information acquisition module, robot pose and position information acquisition module, robot motion control module, and robot task list module are arranged on the robot, and the human-computer interaction module and the three-dimensional coordinate resolving module are arranged on the master control computer;
the robot task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module; the visual information acquisition module collects image information of the fruit and transfers it together with the timestamp to the network transmission module, while the pose and position information acquisition module establishes a message breakpoint for the timestamp; the network transmission module sends the image information and timestamp to the human-computer interaction module of the master control computer; the human-computer interaction module determines the fruit regions in the image information, and the three-dimensional coordinate resolving module determines, for each fruit in these regions, a three-dimensional world coordinate sequence relative to the robot's cameras, associated with the timestamp; each three-dimensional world coordinate sequence and its timestamp are returned to the robot task list module through the network transmission module; the task list module reads the coordinate sequence corresponding to each timestamp in first-in-first-out order; a transformation matrix converts the coordinates of the fruit to be picked into the binocular camera coordinate system; and the motion control module performs inverse kinematics on this coordinate sequence and sends control signals that drive the robot's picking actions.
2. The system according to claim 1, characterized in that the visual information acquisition module comprises two cameras arranged on the robot and an image compression module for compressing the image information.
3. The system according to claim 1, characterized in that the robot task list module comprises: a new-task unit for sending timestamps; a task management unit for reading coordinate sequences, transforming them, and sequencing unfinished tasks; and a resource release unit for releasing the hardware resources occupied by completed tasks.
4. A control method for the system according to any one of claims 1-3, characterized in that the method creates timestamps, collects fruit image information and robot pose and position information corresponding to each timestamp, builds coordinate sequences from the image information to serve as the robot's task list, and has the robot execute picking actions according to this task list, comprising the following steps:
Step 1: the robot task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module; the visual information acquisition module collects image information of the fruit and transfers it together with the timestamp to the network transmission module; the pose and position information acquisition module establishes a message breakpoint for the timestamp; and the network transmission module sends the image information and timestamp to the master control computer;
Step 2: the human-computer interaction module of the master control computer determines the fruit regions in the image information, and the three-dimensional coordinate resolving module determines, for each fruit in these regions, a three-dimensional world coordinate sequence relative to the robot's cameras, associated with the timestamp;
Step 3: each three-dimensional world coordinate sequence and its timestamp are returned to the robot task list module through the network transmission module; the task list module reads the coordinate sequence corresponding to each timestamp in first-in-first-out order; a transformation matrix converts the coordinates of the fruit to be picked into the coordinate system of the two cameras; and the motion control module performs inverse kinematics on this coordinate sequence and sends control signals that drive the robot's picking actions.
5. The method according to claim 4, characterized in that step 1 specifically comprises the following steps:
1) The robot's task list module sends a timestamp to the visual information acquisition module and the robot pose and position information acquisition module;
2) The visual acquisition module uses its two cameras, arranged at the robot's binocular position, to capture and compress image frames of the two-channel image information, which are transferred together with the corresponding timestamp to the network transmission module;
3) The network transmission module packages the timestamp and image frames and transmits them to the master control computer;
4) The robot pose and position information acquisition module collects pose and position information and establishes a breakpoint flag for the received timestamp.
6. The method according to claim 4 or 5, characterized in that step 2 specifically comprises the following steps:
1) The application layer of the network transmission module unpacks the received information, obtaining the timestamp and the compressed image information;
2) The decompressed image information is displayed in the visual interface of the human-computer interaction module; the fruit to be picked is identified in the two images captured by the two cameras, the fruit positions in the two images of the same frame are matched, and the three-dimensional coordinate resolving module calculates the three-dimensional world coordinate (x_w, y_w, z_w) of each fruit to be picked; matching and calculation are repeated until all the fruit in the image have been processed.
7. The method according to claim 6, characterized in that the computation of the three-dimensional world coordinate of each fruit to be picked specifically comprises the following steps:
2.1) The two-dimensional coordinate (x, y) of the fruit relative to a single camera:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & s & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where (x, y) is the two-dimensional coordinate of the fruit relative to a single camera in the image coordinate system; Z_c is the coordinate conversion factor describing the transformation from the image coordinate system to the world coordinate system; M_1 is the camera intrinsic parameter matrix; M_2 is the camera extrinsic parameter matrix; M = M_1 M_2; f is the camera's effective focal length; dx and dy are the horizontal and vertical pixel unit lengths; (u_0, v_0) is the origin of the image coordinate system expressed in the computer coordinate system, in millimeters; and s is the distortion parameter of the nonlinear camera model;
the camera intrinsic parameter matrix M_1 describes the transformation from the computer coordinate system to the image coordinate system:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & s & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
the camera extrinsic parameter matrix M_2 describes the transformation between the world coordinate system and the computer coordinate system, where R is a fixed orthogonal rotation matrix and T is a fixed translation matrix:

$$\begin{bmatrix} u \\ v \\ w \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
2.2) Obtaining the depth z of the fruit relative to the binocular cameras: let C_1 and C_2 be the optical centers of the two cameras and b the horizontal distance between them; f is the camera focal length, and P_1 and P_2 are the projections of the world point P onto the two camera imaging planes. Perpendiculars dropped from C_1 and C_2 to the camera coordinate plane have feet A_1 and A_2; the perpendicular dropped from P to the camera coordinate plane has foot B and meets the imaging plane at a point E. Let A_1C_1 = i_a and A_2C_2 = i_b, and let z be the distance from the world point P to the camera plane, i.e. the depth; then

$$z = \frac{bf}{i_a - i_b}$$
2.3) The projections of the spatial point (x_w, y_w, z_w)^T onto the images of the two cameras are expressed as:

$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^1 & m_{12}^1 & m_{13}^1 & m_{14}^1 \\ m_{21}^1 & m_{22}^1 & m_{23}^1 & m_{24}^1 \\ m_{31}^1 & m_{32}^1 & m_{33}^1 & m_{34}^1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

$$Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M'' \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11}^2 & m_{12}^2 & m_{13}^2 & m_{14}^2 \\ m_{21}^2 & m_{22}^2 & m_{23}^2 & m_{24}^2 \\ m_{31}^2 & m_{32}^2 & m_{33}^2 & m_{34}^2 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
where Z_{c1} and Z_{c2} are the coordinate conversion factors from the image coordinate systems of the two cameras to the world coordinate system, M' = (m_{ij}^1) and M'' = (m_{ij}^2) are the products of the intrinsic and extrinsic parameter matrices of the respective cameras, and m_{ij}^k (k = 1, 2; i = 1, 2, 3; j = 1, 2, 3, 4) denotes the matrix elements;
2.4) Let the x_w O_w y_w plane of the world coordinate system coincide with the image coordinate system of one of the cameras, with the x_w axis along the optical axis; the spatial point (x_w, y_w, z_w)^T is then calculated from the positional relationship of the two cameras, where R_c is the orthogonal rotation matrix and T_c the translation matrix expressing that relationship:

$$\begin{bmatrix} x_{c2} \\ y_{c2} \\ z_{c2} \\ 1 \end{bmatrix} = \begin{bmatrix} R_c & T_c \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \\ 1 \end{bmatrix}$$
8. The method according to claim 4 or 5, characterized in that the transformation matrix of step 3 is obtained as follows: the pose and position information acquisition module has established a message breakpoint for the timestamp of the pending task; from the pose and position description matrix at the message breakpoint and the current pose and position description matrix, the coordinate sequence of the fruit to be picked relative to the coordinate system of the two cameras is obtained.
9. The method according to claim 8, characterized in that step 3 specifically comprises the steps:
1) The three-dimensional world coordinate of each fruit to be picked and its timestamp are returned to the robot task list module through the network transmission module;
2) Transformation matrix: let the pose and position description matrix of the robot at the message breakpoint be S_0 and the pose and position description matrix of the current robot be S_p; the camera transformation matrix S_c satisfies

$$S_p = S_c S_0, \qquad S_c = S_p S_0^{-1}$$

With (x_w, y_w, z_w) denoting the three-dimensional world coordinate of each fruit to be picked, the coordinate (x_p, y_p, z_p) of the fruit relative to the current coordinate system of the two cameras is

$$\begin{bmatrix} x_p \\ y_p \\ z_p \end{bmatrix} = S_c \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = S_p S_0^{-1} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix}$$

The pose and position description matrix S_0 of the robot at the message breakpoint and the pose and position description matrix S_p of the current robot comprise: the compass output, the drive-wheel motor encoder outputs, and the height and pitch angle of the pan-tilt unit carrying the binocular cameras.
CN201310746643.6A 2013-12-30 2013-12-30 Remote interaction fruit picking cooperative asynchronous control system and method based on wireless network Active CN103716399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310746643.6A CN103716399B (en) 2013-12-30 2013-12-30 Remote interaction fruit picking cooperative asynchronous control system and method based on wireless network


Publications (2)

Publication Number Publication Date
CN103716399A true CN103716399A (en) 2014-04-09
CN103716399B CN103716399B (en) 2016-08-17

Family

ID=50408969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310746643.6A Active CN103716399B (en) Remote interaction fruit picking cooperative asynchronous control system and method based on wireless network

Country Status (1)

Country Link
CN (1) CN103716399B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008037035A1 (en) * 2006-09-28 2008-04-03 Katholieke Universiteit Leuven Autonomous fruit picking machine
CN101273688A (en) * 2008-05-05 2008-10-01 江苏大学 Apparatus and method for flexible pick of orange picking robot
CN101807247A (en) * 2010-03-22 2010-08-18 中国农业大学 Fine-adjustment positioning method of fruit and vegetable picking point
CN102914967A (en) * 2012-09-21 2013-02-06 浙江工业大学 Autonomous navigation and man-machine coordination picking operating system of picking robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Shenhui: "Research on Binocular Localization Technology in Robotic Tomato Picking", China Excellent Master's Theses Full-text Database (Electronic Journal) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732550A (en) * 2015-04-08 2015-06-24 吴春光 Electronic automatic picking platform for pomegranate trees
CN107247634A (en) * 2017-06-06 2017-10-13 广州视源电子科技股份有限公司 Method and device for dynamic asynchronous remote process call of robot
CN107608525A (en) * 2017-10-25 2018-01-19 河北工业大学 VR interacts mobile platform system
CN107608525B (en) * 2017-10-25 2024-02-09 河北工业大学 VR interactive mobile platform system
CN108550141A (en) * 2018-03-29 2018-09-18 上海大学 A kind of movement wagon box automatic identification and localization method based on deep vision information
CN108811766A (en) * 2018-07-06 2018-11-16 常州大学 A kind of man-machine interactive fruits and vegetables of greenhouse harvesting robot system and its collecting method
CN109566092A (en) * 2019-01-24 2019-04-05 西北农林科技大学 A kind of fruit harvest machine people's control system for contest
CN109566092B (en) * 2019-01-24 2024-05-31 西北农林科技大学 Fruit harvesting robot control system for competition
CN111201922A (en) * 2020-02-27 2020-05-29 扬州大学 Facility fence frame type fruit and vegetable vine and fruit-bearing regulation and control device
CN113093156A (en) * 2021-03-12 2021-07-09 昆明物理研究所 Multi-optical-axis calibration system and method for LD laser range finder
CN113093156B (en) * 2021-03-12 2023-10-27 昆明物理研究所 Multi-optical axis calibration system and method for LD laser range finder
CN114770559A (en) * 2022-05-27 2022-07-22 中迪机器人(盐城)有限公司 Fetching control system and method of robot

Also Published As

Publication number Publication date
CN103716399B (en) 2016-08-17


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant