CN109588114B - Parallel recognition picking system and method applied to fruit picking robot - Google Patents

Info

Publication number
CN109588114B
CN109588114B (application CN201811562710.8A)
Authority
CN
China
Prior art keywords
image
module
data
image data
picking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811562710.8A
Other languages
Chinese (zh)
Other versions
CN109588114A (en)
Inventor
王浩
邹光明
王炯
王欣
张晓寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN201811562710.8A priority Critical patent/CN109588114B/en
Publication of CN109588114A publication Critical patent/CN109588114A/en
Application granted granted Critical
Publication of CN109588114B publication Critical patent/CN109588114B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A01AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01DHARVESTING; MOWING
    • A01D46/00Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D46/30Robotic devices for individually picking crops
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/68Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Robotics (AREA)
  • Environmental Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a parallel recognition picking system and a parallel recognition picking method applied to a fruit picking robot, wherein the parallel recognition picking system comprises an upper computer and a lower computer; the upper computer comprises an image matching identification module, an image segmentation module, a data processing module, a path planning decision module and a first communication module; the upper computer is used for receiving the images uploaded by the lower computer, performing image matching identification, segmentation, picking position positioning and path planning decision making, and feeding back the path planning decision to the lower computer; the lower computer comprises a video image acquisition module, a second communication module and a control module, and is used for acquiring images, uploading the images to the upper computer and receiving information fed back by the upper computer to realize walking control, mechanical arm space positioning control and end effector grabbing control of the picking robot; each module is provided with an independent calculation part, so that corresponding control work can be independently completed, and the cooperative processing of each module can be realized. The invention has good expansion performance and good parallelization effect.

Description

Parallel recognition picking system and method applied to fruit picking robot
Technical Field
The invention belongs to the technical field of image recognition, relates to a parallel recognition method, and particularly relates to a parallel recognition picking system and method applied to a fruit picking robot.
Background
At present, although picking robots are applied more and more widely, their recognition-and-picking technology has difficulty realizing a parallelization strategy, so the picking process is slow and hard to put into real production; neither the software modules (prediction-model decision tree, image segmentation and matching) nor the control part (walking device, mechanical arm positioning and end effector) can realize a parallelization strategy.
Disclosure of Invention
In order to solve the technical problems, the invention provides a parallel recognition picking system and a parallel recognition picking method applied to a fruit picking robot.
The technical scheme adopted by the system of the invention is as follows: a parallel recognition picking system applied to a fruit picking robot, characterized in that it comprises an upper computer and a lower computer;
the upper computer comprises an image matching identification module, an image segmentation module, a data processing module, a path planning decision module and a first communication module; the upper computer is used for receiving the images uploaded by the lower computer, performing image matching identification, segmentation, picking position positioning and path planning decision making, and feeding back the path planning decision to the lower computer; each module is provided with an independent calculation part, can independently complete corresponding control work, and can realize the cooperative processing of each module;
the lower computer comprises a video image acquisition module, a second communication module and a control module, and is used for acquiring images, uploading the images to the upper computer and receiving information fed back by the upper computer to realize walking control of the picking robot, mechanical arm space positioning control and end effector grabbing control; each module is provided with an independent calculation part, so that corresponding control work can be independently completed, and the cooperative processing of each module can be realized.
The method adopts the technical scheme that: a parallel recognition picking method applied to a fruit picking robot is characterized by comprising the following steps:
step 1: the upper computer receives the image collected by the lower computer video image collecting module;
step 2: the image matching identification module carries out image matching identification;
step 3: the image segmentation module segments the matched images;
step 4: the path planning decision module carries out path decision and planning for the picking robot;
step 5: the data processing module feeds back the path decision and planning to the lower computer through the first communication module;
step 6: the lower computer receives the information fed back by the upper computer through the second communication module;
step 7: the control module controls the picking robot to execute the path decision and planning.
Compared with the prior art, the invention has the following characteristics and beneficial effects:
1. good expandability: the number of independent individual robots formed from lower-computer units can be set flexibly according to working cost and task requirements;
2. clear structure, easy to design and build;
3. centralized data volume and a good software parallelization effect;
4. complete building modules: different modules can be called for picking different fruits, giving good universality.
Drawings
Fig. 1 is a parallel technical route diagram of a picking robot according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of an embodiment of the present invention;
FIG. 3 is a schematic diagram of a matching method according to an embodiment of the present invention;
FIG. 4 is a flow chart of a matching method according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the determination of matched images before segmentation according to an embodiment of the present invention;
FIG. 6 is a flowchart of an image segmentation method according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating the partitioning of data and task allocation scheduling via a decision tree according to an embodiment of the present invention;
fig. 8 is a parallel flowchart of the embodiment of the present invention, in which s1, s2, and s3 are logic control circuit switches of the lower computer, respectively, and L1, L2, and L3 are signal lights for turning on the circuit, respectively.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, the parallel recognition picking system applied to a fruit picking robot provided by the invention comprises an upper computer and a lower computer;
the upper computer comprises an image matching identification module, an image segmentation module, a data processing module, a path planning decision module and a first communication module; the upper computer is used for receiving the images uploaded by the lower computer, performing image matching identification, segmentation, picking position positioning and path planning decision making, and feeding back the path planning decision to the lower computer; each module is provided with an independent calculation part, can independently complete corresponding control work, and can realize the cooperative processing of each module;
the lower computer comprises a video image acquisition module, a second communication module and a control module, and is used for acquiring images, uploading the images to the upper computer and receiving information fed back by the upper computer to realize walking control, mechanical arm space positioning control and end effector grabbing control of the picking robot; each module is provided with an independent calculation part, so that corresponding control work can be independently completed, and the cooperative processing of each module can be realized.
Referring to fig. 2, the invention further provides a parallel recognition picking method applied to a fruit picking robot, comprising the following steps:
step 1: the upper computer receives the image collected by the lower computer video image collecting module;
step 2: the image matching identification module carries out image matching identification;
referring to fig. 3 and 4, the image matching identification module of the embodiment matches the template against the image acquired by the lower-computer video image acquisition module by sliding a search window over the image: a region is considered to match the template when the magnitude relationship between corresponding pixels is consistent with the template; conversely, once this condition is not met, the region is considered not to match the template; when the image data match, the data are stored in the data memory; when they do not match, the data are deleted and the image matching process is executed again;
dividing the image data result after image matching into three categories: image data a, image data B, and image data C; wherein, the image data A is an interference image, the image data B is a navigation chart, and the image data C is a fruit chart;
respectively performing one-dimensional projection on a two-dimensional image to an X axis and a Y axis by a rapid one-dimensional projection template matching method, converting a one-dimensional projection value of the two-dimensional image to a character string which is formed by a group of 0 and 1 numbers and used for describing image characteristics by a one-dimensional difference quantization method, and matching the character string with the template;
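As a rough sketch of this coarse matching step (the function names and the exact quantization rule are illustrative assumptions, not the patented formulas), the two one-dimensional projections and the 0/1 difference quantization could look like:

```python
import numpy as np

def projection_signature(img):
    """Project a 2-D gray image onto the X and Y axes, then quantize the
    differences of each 1-D profile into a string of 0s and 1s
    (1 = increasing, 0 = otherwise)."""
    img = np.asarray(img, dtype=float)
    proj_x = img.sum(axis=0)  # one-dimensional projection onto the X axis
    proj_y = img.sum(axis=1)  # one-dimensional projection onto the Y axis
    sig = ''
    for profile in (proj_x, proj_y):
        diff = np.diff(profile)  # one-dimensional difference
        sig += ''.join('1' if d > 0 else '0' for d in diff)
    return sig  # character string describing the image features

def coarse_match(image_sig, template_sig):
    """Logic judgment parameter i: 1 when the strings match, else 0."""
    return 1 if image_sig == template_sig else 0
```

A template processed the same way yields its own feature string, so the coarse match reduces to a string comparison.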
when the template character string is matched with the image characteristic character string, reserving the image, wherein the image is image data B or image data C, and the logic judgment parameter i is 1; when the template character string is not matched with the image characteristic character string, reserving the image, namely image data A, and taking a logic judgment parameter i as 0;
then, carrying out fine matching on the image data B or the image data C through the feature vectors, and further classifying the images; using a feature vector b and a feature vector c, wherein the feature vector b is used for matching the navigation map and the feature vector c is used for matching the fruit map; traversing each image by using the feature vector b and the feature vector c respectively;
traversing the image by the feature vector b, and when the navigation feature vector b is matched with the image, taking the logic judgment parameter j as 1;
traversing the image by the feature vector c, and when the fruit feature vector c is matched with the image, taking a logic judgment parameter n as 1;
through the first-level logic judgment, when the judgment parameter i is equal to 1, the image data is determined to be matched and relevant and is classified as image data B or image data C, and when the judgment parameter i is equal to 0, the image data is determined to be matched and irrelevant and is classified as image data A; and performing two-stage logic judgment, and when the judgment parameter j is equal to 1, determining that the image data is matched and classified as image data B, and when the judgment parameter n is equal to 1, determining that the image data is matched and classified as image data C.
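The two-level logic judgment with parameters i, j and n amounts to a small decision function; the sketch below takes the already-computed parameters as inputs, and the fall-through when neither j nor n fires is an assumption of this sketch, not stated in the text:

```python
def classify_image(i, j, n):
    """Two-level logic judgment from the matching stage:
    level 1: i == 0 -> matched-irrelevant, interference image (data A);
    level 2: j == 1 -> navigation map (data B);
             n == 1 -> fruit map (data C)."""
    if i == 0:
        return 'A'   # interference image
    if j == 1:
        return 'B'   # navigation feature vector b matched
    if n == 1:
        return 'C'   # fruit feature vector c matched
    return 'A'       # coarse match but no fine match: treated as interference here
```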
Step 3: the image segmentation module segments the matched images;
referring to fig. 5 and 6, the specific implementation of step 3 includes the following sub-steps:
step 3.1: each matched image is judged before segmentation to determine whether it is a target to be processed; a logical value of 1 means it is the required processing object, and a logical value of 0 means it is not;
step 3.2: for each image that needs to be processed, the gray information of the image is optimized; the gray value of pixel (i, j) is G_{ij}, with 0 ≤ G_{ij} ≤ 255;
In the gray-data optimization, assume the image has n data points {x_1, x_2, x_3, ..., x_n} and is to be divided into 3 clusters; clustering requires minimizing the image objective function J:

J = \sum_{n=1}^{N} \sum_{k=1}^{K} T_{nk} \lVert x_n - \mu_k \rVert^2

When the data are classified by the objective function J, the relation variable T_{nk} is taken as 1 when data point x_n is assigned to cluster k, and as 0 otherwise.

When J is minimal, the cluster centers satisfy:

\mu_k = \frac{\sum_n T_{nk} x_n}{\sum_n T_{nk}}

that is, \mu_k is the average of all data points in cluster k. Since each iteration takes the minimum of J, J only decreases or stays unchanged and never increases; after repeated iterations, the 3 clustering points of the gray image are obtained: \mu_{k1}, \mu_{k2}, \mu_{k3}, with \mu_{k1} < \mu_{k2} < \mu_{k3}.
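A minimal sketch of this clustering iteration on gray values (the initialization and fixed iteration count are assumptions of the sketch; the text above only fixes the objective J and the update of μ_k):

```python
import numpy as np

def kmeans_gray(gray_values, k=3, iters=50):
    """Minimize J = sum_n sum_k T_nk * (x_n - mu_k)^2 by alternating the
    assignment (T_nk) and the center update (mu_k = cluster mean).
    Returns the k centers sorted ascending (mu_k1 <= mu_k2 <= mu_k3)."""
    x = np.asarray(gray_values, dtype=float).ravel()
    mu = np.linspace(x.min(), x.max(), k)  # spread the initial centers
    for _ in range(iters):
        # T_nk assignment: each point goes to its nearest center
        labels = np.argmin(np.abs(x[:, None] - mu[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                mu[c] = x[labels == c].mean()  # mu_k = mean of cluster k
    return np.sort(mu)
```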
Step 3.3: inputting n data points X1、X2、…、XnThe number k of clusters to be clustered; obtaining K clustering centers, screening the clustering centers to obtain three optimal clustering values Ck1、Ck2、Ck3Selecting (C)k1,Ck3) Performing threshold segmentation on the image for the optimized gray scale range;
step 3.4: segment the image with a threshold, comparing candidate thresholds to judge whether the optimal segmentation threshold has been reached; the initial threshold is σ, which is compared with the cluster center C_{k2} to obtain a judgment value w; the decision is made through a logical relationship: if the logical value is 1, segmentation is performed with σ, and if it is 0, segmentation is performed with the optimized threshold;
in this embodiment, within the image gray range [0, 255], the pixel cluster centers obtained above restrict the gray range considered to [\mu_{k1}, \mu_{k3}].

A threshold w divides the pixels of the selected range into two classes C_0 and C_1: C_0 consists of the pixels with gray values in [\mu_{k1}, w], and C_1 of the pixels with gray values in [w, \mu_{k3}]. The average gray levels of C_0 and C_1 are \mu_0 and \mu_1; P_0 is the probability of region C_0 and P_1 is the probability of region C_1.

\mu is the average gray level of the image over [\mu_{k1}, \mu_{k3}]:

\mu = P_0 \mu_0 + P_1 \mu_1

\sigma^2 = P_0 (\mu_0 - \mu)^2 + P_1 (\mu_1 - \mu)^2 = P_0 P_1 (\mu_0 - \mu_1)^2

Let K take values in turn over the interval [\mu_{k1}, \mu_{k3}]; the value of K that maximizes \sigma^2 is the optimal region segmentation threshold.
The obtained K value is then compared with the cluster value \mu_{k2}; if the formula

w = \lvert K - \mu_{k2} \rvert \le \sigma

is satisfied, the requirement of the segmentation-range threshold is met: when w \le \sigma, the logical value 1 is assigned and \mu_{k2} is taken as the optimized segmentation threshold; otherwise 0 is assigned and K is taken as the optimal segmentation threshold.
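The search for the K that maximizes σ² over [μ_{k1}, μ_{k3}] can be sketched as an exhaustive scan over raw pixel values (a simplified sketch; a real implementation would work on the gray-level histogram for speed):

```python
import numpy as np

def optimal_threshold(gray, lo, hi):
    """Scan candidate thresholds K in [lo, hi] and maximize the
    between-class variance sigma^2 = P0 * P1 * (mu0 - mu1)^2,
    restricted to the clustered gray range [mu_k1, mu_k3]."""
    x = np.asarray(gray, dtype=float).ravel()
    x = x[(x >= lo) & (x <= hi)]
    best_k, best_var = lo, -1.0
    for K in range(int(lo) + 1, int(hi)):
        c0, c1 = x[x <= K], x[x > K]      # the two classes C0 and C1
        if c0.size == 0 or c1.size == 0:
            continue
        p0, p1 = c0.size / x.size, c1.size / x.size
        var = p0 * p1 * (c0.mean() - c1.mean()) ** 2
        if var > best_var:
            best_k, best_var = K, var
    return best_k
```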
Step 3.5: store the segmented image data in the data memory; each segmented image contains at least three kinds of image information, classified as data information 1: fruit coordinates, data information 2: fruit number, data information 3: fruit size; the three kinds of data information are stored separately and kept in a database for subsequent retrieval by the machine.
Step 4: the path planning decision module carries out path decision and planning for the picking robot;
in the embodiment, after the segmented image data is classified and learned, the data is partitioned and task distribution scheduling is performed through the decision tree, and the decision and planning of the next action path of the picking robot are determined.
Referring to fig. 7, when the image data B is segmented, navigation parameters can be obtained: a transverse deviation lambda and a navigation deviation theta; when the image data C is subjected to identification and segmentation, the fruit sequence number n, the fruit radius r and the fruit coordinates (x, y, z) can be obtained;
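How the transverse deviation λ and navigation deviation θ are computed from the segmented navigation map is not detailed here; one common approach, assumed purely for illustration (not the patented method), fits a line to the segmented path pixels and reads off the lateral offset and heading angle:

```python
import numpy as np

def navigation_parameters(path_points, image_width):
    """Fit x = a*y + b to path pixel coordinates (x, y); the transverse
    deviation lambda is the fitted line's offset from the image center
    at the bottom row, and the navigation deviation theta is atan(a)."""
    pts = np.asarray(path_points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a, b = np.polyfit(y, x, 1)                    # least-squares line fit
    y_bottom = y.max()                            # row closest to the robot
    lam = (a * y_bottom + b) - image_width / 2.0  # transverse deviation
    theta = np.arctan(a)                          # heading deviation (rad)
    return lam, theta
```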
Step 5: the data processing module feeds back the path decision and planning to the lower computer through the first communication module;
step 6: the lower computer receives the information fed back by the upper computer through the second communication module;
step 7: the control module controls the picking robot to execute the path decision and planning.
Referring to fig. 8, the specific implementation of step 7 includes the following steps:
step 7.1: the transfer of images from the lower-computer video image acquisition module to the upper computer is controlled by the control circuit S0; when image acquisition is finished, a low-level signal 0 is transmitted, S0 is switched on, lamp L0 lights up, and picture information begins to be transmitted; when the image transmission is finished, a high-level signal 1 is received and S0 is switched off;
step 7.2: S1 is switched on and lamp L1 lights up; the upper computer transmits control information to the walking device; when execution of the control instruction is finished, a high-level signal 1 is received and S1 is switched off;
step 7.3: S2 is switched on and lamp L2 lights up; the upper computer transmits control information to position the mechanical arm; when execution of the control instruction is finished, a high-level signal 1 is received and S2 is switched off;
step 7.4: S3 is switched on and lamp L3 lights up; the upper computer transmits control information to the end effector; when execution of the control instruction is finished, a high-level signal 1 is received and S3 is switched off;
step 7.5: S0, S1, S2 and S3 are switched on and lamps L0, L1, L2 and L3 light up; after each control instruction finishes executing, a high-level signal 1 is received for it, S0, S1, S2 and S3 are switched off, and lamps L0, L1, L2 and L3 go out;
step 7.6: S0, S1 and S3 are switched on and lamps L0, L1 and L3 light up; after the control instructions finish executing, a high-level signal 1 is received for each, S0, S1 and S3 are switched off, and lamps L0, L1 and L3 go out;
step 7.7: S0, S2 and S3 are switched on and lamps L0, L2 and L3 light up; after the control instructions finish executing, a high-level signal 1 is received for each, S0, S2 and S3 are switched off, and lamps L0, L2 and L3 go out;
step 7.8: S0 and S3 are switched on and lamps L0 and L3 light up; after the control instructions finish executing, a high-level signal 1 is received for each, S0 and S3 are switched off, and lamps L0 and L3 go out;
here S0 is the control switch of the image transmission circuit and L0 its on-indicator lamp; S1 is the control switch of the walking device circuit and L1 its indicator lamp; S2 is the control switch of the mechanical arm spatial positioning circuit and L2 its indicator lamp; S3 is the control switch of the end effector circuit and L3 its indicator lamp.
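The handshake in steps 7.1 to 7.8 is essentially: close a switch, light its lamp, wait for the high-level signal 1, open the switch. A toy sketch of the parallel case (class and method names are invented for illustration; the patent describes hardware circuits, not software objects):

```python
class CircuitSwitch:
    """One control circuit: switch Sx with indicator lamp Lx."""
    def __init__(self, name):
        self.name = name
        self.closed = False   # switch state; the lamp is lit iff closed

    def start(self):
        self.closed = True    # low-level signal 0: switch on, lamp lights

    def signal_done(self):
        self.closed = False   # high-level signal 1: switch off, lamp out

def run_in_parallel(switches):
    """Steps 7.5-7.8: several circuits run at once, and each is released
    independently when its own completion signal 1 arrives."""
    for s in switches:
        s.start()
    lit = [s.name for s in switches if s.closed]   # lamps currently on
    for s in switches:
        s.signal_done()       # each '1' arrives in its own time
    return lit
```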
In this embodiment, the fruit picking robot picks apples from apple trees on a farm. The video image acquisition module obtains two kinds of image information: one image is the orchard route map and the other is the fruit information map. On one fruit tree there are 5 single fruits, 1 pair of joined fruits and 2 fruits occluded by branches and leaves, and one further fruit is out of picking range. The robot makes the corresponding decision and plan from the acquired image information.
In actual use: the image transmission circuit S0 is switched on, L0 lights up, and the image information is transmitted and acquired; image matching classifies it into three categories, where data A is the interference image, data B is the navigation map and data C is the fruit map. The image segmentation method processes the navigation map to obtain the navigation information, which is transmitted to the lower computer; the walking device circuit S1 is then switched on, L1 lights up, and the robot walks to the side of the fruit tree. The image segmentation method processes the fruit map to obtain the fruit information; the image algorithm yields the spatial position information of the single fruits, the joined fruits, the fruits occluded by branches and leaves and the fruits that cannot be picked, and this information is transmitted to the mechanical arm spatial positioning device; the mechanical arm spatial positioning circuit S2 is switched on, L2 lights up, and the mechanical arm is positioned. When the mechanical arm reaches the picking position, the end effector circuit S3 is switched on, L3 lights up, and the fruit is picked. Of course, the circuits S0, S1, S2 and S3 may also each be controlled in parallel, by a process similar to the serial control described above.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A parallel recognition picking method applied to a fruit picking robot adopts a parallel recognition picking system applied to the fruit picking robot;
the method is characterized in that: the system comprises an upper computer and a lower computer;
the upper computer comprises an image matching identification module, an image segmentation module, a data processing module, a path planning decision module and a first communication module; the upper computer is used for receiving the images uploaded by the lower computer, performing image matching identification, segmentation, picking position positioning and path planning decision making, and feeding back the path planning decision to the lower computer; each module is provided with an independent calculation part, can independently complete corresponding control work, and can realize the cooperative processing of each module;
the lower computer comprises a video image acquisition module, a second communication module and a control module, and is used for acquiring images, uploading the images to the upper computer and receiving information fed back by the upper computer to realize walking control of the picking robot, mechanical arm space positioning control and end effector grabbing control; each module is provided with an independent calculation part, can independently complete corresponding control work, and can realize the cooperative processing of each module;
the method comprises the following steps:
step 1: the upper computer receives the image collected by the lower computer video image collecting module;
step 2: the image matching identification module carries out image matching identification;
step 3: the image segmentation module segments the matched images;
step 4: the path planning decision module carries out path decision and planning for the picking robot;
step 5: the data processing module feeds back the path decision and planning to the lower computer through the first communication module;
step 6: the lower computer receives the information fed back by the upper computer through the second communication module;
step 7: the control module controls the picking robot to execute path decision and planning;
in step 2, the image matching identification module matches the template with the image acquired by the lower computer video image acquisition module, and the size relation between corresponding pixels is consistent with the template through matching between the search window and the template; conversely, once this condition is not met, it is considered to be not matched with its template; when the image data are matched, the data are stored in the data memory, and when the image data are not matched, the data are deleted, and the image matching process is executed again;
dividing the image data result after image matching into three categories: image data a, image data B, and image data C; wherein, the image data A is an interference image, the image data B is a navigation chart, and the image data C is a fruit chart;
respectively performing one-dimensional projection on a two-dimensional image to an X axis and a Y axis by a rapid one-dimensional projection template matching method, converting a one-dimensional projection value of the two-dimensional image to a character string which is formed by a group of 0 and 1 numbers and used for describing image characteristics by a one-dimensional difference quantization method, and matching the character string with the template;
when the template character string is matched with the image characteristic character string, reserving the image, wherein the image is image data B or image data C, and the logic judgment parameter i is 1; when the template character string is not matched with the image characteristic character string, reserving the image, namely image data A, and taking a logic judgment parameter i as 0;
then, carrying out fine matching on the image data B or the image data C through the feature vectors, and further classifying the images; using a feature vector b and a feature vector c, wherein the feature vector b is used for matching the navigation map and the feature vector c is used for matching the fruit map; traversing each image by using the feature vector b and the feature vector c respectively;
traversing the image by the feature vector b, and when the navigation feature vector b is matched with the image, taking the logic judgment parameter j as 1;
traversing the image by the feature vector c, and when the feature vector c is matched with the image, taking a logic judgment parameter n as 1;
through the first-level logic judgment, when the judgment parameter i is equal to 1, the image data is determined to be matched and relevant and is classified as image data B or image data C, and when the judgment parameter i is equal to 0, the image data is determined to be matched and irrelevant and is classified as image data A; and performing two-stage logic judgment, and when the judgment parameter j is equal to 1, determining that the image data is matched and classified as image data B, and when the judgment parameter n is equal to 1, determining that the image data is matched and classified as image data C.
2. Parallel recognition picking method applied to fruit picking robots according to claim 1, characterized in that the specific implementation of step 3 comprises the following sub-steps:
step 3.1: judging the matched image before segmentation to determine whether the matched image is a target to be processed; when the logic relation is 1, the processing object is the required processing object, and when the logic relation is 0, the processing object is not the required processing object;
step 3.2: performing gray processing on the image to be processed and optimizing the gray data of the image, wherein the gray value of pixel (i, j) is G_ij, with 0 ≤ G_ij ≤ 255;
Step 3.3: inputting n data points X1、X2、…、XnThe number k of clusters to be clustered; obtaining K clustering centers, screening the clustering centers to obtain three optimal clustering values Ck1、Ck2、Ck3Selecting (C)k1,Ck3) Performing threshold segmentation on the image for the optimized gray scale range;
step 3.4: segmenting the image with a threshold and comparing thresholds to determine the optimal segmentation threshold, wherein the initial threshold is σ; comparing the initial threshold with the clustering center C_k2 to obtain a judgment value w, and judging through the logical relationship: if the logical relationship is 1, segmenting with σ, and if the logical relationship is 0, segmenting with the optimized threshold;
step 3.5: storing the segmented image data in a data memory, wherein the segmented image contains at least three kinds of image information, classified as data information 1: fruit coordinates; data information 2: fruit number; data information 3: fruit size; the three kinds of data information are stored separately and retained in a database for subsequent data calls by the machine.
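Step 3.5 stores three kinds of information per segmented image; a small sketch of that bookkeeping follows, assuming the segmentation result arrives as a binary mask and "fruit size" means pixel area (the flood-fill labelling and the dictionary layout are illustrative, not the patent's storage format):

```python
import numpy as np

def catalog_fruits(mask: np.ndarray) -> dict:
    """From a binary segmentation mask, derive the three stored data
    classes: fruit coordinates (centroids), fruit number, fruit size
    (pixel area). A 4-connected flood fill stands in for labelling."""
    mask = mask.astype(bool).copy()
    h, w = mask.shape
    coords, sizes = [], []
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                stack, pixels = [(i, j)], []
                mask[i, j] = False
                while stack:                      # flood-fill one fruit region
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                            mask[ny, nx] = False
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                coords.append((sum(ys) / len(ys), sum(xs) / len(xs)))
                sizes.append(len(pixels))
    return {"coordinates": coords, "number": len(coords), "sizes": sizes}
```

The returned dictionary corresponds to data information 1, 2, and 3; in the patent these would be written to the database for later calls by the machine.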
3. Parallel recognition picking method applied to fruit picking robots according to claim 2, characterized in that: the gray data is optimized in step 3.2, assuming that the image has n data points {x_1, x_2, x_3, …, x_n}; the image is divided into 3 clusters, and the clustering requires minimizing the objective function J of the image;
J = Σ_{n=1}^{N} Σ_{k=1}^{3} T_nk · (x_n − μ_k)²
when the data are classified by the objective function J, the data relation variable T_nk takes the value 1 when data point n is assigned to cluster k, and 0 otherwise;
when J is minimum, the formula is satisfied:
μ_k = ( Σ_n T_nk · x_n ) / ( Σ_n T_nk )
the value of μ_k is the average of all the data points in cluster k; since each iteration minimizes J, J only decreases or remains unchanged and never increases, and through repeated iterations the 3 clustering points μ_k1, μ_k2, μ_k3 of the gray-level image are obtained, with μ_k1 < μ_k2 < μ_k3;
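The iteration in claim 3 is standard hard k-means on one-dimensional gray values; a sketch follows, under the assumption of evenly spaced initial centres (initialization is not specified in the patent):

```python
import numpy as np

def kmeans_gray(x: np.ndarray, k: int = 3, iters: int = 50) -> np.ndarray:
    """Lloyd iteration minimising J = sum_n sum_k T_nk * (x_n - mu_k)^2
    on 1-D gray values; T_nk is the hard-assignment indicator."""
    mu = np.linspace(x.min(), x.max(), k)        # assumed initial centres
    for _ in range(iters):
        # hard assignment: each point goes to its nearest centre
        labels = np.argmin(np.abs(x[:, None] - mu[None, :]), axis=1)
        # recompute each centre as the mean of its cluster
        new_mu = np.array([x[labels == j].mean() if np.any(labels == j)
                           else mu[j] for j in range(k)])
        if np.allclose(new_mu, mu):
            break                                # J can no longer decrease
        mu = new_mu
    return np.sort(mu)                           # mu_k1 <= mu_k2 <= mu_k3
```

Each Lloyd step reassigns points and recomputes centres, so J never increases, matching the monotonicity stated in the claim.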
4. Parallel recognition picking method applied to fruit picking robots according to claim 3, characterized in that: in step 3.4, from the pixel cluster centers obtained over the image gray-level range [0, 255], the image gray-scale range is taken as [μ_k1, μ_k3];
the pixels in the selected image are divided by a threshold w into two classes C_0 and C_1, where C_0 is composed of the pixels with gray values in [μ_k1, w] and C_1 is composed of the pixels with gray values in [w, μ_k3]; the average gray levels of the regions C_0 and C_1 are calculated as μ_0 and μ_1; P_0 is the probability of the C_0 region and P_1 is the probability of the C_1 region;
μ is the average gray level of the image over [μ_k1, μ_k3]:
μ = P_0·μ_0 + P_1·μ_1
σ² = P_0(μ_0 − μ)² + P_1(μ_1 − μ)² = P_0·P_1(μ_0 − μ_1)²
let K take values in turn over the interval [μ_k1, μ_k3]; the value of K that maximizes σ² is the optimal region segmentation threshold.
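Claim 4 is the Otsu criterion restricted to [μ_k1, μ_k3]; the sketch below scans K over that interval and keeps the K maximizing σ² = P_0·P_1(μ_0 − μ_1)². Integer gray levels and inclusive bounds are assumptions:

```python
import numpy as np

def otsu_in_range(gray: np.ndarray, lo: int, hi: int) -> int:
    """Scan K over [mu_k1, mu_k3] and return the K maximising the
    between-class variance sigma^2 = P0 * P1 * (mu0 - mu1)^2."""
    pixels = gray[(gray >= lo) & (gray <= hi)].astype(float)
    best_k, best_var = lo, -1.0
    for k in range(lo + 1, hi + 1):
        c0, c1 = pixels[pixels < k], pixels[pixels >= k]   # classes C0, C1
        if c0.size == 0 or c1.size == 0:
            continue                                       # degenerate split
        p0, p1 = c0.size / pixels.size, c1.size / pixels.size
        var = p0 * p1 * (c0.mean() - c1.mean()) ** 2       # sigma^2
        if var > best_var:
            best_var, best_k = var, k
    return best_k
```

Restricting the scan to the cluster range shrinks the search compared with the full [0, 255] sweep, which is the apparent point of combining clustering with thresholding here.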
5. Parallel recognition picking method applied to fruit picking robots according to claim 4, characterized in that: in step 3.4, the obtained K value is compared with the clustered value μ_k2, and if the formula is satisfied:
w = |K − μ_k2|
the requirement of the segmentation-range threshold is met;
if w ≤ σ, the value 1 is assigned and μ_k2 is taken as the optimized segmentation threshold;
otherwise, the value 0 is assigned and K is taken as the optimal segmentation threshold.
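A sketch of the claim-5 decision; the formula image is not reproduced in the text, so w = |K − μ_k2| is an inference from the surrounding definitions and should be read as an assumption:

```python
def choose_threshold(k_value: float, mu_k2: float, sigma: float):
    """Claim-5 decision. w = |K - mu_k2| is an assumed reading of the
    patent's formula image. Returns (flag, chosen threshold)."""
    w = abs(k_value - mu_k2)
    if w <= sigma:
        return 1, mu_k2      # flag 1: take mu_k2 as optimized threshold
    return 0, k_value        # flag 0: take K as optimal threshold
```
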
6. Parallel recognition picking method applied to fruit picking robots according to claim 1, characterized in that: in step 4, after the segmented image data are classified and learned, the data are partitioned and tasks are allocated and scheduled through a decision tree, which determines the decision and planning of the picking robot's next action path.
7. Parallel recognition picking method applied to fruit picking robots according to any of the claims 1 to 6, characterized in that the specific implementation of step 7 comprises the following steps:
step 7.1: the process in which the video image acquisition module of the lower computer transmits images to the upper computer is controlled by the control circuit S0; when image acquisition is finished, a low-level signal 0 is transmitted, S0 is switched on, the lamp L0 lights up, and picture information begins to be transferred; when the image transfer is finished, a high-level signal 1 is received and S0 is switched off;
step 7.2: s1 is switched on, the lamp L1 emits light, the upper computer transmits control information to the walking device, and when the control instruction execution is finished, a high level signal 1 is received, and S1 is switched off;
step 7.3: s2 is switched on, a lamp L2 emits light, the upper computer transmits control information to position the mechanical arm, and a high level signal 1 is received when the execution of a control instruction is finished, and S2 is switched off at the moment;
step 7.4: s3 is switched on, a lamp L3 emits light, the upper computer transmits control information to the end effector, and when the control instruction execution is finished, a high level signal 1 is received, and S3 is switched off;
step 7.5: s0, S1, S2, S3 are turned on, lights L0, L1, L2, L3 are lighted, a high level signal 1 is received respectively after the completion of the execution of the control instruction, S0, S1, S2, S3 are turned off, and lights L0, L1, L2, L3 are turned off;
step 7.6: s0, S1, S3 are turned on, lights L0, L1, L3 are lighted, waiting for the completion of the execution of the control command, respectively receiving a high level signal 1, S0, S1, S3 are turned off, and lights L0, L1, L3 are turned off;
step 7.7: s0, S2, S3 are turned on, lights L0, L2, L3 are lighted, waiting for the completion of the execution of the control command, respectively receiving a high level signal 1, S0, S2, S3 are turned off, and lights L0, L2, L3 are turned off;
step 7.8: s0 and S3 are turned on, lamps L0 and L3 emit light, the control command is waited for execution completion, a high level signal 1 is received, S0 and S3 are turned off, and lamps L0 and L3 are turned off;
wherein S0 is the control switch of the image transmission circuit and L0 is the on-indicator lamp of the image transmission circuit; S1 is the control switch of the walking device circuit and L1 is the on-indicator lamp of the walking device circuit; S2 is the control switch of the mechanical arm spatial positioning circuit and L2 is the on-indicator lamp of the mechanical arm spatial positioning circuit; S3 is the control switch of the end effector circuit and L3 is the on-indicator lamp of the end effector circuit.
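The switch/lamp sequencing of steps 7.1 to 7.8 can be modelled as a tiny state machine; the `ControlLine` class and `run_sequence` helper are illustrative names, not part of the patent:

```python
class ControlLine:
    """One switch/lamp pair: switching on lights the lamp; receiving
    a high-level signal 1 switches off and extinguishes the lamp."""
    def __init__(self, name: str):
        self.name, self.on = name, False
    def switch_on(self):
        self.on = True            # lamp lit, transfer/actuation starts
    def receive(self, level: int):
        if level == 1:
            self.on = False       # high-level signal 1: done, switch off

def run_sequence(lines: dict, steps: list) -> list:
    """Replay step-7 style sequences: each step switches on the named
    lines, records which lamps are lit, then each line receives 1."""
    log = []
    for active in steps:
        for n in active:
            lines[n].switch_on()
        log.append([n for n in lines if lines[n].on])
        for n in active:
            lines[n].receive(1)
    return log

lines = {n: ControlLine(n) for n in ("S0", "S1", "S2", "S3")}
# steps 7.1-7.8: image transfer, walking, arm positioning, end effector,
# then the joint on/off combinations of steps 7.5-7.8
trace = run_sequence(lines, [("S0",), ("S1",), ("S2",), ("S3",),
                             ("S0", "S1", "S2", "S3"),
                             ("S0", "S1", "S3"),
                             ("S0", "S2", "S3"),
                             ("S0", "S3")])
```

Each entry of `trace` lists the lamps lit during that step, mirroring the L0 to L3 indications described above.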
CN201811562710.8A 2018-12-20 2018-12-20 Parallel recognition picking system and method applied to fruit picking robot Expired - Fee Related CN109588114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811562710.8A CN109588114B (en) 2018-12-20 2018-12-20 Parallel recognition picking system and method applied to fruit picking robot


Publications (2)

Publication Number Publication Date
CN109588114A CN109588114A (en) 2019-04-09
CN109588114B true CN109588114B (en) 2021-07-06

Family

ID=65964148


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110125036B (en) * 2019-04-25 2020-12-22 广东工业大学 Self-recognition sorting method based on template matching
CN112136505B (en) * 2020-09-07 2021-11-26 华南农业大学 Fruit picking sequence planning method based on visual attention selection mechanism
CN112528826B (en) * 2020-12-04 2024-02-02 江苏省农业科学院 Control method of picking device based on 3D visual perception

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101726251A (en) * 2009-11-13 2010-06-09 江苏大学 Automatic fruit identification method of apple picking robot on basis of support vector machine
CN102914967A (en) * 2012-09-21 2013-02-06 浙江工业大学 Autonomous navigation and man-machine coordination picking operating system of picking robot
CN103503637A (en) * 2013-10-14 2014-01-15 青岛农业大学 Intelligent identification picking robot and picking method
CN103529855A (en) * 2013-10-11 2014-01-22 华南农业大学 Rotary adjustable binocular vision target recognition and positioning device and application thereof in agricultural fruit harvesting machinery
CN103999635A (en) * 2014-05-21 2014-08-27 浙江工业大学 Intelligent automatic cutting type tea-leaf picker based on machine vision and working method
WO2016132264A1 (en) * 2015-02-22 2016-08-25 Ffmh-Tech Ltd. Multi-robot crop harvesting machine
CN106371446A (en) * 2016-12-03 2017-02-01 河池学院 Navigation and positioning system of indoor robot
CN108340374A (en) * 2018-02-08 2018-07-31 西北农林科技大学 A kind of control system and control method of picking mechanical arm


Non-Patent Citations (2)

Title
"A Rapid Recognition Method for Robotic Fruit Picking" (一种用于机器人水果采摘的快速识别方法); Zou Mi et al.; Journal of Agricultural Mechanization Research (农机化研究); No. 1, 2019-01-30; pp. 71-79 *
"Design and Experiment of a Novel Apple Picking Robot" (新型苹果采摘机器人的设计与试验); Wu Xiru et al.; Science Technology and Engineering (科学技术与工程); Vol. 16, No. 9, 2016-03-30; pp. 206-210 *


Similar Documents

Publication Publication Date Title
CN109588114B (en) Parallel recognition picking system and method applied to fruit picking robot
AU2020103613A4 (en) Cnn and transfer learning based disease intelligent identification method and system
US8423485B2 (en) Correspondence learning apparatus and method and correspondence learning program, annotation apparatus and method and annotation program, and retrieval apparatus and method and retrieval program
CN110021033B (en) Target tracking method based on pyramid twin network
CN110689539B (en) Workpiece surface defect detection method based on deep learning
KR20180004898A (en) Image processing technology and method based on deep learning
CN109299664B (en) Reordering method for pedestrian re-identification
CN112507896B (en) Method for detecting cherry fruits by adopting improved YOLO-V4 model
CN115147488B (en) Workpiece pose estimation method and grabbing system based on dense prediction
CN116563293B (en) Photovoltaic carrier production quality detection method and system based on machine vision
CN111091101A (en) High-precision pedestrian detection method, system and device based on one-step method
CN110281243B (en) Picking robot operation sequence planning method
CN111582395A (en) Product quality classification system based on convolutional neural network
CN113752255A (en) Mechanical arm six-degree-of-freedom real-time grabbing method based on deep reinforcement learning
CN115376125A (en) Target detection method based on multi-modal data fusion and in-vivo fruit picking method based on target detection model
CN117520933B (en) Environment monitoring method and system based on machine learning
CN108280516B (en) Optimization method for mutual-pulsation intelligent evolution among multiple groups of convolutional neural networks
CN114049621A (en) Cotton center identification and detection method based on Mask R-CNN
Hoang et al. Grasp configuration synthesis from 3D point clouds with attention mechanism
CN113721628A (en) Maze robot path planning method fusing image processing
CN112149727A (en) Green pepper image detection method based on Mask R-CNN
CN115187878A (en) Unmanned aerial vehicle image analysis-based blade defect detection method for wind power generation device
CN111259981B (en) Automatic classification system after remote sensing image processing
CN114782360A (en) Real-time tomato posture detection method based on DCT-YOLOv5 model
CN112487909A (en) Fruit variety identification method based on parallel convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210706

Termination date: 20211220