WO2019192402A1 - Plug-in method and plug-in device - Google Patents


Info

Publication number
WO2019192402A1
Authority
WO
WIPO (PCT)
Prior art keywords
current
pose
robot
coordinate
program module
Prior art date
Application number
PCT/CN2019/080453
Other languages
French (fr)
Chinese (zh)
Inventor
何德裕
朱文飞
彭显明
Original Assignee
鲁班嫡系机器人(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 鲁班嫡系机器人(深圳)有限公司
Priority to CN201980000632.9A (patent CN110463376B)
Publication of WO2019192402A1

Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K13/00Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components
    • H05K13/04Mounting of components, e.g. of leadless components

Definitions

  • the present invention relates to the field of automation technologies, and in particular, to an insertion method and an insertion device.
  • the automatic plug-in device is used to automatically insert the pins of the electronic component into the PCB board.
  • the current automatic plug-in device can operate in two working modes. One is blind insertion: the positions of the picked electronic component and of the PCB are pre-calculated, and insertion is achieved without visual guidance through mechanical precision alone, which demands high positional accuracy from the machine and from each component.
  • another way is a vision-based plug-in method, which adds visual guidance during the insertion process.
  • the plug-in based on the machine learning method can improve the accuracy and efficiency of the plug-in in various complicated environments.
  • a first aspect of the present invention provides an insertion method, the insertion method comprising:
  • a second aspect of the present invention provides an insertion method, the insertion method comprising:
  • a third aspect of the present invention provides an insertion method, the insertion method comprising:
  • a fourth aspect of the present invention provides an insertion method, the insertion method comprising:
  • calculating, based on the pre-trained NN model, the current amount of motion to be performed by the robot according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  • a fifth aspect of the present invention provides an insertion method, the insertion method comprising:
  • a sixth aspect of the present invention provides an insertion method, the insertion method comprising:
  • a seventh aspect of the present invention provides an insertion method, the insertion method comprising:
  • An eighth aspect of the present invention provides an insertion method, the insertion method comprising:
  • a ninth aspect of the present invention provides an insertion method, the insertion method comprising:
  • a tenth aspect of the present invention provides an insertion method, the insertion method comprising:
  • an eleventh aspect of the present invention provides a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements any of the plug-in methods described above.
  • a twelfth aspect of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor, when executing the computer program, implements any of the plug-in methods described above.
  • a thirteenth aspect of the present invention provides a plug-in device, where the plug-in device includes a first image sensor, a second image sensor, a robot, and a processor;
  • the processor respectively connects the first image sensor, the second image sensor, and the robot;
  • the first image sensor when in operation, acquires a first image including a pin and transmits the first image to the processor;
  • the robot, when working, sends the current information of its joints to the processor, moves by the current amount of motion under the control of the processor, and drives the pin into the target jack under the control of the processor;
  • the processor when operating, implements the plug-in method of any of the above first to fourth aspects; or
  • a fourteenth aspect of the present invention provides a plug-in device, the plug-in device including a third image sensor, a robot, and a processor;
  • the processor respectively connects the third image sensor and the robot
  • the third image sensor when in operation, acquires a third current image including a pin and a target jack, and transmits the third current image to the processor;
  • the robot, when working, sends the current information of its joints to the processor, moves by the current amount of motion under the control of the processor, and drives the pin into the target jack under the control of the processor;
  • the processor is configured to implement the plug-in method of any one of the above-mentioned fifth to eighth aspects, or the plug-in method of any of the twelfth to fourteenth aspects.
  • a fourteenth aspect of the present invention provides a plug-in device, wherein the plug-in device includes functional modules; for the function of each module, refer to the plug-in method above.
  • a fifteenth aspect of the present invention provides the method for acquiring a pre-trained NN model in the plug-in method of the first, second, fifth or sixth aspect, wherein the pre-trained NN model is obtained by the following method:
  • the initialized NN model is trained based on the training data and the tag data to obtain the pre-trained NN model.
  • a sixteenth aspect of the present invention provides the method for acquiring a pre-trained first CNN model in the plug-in method according to the second or third aspect above, wherein the pre-trained first CNN model is obtained by the following method :
  • the first CNN model outputs a relative pose or a pin pose for an input first image, and/or outputs a third pose or a third current pose for an input second image or second current image;
  • a seventeenth aspect of the present invention provides the method for acquiring a pre-trained third CNN model in the plug-in method according to the fourth aspect above, wherein the pre-trained third CNN model is obtained by:
  • the third CNN model outputs the current amount of motion to be implemented by the robot for an input first image, second image or second current image, and first current pose;
  • the initialized third CNN model is trained to acquire the pre-trained third CNN model based on the training data and the tag data.
  • the eighteenth aspect of the present invention provides the method for acquiring a pre-trained fourth CNN model in the plug-in method according to the sixth or seventh aspect above, wherein the pre-trained fourth CNN model is obtained by:
  • an initialized fourth CNN model, which, for an input third current image, outputs a second pose or a second current pose, and a third pose or a third current pose;
  • the initialized fourth CNN model is trained to acquire the pre-trained fourth CNN model based on the training data and the tag data.
  • a nineteenth aspect of the present invention provides the method for acquiring a pre-trained fifth CNN model in the plug-in method according to the eighth aspect, wherein the pre-trained fifth CNN model is obtained by:
  • an initialized fifth CNN model, which, for an input third current image and a first current pose, outputs the current amount of motion required by the robot;
  • a twentieth aspect of the present invention provides the method for acquiring a pre-trained NN model in the plug-in method of the ninth, tenth, twelfth or thirteenth aspect, wherein the pre-trained NN model is obtained by the following method:
  • the NN model outputs the current amount of motion to be implemented by the robot for an input first current pose, second coordinate or second current coordinate, and third coordinate or third current coordinate;
  • the initialized NN model is trained based on the training data and the tag data to obtain the pre-trained NN model.
  • the twenty-first aspect of the present invention provides the method for acquiring a pre-trained sixth CNN model in the plug-in method of the tenth or eleventh aspect, wherein the pre-trained sixth CNN model is obtained by the following method:
  • the sixth CNN model, for an input first image and/or second image or second current image, outputs the second coordinates and/or the third coordinates or third current coordinates;
  • the initialized sixth CNN model is trained to acquire the pre-trained sixth CNN model based on the training data and the tag data.
  • the twenty-second aspect of the present invention provides the method for acquiring a pre-trained seventh CNN model in the plug-in method of the tenth or eleventh aspect, wherein the pre-trained seventh CNN model is obtained by the following method:
  • the seventh CNN model is configured to output the third coordinate or the third current coordinate for the input second image or the second current image;
  • the initialized seventh CNN model is trained to acquire the pre-trained seventh CNN model based on the training data and the tag data.
  • the twenty-third aspect of the present invention provides the method for acquiring a pre-trained eighth CNN model in the plug-in method of the thirteenth or fourteenth aspect, wherein the pre-trained eighth CNN model is obtained by the following method:
  • the eighth CNN model, for an input third current image, outputs the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate;
  • the plug-in can be adapted to the case where the background environment is complicated, thereby improving the efficiency and accuracy of the plug-in work.
  • FIG. 1 is a first flow chart of an embodiment of an insertion method provided by the present invention.
  • FIG. 2 is a second flow chart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 3 is a flow chart of an embodiment of a pre-trained NN model acquisition method provided by the present invention.
  • FIG. 4 is a flow chart of an embodiment of a method for acquiring a pre-trained first CNN model provided by the present invention.
  • FIG. 5 is a flowchart of an embodiment of a method for acquiring a pre-trained second CNN model provided by the present invention.
  • FIG. 6 is a third flowchart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 7 is a fourth flowchart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 8 is a fifth flowchart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 9 is a sixth flowchart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 10 is a flowchart of an embodiment of a method for acquiring a pre-trained third CNN model provided by the present invention.
  • Figure 11 is a seventh flow chart of an embodiment of the plug-in method provided by the present invention.
  • FIG. 12 is an eighth flowchart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 13 is a flowchart of an embodiment of a method for acquiring a pre-trained fourth CNN model provided by the present invention.
  • Figure 14 is a ninth flow chart of an embodiment of the plug-in method provided by the present invention.
  • Figure 15 is a tenth flow chart of an embodiment of the plug-in method provided by the present invention.
  • Figure 16 is an eleventh flow chart of an embodiment of the plug-in method provided by the present invention.
  • Figure 17 is a twelfth flow chart of an embodiment of the plug-in method provided by the present invention.
  • FIG. 18 is a flowchart of an embodiment of a method for acquiring a pre-trained fifth CNN model provided by the present invention.
  • FIG. 19 is a first structural block diagram of an embodiment of a plug-in device provided by the present invention.
  • FIG. 20 is a second structural block diagram of an embodiment of a plug-in device provided by the present invention.
  • FIG. 21 is a third structural block diagram of an embodiment of a plug-in device provided by the present invention.
  • FIG. 22 is a fourth structural block diagram of an embodiment of a plug-in device provided by the present invention.
  • FIG. 23 is a structural block diagram of an embodiment of an electronic device provided by the present invention.
  • FIG. 24 is a first structural block diagram of a model connection embodiment provided by the present invention.
  • FIG. 25 is a second structural block diagram of a model connection embodiment provided by the present invention.
  • FIG. 26 is a third structural block diagram of a model connection embodiment provided by the present invention.
  • Figure 27 is a structural diagram of a feedforward neural network in the present invention.
  • Figure 29 is a fourteenth flow chart of an embodiment of the plug-in method provided by the present invention.
  • FIG. 30 is a flow chart of an embodiment of a pre-trained NN model acquisition method provided by the present invention.
  • FIG. 31 is a flowchart of an embodiment of a method for acquiring a pre-trained sixth CNN model provided by the present invention.
  • FIG. 32 is a flow chart of an embodiment of a method for acquiring a pre-trained seventh CNN model provided by the present invention.
  • Figure 33 is a fifteenth flow chart of an embodiment of the plug-in method provided by the present invention.
  • Figure 34 is a sixteenth flowchart of an embodiment of the plug-in method provided by the present invention.
  • Figure 35 is a seventeenth flow chart of an embodiment of the plug-in method provided by the present invention.
  • Figure 36 is an eighteenth flow chart of an embodiment of the plug-in method provided by the present invention.
  • FIG. 37 is a flow chart of an embodiment of a method for acquiring a pre-trained eighth CNN model provided by the present invention.
  • Figure 38 is a nineteenth flow chart of an embodiment of the plug-in method provided by the present invention.
  • Figure 39 is a twentieth flow chart of an embodiment of the plug-in method provided by the present invention.
  • the plug-in device is an industrial automation device that automatically inserts the pins of an electronic component into a target jack on a PCB.
  • an embodiment of the present invention provides an insertion method, where the insertion method includes:
  • S110 obtains the relative pose of the pin with respect to the robot in the first coordinate system, according to the acquired first image including the pin.
  • the electronic component 800 is moved into the field of view of the first image sensor 710, so that the first image sensor 710 acquires a first image including the pins 810 of the electronic component 800.
  • the first image usually does not include the PCB background, because a PCB background is visually complex and makes pin identification difficult.
  • the first coordinate system may include: a robot coordinate system, a first image sensor coordinate system, a second image sensor coordinate system, or any other specified coordinate system that has been calibrated with each of the coordinate systems described above.
  • the first coordinate system needs to be calibrated with other coordinate systems in advance, so that other coordinate systems can be uniformly converted to the first coordinate system based on the pre-calibrated matrix conversion relationship.
  • the robot coordinate system is taken as a first coordinate system as an example for further detailed description.
  • the origin of the robot coordinate system is usually set at the center of the base of the robot.
  • the pose of the manipulator may refer to the pose of the center of the flange articulated to the end of the manipulator, the pose of the center of the manipulator's end effector, and the like.
  • the joint information sent to the processor by the joints of the robot includes the movement amount of each joint; combined with the type and size of each joint, the pose of the robot in the robot coordinate system at that moment can be obtained from the forward kinematics formula of the robot.
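As a concrete illustration of the forward-kinematics step just described, the sketch below computes the pose of a planar two-link arm from its joint angles. The two-joint planar arm and the link lengths are illustrative assumptions, not taken from the patent; a real manipulator would apply the same idea over its full kinematic chain.

```python
import math

def forward_kinematics_2link(theta1, theta2, l1=0.3, l2=0.2):
    """Pose (x, y, heading) of a planar 2-link arm's end point in the
    robot (base) coordinate system.

    theta1, theta2: joint angles in radians (the "movement amount"
    reported by each joint); l1, l2: link lengths in metres
    (illustrative values).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    heading = theta1 + theta2  # end-effector orientation
    return x, y, heading
```

With both joints at zero the arm lies stretched along the x axis, so the end point sits at (l1 + l2, 0).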
  • the relative position of the pin is obtained according to the first image, which will be described in further detail in the following embodiments.
  • besides providing the relative pose of the pin, the first image acquired by the first image sensor can also be used to check the electronic component for defects: the first image is analyzed and compared with a pre-stored image of a non-defective component to determine whether the electronic component is defective. If there is no defect, the following steps continue. If there is a defect, the robot can be controlled to place the electronic component at a recycling position and then return to the pick-up position to pick up the next electronic component.
  • S120 acquires a first current pose of the robot in the first coordinate system according to the current information of the joints of the acquired robot.
  • step S180 may be performed to control the robot to drive the pin toward the target jack, which saves subsequent plug-in working time.
  • the coordinates or pose of the Mark point on the PCB can be detected first. A Mark point is a solid circular or rectangular point surrounded by a blank area on the board. Combined with the layout of the PCB, the approximate position of the target jack can be calculated, and the electronic component can be moved to this approximate position so that the pins are placed near the target hole.
  • alternatively, the position of the first target jack can be used as a reference and combined with the layout of the PCB to calculate the approximate position coordinates of the target jack; the robot is then controlled to move to this position, bringing the pin near the target jack.
  • S130 acquires a third pose or a third current pose of the target jack in the first coordinate system.
  • when the second image sensor 720 is disposed on the end joint of the robot (it may alternatively be disposed on another joint of the robot, not shown), the second current image captured and transmitted by the second image sensor 720 is used to acquire the third current pose of the target jack.
  • the second image sensor 720 is preferably disposed on the robot: since it moves along with the robot 730, it can be positioned closer to or directly above the target jack 910 to capture an image including the target jack, thereby improving the accuracy of the target jack pose extraction and of the subsequent plug-in.
  • S140 calculates, according to the first current pose, the relative pose, and the third pose or the third current pose, the current amount of motion to be performed by the robot based on the pre-trained Neural Network (NN) model; S150 determines whether the robot satisfies the plug-in condition; if satisfied, S160 controls the robot to drive the pin into the target jack; if not, S170 controls the robot to implement the current amount of motion.
  • calculating the current amount of motion to be performed by the robot based on the pre-trained NN model may include: calculating the second current pose from the first current pose and the relative pose. Since the relative pose of the robot and the pin remains unchanged while the robot moves after the relative pose is acquired in S110, the second current pose of the pin can be obtained whenever the first current pose of the robot is acquired.
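The composition of the robot's current pose with the fixed relative pose can be sketched as below. Reducing poses to (x, y, θ) triples is an illustrative planar simplification of the full 6-DoF composition described in the patent.

```python
import math

def compose_pose(robot_pose, relative_pose):
    """Second current pose (pin) from the first current pose (robot)
    and the fixed relative pose of the pin w.r.t. the robot.

    Poses are (x, y, theta) in the first coordinate system.
    """
    rx, ry, rt = robot_pose
    dx, dy, dt = relative_pose
    # Rotate the relative offset into the first coordinate system,
    # then translate by the robot position.
    px = rx + dx * math.cos(rt) - dy * math.sin(rt)
    py = ry + dx * math.sin(rt) + dy * math.cos(rt)
    return px, py, rt + dt
```

Because the relative pose is constant during the motion, calling this with each new robot pose yields the pin's current pose without re-imaging the pin.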
  • the robot is controlled to drive the pin into the target jack, completing the plug-in action of the electronic component; the robot then moves to the pick-up position of the next electronic component, picks it up, and the above steps are repeated until the plug-in actions for the electronic components corresponding to all target jacks on the PCB are completed.
  • otherwise, the robot is controlled to implement the corresponding current amount of motion, and the above steps are repeated after it does so.
  • the current amount of motion to be performed by the robot refers to the amount of movement (translation amount + rotation amount) to be performed by the end effector or the end shaft of the robot.
  • the inverse kinematics formula of the manipulator can be used to obtain the amount of motion each joint of the manipulator needs to perform; the command for each amount of motion is then sent to the motor controller of the corresponding joint, thereby controlling the manipulator to move by the corresponding amount.
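To make the inverse-kinematics step concrete, here is the closed-form elbow-down solution for the same illustrative planar two-link arm (link lengths are assumptions, not from the patent); a real manipulator would use its own inverse-kinematics formula over all joints.

```python
import math

def inverse_kinematics_2link(x, y, l1=0.3, l2=0.2):
    """Joint angles placing a planar 2-link arm's end point at (x, y).

    Elbow-down solution; l1, l2 are illustrative link lengths.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp against rounding error
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

The resulting joint angles (or their differences from the current angles) are what would be sent as commands to each joint's motor controller.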
  • for a large PCB, it may be difficult to complete the entire plug-in operation at once. A large PCB is therefore usually divided into a plurality of virtual small modules, which are inserted in multiple passes until the insertion of the entire board is completed. In this case, the plug-in of one module is completed according to the plug-in method of this embodiment, and the steps are then repeated to complete the plug-in of the other modules in turn, until the entire PCB is plugged in. The PCB is then removed from the working position and the next PCB is moved to the plug-in working station, repeating the steps described in the plug-in method of this embodiment of the present invention.
  • the NN model is an operational model consisting of a large number of interconnected nodes (or neurons). Each node represents a specific output function called an activation function. The connection between every two nodes carries a weighting value for the signal passing through it, called a weight, which is equivalent to the memory of the artificial neural network. The output of the network varies with the connection pattern, the weight values and the activation functions. By network structure, NNs can be divided into three categories: feedforward neural networks, feedback neural networks and self-organizing neural networks. This embodiment preferably uses a feedforward neural network.
  • Feedforward neural network (FNN), referred to as feedforward network.
  • starting from the input layer, each neuron receives the previous stage's input and outputs to the next stage, up to the output layer. There is no feedback anywhere in the network, so it can be represented by a directed acyclic graph.
  • the feedforward neural network employs a unidirectional multilayer structure. Each layer contains several neurons, and the neurons in the same layer are not connected to each other, and the transmission of information between layers is performed in only one direction.
  • the first layer is called the input layer
  • the last layer is the output layer
  • the middle is the hidden layer, referred to as the hidden layer.
  • the hidden layer can be one layer or multiple layers.
  • the biological neuron model is reduced to a mathematical model consisting of a linear function plus a nonlinear activation function.
  • neurons receive input signals from n other neurons that are passed through a weighted connection.
  • the total input value received by the neuron is compared to the threshold of the neuron.
  • the comparison is then processed by an activation function to produce the output of the neuron.
  • the nonlinear activation function is the key to making the neural network represent the nonlinear function.
  • the mathematical expression of the Sigmoid function is σ(z) = 1 / (1 + e^(−z)). It maps the input to a number between 0 and 1. When the input is greater than 0, the function output is greater than 0.5, and the larger the input, the closer the output is to 1, in which case the neuron can be considered activated.
  • the Tanh function, the hyperbolic tangent tanh(z) = (e^z − e^(−z)) / (e^z + e^(−z)), is similar to the Sigmoid function; the difference is that it maps the input to a number between -1 and 1.
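The two activation functions just described can be written directly from their formulas:

```python
import math

def sigmoid(z):
    """Maps any real input to (0, 1); output > 0.5 exactly when z > 0."""
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    """Hyperbolic tangent: like sigmoid, but maps input to (-1, 1)."""
    return math.tanh(z)
```

The two are related by tanh(z) = 2·sigmoid(2z) − 1, which is why they behave so similarly apart from their output range.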
  • each neuron represents a nonlinear function
  • each layer in the neural network represents a set of nonlinear functions.
  • the output of these nonlinear functions is the input to the next layer.
  • Figure 27 is a structural diagram of a feedforward neural network in the present invention.
  • b1, b2 are the offsets of the corresponding neurons and g represents the activation function.
  • the output y of the neural network can be represented by h1, h2.
  • the output layer of the neural network does not use an activation function.
  • the feedforward neural network may comprise 2 to 5 hidden layers, each containing 1024 neurons.
  • Each layer of hidden layer is a fully connected layer, that is to say, any one of the neurons in the next layer is connected to all the neurons in the upper layer.
  • the output layer of this NN model has six neurons, corresponding to the x, y, z, u, v, w space coordinates required to control the pose of the robot. In addition, any number of hidden layers can be set as needed, with any number of neurons in each layer.
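A minimal sketch of such a fully connected feedforward network is given below: tanh on hidden layers, no activation on the 6-neuron output layer, as the text states. The input width (12) and the reduced hidden sizes are illustrative assumptions; the patent's example would use hidden layers of 1024 neurons.

```python
import math
import random

def init_mlp(sizes, seed=0):
    """Fully connected feedforward net. `sizes` lists the neuron count
    per layer, e.g. [12, 1024, 1024, 6]: pose inputs (width assumed),
    two hidden layers, six outputs (x, y, z, u, v, w)."""
    rng = random.Random(seed)
    layers = []
    for n_in, n_out in zip(sizes, sizes[1:]):
        w = [[rng.gauss(0.0, n_in ** -0.5) for _ in range(n_in)]
             for _ in range(n_out)]
        b = [0.0] * n_out  # per-neuron offsets (b1, b2, ... in Fig. 27)
        layers.append((w, b))
    return layers

def forward(layers, x):
    """One forward pass: tanh activation on hidden layers, identity on
    the output layer (the output layer uses no activation)."""
    for i, (w, b) in enumerate(layers):
        z = [sum(wij * xj for wij, xj in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        x = z if i == len(layers) - 1 else [math.tanh(v) for v in z]
    return x
```

Each `(w, b)` pair is one fully connected layer: every neuron in a layer is connected to all neurons of the previous layer, and neurons within a layer are not connected to each other.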
  • the plug-in based on the machine learning method can improve the accuracy of the plug-in in various complicated environments; in addition, in some cases, the number of motion steps during insertion can be reduced, improving work efficiency.
  • the pre-trained NN model can be obtained by:
  • S141 acquires an initialized NN model, where the NN model, for an input first current pose, second current pose or relative pose, and third pose or third current pose in the first coordinate system, outputs the current amount of motion that the robot needs to implement.
  • the NN model is in fact a family of functions. These functions share some common properties: once the model is determined, the model structure is fixed, and choosing specific values for the model parameters is equivalent to selecting one function from this family. Model training is actually choosing the function in this family that best describes the quantitative relationship between input and output.
  • Initializing the NN model is actually determining the model structure and the initial parameters of the model.
  • Methods for initializing parameters can include:
  • S142 acquires training data and tag data.
  • the plug-in can be run many times (e.g., 1000 times) based on traditional visual servoing to obtain sufficient training data for training the initialized NN model.
  • in conventional visual servoing, the robot usually reaches the final inserted position of the component through a preset number of steps (for example, 3 steps) of small motion increments.
  • the NN model can be trained using, as training data, the pose of the robot at each step of the visual servoing, together with the pose of the pin and the pose of the target jack corresponding to that step.
  • Tag data: based on the pose of the robot at each step during visual servoing and the pose of the robot when the component is finally inserted, the amount of motion required for the robot to move from each pose to the inserted pose is calculated, and this amount of motion is used as the label for training the NN model.
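The label for one servoing step can be sketched as follows. Representing the motion amount as a component-wise pose difference is an assumption for illustration; the patent does not fix the parameterization.

```python
def motion_label(step_pose, inserted_pose):
    """Tag datum for one visual-servoing step: the motion amount
    (translation + rotation) taking the robot from this step's pose
    to the final inserted pose.

    Poses are (x, y, z, u, v, w) tuples; a component-wise difference
    is assumed here.
    """
    return tuple(t - s for s, t in zip(step_pose, inserted_pose))
```

Applying this to every recorded step of every servoing run yields one (pose inputs, motion label) pair per step for training.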
  • S143 trains the initialized NN model based on the training data and the tag data to obtain a pre-trained NN model.
  • the training model is to adjust the parameters of the model so that the prediction results of the model output are as close as possible to the label data.
  • the training error function can be chosen as needed, for example the mean square error between the prediction result and the label data.
  • some regularization terms related to the parameters of the NN model may also be introduced to prevent overfitting, such as the sum of squares of all parameters (L2 regularization) or Dropout.
  • each training data corresponds to label data with labels.
  • model parameters are updated many times during the iterative training process; training does not end after one pass over all the data. If the parameters were updated only once per full pass over the data, the model's parameter update cycle would be too long.
  • the 9000 training data can therefore be divided into smaller sets, for example 100 per mini-batch, giving 90 such mini-batches. The preceding training operations are then performed for each of the 90 mini-batches, and each mini-batch updates the model parameters once (100 error calculations, 100 gradient calculations, one parameter update). By the time the entire training set (9000) has been used once, the model parameters have been updated 90 times, and the above process is then repeated.
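The mini-batch schedule above can be sketched as a simple counting loop; the gradient step itself is elided (marked by a comment) since only the update cadence is at issue here.

```python
def minibatch_schedule(n_samples=9000, batch_size=100, epochs=1):
    """Counts parameter updates under mini-batch training: each
    mini-batch produces one gradient step, so one pass over 9000
    samples with batches of 100 updates the parameters 90 times."""
    updates = 0
    for _ in range(epochs):
        for start in range(0, n_samples, batch_size):
            # compute errors and gradients over
            # samples[start:start + batch_size], then apply
            # one parameter update
            updates += 1
    return updates
```

One epoch over 9000 samples with batch size 100 gives 90 updates; repeating the process doubles, triples, etc. the update count.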
  • the optimization goal of the training error function can be based on various designs.
  • the training of the model is stopped.
  • the current parameter of the model is the parameter of the final model.
  • optimization target of the training error function may include, but is not limited to, the following situations:
  • the optimization target is the preset maximum number of iterations, and the current parameter of the corresponding model is the final parameter of the model when the iteration is completed;
  • the optimization target is a certain threshold: the value of the training error function is recorded after each iteration, and when the training error falls below the threshold, the current parameters are the parameters of the final model.
  • a part of the training data (for example, 1000 of the 10,000 sets) may be set aside as verification data. The verification data also has corresponding labels and is used to check whether the model is over-fitting. For example, after every 90 updates (that is, after the 9000 training data have been used once), the 1000 verification data are predicted by the current model, and the error against the labels is calculated as a criterion for over-fitting.
  • the training error is theoretically a decreasing trend (because the model is updated with training errors), but the verification error does not always decrease (because the model is not updated based on the verification error).
  • the selection error is small, the corresponding model parameter is the final model parameter.
  • set a maximum number of iterations After the iteration is completed, the same training parameters and verification errors corresponding to which model parameters are compared are used as the final model parameters.
  • a threshold can also be set. When the verification error and the training error are both lower than the threshold, the training is stopped, and the current parameter is selected as the final model parameter.
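  • The two stopping rules above can be sketched together. This is a hypothetical illustration (the function name and error values are assumptions): stop as soon as both errors are under the threshold, otherwise fall back to the iteration with the smallest verification error, which guards against over-fitting.

```python
def select_final_params(train_errs, val_errs, threshold):
    """Pick which iteration's parameters become the final model:
    stop early if both errors drop below the threshold, otherwise
    keep the iteration with the smallest verification error."""
    for i, (t, v) in enumerate(zip(train_errs, val_errs)):
        if t < threshold and v < threshold:
            return i                      # threshold-based stopping
    return min(range(len(val_errs)), key=val_errs.__getitem__)

# training error keeps falling, but verification error turns upward
# after the 3rd check -- a typical over-fitting pattern
train = [0.9, 0.5, 0.3, 0.2, 0.1]
val   = [1.0, 0.6, 0.4, 0.45, 0.6]
best = select_final_params(train, val, threshold=0.05)
```

  • With a threshold of 0.05 neither rule fires early, so the third checkpoint (smallest verification error, index 2) is kept; with a looser threshold of 0.7, training would stop at index 1.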
  • FIG. 4 is a flow chart of an embodiment of a method for acquiring a pre-trained first CNN model provided by the present invention.
  • Obtaining the relative pose based on the acquired first image, described in S110 of the above embodiment, may be implemented by a traditional visual method or by a machine learning method.
  • The traditional visual method refers to binarizing the first image, identifying the contour of the pin in the first image, calculating the coordinates of the pin from the contour, converting the pin coordinates to the pose of the pin according to the pre-calibrated result, and then converting that pose to a relative pose according to the pose of the robot.
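  • The binarize-then-locate step can be sketched without any vision library. This is a minimal illustration under assumed data (the function name, the 5x5 image and the threshold are hypothetical); a real implementation would extract the full contour, e.g. with OpenCV, rather than a simple centroid.

```python
def pin_pixel_coords(image, threshold):
    """Binarize a grayscale image and return the centroid (row, col)
    of the bright region as the pin coordinate in pixels."""
    pts = [(r, c) for r, row in enumerate(image)
                  for c, v in enumerate(row) if v > threshold]
    n = len(pts)
    return sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n

# 5x5 synthetic image: a bright cross-shaped pin tip centred at (2, 3)
img = [[0] * 5 for _ in range(5)]
for r, c in [(1, 3), (2, 2), (2, 3), (2, 4), (3, 3)]:
    img[r][c] = 255
row, col = pin_pixel_coords(img, threshold=128)
```

  • The resulting pixel coordinate is what the calibration result then converts into a pose in the robot's coordinate system.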
  • The machine learning method obtains the relative pose based on the pre-trained first CNN model, which may include: acquiring the pose of the pin (either the pin insertion end or the entire pin, preferably the pin insertion end) based on the pre-trained first CNN model, and then converting it into a relative pose using the manipulator pose; or inputting the first image and the manipulator pose into the first CNN model and directly outputting the relative pose. Preferably, the pin pose is output first and then converted to the relative pose, which can improve the accuracy of pose acquisition.
  • the preferred embodiment will be further described in detail below.
  • Obtaining the pose of the pin based on the pre-trained first CNN model may include: inputting the first image into the pre-trained first CNN model, outputting the coordinates of the pin, and converting the coordinates into the pin pose according to the pre-calibration result; or directly outputting the pose of the pin through the pre-trained first CNN model. The pin pose is then converted into the relative pose of the pin in combination with the pose of the robot.
  • the calibration result includes the calibration of the first image sensor itself and the calibration of the first image sensor and the robot (ie, hand-eye calibration).
  • Calibrating the first image sensor itself serves two purposes. One is to obtain the internal parameters, which include the distortion coefficients (because imaging through a lens always introduces some distortion) and the focal length; in a binocular or multi-camera setup, the internal parameters can also include structural parameters, which quantitatively describe, in mathematical language, the relationship between the pixels of the images acquired by the two or more cameras, ensuring that all of the cameras are in a consistent, known geometric state. The other is to obtain the external parameters, i.e., the matrix conversion relationship between the world coordinate system of the calibration plate and the image coordinate system.
  • Hand-eye calibration is performed to obtain the matrix conversion relationships between the first image sensor coordinate system and the robot coordinate system, and between the second image sensor coordinate system and the robot coordinate system, respectively.
  • the specific calibration method can adopt the method of OpenCV or Matlab.
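  • Once the internal and external parameters are calibrated, they relate world coordinates to image coordinates through the standard pinhole model. The sketch below illustrates that relationship with hypothetical numbers (the intrinsic matrix K, the [R|t] pose and the 3-D point are all assumptions, not calibration output from the patent):

```python
def project(K, Rt, Xw):
    """Pinhole projection with calibrated parameters:
    [u, v, 1]^T ~ K [R|t] [X, Y, Z, 1]^T."""
    # world point -> camera frame via the external parameters [R|t]
    Xc = [sum(Rt[i][j] * Xw[j] for j in range(3)) + Rt[i][3]
          for i in range(3)]
    # camera frame -> pixels via the internal parameters K
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v

K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]          # hypothetical intrinsics
Rt = [[1, 0, 0, 0.0], [0, 1, 0, 0.0], [0, 0, 1, 0.5]]  # camera 0.5 m from plate
u, v = project(K, Rt, (0.1, -0.05, 0.5))
```

  • OpenCV's calibration routines estimate exactly these K and [R|t] matrices (plus distortion coefficients) from calibration-plate images.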
  • The Convolutional Neural Network (CNN) is a feedforward neural network. The basic structure of a CNN consists of two kinds of layers. One is the feature extraction layer: the input of each neuron is connected to the local receptive field of the previous layer, and the local feature is extracted; once a local feature is extracted, its positional relationship with the other features is also determined. The other is the feature mapping layer: each computing layer of the network is composed of multiple feature maps, each feature map is a plane, and the weights of all neurons on the plane are equal.
  • The feature mapping structure uses an activation function with a small influence kernel (such as the sigmoid function), so that the feature maps have displacement invariance. In addition, each convolutional layer in a convolutional neural network is followed by a computing layer for local averaging and secondary extraction; this two-stage feature extraction structure reduces the feature resolution.
  • CNNs are mainly used to identify two-dimensional patterns that are invariant to displacement, scaling and other forms of distortion. Since the feature detection layers of a CNN learn from the training data, explicit feature extraction is avoided when using a CNN; the network learns features implicitly from the training data. Moreover, because the weights of the neurons on the same feature map are shared, the network can learn in parallel, which is a major advantage of convolutional networks over fully connected networks. With their special structure of locally shared weights, convolutional neural networks have unique advantages in image processing.
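  • The weight sharing described above can be shown in a few lines. This is a didactic sketch (the function, kernel and image are invented for illustration): one tiny kernel of two weights scans the whole image, so the same edge produces the same response wherever it appears, which is the displacement invariance the text refers to.

```python
def conv2d(img, kernel):
    """'Valid'-mode sliding product (what CNN layers compute): one
    shared kernel scans the whole image, so every output position
    reuses the same few weights."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(len(img[0]) - kw + 1)]
            for r in range(len(img) - kh + 1)]

edge = [[1, -1]]                  # a 2-parameter horizontal edge detector
img = [[0, 0, 1, 1],
       [0, 0, 1, 1]]
fmap = conv2d(img, edge)          # one feature map from one shared kernel
```

  • A fully connected layer over the same 2x4 image would need 8 weights per output neuron; the shared kernel needs only 2 in total, regardless of image size.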
  • the first Convolutional Neural Network (CNN) model can include various network structures such as: LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, Faster-RCNN, FCN, Mask-RCNN, YOLO, SSD, YOLO2, and other network model structures now known or later developed.
  • the pre-trained first CNN model is obtained by:
  • S111 acquires an initialized first CNN model, where the first CNN model, for an input first image, outputs the coordinates, pose or relative pose of the pin, and/or, for an input second image or second current image including the target jack, outputs a third pose or a third current pose. It should be noted that when the model needs to output the relative pose, the pose of the manipulator must also be input to the model so that the relative pose can be output.
  • That is, the first CNN model can be used only to obtain the coordinates, pose or relative pose of the pin from the first image; or only to obtain the third pose or the third current pose from the second image or the second current image; or it can be used both to acquire the coordinates, pose or relative pose of the pin from the first image and to obtain the third pose or the third current pose from the second image or the second current image.
  • For the initialization of the first CNN model, refer to the initialization of the NN model; the details are not repeated here.
  • S112 acquires training data and tag data
  • Tag data can be labeled manually or automatically.
  • The automatic method can use the coordinates, pose or relative pose of the target jack, extracted from the image including the target jack during insertion trajectory planning based on the traditional visual method, as the training annotation.
  • S113 trains the initialized first CNN model based on the training data and the tag data to obtain the first CNN model that is trained in advance.
  • FIG. 5 is a flowchart of an embodiment of a method for acquiring a pre-trained second CNN model provided by the present invention.
  • S130 described in the above embodiment, which acquires the third pose or the third current pose of the target jack in the first coordinate system from the acquired second image or second current image, may be implemented by a traditional visual method or by a machine learning method.
  • The traditional visual method refers to binarizing the image, identifying the contour of the target jack in the image, calculating the coordinates of the target jack from the contour, and converting the coordinates of the target jack into the pose of the target jack according to the pre-calibrated result.
  • The machine learning method refers to inputting the second current image into the trained second CNN model or the trained first CNN model described in the above embodiment, and outputting the third pose or the third current pose. Specifically, the method may include: outputting a third coordinate or a third current coordinate through the model and then converting it into a third pose or a third current pose according to the calibration result; or directly outputting the third pose or the third current pose through the model. The former is preferred, as it can improve the accuracy of pose extraction.
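  • The coordinate-to-pose conversion in the preferred branch relies on the hand-eye calibration matrix. The sketch below is an assumed illustration (the 4x4 transform T and the camera-frame point are hypothetical numbers, not calibration output): a point expressed in the image sensor's frame is mapped into the robot's frame by one homogeneous transform.

```python
def to_robot_frame(T, p_cam):
    """Map a point from the image-sensor frame to the robot frame with
    a 4x4 homogeneous hand-eye calibration matrix T."""
    ph = list(p_cam) + [1.0]
    return tuple(sum(T[i][j] * ph[j] for j in range(4)) for i in range(3))

# hypothetical hand-eye result: camera rotated 180 deg about Z,
# mounted 0.2 m along X and 0.1 m along Z from the robot origin
T = [[-1,  0, 0, 0.2],
     [ 0, -1, 0, 0.0],
     [ 0,  0, 1, 0.1],
     [ 0,  0, 0, 1.0]]
jack_in_robot = to_robot_frame(T, (0.05, 0.03, 0.4))
```

  • The same multiplication, applied to the model's output coordinate, yields the third pose or third current pose in the first (robot) coordinate system.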
  • The calibration result here comprises the calibration of the second image sensor itself (the sensor used to acquire the second image or second current image) and the hand-eye calibration between the second image sensor and the robot; refer to the calibration of the first image sensor, which is not repeated here.
  • the pre-trained second CNN model is obtained by:
  • S131 acquiring an initialized second CNN model, where the second CNN model, for an input second image or second current image including the target jack, outputs the third coordinate or third current coordinate, or the third pose or third current pose, of the target jack in the second image or second current image;
  • For the initialization of the second CNN model, refer to the initialization of the NN model; the details are not repeated here.
  • S132 obtains training data and tag data
  • Multiple images including the target jack are acquired while the plug-in device is operating or at rest; approximately 1000 acquisitions are needed to obtain sufficient training data to train the neural network.
  • Tag data can be labeled manually or automatically.
  • the automatic method can use the coordinates of the target jack extracted from the image including the target jack as the training annotation in the insertion trajectory planning process based on the traditional visual method.
  • S133 trains the initialized second CNN model based on the training data and the tag data to obtain the second CNN model that is trained in advance.
  • According to the above embodiment, the first CNN model may, based on the input first image, also be used to output the coordinates, pose or relative pose of the pin; therefore, when training such a first CNN model, the first image and the second image or the second current image need to be input together to train the first CNN model.
  • the respective model connection structures are as follows:
  • Based on the first CNN model 11, the relative pose is obtained; based on the second CNN model 12, the third pose or the third current pose is obtained; the second current pose is acquired from the relative pose and the first current pose; and the MPL model 13 outputs the current amount of movement by combining the first current pose, the second current pose, and the third pose or the third current pose.
  • Or, based on the first CNN model 11, the relative pose is obtained; based on the second CNN model 12, the third pose or the third current pose is obtained; the second current pose is acquired from the relative pose and the first current pose; and the MPL model 13 outputs the current amount of movement by combining the first current pose, the relative pose, and the third pose or the third current pose.
  • In another connection, the relative pose and the third pose or the third current pose are obtained, and the MPL model 13 outputs the current amount of movement by combining the first current pose, the second current pose, and the third pose or the third current pose.
  • Similarly, the relative pose and the third pose or the third current pose are obtained, and the MPL model 13 outputs the current amount of movement by combining the first current pose, the relative pose, and the third pose or the third current pose.
  • The embodiment of the present invention further provides a plug-in device 700, which includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory storing a computer program (not shown).
  • The processor 740 is connected to the other units described above by wire or wirelessly.
  • Wireless methods may include, but are not limited to, 3G/4G, WIFI, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connections that are now known or developed in the future.
  • the first image sensor 710 while in operation, acquires a first image comprising a pin and transmits the first image to the processor 740.
  • the first image sensor 710 is typically disposed at a location between the PCB board insertion work position and the electronic component pickup position.
  • When the second image sensor 720 is in operation, it acquires the second image or the second current image including the target jack and transmits it to the processor 740.
  • The second image sensor may be disposed at any position from which an image including the target jack can be acquired, for example, at a position around the PCB board or on the robot. As shown in FIG. 20, when the second image sensor 720 is disposed around the PCB board 900, the second image sensor 720 is fixed in position relative to the target jack 910, so the second image only needs to be acquired once. As shown in FIG. 19, when the second image sensor 720 is disposed on the robot 730, its pose relative to the target jack 910 changes constantly as the robot 730 moves, so the second current image must be reacquired after each movement of the robot 730. The case where the second image sensor is placed on the robot is described in further detail in the following embodiments.
  • The first image sensor and the second image sensor may each acquire an image including the target (pin or target jack) using a monocular, binocular or multi-camera setup, and the processor obtains the 3D pose of the target by analyzing the image including the target.
  • the first image sensor and the second image sensor may include a camera, a video camera, a scanner, or other device with a related function (mobile phone, computer, etc.) and the like. The camera will be taken as an example for further details.
  • A monocular setup includes only one camera; with only one camera, inter-frame movement is used to form a triangular geometric relationship between corresponding feature points, from which the pose of the target is obtained.
  • A binocular setup consists of two cameras and localizes the target with both: images including the target are acquired by the two cameras fixed at different positions, the coordinates of the target on the two camera image planes are obtained respectively, and the pose of the target in the coordinate system of either camera can then be derived geometrically; that is, the pose of the target is determined.
  • The multi-camera principle is analogous to the binocular case; the details are not repeated here.
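  • For a rectified binocular pair, the geometric relationship above reduces to one line of arithmetic. This is a sketch under assumed numbers (the focal length, baseline and pixel coordinates are hypothetical): the same target seen at two horizontal pixel positions gives a disparity, and depth follows as Z = f·B/d.

```python
def stereo_depth(f_px, baseline_m, u_left, u_right):
    """Rectified-binocular depth: the two image-plane coordinates of the
    same target point give a disparity d, and Z = f * B / d."""
    disparity = u_left - u_right
    return f_px * baseline_m / disparity

# hypothetical rig: 800 px focal length, 6 cm baseline
Z = stereo_depth(800.0, 0.06, u_left=412.0, u_right=380.0)
```

  • With depth known, the remaining two coordinates follow from the pinhole model, fixing the target's position in either camera's coordinate system.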
  • When the robot 730 is in operation, it sends the current information of each of its joints to the processor 740, moves by the current amount of motion under the control of the processor 740, and inserts the pin into the target jack under the control of the processor 740.
  • steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operating (ie, executing the computer program), such as steps S110 through S170 shown in FIG.
  • When in operation, the processor 740 of the plug-in device further implements the various steps of the pre-trained NN model acquisition method, the pre-trained first CNN model acquisition method, and/or the pre-trained second CNN model acquisition method described in the above embodiments.
  • each of the above methods can also be executed by a processor of a device other than the plug-in device.
  • the plug-in device is an industrial automation device that automatically inserts the pins of an electronic component into a target jack on a PCB.
  • the plug-in method includes:
  • S210 obtains a relative pose of the pin relative to the robot in the first coordinate system based on the acquired first image including the pin, based on the first trained CNN model.
  • Before S210, step S280 may be performed to control the robot to drive the pin to move to the vicinity of the target jack, which saves subsequent plug-in work time.
  • Specifically, when the pin is to be moved to the vicinity of the first target jack on a PCB board, the coordinates or pose of the Mark point on the PCB board can be detected first and combined with the PCB layout diagram to calculate the approximate location of the target jack; by moving the electronic component to this approximate position, the pin is placed near the target jack.
  • For subsequent jacks, the position of the first target jack can be used as a reference and combined with the layout coordinates of the target board to calculate the approximate position coordinates of the target jack, and the robot can be controlled to move to the vicinity of the target jack.
  • S220 acquires a first current pose of the robot in the first coordinate system according to current information of each joint of the robot.
  • S230 acquiring, according to the second image or the second current image including the target jack, a third pose of the target jack in the first coordinate system based on the second trained CNN model or the first CNN model Or the third current pose.
  • S240 calculates, according to the first current pose, the relative pose, and the third pose or the third current pose, the current amount of motion that the robot needs to implement; S250 determines whether the robot meets the plug-in condition; if satisfied, S260 controls the robot to drive the pin into the target jack; if not, S270 controls the robot to implement the current amount of motion.
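  • The S240–S270 loop has the shape of a closed-loop servo. The sketch below is a toy illustration of that control flow only (all names, the tolerance, the simulated jack position and the 80% actuation factor are assumptions, and the NN-computed motion is replaced by a simple pose difference): observe, check the plug-in condition, and either insert or command the residual motion and re-observe.

```python
def servo_insert(read_pin, read_jack, move, insert, tol=1e-3, max_steps=50):
    """Closed-loop sketch of S240-S270: compute the residual motion,
    insert once the plug-in condition (alignment within tol) holds,
    otherwise command the motion and observe again."""
    for _ in range(max_steps):
        delta = tuple(j - p for p, j in zip(read_pin(), read_jack()))
        if all(abs(d) < tol for d in delta):   # plug-in condition met
            insert()
            return True
        move(delta)                            # current amount of motion
    return False

# toy simulation with imperfect actuation, which is why iterating helps
state = {"pin": [0.0, 0.0, 0.0], "jack": (0.02, -0.01, 0.005), "in": False}
def read_pin():  return tuple(state["pin"])
def read_jack(): return state["jack"]
def move(d):
    for i in range(3):
        state["pin"][i] += 0.8 * d[i]          # only 80% of the command
def insert():    state["in"] = True

done = servo_insert(read_pin, read_jack, move, insert)
```

  • Because each commanded motion only partially closes the gap, the loop converges over a few observe-move cycles rather than in one open-loop ("blind insertion") step.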
  • Plug-in based on the machine learning method can improve the accuracy of insertion in various complicated environments; in addition, in some cases, the number of steps in the insertion motion can be reduced, improving work efficiency.
  • The current amount of motion that the robot needs to implement may be obtained by a conventional visual servo method or by a machine learning method.
  • The conventional visual servo method obtains the current pose of the target jack and the current pose of the pin, calculates the pose that the pin needs to reach after the motion, and derives the amount of motion the robot needs to implement from the pose of the pin.
  • The machine learning method refers to calculating, based on the pre-trained NN model, the current amount of motion that the robot needs to implement according to the first current pose, the relative pose, and the third pose or the third current pose.
  • For calculating the current amount of motion to be implemented by the robot based on the pre-trained NN model, and for descriptions of the first CNN model, the second CNN model, and the NN model, refer to the description in the first embodiment; the details are not repeated here.
  • the plug-in device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown).
  • The processor 740 is connected to the other units described above by wire or wirelessly.
  • When the first image sensor 710 is in operation, it acquires a first image including the pin and transmits the first image to the processor 740.
  • When the second image sensor 720 is in operation, it acquires the second image or the second current image including the target jack and transmits it to the processor 740.
  • When the robot 730 is in operation, it transmits the current information of its joints to the processor 740, moves by the current amount of motion under the control of the processor 740, and drives the pin to be inserted into the target jack under the control of the processor 740.
  • steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operating (ie, executing the computer program), such as steps S210 through S270 shown in FIG.
  • When in operation, the processor 740 of the plug-in device further implements the various steps of the pre-trained NN model acquisition method, the pre-trained first CNN model acquisition method, and/or the pre-trained second CNN model acquisition method described in the above embodiments.
  • each of the above methods can also be executed by a processor of a device other than the plug-in device.
  • FIG. 8 is a fifth flowchart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 9 is a sixth flowchart of an embodiment of a plug-in method provided by the present invention.
  • FIG. 10 is a flowchart of an embodiment of a method for acquiring a pre-trained third CNN model provided by the present invention.
  • the plug-in device is an industrial automation device that automatically inserts the pins of an electronic component into a target jack on a PCB.
  • the plug-in method includes:
  • S310 acquires a first image including pins.
  • S320 obtains a first current pose of the robot in the first coordinate system according to the current information of the acquired joints of the robot.
  • Before S310, the method may further include step S380: controlling the robot to drive the pin to move to the vicinity of the target jack, which saves subsequent plug-in work time.
  • Specifically, when the pin is to be moved to the vicinity of the first target jack on a PCB board, the coordinates or pose of the Mark point on the PCB board can be detected first and combined with the PCB layout diagram to calculate the approximate location of the target jack; by moving the electronic component to this approximate position, the pin is placed near the target jack.
  • For subsequent jacks, the position of the first target jack can be used as a reference and combined with the layout coordinates of the target board to calculate the approximate position coordinates of the target jack, and the robot can be controlled to move to the vicinity of the target jack.
  • S330 acquires a second image or a second current image including the target jack
  • S340 calculates, according to the first image, the first current pose, and the second image or the second current image, based on the pre-trained third CNN model, the current amount of motion that the robot needs to implement; S350 determines whether the robot satisfies the insertion condition; if satisfied, S360 controls the robot to drive the pin into the target jack; if not, S370 controls the robot to implement the current amount of motion.
  • Plug-in based on the machine learning method can improve the accuracy of insertion in various complicated environments; in addition, in some cases, the number of steps in the insertion motion can be reduced, improving work efficiency.
  • the pre-trained third CNN model is obtained by:
  • S341 acquiring an initialized third CNN model, where the third CNN model, for an input first image, first current pose, and second image or second current image, outputs the current amount of motion that the robot needs to implement;
  • S342 acquires training data and tag data
  • The plug-in process can be run multiple times (e.g., 1000 times) based on traditional visual servoing to obtain sufficient training data to train the initialized third CNN model.
  • In conventional visual servoing, the robot usually reaches the final insertion pose in a preset number of steps (for example, 3 steps). Specifically, the pose of the robot at each step during visual servoing, together with the corresponding image including the pin and the image including the target jack for that step, may be used as the training data.
  • The amount of motion required for the robot to move from each such pose to the insertion pose is calculated, and that amount of motion is used as the label data for model training.
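  • The label construction above can be sketched directly. This is a hypothetical illustration (the function name, 2-D poses and trajectory values are assumptions): each recorded pose along a successful servoed insertion is labeled with the residual motion to the final, inserted pose.

```python
def motion_labels(trajectory):
    """Label generation sketch: for each recorded robot pose along a
    successful servoed insertion, the label is the residual motion
    needed to reach the final (inserted) pose."""
    final = trajectory[-1]
    return [tuple(f - p for p, f in zip(pose, final))
            for pose in trajectory[:-1]]

# three recorded poses ending at the insertion pose
traj = [(0.0, 0.0), (0.6, 0.2), (1.0, 0.5)]
labels = motion_labels(traj)
```

  • A 3-step servo run thus yields several (pose, images, motion-label) tuples, which is why a single insertion contributes multiple training samples.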
  • S343 trains the initialized third CNN model based on the training data and the tag data to obtain the third CNN model that is trained in advance.
  • The first image, the second image or the third image, and the first current pose are input to the third CNN model 14, which calculates an intermediate result from the first image and the second image or the third image, and then combines it with the first current pose to obtain the current amount of movement.
  • the plug-in device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown).
  • The processor 740 is connected to the other units described above by wire or wirelessly.
  • When the first image sensor 710 is in operation, it acquires a first image including the pin and transmits the first image to the processor 740.
  • When the second image sensor 720 is in operation, it acquires the second image or the second current image including the target jack and transmits it to the processor 740.
  • When the robot 730 is in operation, it transmits the current information of its joints to the processor 740, moves by the current amount of motion under the control of the processor 740, and drives the pin to be inserted into the target jack under the control of the processor 740.
  • steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operating (ie, executing the computer program), such as steps S310 through S370 shown in FIG.
  • When in operation, the processor 740 of the plug-in device further implements the various steps of the pre-trained third CNN model acquisition method described in the above embodiments. Furthermore, each of the above methods can also be executed by a processor of a device other than the plug-in device.
  • the plug-in method includes:
  • The third image sensor 750 can simultaneously acquire a third current image including the pin 810 and the target jack 910. Before S410, the method may further include step S470: controlling the robot to drive the pin to move to the vicinity of the target jack.
  • Before inserting a pin into the first target jack of a PCB board, the robot is moved to the Mark point of the PCB board. The Mark point is a solid circular or rectangular point with a blank peripheral area. A second image sensor fixed at the end of the robot acquires the Mark-point image, Mark-point position detection is performed, and the approximate position of the target jack is calculated from the detected Mark position and the PCB layout map; when the electronic component moves to this position, the pin is located near the target jack. Thereafter, since the position of the first target jack is already known, it is no longer necessary to acquire the Mark-point image: the position of the first target jack can be used as a reference and combined with the PCB layout to calculate the approximate position coordinates of each subsequent target jack, and the robot is controlled to move to that position, causing the pin to move near the target jack.
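  • The Mark-point arithmetic above can be sketched in a few lines. This is an assumed illustration (the function name, the board angle and all coordinates are hypothetical): the layout gives each jack's offset from the Mark point in board coordinates, so rotating that offset by the detected board orientation and adding the detected Mark position yields the approximate jack position.

```python
import math

def approx_jack_position(mark_xy, board_angle_rad, layout_offset):
    """Coarse positioning sketch: detected Mark position plus the
    layout offset, rotated by the detected board orientation."""
    dx, dy = layout_offset
    c, s = math.cos(board_angle_rad), math.sin(board_angle_rad)
    return (mark_xy[0] + c * dx - s * dy,
            mark_xy[1] + s * dx + c * dy)

# hypothetical numbers: Mark detected at (120, 45) mm, board rotated
# 90 degrees, jack offset (30, 10) mm in the layout coordinates
x, y = approx_jack_position((120.0, 45.0), math.pi / 2, (30.0, 10.0))
```

  • This only places the pin near the jack; the fine alignment is then closed by the visual servo loop described above.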
  • When the third image sensor 750 is disposed on the end joint of the robot (or on another joint of the robot, not shown), the third image sensor 750 moves relative to the target jack 910 while the pose of the pin 810 relative to the sensor is fixed; therefore, based on the third current image acquired and transmitted by the third image sensor 750, the second pose of the pin and the third current pose of the target jack can be acquired.
  • When the third image sensor 750 is disposed at a position around the PCB board 900, the third image sensor 750 is fixed relative to the PCB board 900 and moves relative to the pin; therefore, based on the third current image, the second current pose of the pin and the third pose of the target jack can be obtained.
  • The third image sensor 750 is preferably disposed at the periphery of the PCB board 900, so that a third current image including the pin can be easily obtained.
  • S420 Acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
  • S430 calculates, according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose, based on the pre-trained NN model, the current amount of motion that the robot needs to implement; S440 determines whether the robot meets the plug-in condition; if satisfied, S450 controls the robot to drive the pin into the target jack; if not, S460 controls the robot to implement the current amount of motion.
  • The input of this pre-trained NN model includes the first current pose, the second pose or the second current pose, and the third pose or the third current pose. The difference is that the input in the first embodiment includes the relative pose of the pin or the second current pose, while the input in this embodiment includes the second pose or the second current pose of the pin; thus the pre-trained NN model in this embodiment can use the same model structure and training method as the one described in the first embodiment, except that the input training data is slightly different.
  • the pre-trained NN model acquisition method includes:
  • S210 obtains an initialized NN model, where the NN model takes as input a first current pose in the first coordinate system, a relative pose, second pose or second current pose, and a third pose or third current pose, and outputs the current amount of motion that the robot needs to implement.
  • S220 acquires training data and tag data.
  • S230 trains the initialized NN model based on the training data and the tag data to obtain a pre-trained NN model.
  • Plug-in based on the machine learning method can improve the accuracy of insertion in various complicated environments; in addition, in some cases, the number of steps in the insertion motion can be reduced, improving work efficiency.
  • FIG. 13 is a flowchart of an embodiment of a method for acquiring a pre-trained fourth CNN model provided by the present invention.
  • Step S410 of the above embodiment, which obtains the second pose or second current pose of the pin in the first coordinate system and the third pose or third current pose of the target jack in the first coordinate system from the third current image including the pin and the target jack, can be implemented by a traditional visual method or by a machine learning method.
  • The traditional visual method refers to binarizing the third current image, identifying the contours of the pin and the target jack in the third current image, and calculating from those contours the second pose or the second current pose of the pin and the third pose or the third current pose of the target jack.
  • the implementation by the machine learning method refers to inputting the third current image into the pre-trained fourth CNN model, and directly outputting the second pose or the second current pose, and the third pose or the third current pose.
  • the fourth CNN model may include LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, Faster-RCNN, FCN, Mask-RCNN, YOLO, SSD, YOLO2.
  • the pre-trained fourth CNN model is obtained by:
• S411 acquires an initialized fourth CNN model, which takes as input a third current image including a pin and a target jack, and outputs the second pose or second current pose of the pin in the third current image, and the third pose or third current pose of the target jack;
• S412 acquires training data and tag data.
  • Tag data can be labeled manually or automatically.
• In the automatic method, the positions of the pin and the target jack extracted from images including the pin and the target jack during insertion trajectory planning based on the traditional visual method can be used as training annotations.
• S413 trains the initialized fourth CNN model based on the training data and the tag data to obtain the pre-trained fourth CNN model.
  • an embodiment of the present invention further provides a plug-in device 700.
  • the plug-in device 700 includes a third image sensor 750, a robot 730, and a processor 740.
• The processor 740 connects to the robot 730 and the third image sensor 750 by wire or wirelessly.
• The third image sensor may include a camera, a video camera, a scanner, or another device with a related function (a mobile phone, a computer, etc.).
• The third image sensor 750 and the robot 730 are calibrated in advance.
• When in operation, the third image sensor 750 acquires a third current image including the pin and the target jack and transmits it to the processor 740.
• The third image sensor may be disposed at any position from which an image including the pin and the target jack can be acquired; for example, around the PCB board or on the robot. As shown in FIG. 22, when the third image sensor 750 is set at the periphery of the PCB board 900, its position is fixed relative to the target jack 910 while the pose of the pin 810 changes as the pin moves. As shown in FIG. 21, when the third image sensor 750 is disposed on the robot 730, the pose of the target jack 910 changes constantly as the robot 730 moves, while the pose of the pin is relatively fixed.
• The case in which the third image sensor 750 is disposed at the periphery of the PCB board 900 is described in further detail in the following embodiments.
• When operating, the robot 730 transmits the current information of its joints to the processor 740; implements the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin 810 to be inserted into the target jack 910.
• The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 operates (i.e., executes the computer program), such as steps S410 through S460 shown in the corresponding figure.
• When operating, the processor 740 further implements each step of the pre-trained NN model acquisition method and of the pre-trained fourth CNN model acquisition method described in the above embodiments. Furthermore, each of the above methods can also be executed by a processor of a device other than the plug-in device.
  • Figure 14 is a ninth flow chart of an embodiment of the method of plugging in the present invention.
  • Figure 15 is a tenth flow chart of an embodiment of the method of plugging in the present invention.
  • the plug-in method includes:
• S510 obtains, based on the pre-trained fourth CNN model and according to the third current image including the pin and the target jack, a second pose or second current pose of the pin in the first coordinate system, and a third pose or third current pose of the target jack in the first coordinate system;
• So that the third image sensor 750 can simultaneously acquire the third current image including the pin 810 and the target jack 910, the method may further include, before S410, a step S470 to control the robot to drive the pin to move to the vicinity of the target jack.
• The third image sensor 750 may be disposed on the hand of the robot 730 or at the periphery of the PCB board 900; preferably, it is disposed at the periphery of the PCB board 900, so that the third current image including the pin is conveniently obtained.
• S520 acquires, according to the current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
• S530 calculates, according to the first current pose, the second pose or second current pose, and the third pose or third current pose, the current amount of motion that the robot needs to implement; S540 determines whether the robot satisfies the insertion condition; if satisfied, S550 controls the robot to drive the pin to be inserted into the target jack; if not satisfied, S560 controls the robot to implement the current amount of motion.
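The S510–S560 loop can be sketched as a generic visual-servoing control loop. All callables below are hypothetical stand-ins for the sensors, pose estimators, and robot interfaces described in the text, and the termination test (motion smaller than a tolerance) is one illustrative form of the insertion condition, not the patent's definition.

```python
import numpy as np

def servo_loop(get_image, estimate_poses, get_robot_pose, nn_model,
               apply_motion, insert, tol=1e-3, max_steps=50):
    """Sketch of the S510-S560 control loop; returns True once inserted."""
    for _ in range(max_steps):
        pin_pose, jack_pose = estimate_poses(get_image())   # S510
        robot_pose = get_robot_pose()                       # S520
        motion = nn_model(robot_pose, pin_pose, jack_pose)  # S530
        if np.linalg.norm(motion) < tol:                    # S540: insertion condition
            insert()                                        # S550
            return True
        apply_motion(motion)                                # S560
    return False
```

With a toy "controller" that always moves halfway toward the target, the loop converges in a handful of iterations and then triggers the insertion.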
• Plug-in based on the machine learning method can improve plug-in accuracy in various complicated environments; in addition, in some cases, it can reduce the number of motion steps during insertion and improve work efficiency.
• Calculating the current amount of motion to be performed by the robot according to the first current pose, the second pose or second current pose, and the third pose or third current pose may be implemented by the traditional visual servo method or by the machine learning method; the machine learning method is preferred, because it can improve the accuracy and efficiency of the current motion amount calculation.
  • the machine learning method is based on a pre-trained NN model.
• For a description of the NN model, refer to specific embodiment 1; the detailed description is not repeated here.
  • an embodiment of the present invention further provides a plug-in device 700.
  • the plug-in device 700 includes a third image sensor 750, a robot 730, and a processor 740.
• The processor 740 connects to the robot 730 and the third image sensor 750 by wire or wirelessly.
• The third image sensor 750 and the robot 730 are calibrated in advance.
• When in operation, the third image sensor 750 acquires a third current image including the pin and the target jack and transmits it to the processor 740.
• When operating, the robot 730 transmits the current information of its joints to the processor 740; implements the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin 810 to be inserted into the target jack 910.
• The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 operates (i.e., executes the computer program), such as steps S510 through S560 shown in the corresponding figure.
• When operating, the processor 740 further implements each step of the pre-trained NN model acquisition method and of the pre-trained fourth CNN model acquisition method described in the above embodiments. Furthermore, each of the above methods can also be executed by a processor of a device other than the plug-in device.
  • the plug-in method includes:
• S610 acquires, according to the obtained current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
• So that the third image sensor 750 can simultaneously acquire the third current image including the pin 810 and the target jack 910, the method may further include, before S610, a step S670 to control the robot to drive the pin to move to the vicinity of the target jack.
• The third image sensor 750 may be disposed on the hand of the robot 730 or at the periphery of the PCB board 900; preferably, it is disposed at the periphery of the PCB board 900, so that the third current image including the pin is conveniently obtained.
• S620 acquires a third current image including the pin and the target jack.
• S630 calculates, according to the third current image and the first current pose, based on the pre-trained fifth CNN model, the current amount of motion that the robot needs to implement; S640 determines whether the robot meets the plug-in condition; if satisfied, S650 controls the robot to drive the pin into the target jack; if not, S660 controls the robot to implement the current amount of motion.
• Plug-in based on the machine learning method can improve plug-in accuracy in various complicated environments; in addition, in some cases, it can reduce the number of motion steps during insertion and improve work efficiency.
  • the pre-trained fifth CNN model is obtained by the following method:
• S631 obtains an initialized fifth CNN model, which takes as input a third current image and a first current pose, and outputs the current amount of motion that the robot needs to implement.
  • S632 acquires training data and tag data.
• The plug-in can be run multiple times (e.g., 1000 times) based on traditional visual servoing to obtain sufficient training data to train the initialized model.
• Under traditional visual servoing, the robot usually reaches the pose at which the component is finally inserted in a preset number of steps (for example, 3 steps). Specifically, the pose of the robot at each step during visual servoing, together with the image including the pin and the target jack corresponding to that step, may be used as training data.
• Based on the pose of the robot at each step during visual servoing and the pose of the robot when the component is finally inserted, the amount of motion required for the robot to move from each pose to the inserted pose is calculated, and this amount of motion is used as the label data for model training.
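The label construction just described can be sketched directly: each recorded pose is labeled with the motion still needed to reach the final inserted pose. Poses are simplified here to [x, y, z] vectors; a real pose would also carry a rotation component, and the trajectory values below are invented for illustration.

```python
import numpy as np

def make_labels(trajectory, inserted_pose):
    """Label each recorded pose with the remaining motion to the
    inserted pose (difference of position vectors in this sketch)."""
    goal = np.asarray(inserted_pose, dtype=float)
    return [goal - np.asarray(p, dtype=float) for p in trajectory]

# Three recorded servo poses and the final inserted pose
traj = [[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.5, 0.0]]
labels = make_labels(traj, [2.0, 2.0, 0.0])
print(labels[0])   # -> [2. 2. 0.]
```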
  • S633 trains the initialized fifth CNN model based on the training data and the tag data to obtain the fifth CNN model that is trained in advance.
  • FIG. 25 is a connection block diagram of a model according to an embodiment of the present invention.
• A third current image and a first current pose are input to the fifth CNN model 14; an intermediate result is calculated from the third current image and then combined with the first current pose, thus obtaining the current amount of motion.
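The combine-then-regress structure can be sketched as follows. This is an illustrative analogue only: the "convolutional part" is replaced by a trivial feature extractor, and the head is a single linear layer; the dimensions and weights are invented.

```python
import numpy as np

def image_features(image):
    # Hypothetical stand-in for the convolutional part of the fifth CNN model:
    # reduce the image to a small intermediate feature vector
    return np.array([image.mean(), image.std(), image.max()])

def motion_head(features, robot_pose, W):
    # Combine the intermediate image result with the first current pose,
    # then map to the motion the robot needs to implement
    x = np.concatenate([features, robot_pose])
    return W @ x            # fully connected layer producing the motion

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 3 + 6))   # 6-DOF motion from 3 features + 6-DOF pose
img = rng.random((32, 32))
pose = np.zeros(6)
motion = motion_head(image_features(img), pose, W)
```

The key point mirrored here is that the pose enters the network after the image has been reduced to features, matching the model connection described above.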
  • an embodiment of the present invention further provides a plug-in device 700.
  • the plug-in device 700 includes a third image sensor 750, a robot 730, and a processor 740.
• The processor 740 connects to the robot 730 and the third image sensor 750 by wire or wirelessly.
• The third image sensor 750 and the robot 730 are calibrated in advance.
• When in operation, the third image sensor 750 acquires a third current image including the pin and the target jack and transmits it to the processor 740.
• When operating, the robot 730 transmits the current information of its joints to the processor 740; implements the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin 810 to be inserted into the target jack 910.
• The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 operates (i.e., executes the computer program), such as steps S610 through S660 shown in the corresponding figure.
• When operating, the processor 740 also implements the various steps of the pre-trained fifth CNN model acquisition method described in the above embodiments. Furthermore, each of the above methods can also be executed by a processor of a device other than the plug-in device.
  • an embodiment of the present invention further provides an insertion method, where the insertion method includes:
  • S110' acquires the second coordinates of the pin according to the acquired first image including the pin.
• The electronic component 800 is moved into the field of view of the first image sensor 710, so that a first image including the pins 810 of the electronic component 800 is acquired by the first image sensor 710.
• The first image usually does not include the PCB board background, because a PCB background would complicate the image and make pin identification difficult.
  • the processor acquires a first image acquired and transmitted by the first image sensor, and extracts a second coordinate of the pin.
• The second coordinate of the pin may be the second coordinate of the insertion end of the pin (the end inserted into the target jack) or the second coordinate of the entire pin; preferably, it is the second coordinate of the pin's insertion end.
• Besides being used to acquire the relative pose of the pin, the first image acquired by the first image sensor can also be used to check whether the electronic component has a defect: by analyzing the first image and comparing it with a pre-stored image of a non-defective component, it is determined whether the electronic component is defective. If there is no defect, the following steps can continue. If there is a defect, the robot can be controlled to put the electronic component back to the recycling position and then return to the reclaiming position to pick up the next electronic component.
• S120' acquires a first current pose of the robot in the first coordinate system based on the acquired current information of the robot's joints.
  • the first coordinate system may be a robot coordinate system, a first image sensor coordinate system, a second image sensor coordinate system, or any other coordinate system specified.
  • the robot coordinate system is taken as the first coordinate system as an example for further detailed description.
• The origin of the robot coordinate system is typically set at the center of the base of the robot.
  • the first current pose of the robot may be a first current pose of the center of the flange of the end joint of the robot, a first current pose of the center of the end effector of the robot, and the like.
• The current information includes the amount of motion of each joint; combined with the type and size of each joint, the first current pose of the robot can be obtained through the forward kinematics formula of the robot.
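Forward kinematics can be illustrated with a minimal planar analogue. This is not the patent's robot model: it assumes a two-joint planar arm with unit link lengths, whereas a real manipulator would use the full joint chain (e.g., via Denavit-Hartenberg parameters).

```python
import math

def forward_kinematics_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-joint arm:
    joint angles (rad) in, end-effector (x, y) out."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(forward_kinematics_2link(0.0, 0.0))   # -> (2.0, 0.0), arm fully extended
```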
• A step S180' may be performed to control the robot to drive the pin to move to the vicinity of the target jack; this saves subsequent plug-in working time.
• When the pin is to be moved to the vicinity of the first target jack on a certain PCB board, the coordinates or pose of the Mark point on the PCB board can be detected first. The Mark point is a solid circular or rectangular point with a blank surrounding area on the board; combined with the layout of the PCB board, the approximate position of the target jack can be calculated, and the electronic component can be moved to this approximate position so that the pin is located near the target jack.
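The Mark-point-plus-layout estimate reduces to adding a known layout offset to the detected Mark position. A minimal sketch, with invented board-frame coordinates:

```python
def approximate_jack_position(mark_xy, layout_offset_xy):
    """Estimate the target jack position from the detected Mark point
    plus its known offset in the PCB layout (hypothetical mm values)."""
    return (mark_xy[0] + layout_offset_xy[0],
            mark_xy[1] + layout_offset_xy[1])

# Mark point detected at (12.0, 8.0) mm; layout says the jack is (30.5, -2.0) mm away
print(approximate_jack_position((12.0, 8.0), (30.5, -2.0)))   # -> (42.5, 6.0)
```

In practice the Mark point anchors the board frame, so any jack in the layout can be located the same way once the Mark pose is known.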
• Thereafter, since the position of the first target jack is already known, it is no longer necessary to acquire the Mark point image; instead, the position of the first target jack can be used as a reference and, combined with the layout of the PCB board, the approximate position of the target jack can be estimated, and the robot is controlled to move to this position so that the pin moves near the target jack.
  • S130' acquires a third coordinate or a third current coordinate of the target jack in the first coordinate system.
• When the target jack moves relative to the second image sensor, the second image sensor 720 acquires and transmits a second current image, and the third current coordinate of the target jack is acquired according to the second current image.
• The second image sensor 720 is preferably disposed on the robot; since the second image sensor 720 moves along with the robot 730, it can be positioned closer to, or directly above, the target jack 910 to acquire an image including the target jack, thereby improving the accuracy of the coordinate extraction of the target jack and further improving the accuracy of the subsequent plug-in.
• S140' calculates, according to the first current pose, the second coordinate, and the third coordinate or third current coordinate, based on the pre-trained neural network (Neural Network, NN) model, the current amount of motion to be performed by the robot; S150' determines whether the robot satisfies the plug-in condition; if satisfied, S160' controls the robot to drive the pin into the target jack; if not, S170' controls the robot to implement the current amount of motion.
• The robot is controlled to drive the pin into the target jack to complete the plug-in action of the electronic component; then the robot moves, under the control of the processor, to the take-out position of the next electronic component, and the next electronic component is clamped. The above steps are repeated until the plug-in actions of the electronic components corresponding to all target jacks on the PCB are completed.
• The robot is controlled to implement the corresponding current amount of motion, and the above steps are repeated after the robot has done so.
• The current amount of motion that the robot needs to implement refers to the amount of motion (movement amount + rotation amount) that the end effector or end shaft of the manipulator needs to perform.
• From this amount, the inverse kinematics formula of the manipulator can be used to obtain the motion information that each joint of the manipulator needs to implement; commands for each joint's motion are then sent to that joint's motor controller, thereby controlling the motion of the manipulator.
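The end-effector-to-joint step can be illustrated with the analytic inverse kinematics of a planar two-joint arm (one of the two elbow solutions). This is a minimal analogue with unit link lengths, not the patent's manipulator model.

```python
import math

def inverse_kinematics_2link(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar two-joint arm:
    end-effector target (x, y) in, joint angles (theta1, theta2) out."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, cos_t2)))   # clamp for numeric safety
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Running the joint angles back through the forward kinematics reproduces the commanded end-effector position, which is how such a routine is typically sanity-checked.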
• A large PCB board may be difficult to complete in a single plug-in operation. Therefore, a large PCB board is usually divided virtually into a plurality of small modules, which are inserted in multiple passes, finally completing the insertion of the entire PCB board. In this case, the plug-in of one module is completed according to the plug-in method of this specific embodiment, and the steps of the plug-in method are then repeated to complete the plug-ins of the other modules in turn, until the entire PCB board is plugged in. The PCB board is then removed from the working position, and the next PCB board is moved to the plug-in working station to repeat the steps described in the plug-in method of the embodiment of the present invention.
• Plug-in based on the machine learning method can improve plug-in accuracy in various complicated environments; in addition, in some cases, it can reduce the number of motion steps during insertion and improve work efficiency.
  • the pre-trained NN model can be obtained by:
• S141' obtains an initialized NN model, which takes as input the first current pose, the second coordinate, and the third coordinate or third current coordinate in the first coordinate system, and outputs the current amount of motion to be implemented by the robot.
  • S142' acquires training data and tag data.
• The plug-in can be run multiple times (e.g., 1000 times) based on traditional visual servoing to obtain sufficient training data to train the initialized NN model.
• Under traditional visual servoing, the robot usually reaches the pose at which the component is finally inserted in a preset number of steps (for example, 3 steps).
• The NN model can be trained using the pose of the robot at each step during visual servoing, together with the corresponding coordinates of the pin and of the target jack at that step, as training data.
• Based on the pose of the robot at each step during visual servoing and the pose of the robot when the component is finally inserted, the amount of motion required for the robot to move from each pose to the inserted pose is calculated, and this amount of motion is used as the tag data for training the NN model.
  • S143' trains the initialized NN model based on the training data and the tag data to obtain a pre-trained NN model.
  • FIG. 31 is a flowchart of an embodiment of a method for acquiring a pre-trained sixth CNN model provided by the present invention.
• Obtaining the second coordinate according to the acquired first image, as described in S110' of the above embodiment, may be implemented by a traditional visual method or by a machine learning method.
  • the traditional visual mode refers to binarizing the first image, then identifying the outline of the pin from the first image, and extracting the second coordinate of the pin according to the contour.
• Implementation by the machine learning method refers to inputting the first image into the pre-trained sixth convolutional neural network (CNN) model, which directly outputs the second coordinates of the pin.
  • the pre-trained sixth CNN model is obtained by:
• S111' obtains an initialized sixth CNN model, which takes as input a first image including a pin and/or a second image or second current image including a target jack, and outputs the second coordinate of the pin in the first image and/or the third coordinate or third current coordinate of the target jack.
• The sixth CNN model may be used only to acquire the second coordinate according to the first image; or only to acquire the third coordinate or third current coordinate according to the second image or second current image; or both to acquire the second coordinate according to the first image and to acquire the third coordinate or third current coordinate according to the second image or second current image.
  • the initialization of the sixth CNN model is referred to the initialization of the NN model, and the details are not repeated here.
  • S112' acquires training data and tag data.
  • Tag data can be labeled manually or automatically.
• In the automatic method, the coordinates of the pin extracted from images including the pin during insertion trajectory planning based on the traditional visual method can be used as training annotations.
  • S113' trains the initialized sixth CNN model based on the training data and the tag data to obtain the sixth CNN model that is trained in advance.
  • FIG. 32 is a flowchart of an embodiment of a method for acquiring a pre-trained seventh CNN model provided by the present invention.
• Acquiring the third coordinate or third current coordinate according to the second image or second current image, as described in S130' of the above embodiment, may be implemented by a traditional visual method or by a machine learning method.
• The traditional visual method refers to binarizing the image, identifying the contour of the target jack from it, and calculating the third coordinate or third current coordinate of the target jack according to the contour.
  • the implementation by the machine learning method refers to inputting the second current image into the previously trained seventh CNN model, and directly outputting the third coordinate or the third current coordinate.
• For the method of acquiring the pre-trained seventh CNN model, refer to the sixth CNN model and the NN model; the details are not repeated herein.
• The present invention also provides a plug-in device that includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown in the figure).
• The processor 740 connects to the other units described above by wire or wirelessly.
  • Wireless methods may include, but are not limited to, 3G/4G, WIFI, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connections that are now known or developed in the future.
• The first image sensor and the second image sensor may include a camera, a video camera, a scanner, or another device with a related function (a mobile phone, a computer, etc.).
• When operating, the first image sensor 710 acquires a first image including a pin and transmits the first image to the processor 740.
  • the first image sensor 710 is typically disposed at a location between the PCB board insertion work position and the electronic component pickup position. After the robot picks up the electronic component at the take-up position under the control of the processor, the electronic component is moved into the field of view of the first image sensor such that the first image of the pin including the electronic component is captured by the first image sensor.
• The first image usually does not include the PCB board background, because a PCB background would complicate the image and make pin identification difficult.
• When the second image sensor 720 is in operation, it acquires a second image or second current image including the target jack and transmits it to the processor 740.
• The second image sensor may be disposed at any position from which an image including the target jack can be acquired; for example, around the PCB board or on the robot. As shown in FIG. 20, when the second image sensor 720 is disposed around the PCB board 900, its position is fixed relative to the target jack 910, so the second image only needs to be acquired once. As shown in FIG. 19, when the second image sensor 720 is disposed on the robot 730, the relative pose of the target jack 910 changes constantly as the robot 730 moves, so the second current image needs to be reacquired after each movement.
• The case in which the second sensor is disposed on the robot is described in further detail in the following embodiments.
• The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 operates (i.e., executes a computer program stored in the memory), such as steps S110' through S170' shown in the corresponding figure.
• When operating, the processor 740 further implements the various steps of the pre-trained NN model acquisition method, the pre-trained sixth CNN model acquisition method, and/or the pre-trained seventh CNN model acquisition method described in the above embodiments.
  • each of the above methods can also be executed by a processor of a device other than the plug-in device.
• The present invention provides an insertion method, the insertion method comprising:
• S210' obtains, based on the pre-trained sixth CNN model, the second coordinates of the pin according to the acquired first image including the pin.
  • S220' acquires a first current pose of the robot in the first coordinate system according to current information of the joints of the robot.
• A step S280' may be performed to control the robot to drive the pin to move to the vicinity of the target jack; this saves subsequent plug-in working time.
• When the pin is moved to the vicinity of the first target jack on a certain PCB board, the coordinates or pose of the Mark point on the PCB board can be detected first. The Mark point is a solid circular or rectangular point with a blank surrounding area on the board; combined with the layout of the PCB board, the approximate position of the target jack can be calculated, and the electronic component can be moved to this approximate position so that the pin is located near the target jack.
• Thereafter, since the position of the first target jack is already known, it is no longer necessary to acquire the Mark point image; instead, the position of the first target jack can be used as a reference and, combined with the layout of the PCB board, the approximate position of the target jack can be estimated, and the robot is controlled to move to this position so that the pin moves near the target jack.
• S230' obtains, based on the pre-trained sixth CNN model or the pre-trained seventh CNN model, a third coordinate or third current coordinate of the target jack according to the acquired second image or second current image including the target jack.
• S240' calculates, according to the first current pose, the second coordinate, and the third coordinate or third current coordinate, the current amount of motion that the robot needs to implement; S250' determines whether the robot satisfies the insertion condition; if satisfied, S260' controls the robot to drive the pin into the target jack; if not, S270' controls the robot to implement the current amount of motion.
• Plug-in based on the machine learning method can improve plug-in accuracy in various complicated environments; in addition, in some cases, it can reduce the number of motion steps during insertion and improve work efficiency.
  • calculating the current amount of motion that the robot needs to implement according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate may be implemented by a conventional visual servo method or by machine learning. Method implementation.
• The traditional way is to obtain the current pose of the target jack and the current pose of the pin; calculate the required pose of the pin after motion; calculate, according to the calibration result between the pin and the manipulator, the pose of the manipulator after the motion is performed; calculate the current amount of motion (movement amount + rotation amount) that the manipulator needs to perform according to the current pose of the manipulator and the required pose after motion; and control the manipulator to implement the current amount of motion. These steps are then repeated until, once the current amount of motion is small or the preset number of steps has been moved, the insertion condition is judged to be satisfied, and the robot is controlled to drive the pin to perform the plug-in.
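The judgement at the end of this loop (remaining motion small, or the preset step count reached) can be sketched as a simple predicate. The thresholds below are illustrative, not values from the patent:

```python
import numpy as np

def insertion_condition(motion, step_count, motion_tol=0.05, max_steps=3):
    """Decide whether to switch from servoing to insertion: either the
    remaining motion is small, or the preset number of servo steps
    (e.g., 3) has been taken."""
    return bool(np.linalg.norm(motion) < motion_tol or step_count >= max_steps)

print(insertion_condition(np.array([0.01, 0.0, 0.0]), 1))   # -> True (motion small)
print(insertion_condition(np.array([1.0, 0.0, 0.0]), 3))    # -> True (step limit reached)
print(insertion_condition(np.array([1.0, 0.0, 0.0]), 1))    # -> False (keep servoing)
```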
  • the machine learning method is based on a pre-trained NN model.
  • the plug-in device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown).
• The processor 740 connects to the other units described above by wire or wirelessly.
• When operating, the first image sensor 710 acquires a first image including a pin and transmits the first image to the processor 740.
• When the second image sensor 720 is in operation, it acquires a second image or second current image including the target jack and transmits it to the processor 740.
• When operating, the robot 730 transmits the current information of its joints to the processor 740; implements the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin to be inserted into the target jack.
• The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 operates (i.e., executes a computer program stored in the memory), such as steps S210' through S270' shown in the corresponding figure.
• the processor 740 of the plug-in device, when operating, further implements the steps of the pre-trained NN model acquisition method described in the above embodiments, the pre-trained sixth CNN model acquisition method, and/or the pre-trained seventh CNN model acquisition method.
• each of the above methods may also be performed by a processor of a device other than the plug-in device.
  • the present invention further provides an insertion method, the insertion method comprising:
• S310' obtains, according to the acquired third current image including the pin and the target jack, a second coordinate or a second current coordinate of the pin in the first coordinate system, and a third coordinate or a third current coordinate of the target jack in the first coordinate system;
• the third image sensor 750 can simultaneously acquire the third current image including the pin 810 and the target jack 910;
• the method may further include, before S310', a step S370' of controlling the robot to drive the pin to move to the vicinity of the target jack.
• for example, before the pin is inserted into the first target jack of the PCB board, the robot is moved to the Mark point of the PCB board (the Mark point is a solid circular or rectangular point with a blank surrounding area); a second image sensor fixed at the end of the robot acquires a Mark point image, and Mark point position detection is performed; based on the detected Mark position and the PCB layout map, the approximate position of the target jack is calculated, and when the electronic component moves to this position, the pin is located near the target jack. Afterwards, since the position of the first target jack is already known, it is no longer necessary to acquire the Mark point image: using the position of the first target jack as a reference and combining it with the PCB layout, the approximate coordinates of the next target jack can be calculated, and the robot is controlled to move to that position so that the pin moves near the target jack.
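The coarse-positioning arithmetic described above can be sketched as follows. The jack names and layout offsets are hypothetical; in the patent the offsets come from the PCB layout map and the Mark position from image detection.

```python
import numpy as np

# Hypothetical layout: jack offsets relative to the PCB Mark point, in mm,
# as would be read from the board layout map.
PCB_LAYOUT = {"J1": np.array([12.5, 30.0]), "J2": np.array([12.5, 35.0])}

def approximate_jack_position(mark_position_mm, jack_id, layout=PCB_LAYOUT):
    """Coarse target = detected Mark position + layout offset.
    Moving the robot here brings the pin near the target jack."""
    return np.asarray(mark_position_mm) + layout[jack_id]

def next_jack_from_reference(first_jack_position_mm, jack_id, layout=PCB_LAYOUT):
    """Once the first jack's position is known, later jacks are computed from
    it and the layout, without re-acquiring the Mark point image."""
    ref = np.asarray(first_jack_position_mm) - layout["J1"]
    return ref + layout[jack_id]
```

The second function is why the Mark image is only needed once per board: every later target is a fixed layout offset from an already-measured jack.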
• as shown in FIG. 21, when the third image sensor 750 is disposed on the end joint of the robot (it may also be disposed on other joints of the robot, not shown), the third image sensor 750 is fixed relative to the pin 810 and moves relative to the target jack 910; therefore, the second coordinate of the pin and the third current coordinate of the target jack can be acquired based on the third current image acquired and transmitted by the third image sensor 750.
• when the third image sensor 750 is disposed at a position around the PCB board 900, the third image sensor 750 is fixed relative to the PCB board 900 and moves relative to the pin; the third current image can likewise be acquired.
  • the third image sensor 750 is preferably disposed at the periphery of the PCB board 900 so that a third current image including the pins can be easily obtained.
  • S320' obtains a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
• S330′ calculates, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, the current motion amount to be implemented by the robot based on the pre-trained NN model;
• S340′ determines whether the robot meets the plug-in condition;
• S350′ if satisfied, the robot is controlled to drive the pin to be inserted into the target jack;
• S360′ if not satisfied, the robot is controlled to implement the current motion amount.
• the input of the pre-trained NN model includes: a first current pose, a second coordinate or a second current coordinate, and a third coordinate or a third current coordinate. Except that the input in the first embodiment includes the second coordinate, while this embodiment may include either the second coordinate or the second current coordinate, the pre-trained NN model in this embodiment may be the same as the pre-trained NN model described in the first embodiment; the model structure and training method differ only slightly in the input data and training data.
  • the pre-trained NN model acquisition method includes:
• S210' obtains an initialized NN model; its input is the first current pose in the first coordinate system, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, and its output is the current motion amount to be implemented by the robot.
  • S220' acquires training data and tag data.
  • S230' trains the initialized NN model based on the training data and the tag data to obtain a pre-trained NN model.
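The acquisition steps S210' to S230' can be sketched in miniature with a linear regressor standing in for the NN model and synthetic training data; the 9-dimensional input layout, the `predict_motion` helper, and the label-generating relation are all assumptions for illustration, not the patent's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# S220': acquire training data and tag data.
# Input rows: [pose(3) | pin coordinate(3) | jack coordinate(3)];
# labels here follow a known linear relation, purely for illustration.
X = rng.normal(size=(256, 9))
W_true = np.zeros((9, 3))
W_true[3:6] = -np.eye(3)      # motion = jack coordinate - pin coordinate
W_true[6:9] = np.eye(3)
Y = X @ W_true

# S210'/S230': initialize the model and train it to regress the motion amount.
W = np.zeros((9, 3))
for _ in range(500):
    grad = X.T @ (X @ W - Y) / len(X)
    W -= 0.1 * grad           # plain gradient descent

def predict_motion(pose, pin_xyz, jack_xyz, W=W):
    """Map (first current pose, pin coordinate, jack coordinate) to the
    current motion amount the robot should implement."""
    return np.concatenate([pose, pin_xyz, jack_xyz]) @ W
```

A real system would replace the linear map with the NN architecture of the embodiment and use recorded insertion trajectories as training data.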
• the plug-in based on the machine learning method can improve the accuracy of the plug-in in various complicated environments; in addition, in some cases, the number of motion steps during the plug-in can be reduced, improving work efficiency.
  • FIG. 37 is a flow chart of an embodiment of a method for acquiring a pre-trained eighth CNN model provided by the present invention.
• in the above embodiment, acquiring the second coordinate or the second current coordinate of the pin and the third coordinate or the third current coordinate of the target jack according to the third current image including the pin and the target jack (S410') can be implemented by a traditional visual servo method or by a machine learning method.
• the traditional visual method refers to binarizing the third current image, then identifying the contours of the pin and the target jack from the third current image, and calculating, according to the contours, the second coordinate or the second current coordinate of the pin and the third coordinate or the third current coordinate of the target jack.
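In miniature, the binarize-then-locate pipeline might look like the sketch below; a real implementation would extract separate contours for the pin and the target jack (e.g. with OpenCV) rather than a single foreground centroid, so the function name and interface here are assumptions.

```python
import numpy as np

def binarize_and_locate(image, threshold=128):
    """Traditional pipeline in miniature: binarize the image, then take the
    centroid of the foreground region as the detected (row, col) coordinate.
    Returns None when no foreground pixel exceeds the threshold."""
    mask = image > threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (ys.mean(), xs.mean())
```
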
• the machine learning method refers to inputting the third current image into the pre-trained eighth CNN model, which directly outputs the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate.
  • the eighth CNN model may include LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, Faster-RCNN, FCN, Mask-RCNN, YOLO, SSD, YOLO2.
  • the previously trained eighth CNN model is obtained by:
• S311' obtains an initialized eighth CNN model, whose input is a third current image including a pin and a target jack, and whose output is the second coordinate or the second current coordinate of the pin in the third current image, and the third coordinate or the third current coordinate of the target jack;
  • S312' acquires training data and tag data
  • Tag data can be labeled manually or automatically.
• in the automatic method, the coordinates of the pin and the target jack extracted from images including the pin and the target jack during insertion trajectory planning based on the conventional visual method can be used as training annotations.
  • S313' trains the initialized eighth CNN model based on the training data and the tag data to obtain the eighth CNN model that is trained in advance.
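The automatic annotation idea in S312' amounts to pairing recorded images with the coordinates the conventional visual detector produces; the sketch below shows that pairing, where `detect` is a placeholder for the traditional detector and the data shapes are assumptions.

```python
def auto_label(images, detect):
    """Automatic labeling: run the conventional visual detector over images
    recorded during trajectory planning and keep (image, coordinates) pairs
    as training data and tag data. Frames where detection fails are skipped."""
    data = []
    for img in images:
        coords = detect(img)
        if coords is not None:
            data.append((img, coords))
    return data
```

This is what makes the CNN training cheap to set up: the slower traditional pipeline labels the data once, and the trained model then replaces it at run time.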
  • an embodiment of the present invention further provides a plug-in device 700.
  • the plug-in device 700 includes a third image sensor 750, a robot 730, a processor 740, and a memory (not shown).
• the processor 740 is connected to the robot 730 and the third image sensor 750 by wire or wirelessly;
• the third image sensor 750 and the robot 730 are calibrated in advance.
  • the third image sensor 750 when in operation, acquires a third current image including the pin and the target jack, and transmits the third current image to the processor 740.
• the third image sensor may be disposed at any position capable of acquiring an image including the pin and the target jack, for example, at a position around the PCB board or on the robot. As shown in FIG. 16, when the third image sensor 750 is disposed on the periphery of the PCB board 900, the third image sensor 750 is fixed in position relative to the target jack 910 and moves relative to the pin 810; as shown in FIG. 15, when the third image sensor 750 is disposed on the robot 730, the pose of the target jack 910 changes constantly with the movement of the robot 730, while the pose of the pin is relatively fixed.
  • the third image sensor 750 is disposed around the periphery of the PCB board 900, which will be described in further detail in the following embodiments.
• the robot 730, when operating, transmits current information of its joints to the processor 740; implements the current motion amount based on the control of the processor 740; and, based on the control of the processor 740, drives the pin 810 to be inserted into the target jack 910.
  • steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operational (i.e., executing a computer program stored in memory), such as steps S310' through S370' shown in FIG.
• the processor 740 of the plug-in device, when operating, further implements the steps of the pre-trained NN model acquisition method described in the above embodiments and the pre-trained eighth CNN model acquisition method. Furthermore, each of the above methods can also be executed by a processor of a device other than the plug-in device.
  • the present invention further provides an insertion method, the insertion method comprising:
• S410' acquires, according to the third current image including the pin and the target jack, the second coordinate or the second current coordinate of the pin, and the third coordinate or the third current coordinate of the target jack, based on the pre-trained eighth CNN model;
• the third image sensor 750 can simultaneously acquire the third current image including the pin 810 and the target jack 910;
• the method may further include, before S410', a step S470' of controlling the robot to drive the pin to move to the vicinity of the target jack.
• the third image sensor 750 may be disposed on the robot 730 or at the periphery of the PCB board 900; preferably, the third image sensor 750 is disposed at the periphery of the PCB board 900 so that the third current image including the pin can be conveniently obtained.
  • S420' obtains a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
• S430' calculates, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, the current motion amount that the robot needs to implement;
• S440' determines whether the robot satisfies the insertion condition;
• S450' if satisfied, the robot is controlled to drive the pin into the target jack;
• S460' if not satisfied, the robot is controlled to implement the current motion amount.
• the plug-in based on the machine learning method can improve the accuracy of the plug-in in various complicated environments; in addition, in some cases, the number of motion steps during the plug-in can be reduced, improving work efficiency.
• calculating the current motion amount that the robot needs to implement according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate may be realized by a traditional method, such as traditional visual servoing, or by a machine learning method; the machine learning method is preferred, because it can improve the accuracy and efficiency of the current motion calculation.
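As a sketch of the traditional computation, the current motion amount can be taken as the pose difference between the target jack and the pin; here a pose is simplified to (x, y, z, yaw), which is an assumption for illustration, not the patent's pose representation.

```python
import numpy as np

def current_motion_amount(pin_pose, jack_pose):
    """Traditional computation of the current motion amount: the translation
    and rotation that carry the pin pose onto the jack pose. Poses are
    (x, y, z, yaw) here, a simplification of a full 6-DOF pose; a real
    controller would also map this delta through the robot's kinematics."""
    pin = np.asarray(pin_pose, dtype=float)
    jack = np.asarray(jack_pose, dtype=float)
    translation = jack[:3] - pin[:3]
    rotation = (jack[3] - pin[3] + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return translation, rotation
```

The angle wrap matters in the iterative loop: without it, a yaw error of 350 degrees would command a nearly full turn instead of a 10-degree correction.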
  • the machine learning method is based on a pre-trained NN model.
• for a description of the NN model, refer to Embodiment 9 or Embodiment 7; the description is not repeated here.
  • an embodiment of the present invention further provides a plug-in device 700.
  • the plug-in device 700 includes a third image sensor 750, a robot 730, a processor 740, and a memory (not shown).
• the processor 740 is connected to the robot 730 and the third image sensor 750 by wire or wirelessly.
• the third image sensor 750 may include a camera, a video camera, a scanner, or another device with an image-acquisition function (a mobile phone, a computer, etc.).
  • the third image sensor 750 when in operation, acquires a third current image including the pin and the target jack, and transmits the third current image to the processor 740.
• the robot 730, when operating, transmits current information of its joints to the processor 740; implements the current motion amount based on the control of the processor 740; and, based on the control of the processor 740, drives the pin 810 to be inserted into the target jack 910.
  • steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operational (i.e., executing a computer program stored in memory), such as steps S410' through S460' shown in FIG.
• the processor 740 of the plug-in device, when operating, further implements the steps of the pre-trained NN model acquisition method described in the above embodiments and the pre-trained eighth CNN model acquisition method. Furthermore, each of the above methods can also be executed by a processor of a device other than the plug-in device.
• the present invention further provides a plug-in device, the plug-in device comprising: a relative pose acquisition program module, a first current pose acquisition program module, a second current pose acquisition program module, a third pose and third current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
• the relative pose acquisition program module is configured to acquire, according to the acquired first image including the pin, a relative pose of the pin relative to the robot in the first coordinate system;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the current information of the acquired joints of the robot;
  • the second current pose acquisition program module is configured to acquire a second current pose of the pin in the first coordinate system according to the first current pose and the relative pose;
  • the third pose and the third current pose acquisition program module are configured to acquire a third pose of the target jack in the first coordinate system according to the second image or the second current image including the target jack Or the third current pose;
• the current motion amount acquisition program module is configured to calculate, based on the pre-trained NN model, the current motion amount to be implemented by the robot according to the first current pose, the second current pose, and the third pose or the third current pose;
• the judgment program module is configured to determine whether the robot meets the plug-in condition;
• the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack;
• the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
• the plug-in device includes: a relative pose acquisition program module, a first current pose acquisition program module, a second current pose acquisition program module, a third pose and third current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
• the relative pose acquisition program module is configured to acquire, according to the acquired first image including the pin, a relative pose of the pin relative to the robot in the first coordinate system based on the pre-trained first CNN model;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
  • the second current pose acquisition program module is configured to acquire a second current pose of the pin in the first coordinate system according to the first current pose and the relative pose;
  • the third pose and the third current pose acquisition program module are configured to be based on the second CNN model or the first CNN model that is trained in advance according to the second image or the second current image including the target jack Obtaining a third pose or a third current pose of the target jack in the first coordinate system;
• the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second current pose, and the third pose or the third current pose, the current motion amount that the robot needs to implement;
• the judgment program module is configured to determine whether the robot meets the plug-in condition; the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack; the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
  • the plug-in device includes: a first image acquisition program module, a first current pose acquisition program module, a second image or second current image acquisition program module, a current motion amount acquisition program module, a judgment program module, and a control plug-in program module. And the control motion program module;
  • the first image acquisition program module is configured to acquire a first image including a pin
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the current information of the acquired joints of the robot;
  • the second image or second current image acquisition program module is configured to acquire a second image or a second current image including a target jack;
• the current motion amount acquisition program module is configured to calculate, based on the trained first CNN model, the current motion amount to be implemented by the robot according to the first image, the first current pose, and the second image or the second current image;
  • the judgment program module is configured to determine whether the robot meets the plug-in condition;
  • the control plug-in program module is configured to control the robot to drive the pin to be inserted into the target jack if satisfied
• the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
• the plug-in device includes: a second pose or second current pose and third pose or third current pose acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
• the second pose or second current pose and third pose or third current pose acquisition program module is configured to acquire, according to the acquired third current image including the pin and the target jack, a second pose or a second current pose of the pin in the first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the current information of the acquired joints of the robot;
• the current motion amount acquisition program module is configured to calculate, based on the pre-trained NN model, the current motion amount to be implemented by the robot according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose;
• the judgment program module is configured to determine whether the robot meets the plug-in condition;
• the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack;
• the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
• the plug-in device includes: a second pose or second current pose and third pose or third current pose acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
• the second pose or second current pose and third pose or third current pose acquisition program module is configured to obtain, according to the third current image including the pin and the target jack, based on the pre-trained fourth CNN model, a second pose or a second current pose of the pin in the first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
• the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose, the current motion amount that the robot needs to implement;
  • the determining program module is configured to determine whether the robot meets the plug-in condition; and
• the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack; the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
  • the plug-in device includes: a third image acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
  • the third image acquisition program module is configured to acquire a third current image including a pin and a target jack;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the obtained current information of each joint of the robot;
• the current motion amount acquisition program module is configured to calculate, according to the third current image and the first current pose, the current motion amount to be implemented by the robot based on the pre-trained fifth CNN model;
• the judgment program module is configured to determine whether the robot meets the insertion condition;
• the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack;
• the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
  • the plug-in device includes: a second coordinate acquiring program module, a first current pose acquiring program module, a third coordinate or a third current coordinate acquiring program module, a current motion amount acquiring program module, a determining program module, and a control plug-in program module. And controlling the motion program module;
  • the second coordinate acquiring program module is configured to acquire a second coordinate of the pin according to the acquired first image including the pin;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the current information of the acquired joints of the robot;
  • the third coordinate or third current coordinate acquiring program module is configured to acquire a third coordinate or a third current coordinate of the target jack according to the second image or the second current image including the target jack;
• the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, the current motion amount to be implemented by the robot based on the pre-trained NN model;
• the judgment program module is configured to determine whether the robot meets the plug-in condition;
• the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack;
• the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
  • the plug-in device includes: a second coordinate acquiring program module, a first current pose acquiring program module, a third coordinate or a third current coordinate acquiring program module, a current motion amount acquiring program module, a determining program module, and a control plug-in program module. And controlling the motion program module;
• the second coordinate acquiring program module is configured to acquire, according to the acquired first image including the pin, the second coordinate of the pin based on the pre-trained sixth CNN model;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
  • the third coordinate or third current coordinate acquisition program module is configured to be based on a pre-trained seventh CNN model or the pre-trained sixth CNN according to the second image or the second current image including the target jack a model that obtains a third coordinate or a third current coordinate of the target jack;
• the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, the current motion amount that the robot needs to implement;
• the judgment program module is configured to determine whether the robot meets the plug-in condition;
• the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack;
• the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
  • the plug-in device includes: a second coordinate or a second current coordinate and a third coordinate or third current coordinate acquiring program module, a first current pose acquiring program module, a current motion amount acquiring program module, a determining program module, and a control plug-in Program module and control motion program module;
• the second coordinate or second current coordinate and third coordinate or third current coordinate acquiring program module is configured to acquire, according to the acquired third current image including the pin and the target jack, the second coordinate or the second current coordinate of the pin, and the third coordinate or the third current coordinate of the target jack;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the current information of the acquired joints of the robot;
• the current motion amount acquisition program module is configured to calculate, based on the pre-trained NN model, the current motion amount to be implemented by the robot according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate;
• the judgment program module is configured to determine whether the robot meets the plug-in condition;
• the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack;
• the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount; or
  • the plug-in device includes: a second coordinate or a second current coordinate and a third coordinate or third current coordinate acquiring program module, a first current pose acquiring program module, a current motion amount acquiring program module, a determining program module, and a control plug-in Program module and control motion program module;
• the second coordinate or second current coordinate and third coordinate or third current coordinate acquiring program module is configured to obtain, according to the third current image including the pin and the target jack, based on the pre-trained eighth CNN model, the second coordinate or the second current coordinate of the pin, and the third coordinate or the third current coordinate of the target jack;
  • the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
• the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, the current motion amount that the robot needs to implement;
• the judgment program module is configured to determine whether the robot meets the plug-in condition; the control plug-in program module is configured to, if satisfied, control the robot to drive the pin to be inserted into the target jack; the control motion program module is configured to, if not satisfied, control the robot to implement the current motion amount.
  • the present invention also provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the plug-in method described in any of the above embodiments.
• the present invention also provides an electronic device including a memory 760, a processor 740, and a computer program 770 stored in the memory 760 and executable on the processor 740.
  • the plug-in method described in any one of the above embodiments is implemented when the processor executes the computer program.
  • the computer program can be partitioned into one or more modules/units, which are stored in the memory (not shown) and executed by the processor 740 to complete the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing a particular function, the instruction segments being used to describe the process of trajectory planning of the computer program in the plug-in device.
  • the computer program may be divided into the plug-in device, including: a relative pose acquisition program module, a first current pose acquisition program module, a second current pose acquisition program module, a third pose or third current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module. The specific functions of each module are as follows: the relative pose acquisition program module is configured to acquire, according to an acquired first image including the pin, the relative pose of the pin relative to the robot in the first coordinate system; the first current pose acquisition program module is configured to acquire, according to the acquired current information of each joint of the robot, the first current pose of the robot in the first coordinate system; the second current pose acquisition program module is configured to acquire, according to the first current pose and the relative pose, a second current pose of the pin in the first coordinate system; the third pose or third current pose acquisition program module is configured to acquire, according to the second image or second current image including the target jack, a third pose or a third current pose of the target jack in the first coordinate system.
  • the electronic device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the electronic device can include, but is not limited to, a processor and a memory. It will be understood by those skilled in the art that the schematic diagram is merely an example of an electronic device and does not constitute a limitation on the electronic device, which may include more or fewer components than those illustrated, or combine some components, or have different components.
  • the electronic device may also include an input and output device, a network access device, a bus, and the like.
  • the processor 740 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory may be a storage device built into the plug-in device or an electronic device, such as a hard disk or a memory.
  • the memory may also be an external storage device of the plug-in device or the electronic device, such as a plug-in hard disk, a smart memory card (SMC), a Secure Digital (SD) card, a flash card, etc. equipped on the plug-in device or the electronic device.
  • the memory may also include both an internal storage unit of the plug-in device 700 or the electronic device and an external storage device.
  • the memory is for storing the computer program and other programs and data required by the electronic device or the plug-in device.
  • the memory can also be used to temporarily store data that has been output or is about to be output.
  • FIGS. 19-23 are merely examples of the plug-in device and the electronic device, and do not constitute a limitation of the plug-in device and the electronic device, which may include more or fewer components than illustrated, or combine certain components, or have different components; for example, the plug-in device may also include a memory, input and output devices, and the like.
  • each block in the flowchart or block diagram can represent a module, a program segment, or a portion of code, which includes one or more executable instructions.
  • the functions noted in the blocks may also occur in a different order than that illustrated in the drawings. For example, two successively represented blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Manipulator (AREA)

Abstract

A plug-in method and a plug-in device. The plug-in method comprises: acquiring, according to an acquired first image comprising a pin, a relative position of the pin relative to a mechanical arm in a first coordinate system (S110); acquiring, according to acquired current information of each joint of the mechanical arm, a first current position of the mechanical arm in the first coordinate system (S120); acquiring, according to a second image or a second current image comprising a target socket, a third position or a third current position of the target socket in the first coordinate system (S130); calculating, on the basis of a pre-trained NN model and according to the first current position, the relative position, and the third position or the third current position, a current amount of movement to be performed by the mechanical arm (S140); determining whether the mechanical arm meets a plug-in condition (S150); if yes, controlling the mechanical arm to drive the pin to plug into the target socket (S160); and if not, controlling the mechanical arm to perform the current amount of movement (S170). The plug-in performed on the basis of a machine learning method can improve the accuracy and efficiency of plug-in operations in various complex work environments.

Description

Plug-in method and plug-in device

Technical field

The present invention relates to the field of automation technology, and in particular to a plug-in method and a plug-in device.

Background

In the field of industrial automation, automatic plug-in equipment is used to insert the pins of electronic components into a PCB automatically. Current automatic plug-in equipment can work in two modes. The first is blind insertion: the positions of the picked-up electronic component and of the PCB are computed in advance, and insertion is carried out without visual guidance, relying on mechanical precision alone; this mode places high demands on the positional accuracy of the machine and of each component. The second is a vision-based insertion method, in which visual guidance is added during the insertion process. Although this mode relaxes the mechanical and positional accuracy requirements to some extent, when the external environment or the component type changes during insertion, the insertion accuracy may drop; alternatively, to adapt to the various situations while maintaining accuracy, the relevant parameters must be re-tuned, which reduces efficiency.

Summary of the invention

It is an object of the present invention to provide a plug-in method and a plug-in device. Performing insertion based on a machine learning method can improve the accuracy and efficiency of insertion in a variety of complex environments.
A first aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to an acquired first image including a pin, a relative pose of the pin relative to a robot in a first coordinate system;

acquiring, according to acquired current information of each joint of the robot, a first current pose of the robot in the first coordinate system;

acquiring, according to an acquired second image or second current image including a target jack, a third pose or a third current pose of the target jack in the first coordinate system;

calculating, according to the first current pose, the relative pose, and the third pose or third current pose, and based on a pre-trained NN model, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
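The loop described in this first aspect, estimate the poses, let the NN model output a motion, test the plug-in condition, then either insert or move and repeat, can be sketched as follows. The `robot` and `nn_model` interfaces and the distance-threshold plug-in condition are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def insertion_loop(robot, nn_model, max_steps=100, tol=1e-3):
    """Visual-servoing sketch of the first-aspect plug-in method.

    `robot` and `nn_model` are hypothetical interfaces: the patent only
    specifies what is estimated (poses) and what the NN outputs (a motion).
    """
    relative_pose = robot.pin_pose_from_image()   # relative pose, from the first image
    target_pose = robot.jack_pose_from_image()    # third (current) pose of the jack
    for _ in range(max_steps):
        current_pose = robot.pose_from_joints()   # first current pose
        pin_pose = current_pose + relative_pose
        # Plug-in condition (assumed): pin aligned with the jack within tolerance
        if np.linalg.norm(pin_pose - target_pose) < tol:
            robot.insert()                        # drive the pin into the jack
            return True
        # NN model maps the three poses to the current amount of motion
        motion = nn_model.predict(current_pose, relative_pose, target_pose)
        robot.move(motion)                        # implement the current motion
    return False
```

With a proportional stand-in for the NN model (motion = k * (target - pin)), the loop closes the pin-to-jack error geometrically and terminates with the insertion.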
A second aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to an acquired first image including a pin and based on a pre-trained first CNN model, a relative pose of the pin relative to a robot in a first coordinate system;

acquiring, according to acquired current information of each joint of the robot, a first current pose of the robot in the first coordinate system;

acquiring, according to an acquired second image or second current image including a target jack, and based on a pre-trained second CNN model or the first CNN model, a third pose or a third current pose of the target jack in the first coordinate system;

calculating, according to the first current pose, the relative pose, and the third pose or third current pose, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
A third aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring a first image including a pin;

acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in a first coordinate system;

acquiring a second image or a second current image including a target jack;

calculating, according to the first image, the first current pose, and the second image or second current image, and based on a pre-trained third CNN model, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
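This third aspect maps the two images and the first current pose directly to a motion with a single CNN. A minimal stand-in, with one hand-rolled convolution per image in place of a full trained network, might look like this; the kernels, weight matrix `W`, and bias `b` are illustrative parameters, not values from the patent.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation producing one feature map."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def motion_from_images(first_img, second_img, current_pose, params):
    """End-to-end stand-in for the third CNN model: one pooled conv feature
    per image, concatenated with the current pose, then a linear head that
    outputs the motion command. A real model would be deep and trained."""
    f1 = conv2d(first_img, params["k1"]).mean()   # feature from the pin image
    f2 = conv2d(second_img, params["k2"]).mean()  # feature from the jack image
    features = np.concatenate([[f1, f2], current_pose])
    return params["W"] @ features + params["b"]
```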
A fourth aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to an acquired third current image including a pin and a target jack, a second pose or a second current pose of the pin in a first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;

acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in the first coordinate system;

calculating, according to the first current pose, the second pose or second current pose, and the third pose or third current pose, and based on a pre-trained NN model, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
A fifth aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to a third current image including a pin and a target jack, and based on a pre-trained fourth CNN model, a second pose or a second current pose of the pin in a first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;

acquiring, according to current information of each joint of a robot, a first current pose of the robot in the first coordinate system;

calculating, according to the first current pose, the second pose or second current pose, and the third pose or third current pose, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
A sixth aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring a third current image including a pin and a target jack;

acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in a first coordinate system;

calculating, according to the third current image and the first current pose, and based on a pre-trained fifth CNN model, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
A seventh aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to an acquired first image including a pin, a second coordinate of the pin;

acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in the first coordinate system;

acquiring, according to an acquired second image or second current image including a target jack, a third coordinate or a third current coordinate of the target jack;

calculating, according to the first current pose, the second coordinate, and the third coordinate or third current coordinate, and based on a pre-trained NN model, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
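In the coordinate-based aspects, the pin and the jack are located by coordinates rather than full poses. The patent does not say how a detection in the image becomes a coordinate in the first coordinate system; one common assumption is a calibrated pinhole camera viewing the PCB plane at a known depth, sketched below with a hypothetical intrinsic matrix `K` and depth `z`.

```python
import numpy as np

def pixel_to_plane(u, v, K, z):
    """Back-project a detected pixel (u, v) onto a plane at known depth z.

    K is the 3x3 camera intrinsic matrix. This standard pinhole-camera
    model is an assumption; the patent leaves the image-to-coordinate
    mapping unspecified.
    """
    uv1 = np.array([u, v, 1.0])
    ray = np.linalg.solve(K, uv1)  # viewing ray in camera coordinates
    return z * ray                 # scale so the point lies at depth z
```

A hand-to-eye calibration (also assumed, not detailed in the patent) would then carry this camera-frame point into the first coordinate system.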
An eighth aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to an acquired first image including a pin and based on a pre-trained first CNN model, a second coordinate of the pin;

acquiring, according to current information of each joint of a robot, a first current pose of the robot in the first coordinate system;

acquiring, according to an acquired second image or second current image including a target jack, and based on the pre-trained first CNN model or a pre-trained second CNN model, a third coordinate or a third current coordinate of the target jack;

calculating, according to the first current pose, the second coordinate, and the third coordinate or third current coordinate, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
A ninth aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to an acquired third current image including a pin and a target jack, a second coordinate or a second current coordinate of the pin in a first coordinate system, and a third coordinate or a third current coordinate of the target jack in the first coordinate system;

acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in the first coordinate system;

calculating, according to the first current pose, the second coordinate or second current coordinate, and the third coordinate or third current coordinate, and based on a pre-trained NN model, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
A tenth aspect of the present invention provides a plug-in method, the plug-in method comprising:

acquiring, according to an acquired third current image including a pin and a target jack, and based on a pre-trained third CNN model, a second coordinate or a second current coordinate of the pin, and a third coordinate or a third current coordinate of the target jack;

acquiring, according to current information of each joint of a robot, a first current pose of the robot in the first coordinate system;

calculating, according to the first current pose, the second coordinate or second current coordinate, and the third coordinate or third current coordinate, the current amount of motion to be implemented by the robot; determining whether the robot meets the plug-in condition; if so, controlling the robot to drive the pin to insert into the target jack; if not, controlling the robot to implement the current amount of motion.
An eleventh aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the plug-in method described in any one of the above embodiments.

A twelfth aspect of the present invention provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the plug-in method described in any one of the above embodiments.
A thirteenth aspect of the present invention provides a plug-in device, the plug-in device including a first image sensor, a second image sensor, a robot, and a processor;

the processor is respectively coupled to the first image sensor, the second image sensor, and the robot;

the first image sensor, in operation, acquires a first image including a pin and sends the first image to the processor;

the second image sensor, in operation, acquires a second image or a second current image including a target jack and sends the second image or second current image to the processor;

the robot, in operation, sends current information of each joint of the robot to the processor, moves by the current amount of motion under the control of the processor, and drives the pin to insert into the target jack under the control of the processor;

the processor, in operation, implements the plug-in method described in any one of the first to fourth aspects above, or the plug-in method described in any one of the ninth to eleventh aspects above.
A fourteenth aspect of the present invention provides a plug-in device, the plug-in device including a third image sensor, a robot, and a processor;

the processor is respectively coupled to the third image sensor and the robot;

the third image sensor, in operation, acquires a third current image including a pin and a target jack and sends the third current image to the processor;

the robot, in operation, sends current information of each joint of the robot to the processor, moves by the current amount of motion under the control of the processor, and drives the pin to insert into the target jack under the control of the processor;

the processor, in operation, implements the plug-in method described in any one of the fifth to eighth aspects above, or the plug-in method described in any one of the twelfth to fourteenth aspects.
A fourteenth aspect of the present invention further provides a plug-in apparatus, the plug-in apparatus including functional modules; for the description of each functional module, refer to the plug-in methods above.
A fifteenth aspect of the present invention provides a method for acquiring the pre-trained NN model in the plug-in method described in the first, second, fifth, or sixth aspect above, the pre-trained NN model being acquired as follows:

acquiring an initialized NN model, the NN model outputting, for an input first current pose, relative pose, second current pose or second pose, and third pose or third current pose, the current amount of motion to be implemented by the robot;

acquiring training data and label data;

training the initialized NN model based on the training data and the label data to acquire the pre-trained NN model.
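The three steps of this aspect (initialize a model, collect training and label data, fit) can be sketched with ordinary gradient descent. A linear map stands in for the NN model here, and the feature layout (concatenated poses), shapes, and hyper-parameters are illustrative assumptions only.

```python
import numpy as np

def train_motion_model(X, Y, lr=0.1, epochs=500):
    """Fit a linear model W mapping pose features X (n_samples x n_features,
    e.g. concatenated first current pose, relative pose, and third pose) to
    motion labels Y by gradient descent on the mean-squared error."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(X.shape[1], Y.shape[1]))  # initialized model
    for _ in range(epochs):
        grad = X.T @ (X @ W - Y) / len(X)  # d(MSE)/dW
        W -= lr * grad                     # gradient-descent update
    return W
```

On noiseless synthetic data this fit recovers the generating map; real training data would pair recorded poses with motions recorded from or computed for successful insertions.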
A sixteenth aspect of the present invention provides a method for acquiring the pre-trained first CNN model in the plug-in method described in the second or third aspect above, the pre-trained first CNN model being acquired as follows:

acquiring an initialized first CNN model, the first CNN model outputting a relative pose or a pin pose for an input first image, and/or outputting a third pose or a third current pose for an input second image or second current image;

acquiring training data and label data;

training the initialized first CNN model based on the training data and the label data to acquire the pre-trained first CNN model.
A seventeenth aspect of the present invention provides a method for acquiring the pre-trained third CNN model in the plug-in method described in the fourth aspect above, the pre-trained third CNN model being acquired as follows:

acquiring an initialized third CNN model, the third CNN model outputting, for an input first image, second image or second current image, and first current pose, the current amount of motion to be implemented by the robot;

acquiring training data and label data;

training the initialized third CNN model based on the training data and the label data to acquire the pre-trained third CNN model.
An eighteenth aspect of the present invention provides a method for acquiring the pre-trained fourth CNN model in the plug-in method described in the sixth or seventh aspect above, the pre-trained fourth CNN model being acquired as follows:

acquiring an initialized fourth CNN model, the fourth CNN model outputting, for an input third current image, a second pose or a second current pose, and a third pose or a third current pose;

acquiring training data and label data;

training the initialized fourth CNN model based on the training data and the label data to acquire the pre-trained fourth CNN model.
A nineteenth aspect of the present invention provides a method for acquiring the pre-trained fifth CNN model in the plug-in method described in the eighth aspect, the pre-trained fifth CNN model being acquired as follows:

acquiring an initialized fifth CNN model, the fifth CNN model outputting, for an input third current image and first current pose, the current amount of motion to be implemented by the robot;

acquiring training data and label data;

training the initialized fifth CNN model based on the training data and the label data to acquire the pre-trained fifth CNN model.
A twentieth aspect of the present invention provides a method for acquiring the pre-trained NN model in the plug-in method described in the ninth, tenth, twelfth, or thirteenth aspect above, the method comprising:

acquiring an initialized NN model, the NN model outputting, for an input first current pose, second coordinate or second current coordinate, and third coordinate or third current coordinate, the current amount of motion to be implemented by the robot;

acquiring training data and label data;

training the initialized NN model based on the training data and the label data to acquire the pre-trained NN model.
A twenty-first aspect of the present invention provides a method for acquiring the pre-trained sixth CNN model in the plug-in method described in the tenth or eleventh aspect above, the method comprising:

acquiring an initialized sixth CNN model, the sixth CNN model outputting, for an input first image and/or second image or second current image, the second coordinate and/or the third coordinate or third current coordinate;

acquiring training data and label data;

training the initialized sixth CNN model based on the training data and the label data to acquire the pre-trained sixth CNN model.
A twenty-second aspect of the present invention provides a method for acquiring the pre-trained seventh CNN model in the plug-in method described in the tenth or eleventh aspect above, the method comprising:

acquiring an initialized seventh CNN model, the seventh CNN model outputting, for an input second image or second current image, the third coordinate or third current coordinate;

acquiring training data and label data;

training the initialized seventh CNN model based on the training data and the label data to acquire the pre-trained seventh CNN model.
A twenty-third aspect of the present invention provides a method for acquiring the pre-trained eighth CNN model in the plug-in method described in the thirteenth or fourteenth aspect above, the method comprising:

acquiring an initialized eighth CNN model, the eighth CNN model outputting, for an input third current image, the second coordinate or second current coordinate, and the third coordinate or third current coordinate;

acquiring training data and label data;

training the initialized eighth CNN model based on the training data and the label data to acquire the pre-trained eighth CNN model.
With the plug-in method and plug-in device of the present invention, insertion is performed based on a machine learning method and can therefore adapt to insertion against complex background environments, which improves the efficiency and accuracy of the plug-in operation.
Description of the drawings

In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments and by the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
图1为本发明提供的插机方法的实施例的第一流程图。1 is a first flow chart of an embodiment of an insertion method provided by the present invention.
图2为本发明提供的插机方法的实施例的第二流程图。2 is a second flow chart of an embodiment of a plug-in method provided by the present invention.
图3为本发明提供的预先经过训练的NN模型获取方法的实施例的流程图。3 is a flow chart of an embodiment of a pre-trained NN model acquisition method provided by the present invention.
图4为本发明提供的预先经过训练的第一CNN模型的获取方法的实施例的流程图。4 is a flow chart of an embodiment of a method for acquiring a pre-trained first CNN model provided by the present invention.
图5为本发明提供的预先经过训练的第二CNN模型的获取方法的实施例的流程图。FIG. 5 is a flowchart of an embodiment of a method for acquiring a pre-trained second CNN model provided by the present invention.
图6为本发明提供的插机方法的实施例的第三流程图。FIG. 6 is a third flowchart of an embodiment of a plug-in method provided by the present invention.
图7为本发明提供的插机方法的实施例的第四流程图。FIG. 7 is a fourth flowchart of an embodiment of a plug-in method provided by the present invention.
图8为本发明提供的插机方法的实施例的第五流程图。FIG. 8 is a fifth flowchart of an embodiment of a plug-in method provided by the present invention.
图9为本发明提供的插机方法的实施例的第六流程图。FIG. 9 is a sixth flowchart of an embodiment of a plug-in method provided by the present invention.
图10为本发明提供的预先经过训练的第三CNN模型的获取方法的实施例的流程图。FIG. 10 is a flowchart of an embodiment of a method for acquiring a pre-trained third CNN model provided by the present invention.
图11为本发明提供的插机方法的实施例的第七流程图。Figure 11 is a seventh flow chart of an embodiment of the plug-in method provided by the present invention.
图12为本发明提供的插机方法的实施例的第八流程图。FIG. 12 is an eighth flowchart of an embodiment of a plug-in method provided by the present invention.
图13为本发明提供的预先经过训练的第四CNN模型的获取方法的实施例的流程图。FIG. 13 is a flowchart of an embodiment of a method for acquiring a pre-trained fourth CNN model provided by the present invention.
图14为本发明提供的插机方法的实施例的第九流程图。Figure 14 is a ninth flow chart of an embodiment of the plug-in method provided by the present invention.
图15为本发明提供的插机方法的实施例的第十流程图。Figure 15 is a tenth flow chart of an embodiment of the plug-in method provided by the present invention.
图16为本发明提供的插机方法的实施例的第十一流程图。Figure 16 is an eleventh flow chart of an embodiment of the plug-in method provided by the present invention.
图17为本发明提供的插机方法的实施例的第十二流程图。Figure 17 is a twelfth flow chart of an embodiment of the plug-in method provided by the present invention.
图18为本发明提供的预先经过训练的第五CNN模型的获取方法的实施例的流程图。FIG. 18 is a flowchart of an embodiment of a method for acquiring a pre-trained fifth CNN model provided by the present invention.
图19为本发明提供的插机设备的实施例的第一结构框图。FIG. 19 is a first structural block diagram of an embodiment of a plug-in device provided by the present invention.
图20为本发明提供的插机设备的实施例的第二结构框图。20 is a second structural block diagram of an embodiment of a plug-in device provided by the present invention.
图21为本发明提供的插机设备的实施例的第三结构框图。FIG. 21 is a third structural block diagram of an embodiment of a plug-in device provided by the present invention.
图22为本发明提供的插机设备的实施例的第四结构框图。FIG. 22 is a fourth structural block diagram of an embodiment of a plug-in device provided by the present invention.
图23为本发明提供的电子设备的实施例的结构框图。FIG. 23 is a structural block diagram of an embodiment of an electronic device provided by the present invention.
图24为本发明提供的模型连接实施例的第一结构框图。FIG. 24 is a first structural block diagram of a model connection embodiment provided by the present invention.
图25为本发明提供的模型连接实施例的第二结构框图。FIG. 25 is a second structural block diagram of a model connection embodiment provided by the present invention.
图26为本发明提供的模型连接实施例的第三结构框图。FIG. 26 is a third structural block diagram of a model connection embodiment provided by the present invention.
图27为本发明中的前馈神经网络的结构图。Figure 27 is a structural diagram of a feedforward neural network in the present invention.
图28为本发明提供的插机方法的实施例的第十三流程图。Figure 28 is a thirteenth flow chart of an embodiment of the plug-in method provided by the present invention.
图29为本发明提供的插机方法的实施例的第十四流程图。Figure 29 is a fourteenth flow chart of an embodiment of the plug-in method provided by the present invention.
图30为本发明提供的预先经过训练的NN模型获取方法的实施例的流程图。30 is a flow chart of an embodiment of a pre-trained NN model acquisition method provided by the present invention.
图31为本发明提供的预先经过训练的第六CNN模型的获取方法的实施例的流程图。FIG. 31 is a flowchart of an embodiment of a method for acquiring a pre-trained sixth CNN model provided by the present invention.
图32为本发明提供的预先经过训练的第七CNN模型的获取方法的实施例的流程图。Figure 32 is a flow chart of an embodiment of a method for acquiring a pre-trained seventh CNN model provided by the present invention.
图33为本发明提供的插机方法的实施例的第十五流程图。Figure 33 is a fifteenth flow chart of an embodiment of the plug-in method provided by the present invention.
图34为本发明提供的插机方法的实施例的第十六流程图。Figure 34 is a sixteenth flowchart of an embodiment of the plug-in method provided by the present invention.
图35为本发明提供的插机方法的实施例的第十七流程图。Figure 35 is a seventeenth flow chart of an embodiment of the plug-in method provided by the present invention.
图36为本发明提供的插机方法的实施例的第十八流程图。Figure 36 is an eighteenth flow chart of an embodiment of the plug-in method provided by the present invention.
图37为本发明提供的预先经过训练的第八CNN模型的获取方法的实施例的流程图。37 is a flow chart of an embodiment of a method for acquiring a pre-trained eighth CNN model provided by the present invention.
图38为本发明提供的插机方法的实施例的第十九流程图。Figure 38 is a nineteenth flow chart of an embodiment of the plug-in method provided by the present invention.
图39为本发明提供的插机方法的实施例的第二十流程图。Figure 39 is a twentieth flow chart of an embodiment of the plug-in method provided by the present invention.
具体实施方式 DETAILED DESCRIPTION
为了使本领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获取的所有其它实施例,都应当属于本发明保护的范围。In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
为了说明本发明所述的技术方案,下面通过具体实施例来进行说明。In order to explain the technical solution described in the present invention, the following description will be made by way of specific embodiments.
实施例一、 Embodiment 1
图1为本发明提供的插机方法的实施例的第一流程图。图2为本发明提供的插机方法的实施例的第二流程图。图19为本发明提供的插机设备的实施例的第一结构框图。图20为本发明提供的插机设备的实施例的第二结构框图。1 is a first flow chart of an embodiment of an insertion method provided by the present invention. 2 is a second flow chart of an embodiment of a plug-in method provided by the present invention. FIG. 19 is a first structural block diagram of an embodiment of a plug-in device provided by the present invention. 20 is a second structural block diagram of an embodiment of a plug-in device provided by the present invention.
插机设备是一种自动实现将电子元件的引脚***PCB板上的目标插孔的工业自动化设备。The plug-in device is an industrial automation device that automatically inserts the pins of an electronic component into a target jack on a PCB.
如图1所示,本发明实施例提供一种插机方法,该插机方法包括:As shown in FIG. 1 , an embodiment of the present invention provides an insertion method, where the insertion method includes:
S110根据获取的包括引脚的第一图像,获取在第一坐标系下的引脚相对机械手的相对位姿。S110 obtains a relative position of the pin relative to the robot in the first coordinate system according to the acquired first image including the pin.
如图19、20所示,机械手730在处理器740的控制下在取料位拾取电子元件800之后,将电子元件800移动至第一图像传感器710的视野范围内,使得通过第一图像传感器710采集包括电子元件800的引脚810的第一图像。该第一图像通常不包括PCB板背景,因为如果图像中包括PCB板背景,由于背景图像复杂,会使得引脚识别存在一定困难。As shown in FIGS. 19 and 20, after picking up the electronic component 800 at the pick-up position under the control of the processor 740, the robot 730 moves the electronic component 800 into the field of view of the first image sensor 710, so that the first image sensor 710 captures a first image including the pins 810 of the electronic component 800. The first image usually does not include the PCB background, because a complex background image would make pin recognition difficult.
第一坐标系可以包括:机械手坐标系、第一图像传感器坐标系、第二图像传感器坐标系或者任意指定的经过与上述各个坐标系标定过的其它坐标系。该第一坐标系需要预先与其它坐标系进行标定,从而可以使得其它坐标系基于预先标定的矩阵转换关系统一转换到该第一坐标系下。本具体实施例下面以机械手坐标系作为第一坐标系为例进行进一步详细的说明,在一些实施例中,通常将机械手的底座的中心设置为机械手坐标系的原点。The first coordinate system may be: the robot coordinate system, the first image sensor coordinate system, the second image sensor coordinate system, or any other designated coordinate system that has been calibrated against each of the above coordinate systems. The first coordinate system needs to be calibrated against the other coordinate systems in advance, so that the other coordinate systems can be uniformly transformed into the first coordinate system based on the pre-calibrated matrix transformation relationships. In the following, this embodiment is described in further detail taking the robot coordinate system as the first coordinate system; in some embodiments, the center of the robot's base is usually set as the origin of the robot coordinate system.
所述机械手的位姿可以是指机械手的末端关节连接的法兰盘中心的位姿,也可以是指机械手的末端执行器的中心的位姿等等。The posture of the manipulator may refer to the posture of the center of the flange to which the end of the manipulator is articulated, or the posture of the center of the end effector of the manipulator, and the like.
根据机械手各关节发送给处理器的各关节的当前信息,该信息包括各个关节的运动量的信息,再结合各个关节的类型和尺寸等信息,通过机械手运动学正解公式,可以求得机械手此时在机械手坐标系下的位姿。Based on the current information sent by each joint of the robot to the processor, which includes the amount of motion of each joint, and combined with information such as the type and size of each joint, the current pose of the robot in the robot coordinate system can be obtained through the forward kinematics formula of the robot.
根据第一图像获取引脚的相对位姿,后面实施例会有进一步详细的描述。The relative position of the pin is obtained according to the first image, which will be described in further detail in the following embodiments.
在一些实施例中,通过第一图像传感器采集的第一图像,除用于获取引脚的相对位姿之外,还可以用于检查电子元件是否存在缺陷,即通过对第一图像的分析,与预先存储的非缺陷元件图像进行对比,从而判断该电子元件是否存在缺陷。如果不存在缺陷可以继续进行下面的步骤,如果存在缺陷可以控制机械手将该电子元件放回到回收位,然后再返回取料位重新拾取电子元件。In some embodiments, the first image captured by the first image sensor, besides being used to acquire the relative pose of the pins, can also be used to check the electronic component for defects: the first image is analyzed and compared with a pre-stored image of a non-defective component to determine whether the component is defective. If no defect is found, the following steps proceed; if a defect is found, the robot is controlled to return the component to a recycling position and then go back to the pick-up position to pick up a new component.
S120根据获取的机械手的各关节的当前信息,获取在第一坐标系下的机械手的第一当前位姿。S120 acquires a first current pose of the robot in the first coordinate system according to the current information of the joints of the acquired robot.
如图2所示,在一些实施例中,在获取相对位姿之后,在获取第一当前位姿,或第三位姿或第三当前位姿之前,可以执行步骤S180控制机械手带动引脚移动到目标插孔附近,这样可以节省后续的插机工作时间。在一些实施例中,在将引脚移动到某块PCB板上的第一个目标插孔附近时,可以先检测PCB板上的标记点的坐标或位姿,标记点(Mark point)为PCB板上一个带有周边空白区域的实心的圆形或者矩形点,结合PCB板布局图,可以推算出目标插孔的大致位置,并将电子元件移动至此大致位置,则可使引脚位于目标插孔的附近。以后,由于已经知道了第一个目标插孔的位置,因此不再需要获取标记点图像,可以以第一个目标插孔的位置为基准,结合PCB板的布局图,推算出目标插孔的大致位置坐标,并控制机械手移动到此位置,从而使得引脚移动到目标插孔附近。As shown in FIG. 2, in some embodiments, after the relative pose is acquired and before the first current pose, or the third pose or the third current pose, is acquired, step S180 may be performed to control the robot to move the pins to the vicinity of the target jack, which saves subsequent insertion time. In some embodiments, when moving the pins to the vicinity of the first target jack on a PCB board, the coordinates or pose of a mark point on the PCB board may be detected first. A mark point is a solid circular or rectangular point on the PCB board surrounded by a blank area; combined with the PCB layout drawing, the approximate position of the target jack can be deduced, and moving the electronic component to this approximate position places the pins near the target jack. Afterwards, since the position of the first target jack is already known, the mark point image no longer needs to be acquired: taking the position of the first target jack as a reference and combining it with the PCB layout drawing, the approximate position coordinates of each subsequent target jack can be deduced, and the robot is controlled to move to that position so that the pins move to the vicinity of the target jack.
S130获取在第一坐标系下的目标插孔的第三位姿或第三当前位姿。S130 acquires a third pose or a third current pose of the target jack in the first coordinate system.
基于获取的通过第二图像传感器采集并发送的包括目标插孔的第二当前图像,获取目标插孔在机械手坐标系下的第三位姿或第三当前位姿。后面会对具体的获取方法进行详细说明。Based on the acquired second current image including the target jack, captured and sent by the second image sensor, the third pose or the third current pose of the target jack in the robot coordinate system is obtained. The specific acquisition method will be described in detail later.
如图19所示,根据上面实施例所述,第二图像传感器720可以设置在机械手的末端关节上,除此之外也可以设置在机械手的其它关节上(图未示意出),获取从第二图像传感器720采集并发送的第二当前图像,进而获取目标插孔的第三当前位姿。As shown in FIG. 19, according to the above embodiment, the second image sensor 720 may be disposed on the end joint of the robot, or alternatively on another joint of the robot (not shown); the second current image captured and sent by the second image sensor 720 is acquired, and the third current pose of the target jack is then obtained from it.
如图20所示,当第二图像传感器720设置在PCB板周边某一位置时,由于其相对于PCB板900位置固定,因此只需获取一次第二图像,并根据第二图像获取目标插孔的第三位姿。As shown in FIG. 20, when the second image sensor 720 is disposed at a fixed position around the PCB board, its position relative to the PCB board 900 is fixed; therefore, the second image only needs to be acquired once, and the third pose of the target jack is obtained from this second image.
优选将第二图像传感器720设置在机械手上,由于第二图像传感器720跟随机械手730一起移动,使得第二图像传感器720能够在更接近目标插孔910正上方或者接近正上方的位置获取包括目标插孔的图像,从而提高目标插孔位姿提取的精度,更好地提高后续插机的准确率。Preferably, the second image sensor 720 is disposed on the robot: since it moves together with the robot 730, it can capture the image including the target jack 910 from a position directly above, or nearly directly above, the target jack, thereby improving the accuracy of the target jack pose extraction and, in turn, the accuracy of the subsequent insertion.
S140根据第一当前位姿、相对位姿、以及第三位姿或第三当前位姿,基于预先经过训练的神经网络(Neural Network,NN)模型计算机械手需实施的当前运动量;S150判断机械手是否满足插机条件;若满足,S160控制机械手带动引脚***目标插孔;若不满足,S170控制机械手实施所述当前运动量。S140: based on the first current pose, the relative pose, and the third pose or the third current pose, calculate the current amount of motion to be performed by the robot using a pre-trained neural network (NN) model; S150: determine whether the robot satisfies the insertion condition; if so, S160: control the robot to insert the pins into the target jack; if not, S170: control the robot to perform the current amount of motion.
需要说明的是,根据第一当前位姿、相对位姿、以及第三位姿或第三当前位姿,基于预先经过训练的NN模型计算机械手需实施的当前运动量可以包括:根据第一当前位姿和相对位姿计算出第二当前位姿,由于在经过S110获取相对位姿后,在机械手移动过程中,机械手和引脚的相对位姿保持不变,因此,通过获取机械手的第一当前位姿,就可以获取引脚的第二当前位姿。将第一当前位姿、第二当前位姿以及第三位姿或第三当前位姿输入所述NN模型,输出当前运动量;或者直接将第一当前位姿、相对位姿以及第三位姿或第三当前位姿输入所述NN模型,输出当前运动量;优选前一种方式,这样可以提高运动量获取的精度。It should be noted that calculating the current amount of motion to be performed by the robot with the pre-trained NN model, based on the first current pose, the relative pose, and the third pose or third current pose, may proceed as follows: the second current pose is computed from the first current pose and the relative pose. Since the relative pose between the robot and the pins remains unchanged while the robot moves after the relative pose is acquired in S110, the second current pose of the pins can be obtained simply by acquiring the first current pose of the robot. The first current pose, the second current pose, and the third pose or third current pose are then input into the NN model, which outputs the current amount of motion; alternatively, the first current pose, the relative pose, and the third pose or third current pose may be input into the NN model directly. The former approach is preferred, as it improves the accuracy of the obtained amount of motion.
判断机械手是否满足插机条件(即引脚足够接近目标插孔),可以以连续几步(比如:2-3步)的当前运动量的增量都很小(比如:小于某一个阈值)作为判据,即认为引脚足够接近目标插孔。Determining whether the robot satisfies the insertion condition (i.e., the pins are close enough to the target jack) may use, as the criterion, that the increments of the current amount of motion over several consecutive steps (e.g., 2-3 steps) are all very small, e.g., below a certain threshold; the pins are then considered close enough to the target jack.
若满足,控制机械手带动引脚***目标插孔,从而完成该电子元件的插机动作;然后,控制机械手移动到下一电子元件的取料位,夹取下一个电子元件,并重复上述步骤,直到PCB板上所有目标插孔对应的电子元件的插机动作全部完成。If the condition is satisfied, the robot is controlled to insert the pins into the target jack, completing the insertion of this electronic component; the robot is then controlled to move to the pick-up position of the next electronic component, grip it, and repeat the above steps until the insertion of the electronic components for all target jacks on the PCB board is complete.
若不满足,控制机械手实施对应的所述当前运动量,待机械手实施完毕对应的当前运动量后,重新重复上面的步骤。If the condition is not satisfied, the robot is controlled to perform the corresponding current amount of motion, and after it has done so, the above steps are repeated.
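The "close enough" check described above (the motion increments of the last few steps all falling below a threshold) can be sketched in Python; the threshold value and the number of consecutive steps here are illustrative assumptions, not values fixed by the specification:

```python
import math

def motion_norm(motion):
    """Euclidean norm of one motion increment (dx, dy, dz, du, dv, dw)."""
    return math.sqrt(sum(c * c for c in motion))

def insertion_condition_met(recent_motions, threshold=0.05, steps=2):
    """True when the last `steps` computed motion increments are all below
    `threshold`, i.e. the pins are considered close enough to the jack."""
    if len(recent_motions) < steps:
        return False
    return all(motion_norm(m) < threshold for m in recent_motions[-steps:])
```

When the check returns False, the robot performs the latest computed motion and the pose acquisition and motion computation steps are run again.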
机械手需实施的当前运动量是指机械手的末端执行器或末端轴等需实施的运动量(移动量+旋转量)。基于计算出的当前运动量,通过机械手运动学逆解公式,可以求得机械手各关节需要实施的运动量,然后将各个运动量的指令发送给各个关节的马达控制器,从而控制机械手运动相对应的运动量。The current amount of motion to be performed by the robot refers to the amount of motion (translation plus rotation) to be performed by the robot's end effector or end axis. Based on the calculated current amount of motion, the amount of motion each joint of the robot needs to perform can be obtained through the inverse kinematics formula of the robot; commands for these amounts are then sent to the motor controllers of the respective joints, so that the robot performs the corresponding motion.
在一些实施例中,一个大的PCB板可能难以一次完成整个插机工作,因此,通常将一个大的PCB板虚拟分成多个小的模块,分多次完成多个模块的插机,从而最终完成整个PCB板的插机,因此,在这种情况下,先根据本具体实施例的插机方法完成一个模块的插机,然后重复该插机方法的步骤,依次完成其它模块的插机,直到整个PCB板完成插机。然后将该PCB板移开该工作位,并将下一块PCB板移动到该插机工作位重复本发明实施例的插机方法所述的步骤。In some embodiments, it may be difficult to complete the entire insertion job for a large PCB board at once; therefore, a large PCB board is usually virtually divided into a plurality of small modules, and the insertion is completed module by module over multiple passes, finally completing the insertion for the entire board. In this case, the insertion for one module is first completed according to the insertion method of this embodiment, and the steps of the method are then repeated to complete the other modules in turn, until the entire PCB board is done. The PCB board is then removed from the working position, and the next PCB board is moved to the insertion working position to repeat the steps of the insertion method of this embodiment of the present invention.
NN模型是一种运算模型,由大量的节点(或称神经元)和节点之间的相互联接构成。每个节点代表一种特定的输出函数,称为激励函数(activation function)。每两个节点间的连接都代表一个对于通过该连接信号的加权值,称之为权重,这相当于人工神经网络的记忆。网络的输出则依网络的连接方式,权重值和激励函数的不同而不同。NN按网络结构划分可归纳为三大类:前馈神经网络、反馈神经网络和自组织神经网络。本具体实施例优选前馈神经网络。The NN model is a computational model composed of a large number of nodes (or neurons) and the interconnections between them. Each node represents a particular output function, called an activation function. Each connection between two nodes carries a weighting value for the signal passing through it, called a weight, which serves as the memory of the artificial neural network. The output of the network varies with the connection pattern of the network, the weight values, and the activation functions. By network structure, NNs fall into three broad categories: feedforward neural networks, feedback neural networks, and self-organizing neural networks. This embodiment preferably uses a feedforward neural network.
前馈神经网络(feedforward neural network,FNN),简称前馈网络。在此种神经网络中,各神经元从输入层开始,接收前一级输入,并输出到下一级,直至输出层。整个网络中无反馈,可用一个有向无环图表示。A feedforward neural network (FNN) is referred to as a feedforward network for short. In such a neural network, starting from the input layer, each neuron receives the input of the previous stage and outputs to the next stage, up to the output layer. There is no feedback anywhere in the network, which can therefore be represented by a directed acyclic graph.
前馈神经网络采用一种单向多层结构。其中每一层包含若干个神经元,同一层的神经元之间没有互相连接,层间信息的传送只沿一个方向进行。其中第一层称为输入层,最后一层为输出层,中间为隐含层,简称隐层。隐层可以是一层,也可以是多层。The feedforward neural network employs a unidirectional multilayer structure. Each layer contains several neurons, and the neurons in the same layer are not connected to each other, and the transmission of information between layers is performed in only one direction. The first layer is called the input layer, the last layer is the output layer, and the middle is the hidden layer, referred to as the hidden layer. The hidden layer can be one layer or multiple layers.
在神经网络模型中,生物神经元模型被简化为由一个线性函数加上一个非线性激活函数组成的数学模型。在这个模型中,神经元接收到来自n个其他神经元传递过来的输入信号,这些输入信号通过带权重的连接(connection)进行传递,神经元接收到的总输入值将与神经元的阈值进行比较,然后通过激活函数处理以产生神经元的输出。In the neural network model, the biological neuron model is reduced to a mathematical model consisting of a linear function plus a nonlinear activation function. In this model, neurons receive input signals from n other neurons that are passed through a weighted connection. The total input value received by the neuron is compared to the threshold of the neuron. The comparison is then processed by an activation function to produce the output of the neuron.
非线性激活函数,是使得神经网络可以表征非线性函数的关键。常见的激活函数有3种,分别是Sigmoid函数,tanh函数,以及ReLU函数。Sigmoid函数的数学表达式为
\sigma(x) = \frac{1}{1 + e^{-x}}
它可以将输入映射为0至1之间的某个数。当输入大于0时,函数输出大于0.5,且输入越大,函数输出越接近1,此时可以认为这个神经元被激活了。Tanh函数,即双曲正切函数,与sigmoid函数类似。不同的是它将输入映射为-1至1之间的某个数。ReLU函数是目前最简单,也是使用最广泛的一个激活函数,它的数学表达式可以写做g(x)=max(0,x)。也就是说,小于0的所有输入都会被压制(不激活),而大于0的输入都会激活这个神经元,并且输入越大,输出也越大(不会像另外两个一样进入饱和状态)。
The nonlinear activation function is the key to making the neural network represent the nonlinear function. There are three common activation functions, namely the Sigmoid function, the tanh function, and the ReLU function. The mathematical expression of the Sigmoid function is
It maps the input to a number between 0 and 1. When the input is greater than 0, the function output is greater than 0.5, and the larger the input, the closer the function output is to 1, and the neuron can be considered activated. The Tanh function, the hyperbolic tangent function, is similar to the sigmoid function. The difference is that it maps the input to a number between -1 and 1. The ReLU function is currently the simplest and most widely used activation function, and its mathematical expression can be written as g(x)=max(0,x). That is, all inputs less than 0 are suppressed (inactive), and inputs greater than 0 activate the neuron, and the larger the input, the larger the output (not going into saturation like the other two).
由上可知,每一个神经元代表一个非线性函数,因此神经网络中的每一层则代表了一组非线性函数。这些非线性函数的输出即为下一层的输入。From the above, each neuron represents a nonlinear function, so each layer in the neural network represents a set of nonlinear functions. The output of these nonlinear functions is the input to the next layer.
图27为本发明中的前馈神经网络的结构图。Figure 27 is a structural diagram of a feedforward neural network in the present invention.
如图27所示,我们可以用一个最简单的,只有2个输入,1个带有2个神经元的隐藏层,以及带有1个输出神经元的输出层的神经网络来阐述神经网络的前馈计算过程。x1,x2为输入,w为权重,h1,h2代表了中间隐藏层各神经元的输出,y代表了这个神经网络的输出。我们需要先计算隐藏层的输出h1和h2。As shown in FIG. 27, the feedforward computation of a neural network can be illustrated with the simplest possible network: 2 inputs, one hidden layer with 2 neurons, and an output layer with 1 output neuron. x1 and x2 are the inputs, w denotes the weights, h1 and h2 are the outputs of the neurons in the hidden layer, and y is the output of the network. The hidden-layer outputs h1 and h2 are computed first.
h_1 = g(w_{11} x_1 + w_{21} x_2 + b_1)
h_2 = g(w_{12} x_1 + w_{22} x_2 + b_2)
其中,b1,b2是相应神经元的偏置,g代表了激活函数。Here, b1 and b2 are the biases of the corresponding neurons, and g denotes the activation function.
神经网络的输出y可以用h1,h2来表示。通常,神经网络的输出层不使用激活函数。The output y of the neural network can be represented by h1, h2. Usually, the output layer of the neural network does not use an activation function.
y = w_{o1} h_1 + w_{o2} h_2
将h1和h2代入后可以得到:Substituting h1 and h2 yields:
y = w_{o1}\, g(w_{11} x_1 + w_{21} x_2 + b_1) + w_{o2}\, g(w_{12} x_1 + w_{22} x_2 + b_2)
这就是前馈神经网络的主要计算逻辑。不管神经网络有多少层,都可以这样一层套一层地计算下去直至输出层。This is the main computational logic of the feedforward neural network: no matter how many layers the network has, the computation proceeds in this way, layer by layer, until the output layer.
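The forward pass of this 2-input, 2-hidden-neuron, 1-output network can be sketched as follows; the weight names (w11, w21, o1, ...) and the choice of sigmoid as g are illustrative assumptions, since the original figure does not fix a naming scheme:

```python
import math

def g(x):
    """Hidden-layer activation (sigmoid, for illustration)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2, w, b):
    """One forward pass; w and b are dicts of weights and biases
    (hypothetical names: w11..w22 input weights, o1/o2 output weights)."""
    h1 = g(w["w11"] * x1 + w["w21"] * x2 + b["b1"])
    h2 = g(w["w12"] * x1 + w["w22"] * x2 + b["b2"])
    # As noted above, the output layer uses no activation function.
    y = w["o1"] * h1 + w["o2"] * h2
    return y
```

With all input weights and biases zero and both output weights equal to 1, each hidden neuron outputs g(0) = 0.5, so the network output is 1.0 for any input.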
在本发明的其中一个优选实施例中,该前馈神经网络可以包含2-5层隐层,每层含有1024个神经元。每一层隐层都是一个全连接层,也就是说下一层的任意一个神经元都和上一层的所有神经元相连。这个NN模型的输出层有6个神经元,分别对应了控制机械手位姿所需的xyzuvw空间坐标。除此之外,还可以根据需要设置任意数量的隐层,每层设置任意数量的神经元。In one preferred embodiment of the present invention, the feedforward neural network may contain 2-5 hidden layers, each with 1024 neurons. Every hidden layer is a fully connected layer, meaning that any neuron in one layer is connected to all neurons in the previous layer. The output layer of this NN model has 6 neurons, corresponding to the x, y, z, u, v, w spatial coordinates needed to control the pose of the robot. Beyond this, any number of hidden layers, each with any number of neurons, may be configured as needed.
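A sketch of the described architecture — fully connected hidden layers feeding a 6-neuron linear output layer for the x, y, z, u, v, w pose components — in pure Python. The ReLU hidden activation, the Gaussian initialization scale, and the tiny sizes in the example are illustrative assumptions (the specification only fixes 2-5 hidden layers of 1024 neurons and 6 outputs):

```python
import random

def build_mlp(n_in, hidden_layers=3, hidden_size=1024, n_out=6):
    """Return a list of per-layer (weights, biases); weights are drawn
    from a zero-mean Gaussian, biases are set to zero."""
    sizes = [n_in] + [hidden_size] * hidden_layers + [n_out]
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        weights = [[random.gauss(0.0, 0.01) for _ in range(a)] for _ in range(b)]
        layers.append((weights, [0.0] * b))
    return layers

def forward(layers, x):
    """Fully connected forward pass: ReLU on hidden layers, linear output."""
    for i, (weights, biases) in enumerate(layers):
        x = [sum(w * v for w, v in zip(row, x)) + bias
             for row, bias in zip(weights, biases)]
        if i < len(layers) - 1:          # activation on hidden layers only
            x = [max(0.0, v) for v in x]
    return x
```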
采用上面的插机方法,通过基于机器学习的方法进行的插机,能够提高各种复杂的环境下插机的准确率;另外,在一些情况下,可以减少插机运动过程中的步数,提高了工作效率。With the above insertion method, performing insertion by a machine-learning-based approach improves the accuracy of insertion in a variety of complex environments; in addition, in some cases it reduces the number of steps in the insertion motion, improving work efficiency.
如图3所示,在一些实施例中,预先经过训练的NN模型可以通过如下方法获取:As shown in FIG. 3, in some embodiments, the pre-trained NN model can be obtained by:
S141获取初始化的NN模型,所述NN模型为针对输入的在第一坐标系下的第一当前位姿、第二当前位姿或相对位姿、以及第三位姿或第三当前位姿,输出机械手需实施的当前运动量。S141: acquire an initialized NN model, where, for an input first current pose, second current pose or relative pose, and third pose or third current pose in the first coordinate system, the NN model outputs the current amount of motion to be performed by the robot.
NN模型实际上是一族函数,或者说一个函数族。这些函数拥有一些共同的性质,因为一旦确定了模型,模型结构是固定的,每一个模型参数的具体选取,相当于在这个函数族中选择了一个函数。模型训练实际上就是在这个函数族中选择一个最好的函数来描述输入和输出之间的数量关系。The NN model is actually a family of functions, or a family of functions. These functions have some common properties, because once the model is determined, the model structure is fixed, and the specific selection of each model parameter is equivalent to selecting a function in this family of functions. Model training is actually choosing the best function in this family of functions to describe the quantitative relationship between input and output.
初始化NN模型,实际上就是确定模型结构,以及该模型的初始参数。Initializing the NN model is actually determining the model structure and the initial parameters of the model.
初始化参数的方法可以包括:Methods for initializing parameters can include:
a)用固定的常数来初始化,例如,把所有参数都初始化为0。a) Initialize with a fixed constant, for example, initialize all parameters to zero.
b)用随机数初始化,例如,用一个零均值、指定方差的高斯分布来产生随机数。b) Initialize with random numbers, for example, generated from a zero-mean Gaussian distribution with a specified variance.
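The two initialization schemes (a) and (b) can be sketched as:

```python
import random

def init_constant(n_out, n_in, value=0.0):
    """(a) Initialize every parameter with a fixed constant, e.g. 0."""
    return [[value] * n_in for _ in range(n_out)]

def init_gaussian(n_out, n_in, std=0.01):
    """(b) Initialize with random numbers drawn from a zero-mean Gaussian;
    std (the square root of the chosen variance) is an illustrative value."""
    return [[random.gauss(0.0, std) for _ in range(n_in)] for _ in range(n_out)]
```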
S142获取训练数据和标签数据。S142 acquires training data and tag data.
可以基于传统视觉伺服运行插机多次(比如:1000次),以获取足够的训练数据用以训练初始化的NN模型。The plug-in can be run multiple times (eg, 1000 times) based on traditional visual servoing to obtain sufficient training data to train the initialized NN model.
在传统视觉伺服时,机械手通常按照预先设定的步数(比如:3步)或者在连续几步的运动增量很小时形成最终***元件时机械手的位姿。具体可以以视觉伺服时机械手每走一步时的机械手的位姿,以及该步对应的引脚的位姿和目标插孔的位姿作为训练数据,训练NN模型。In traditional visual servoing, the robot typically reaches its final insertion pose either after a preset number of steps (e.g., 3 steps) or once the motion increments over several consecutive steps become very small. Specifically, the pose of the robot at each step of visual servoing, together with the pin pose and target jack pose corresponding to that step, can be used as training data to train the NN model.
基于视觉伺服时机械手每一步的位姿,以及最终***元件时机械手的位姿,计算出机械手从每一步位姿移动至***位姿所需的运动量,以这个运动量作为NN模型训练用的标注形成标签数据。Based on the pose of each step of the robot during visual servoing and the pose of the robot when the component is finally inserted, the amount of motion required for the robot to move from each pose to the inserted pose is calculated, and this amount of motion is used as the annotation for the training of the NN model. Tag data.
S143基于所述训练数据和标签数据,对所述初始化的NN模型进行训练,以获取预先经过训练的NN模型。S143 trains the initialized NN model based on the training data and the tag data to obtain a pre-trained NN model.
对于每一个输入NN模型的训练数据,都有一个对应的标签数据(相当于正确答案),将训练数据输入初始化的NN模型后得到一个预测结果,预测结果和标签数据的标注有差距,这个差距可以通过误差公式来衡量。训练模型就是通过调整模型的参数,使得模型输出的预测结果和标签数据中的标注尽可能相近。For each training datum input to the NN model, there is corresponding label data (equivalent to the correct answer). Feeding a training datum into the initialized NN model yields a prediction, and the gap between the prediction and the label can be measured by an error function. Training the model means adjusting its parameters so that the predictions output by the model are as close as possible to the labels.
训练误差函数可以基于需要采用相应的函数,比如:取预测结果与标签数据的均方差。在一些实施例中,还可以引入一些与NN模型的参数相关的正则项以防止过拟合,比如:全体参数的平方和、Dropout或正则化(Regularization)。The training error function can be chosen as needed, for example, the mean squared error between the predictions and the label data. In some embodiments, regularization terms related to the parameters of the NN model may also be introduced to prevent overfitting, such as the sum of squares of all parameters, Dropout, or other regularization techniques.
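For example, a mean-squared-error training loss with an optional sum-of-squares (L2) regularization term might look like this; the coefficient `lam` is an illustrative hyperparameter:

```python
def mse(predictions, labels):
    """Mean squared error between predicted and labelled motion amounts."""
    return sum((p - t) ** 2 for p, t in zip(predictions, labels)) / len(labels)

def l2_penalty(parameters, lam=1e-4):
    """Regularization term: scaled sum of squares of all model parameters."""
    return lam * sum(w * w for w in parameters)

def training_loss(predictions, labels, parameters, lam=1e-4):
    """Total loss: data error plus the regularization term."""
    return mse(predictions, labels) + l2_penalty(parameters, lam)
```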
以1万组训练数据为例,每个训练数据对应带有标注的标签数据。Taking 10,000 sets of training data as an example, each training data corresponds to label data with labels.
由于训练数据数量太多,如果全部训练数据均通过模型得到预测并计算出和对应标注的误差后再更新一次模型参数的话(也就是9千次误差计算,9千个梯度计算,综合起来更新一次模型参数——迭代训练过程要更新好多次,不是所有数据跑完一次就结束,所有数据跑完一次就接着跑第二次上述流程),会使得模型的参数更新的周期过长。Because the amount of training data is large, if the model parameters were updated only after predictions and errors against the corresponding labels had been computed for all the training data (i.e., 9,000 error computations and 9,000 gradient computations combined into one parameter update — and the iterative training process requires many such updates: it does not end after one pass over all the data, but runs the whole procedure again and again), the parameter update cycle would be far too long.
因此,在一些实施例中,可以将这9千个训练数据划分为一个个容量更小的集合,例如每100个作为一个小集合(mini-batch),一共90个这样的mini-batch。然后对这90个mini-batch分别做之前的训练操作。这时候,每一个mini-batch就会更新一次模型参数(100次误差计算,100个梯度计算,综合起来更新一次模型参数),当整个训练数据(9000个)都利用了一次之后,模型参数已经更新了90次。(当然还需要继续重复上述过程)。Therefore, in some embodiments, the 9,000 training data can be divided into smaller sets, for example, 100 per small set (mini-batch), for a total of 90 such mini-batches. The training operations above are then performed on each of the 90 mini-batches. In this case, the model parameters are updated once per mini-batch (100 error computations and 100 gradient computations combined into one parameter update), so after the entire training set (9,000 samples) has been used once, the model parameters have already been updated 90 times. (This process is, of course, repeated.)
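The mini-batch scheme described above (9,000 samples split into 90 batches of 100, with one parameter update per batch) can be sketched as; `update_step` stands in for the error, gradient, and parameter-update computation:

```python
def minibatches(data, batch_size=100):
    """Split the training set into consecutive mini-batches."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

def train_one_epoch(data, batch_size, update_step):
    """One pass over the data; `update_step` is called once per mini-batch
    and is assumed to update the model parameters once."""
    count = 0
    for batch in minibatches(data, batch_size):
        update_step(batch)
        count += 1
    return count  # number of parameter updates in this epoch
```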
不断重复上述步骤,训练误差函数的优化目标可以基于需要进行各种设计,当达到训练误差的优化目标时,停止模型的训练,此时模型的当前参数即为最终模型的参数。The above steps are repeated continuously; the optimization target of the training error function can be designed as needed, and when it is reached, training stops. The model's current parameters at that point are the final model parameters.
需要说明的是,训练误差函数的优化目标可以包括但不限于如下几种情况:It should be noted that the optimization target of the training error function may include, but is not limited to, the following situations:
1、优化目标为预设的最大迭代次数,迭代完毕时对应的模型的当前参数即为该模型最终的参数;1. The optimization target is the preset maximum number of iterations, and the current parameter of the corresponding model is the final parameter of the model when the iteration is completed;
2、优化目标为某一阈值,记录每次迭代后的训练误差的函数的值,当训练误差低于此阈值时的当前参数为最终模型的参数。2. The optimization target is a certain threshold, and the value of the function of the training error after each iteration is recorded. When the training error is lower than the threshold, the current parameter is the parameter of the final model.
In some embodiments, a portion of the training data (for example, 1,000 of the 10,000 sets) may be set aside as validation data, which likewise has corresponding annotations. The validation data is used to check whether the model is overfitting: for example, after every 90 updates (that is, each time the 9,000 training samples have been used once), the current model is used to predict on the 1,000 validation samples, and the error against their annotations is computed as a criterion for overfitting.

The above process is repeated, recording the training error and validation error at each iteration. The training error should in theory keep decreasing (because the model is updated using the training error), but the validation error will not necessarily keep decreasing (because the model is not updated based on the validation error). When the validation error stops decreasing, or even starts rising, as the model continues to be updated, it can be concluded that the model has overfitted to the training samples, and training can be stopped at that point; the model parameters corresponding to a small validation error are selected as the final model parameters. Alternatively, a maximum number of iterations may be set, and after the iterations are completed, the parameters for which both the training error and the validation error are comparatively small are likewise selected as the final model parameters. A threshold may also be set: training stops when both the validation error and the training error fall below this threshold, and the current parameters are selected as the final model parameters.
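The validation-based stopping rule just described can be sketched as follows. This is a simplified illustration: `patience` (how many non-improving evaluation rounds to tolerate) is a hypothetical knob, and a real implementation would evaluate the model after each full pass over the training data:

```python
def train_with_early_stopping(val_errors, patience=2):
    """Given the validation error recorded after each evaluation round,
    return (stop_round, best_round): training stops once the validation
    error has failed to improve for `patience` consecutive rounds, and
    the parameters from the best round are kept as the final model."""
    best, best_round, bad = float("inf"), 0, 0
    for rnd, err in enumerate(val_errors):
        if err < best:
            best, best_round, bad = err, rnd, 0
        else:
            bad += 1
            if bad >= patience:
                return rnd, best_round   # stop here, keep best checkpoint
    return len(val_errors) - 1, best_round

# Validation error falls, then rises: overfitting begins after round 2.
print(train_with_early_stopping([0.9, 0.5, 0.3, 0.35, 0.4]))  # -> (4, 2)
```

Returning `best_round` rather than the last round implements the rule of selecting the parameters corresponding to a small validation error, not the most recent ones.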
FIG. 4 is a flowchart of an embodiment of a method for obtaining the pre-trained first CNN model provided by the present invention.

In some embodiments, obtaining the relative pose from the acquired first image, as described in S110 of the embodiment above, may be implemented by traditional vision methods or by machine learning methods.

The traditional vision approach binarizes the first image, identifies the contour of the pin in the first image, computes the coordinates of the pin from that contour, converts the pin coordinates into the pose of the pin according to pre-calibrated results, and then converts that pose into a relative pose according to the pose of the robot.

The machine learning approach obtains the relative pose based on a pre-trained first CNN model. Specifically, this may include: obtaining the pose of the pin based on the pre-trained first CNN model, and then converting the pose of the pin (here "pin" may refer to the insertion end of the pin or to the entire pin, preferably the insertion end) into a relative pose in combination with the robot pose; or inputting the first image and the robot pose into the first CNN model and outputting the relative pose directly. Preferably, the pin pose is output first and then converted into the relative pose, which improves the accuracy of pose acquisition. The preferred embodiment is described in further detail below.

Obtaining the pose of the pin based on the pre-trained first CNN model may specifically include: inputting the first image into the pre-trained first CNN model, outputting the coordinates of the pin, and then converting the coordinates into the pose of the pin according to pre-calibrated results; or outputting the pose of the pin directly from the pre-trained first CNN model, and then converting it into the relative pose of the pin in combination with the pose of the robot.
It should be noted that the above calibration results include the calibration of the first image sensor itself, as well as the calibration between the first image sensor and the robot (i.e., hand-eye calibration). Calibrating the first image sensor serves two purposes. The first is to obtain the intrinsic parameters, which include the distortion coefficients (since images formed through a lens always exhibit some degree of distortion) and the focal length, among others; in the binocular or multi-camera case, the intrinsic parameters may also include structural parameters, through which the relationship between corresponding pixels of the images acquired by two or more cameras can be described quantitatively in mathematical terms, ensuring that all the cameras remain in a "solvable" state. The second is to obtain the extrinsic parameters, i.e., the matrix transformation between the world coordinate system defined by the calibration board and the image coordinate system. Hand-eye calibration is performed to obtain the matrix transformations between the first image sensor coordinate system and the robot coordinate system, and between the second image sensor coordinate system and the robot coordinate system. Specific calibration methods may use OpenCV, Matlab, or similar tools.
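For illustration, once hand-eye calibration has produced the matrix relating the camera frame to the robot frame, a point observed in the camera frame is mapped into the robot frame with a single homogeneous-matrix multiplication. The sketch below assumes a 4×4 transform is already available; the matrix entries are made up for the example, whereas a real transform would come from an OpenCV/Matlab calibration:

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (camera frame -> robot frame)
    to a 3D point p given in the camera frame."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Hypothetical calibration result: 90-degree rotation about Z plus a translation.
T_cam_to_robot = [
    [0.0, -1.0, 0.0, 0.10],
    [1.0,  0.0, 0.0, 0.20],
    [0.0,  0.0, 1.0, 0.05],
    [0.0,  0.0, 0.0, 1.00],
]
print(transform_point(T_cam_to_robot, (0.03, 0.00, 0.10)))
```

The same multiplication, with the second sensor's hand-eye matrix, maps target-jack coordinates into the robot coordinate system.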
A convolutional neural network (Convolutional Neural Network, CNN) is a feedforward neural network. In general, the basic structure of a CNN includes two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, and the local features are extracted. Once a local feature has been extracted, its positional relationship to other features is also determined. The second is the feature mapping layer: each computational layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons in the plane share equal weights. The feature mapping structure uses a sigmoid function with a small influence-function kernel as the activation function of the convolutional network, giving the feature maps shift invariance. In addition, because the neurons in one feature map share weights, the number of free parameters in the network is reduced. Each convolutional layer in a CNN is followed by a computational layer for local averaging and secondary extraction; this characteristic two-stage feature extraction structure reduces the feature resolution.

CNNs are mainly used to recognize two-dimensional patterns that are invariant to displacement, scaling, and other forms of distortion. Because the feature detection layers of a CNN learn from training data, explicit feature extraction is avoided when using a CNN: features are learned implicitly from the training data. Moreover, because neurons in the same feature map share the same weights, the network can learn in parallel, which is a major advantage of convolutional networks over networks in which neurons are fully connected to one another. With their special structure of locally shared weights, convolutional neural networks have unique advantages in image processing.

The first convolutional neural network (CNN) model may adopt various network structures, such as LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, Faster-RCNN, FCN, Mask-RCNN, YOLO, SSD, YOLO2, and other network model structures now known or developed in the future.

Because the insertion method requires relatively high accuracy, models such as Fast-RCNN, Faster-RCNN, and FCN are preferred.
As shown in FIG. 4, in some embodiments, the pre-trained first CNN model is obtained by the following method:

S111: obtain an initialized first CNN model, where the first CNN model outputs, for an input first image, the coordinates, pose, or relative pose of the pin, and/or outputs, for an input second image or second current image including the target jack, the third pose or third current pose. It should be noted that when the model is to output a relative pose, the pose of the robot must also be input to the model so that the relative pose can be output.

Thus the first CNN model may be used only to obtain the coordinates, pose, or relative pose of the pin from the first image; or only to obtain the third pose or third current pose from the second image or second current image; or both to obtain the coordinates, pose, or relative pose of the pin from the first image and to obtain the third pose or third current pose from the second image or second current image.

For the initialization of the first CNN model, refer to the initialization of the NN model; the details are not repeated here.

In other embodiments, to save training time, the model may also be initialized with the parameters of an already-trained model and then fine-tuned (finetune) on that basis.

S112: obtain training data and label data.

Multiple images including the target jack are acquired while the insertion machine is running or at rest; approximately 1,000 acquisitions are needed to obtain sufficient training data to train the model.

The label data may be annotated manually or automatically. In the automatic method, the coordinates, pose, or relative pose of the target jack, extracted from the images including the target jack during insertion trajectory planning based on traditional vision methods, are used as the annotations for training.

S113: train the initialized first CNN model based on the training data and label data to obtain the pre-trained first CNN model.

For a description of the specific training process of the first CNN model, refer to the training process of the NN model in the embodiment above; the details are not repeated here.
FIG. 5 is a flowchart of an embodiment of a method for obtaining the pre-trained second CNN model provided by the present invention.

In some embodiments, S130 of the embodiment above — obtaining the third pose or third current pose of the target jack in the first coordinate system from the acquired second image or second current image — may be implemented by traditional vision methods or by machine learning methods.

The traditional vision approach binarizes the image, identifies the contour of the target jack in the image, and computes the coordinates of the target jack from that contour, optionally further converting the coordinates of the target jack into the pose of the target jack according to pre-calibrated results.

Implementation by machine learning means inputting the second current image into the pre-trained second CNN model, or into the trained first CNN model described in the embodiment above, and outputting the third pose or third current pose. Specifically, this may include: outputting the third coordinate or third current coordinate from the model and then converting it into the third pose or third current pose according to the calibration results; or outputting the third pose or third current pose directly from the model. The former is preferred, as it improves the accuracy of pose extraction.

It should be noted that the calibration results here are the calibration of the second image sensor (the sensor that acquires the second image or second current image) itself, and the hand-eye calibration between the second image sensor and the robot; for the related description, refer to the first image sensor, and the details are not repeated here.

As shown in FIG. 5, in some embodiments, the pre-trained second CNN model is obtained by the following method:

S131: obtain an initialized second CNN model, where the second CNN model outputs, for an input second image or second current image including the target jack, the third coordinate or third current coordinate, or the third pose or third current pose, of the target jack in that image.

For the initialization of the second CNN model, refer to the initialization of the NN model; the details are not repeated here.

In other embodiments, to save training time, the model may also be initialized with the parameters of an already-trained model and then fine-tuned (finetune) on that basis.

S132: obtain training data and label data.

Multiple images including the target jack are acquired while the insertion machine is running or at rest; approximately 1,000 acquisitions are needed to obtain sufficient training data to train the neural network.

The label data may be annotated manually or automatically. In the automatic method, the coordinates of the target jack, extracted from the images including the target jack during insertion trajectory planning based on traditional vision methods, are used as the annotations for training.

S133: train the initialized second CNN model based on the training data and label data to obtain the pre-trained second CNN model.

For other descriptions of the specific training process of the second CNN model, refer to the training process of the first CNN or NN model in the embodiments above; the details are not repeated here.

It should be noted that when the third pose or third current pose is to be output based on the trained first CNN model, then — since, as described in the embodiment above, the first CNN model can also output the coordinates, pose, or relative pose of the pin for the input first image — the first image as well as the second image or second current image must be input together when training the first CNN model, so that the first CNN model is trained on both. For the specific method, refer to the training methods of the first CNN model and the second CNN model described in the embodiments above; the details are not repeated here.
In some embodiments, in combination with the embodiments above, when the relative pose, the third pose or third current pose, and the current amount of motion are all obtained by machine learning methods, the models are connected as follows:

As shown in FIG. 24, in some embodiments, the relative pose is obtained based on the first CNN model 11; the third pose or third current pose is obtained based on the second CNN model 12; the second current pose is obtained from the relative pose and the first current pose; and the MPL model 13 combines the first current pose, the second current pose, and the third pose or third current pose to output the current amount of movement.

As shown in FIG. 24, in other embodiments, the relative pose is obtained based on the first CNN model 11; the third pose or third current pose is obtained based on the second CNN model 12; the second current pose is obtained from the relative pose and the first current pose; and the MPL model 13 combines the first current pose, the relative pose, and the third pose or third current pose to output the current amount of movement.

As shown in FIG. 25, in some embodiments, when the first CNN model 11 and the MPL model 13 are used, the relative pose and the third pose or third current pose are obtained based on the first CNN model 11; the second current pose is obtained from the first current pose and the relative pose; and the MPL model 13 combines the first current pose, the second current pose, and the third pose or third current pose to output the current amount of movement.

As shown in FIG. 25, in other embodiments, when the first CNN model 11 and the MPL model 13 are used, the relative pose and the third pose or third current pose are obtained based on the first CNN model 11; and the MPL model 13 combines the first current pose, the relative pose, and the third pose or third current pose to output the current amount of movement.
As shown in FIGS. 19 and 20, in some embodiments, an embodiment of the present invention further provides an insertion device 700. The insertion device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory storing a computer program (not shown in the figures). The processor 740 is coupled to each of the other units in a wired or wireless manner.

The wireless manner may include, but is not limited to: 3G/4G, WIFI, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connection methods now known or developed in the future.

In operation, the first image sensor 710 acquires a first image including the pin and sends the first image to the processor 740.

The first image sensor 710 is typically disposed at a position between the PCB insertion working position and the electronic component pickup position.

In operation, the second image sensor 720 acquires a second image or second current image including the target jack and sends the second image or second current image to the processor 740.

The second image sensor may be disposed at any position from which an image including the target jack can be acquired, for example at a position around the PCB or on the robot. As shown in FIG. 20, when the second image sensor 720 is disposed around the PCB 900, the second image sensor 720 is fixed relative to the target jack 910, so the second image needs to be acquired only once. As shown in FIG. 19, when the second image sensor 720 is disposed on the robot 730, the pose of the target jack 910 relative to the sensor changes continuously as the robot 730 moves, so the second current image must be re-acquired after each movement of the robot 730. Preferably, the second image sensor is disposed on the robot, as described in further detail in the embodiments below.

The first image sensor and the second image sensor may each acquire images including the target (the pin or the target jack) in a monocular, binocular, or multi-camera configuration, and the processor obtains the 3D pose of the target by analyzing the images including the target. The first image sensor and the second image sensor may include cameras, video cameras, scanners, or other devices with related functions (mobile phones, computers, etc.). A camera is taken as an example in the further detailed description below.

Monocular means that only one camera is included: with only a single camera, the triangular geometric relationship of corresponding feature points is constructed through inter-frame movement, thereby obtaining the pose of the target.

Binocular means that two cameras are included, i.e., positioning is performed with two cameras. For a target on an object, images including the target are acquired with two cameras fixed at different positions, and the coordinates of the target on the two camera planes are obtained respectively. As long as the exact relative position of the two cameras is known, the pose of the target in the coordinate system of either camera can be obtained geometrically, i.e., the pose of the target is determined.

The multi-camera principle follows the binocular case and is not repeated here.
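For the binocular case described above, when the two cameras are rectified (parallel, with a known separation), the geometry reduces to depth from disparity: two cameras of focal length f separated by baseline b observe a point whose image x-coordinates differ by disparity d, giving depth Z = f·b/d. A minimal sketch, with illustrative numbers only (not the device's actual parameters):

```python
def triangulate_rectified(f_px, baseline_m, x_left, x_right, y):
    """Recover the 3D point (in the left camera frame) from a rectified
    stereo pair: f_px is the focal length in pixels, baseline_m the camera
    separation, and (x_left, y), (x_right, y) the pixel offsets of the
    target from each camera's principal point."""
    d = x_left - x_right                 # disparity in pixels
    Z = f_px * baseline_m / d            # depth: Z = f * b / d
    X = x_left * Z / f_px                # back-project the left-image pixel
    Y = y * Z / f_px
    return X, Y, Z

# f = 800 px, baseline 0.06 m, disparity 100 - 60 = 40 px -> depth 1.2 m.
print(triangulate_rectified(800.0, 0.06, 100.0, 60.0, 50.0))
```

In the general (non-rectified) case the same triangulation is done with the full extrinsic matrices obtained from calibration, but the principle — intersecting the two viewing rays — is the same.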
In operation, the robot 730 sends the current information of each joint of the robot to the processor 740, moves by the current amount of motion under the control of the processor 740, and drives the pin into the target jack under the control of the processor 740.

When the processor 740 operates (i.e., executes the computer program), it implements the steps in each of the insertion method embodiments above, for example steps S110 to S170 shown in FIG. 1.

In some embodiments, in operation the processor 740 of the insertion device also implements the steps of the pre-trained NN model acquisition method, the pre-trained first CNN model acquisition method, and/or the pre-trained second CNN model acquisition method described in the embodiments above. In addition, each of the above methods may also be executed by a processor of a device other than the insertion device.
Embodiment 2
The insertion device is an industrial automation device that automatically inserts the pins of electronic components into target jacks on a PCB.

As shown in FIG. 6, in some embodiments, the insertion method includes:

S210: from the acquired first image including the pin, obtain the relative pose of the pin with respect to the robot in the first coordinate system, based on the pre-trained first CNN model.
As shown in FIG. 7, in some embodiments, after the relative pose is obtained and before the first current pose, or the third pose or third current pose, is obtained, step S280 may be performed to control the robot to move the pin to the vicinity of the target jack, which saves subsequent insertion time. In some embodiments, when moving the pin to the vicinity of the first target jack on a given PCB, the coordinates or pose of a marker point on the PCB may first be detected; combined with the PCB layout drawing, the approximate position of the target jack can then be estimated, and moving the electronic component to this approximate position places the pin near the target jack. Thereafter, since the position of the first target jack is already known, the approximate position coordinates of each subsequent target jack can be estimated from the position of the first target jack together with the PCB layout drawing, and the robot can be controlled to move to the vicinity of that target jack.
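The coarse positioning in S280 amounts to adding a layout offset to a detected reference: once the board's marker point (or the first jack) has been located, the approximate position of any jack is the reference position plus that jack's offset taken from the PCB layout drawing. A minimal sketch, with hypothetical names and numbers:

```python
def approximate_jack_position(reference_xy, layout_offset_xy):
    """Estimate a target jack's position by adding its offset in the
    PCB layout drawing to a detected reference (marker or first jack)."""
    rx, ry = reference_xy
    ox, oy = layout_offset_xy
    return rx + ox, ry + oy

marker = (120.0, 45.0)          # detected marker point, in mm (illustrative)
offset = (32.5, -10.0)          # jack offset from the marker, from the layout
print(approximate_jack_position(marker, offset))  # target near (152.5, 35.0)
```

The result is only an approximate position; the fine alignment is still performed by the vision-guided steps S220 to S270.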
S220: obtain the first current pose of the robot in the first coordinate system from the current information of each joint of the robot.

S230: from the second image or second current image including the target jack, obtain the third pose or third current pose of the target jack in the first coordinate system, based on the pre-trained second CNN model or the first CNN model.

S240: compute the current amount of motion to be performed by the robot from the first current pose, the relative pose, and the third pose or third current pose. S250: determine whether the robot satisfies the insertion condition; if it does, S260: control the robot to drive the pin into the target jack; if it does not, S270: control the robot to perform the current amount of motion.

With the above insertion method, performing insertion by a machine-learning-based method can improve the insertion accuracy in a variety of complex environments; in addition, in some cases the number of steps in the insertion motion can be reduced, improving working efficiency.

In some embodiments, computing the current amount of motion to be performed by the robot from the first current pose, the relative pose, and the third pose or third current pose may be implemented by a traditional visual servoing method or by a machine learning method.

The traditional visual servoing approach obtains the current pose of the target jack and the current pose of the pin; computes the required post-motion pose of the pin from the pose of the target jack and the current pose of the pin; computes the required post-motion pose of the robot from the calibration results between the pin and the robot; computes the current amount of motion to be performed by the robot (amount of translation plus amount of rotation) from the current pose of the robot and the required post-motion pose; and controls the robot to perform the current amount of motion. These steps are then repeated until the current amount of motion has been very small for several consecutive iterations, or until a preset number of steps has been moved, at which point the insertion condition is judged to be satisfied and the robot is controlled to drive the pin in for insertion.
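The iterative loop just described can be sketched as follows. This is a schematic outline: `get_pin_pose`, `get_jack_pose`, `compute_motion`, and `move_robot` are hypothetical stand-ins for the pose-acquisition, kinematics, and control steps of the method:

```python
def servo_until_aligned(get_pin_pose, get_jack_pose, compute_motion,
                        move_robot, tol=1e-3, max_steps=50):
    """Repeat: measure poses, compute the remaining motion, move.
    Stop when the commanded motion is small (insertion condition met)
    or when a preset number of steps has been taken."""
    for step in range(max_steps):
        motion = compute_motion(get_pin_pose(), get_jack_pose())
        if max(abs(m) for m in motion) < tol:
            return step                  # aligned: ready to insert
        move_robot(motion)
    return max_steps

# Toy 1-D stand-in: each step the pin closes 80% of the gap to the jack.
state = {"pin": 0.0}
n = servo_until_aligned(
    get_pin_pose=lambda: state["pin"],
    get_jack_pose=lambda: 1.0,
    compute_motion=lambda pin, jack: (0.8 * (jack - pin),),
    move_robot=lambda m: state.__setitem__("pin", state["pin"] + m[0]),
)
print(n)  # converges in a handful of steps
```

The stopping test here uses a single small-motion threshold; the method as described instead requires the motion to remain small over several consecutive iterations, which would replace the `if` with a short history check.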
The machine learning method means computing the current amount of motion to be performed by the robot from the first current pose, the relative pose, and the third pose or third current pose, based on the pre-trained NN model.

It should be noted that computing the current amount of motion to be performed by the robot based on the pre-trained NN model, from the first current pose, the relative pose, and the third pose or third current pose, may include:

computing the second current pose from the first current pose and the relative pose, then inputting the first current pose, the second current pose, and the third pose or third current pose into the NN model and outputting the current amount of motion; or directly inputting the first current pose, the relative pose, and the third pose or third current pose into the NN model and outputting the current amount of motion. The former is preferred, as it improves the accuracy of the obtained amount of motion.
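The first option — computing the second current pose (the pin's pose in the base frame) from the first current pose (the robot's pose) and the relative pose before feeding the NN model — is a pose composition. A planar (x, y, theta) sketch for illustration only; the actual poses are three-dimensional, and the NN model itself is not shown:

```python
import math

def compose_pose(robot_pose, relative_pose):
    """Compose the robot pose with the pin's pose relative to the robot
    to obtain the pin's pose (x, y, theta) in the same base frame."""
    x, y, t = robot_pose
    rx, ry, rt = relative_pose
    return (x + rx * math.cos(t) - ry * math.sin(t),
            y + rx * math.sin(t) + ry * math.cos(t),
            t + rt)

# Robot at (1, 2) rotated 90 degrees; pin held 0.1 ahead of the gripper.
pin_pose = compose_pose((1.0, 2.0, math.pi / 2), (0.1, 0.0, 0.0))
print(pin_pose)  # pin at roughly (1.0, 2.1), heading pi/2
```

The composed pin pose, together with the robot pose and the target-jack pose, then forms the input vector of the NN model.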
有关第一CNN模型、第二CNN模型和NN模型的相关描述参见实施例一中的描述,在此不再重复赘述。For a description of the first CNN model, the second CNN model, and the NN model, refer to the description in the first embodiment, and the detailed description is not repeated here.
如图19、20所示,在一些实施例中,该插机设备700包括第一图像传感器710、第二图像传感器720、机械手730、处理器740和存储器(图未示意出)。处理器740通过有线或者无线的方式耦接上述其它各个单元。As shown in FIGS. 19 and 20, in some embodiments, the plug-in device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown). The processor 740 is coupled to the other units described above in a wired or wireless manner.
第一图像传感器710在工作时,采集包括引脚的第一图像,将该第一图像发送给处理器740。When operating, the first image sensor 710 captures a first image including the pin and sends the first image to the processor 740.
第二图像传感器720在工作时,采集包括目标插孔的第二图像或第二当前图像,将该第二图像或第二当前图像发送给处理器740。When operating, the second image sensor 720 captures a second image or second current image including the target jack and sends the second image or second current image to the processor 740.
所述机械手730在工作时,将所述各关节的当前信息发送给所述处理器740;基于所述处理器740的控制移动所述当前运动量;基于所述处理器740的控制带动所述引脚插入所述目标插孔。When operating, the robot 730 sends the current information of each joint to the processor 740; moves by the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin to be inserted into the target jack.
所述处理器740工作(即执行所述计算机程序)时实现上述各个插机方法实施例中的步骤,例如图6所示的步骤S210至S270。The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operating (ie, executing the computer program), such as steps S210 through S270 shown in FIG.
在一些实施例中,所述插机设备的处理器740在工作时还包括实现上面实施例所述的预先经过训练的NN模型获取方法、预先经过训练的第一CNN模型获取方法和/或预先经过训练的第二CNN模型获取方法中的各个步骤。此外上述各个方法也可以通过插机设备以外的其它设备的处理器执行。In some embodiments, the processor 740 of the plug-in device, when operating, also implements the steps of the pre-trained NN model acquisition method, the pre-trained first CNN model acquisition method, and/or the pre-trained second CNN model acquisition method described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the plug-in device.
有关插机方法、插机设备省略的其它相关描述参见前面其它实施例,在此不再重复赘述。For other related descriptions about the plug-in method and the plug-in device, refer to other embodiments, and the details are not repeated here.
实施例三、Embodiment 3
图8为本发明提供的插机方法的实施例的第五流程图。图9为本发明提供的插机方法的实施例的第六流程图。图10为本发明提供的预先经过训练的第三CNN模型的获取方法的实施例的流程图。FIG. 8 is a fifth flowchart of an embodiment of a plug-in method provided by the present invention. FIG. 9 is a sixth flowchart of an embodiment of a plug-in method provided by the present invention. FIG. 10 is a flowchart of an embodiment of a method for acquiring a pre-trained third CNN model provided by the present invention.
插机设备是一种自动实现将电子元件的引脚***PCB板上的目标插孔的工业自动化设备。The plug-in device is an industrial automation device that automatically inserts the pins of an electronic component into a target jack on a PCB.
如图8所示,所述插机方法包括:As shown in FIG. 8, the plug-in method includes:
S310获取包括引脚的第一图像。S310 acquires a first image including pins.
S320根据获取的机械手各关节的当前信息,获取机械手在所述第一坐标系下的第一当前位姿。S320: acquiring a first current pose of the robot in the first coordinate system according to the acquired current information of each joint of the robot.
如图9所示,在一些实施例中,在获取第一图像之后,在获取第一当前位姿,或第二图像或第二当前图像之前,还可以包括步骤S380控制机械手带动引脚移动到目标插孔附近,这样可以节省后续的插机工作时间。在一些实施例中,在将引脚移动到某块PCB板上的第一个目标插孔附近时,可以先检测PCB板上的标记点的坐标或位姿,结合PCB板布局图,可以推算出目标插孔的大致位置,并将电子元件移动至此大致位置,则可使引脚位于目标插孔的附近。以后,由于已经知道了第一个目标插孔的位置,可以以第一个目标插孔的位置为基准,结合PCB板的布局图,推算出目标插孔的大致位置坐标,并控制机械手移动到目标插孔附近。As shown in FIG. 9, in some embodiments, after acquiring the first image and before acquiring the first current pose, or the second image or second current image, the method may further include step S380 of controlling the robot to drive the pin to move to the vicinity of the target jack, which saves subsequent insertion working time. In some embodiments, when moving the pin to the vicinity of the first target jack on a given PCB, the coordinates or pose of a mark point on the PCB may first be detected; combined with the PCB layout diagram, the approximate position of the target jack can then be deduced, and moving the electronic component to this approximate position places the pin near the target jack. Thereafter, since the position of the first target jack is already known, the approximate position coordinates of subsequent target jacks can be deduced from the position of the first target jack combined with the PCB layout diagram, and the robot can be controlled to move to the vicinity of the target jack.
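For illustration only, the coarse-positioning step described above can be sketched as follows; the reference point (a detected mark point, or the first inserted jack) and the per-jack offsets from the board layout are illustrative assumptions:

```python
def coarse_jack_position(reference_xy, layout_offset_xy):
    """Estimate the approximate position of a target jack by adding its
    layout offset (from the PCB layout diagram) to a measured reference
    point (mark point or first target jack), in board coordinates."""
    rx, ry = reference_xy
    ox, oy = layout_offset_xy
    return (rx + ox, ry + oy)

# Example: mark point detected at (120.0, 45.5) mm; the target jack sits
# at offset (30.2, -12.4) mm from the mark point in the board layout.
approx = coarse_jack_position((120.0, 45.5), (30.2, -12.4))
```

The robot would then be commanded to move the pin to `approx`, after which fine alignment proceeds by visual servoing or the learned model.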
S330获取包括目标插孔的第二图像或第二当前图像;S330 acquires a second image or a second current image including the target jack;
S340根据所述第一图像、所述第一当前位姿、所述第二图像或所述第二当前图像,基于预先经过训练的第三CNN模型,计算机械手需实施的当前运动量;S350判断机械手是否满足插机条件;若满足,S360控制所述机械手带动所述引脚插入所述目标插孔;若不满足,S370控制所述机械手实施所述当前运动量。S340: according to the first image, the first current pose, and the second image or the second current image, calculating, based on a pre-trained third CNN model, the current amount of motion the robot must perform; S350: determining whether the robot satisfies the insertion condition; if satisfied, S360: controlling the robot to drive the pin to be inserted into the target jack; if not satisfied, S370: controlling the robot to perform the current amount of motion.
将第一图像、第二图像或第二当前图像直接输入第三CNN模型中,即可直接输出机械手需实施的当前运动量,而不需要预先对第一图像、第二图像或第三图像预先提取出引脚的相对位姿,以及目标插孔的第三位姿或第三当前位姿。By inputting the first image and the second image or second current image directly into the third CNN model, the current amount of motion the robot must perform can be output directly, without first extracting from the first image, second image, or third image the relative pose of the pin and the third pose or third current pose of the target jack.
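For illustration only, this image-to-motion mode can be sketched as follows; the convolutional backbone is replaced by a trivial global-average feature extractor, and all names, shapes, and the fusion weights are illustrative assumptions rather than the disclosed model:

```python
import numpy as np

def fake_backbone(img):
    """Stand-in for the CNN backbone: one global-average feature per
    channel, (H, W, C) -> (C,)."""
    return img.mean(axis=(0, 1))

def third_cnn_motion(first_img, second_img, first_pose, W_fuse):
    """Compute an intermediate result from the two raw images, then fuse
    it with the robot's first current pose to obtain the 6-DoF current
    amount of motion, without any explicit pose extraction step."""
    feat = np.concatenate([fake_backbone(first_img),
                           fake_backbone(second_img)])   # intermediate result
    x = np.concatenate([feat, first_pose])
    return W_fuse @ x
```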
有关第三CNN模型的结构和训练方法参见上面实施例的第一CNN模型,在此不再赘述。For the structure and training method of the third CNN model, refer to the first CNN model of the above embodiment, and details are not described herein again.
采用上面的插机方法,通过基于机器学习的方法进行的插机,能够提高各种复杂的环境下插机的准确率;另外,可以在一些情况下,可以减少插机运动过程中的步数,提高了工作效率。With the above insertion method, performing insertion via a machine-learning-based approach can improve the accuracy of insertion in a variety of complex environments; in addition, in some cases it can reduce the number of steps in the insertion motion, improving work efficiency.
如图10所示,进一步,在一些实施例中,预先经过训练的第三CNN模型通过如下方法获取:As shown in FIG. 10, further, in some embodiments, the pre-trained third CNN model is obtained by:
S341获取初始化的第三CNN模型,所述CNN模型为针对输入的第一图像、第一当前位姿、第二图像或第二当前图像,输出机械手需实施的当前位移量;S341: acquiring an initialized third CNN model that, for an input consisting of the first image, the first current pose, and the second image or second current image, outputs the current amount of displacement the robot must perform;
S342获取训练数据和标签数据;S342 acquires training data and tag data;
可以基于传统视觉伺服运行插机多次(比如:1000次),以获取足够的训练数据用以训练初始化的NN模型。The plug-in can be run multiple times (eg, 1000 times) based on traditional visual servoing to obtain sufficient training data to train the initialized NN model.
在传统视觉伺服时,机械手通常按照预先设定的步数(比如:3步)形成最终插入元件时机械手的位姿。具体可以以视觉伺服时机械手每走一步时的机械手的位姿,以及该步对应的包括引脚的图像和包括目标插孔的图像作为训练数据。In traditional visual servoing, the robot typically reaches the pose at which the component is finally inserted within a preset number of steps (for example, 3 steps). Specifically, the pose of the robot at each step during visual servoing, together with the image including the pin and the image including the target jack corresponding to that step, can be used as the training data.
基于视觉伺服时机械手每一步的位姿,以及最终插入元件时机械手的位姿,计算出机械手从每一步位姿移动至插入位姿所需的运动量,以这个运动量作为模型训练用的标注形成标签数据。Based on the pose of the robot at each step during visual servoing and the pose of the robot when the component is finally inserted, the amount of motion required for the robot to move from each step's pose to the insertion pose is calculated; this amount of motion serves as the annotation for model training, forming the label data.
S343基于所述训练数据和标签数据,对所述初始化的第三CNN模型进行训练,以获取预先经过训练的所述第三CNN模型。S343 trains the initialized third CNN model based on the training data and the tag data to obtain the third CNN model that is trained in advance.
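For illustration only, the labelling rule of the steps above can be sketched as follows; pose differencing is a simplification (real rotations compose rather than subtract), and all names are illustrative assumptions:

```python
import numpy as np

def make_labels(step_poses, insertion_pose):
    """For each intermediate robot pose recorded during a visual-servoing
    run, the training label is the motion that would carry the robot from
    that pose to the final insertion pose."""
    target = np.asarray(insertion_pose)
    return [target - np.asarray(p) for p in step_poses]

# Example: two recorded servo steps, final insertion pose = (2, ..., 2).
labels = make_labels([np.zeros(6), np.ones(6)], np.full(6, 2.0))
```

Each `(step image, step pose) -> label` pair then forms one training sample for the model.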
第三CNN模型具体训练的过程的其它相关描述参见上面实施例中第一CNN或NN模型的训练过程,在此不再重复赘述。For other related descriptions of the process of the third CNN model specific training, refer to the training process of the first CNN or NN model in the above embodiment, and the details are not repeated here.
有关插机方法、插机设备省略的其它相关描述参见前面其它实施例,在此不再重复赘述。For other related descriptions about the plug-in method and the plug-in device, refer to other embodiments, and the details are not repeated here.
如图25所示,当采用一个第三CNN模型14时,将第一图像、第二图像或第三图像以及第一当前位姿输入所述第三CNN模型14,根据第一图像、第二图像或第三图像计算出中间结果,然后结合第一当前位姿,从而获取出当前移动量。As shown in FIG. 25, when a single third CNN model 14 is used, the first image, the second image or third image, and the first current pose are input into the third CNN model 14; an intermediate result is computed from the first image and the second or third image, and is then combined with the first current pose to obtain the current amount of movement.
如图19、20所示,在一些实施例中,该插机设备700包括第一图像传感器710、第二图像传感器720、机械手730、处理器740和存储器(图未示意出)。处理器740通过有线或者无线的方式耦接上述其它各个单元。As shown in FIGS. 19 and 20, in some embodiments, the plug-in device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown). The processor 740 is coupled to the other units described above in a wired or wireless manner.
第一图像传感器710在工作时,采集包括引脚的第一图像,将该第一图像发送给处理器740。When operating, the first image sensor 710 captures a first image including the pin and sends the first image to the processor 740.
第二图像传感器720在工作时,采集包括目标插孔的第二图像或第二当前图像,将该第二图像或第二当前图像发送给处理器740。When operating, the second image sensor 720 captures a second image or second current image including the target jack and sends the second image or second current image to the processor 740.
所述机械手730在工作时,将所述各关节的当前信息发送给所述处理器740;基于所述处理器740的控制移动所述当前运动量;基于所述处理器740的控制带动所述引脚插入所述目标插孔。When operating, the robot 730 sends the current information of each joint to the processor 740; moves by the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin to be inserted into the target jack.
所述处理器740工作(即执行所述计算机程序)时实现上述各个插机方法实施例中的步骤,例如图8所示的步骤S310至S370。The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operating (ie, executing the computer program), such as steps S310 through S370 shown in FIG.
在一些实施例中,所述插机设备的处理器740在工作时还包括实现上面实施例所述的预先经过训练的第三CNN模型获取方法中的各个步骤。此外上述各个方法也可以通过插机设备以外的其它设备的处理器执行。In some embodiments, the processor 740 of the plug-in device, when operating, also implements the steps of the pre-trained third CNN model acquisition method described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the plug-in device.
实施例四、Embodiment 4
如图11所示,在一些实施例中,所述插机方法包括:As shown in FIG. 11, in some embodiments, the plug-in method includes:
S410,根据获取的包括引脚和目标插孔的第三当前图像,获取引脚在第一坐标系下的第二位姿或第二当前位姿,以及目标插孔在所述第一坐标系下的第三位姿或第三当前位姿;S410: according to an acquired third current image including the pin and the target jack, acquiring a second pose or second current pose of the pin in the first coordinate system, and a third pose or third current pose of the target jack in the first coordinate system;
如图12、21、22所示,在一些实施例中,机械手730从取料位抓取电子元件800后,为使得第三图像传感器750能同时获取包括引脚810和目标插孔910的第三当前图像,在S410之前还可以包括步骤S470控制机械手带动引脚移动到目标插孔附近,相关描述参见具体实施例一,在此不再重复赘述。As shown in FIGS. 12, 21 and 22, in some embodiments, after the robot 730 grabs the electronic component 800 from the pick-up position, in order that the third image sensor 750 can simultaneously acquire a third current image including the pin 810 and the target jack 910, the method may further include, before S410, step S470 of controlling the robot to drive the pin to move to the vicinity of the target jack; for the related description, refer to the first embodiment, which is not repeated here.
在一些实施例中,在进行PCB板第一个目标插孔的插引脚前,将机械手移动至PCB板标记点,标记点(Mark point)为一个带有周边空白区域的实心的圆形或者矩形点,利用固定在机械手末端的第二图像传感器获取标记位图像,并进行标记位位置检测,基于所述检测的标记位置,以及PCB板布局图,推算出目标插孔的大致位置,并将电子元件移动至此位置,则此时引脚位于目标插孔的附近。以后,由于已经知道了第一个目标插孔的位置,因此不再需要获取标记位图像,可以以第一个目标插孔的位置为基准,结合PCB板的布局图,推算出目标插孔的大致位置坐标,并控制机械手移动到此位置坐标,从而使得引脚移动到目标插孔附近。In some embodiments, before inserting the pin into the first target jack of the PCB, the robot is moved to a mark point of the PCB; a mark point is a solid circular or rectangular point with a blank surrounding area. A mark-point image is acquired by the second image sensor fixed at the end of the robot, and mark-point position detection is performed; based on the detected mark position and the PCB layout diagram, the approximate position of the target jack is deduced, and the electronic component is moved to this position, at which point the pin is located near the target jack. Thereafter, since the position of the first target jack is already known, the mark-point image no longer needs to be acquired; the approximate position coordinates of the target jack can be deduced from the position of the first target jack combined with the PCB layout diagram, and the robot is controlled to move to these position coordinates, so that the pin moves to the vicinity of the target jack.
如图20所示,根据上面实施例所述,第三图像传感器750可以设置在机械手的末端关节上,除此之外也可以设置在机械手的其它关节上(图未示意出),获取从第三图像传感器750采集并发送的第三当前图像,由于第三图像传感器750相对目标插孔910运动,而相对引脚810的位姿固定,因此,基于第三当前图像可以获取引脚的第二位姿和目标插孔的第三当前位姿。As shown in FIG. 20, according to the above embodiment, the third image sensor 750 may be disposed on the end joint of the robot, or alternatively on another joint of the robot (not shown), and the third current image captured and sent by the third image sensor 750 is acquired. Since the third image sensor 750 moves relative to the target jack 910 while its pose relative to the pin 810 is fixed, the second pose of the pin and the third current pose of the target jack can be acquired based on the third current image.
如图21所示,当第三图像传感器750设置在PCB板900周边某一位置时,由于第三图像传感器750相对于PCB板900位姿固定,相对引脚运动,因此,基于第三当前图像可以获取引脚的第二当前位姿和目标插孔的第三位姿。As shown in FIG. 21, when the third image sensor 750 is disposed at a position around the PCB board 900, since the third image sensor 750 is fixed in pose relative to the PCB board 900 and moves relative to the pin, the second current pose of the pin and the third pose of the target jack can be acquired based on the third current image.
优选将第三图像传感器750设置在PCB板900周边,这样可以方便获取到包括引脚的第三当前图像。The third image sensor 750 is preferably disposed at the periphery of the PCB board 900 so that a third current image including the pins can be easily obtained.
S420,根据机械手各关节的当前信息,获取所述机械手在所述第一坐标系下的第一当前位姿;S420: Acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
S430根据所述第一当前位姿、所述第二位姿或第二当前位姿、以及所述第三位姿或第三当前位姿,基于预先经过训练的NN模型计算机械手需实施的当前运动量;S440判断所述机械手是否满足插机条件;S450若满足,控制所述机械手带动所述引脚插入所述目标插孔;S460若不满足,控制机械手实施所述当前运动量。S430: calculating, based on the pre-trained NN model, the current amount of motion the robot must perform according to the first current pose, the second pose or second current pose, and the third pose or third current pose; S440: determining whether the robot satisfies the insertion condition; S450: if satisfied, controlling the robot to drive the pin to be inserted into the target jack; S460: if not satisfied, controlling the robot to perform the current amount of motion.
该预先经过训练的NN模型的输入包括:第一当前位姿、第二位姿或第二当前位姿、以及第三位姿或第三当前位姿,区别在于实施例一中输入的包括引脚的相对位姿或第二当前位姿,而本实施例中的输入包括引脚的第二位姿或第二当前位姿,因此本实施例中的预先经过训练的NN模型与实施例一中所述的预先经过训练的NN模型可以采用同样的模型结构和训练方法,只是输入的训练数据略有上述的不同。The input of the pre-trained NN model includes the first current pose, the second pose or second current pose, and the third pose or third current pose. The difference is that in the first embodiment the input includes the relative pose of the pin (or the second current pose), whereas the input in this embodiment includes the second pose or second current pose of the pin. The pre-trained NN model in this embodiment can therefore use the same model structure and training method as the pre-trained NN model described in the first embodiment, with only the slight difference in input training data noted above.
因此根据实施例一和本实施例中所述,所述预先经过训练的NN模型获取方法包括:Therefore, according to the first embodiment and the embodiment, the pre-trained NN model acquisition method includes:
S210获取初始化的NN模型,所述NN模型为针对输入的在第一坐标系下的第一当前位姿;相对位姿、第二位姿或第二当前位姿;以及第三位姿或第三当前位姿,输出机械手需实施的当前运动量。S210: acquiring an initialized NN model that, for an input consisting of the first current pose in the first coordinate system; the relative pose, second pose, or second current pose; and the third pose or third current pose, outputs the current amount of motion the robot must perform.
S220获取训练数据和标签数据。S220 acquires training data and tag data.
S230基于所述训练数据和标签数据,对所述初始化的NN模型进行训练,以获取预先经过训练的NN模型。S230 trains the initialized NN model based on the training data and the tag data to obtain a pre-trained NN model.
有关NN模型的结构和训练方法参见上面实施例的描述,在此不再重复赘述。For the structure and training method of the NN model, refer to the description of the above embodiment, and the details are not repeated here.
采用上面的插机方法,通过基于机器学习的方法进行的插机,能够提高各种复杂的环境下插机的准确率;另外,可以在一些情况下,可以减少插机运动过程中的步数,提高了工作效率。With the above insertion method, performing insertion via a machine-learning-based approach can improve the accuracy of insertion in a variety of complex environments; in addition, in some cases it can reduce the number of steps in the insertion motion, improving work efficiency.
图13为本发明提供的预先经过训练的第四CNN模型的获取方法的实施例的流程图。FIG. 13 is a flowchart of an embodiment of a method for acquiring a pre-trained fourth CNN model provided by the present invention.
进一步,在一些实施例中,上面实施例S410根据包括引脚和目标插孔的第三当前图像,获取引脚在第一坐标系下的第二位姿或第二当前位姿,以及目标插孔在所述第一坐标系下的第三位姿或第三当前位姿,可以通过传统视觉方法实现,也可以通过机器学习的方法实现。Further, in some embodiments, step S410 of the above embodiment, acquiring, according to the third current image including the pin and the target jack, the second pose or second current pose of the pin in the first coordinate system and the third pose or third current pose of the target jack in the first coordinate system, may be implemented by a traditional vision method or by a machine-learning method.
传统视觉方式是指将第三当前图像进行二值化处理,然后从第三当前图像中识别出引脚和目标插孔的轮廓,根据轮廓计算引脚的第二位姿或第二当前位姿和目标插孔的第三位姿或第三当前位姿。The traditional vision approach refers to binarizing the third current image, identifying the contours of the pin and the target jack from the third current image, and computing from those contours the second pose or second current pose of the pin and the third pose or third current pose of the target jack.
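For illustration only, this traditional pipeline can be sketched as follows, using raw image moments of the binarized mask in place of an explicit contour extractor; the threshold value and the 2-D pose convention (centroid + in-plane angle) are illustrative assumptions:

```python
import numpy as np

def pose_from_binary(img, thresh=128):
    """Binarize the image, then recover a 2-D position (centroid) and an
    in-plane orientation angle from the second-order moments of the
    foreground pixels."""
    mask = np.asarray(img, dtype=float) > thresh
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                     # position
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # orientation
    return cx, cy, angle
```

In practice this would be applied twice per image, once to the region containing the pin and once to the region containing the target jack.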
通过机器学习的方法实现是指将第三当前图像输入预先经过训练的第四CNN模型,直接输出第二位姿或第二当前位姿,以及第三位姿或第三当前位姿。The implementation by the machine learning method refers to inputting the third current image into the pre-trained fourth CNN model, and directly outputting the second pose or the second current pose, and the third pose or the third current pose.
第四CNN模型可以包括LeNet,AlexNet,ZFNet,VGG,GoogLeNet,Residual Net,DenseNet,R-CNN,SPP-NET,Fast-RCNN,Faster-RCNN,FCN,Mask-RCNN,YOLO,SSD,YOLO2.The fourth CNN model may include LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, Faster-RCNN, FCN, Mask-RCNN, YOLO, SSD, YOLO2.
如图13所示,在一些实施例中,预先经过训练的第四CNN模型通过如下方法获取:As shown in FIG. 13, in some embodiments, the pre-trained fourth CNN model is obtained by:
S411获取初始化的第四CNN模型,所述第四CNN模型为针对输入的包括引脚和目标插孔的第三当前图像,输出第三当前图像中引脚的第二位姿或第二当前位姿,以及目标插孔的第三位姿或第三当前位姿;S411: acquiring an initialized fourth CNN model that, for an input third current image including the pin and the target jack, outputs the second pose or second current pose of the pin in the third current image, and the third pose or third current pose of the target jack;
S412获取训练数据和标签数据;S412 acquires training data and tag data;
在插机运行过程中或静止状态下采集多张包括引脚和目标插孔的图像,需要大约1000次以获取足够的训练数据用以训练模型。Multiple images including the pin and the target jack are collected while the insertion machine is running or at rest; approximately 1000 captures are needed to obtain sufficient training data to train the model.
标签数据可以通过人工或者自动的方法进行标注。自动的方法可以通过基于传统视觉方法的插机轨迹规划过程中,从包括引脚和目标插孔的图像中提取的引脚和目标插孔的位姿作为训练用的标注。Label data can be annotated manually or automatically. In the automatic method, the poses of the pin and the target jack extracted from images including the pin and the target jack during insertion trajectory planning based on the traditional vision method can serve as the annotations for training.
S413基于所述训练数据和标签数据,对所述初始化的第四CNN模型进行训练,以获取预先经过训练的所述第四CNN模型。S413: training the initialized fourth CNN model based on the training data and the label data, to obtain the pre-trained fourth CNN model.
有关第四CNN模型的结构和训练方法参见上面实施例所述的第一CNN模型,在此不再赘述。For the structure and training method of the fourth CNN model, refer to the first CNN model described in the above embodiment, and details are not described herein again.
有关插机方法、插机设备省略的其它相关描述参见前面其它实施例,在此不再重复赘述。For other related descriptions about the plug-in method and the plug-in device, refer to other embodiments, and the details are not repeated here.
如图21、22所示,本发明实施例还提供一种插机设备700,所述插机设备700包括第三图像传感器750、机械手730和处理器740。处理器740通过有线或者无线的方式耦接所述机械手730和所述第三图像传感器750。As shown in FIGS. 21 and 22, an embodiment of the present invention further provides a plug-in device 700, which includes a third image sensor 750, a robot 730, and a processor 740. The processor 740 is coupled to the robot 730 and the third image sensor 750 in a wired or wireless manner.
第三图像传感器可以包括:照相机、摄像机、扫描仪或其他带有相关功能的设备(手机、电脑等)等等。The third image sensor may include a camera, a video camera, a scanner, or other device with a related function (mobile phone, computer, etc.) and the like.
预先对第三图像传感器750、以及第三图像传感器750和机械手730之间进行标定。The third image sensor 750, and the third image sensor 750 and the robot 730 are calibrated in advance.
第三图像传感器750在工作时,获取包括引脚和目标插孔的第三当前图像,并将第三当前图像发送给处理器740。The third image sensor 750, when in operation, acquires a third current image including the pin and the target jack, and transmits the third current image to the processor 740.
第三图像传感器可以设置在能够获取包括引脚和目标插孔的图像的任意位置;比如:设置在PCB板周边某一位置或者设置在机械手上;如图22所示,当第三图像传感器750设置在PCB板900周边,第三图像传感器750相对目标插孔910位姿固定,相对运动的引脚810位姿移动;如图21所示,当第三图像传感器750设置在机械手730上,随着机械手730的移动,目标插孔910的位姿在不断相对变化,而引脚的位姿相对固定,优选将第三图像传感器750设置在PCB板900周边,后面实施例会有进一步详细的说明。The third image sensor may be disposed at any position from which an image including the pin and the target jack can be acquired, for example, at a position around the PCB board or on the robot. As shown in FIG. 22, when the third image sensor 750 is disposed around the PCB board 900, the third image sensor 750 is fixed in pose relative to the target jack 910, while the pose of the moving pin 810 changes relative to it. As shown in FIG. 21, when the third image sensor 750 is disposed on the robot 730, the pose of the target jack 910 changes continually as the robot 730 moves, while the pose of the pin is relatively fixed. The third image sensor 750 is preferably disposed around the PCB board 900, as described in further detail in later embodiments.
所述机械手730在工作时,将所述各关节的当前信息发送给所述处理器740;基于所述处理器740的控制移动所述当前运动量;基于所述处理器740的控制带动所述引脚810插入所述目标插孔910。When operating, the robot 730 sends the current information of each joint to the processor 740; moves by the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin 810 to be inserted into the target jack 910.
所述处理器740工作(即执行所述计算机程序)时实现上述各个插机方法实施例中的步骤,例如图11所示的步骤S410至S460。When the processor 740 operates (ie, executes the computer program), the steps in the embodiments of the various insertion methods described above are implemented, such as steps S410 through S460 shown in FIG. 11.
在一些实施例中,所述插机设备的处理器740在工作时还包括实现上面实施例所述的预先经过训练的NN模型获取方法、预先经过训练的第四CNN模型获取方法中的各个步骤。此外上述各个方法也可以通过插机设备以外的其它设备的处理器执行。In some embodiments, the processor 740 of the plug-in device, when operating, also implements the steps of the pre-trained NN model acquisition method and the pre-trained fourth CNN model acquisition method described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the plug-in device.
实施例五、Embodiment 5
图14为本发明提供的插机方法的实施例的第九流程图。图15为本发明提供的插机方法的实施例的第十流程图。Figure 14 is a ninth flow chart of an embodiment of the method of plugging in the present invention. Figure 15 is a tenth flow chart of an embodiment of the method of plugging in the present invention.
如图14所示,在一些实施例中,所述插机方法包括:As shown in FIG. 14, in some embodiments, the plug-in method includes:
S510根据包括引脚和目标插孔的第三当前图像,基于预先经过训练的第四CNN模型,获取引脚在第一坐标系下的第二位姿或第二当前位姿,以及目标插孔在所述第一坐标系下的第三位姿或第三当前位姿;S510: according to a third current image including the pin and the target jack, acquiring, based on the pre-trained fourth CNN model, a second pose or second current pose of the pin in the first coordinate system, and a third pose or third current pose of the target jack in the first coordinate system;
如图15、21、22所示,在一些实施例中,机械手730从取料位抓取电子元件800后,为使得第三图像传感器750能同时获取包括引脚810和目标插孔910的第三当前图像,在S410之前还可以包括步骤S470控制机械手带动引脚移动到目标插孔附近。As shown in FIGS. 15, 21 and 22, in some embodiments, after the robot 730 grabs the electronic component 800 from the pick-up position, in order that the third image sensor 750 can simultaneously acquire a third current image including the pin 810 and the target jack 910, the method may further include, before S410, step S470 of controlling the robot to drive the pin to move to the vicinity of the target jack.
第三图像传感器750可以设置在机械手730上,也可以设置在PCB板900的周边;优选将第三图像传感器750设置在PCB板900周边,这样可以方便获取到包括引脚的第三当前图像。The third image sensor 750 may be disposed on the robot 730 or at the periphery of the PCB board 900; preferably, the third image sensor 750 is disposed around the PCB board 900, which makes it easy to acquire the third current image including the pin.
S520根据机械手各关节的当前信息,获取所述机械手在所述第一坐标系下的第一当前位姿;S520: acquiring, according to current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
S530根据所述第一当前位姿、所述第二位姿或第二当前位姿、以及所述第三位姿或第三当前位姿计算机械手需实施的当前运动量;S540判断机械手是否满足插机条件;S550若满足,控制所述机械手带动所述引脚插入所述目标插孔;S560若不满足,控制机械手实施所述当前运动量。S530: calculating the current amount of motion the robot must perform according to the first current pose, the second pose or second current pose, and the third pose or third current pose; S540: determining whether the robot satisfies the insertion condition; S550: if satisfied, controlling the robot to drive the pin to be inserted into the target jack; S560: if not satisfied, controlling the robot to perform the current amount of motion.
有关第四CNN模型的相关描述参见实施例四,在此不再重复赘述。For the related description of the fourth CNN model, refer to the fourth embodiment; it is not repeated here.
采用上面的插机方法,通过基于机器学习的方法进行的插机,能够提高各种复杂的环境下插机的准确率;另外,可以在一些情况下,可以减少插机运动过程中的步数,提高了工作效率。With the above insertion method, performing insertion via a machine-learning-based approach can improve the accuracy of insertion in a variety of complex environments; in addition, in some cases it can reduce the number of steps in the insertion motion, improving work efficiency.
在一些实施例中,根据所述第一当前位姿、所述第二位姿或第二当前位姿、以及所述第三位姿或第三当前位姿计算机械手需实施的当前运动量可以通过传统的方法实现,也可以通过机器学习的方法实现,也可以通过传统的视觉伺服的方法实现,优选通过机器学习的方法实现,因为通过机器学习的方法可以提高当前运动量计算的准确率和效率。In some embodiments, calculating the current amount of motion the robot must perform according to the first current pose, the second pose or second current pose, and the third pose or third current pose may be implemented by a traditional method, by a machine-learning method, or by a traditional visual servoing method; the machine-learning method is preferred, because it can improve the accuracy and efficiency of the current motion calculation.
机器学习的方法是指,基于预先经过训练的NN模型实现。有关NN模型的相关描述参见具体实施例一,在此不再重复赘述。The machine learning method is based on a pre-trained NN model. For a description of the NN model, refer to the specific embodiment 1, and the detailed description is not repeated here.
如图21、22所示,本发明实施例还提供一种插机设备700,所述插机设备700包括第三图像传感器750、机械手730和处理器740。处理器740通过有线或者无线的方式藕接所述机械手730和所述第三图像传感器750。As shown in FIG. 21 and FIG. 22, an embodiment of the present invention further provides a plug-in device 700. The plug-in device 700 includes a third image sensor 750, a robot 730, and a processor 740. The processor 740 interfaces the robot 730 and the third image sensor 750 by wire or wirelessly.
预先对第三图像传感器750、以及第三图像传感器750和机械手730之间进行标定。The third image sensor 750, and the third image sensor 750 and the robot 730 are calibrated in advance.
第三图像传感器750在工作时,获取包括引脚和目标插孔的第三当前图像,并将第三当前图像发送给处理器740。The third image sensor 750, when in operation, acquires a third current image including the pin and the target jack, and transmits the third current image to the processor 740.
所述机械手730在工作时,将所述各关节的当前信息发送给所述处理器740;基于所述处理器740的控制移动所述当前运动量;基于所述处理器740的控制带动所述引脚810插入所述目标插孔910。When operating, the robot 730 sends the current information of each joint to the processor 740; moves by the current amount of motion under the control of the processor 740; and, under the control of the processor 740, drives the pin 810 to be inserted into the target jack 910.
所述处理器740工作(即执行所述计算机程序)时实现上述各个插机方法实施例中的步骤,例如图14所示的步骤S510至S560。The steps in the embodiments of the various plug-in methods described above are implemented when the processor 740 is operating (ie, executing the computer program), such as steps S510 through S560 shown in FIG.
在一些实施例中,所述插机设备的处理器740在工作时还包括实现上面实施例所述的预先经过训练的NN模型获取方法、预先经过训练的第四CNN模型获取方法中的各个步骤。此外上述各个方法也可以通过插机设备以外的其它设备的处理器执行。In some embodiments, the processor 740 of the plug-in device, when operating, also implements the steps of the pre-trained NN model acquisition method and the pre-trained fourth CNN model acquisition method described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the plug-in device.
有关插机方法、插机设备省略的其它相关描述参见前面其它实施例,在此不再重复赘述。For other related descriptions about the plug-in method and the plug-in device, refer to other embodiments, and the details are not repeated here.
实施例六、Embodiment 6
如图16所示,在一些实施例中,所述插机方法包括:As shown in FIG. 16, in some embodiments, the plug-in method includes:
S610根据获取的所述机械手各关节的当前信息,获取所述机械手在所述第一坐标系下的第一当前位姿;S610: acquiring, according to the obtained current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
如图17、21、22所示,在一些实施例中,机械手730从取料位抓取电子元件800后,为使得第三图像传感器750能同时获取包括引脚810和目标插孔910的第三当前图像,在S410之前还可以包括步骤S670控制机械手带动引脚移动到目标插孔附近。As shown in FIGS. 17, 21 and 22, in some embodiments, after the robot 730 grabs the electronic component 800 from the pick-up position, in order that the third image sensor 750 can simultaneously acquire a third current image including the pin 810 and the target jack 910, the method may further include, before S410, step S670 of controlling the robot to drive the pin to move to the vicinity of the target jack.
第三图像传感器750可以设置在机械手730上,也可以设置在PCB板900的周边;优选将第三图像传感器750设置在PCB板900周边,这样可以方便获取到包括引脚的第三当前图像。The third image sensor 750 may be disposed on the robot 730 or at the periphery of the PCB board 900; preferably, the third image sensor 750 is disposed around the PCB board 900, which makes it easy to acquire the third current image including the pin.
S620获取包括引脚和目标插孔的第三当前图像;The S620 acquires a third current image including the pin and the target jack;
S630根据所述第三当前图像、所述第一当前位姿,基于预先经过训练的第五CNN模型,计算机械手需实施的当前运动量;S640判断机械手是否满足插机条件;若满足,S650控制所述机械手带动所述引脚插入所述目标插孔;若不满足,S660控制机械手实施所述当前运动量。S630: calculating, based on the pre-trained fifth CNN model, the current amount of motion the robot must perform according to the third current image and the first current pose; S640: determining whether the robot satisfies the insertion condition; if satisfied, S650: controlling the robot to drive the pin to be inserted into the target jack; if not satisfied, S660: controlling the robot to perform the current amount of motion.
For other related descriptions of the fifth CNN model, refer to the description of the first CNN model in the above embodiment; they are not repeated here.
With the above insertion method, performing insertion through a machine-learning-based approach can improve the insertion accuracy in a variety of complex environments; in addition, in some cases it can reduce the number of steps in the insertion motion, improving working efficiency.
As shown in FIG. 18, further, in some embodiments, the pre-trained fifth CNN model is obtained as follows:
S631: acquiring an initialized fifth CNN model, the fifth CNN model taking the third current image and the first current pose as input, and outputting the current amount of motion to be performed by the robot.
S632: acquiring training data and label data.
The insertion process may be run multiple times (for example, 1000 times) based on traditional visual servoing to obtain sufficient training data for training the initialized model.
In traditional visual servoing, the robot typically reaches the pose for finally inserting the component in a preset number of steps (for example, 3 steps). Specifically, the pose of the robot at each step during visual servoing, together with the image including the pin and the target jack corresponding to that step, may be used as training data.
Based on the pose of the robot at each step during visual servoing and the pose of the robot when the component is finally inserted, the amount of motion required for the robot to move from the pose at each step to the insertion pose is calculated, and this amount of motion is used as the annotation for model training to form the label data.
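The label computation described above reduces to a pose difference per recorded step. A minimal sketch follows, with poses simplified to (x, y, z, yaw) tuples; a real implementation would compose full 6-DoF transforms rather than subtract components.

```python
# Sketch of label-data generation: the motion needed to move from each
# visited pose to the final insertion pose is the training annotation.
# Poses are simplified to (x, y, z, yaw) tuples for illustration.

def motion_label(step_pose, insertion_pose):
    """Amount of motion needed to move from step_pose to insertion_pose."""
    return tuple(t - s for s, t in zip(step_pose, insertion_pose))

def build_labels(trajectory, insertion_pose):
    """One label per visual-servoing step along a recorded trajectory."""
    return [motion_label(p, insertion_pose) for p in trajectory]
```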
S633: training the initialized fifth CNN model based on the training data and the label data to obtain the pre-trained fifth CNN model.
For the structure and training method of the fifth CNN model, refer to the first CNN model of the above embodiment; details are not described here again.
FIG. 25 is a connection block diagram of a model according to an embodiment of the present invention.
As shown in FIG. 25, when a single fifth CNN model 14 is used, the third current image and the first current pose are input into the fifth CNN model 14; an intermediate result is calculated from the third current image and then combined with the first current pose to obtain the current amount of motion.
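The data flow of FIG. 25 — image in, intermediate result out, then fusion with the pose — can be sketched as follows; `conv_backbone` and `fusion_head` are hypothetical stand-ins for the two stages of the fifth CNN model 14, not disclosed layer structures.

```python
# Data-flow sketch of FIG. 25: the model first computes an intermediate
# result from the third current image alone, then fuses it with the first
# current pose to produce the current amount of motion.

def fifth_cnn_forward(image, pose, conv_backbone, fusion_head):
    intermediate = conv_backbone(image)     # image-only stage (intermediate result)
    return fusion_head(intermediate, pose)  # combine with the robot pose
```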
As shown in FIGS. 21 and 22, an embodiment of the present invention further provides a plug-in device 700. The plug-in device 700 includes a third image sensor 750, a robot 730, and a processor 740. The processor 740 is coupled to the robot 730 and the third image sensor 750 in a wired or wireless manner.
The third image sensor 750 itself, and the relationship between the third image sensor 750 and the robot 730, are calibrated in advance.
In operation, the third image sensor 750 acquires a third current image including the pin and the target jack, and sends the third current image to the processor 740.
In operation, the robot 730 sends the current information of each joint to the processor 740, moves by the current amount of motion under the control of the processor 740, and drives the pin 810 into the target jack 910 under the control of the processor 740.
When the processor 740 operates (that is, executes the computer program), the steps in the foregoing insertion method embodiments are implemented, for example, steps S610 to S660 shown in FIG. 16.
In some embodiments, the processor 740 of the plug-in device, in operation, further implements the steps of the method for obtaining the pre-trained fifth CNN model described in the above embodiment. In addition, each of the above methods may also be executed by a processor of a device other than the plug-in device.
For other related descriptions of the plug-in method and plug-in device omitted here, refer to the other embodiments above; they are not repeated here.
Embodiment 7
As shown in FIG. 28, an embodiment of the present invention further provides an insertion method, the insertion method including:
S110': acquiring second coordinates of the pin according to an acquired first image including the pin.
As shown in FIGS. 19 and 20, after the robot 730 picks up the electronic component 800 at the pick-up position under the control of the processor 740, it moves the electronic component 800 into the field of view of the first image sensor 710, so that the first image sensor 710 captures a first image including the pin 810 of the electronic component 800. The first image typically does not include the PCB board background, because a complex background would make pin recognition difficult.
The processor acquires the first image captured and sent by the first image sensor, and extracts the second coordinates of the pin. The second coordinates of the pin may be the second coordinates of the insertion end of the pin (the end inserted into the target jack) or the second coordinates of the entire pin; the second coordinates of the insertion end are preferred.
In some embodiments, the first image captured by the first image sensor may, in addition to being used for acquiring the relative pose of the pin, also be used to check whether the electronic component is defective: the first image is analyzed and compared with a pre-stored image of a non-defective component to determine whether the electronic component has a defect. If there is no defect, the following steps may proceed; if there is a defect, the robot may be controlled to place the electronic component at a recycling position and then return to the pick-up position to pick up a new electronic component.
S120': acquiring a first current pose of the robot in the first coordinate system according to the acquired current information of each joint of the robot.
The first coordinate system may be the robot coordinate system, the first image sensor coordinate system, the second image sensor coordinate system, or any other specified coordinate system. Below, this embodiment takes the robot coordinate system as the first coordinate system for further detailed description. In some embodiments, the origin of the robot coordinate system is typically set at the center of the base of the robot.
The first current pose of the robot may be the first current pose of the center of the flange connected to the end joint of the robot, the first current pose of the center of the end effector of the robot, and so on.
According to the current information of each joint sent by the joints of the robot to the processor, which includes the amount of motion of each joint, and combined with information such as the type and size of each joint, the current pose of the robot in the robot coordinate system can be obtained through the forward kinematics formula of the robot.
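As an illustration of the forward-kinematics step, a planar two-link arm is enough to show the principle (joint readings in, Cartesian pose out); the 2-DoF structure and unit link lengths are assumptions for the sketch, not the geometry of the robot 730.

```python
import math

# Forward-kinematics sketch for a planar two-link arm: joint angles plus
# link lengths give the end pose in the robot (base) coordinate system.
# The real device would chain a transform per joint, using each joint's
# type and size, but the principle is the same.

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    phi = theta1 + theta2            # end-effector orientation
    return x, y, phi
```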
As shown in FIG. 29, in some embodiments, after the second coordinates are acquired and before the first current pose, or the third coordinates or third current coordinates, are acquired, a step S180' of controlling the robot to move the pin to the vicinity of the target jack may be performed, which saves subsequent insertion working time. In some embodiments, when the pin is moved to the vicinity of the first target jack on a given PCB board, the coordinates or pose of a mark point on the PCB board may first be detected. A mark point is a solid circular or rectangular point on the board surrounded by a blank area; combined with the PCB board layout, the approximate position of the target jack can be deduced, and moving the electronic component to this approximate position places the pin in the vicinity of the target jack. Thereafter, in some embodiments, since the position of the first target jack is already known, it is no longer necessary to acquire a mark point image; instead, the position of the first target jack may be used as a reference, combined with the PCB board layout, to deduce the approximate position of the next target jack, and the robot is controlled to move to this position so that the pin moves to the vicinity of the target jack.
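The mark-point shortcut amounts to adding a layout offset to a detected reference position. A minimal sketch follows, with a hypothetical layout table and planar (x, y) coordinates; the jack name and numeric values are invented for illustration.

```python
# Sketch of the coarse positioning step: the approximate position of a
# target jack is a detected reference position (a mark point, or the
# already-known first jack) plus the offset taken from the PCB layout.

def approximate_jack_position(reference_xy, layout_offset_xy):
    rx, ry = reference_xy
    ox, oy = layout_offset_xy
    return (rx + ox, ry + oy)

# Hypothetical example: mark point detected at (120.0, 45.0) mm, and the
# layout table says jack "J3" sits (15.5, -4.0) mm from the mark point.
layout = {"J3": (15.5, -4.0)}
coarse = approximate_jack_position((120.0, 45.0), layout["J3"])
```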
S130': acquiring third coordinates or third current coordinates of the target jack in the first coordinate system.
Based on the acquired second current image including the target jack, captured and sent by the second image sensor, the third coordinates or third current coordinates of the target jack are extracted and calculated.
As shown in FIG. 19, according to the above embodiment, when the second image sensor 720 is disposed on the end joint of the robot, the target jack moves relative to the second image sensor; the second current image captured and sent by the second image sensor 720 is acquired, and the third current coordinates of the target jack are acquired from the second current image.
As shown in FIG. 20, when the second image sensor 720 is disposed at a position around the PCB board, its position is fixed relative to the PCB board 900; therefore, the second image only needs to be acquired once, and the third coordinates of the target jack are acquired from the second image.
Preferably, the second image sensor 720 is disposed on the robot. Since the second image sensor 720 moves together with the robot 730, it can acquire an image including the target jack from a position directly above, or close to directly above, the target jack 910, thereby improving the accuracy of target jack coordinate extraction and better improving the accuracy of subsequent insertion.
S140': calculating, according to the first current pose, the second coordinates, and the third coordinates or third current coordinates, based on a pre-trained neural network (NN) model, a current amount of motion to be performed by the robot; S150': determining whether the robot satisfies the insertion condition; if satisfied, S160': controlling the robot to drive the pin into the target jack; if not satisfied, S170': controlling the robot to perform the current amount of motion.
Determining whether the robot satisfies the insertion condition (that is, whether the pin is sufficiently close to the target jack) is typically done by checking that the increments of the current amount of motion over several consecutive steps (for example, 2-3 steps) are all small, for example smaller than a certain threshold; in that case the pin is considered sufficiently close to the target jack.
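The S150' check can be expressed as a small predicate over the magnitudes of recent motion commands; the window and threshold below are illustrative values, not ones specified by the embodiment.

```python
# Sketch of the insertion condition: the pin is deemed close enough when
# the increments of the last few motion commands are all below a threshold.

def insertion_condition(motion_magnitudes, window=3, threshold=0.02):
    if len(motion_magnitudes) < window:
        return False                 # not enough history yet
    return all(m < threshold for m in motion_magnitudes[-window:])
```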
If the condition is satisfied, the robot is controlled to drive the pin into the target jack, thereby completing the insertion action for that electronic component; the robot then moves, under the control of the processor, to the pick-up position of the next electronic component, grips the next electronic component, and the above steps are repeated until the insertion actions for the electronic components corresponding to all target jacks on the PCB board are completed.
If the condition is not satisfied, the robot is controlled to perform the corresponding current amount of motion, and after the robot has finished performing it, the above steps are repeated.
The current amount of motion to be performed by the robot refers to the amount of motion (translation plus rotation) to be performed by the end effector or end axis of the robot. Based on the calculated current amount of motion, the motion information to be performed by each joint of the robot can be obtained through the inverse kinematics formula of the robot; commands for the individual pieces of motion information are then sent to the motor controller of each joint, thereby controlling the robot to move by the corresponding amount of motion.
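For the inverse-kinematics step, the closed-form solution for a planar two-link arm illustrates the principle (desired end position in, joint angles out); as in the forward-kinematics sketch, the 2-DoF geometry and unit link lengths are assumptions, not the geometry of the actual robot.

```python
import math

# Inverse-kinematics sketch for a planar two-link arm (elbow-down branch):
# given a desired end position, recover the joint angles whose commands
# would then be sent to each joint's motor controller.

def inverse_kinematics(x, y, l1=1.0, l2=1.0):
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp numeric noise at the workspace edge
    theta2 = math.acos(c2)                # elbow-down solution
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```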
In some embodiments, it may be difficult to complete the insertion work for a large PCB board in one pass; therefore, a large PCB board is usually virtually divided into a plurality of small modules, and the insertion for the modules is completed over several passes, finally completing the insertion for the entire PCB board. In this case, the insertion for one module is first completed according to the insertion method of this embodiment, and the steps of the insertion method are then repeated to complete the insertion for the other modules in turn, until the entire PCB board is finished. The PCB board is then removed from the working position, and the next PCB board is moved to the insertion working position to repeat the steps of the insertion method of the embodiment of the present invention.
For other related descriptions of the NN model, refer to the above embodiments; they are not repeated here.
With the above insertion method, performing insertion through a machine-learning-based approach can improve the insertion accuracy in a variety of complex environments; in addition, in some cases it can reduce the number of steps in the insertion motion, improving working efficiency.
As shown in FIG. 30, in some embodiments, the pre-trained NN model may be obtained as follows:
S141': acquiring an initialized NN model, the NN model taking as input the first current pose, the second coordinates, and the third coordinates or third current coordinates in the first coordinate system, and outputting the current amount of motion to be performed by the robot.
S142': acquiring training data and label data.
The insertion process may be run multiple times (for example, 1000 times) based on traditional visual servoing to obtain sufficient training data for training the initialized NN model.
In traditional visual servoing, the robot typically reaches the pose for finally inserting the component in a preset number of steps (for example, 3 steps). Specifically, the pose of the robot at each step during visual servoing, together with the coordinates of the pin and the coordinates of the target jack corresponding to that step, may be used as training data for training the NN model.
Based on the pose of the robot at each step during visual servoing and the pose of the robot when the component is finally inserted, the amount of motion required for the robot to move from the pose at each step to the insertion pose is calculated, and this amount of motion is used as the annotation for NN model training to form the label data.
S143': training the initialized NN model based on the training data and the label data to obtain the pre-trained NN model.
For other related descriptions of the method for obtaining the pre-trained NN model, refer to the above embodiments; they are not repeated here.
FIG. 31 is a flowchart of an embodiment of a method for obtaining a pre-trained sixth CNN model provided by the present invention.
In some embodiments, acquiring the second coordinates according to the acquired first image, as described in S110' of the above embodiment, may be implemented by a traditional vision method or by a machine learning method.
The traditional vision method refers to binarizing the first image, identifying the contour of the pin from the first image, and extracting the second coordinates of the pin according to the contour.
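The traditional binarize-then-locate method can be sketched as follows; for brevity, the centroid of the thresholded foreground stands in for the contour extraction and insertion-end selection that a real pipeline would perform.

```python
# Sketch of traditional-vision pin extraction: threshold the grayscale
# image (binarization), collect foreground pixels, and take their centroid
# as the pin coordinate.

def pin_coordinate(gray_image, threshold=128):
    foreground = [(x, y)
                  for y, row in enumerate(gray_image)
                  for x, value in enumerate(row)
                  if value >= threshold]        # binarization
    if not foreground:
        return None                             # no pin found in the image
    n = len(foreground)
    cx = sum(x for x, _ in foreground) / n
    cy = sum(y for _, y in foreground) / n
    return cx, cy
```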
Implementation by a machine learning method refers to inputting the first image into the pre-trained sixth convolutional neural network (CNN) model and directly outputting the second coordinates of the pin.
For other related descriptions of the CNN model, refer to the above embodiments; they are not repeated here.
As shown in FIG. 31, in some embodiments, the pre-trained sixth CNN model is obtained as follows:
S111': acquiring an initialized sixth CNN model, the sixth CNN model taking as input the first image including the pin and/or the second image or second current image including the target jack, and outputting the second coordinates of the pin in the first image and/or the third coordinates or third current coordinates of the target jack.
In this way, the sixth CNN model may be used only for acquiring the second coordinates from the first image; or only for acquiring the third coordinates or third current coordinates from the second image or second current image; or both for acquiring the second coordinates from the first image and for acquiring the third coordinates or third current coordinates from the second image or second current image.
For the initialization of the sixth CNN model, refer to the initialization of the NN model; details are not repeated here.
S112': acquiring training data and label data.
A plurality of images including the target jack are captured while the insertion machine is running or at rest; approximately 1000 captures are needed to obtain sufficient training data for training the model.
The label data may be annotated manually or automatically. An automatic method may use, as the training annotations, the coordinates of the pin extracted from the images including the pin during insertion trajectory planning based on traditional vision methods.
S113': training the initialized sixth CNN model based on the training data and the label data to obtain the pre-trained sixth CNN model.
For a description of the specific training process of the sixth CNN model, refer to the training process of the NN model in the above embodiment; details are not repeated here.
FIG. 32 is a flowchart of an embodiment of a method for obtaining a pre-trained seventh CNN model provided by the present invention.
In some embodiments, acquiring the third coordinates or third current coordinates according to the second image or second current image, as described in S130' of the above embodiment, may be implemented by a traditional vision method or by a machine learning method.
The traditional vision method refers to binarizing the image, identifying the contour of the target jack from the image, and identifying the third coordinates or third current coordinates of the target jack according to the contour.
Implementation by a machine learning method refers to inputting the second current image into the pre-trained seventh CNN model and directly outputting the third coordinates or third current coordinates.
Specifically, for the method of obtaining the trained seventh CNN model based on the training data and label data, refer to the sixth CNN model and the NN model; details are not repeated here.
As shown in FIGS. 19 and 20, in some embodiments, the present invention further provides a plug-in device. The plug-in device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown). The processor 740 is coupled to the other units described above in a wired or wireless manner.
The wireless manner may include, but is not limited to: 3G/4G, WIFI, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connection manners now known or developed in the future.
The first image sensor and the second image sensor may include: a camera, a video camera, a scanner, or another device with the relevant functions (a mobile phone, a computer, etc.), and so on.
In operation, the first image sensor 710 captures a first image including the pin and sends the first image to the processor 740.
The first image sensor 710 is typically disposed at a position between the PCB board insertion working position and the electronic component pick-up position. After the robot picks up the electronic component at the pick-up position under the control of the processor, it moves the electronic component into the field of view of the first image sensor, so that a first image including the pin of the electronic component is captured by the first image sensor. The first image typically does not include the PCB board background, because a complex background would make pin recognition difficult.
In operation, the second image sensor 720 captures a second image or second current image including the target jack, and sends the second image or second current image to the processor 740.
The second image sensor may be disposed at any position from which an image including the target jack can be acquired, for example, at a position around the PCB board or on the robot. As shown in FIG. 20, when the second image sensor 720 is disposed around the PCB board 900, its position is fixed relative to the target jack 910, so the second image only needs to be acquired once. As shown in FIG. 19, when the second image sensor 720 is disposed on the robot 730, the pose of the target jack 910 relative to the sensor changes continuously as the robot 730 moves, so the second current image needs to be reacquired after each movement of the robot 730. The second image sensor is preferably disposed on the robot, as described in further detail in the following embodiments.
In operation, the robot 730 sends the current information of each joint to the processor 740, moves by the current amount of motion under the control of the processor 740, and drives the pin into the target jack under the control of the processor 740.
When the processor 740 operates (that is, executes the computer program stored in the memory), the steps in the foregoing insertion method embodiments are implemented, for example, steps S110' to S170' shown in FIG. 28.
In some embodiments, the processor 740 of the plug-in device, in operation, further implements the steps of the method for obtaining the pre-trained NN model, the method for obtaining the pre-trained sixth CNN model, and/or the method for obtaining the pre-trained seventh CNN model described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the plug-in device.
For other related descriptions of the plug-in device, refer to the above embodiments; they are not repeated here.
Embodiment 8
As shown in FIG. 33, in some embodiments, the present invention provides an insertion method, the insertion method including:
S210': acquiring second coordinates of the pin according to an acquired first image including the pin, based on the pre-trained sixth CNN model.
S220': acquiring a first current pose of the robot in the first coordinate system according to the current information of each joint of the robot.
As shown in FIG. 34, in some embodiments, after the second coordinates are acquired and before the first current pose, or the third coordinates or third current coordinates, are acquired, a step S280 of controlling the robot to move the pin to the vicinity of the target jack may be performed, which saves subsequent insertion working time. In some embodiments, when the pin is moved to the vicinity of the first target jack on a given PCB board, the coordinates or pose of a mark point on the PCB board may first be detected. A mark point is a solid circular or rectangular point on the board surrounded by a blank area; combined with the PCB board layout, the approximate position of the target jack can be deduced, and moving the electronic component to this approximate position places the pin in the vicinity of the target jack. Thereafter, in some embodiments, since the position of the first target jack is already known, it is no longer necessary to acquire a mark point image; instead, the position of the first target jack may be used as a reference, combined with the PCB board layout, to deduce the approximate position of the next target jack, and the robot is controlled to move to this position so that the pin moves to the vicinity of the target jack.
S230': acquiring third coordinates or third current coordinates of the target jack according to an acquired second image or second current image including the target jack, based on the pre-trained sixth CNN model or the pre-trained seventh CNN model.
S240': calculating, according to the first current pose, the second coordinates, and the third coordinates or third current coordinates, a current amount of motion to be performed by the robot; S250': determining whether the robot satisfies the insertion condition; if satisfied, S260': controlling the robot to drive the pin into the target jack; if not satisfied, S270': controlling the robot to perform the current amount of motion.
With the above insertion method, performing insertion through a machine-learning-based approach can improve the insertion accuracy in a variety of complex environments; in addition, in some cases it can reduce the number of steps in the insertion motion, improving working efficiency.
In some embodiments, calculating the current amount of motion to be performed by the robot according to the first current pose, the second coordinates, and the third coordinates or third current coordinates may be implemented by a traditional visual servoing method or by a machine learning method.
The traditional approach is as follows: acquire the current pose of the target jack and the current pose of the pin; calculate, according to the pose of the target jack and the current pose of the pin, the pose the pin should have after the motion; calculate, according to the calibration result between the pin and the robot, the pose the robot should have after the motion; calculate, according to the current pose of the robot and the pose after the motion, the current amount of motion (translation plus rotation) to be performed by the robot; and control the robot to perform the current amount of motion. These steps are then repeated until the current amount of motion is very small for several consecutive iterations, or a preset number of steps has been moved, at which point the insertion condition is determined to be satisfied and the robot is controlled to drive the pin to perform the insertion.
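The traditional computation chain above can be sketched with poses simplified to planar (x, y, yaw) triples and the pin-to-robot calibration reduced to a fixed offset; a real system would use homogeneous transforms throughout.

```python
# Sketch of the traditional (non-learned) chain: jack pose + pin pose ->
# required pin pose -> required robot pose (via calibration) -> robot motion.

def required_robot_motion(jack_pose, pin_pose, pin_to_robot_offset):
    # Pose the pin must reach: aligned with the target jack.
    pin_target = jack_pose

    # Robot pose corresponding to a given pin pose, via the calibration offset.
    def robot_pose_for(pin):
        return tuple(p + o for p, o in zip(pin, pin_to_robot_offset))

    robot_now = robot_pose_for(pin_pose)
    robot_target = robot_pose_for(pin_target)
    # Current amount of motion = translation + rotation difference.
    return tuple(t - n for n, t in zip(robot_now, robot_target))
```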
The machine learning method means implementation based on a pre-trained NN model.
For descriptions of the sixth CNN model, the seventh CNN model, and the NN model, refer to the descriptions in the above embodiments; they are not repeated here.
As shown in Figures 19 and 20, in some embodiments, the insertion device 700 includes a first image sensor 710, a second image sensor 720, a robot 730, a processor 740, and a memory (not shown). The processor 740 is coupled to each of the other units by wire or wirelessly.
In operation, the first image sensor 710 acquires a first image including the pin and sends the first image to the processor 740.
In operation, the second image sensor 720 acquires a second image or a second current image including the target jack and sends it to the processor 740.
In operation, the robot 730 sends the current information of each of its joints to the processor 740, moves by the current amount of motion under the control of the processor 740, and, under the control of the processor 740, drives the pin into the target jack.
When the processor 740 operates (i.e., executes a computer program stored in the memory), it implements the steps in each of the insertion method embodiments described above, for example steps S210' to S270' shown in Figure 34.
In some embodiments, when operating, the processor 740 of the insertion device also implements the steps of the pre-trained NN model acquisition method, the pre-trained sixth CNN model acquisition method, and/or the pre-trained seventh CNN model acquisition method described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the insertion device.
For other related descriptions of the insertion method and the insertion device that are omitted here, refer to the other embodiments above; they are not repeated here.
Embodiment 9
As shown in Figure 35, in some embodiments, the present invention further provides an insertion method, the insertion method including:
S310': acquiring, according to an acquired third current image including the pin and the target jack, a second coordinate or a second current coordinate of the pin in a first coordinate system, and a third coordinate or a third current coordinate of the target jack in the first coordinate system;
As shown in Figures 36, 21, and 22, in some embodiments, after the robot 730 grabs the electronic component 800 from the pick-up position, so that the third image sensor 750 can simultaneously acquire a third current image including both the pin 810 and the target jack 910, a step S370' of controlling the robot to drive the pin to move to the vicinity of the target jack may be included before S310'.
In some embodiments, before inserting a pin into the first target jack of the PCB board, the robot is moved to a mark point on the PCB board. The mark point is a solid circular or rectangular dot with a blank area around it. A second image sensor fixed at the end of the robot acquires a mark-point image, and the mark-point position is detected. Based on the detected mark position and the PCB board layout, the approximate position of the target jack is deduced, and the electronic component is moved to this position, so that the pin is then located near the target jack. Afterwards, since the position of the first target jack is already known, it is no longer necessary to acquire the mark-point image: taking the position of the first target jack as a reference and combining it with the PCB board layout, the approximate position coordinates of each subsequent target jack can be deduced, and the robot is controlled to move to those coordinates so that the pin moves to the vicinity of the target jack.
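The mark-point calculation above amounts to adding layout offsets to a detected reference position. A minimal sketch follows, with a hypothetical layout table; the jack names and offsets are invented for illustration:

```python
import numpy as np

# Hypothetical PCB layout: jack offsets (mm) relative to the mark point.
LAYOUT = {"J1": (12.0, 5.0), "J2": (12.0, 9.0), "J3": (30.5, 5.0)}

def jack_position(mark_xy, jack_id, layout=LAYOUT):
    """Approximate board-frame position of a jack, deduced from the
    detected mark-point position plus the layout offset."""
    return np.asarray(mark_xy) + np.asarray(layout[jack_id])

def next_jack_position(first_jack_xy, first_id, next_id, layout=LAYOUT):
    """Once the first jack's true position is known, later jacks are
    deduced from it and the layout, without re-imaging the mark point."""
    delta = np.asarray(layout[next_id]) - np.asarray(layout[first_id])
    return np.asarray(first_jack_xy) + delta
```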
As shown in Figure 21, as described in the above embodiments, the third image sensor 750 may be arranged on the end joint of the robot (it may alternatively be arranged on another joint of the robot, not shown). In this arrangement, for the third current image acquired and sent by the third image sensor 750, since the third image sensor 750 moves relative to the target jack 910 but is fixed relative to the pin 810, the second coordinate of the pin and the third current pose of the target jack can be acquired based on the third current image.
As shown in Figure 22, when the third image sensor 750 is arranged at a position around the PCB board 900, since the third image sensor 750 is fixed relative to the PCB board 900 and moves relative to the pin, the second current coordinate of the pin and the third pose of the target jack can be acquired based on the third current image.
The third image sensor 750 is preferably arranged around the periphery of the PCB board 900, which makes it convenient to acquire a third current image including the pin.
S320': acquiring a first current pose of the robot in the first coordinate system according to the current information of each joint of the robot;
S330': calculating, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, the current amount of motion to be performed by the robot based on a pre-trained NN model; S340': determining whether the robot satisfies the insertion condition; S350': if satisfied, controlling the robot to drive the pin into the target jack; S360': if not satisfied, controlling the robot to perform the current amount of motion.
The input of this pre-trained NN model includes the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate. The difference is that the input in Embodiment 1 includes the second coordinate, whereas in this embodiment the input may include either the second coordinate or the second current coordinate. Therefore, the pre-trained NN model in this embodiment and the pre-trained NN model described in Embodiment 1 may use the same model structure and training method; only the input data and training data differ slightly as described above.
Therefore, combining Embodiment 1 and this embodiment, the pre-trained NN model acquisition method includes:
S210': acquiring an initialized NN model, the NN model being one that, for an input consisting of the first current pose in the first coordinate system, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, outputs the current amount of motion to be performed by the robot.
S220': acquiring training data and label data.
S230': training the initialized NN model based on the training data and the label data to obtain the pre-trained NN model.
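Steps S210' to S230' can be sketched as a small regression network trained by gradient descent. The layer sizes, activation, pose dimensionality, and NumPy implementation below are assumptions for illustration only; the patent does not fix an architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input: robot pose (4) + pin coordinate (3) + jack coordinate (3) = 10 values.
# Output: current amount of motion (translation + rotation, 4 values).
W1 = rng.normal(0, 0.1, (10, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 4));  b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden layer
    return h, h @ W2 + b2           # predicted motion

def train_step(x, y, lr=0.05):
    """One mean-squared-error gradient-descent step (S230')."""
    global W1, b1, W2, b2
    h, pred = forward(x)
    err = pred - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)  # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return float((err**2).mean())   # current training loss
```

Here x would come from the recorded poses and coordinates (S220') and y from the labeled motion amounts.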
For the structure and acquisition method of this pre-trained NN model, refer to the description of the pre-trained NN model in Embodiment 1; for other related descriptions of the insertion method, refer to Embodiment 1; they are not repeated here.
With the above insertion method, performing insertion by a machine-learning-based method can improve the insertion accuracy in a variety of complex environments; in addition, in some cases the number of steps in the insertion motion can be reduced, which improves work efficiency.
Figure 37 is a flowchart of an embodiment of the method for acquiring the pre-trained eighth CNN model provided by the present invention.
Further, in some embodiments, the operation of step S410' in the above embodiment (acquiring, according to the third current image including the pin and the target jack, the second coordinate or second current coordinate of the pin and the third coordinate or third current coordinate of the target jack) may be implemented by a conventional vision method or by a machine learning method.
The conventional vision approach means binarizing the third current image, then identifying the contours of the pin and the target jack in the third current image, and calculating the second coordinate or second current coordinate of the pin and the third coordinate or third current coordinate of the target jack from the contours.
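A schematic reduction of that pipeline is thresholding followed by locating a blob. A real implementation would extract contours (e.g., with OpenCV) and map pixel coordinates into the first coordinate system via camera calibration; the threshold value here is an assumption:

```python
import numpy as np

def binarize(img, thresh=128):
    """Threshold a grayscale image (2-D uint8 array) to a binary mask."""
    return (np.asarray(img) >= thresh).astype(np.uint8)

def centroid(mask):
    """Image coordinate (row, col) of a blob's centroid, a stand-in for
    deriving a pin or jack coordinate from its detected contour."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()
```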
Implementation by the machine learning method means inputting the third current image into the pre-trained eighth CNN model, which directly outputs the second coordinate or second current coordinate, and the third coordinate or third current coordinate.
The eighth CNN model may be based on LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-Net, Fast R-CNN, Faster R-CNN, FCN, Mask R-CNN, YOLO, SSD, or YOLOv2.
As shown in Figure 37, in some embodiments, the pre-trained eighth CNN model is acquired by the following method:
S311': acquiring an initialized eighth CNN model, the eighth CNN model being one that, for an input third current image including the pin and the target jack, outputs the second coordinate or second current coordinate of the pin and the third coordinate or third current coordinate of the target jack in the third current image;
S312': acquiring training data and label data;
Multiple images including the pin and the target jack are collected while the insertion device is running or at rest; approximately 1000 acquisitions are needed to obtain sufficient training data to train the model.
Label data can be annotated manually or automatically. The automatic method can take, as training labels, the coordinates of the pin and the target jack extracted from the images including the pin and the target jack during insertion trajectory planning based on the conventional vision method.
S313': training the initialized eighth CNN model based on the training data and the label data to obtain the pre-trained eighth CNN model.
For the acquisition method and structure of the eighth CNN model, refer to the pre-trained sixth CNN model in Embodiment 1; they are not repeated here.
As shown in Figures 21 and 22, an embodiment of the present invention further provides an insertion device 700. The insertion device 700 includes a third image sensor 750, a robot 730, a processor 740, and a memory (not shown). The processor 740 is coupled to the robot 730 and the third image sensor 750 by wire or wirelessly.
The third image sensor 750, and the relationship between the third image sensor 750 and the robot 730, are calibrated in advance.
In operation, the third image sensor 750 acquires a third current image including the pin and the target jack and sends the third current image to the processor 740.
The third image sensor may be arranged at any position from which an image including both the pin and the target jack can be acquired, for example at a position around the PCB board or on the robot. As shown in Figure 16, when the third image sensor 750 is arranged around the periphery of the PCB board 900, the third image sensor 750 is fixed in pose relative to the target jack 910, while the pose of the moving pin 810 changes relative to it. As shown in Figure 15, when the third image sensor 750 is arranged on the robot 730, the pose of the target jack 910 changes continuously relative to the sensor as the robot 730 moves, while the pose of the pin is relatively fixed. The third image sensor 750 is preferably arranged around the periphery of the PCB board 900; this is described in further detail in the later embodiments.
In operation, the robot 730 sends the current information of each of its joints to the processor 740, moves by the current amount of motion under the control of the processor 740, and, under the control of the processor 740, drives the pin 810 into the target jack 910.
When the processor 740 operates (i.e., executes a computer program stored in the memory), it implements the steps in each of the insertion method embodiments described above, for example steps S310' to S370' shown in Figure 35.
In some embodiments, when operating, the processor 740 of the insertion device also implements the steps of the pre-trained NN model acquisition method and the pre-trained eighth CNN model acquisition method described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the insertion device.
For other related descriptions of the insertion method and the insertion device that are omitted here, refer to the other embodiments above; they are not repeated here.
Embodiment 10
As shown in Figure 38, in some embodiments, the present invention further provides an insertion method, the insertion method including:
S410': acquiring, according to the third current image including the pin and the target jack and based on the pre-trained eighth CNN model, the second coordinate or second current coordinate of the pin, and the third coordinate or third current coordinate of the target jack;
As shown in Figures 39, 21, and 22, in some embodiments, after the robot 730 grabs the electronic component 800 from the pick-up position, so that the third image sensor 750 can simultaneously acquire a third current image including both the pin 810 and the target jack 910, a step S470' of controlling the robot to drive the pin to move to the vicinity of the target jack may be included before S410'; for a specific description, refer to the related description in Embodiment 1, which is not repeated here.
The third image sensor 750 may be arranged on the robot 730, or around the periphery of the PCB board 900; it is preferably arranged around the periphery of the PCB board 900, which makes it convenient to acquire a third current image including the pin.
S420': acquiring a first current pose of the robot in the first coordinate system according to the current information of each joint of the robot;
S430': calculating, according to the first current pose, the second coordinate or second current coordinate, and the third coordinate or third current coordinate, the current amount of motion to be performed by the robot; S440': determining whether the robot satisfies the insertion condition; S450': if satisfied, controlling the robot to drive the pin into the target jack; S460': if not satisfied, controlling the robot to perform the current amount of motion.
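The check in S440' can be reduced to the two stopping rules stated earlier: the commanded motion stays small for several consecutive iterations, or a preset number of steps has been performed. The threshold and counts below are illustrative assumptions:

```python
class InsertionCondition:
    """Tracks S440': satisfied when the commanded motion stays below a
    threshold for `patience` consecutive checks, or after `max_steps`."""
    def __init__(self, eps=1e-3, patience=3, max_steps=100):
        self.eps, self.patience, self.max_steps = eps, patience, max_steps
        self.small, self.steps = 0, 0

    def check(self, motion_norm):
        self.steps += 1
        self.small = self.small + 1 if motion_norm < self.eps else 0
        return self.small >= self.patience or self.steps >= self.max_steps
```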
For the description of the eighth CNN model, refer to Embodiment 4; it is not repeated here.
With the above insertion method, performing insertion by a machine-learning-based method can improve the insertion accuracy in a variety of complex environments; in addition, in some cases the number of steps in the insertion motion can be reduced, which improves work efficiency.
In some embodiments, calculating, according to the first current pose, the second coordinate or second current coordinate, and the third coordinate or third current coordinate, the current amount of motion to be performed by the robot may be implemented by a conventional visual servoing method or by a machine learning method; the machine learning method is preferred, because it can improve the accuracy and efficiency of the current-motion calculation.
The machine learning method means implementation based on a pre-trained NN model. For the related description of the NN model, refer to Embodiment 9 or Embodiment 7; it is not repeated here.
As shown in Figures 21 and 22, an embodiment of the present invention further provides an insertion device 700. The insertion device 700 includes a third image sensor 750, a robot 730, a processor 740, and a memory (not shown). The processor 740 is coupled to the robot 730 and the third image sensor 750 by wire or wirelessly.
The third image sensor 750 may include a camera, a video camera, a scanner, or another device with the relevant functions (a mobile phone, a computer, etc.).
In operation, the third image sensor 750 acquires a third current image including the pin and the target jack and sends the third current image to the processor 740.
In operation, the robot 730 sends the current information of each of its joints to the processor 740, moves by the current amount of motion under the control of the processor 740, and, under the control of the processor 740, drives the pin 810 into the target jack 910.
When the processor 740 operates (i.e., executes a computer program stored in the memory), it implements the steps in each of the insertion method embodiments described above, for example steps S410' to S460' shown in Figure 38.
In some embodiments, when operating, the processor 740 of the insertion device also implements the steps of the pre-trained NN model acquisition method and the pre-trained eighth CNN model acquisition method described in the above embodiments. In addition, each of the above methods may also be executed by a processor of a device other than the insertion device.
For other related descriptions of the insertion method and the insertion device that are omitted here, refer to the other embodiments above; they are not repeated here.
In some embodiments, the present invention further provides an insertion apparatus, the insertion apparatus including: a relative pose acquisition program module, a first current pose acquisition program module, a second current pose acquisition program module, a third pose and third current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control insertion program module, and a control motion program module;
the relative pose acquisition program module is configured to acquire, according to an acquired first image including the pin, the relative pose of the pin with respect to the robot in a first coordinate system;
the first current pose acquisition program module is configured to acquire, according to acquired current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
the second current pose acquisition program module is configured to acquire, according to the first current pose and the relative pose, a second current pose of the pin in the first coordinate system;
the third pose and third current pose acquisition program module is configured to acquire, according to a second image or a second current image including the target jack, a third pose or a third current pose of the target jack in the first coordinate system;
the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second current pose, and the third pose or third current pose, the current amount of motion to be performed by the robot based on a pre-trained NN model; the judgment program module is configured to determine whether the robot satisfies the insertion condition; the control insertion program module is configured to, if satisfied, control the robot to drive the pin into the target jack; and the control motion program module is configured to, if not satisfied, control the robot to perform the current amount of motion; or
the insertion apparatus includes: a relative pose acquisition program module, a first current pose acquisition program module, a second current pose acquisition program module, a third pose and third current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control insertion program module, and a control motion program module;
the relative pose acquisition module is configured to acquire, according to an acquired first image including the pin and based on a pre-trained first CNN model, the relative pose of the pin with respect to the robot in the first coordinate system;
the first current pose acquisition program module is configured to acquire, according to the current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
the second current pose acquisition program module is configured to acquire, according to the first current pose and the relative pose, a second current pose of the pin in the first coordinate system;
the third pose and third current pose acquisition program module is configured to acquire, according to the second image or the second current image including the target jack and based on a pre-trained second CNN model or the first CNN model, a third pose or a third current pose of the target jack in the first coordinate system;
the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second current pose, and the third pose or third current pose, the current amount of motion to be performed by the robot;
the judgment program module is configured to determine whether the robot satisfies the insertion condition; the control insertion program module is configured to, if satisfied, control the robot to drive the pin into the target jack; and the control motion program module is configured to, if not satisfied, control the robot to perform the current amount of motion; or
the insertion apparatus includes: a first image acquisition program module, a first current pose acquisition program module, a second image or second current image acquisition program module, a current motion amount acquisition program module, a judgment program module, a control insertion program module, and a control motion program module;
the first image acquisition program module is configured to acquire a first image including the pin;
the first current pose acquisition program module is configured to acquire, according to acquired current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
the second image or second current image acquisition program module is configured to acquire a second image or a second current image including the target jack;
the current motion amount acquisition program module is configured to calculate, according to the first image, the first current pose, and the second image or the second current image, the current amount of motion to be performed by the robot based on a pre-trained third CNN model; the judgment program module is configured to determine whether the robot satisfies the insertion condition; the control insertion program module is configured to, if satisfied, control the robot to drive the pin into the target jack; and the control motion program module is configured to, if not satisfied, control the robot to perform the current amount of motion; or
the insertion apparatus includes: a second pose or second current pose and third pose or third current pose acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control insertion program module, and a control motion program module;
the second pose or second current pose and third pose or third current pose acquisition program module is configured to acquire, according to an acquired third current image including the pin and the target jack, a second pose or a second current pose of the pin in the first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;
the first current pose acquisition program module is configured to acquire, according to acquired current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second pose or second current pose, and the third pose or third current pose, the current amount of motion to be performed by the robot based on a pre-trained NN model; the judgment program module is configured to determine whether the robot satisfies the insertion condition; the control insertion program module is configured to, if satisfied, control the robot to drive the pin into the target jack; and the control motion program module is configured to, if not satisfied, control the robot to perform the current amount of motion; or
the insertion apparatus includes: a second pose or second current pose and third pose or third current pose acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control insertion program module, and a control motion program module;
the second pose or second current pose and third pose or third current pose acquisition program module is configured to acquire, according to the third current image including the pin and the target jack and based on a pre-trained fourth CNN model, a second pose or a second current pose of the pin in the first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;
the first current pose acquisition program module is configured to acquire, according to the current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second pose or second current pose, and the third pose or third current pose, the current amount of motion to be performed by the robot; the judgment program module is configured to determine whether the robot satisfies the insertion condition; the control insertion program module is configured to, if satisfied, control the robot to drive the pin into the target jack; and the control motion program module is configured to, if not satisfied, control the robot to perform the current amount of motion; or
The plug-in device includes: a third image acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
The third image acquisition program module is configured to acquire a third current image including the pin and the target jack;
The first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the acquired current information of each joint of the robot;
The current motion amount acquisition program module is configured to calculate, according to the third current image and the first current pose, and based on a pre-trained fifth CNN model, the current amount of motion to be implemented by the robot; the judgment program module is configured to determine whether the robot satisfies the plug-in condition; the control plug-in program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the control motion program module is configured to, if the condition is not satisfied, control the robot to implement the current amount of motion; or
The plug-in device includes: a second coordinate acquisition program module, a first current pose acquisition program module, a third coordinate or third current coordinate acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
The second coordinate acquisition program module is configured to acquire a second coordinate of the pin according to an acquired first image including the pin;
The first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the acquired current information of each joint of the robot;
The third coordinate or third current coordinate acquisition program module is configured to acquire a third coordinate or a third current coordinate of the target jack according to a second image or a second current image including the target jack;
The current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, and based on a pre-trained NN model, the current amount of motion to be implemented by the robot; the judgment program module is configured to determine whether the robot satisfies the plug-in condition; the control plug-in program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the control motion program module is configured to, if the condition is not satisfied, control the robot to implement the current amount of motion; or
The plug-in device includes: a second coordinate acquisition program module, a first current pose acquisition program module, a third coordinate or third current coordinate acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
The second coordinate acquisition program module is configured to acquire, according to an acquired first image including the pin, and based on a pre-trained sixth CNN model, a second coordinate of the pin;
The first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
The third coordinate or third current coordinate acquisition program module is configured to acquire, according to a second image or a second current image including the target jack, and based on a pre-trained seventh CNN model or the pre-trained sixth CNN model, a third coordinate or a third current coordinate of the target jack;
The current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, the current amount of motion to be implemented by the robot; the judgment program module is configured to determine whether the robot satisfies the plug-in condition; the control plug-in program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the control motion program module is configured to, if the condition is not satisfied, control the robot to implement the current amount of motion; or
The plug-in device includes: a second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
The second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module is configured to acquire, according to an acquired third current image including the pin and the target jack, a second coordinate or a second current coordinate of the pin, and a third coordinate or a third current coordinate of the target jack;
The first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the acquired current information of each joint of the robot;
The current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, and based on a pre-trained NN model, the current amount of motion to be implemented by the robot; the judgment program module is configured to determine whether the robot satisfies the plug-in condition; the control plug-in program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the control motion program module is configured to, if the condition is not satisfied, control the robot to implement the current amount of motion; or
The plug-in device includes: a second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module;
The second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module is configured to acquire, according to a third current image including the pin and the target jack, and based on a pre-trained eighth CNN model, a second coordinate or a second current coordinate of the pin, and a third coordinate or a third current coordinate of the target jack;
The first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to current information of each joint of the robot;
The current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, the current amount of motion to be implemented by the robot; the judgment program module is configured to determine whether the robot satisfies the plug-in condition; the control plug-in program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; and the control motion program module is configured to, if the condition is not satisfied, control the robot to implement the current amount of motion.
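All of the device variants above implement the same closed loop: estimate the pin and jack poses, compute a current amount of motion with a trained model, test the plug-in condition, and either insert or move. The following Python sketch illustrates that loop only; it is not the patented implementation, and `estimate_poses`, `predict_motion`, and `plug_in_condition_met` are invented stand-ins for the trained CNN/NN models.

```python
import math

# A minimal sketch (not the patented implementation) of the closed-loop
# behaviour shared by the device variants above. The trained CNN (pose
# estimation) and NN (motion prediction) are replaced by simple stand-ins.

def estimate_poses(image):
    """Stand-in for the pose-estimation CNN: returns the pin pose and the
    target-jack pose in the first (robot base) coordinate system."""
    return image["pin"], image["jack"]

def predict_motion(robot_pose, pin_pose, jack_pose):
    """Stand-in for the pre-trained NN: a proportional step that moves the
    pin halfway toward the jack on each iteration."""
    return [0.5 * (j - p) for p, j in zip(pin_pose, jack_pose)]

def plug_in_condition_met(pin_pose, jack_pose, tol=1e-3):
    """Plug-in condition: pin sufficiently aligned with the target jack."""
    return math.dist(pin_pose, jack_pose) < tol

def insertion_loop(robot_pose, pin_pose, jack_pose, max_steps=100):
    """Judge the plug-in condition each cycle; insert if satisfied,
    otherwise implement the current amount of motion and repeat."""
    for _ in range(max_steps):
        image = {"pin": pin_pose, "jack": jack_pose}  # current image
        pin_pose, jack_pose = estimate_poses(image)
        if plug_in_condition_met(pin_pose, jack_pose):
            return True, pin_pose       # control the robot to insert the pin
        motion = predict_motion(robot_pose, pin_pose, jack_pose)
        robot_pose = [r + m for r, m in zip(robot_pose, motion)]
        pin_pose = [p + m for p, m in zip(pin_pose, motion)]  # pin moves with robot
    return False, pin_pose              # step budget exhausted

ok, final_pose = insertion_loop(
    [0.0, 0.0, 0.0], [0.10, 0.05, 0.20], [0.30, 0.00, 0.05])
```

Because the stand-in motion model halves the pin-to-jack error on every cycle, the loop converges in a few iterations; a real system would instead query the trained NN and a fresh camera image each cycle.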
In some embodiments, the present invention further provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the plug-in method described in any one of the above embodiments is implemented.
As shown in FIG. 23, in some embodiments, the present invention further provides an electronic device including a memory 750, a processor 740, and a computer program 770 stored in the memory 750 and executable on the processor 740; when the processor executes the computer program, the plug-in method described in any one of the above embodiments is implemented.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory (not shown in the figure) and executed by the processor 740 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, and the instruction segments are used to describe the trajectory-planning process of the computer program in the plug-in device.
For example, the computer program may be divided such that the plug-in device includes: a relative pose acquisition program module, a first current pose acquisition program module, a second current pose acquisition program module, a third pose and third current pose acquisition program module, a current motion amount acquisition program module, a judgment program module, a control plug-in program module, and a control motion program module. The specific functions of the modules are as follows: the relative pose acquisition program module is configured to acquire, according to an acquired first image including the pin, a relative pose of the pin with respect to the robot in the first coordinate system; the first current pose acquisition program module is configured to acquire a first current pose of the robot in the first coordinate system according to the acquired current information of each joint of the robot; the second current pose acquisition program module is configured to acquire a second current pose of the pin in the first coordinate system according to the first current pose and the relative pose; the third pose and third current pose acquisition program module is configured to acquire a third pose or a third current pose of the target jack in the first coordinate system according to a second image or a second current image including the target jack; the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second current pose, and the third pose or the third current pose, and based on a pre-trained NN model, the current amount of motion to be implemented by the robot; the judgment program module is configured to determine whether the robot satisfies the plug-in condition; the control plug-in program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; and the control motion program module is configured to, if the condition is not satisfied, control the robot to implement the current amount of motion.
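As an illustration only, a module decomposition of this kind can be modelled as a set of callables wired together by the device. Every name below (`PlugInDevice`, the lambdas standing in for the trained CNN/NN models and for forward kinematics) is a hypothetical sketch, not the actual program division.

```python
from dataclasses import dataclass
from typing import Callable
import math

# Hypothetical model of the program-module decomposition: each module is a
# callable, and the device composes them into one control step.

@dataclass
class PlugInDevice:
    get_relative_pose: Callable  # relative pose acquisition program module
    get_robot_pose: Callable     # first current pose acquisition program module
    get_jack_pose: Callable      # third pose / third current pose module
    compute_motion: Callable     # current motion amount module (the NN)
    condition_met: Callable      # judgment program module

    def step(self, first_image, joint_info, second_image):
        rel = self.get_relative_pose(first_image)
        robot = self.get_robot_pose(joint_info)
        # second current pose of the pin = robot pose + relative pose
        pin = [r + d for r, d in zip(robot, rel)]
        jack = self.get_jack_pose(second_image)
        if self.condition_met(pin, jack):
            return "insert", None  # control plug-in program module acts
        return "move", self.compute_motion(robot, pin, jack)

# Minimal stand-ins in place of the trained CNN/NN models:
device = PlugInDevice(
    get_relative_pose=lambda img: img["rel"],
    get_robot_pose=lambda joints: joints,  # e.g. forward kinematics
    get_jack_pose=lambda img: img["jack"],
    compute_motion=lambda r, p, j: [jj - pp for pp, jj in zip(p, j)],
    condition_met=lambda p, j: math.dist(p, j) < 1e-3,
)
action, amount = device.step(
    {"rel": [0.0, 0.0, 0.1]}, [0.2, 0.0, 0.0], {"jack": [0.5, 0.0, 0.1]})
```

Keeping each module behind a plain callable interface mirrors the text's point that the same device skeleton can host different variants (pose-based, coordinate-based, or end-to-end) by swapping the acquisition and motion modules.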
The electronic device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The electronic device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the schematic diagram is merely an example of the electronic device and does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than illustrated, combine certain components, or use different components; for example, the electronic device may further include input/output devices, a network access device, a bus, and the like.
The processor 740 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be a storage device built into the plug-in device or the electronic device, such as a hard disk or a memory. The memory may also be an external storage device of the plug-in device or the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the plug-in device or the electronic device. Further, the memory may include both an internal storage unit of the plug-in device 700 or the electronic device and an external storage device. The memory is used to store the computer program and other programs and data required by the electronic device or the plug-in device. The memory may also be used to temporarily store data that has been output or is about to be output.
Those skilled in the art will understand that FIGS. 19-23 are merely examples of the plug-in device and the electronic device and do not constitute a limitation on them; the plug-in device and the electronic device may include more or fewer components than illustrated, combine certain components, or use different components; for example, the plug-in device may further include a memory, input/output devices, and the like.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant details may be found in the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled practitioners may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description of the above embodiments is intended only to help understand the technical solutions of the present invention and their core ideas. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (28)

  1. A plug-in method, characterized in that the plug-in method comprises:
    acquiring, according to an acquired first image including a pin, a relative pose of the pin with respect to a robot in a first coordinate system;
    acquiring, according to acquired current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
    acquiring, according to an acquired second image or second current image including a target jack, a third pose or a third current pose of the target jack in the first coordinate system;
    calculating, according to the first current pose, the relative pose, and the third pose or the third current pose, and based on a pre-trained NN model, a current amount of motion to be implemented by the robot; determining whether the robot satisfies a plug-in condition; if the condition is satisfied, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  2. The plug-in method according to claim 1, characterized in that the relative pose is acquired by the following method:
    acquiring the relative pose according to the first image and based on a pre-trained first CNN model; and/or
    the third pose or the third current pose is acquired by the following method:
    acquiring the third pose or the third current pose according to the second image or the second current image, and based on the pre-trained first CNN model or a pre-trained second CNN model.
  3. A plug-in method, characterized in that the plug-in method comprises:
    acquiring, according to an acquired first image including a pin, and based on a pre-trained first CNN model, a relative pose of the pin with respect to a robot in a first coordinate system;
    acquiring, according to acquired current information of each joint of the robot, a first current pose of the robot in the first coordinate system;
    acquiring, according to an acquired second image or second current image including a target jack, and based on a pre-trained second CNN model or the first CNN model, a third pose or a third current pose of the target jack in the first coordinate system;
    calculating, according to the first current pose, the relative pose, and the third pose or the third current pose, a current amount of motion to be implemented by the robot; determining whether the robot satisfies a plug-in condition; if the condition is satisfied, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  4. A plug-in method, characterized in that the plug-in method comprises:
    acquiring a first image including a pin;
    acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in a first coordinate system;
    acquiring a second image or a second current image including a target jack;
    calculating, according to the first image, the first current pose, and the second image or the second current image, and based on a pre-trained third CNN model, a current amount of motion to be implemented by the robot; determining whether the robot satisfies a plug-in condition; if the condition is satisfied, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  5. A plug-in method, characterized in that the plug-in method comprises:
    acquiring, according to an acquired third current image including a pin and a target jack, a second pose or a second current pose of the pin in a first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;
    acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in the first coordinate system;
    calculating, according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose, and based on a pre-trained NN model, a current amount of motion to be implemented by the robot; determining whether the robot satisfies a plug-in condition; if the condition is satisfied, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  6. The plug-in method according to claim 5, characterized in that the second pose or the second current pose and the third pose or the third current pose are acquired by the following method:
    acquiring, according to the third current image, and based on a pre-trained fourth CNN model, the second pose or the second current pose and the third pose or the third current pose.
  7. A plug-in method, characterized in that the plug-in method comprises:
    acquiring, according to a third current image including a pin and a target jack, and based on a pre-trained fourth CNN model, a second pose or a second current pose of the pin in a first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;
    acquiring, according to current information of each joint of a robot, a first current pose of the robot in the first coordinate system;
    calculating, according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose, a current amount of motion to be implemented by the robot; determining whether the robot satisfies a plug-in condition; if the condition is satisfied, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  8. A plug-in method, characterized in that the plug-in method comprises:
    acquiring a third current image including a pin and a target jack;
    acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in a first coordinate system;
    calculating, according to the third current image and the first current pose, and based on a pre-trained fifth CNN model, a current amount of motion to be implemented by the robot; determining whether the robot satisfies a plug-in condition; if the condition is satisfied, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  9. A plug-in method, characterized in that the plug-in method comprises:
    acquiring a second coordinate of a pin according to an acquired first image including the pin;
    acquiring, according to acquired current information of each joint of a robot, a first current pose of the robot in a first coordinate system;
    acquiring a third coordinate or a third current coordinate of a target jack according to an acquired second image or second current image including the target jack;
    calculating, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, and based on a pre-trained NN model, a current amount of motion to be implemented by the robot; determining whether the robot satisfies a plug-in condition; if the condition is satisfied, controlling the robot to drive the pin into the target jack; if not, controlling the robot to implement the current amount of motion.
  10. The plug-in method according to claim 9, characterized in that the second coordinate is acquired by the following method:
    acquiring the second coordinate according to the first image and based on a pre-trained first CNN model; and/or
    the third coordinate or the third current coordinate is acquired by the following method:
    acquiring the third coordinate or the third current coordinate according to the second image or the second current image, and based on the pre-trained first CNN model or a pre-trained second CNN model.
  11. A plug-in method, wherein the plug-in method comprises:
    obtaining a second coordinate of the pin according to an acquired first image including the pin, based on a pre-trained first CNN model;
    obtaining a first current pose of the robot in the first coordinate system according to current information of the joints of the robot;
    obtaining a third coordinate or a third current coordinate of the target jack according to an acquired second image or second current image including the target jack, based on the pre-trained first CNN model or a pre-trained second CNN model;
    calculating, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, a current amount of motion to be performed by the robot; determining whether the robot satisfies an insertion condition; if satisfied, controlling the robot to drive the pin into the target jack; if not satisfied, controlling the robot to perform the current amount of motion.
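The loop recited in the method claims (compute the current amount of motion, test the insertion condition, insert or move, repeat) can be sketched as follows. All class and function names here are illustrative assumptions, not part of the patent, and a damped proportional step stands in for the pre-trained NN model so that the sketch is self-contained:

```python
import numpy as np

class StubRobot:
    """Illustrative 2-D stand-in for the robot of the claims."""
    def __init__(self, start):
        self.pose = np.asarray(start, dtype=float)  # first current pose
        self.inserted = False
    def current_pose(self):
        return self.pose.copy()
    def move(self, delta):          # perform the current amount of motion
        self.pose += delta
    def insert(self):               # drive the pin into the target jack
        self.inserted = True

def compute_motion(pose, pin_offset, jack_xy, gain=0.5):
    """Current amount of motion. In the claims this mapping is produced by a
    pre-trained NN model; a damped proportional step stands in here."""
    pin_xy = pose + pin_offset      # pin coordinate implied by the robot pose
    return gain * (jack_xy - pin_xy)

def insertion_loop(robot, pin_offset, jack_xy, tol=1e-3, max_steps=100):
    for _ in range(max_steps):
        pose = robot.current_pose()
        # insertion condition: pin aligned with the target jack within tol
        if np.linalg.norm(jack_xy - (pose + pin_offset)) < tol:
            robot.insert()
            return True
        robot.move(compute_motion(pose, pin_offset, jack_xy))
    return False
```

With `StubRobot([0, 0])`, a pin offset of `(1, 0)` and a jack at `(5, 3)`, the alignment error halves every step, so the loop converges in roughly a dozen iterations.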
  12. A plug-in method, wherein the plug-in method comprises:
    obtaining a second coordinate or a second current coordinate of the pin in a first coordinate system, and a third coordinate or a third current coordinate of the target jack in the first coordinate system, according to an acquired third current image including the pin and the target jack;
    obtaining a first current pose of the robot in the first coordinate system according to acquired current information of the joints of the robot;
    calculating, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, a current amount of motion to be performed by the robot, based on a pre-trained NN model; determining whether the robot satisfies an insertion condition; if satisfied, controlling the robot to drive the pin into the target jack; if not satisfied, controlling the robot to perform the current amount of motion.
  13. The plug-in method according to claim 12, wherein the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, are obtained as follows:
    obtaining, according to the third current image, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, based on a pre-trained third CNN model.
  14. A plug-in method, wherein the plug-in method comprises:
    obtaining a second coordinate or a second current coordinate of the pin, and a third coordinate or a third current coordinate of the target jack, according to an acquired third current image including the pin and the target jack, based on a pre-trained third CNN model;
    obtaining a first current pose of the robot in the first coordinate system according to current information of the joints of the robot;
    calculating, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, a current amount of motion to be performed by the robot; determining whether the robot satisfies an insertion condition; if satisfied, controlling the robot to drive the pin into the target jack; if not satisfied, controlling the robot to perform the current amount of motion.
  15. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the plug-in method according to any one of claims 1 to 14.
  16. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the plug-in method according to any one of claims 1 to 14.
  17. A plug-in device, wherein the plug-in device comprises a first image sensor, a second image sensor, a robot, and a processor;
    the processor is coupled to the first image sensor, the second image sensor, and the robot, respectively;
    when in operation, the first image sensor captures a first image including the pin and sends the first image to the processor;
    when in operation, the second image sensor captures a second image or a second current image including the target jack and sends the second image or the second current image to the processor;
    when in operation, the robot sends current information of its joints to the processor, moves by the current amount of motion under the control of the processor, and, under the control of the processor, drives the pin into the target jack;
    when in operation, the processor implements the plug-in method according to any one of claims 1 to 4, or the plug-in method according to any one of claims 9 to 11.
  18. A plug-in device, wherein the plug-in device comprises a third image sensor, a robot, and a processor;
    the processor is coupled to the third image sensor and the robot, respectively;
    when in operation, the third image sensor captures a third current image including the pin and the target jack and sends the third current image to the processor;
    when in operation, the robot sends current information of its joints to the processor, moves by the current amount of motion under the control of the processor, and, under the control of the processor, drives the pin into the target jack;
    when in operation, the processor implements the plug-in method according to any one of claims 5 to 8, or the plug-in method according to any one of claims 12 to 14.
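The device of claim 18 couples one image sensor and the robot to a processor that closes the control loop. A minimal sketch of that data flow is below; every class and method name is an illustrative assumption (the patent specifies only the couplings and the messages, not an API), and the motion computation and insertion condition are passed in as plain callables:

```python
class ImageSensor:
    """Stands in for the third image sensor: yields the third current image."""
    def __init__(self, frames):
        self._frames = iter(frames)
    def capture(self):
        return next(self._frames)

class Robot:
    """Stands in for the robot: reports joint information, moves, inserts."""
    def __init__(self):
        self.joints = [0.0, 0.0]
        self.inserted = False
    def joint_info(self):
        return list(self.joints)
    def move(self, delta):
        self.joints = [j + d for j, d in zip(self.joints, delta)]
    def insert(self):
        self.inserted = True

class Processor:
    """Coupled to the sensor and the robot; runs one insertion cycle."""
    def __init__(self, sensor, robot, compute_motion, condition_met):
        self.sensor, self.robot = sensor, robot
        self.compute_motion, self.condition_met = compute_motion, condition_met
    def cycle(self):
        image = self.sensor.capture()           # third current image
        joints = self.robot.joint_info()        # current joint information
        if self.condition_met(image, joints):   # insertion condition
            self.robot.insert()
            return True
        self.robot.move(self.compute_motion(image, joints))
        return False
```

A caller repeats `cycle()` until it returns `True`; with a proportional `compute_motion` and a small alignment tolerance the loop terminates after a handful of cycles.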
  19. A plug-in apparatus, wherein the plug-in apparatus comprises: a relative pose acquisition program module, a first current pose acquisition program module, a third pose and third current pose acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the relative pose acquisition program module is configured to obtain, according to an acquired first image including the pin, a relative pose of the pin relative to the robot in a first coordinate system;
    the first current pose acquisition program module is configured to obtain, according to acquired current information of the joints of the robot, a first current pose of the robot in the first coordinate system;
    the third pose and third current pose acquisition program module is configured to obtain, according to a second image or a second current image including the target jack, a third pose or a third current pose of the target jack in the first coordinate system;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the relative pose, and the third pose or the third current pose, a current amount of motion to be performed by the robot, based on a pre-trained NN model; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a relative pose acquisition program module, a first current pose acquisition program module, a second current pose acquisition program module, a third pose or third current pose acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the relative pose acquisition program module is configured to obtain, according to an acquired first image including the pin, a relative pose of the pin relative to the robot in a first coordinate system, based on a pre-trained first CNN model;
    the first current pose acquisition program module is configured to obtain, according to acquired current information of the joints of the robot, a first current pose of the robot in the first coordinate system;
    the third pose and third current pose acquisition program module is configured to obtain, according to an acquired second image or second current image including the target jack, a third pose or a third current pose of the target jack in the first coordinate system, based on a pre-trained second CNN model or the first CNN model;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the relative pose, and the third pose or the third current pose, a current amount of motion to be performed by the robot; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a first image acquisition program module, a first current pose acquisition program module, a second image or second current image acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the first image acquisition program module is configured to acquire a first image including the pin;
    the first current pose acquisition program module is configured to obtain, according to acquired current information of the joints of the robot, a first current pose of the robot in the first coordinate system;
    the second image or second current image acquisition program module is configured to acquire a second image or a second current image including the target jack;
    the current motion amount acquisition program module is configured to calculate, according to the first image, the first current pose, and the second image or the second current image, a current amount of motion to be performed by the robot, based on a pre-trained third CNN model; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a second pose or second current pose and third pose or third current pose acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the second pose or second current pose and third pose or third current pose acquisition program module is configured to obtain, according to an acquired third current image including the pin and the target jack, a second pose or a second current pose of the pin in a first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system;
    the first current pose acquisition program module is configured to obtain, according to acquired current information of the joints of the robot, a first current pose of the robot in the first coordinate system;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose, a current amount of motion to be performed by the robot, based on a pre-trained NN model; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a second pose or second current pose and third pose or third current pose acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the second pose or second current pose and third pose or third current pose acquisition program module is configured to obtain, according to a third current image including the pin and the target jack, a second pose or a second current pose of the pin in a first coordinate system, and a third pose or a third current pose of the target jack in the first coordinate system, based on a pre-trained fourth CNN model;
    the first current pose acquisition program module is configured to obtain, according to current information of the joints of the robot, a first current pose of the robot in the first coordinate system;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second pose or the second current pose, and the third pose or the third current pose, a current amount of motion to be performed by the robot; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a third image acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the third image acquisition program module is configured to acquire a third current image including the pin and the target jack;
    the first current pose acquisition program module is configured to obtain, according to acquired current information of the joints of the robot, a first current pose of the robot in the first coordinate system;
    the current motion amount acquisition program module is configured to calculate, according to the third current image and the first current pose, a current amount of motion to be performed by the robot, based on a pre-trained fifth CNN model; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a second coordinate acquisition program module, a first current pose acquisition program module, a third coordinate or third current coordinate acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the second coordinate acquisition program module is configured to obtain, according to an acquired first image including the pin, a second coordinate of the pin;
    the first current pose acquisition program module is configured to obtain, according to acquired current information of the joints of the robot, a first current pose of the robot in a first coordinate system;
    the third coordinate or third current coordinate acquisition program module is configured to obtain, according to a second image or a second current image including the target jack, a third coordinate or a third current coordinate of the target jack;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, a current amount of motion to be performed by the robot, based on a pre-trained NN model; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a second coordinate acquisition program module, a first current pose acquisition program module, a third coordinate or third current coordinate acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the second coordinate acquisition program module is configured to obtain, according to an acquired first image including the pin, a second coordinate of the pin, based on a pre-trained first CNN model;
    the first current pose acquisition program module is configured to obtain, according to current information of the joints of the robot, a first current pose of the robot in a first coordinate system;
    the third coordinate or third current coordinate acquisition program module is configured to obtain, according to a second image or a second current image including the target jack, a third coordinate or a third current coordinate of the target jack, based on a pre-trained second CNN model;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate, and the third coordinate or the third current coordinate, a current amount of motion to be performed by the robot; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module is configured to obtain, according to an acquired third current image including the pin and the target jack, a second coordinate or a second current coordinate of the pin, and a third coordinate or a third current coordinate of the target jack;
    the first current pose acquisition program module is configured to obtain, according to acquired current information of the joints of the robot, a first current pose of the robot in a first coordinate system;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, a current amount of motion to be performed by the robot, based on a pre-trained NN model; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion; or
    the plug-in apparatus comprises: a second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module, a first current pose acquisition program module, a current motion amount acquisition program module, a determination program module, an insertion control program module, and a motion control program module;
    the second coordinate or second current coordinate and third coordinate or third current coordinate acquisition program module is configured to obtain, according to an acquired third current image including the pin and the target jack, a second coordinate or a second current coordinate of the pin, and a third coordinate or a third current coordinate of the target jack, based on a pre-trained third CNN model;
    the first current pose acquisition program module is configured to obtain, according to current information of the joints of the robot, a first current pose of the robot in a first coordinate system;
    the current motion amount acquisition program module is configured to calculate, according to the first current pose, the second coordinate or the second current coordinate, and the third coordinate or the third current coordinate, a current amount of motion to be performed by the robot; the determination program module is configured to determine whether the robot satisfies an insertion condition; the insertion control program module is configured to, if the condition is satisfied, control the robot to drive the pin into the target jack; the motion control program module is configured to, if the condition is not satisfied, control the robot to perform the current amount of motion.
  20. A method for obtaining the pre-trained NN model in the plug-in method according to claim 1, 2, 5 or 6, wherein the pre-trained NN model is obtained as follows:
    obtaining an initialized NN model, the NN model outputting, for an input first current pose, relative pose, second pose or second current pose, and third pose or third current pose, a current amount of motion to be performed by the robot;
    obtaining training data and label data;
    training the initialized NN model based on the training data and the label data, so as to obtain the pre-trained NN model.
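The three steps above (initialize a model, gather training and label data, train) can be sketched with a deliberately tiny stand-in: a single linear layer trained by gradient descent on synthetic pose-to-motion pairs. The architecture, loss, and data below are illustrative assumptions; the claim fixes none of them:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(n_in, n_out):
    """The 'initialized NN model': here a single linear layer mapping a
    concatenated pose vector to a current amount of motion. The real
    architecture is not specified by the claim."""
    return {"W": rng.normal(0.0, 0.1, (n_in, n_out)), "b": np.zeros(n_out)}

def predict(model, x):
    return x @ model["W"] + model["b"]

def train(model, X, Y, lr=0.1, epochs=500):
    """Train on (training data X, label data Y) by gradient descent on MSE."""
    n = len(X)
    for _ in range(epochs):
        err = predict(model, X) - Y            # prediction error
        model["W"] -= lr * X.T @ err / n       # gradient step on weights
        model["b"] -= lr * err.mean(axis=0)    # gradient step on bias
    return model

# synthetic training data (pose vectors) and label data (motion vectors)
X = rng.normal(size=(256, 6))
Y = X @ rng.normal(size=(6, 2))
model = train(init_model(6, 2), X, Y)
mse = float(np.mean((predict(model, X) - Y) ** 2))
```

Because the synthetic labels are an exact linear function of the inputs, the training error shrinks to near zero, which is a convenient sanity check that the three-step recipe is wired correctly.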
  21. A method for obtaining the pre-trained first CNN model in the plug-in method according to claim 2 or 3, wherein the pre-trained first CNN model is obtained as follows:
    obtaining an initialized first CNN model, the first CNN model outputting a relative pose or a pin pose for an input first image, and/or outputting a third pose or a third current pose for an input second image or second current image;
    obtaining training data and label data;
    training the initialized first CNN model based on the training data and the label data, so as to obtain the pre-trained first CNN model.
  22. A method for obtaining the pre-trained third CNN model in the plug-in method according to claim 4, wherein the pre-trained third CNN model is obtained as follows:
    obtaining an initialized third CNN model, the third CNN model outputting, for an input first image, an input second image or second current image, and an input first current pose, a current amount of motion to be performed by the robot;
    obtaining training data and label data;
    training the initialized third CNN model based on the training data and the label data, so as to obtain the pre-trained third CNN model.
  23. A method for obtaining the pre-trained fourth CNN model in the plug-in method according to claim 6 or 7, wherein the pre-trained fourth CNN model is obtained as follows:
    obtaining an initialized fourth CNN model, the fourth CNN model outputting, for an input third current image, a second pose or a second current pose, and a third pose or a third current pose;
    obtaining training data and label data;
    training the initialized fourth CNN model based on the training data and the label data, so as to obtain the pre-trained fourth CNN model.
  24. A method for obtaining the pre-trained fifth CNN model in the plug-in method according to claim 8, wherein the pre-trained fifth CNN model is obtained as follows:
    obtaining an initialized fifth CNN model, the fifth CNN model outputting, for an input third current image and an input first current pose, a current amount of motion to be performed by the robot;
    obtaining training data and label data;
    training the initialized fifth CNN model based on the training data and the label data, so as to obtain the pre-trained fifth CNN model.
  25. A method for obtaining the pre-trained NN model in the plug-in method according to claim 9, 10, 12 or 13, wherein the method for obtaining the pre-trained NN model comprises:
    obtaining an initialized NN model, the NN model outputting, for an input first current pose, second coordinate or second current coordinate, and third coordinate or third current coordinate, a current amount of motion to be performed by the robot;
    obtaining training data and label data;
    training the initialized NN model based on the training data and the label data, so as to obtain the pre-trained NN model.
  26. A method for obtaining the pre-trained sixth CNN model in the plug-in method according to claim 10 or 11, wherein the method for obtaining the pre-trained sixth CNN model comprises:
    obtaining an initialized sixth CNN model, the sixth CNN model outputting, for an input first image and/or an input second image or second current image, the second coordinate and/or the third coordinate or third current coordinate;
    obtaining training data and label data;
    training the initialized sixth CNN model based on the training data and the label data, so as to obtain the pre-trained sixth CNN model.
  27. A method for obtaining the pre-trained seventh CNN model used in the plug-in method of claim 10 or 11, wherein the method comprises:
    obtaining an initialized seventh CNN model, the seventh CNN model taking as input the second image or second current image, and outputting the third coordinate or third current coordinate;
    obtaining training data and label data;
    training the initialized seventh CNN model based on the training data and the label data, to obtain the pre-trained seventh CNN model.
  28. A method for obtaining the pre-trained eighth CNN model used in the plug-in method of claim 13 or 14, wherein the method comprises:
    obtaining an initialized eighth CNN model, the eighth CNN model taking as input the third current image, and outputting the second coordinate or second current coordinate and the third coordinate or third current coordinate;
    obtaining training data and label data;
    training the initialized eighth CNN model based on the training data and the label data, to obtain the pre-trained eighth CNN model.
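The model-acquisition claims above all follow the same three-step recipe: obtain an initialized model, obtain training data and label data, then train the model on that data. The patent does not disclose an architecture, loss, or optimizer, so the sketch below is purely illustrative of the recipe's shape: a two-parameter linear model stands in for the CNN, the data is synthetic, and every name (`init_model`, `get_training_data`, `train`, the learning rate) is our own assumption, not the patent's.

```python
# Illustrative sketch of the three-step procedure shared by the claims above:
# (1) obtain an initialized model, (2) obtain training and label data,
# (3) train the initialized model on that data. A linear model stands in
# for the CNN; the data is synthetic (label = 2 * input + 1).

def init_model():
    # Step 1: an "initialized" model -- weight and bias start at zero.
    return {"w": 0.0, "b": 0.0}

def get_training_data():
    # Step 2: training data (here, a scalar stand-in for the pose error)
    # and label data (the motion amount the robot should perform).
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [2.0 * x + 1.0 for x in xs]
    return xs, ys

def train(model, xs, ys, lr=0.02, epochs=2000):
    # Step 3: fit the model to the labels by gradient descent on the
    # mean-squared error between predicted and labeled motion amounts.
    n = len(xs)
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in zip(xs, ys):
            err = (model["w"] * x + model["b"]) - y
            dw += 2.0 * err * x / n
            db += 2.0 * err / n
        model["w"] -= lr * dw
        model["b"] -= lr * db
    return model

model = train(init_model(), *get_training_data())
print(round(model["w"], 2), round(model["b"], 2))  # prints: 2.0 1.0
```

The trained parameters recover the synthetic rule, which is all the three-step recipe guarantees: the "pre-trained" model of the claims is simply whatever the training step produces from the chosen data and labels.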
PCT/CN2019/080453 2018-04-02 2019-03-29 Plug-in method and plug-in device WO2019192402A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980000632.9A CN110463376B (en) 2018-04-02 2019-03-29 Machine plugging method and machine plugging equipment

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810281121 2018-04-02
CN201810281673 2018-04-02
CN201810281121.6 2018-04-02
CN201810281673.7 2018-04-02

Publications (1)

Publication Number Publication Date
WO2019192402A1 (en) 2019-10-10

Family

ID=68099897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/080453 WO2019192402A1 (en) 2018-04-02 2019-03-29 Plug-in method and plug-in device

Country Status (2)

Country Link
CN (1) CN110463376B (en)
WO (1) WO2019192402A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112312666B (en) * 2020-11-06 2023-08-15 浪潮电子信息产业股份有限公司 Circuit board screw driving method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5090927A (en) * 1991-06-27 1992-02-25 At&T Bell Laboratories Connectors including lead alignment strips
CN103841814A (en) * 2012-11-26 2014-06-04 台达电子电源(东莞)有限公司 Device and method for assembling electronic device on socket
CN103963058A (en) * 2014-04-30 2014-08-06 重庆环视科技有限公司 Mechanical arm grasping control system and method based on multi-azimuth visual positioning
CN105451461A (en) * 2015-11-25 2016-03-30 四川长虹电器股份有限公司 PCB board positioning method based on SCARA robot
CN107205335A (en) * 2016-03-17 2017-09-26 深圳市堃琦鑫华股份有限公司 One kind automation plug-in method

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7110591B2 (en) * 2001-03-28 2006-09-19 Siemens Corporate Research, Inc. System and method for recognizing markers on printed circuit boards
JP2005121478A (en) * 2003-10-16 2005-05-12 Hitachi High-Tech Instruments Co Ltd Method and apparatus for inspecting mounted component, and fixture substrate therefor
CN1979137A (en) * 2005-01-11 2007-06-13 欧姆龙株式会社 Substrate inspection device, method and device for setting inspection logic
JP4664752B2 (en) * 2005-06-30 2011-04-06 Juki株式会社 Component adsorption method and apparatus
JP2013046923A (en) * 2011-08-29 2013-03-07 Ricoh Co Ltd Lead pin correction device and lead pin correction method
TWI459165B (en) * 2012-03-14 2014-11-01 Giga Byte Tech Co Ltd Insertion part recognition system and insertion part recognition method
CN102883548B (en) * 2012-10-16 2015-03-18 南京航空航天大学 Component mounting and dispatching optimization method for chip mounter on basis of quantum neural network
CN105228437B (en) * 2015-09-30 2018-01-23 广东省自动化研究所 A kind of temperature electronic components and parts assembling method based on compound positioning
CN205812526U (en) * 2016-06-30 2016-12-14 深圳市顶点视觉自动化技术有限公司 The vision positioning system of electronic component pin
CN106228563B (en) * 2016-07-29 2019-02-26 杭州鹰睿科技有限公司 Automatic setup system based on 3D vision
CN106709909B (en) * 2016-12-13 2019-06-25 重庆理工大学 A kind of flexible robot's visual identity and positioning system based on deep learning
CN206470210U (en) * 2016-12-24 2017-09-05 大连日佳电子有限公司 Machine vision scolding tin position detecting system
CN106874914B (en) * 2017-01-12 2019-05-14 华南理工大学 A kind of industrial machinery arm visual spatial attention method based on depth convolutional neural networks

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113412048A (en) * 2021-06-09 2021-09-17 东莞市冠佳电子设备有限公司 Discharging pin arranging device
CN114700953A (en) * 2022-04-29 2022-07-05 华中科技大学 Particle swarm hand-eye calibration method and system based on joint zero error
CN114700953B (en) * 2022-04-29 2023-09-08 华中科技大学 Particle swarm hand-eye calibration method and system based on joint zero error
CN115017857A (en) * 2022-06-14 2022-09-06 大连日佳电子有限公司 Method and system for determining pin inserting position of electronic component
CN116184616A (en) * 2022-12-06 2023-05-30 中国科学院空间应用工程与技术中心 Method and system for controlling pose of prism of gravity meter
CN116184616B (en) * 2022-12-06 2023-11-14 中国科学院空间应用工程与技术中心 Method and system for controlling pose of prism of gravity meter

Also Published As

Publication number Publication date
CN110463376B (en) 2021-10-29
CN110463376A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
WO2019192402A1 (en) Plug-in method and plug-in device
US11565407B2 (en) Learning device, learning method, learning model, detection device and grasping system
CN109800864B (en) Robot active learning method based on image input
CN111695562B (en) Autonomous robot grabbing method based on convolutional neural network
JP6793885B1 (en) Image processing system and image processing method
Wu et al. Pixel-attentive policy gradient for multi-fingered grasping in cluttered scenes
CN111709980A (en) Multi-scale image registration method and device based on deep learning
US20110208685A1 (en) Motion Capture Using Intelligent Part Identification
Cheng et al. A vision-based robot grasping system
CN114387513A (en) Robot grabbing method and device, electronic equipment and storage medium
Saxena et al. Generalizable pose estimation using implicit scene representations
CN113160330B (en) End-to-end-based camera and laser radar calibration method, system and medium
JP2019164836A (en) Learning device, learning method, learning model, detection device, and holding system
CN113551661A (en) Pose identification and track planning method, device and system, storage medium and equipment
JP7349423B2 (en) Learning device, learning method, learning model, detection device and grasping system
Adrian et al. Dfbvs: Deep feature-based visual servo
US20240013497A1 (en) Learning Articulated Shape Reconstruction from Imagery
Wang et al. Robot grasping in dense clutter via view-based experience transfer
Zhou et al. Visual tracking using improved multiple instance learning with co-training framework for moving robot
Tekden et al. Neural field movement primitives for joint modelling of scenes and motions
JP2021056542A (en) Pose detection of object from image data
Figundio Pose estimation and semantic meaning extraction for robotics using neural networks
Sun et al. PanelPose: A 6D Pose Estimation of Highly-Variable Panel Object for Robotic Robust Cockpit Panel Inspection
He et al. Deep Learning-based Mobile Robot Target Object Localization and Pose Estimation Research
Xing et al. How Does A Camera Look At One 3D CAD Object?

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19781613

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 280121)

122 Ep: pct application non-entry in european phase

Ref document number: 19781613

Country of ref document: EP

Kind code of ref document: A1