CN114750168B - Mechanical arm control method and system based on machine vision - Google Patents

Mechanical arm control method and system based on machine vision

Info

Publication number
CN114750168B
CN114750168B (application CN202210664422.3A)
Authority
CN
China
Prior art keywords
manipulator
information
task
coordinate
obtaining unit
Prior art date
Legal status
Active
Application number
CN202210664422.3A
Other languages
Chinese (zh)
Other versions
CN114750168A (en)
Inventor
许理浩
Current Assignee
Suzhou Shangshun Technology Co ltd
Original Assignee
Smooth Machine Systems Co ltd
Priority date
Filing date
Publication date
Application filed by Smooth Machine Systems Co ltd filed Critical Smooth Machine Systems Co ltd
Priority to CN202210664422.3A
Publication of CN114750168A
Application granted
Publication of CN114750168B
Legal status: Active

Classifications

    • B Performing operations; transporting
    • B25 Hand tools; portable power-driven tools; manipulators
    • B25J Manipulators; chambers provided with manipulation devices
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1605 Simulation of manipulator lay-out, design, modelling of manipulator
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • Y02 Technologies or applications for mitigation or adaptation against climate change
    • Y02P Climate change mitigation technologies in the production or processing of goods
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a mechanical arm control method and system based on machine vision, wherein the method comprises the following steps: obtaining mechanical structure information of a first manipulator; acquiring a first image information base with a first image acquisition device; obtaining a first feature set; constructing a touch-stop detection model from the mechanical structure information and basic data generated from the first feature set; obtaining a real-time position data set from a position detection device; inputting a real-time stress detection set of the real-time position data set into the touch-stop detection model, and judging whether a first touch-stop instruction is triggered; and, if the first touch-stop instruction is not triggered, adjusting the priority of the initial control task of the first manipulator and controlling the first manipulator according to the adjusted control task. This solves the technical problem in the prior art that, after a manipulator encounters an accidental collision during actual work, the manipulator cannot be accurately controlled, so its work task cannot be completed smoothly and efficiently.

Description

Mechanical arm control method and system based on machine vision
Technical Field
The application relates to the technical field of computer application, in particular to a mechanical arm control method and system based on machine vision.
Background
With the continuous advance of technology, manipulators have found wide application across many industries. At present, manipulators fall mainly into two categories: multi-purpose manipulators, which can clamp articles of various shapes, sizes and weights by improving the dexterity of the manipulator; and special-purpose manipulators for specific tasks, used to clamp articles for moving and transfer. The rapid development of the manipulator has strongly promoted the automation process. However, when a manipulator in the prior art executes a work task, accidental collisions are not intelligently analysed on the basis of the manipulator's real-time task type, so no targeted adjustment and control measures can be taken. Research on intelligent manipulator control methods based on computer technology is therefore of great significance for the development of manipulators.
In the process of implementing the technical solution in the embodiments of the present application, the inventor of the present application found that the above-mentioned technology has at least the following technical problem:
in the prior art, after a manipulator encounters an accidental collision during actual work, its control cannot be quickly and adaptively adjusted according to the manipulator's real-time working requirements; as a result, the manipulator cannot be accurately controlled and its work task cannot be completed smoothly and efficiently.
Disclosure of Invention
The application aims to provide a mechanical arm control method and system based on machine vision, to solve the technical problem in the prior art that, after a mechanical arm encounters an accidental collision during actual work, its control cannot be quickly and adaptively adjusted according to the arm's real-time working requirements, so the mechanical arm cannot be accurately controlled and its work task cannot be completed smoothly and efficiently.
In view of the foregoing problems, embodiments of the present application provide a robot control method and system based on machine vision.
In a first aspect, the present application provides a manipulator control method based on machine vision, implemented by a manipulator control system based on machine vision, wherein the method includes: obtaining mechanical structure information of a first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information and base structure information; performing, with a first image acquisition device, image acquisition and attribute information entry on an execution object of the first manipulator to obtain a first image information base; obtaining a first feature set based on the first image information base, wherein the first feature set is an execution-association feature set of the first manipulator; constructing a touch-stop detection model from the mechanical structure information and basic data generated from the first feature set, wherein the touch-stop detection model comprises a first collision upper-limit detection rule; obtaining a real-time position data set of the first manipulator from a position detection device; inputting a real-time stress detection set of the real-time position data set into the touch-stop detection model, and judging whether a first touch-stop instruction is triggered according to the detection information output by the model; and, if the first touch-stop instruction is not triggered, adjusting the priority of an initial control task of the first manipulator and controlling the first manipulator according to the adjusted control task.
In another aspect, the present application further provides a machine-vision-based manipulator control system for executing the machine-vision-based manipulator control method according to the first aspect, wherein the system includes: a first obtaining unit, used to obtain mechanical structure information of the first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information and base structure information; a second obtaining unit, used to perform image acquisition and attribute information entry on an execution object of the first manipulator with the first image acquisition device, to obtain a first image information base; a third obtaining unit, configured to obtain a first feature set based on the first image information base, where the first feature set is an execution-association feature set of the first manipulator; a first construction unit, used to construct a touch-stop detection model from the mechanical structure information and basic data generated from the first feature set, wherein the touch-stop detection model comprises a first collision upper-limit detection rule; a fourth obtaining unit, configured to obtain a real-time position data set of the first manipulator from the position detection device; a first judging unit, used to input the real-time stress detection set of the real-time position data set into the touch-stop detection model and judge whether a first touch-stop instruction is triggered according to the detection information output by the model; and a first execution unit, used, if the first touch-stop instruction is not triggered, to adjust the priority of the initial control task of the first manipulator and control the first manipulator according to the adjusted control task.
In a third aspect, embodiments of the present application further provide a robot control system based on machine vision, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
1. Obtaining mechanical structure information of a first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information and base structure information; performing, with a first image acquisition device, image acquisition and attribute information entry on an execution object of the first manipulator to obtain a first image information base; obtaining a first feature set based on the first image information base, wherein the first feature set is an execution-association feature set of the first manipulator; constructing a touch-stop detection model from the mechanical structure information and basic data generated from the first feature set, wherein the touch-stop detection model comprises a first collision upper-limit detection rule; obtaining a real-time position data set of the first manipulator from a position detection device; inputting a real-time stress detection set of the real-time position data set into the touch-stop detection model, and judging whether a first touch-stop instruction is triggered according to the detection information output by the model; and, if the first touch-stop instruction is not triggered, adjusting the priority of an initial control task of the first manipulator and controlling the first manipulator according to the adjusted control task. By constructing a corresponding manipulator touch-stop detection model based on the manipulator's actual working conditions, accidental collisions during execution of a work task are intelligently collected and analysed, the manipulator control task is adjusted in a targeted way, and smooth, efficient completion of the manipulator's work task is ensured.
This achieves the technical effects of intelligently and dynamically adjusting the manipulator in response to accidental collisions during real-time work, and of improving the quality and efficiency of the manipulator's work tasks.
2. Coordinate entropy values are calculated by the coordinate processing module, so that the coordinate entropies of the manipulator position and the manipulator posture can be compared and accurate information-entropy data for both obtained; the priority adjustment rule is then tuned on the basis of these accurate calculation results. This achieves the technical effect of improving the reasonableness, reliability and effectiveness of the priority adjustment rule on the basis of accurate calculated data.
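The coordinate-entropy comparison described above can be sketched as follows. This is a minimal illustration only: it assumes Shannon entropy computed over binned coordinate samples, and the function name, binning scheme, and sample data are hypothetical, not taken from the patent.

```python
import math
from collections import Counter

def coordinate_entropy(samples, precision=1):
    """Shannon entropy (bits) of a stream of coordinate tuples,
    binned by rounding each component to `precision` decimals."""
    bins = Counter(tuple(round(c, precision) for c in s) for s in samples)
    n = sum(bins.values())
    return -sum((k / n) * math.log2(k / n) for k in bins.values())

# Hypothetical example: the position coordinates vary more than the
# posture angles, so the position entropy exceeds the posture entropy.
position_samples = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (2.0, 1.0, 1.5), (3.0, 0.5, 2.0)]
posture_samples = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.1), (0.0, 0.0, 0.1)]

h_position = coordinate_entropy(position_samples)
h_posture = coordinate_entropy(posture_samples)
# Comparing the two entropies is the basis on which, per the text above,
# the priority adjustment rule would be tuned.
```

A larger entropy indicates the aspect (position or posture) whose coordinates are more dispersed, which is the comparison the advantage above describes.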
3. Through the touch-stop detection model, unexpected stress on the manipulator is intelligently identified and judged, and the first collision upper-limit detection rule then automatically decides whether execution of the work task should be suspended. This achieves intelligent detection, processing and analysis of unexpected conditions during actual manipulator work, and improves the degree of intelligence of manipulator control.
4. The manipulator's current collision tolerance is evaluated separately from the manipulator structure and from the task execution object, and the results are then comprehensively analysed to obtain the first collision upper-limit detection rule. This achieves the technical effect of objectively evaluating the manipulator's collision tolerance on the basis of test data and setting a corresponding collision upper limit.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the content of the description, and in order that the above and other objects, features and advantages of the present application may be made more clearly understandable, a detailed description of the present application follows.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only exemplary, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a robot control method based on machine vision according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of the adjustment control task obtained in the robot control method based on machine vision according to the embodiment of the present application;
fig. 3 is a schematic flowchart illustrating obtaining the first touch instruction in a robot control method based on machine vision according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating the first collision upper limit detection rule in the robot control method based on machine vision according to the embodiment of the present application;
fig. 5 is a schematic structural diagram of a robot control system based on machine vision according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals:
a first obtaining unit 11, a second obtaining unit 12, a third obtaining unit 13, a first constructing unit 14, a fourth obtaining unit 15, a first judging unit 16, a first executing unit 17, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The embodiments of the present application provide a mechanical arm control method and system based on machine vision, solving the technical problem in the prior art that, when a mechanical arm encounters an accidental collision during actual work, its control cannot be quickly and adaptively adjusted according to the arm's real-time working requirements, so the mechanical arm cannot be accurately controlled and its work task cannot be completed smoothly and efficiently. By constructing a corresponding manipulator touch-stop detection model based on the manipulator's actual working conditions, accidental collisions during execution of a work task are intelligently collected and analysed, the manipulator control task is adjusted in a targeted way, and smooth, efficient completion of the manipulator's work task is ensured. This achieves the technical effects of intelligently and dynamically adjusting the manipulator in response to accidental collisions during real-time work, and of improving the quality and efficiency of the manipulator's work tasks.
In the following, the technical solutions in the embodiments of the present application are described clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present application, not all of them, and the present application is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application. It should further be noted that, for convenience of description, only the elements relevant to the present application, not all elements, are shown in the drawings.
The application provides a mechanical arm control method based on machine vision, applied to a mechanical arm control system based on machine vision, wherein the method comprises: obtaining mechanical structure information of a first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information and base structure information; performing, with a first image acquisition device, image acquisition and attribute information entry on an execution object of the first manipulator to obtain a first image information base; obtaining a first feature set based on the first image information base, wherein the first feature set is an execution-association feature set of the first manipulator; constructing a touch-stop detection model from the mechanical structure information and basic data generated from the first feature set, wherein the touch-stop detection model comprises a first collision upper-limit detection rule; obtaining a real-time position data set of the first manipulator from a position detection device; inputting a real-time stress detection set of the real-time position data set into the touch-stop detection model, and judging whether a first touch-stop instruction is triggered according to the detection information output by the model; and, if the first touch-stop instruction is not triggered, adjusting the priority of the initial control task of the first manipulator and controlling the first manipulator according to the adjusted control task.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
Referring to fig. 1, an embodiment of the present application provides a robot control method based on machine vision, where the method is applied to a robot control system based on machine vision, and the method specifically includes the following steps:
step S100: acquiring mechanical structure information of a first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information and base structure information;
Specifically, the manipulator control method based on machine vision is applied to the manipulator control system based on machine vision. A corresponding manipulator touch-stop detection model can be constructed from the manipulator's actual working conditions, so that accidental collisions during execution of a work task are intelligently collected and analysed, the manipulator control task is adjusted in a targeted way, and smooth, efficient completion of the manipulator's work task is ensured.
A manipulator is a robotic device designed to imitate the human hand and arm and to replace the human hand in related operations such as grabbing and carrying objects. The first manipulator is any manipulator intelligently controlled by the machine-vision-based manipulator control system. Because the first manipulator is designed to imitate a human hand and arm, its mechanical structure comprises a joint structure imitating the human elbow joint, an execution structure imitating the human hand, and a base structure imitating the human arm. The design sizes, design materials and other related structural information of the first manipulator's joint structure, execution structure and base structure are collected and recorded to form the mechanical structure information. Obtaining the mechanical structure information of the first manipulator achieves the technical effect of fully understanding the structural parameters and related characteristics of the first manipulator.
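As a minimal sketch, the mechanical structure information of step S100 could be represented as a plain record grouping the joint, execution, and base structures. The field names, units, and sample values here are hypothetical illustrations; the patent does not specify a concrete data layout.

```python
from dataclasses import dataclass

@dataclass
class StructureInfo:
    """Design size (mm) and design material of one structural part."""
    design_size_mm: float
    material: str

@dataclass
class MechanicalStructureInfo:
    """Step S100 sketch: structure record for the first manipulator."""
    joint: StructureInfo       # imitates the human elbow joint
    execution: StructureInfo   # imitates the human hand
    base: StructureInfo        # imitates the human arm

# Hypothetical example values for a first manipulator.
first_manipulator = MechanicalStructureInfo(
    joint=StructureInfo(design_size_mm=120.0, material="aluminium alloy"),
    execution=StructureInfo(design_size_mm=85.0, material="steel"),
    base=StructureInfo(design_size_mm=400.0, material="cast iron"),
)
```

Such a record would be the "basic data" side of the touch-stop model construction described later in step S400.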
Step S200: according to a first image acquisition device, carrying out image acquisition and attribute information input on an execution object of the first manipulator to obtain a first image information base;
Specifically, the first image acquisition device comprises a camera or other related photographic equipment. It monitors and photographs the working conditions of the first manipulator in real time, acquires images of the manipulator's task object from multiple angles and distances, and identifies and records the task attributes of the first manipulator in the images, thereby constructing the first image information base. For example, a high-definition camera collects moving images of the manipulator while it carries materials in real time; the materials are then the focus of the collected images, and carrying is the corresponding task attribute of the manipulator. Monitoring and acquiring the related images in real time achieves real-time, machine-vision-based monitoring of the manipulator.
Step S300: obtaining a first feature set based on the first image information base, wherein the first feature set is an execution association feature set of the first manipulator;
Specifically, based on the image data of the first manipulator's execution object acquired in real time by the first image acquisition device, the machine-vision-based manipulator control system intelligently analyses the execution object in the image, determines related feature information such as its shape and size, and, combined with the attributes of the task executed by the first manipulator, determines the attribute features of that task, thereby forming the first feature set. Analysing the information of the manipulator's execution object determines the feature information of the corresponding task and achieves comprehensive, accurate monitoring of the manipulator's working conditions.
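The derivation of the first feature set from the first image information base (steps S200 to S300) can be sketched as below. The entry keys and example values are hypothetical; real feature extraction would involve image analysis, which is abstracted away here.

```python
def build_feature_set(image_info_base):
    """Step S300 sketch: derive the execution-association feature set
    from entries of the first image information base. Each entry is
    assumed to hold the detected object's shape and size together with
    the recorded task attribute (e.g. 'carry'), per steps S200-S300."""
    feature_set = []
    for entry in image_info_base:
        feature_set.append({
            "shape": entry["object_shape"],
            "size_mm": entry["object_size_mm"],
            "task_attribute": entry["task_attribute"],
        })
    return feature_set

# Hypothetical image information base entries.
image_info_base = [
    {"object_shape": "box", "object_size_mm": 120, "task_attribute": "carry"},
    {"object_shape": "rod", "object_size_mm": 300, "task_attribute": "lift"},
]
first_feature_set = build_feature_set(image_info_base)
```

Each feature-set entry associates an execution object with the task the manipulator performs on it, matching the "execution-association feature set" wording above.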
Step S400: constructing a touch detection model according to the mechanical structure information and basic data generated by the first feature set, wherein the touch detection model comprises a first collision upper limit detection rule;
Specifically, the touch-stop detection model is constructed by combining the mechanical structure information of the first manipulator with the basic data generated from the first feature set, and is used to respond intelligently to situations such as accidental collisions suffered by the first manipulator while it executes a work task. The touch-stop detection model comprises a first collision upper-limit detection rule, which defines the maximum collision event, i.e. the maximum collision strength, that the first manipulator can tolerate while executing its current work task. Once a collision suffered by the first manipulator exceeds the first collision upper limit, the system automatically detects the accidental-collision information and triggers suspension of the first manipulator's work task. This achieves the technical effect of intelligently identifying accidental collisions and, when a collision reaches a certain degree, starting the suspension task to protect personal and property safety.
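A minimal sketch of the first collision upper-limit detection rule follows. The combination rule (taking the smaller of a structure-derived bound and an object-derived bound) and all numeric values are assumptions for illustration; the patent only states that both the structure and the execution object contribute to the limit.

```python
def first_collision_upper_limit(structure_strength_n, object_fragility):
    """Hypothetical composition of the first collision upper-limit
    detection rule: the tolerable collision force is bounded both by
    the manipulator structure (structure_strength_n, in newtons) and
    by the task execution object (object_fragility in (0, 1])."""
    return min(structure_strength_n, structure_strength_n * object_fragility)

def touch_stop_detect(measured_force_n, upper_limit_n):
    """Returns True when an unexpected collision force exceeds the
    first collision upper limit, i.e. the touch-stop instruction fires."""
    return measured_force_n > upper_limit_n

# Hypothetical test data: a 50 N structural limit tightened by a
# fragile execution object.
limit = first_collision_upper_limit(structure_strength_n=50.0, object_fragility=0.6)
```

With these assumed inputs the effective limit is the object-derived bound, so a 35 N collision would trigger the touch-stop instruction while a 10 N one would not.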
Step S500: obtaining a real-time position data set of the first manipulator according to the position detection device;
step S600: inputting a real-time stress detection set of the real-time position data set into the touch detection model, and judging whether a first touch instruction is triggered according to detection information output by the touch detection model;
Specifically, the position detection device is communicatively connected to the machine-vision-based manipulator control system and is arranged on the execution structure of the first manipulator. It monitors the position changes of the first manipulator in real time to generate the real-time position data set, which includes detailed position data such as the height and azimuth of the first manipulator at different times. The forces applied to the first manipulator at the different positions in the real-time position data set are then detected to form the real-time stress detection set, which is input into the touch-stop detection model; the model analyses it intelligently and outputs the corresponding detection information. Finally, the machine-vision-based manipulator control system judges from this detection information whether the current stress on the first manipulator triggers the first touch-stop instruction, i.e. whether the first manipulator should suspend its task.
Based on the real-time positions of the first manipulator, the stress on the manipulator at each position is analysed; from the analysis results, the system intelligently determines whether the first manipulator is executing its task normally and whether the task needs to be suspended. This achieves the technical effect of targeted, intelligent control of the manipulator's work task based on its actual condition.
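Steps S500 and S600 together can be sketched as the following loop, pairing each real-time position with its force reading and checking it against the model's collision upper limit. The pairing-by-index representation of the "real-time stress detection set" and the sample numbers are assumptions for illustration.

```python
def check_touch_stop(real_time_positions, force_readings, model_upper_limit_n):
    """Steps S500-S600 sketch: pair each real-time position with its
    force reading (forming the real-time stress detection set) and
    report whether any reading triggers the first touch-stop
    instruction, together with the position where it happened."""
    stress_detection_set = list(zip(real_time_positions, force_readings))
    for position, force_n in stress_detection_set:
        if force_n > model_upper_limit_n:
            return True, position  # touch-stop triggered at this position
    return False, None

# Hypothetical real-time data: three positions, all forces well below
# an assumed 30 N upper limit, so no touch-stop is triggered.
positions = [(0, 0, 100), (10, 5, 100), (20, 10, 95)]
forces = [2.0, 3.5, 4.0]
triggered, where = check_touch_stop(positions, forces, model_upper_limit_n=30.0)
```

When no reading exceeds the limit, the method proceeds to the priority adjustment of step S700; otherwise the work task is suspended at the reported position.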
Step S700: and if the first touch and stop instruction is not triggered, performing priority adjustment on an initial control task of the first manipulator, and controlling the first manipulator according to the adjustment control task.
Specifically, when the machine-vision-based manipulator control system determines that the stress on the first manipulator is normal, i.e. the first touch-stop instruction is not triggered, the system automatically adjusts the priority of the first manipulator's initial control task and thereby realises intelligent control of the first manipulator. By constructing a corresponding manipulator touch-stop detection model based on the manipulator's actual working conditions, accidental collisions during execution of a work task are intelligently collected and analysed, the manipulator control task is adjusted in a targeted way, and smooth, efficient completion of the manipulator's work task is ensured. This achieves the technical effects of intelligently and dynamically adjusting the manipulator in response to accidental collisions during real-time work, and of improving the quality and efficiency of the manipulator's work tasks.
Further, as shown in fig. 2, step S700 in the embodiment of the present application further includes:
step S710: if the first touch-stop instruction is not triggered, a first disassembly instruction is obtained;
step S720: disassembling the initial control task according to the first disassembling instruction to obtain N subtasks, wherein the initial control task is a task from a first collision position of the first manipulator to a preset end position;
step S730: obtaining a first category subtask and a second category subtask by performing category division on the N subtasks, wherein the first category subtask is a position task, and the second category subtask is an attitude task;
step S740: and obtaining the adjustment control task by adjusting the priority of the first category subtask and the second category subtask.
Specifically, when the machine-vision-based manipulator control system determines that the stress condition of the first manipulator is normal, that is, the first touch-stop instruction is not triggered, the system automatically issues the first disassembly instruction for disassembling the initial control task of the first manipulator. The task disassembled according to the first disassembly instruction becomes a plurality of subtasks, where the initial control task is the work task from the position of the first manipulator at the moment of the accidental collision to the preset end position of the initial control task. The plurality of subtasks are the N subtasks. Category identification is performed on the N subtasks, dividing them into first-category subtasks and second-category subtasks. The first-category subtasks are position tasks, for example, the manipulator transporting a material from position A to position B; the second-category subtasks are attitude tasks, for example, the manipulator lifting material A so that related maintenance can be performed. Priority adjustment is then performed on the first-category and second-category subtasks to obtain the corresponding adjustment control task. This achieves the technical effect of adjusting the post-collision task in a targeted manner based on the task condition of the first manipulator.
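The disassembly and classification in steps S710–S730 can be illustrated with a minimal Python sketch. The waypoint dictionaries and the `SubtaskKind` labels are hypothetical stand-ins for the patent's task data, assumed here only for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class SubtaskKind(Enum):
    POSITION = "position"   # first category: e.g. move material from A to B
    POSTURE = "posture"     # second category: e.g. hold/lift material at a pose

@dataclass
class Subtask:
    name: str
    kind: SubtaskKind

def disassemble_task(remaining_waypoints):
    """Split the remaining control task (first collision position -> preset
    end position) into N subtasks and divide them into the two categories."""
    subtasks = [Subtask(w["name"], SubtaskKind(w["kind"]))
                for w in remaining_waypoints]
    first_category = [s for s in subtasks if s.kind is SubtaskKind.POSITION]
    second_category = [s for s in subtasks if s.kind is SubtaskKind.POSTURE]
    return first_category, second_category
```

Step S740 would then reorder (or interleave) the two category lists according to the preset priority adjustment rule.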
Further, step S740 in the embodiment of the present application further includes:
step S741: acquiring task attribute information of the execution object according to the initial control task;
step S742: respectively taking the position information as an x axis and the attitude information as a y axis to construct a two-dimensional coordinate axis;
step S743: inputting the task attribute information into the two-dimensional coordinate axis, and obtaining a plurality of relative coordinate sets according to the two-dimensional coordinate axis;
step S744: and inputting the relative coordinate sets into a coordinate processing module for calculation, and generating a preset adjustment rule according to a calculation result, wherein the preset adjustment rule is a priority adjustment rule.
Specifically, task attribute information of the object handled by the first manipulator is obtained from the initial control task information of the first manipulator. A position-attitude two-dimensional coordinate system is then constructed, with the position information of the execution object as the x-axis and the attitude information as the y-axis. The task attribute information is input into this coordinate system to obtain a plurality of relative coordinate sets, allowing the task information of the first manipulator to be observed visually. Finally, the relative coordinate sets are input in turn into a coordinate processing module, which generates a preset adjustment rule through intelligent calculation; the preset adjustment rule is a priority adjustment rule. That is to say, after the first manipulator is accidentally struck, the position-attitude coordinates of the first manipulator before the collision are first plotted, the coordinate processing module intelligently analyzes them, and the priority of the first manipulator's position and attitude tasks after the collision is determined so that a targeted adjustment can be made. This achieves the technical effect of intelligently analyzing and determining the post-collision task priority of the first manipulator based on its actual task execution.
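Mapping the task attribute information onto the position (x) / attitude (y) plane of steps S742–S743 might look like the following sketch; the scalar `position` and `attitude` keys (e.g. displacement and rotation magnitudes) are assumed, since the patent does not specify how attributes are quantified.

```python
def build_relative_coordinates(task_attributes):
    """Project each subtask's attributes onto the two-dimensional
    position-attitude coordinate axis and return the per-axis series
    for downstream processing by the coordinate processing module."""
    coords = [(a["position"], a["attitude"]) for a in task_attributes]
    xs = [c[0] for c in coords]   # x-axis: position information
    ys = [c[1] for c in coords]   # y-axis: attitude information
    return coords, xs, ys
```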
Further, step S744 in this embodiment further includes:
step S7441: the coordinate processing module comprises a coordinate conversion module, a coordinate comparison module and a coordinate calculation module;
step S7442: inputting each coordinate in the relative coordinate sets into the coordinate conversion module for information entropy calculation to obtain a plurality of information entropy coordinates;
step S7443: inputting the information entropy coordinates into the coordinate comparison module to obtain a plurality of comparison results of the x-axis information entropy and the y-axis information entropy;
step S7444: and inputting the information entropy coordinates into the coordinate calculation module to obtain a plurality of difference results of the x-axis information entropy and the y-axis information entropy.
Specifically, the coordinate processing module comprises a coordinate conversion module, a coordinate comparison module, and a coordinate calculation module. The coordinate conversion module performs a quantitative conversion of the position-attitude coordinates recorded while the manipulator executes a work task, thereby obtaining how the information entropy changes during task execution; here, the information entropy refers to the amount of information carried by the manipulator's position and attitude at each coordinate. The coordinate comparison module compares the position information entropy and the attitude information entropy produced by the coordinate conversion module, that is, it compares the x-axis and y-axis information entropies of the two-dimensional coordinate system, and the coordinate calculation module then calculates the difference between the x-axis and y-axis information entropies.
By calculating the coordinate entropy values in the coordinate processing module, the entropies of the manipulator's position and attitude coordinates can be compared, and accurate information entropy data for the manipulator's position and attitude are obtained. The adjustment rule is then formulated from these accurate calculation results, achieving the technical effect of improving the reasonableness, reliability, and effectiveness of the priority adjustment rule.
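One plausible reading of the entropy calculation in steps S7442–S7444 is Shannon entropy computed over binned position (x) and attitude (y) readings. The binning scheme below is an assumption made for illustration; the patent does not specify how coordinates are converted into information quantities.

```python
import math
from collections import Counter

def information_entropy(values, bins=8):
    """Shannon entropy (bits) of a sequence of scalar readings, after
    binning them into equal-width intervals -- an assumed conversion."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0   # degenerate constant series -> one bin
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compare_axes(xs, ys):
    """Comparison and difference of x-axis vs y-axis information entropy,
    as performed by the coordinate comparison and calculation modules."""
    hx, hy = information_entropy(xs), information_entropy(ys)
    return hx, hy, hx > hy, hx - hy
```

A series that never changes carries zero entropy, so an axis along which the task varies a lot (e.g. many distinct positions) dominates the comparison.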
Further, step S744 in this embodiment further includes:
step S7445: obtaining a first relative coefficient of an x axis and a second relative coefficient of a y axis according to the comparison results and the difference results;
step S7446: when the first relative coefficient is larger than the second relative coefficient, generating a first constraint condition, wherein the first constraint condition is that the position priority is larger than the posture priority;
step S7447: when the first relative coefficient is smaller than the second relative coefficient, generating a second constraint condition, wherein the second constraint condition is that the posture priority is larger than the position priority;
step S7448: and generating the preset adjusting rule according to the first constraint condition and the second constraint condition.
Specifically, based on the comparison and difference results of the position and attitude information entropies at each coordinate of the manipulator, obtained by the coordinate processing module, a first relative coefficient for the x-axis (position) and a second relative coefficient for the y-axis (attitude) are obtained.
Further, the first relative coefficient and the second relative coefficient are compared. When the first relative coefficient is greater than the second relative coefficient, the system automatically generates a first constraint condition, namely that the position priority is greater than the attitude priority. That is, when the first relative coefficient of the x-axis (position) is greater than the second relative coefficient of the y-axis (attitude), the work task currently executed by the manipulator mainly uses the manipulator to carry material, so changing the position of the material matters more than the attitude with which the manipulator grips it; the position priority is therefore greater than the attitude priority. Conversely, when the first relative coefficient is smaller than the second relative coefficient, the system automatically generates a second constraint condition, namely that the position priority is smaller than the attitude priority. That is, when the first relative coefficient of the x-axis is smaller than the second relative coefficient of the y-axis, the work task mainly uses the manipulator to carry material and hold it fixed at a certain position so that a worker can perform a related task; the attitude of the manipulator's execution structure is then significantly more important than its position, and the position priority is smaller than the attitude priority. Finally, the corresponding preset adjustment rule is generated from the first constraint condition and the second constraint condition.
By analyzing the calculation and processing results of the coordinate processing module, corresponding priority rules for the different cases are formulated, and the preset adjustment rule is thereby established. This achieves the technical effect of setting the manipulator's priority adjustments based on accurately calculated data.
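Treating the relative coefficients as already computed, the constraint logic of steps S7446–S7448 reduces to a simple comparison. The tie-breaking branch below is an assumption, since the patent only defines behavior for strictly greater and strictly smaller coefficients.

```python
def priority_rule(first_coefficient, second_coefficient):
    """Generate the preset adjustment rule as an ordered list of categories.

    first_coefficient: relative coefficient of the x-axis (position)
    second_coefficient: relative coefficient of the y-axis (attitude)
    """
    if first_coefficient > second_coefficient:
        return ["position", "posture"]   # first constraint: position priority higher
    if first_coefficient < second_coefficient:
        return ["posture", "position"]   # second constraint: attitude priority higher
    return ["position", "posture"]       # equal coefficients: keep original order (assumed)
```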
Further, as shown in fig. 3, step S600 in this embodiment of the present application further includes:
step S610: acquiring a real-time stress detection set of the first manipulator, wherein the real-time position data set corresponds to the real-time stress detection set one by one;
step S620: generating a plurality of touch detection sets according to the real-time position data set and the real-time stress detection set;
step S630: inputting the multiple groups of touch detection sets as input information into the touch detection model, and identifying first abnormal stress information;
step S640: inputting the first abnormal stress information into the first collision upper limit detection rule, and judging whether the first abnormal stress information is larger than a first upper limit threshold value;
step S650: and if the first abnormal stress information is greater than the first upper limit threshold, obtaining the first touch-stop instruction.
Specifically, based on real-time monitoring of the forces on the first manipulator at different positions, stress detection data corresponding to those positions are obtained, forming the real-time stress detection set of the first manipulator; the real-time position data set corresponds to the real-time stress detection set one to one. Further, multiple touch-stop detection sets are generated from the real-time position data set and the real-time stress detection set, each group containing real-time position data and the stress detection data for that position.
The multiple touch-stop detection sets are input into the touch-stop detection model, which automatically analyzes the stress data at each position and identifies the abnormal stress data of the first manipulator, that is, the first abnormal stress information. The first abnormal stress information indicates that the first manipulator has accidentally been subjected to an external force, such as a collision, at the corresponding position. Further, the first abnormal stress information is input into the first collision upper limit detection rule, which automatically judges whether the abnormal stress currently borne by the first manipulator exceeds a first upper limit threshold. Once it does, the system automatically issues the corresponding task suspension instruction, namely the first touch-stop instruction. The first upper limit threshold is the maximum accidental collision strength of the first manipulator, set in advance by the system based on the actual task requirements and task execution conditions of the first manipulator.
Through the touch-stop detection model, accidental stress on the manipulator is intelligently identified and judged, and the first collision upper limit detection rule then automatically determines whether the manipulator's work task should be suspended. This achieves the technical effects of intelligently detecting, processing, and analyzing unexpected situations during the manipulator's actual work and improving the degree of intelligence of manipulator control.
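The decision flow of steps S610–S650 can be sketched as below. The `baseline` level used to flag abnormal readings is an assumed simplification of the touch-stop detection model, which the patent describes only as an intelligent analyzer.

```python
def detect_touch_stop(readings, baseline, upper_limit):
    """Identify abnormal force readings and decide whether to issue the
    first touch-stop (task suspension) instruction.

    readings: list of (position_id, force) pairs -- the touch-stop
        detection sets pairing each position with its measured force
    baseline: expected force level; readings above it count as the
        first abnormal stress information (assumed criterion)
    upper_limit: the first upper limit threshold of the collision rule
    """
    abnormal = [(pos, f) for pos, f in readings if f > baseline]
    triggered = any(f > upper_limit for _, f in abnormal)
    return abnormal, triggered
```

When `triggered` is true the task is suspended; otherwise control falls through to the priority adjustment of step S700.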
Further, as shown in fig. 4, step S400 in the embodiment of the present application further includes:
step S410: performing collision tolerance evaluation on the mechanical structure information serving as basic structure information to obtain first evaluation data, wherein the first evaluation data is the collision tolerance of the first manipulator;
step S420: carrying out bearing quality analysis by taking the first characteristic set as stress basic information to obtain second evaluation data;
step S430: generating basic stress data and upper limit stress data according to the first evaluation data and the second evaluation data;
step S440: and adjusting the basic stress data and the upper limit stress data by constructing a first preset threshold, and constructing the first collision upper limit detection rule according to the adjusted data.
Specifically, a collision tolerance evaluation test is performed using the mechanical structure information of the first manipulator as the basic structure information, yielding the first evaluation data, which is the collision-tolerance evaluation result of the first manipulator. Meanwhile, a bearing quality analysis is performed using the first feature set of the execution object as the stress basic information, yielding the second evaluation data, which is the collision-tolerance evaluation result of the task object currently handled by the first manipulator. Finally, the first and second evaluation data are comprehensively analyzed to generate the basic stress data and upper limit stress data of the first manipulator: the basic stress data are the minimum stress evaluation data of the first manipulator's structure and the handled material, and the upper limit stress data are the corresponding maximum stress evaluation data. The basic and upper limit stress data are adjusted by constructing a first preset threshold, and the first collision upper limit detection rule is constructed from the adjusted data.
The manipulator's collision tolerance is evaluated separately from the manipulator's structure and from the object it handles, and the two evaluations are then comprehensively analyzed to obtain the first collision upper limit detection rule. This achieves the technical effects of objectively evaluating the manipulator's collision tolerance based on test data and setting a corresponding collision upper limit.
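Combining the two evaluations into a detection rule, as in steps S430–S440, might look like the following. The derating factors (a 0.1 floor and a 0.9 safety margin) are placeholders for the first preset threshold, which the patent leaves unspecified.

```python
def build_collision_rule(arm_tolerance, payload_tolerance, margin=0.9):
    """Combine the arm's collision tolerance (first evaluation data) and the
    payload's bearing quality (second evaluation data) into basic and upper
    limit stress data, then derate the upper bound by a preset margin and
    return the collision upper limit detection rule as a predicate."""
    weakest = min(arm_tolerance, payload_tolerance)  # the limiting component
    base_stress = weakest * 0.1      # assumed nominal operating floor
    upper_stress = weakest * margin  # adjusted first upper limit threshold

    def rule(force):
        return force > upper_stress  # True -> trigger the touch-stop instruction
    return base_stress, upper_stress, rule
```

Taking the minimum of the two tolerances reflects that the rule must protect whichever of the arm or the carried material fails first.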
In summary, the manipulator control method based on machine vision provided by the embodiment of the present application has the following technical effects:
1. Obtaining mechanical structure information of a first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information, and base structure information; performing image acquisition and attribute information entry on an execution object of the first manipulator according to a first image acquisition device to obtain a first image information base; obtaining a first feature set based on the first image information base, wherein the first feature set is an execution association feature set of the first manipulator; constructing a touch-stop detection model from the mechanical structure information and the basic data generated by the first feature set, wherein the touch-stop detection model comprises a first collision upper limit detection rule; obtaining a real-time position data set of the first manipulator from the position detection device; inputting the real-time stress detection set of the real-time position data set into the touch-stop detection model and judging, from the detection information output by the model, whether a first touch-stop instruction is triggered; and, if the first touch-stop instruction is not triggered, adjusting the priority of the initial control task of the first manipulator and controlling the first manipulator according to the adjustment control task. By constructing a corresponding manipulator touch-stop detection model based on the actual working condition of the manipulator, accidental collisions occurring while the manipulator executes a work task are intelligently collected and analyzed, and the manipulator control task is adjusted in a targeted manner, ensuring that the work task is completed smoothly and efficiently.
This achieves the technical effects of intelligently and dynamically adjusting the manipulator in response to accidental collisions during real-time work and improving the quality and efficiency of the manipulator's work task.
2. The coordinate entropy values are calculated based on the coordinate processing module, so that the manipulator position and the manipulator posture coordinate entropy are compared, and accurate information entropy data of the manipulator position and the manipulator posture are obtained through calculation, so that the rule is adjusted based on an accurate calculation result, and the technical effects of improving the reasonability, reliability and effectiveness of the priority adjustment rule based on accurate calculation data are achieved.
3. Through the touch-stop detection model, accidental stress on the manipulator is intelligently identified and judged, and the first collision upper limit detection rule then automatically determines whether the manipulator's work task should be suspended, achieving the technical effects of intelligently detecting, processing, and analyzing unexpected situations during the manipulator's actual work and improving the degree of intelligence of manipulator control.
4. The manipulator's collision tolerance is evaluated separately from the manipulator's structure and from the object it handles, and the two evaluations are then comprehensively analyzed to obtain the first collision upper limit detection rule, achieving the technical effects of objectively evaluating the manipulator's collision tolerance based on test data and setting a corresponding collision upper limit.
Example two
Based on the same inventive concept as the robot control method based on machine vision in the foregoing embodiment, the present invention further provides a robot control system based on machine vision, referring to fig. 5, where the system includes:
a first obtaining unit 11, configured to obtain mechanical structure information of a first manipulator, where the mechanical structure information includes joint structure information, execution structure information, and base structure information;
a second obtaining unit 12, where the second obtaining unit 12 is configured to perform image acquisition and attribute information entry on an execution object of the first manipulator according to the first image acquisition device, so as to obtain a first image information base;
a third obtaining unit 13, configured to obtain a first feature set based on the first image information base, where the first feature set is an execution association feature set of the first manipulator;
a first construction unit 14, wherein the first construction unit 14 is configured to construct a touch detection model according to the mechanical structure information and the basic data generated by the first feature set, and the touch detection model includes a first collision upper limit detection rule;
a fourth obtaining unit 15, configured to obtain a real-time position data set of the first manipulator according to a position detection device;
a first judging unit 16, where the first judging unit 16 is configured to input the real-time stress detection set of the real-time position data set into the touch detection model, and judge whether to trigger a first touch instruction according to detection information output by the touch detection model;
and the first execution unit 17 is configured to, if the first touch and stop instruction is not triggered, perform priority adjustment on an initial control task of the first manipulator, and control the first manipulator according to an adjustment control task.
Further, the system further comprises:
a fifth obtaining unit, configured to obtain a first disassembly instruction if the first touch instruction is not triggered;
a sixth obtaining unit, configured to disassemble the initial control task according to the first disassembly instruction to obtain N subtasks, where the initial control task is a task in a range from a first collision position of the first manipulator to a preset end position;
a seventh obtaining unit, configured to obtain a first category subtask and a second category subtask by performing category division on the N subtasks, where the first category subtask is a position task and the second category subtask is an attitude task;
an eighth obtaining unit, configured to obtain the adjustment control task by performing priority adjustment on the first category subtask and the second category subtask.
Further, the system further comprises:
a ninth obtaining unit, configured to obtain task attribute information of the execution object according to the initial control task;
the second construction unit is used for constructing a two-dimensional coordinate axis by taking the position information as an x axis and the attitude information as a y axis respectively;
a tenth obtaining unit, configured to input the task attribute information into the two-dimensional coordinate axis, and obtain a plurality of relative coordinate sets according to the two-dimensional coordinate axis;
and the first generating unit is used for inputting the relative coordinate sets into the coordinate processing module for calculation and generating a preset adjusting rule according to a calculation result, wherein the preset adjusting rule is a priority adjusting rule.
Further, the system further comprises:
the first definition unit is used for the coordinate processing module to comprise a coordinate conversion module, a coordinate comparison module and a coordinate calculation module;
an eleventh obtaining unit, configured to input each coordinate in the multiple sets of relative coordinates into the coordinate transformation module for information entropy calculation, so as to obtain multiple information entropy coordinates;
a twelfth obtaining unit, configured to input the multiple information entropy coordinates into the coordinate comparison module, and obtain multiple comparison results of the x-axis information entropy and the y-axis information entropy;
a thirteenth obtaining unit, configured to input the multiple information entropy coordinates into the coordinate calculation module, and obtain multiple difference results of the x-axis information entropy and the y-axis information entropy.
Further, the system further comprises:
a fourteenth obtaining unit, configured to obtain a first relative coefficient of an x-axis and a second relative coefficient of a y-axis according to the plurality of comparison results and the plurality of difference results;
a second generating unit, configured to generate a first constraint condition when the first relative coefficient is greater than the second relative coefficient, where the first constraint condition is that a position priority is greater than an attitude priority;
a third generating unit, configured to generate a second constraint condition when the first relative coefficient is smaller than the second relative coefficient, where the second constraint condition is that an attitude priority is greater than a position priority;
a fourth generating unit, configured to generate the preset adjustment rule according to the first constraint condition and the second constraint condition.
Further, the system further comprises:
a fifteenth obtaining unit, configured to obtain a real-time stress detection set of the first manipulator, where the real-time position data set corresponds to the real-time stress detection set one to one;
a fifth generating unit, configured to generate multiple touch detection sets according to the real-time position data set and the real-time stress detection set;
the first identification unit is used for inputting the multiple groups of touch detection sets into the touch detection model as input information and identifying first abnormal stress information;
a second judgment unit, configured to input the first abnormal stress information into the first collision upper limit detection rule, and judge whether the first abnormal stress information is greater than a first upper limit threshold;
a sixteenth obtaining unit, configured to obtain the first touch-stop instruction if the first abnormal stress information is greater than the first upper limit threshold.
Further, the system further comprises:
a seventeenth obtaining unit configured to perform collision tolerance evaluation using the mechanical structure information as infrastructure information to obtain first evaluation data, wherein the first evaluation data is collision tolerance of the first manipulator;
an eighteenth obtaining unit, configured to perform bearing quality analysis using the first feature set as stress basic information to obtain second evaluation data;
a sixth generating unit, configured to generate basic stress data and upper limit stress data according to the first evaluation data and the second evaluation data;
and the third construction unit is used for adjusting the basic stress data and the upper limit stress data by constructing a first preset threshold value, and constructing the first collision upper limit detection rule according to the adjusted data.
In the present description, the embodiments are described in a progressive manner, with each embodiment focusing on its differences from the others. The machine-vision-based manipulator control method of the first embodiment in fig. 1 and its specific examples also apply to the machine-vision-based manipulator control system of this embodiment; through the foregoing detailed description of the method, those skilled in the art will clearly understand this system, so for brevity of the description it is not described in detail again here. Since the device disclosed by this embodiment corresponds to the method disclosed above, its description is relatively simple, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Exemplary electronic device
The electronic apparatus of the embodiment of the present application is described below with reference to fig. 6.
Fig. 6 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the inventive concept of the machine-vision-based manipulator control method in the foregoing embodiments, the present invention further provides a machine-vision-based manipulator control system on which a computer program is stored; when executed by a processor, the program implements the steps of any one of the above machine-vision-based manipulator control methods.
In fig. 6, a bus architecture is represented by bus 300. Bus 300 may include any number of interconnected buses and bridges, linking together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. The bus 300 may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and are therefore not described further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other apparatus over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
The application provides a manipulator control method based on machine vision, applied to a manipulator control system based on machine vision, wherein the method comprises the following steps: obtaining mechanical structure information of a first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information, and base structure information; performing image acquisition and attribute information entry on an execution object of the first manipulator according to a first image acquisition device to obtain a first image information base; obtaining a first feature set based on the first image information base, wherein the first feature set is an execution association feature set of the first manipulator; constructing a touch-stop detection model from the mechanical structure information and the basic data generated by the first feature set, wherein the touch-stop detection model comprises a first collision upper limit detection rule; obtaining a real-time position data set of the first manipulator from the position detection device; inputting the real-time stress detection set of the real-time position data set into the touch-stop detection model and judging, from the detection information output by the model, whether a first touch-stop instruction is triggered; and, if the first touch-stop instruction is not triggered, adjusting the priority of the initial control task of the first manipulator and controlling the first manipulator according to the adjustment control task. This addresses the technical problem in the prior art that, after a manipulator suffers an accidental collision during actual work, manual intervention cannot quickly make adaptive control adjustments to the manipulator's real-time working requirements, so the manipulator cannot be accurately controlled and its work task cannot be completed smoothly and efficiently.
Based on the manipulator's actual working conditions, a corresponding manipulator touch-stop detection model is constructed, so that unexpected collisions occurring while the manipulator executes a work task are intelligently collected and analyzed, the manipulator control task is adjusted in a targeted manner, and the smooth and efficient completion of the manipulator's work task is ensured. This achieves the technical effects of dynamically and intelligently adjusting the manipulator in response to unexpected collisions during real-time work, and of improving the quality and efficiency of the manipulator's work task.
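The control loop summarized above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the names (`Subtask`, `control_step`, `force_limit`) and the rule that subtasks are reordered by a numeric priority are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    kind: str       # "position" or "attitude" (the two subtask categories)
    priority: int   # higher number = executed earlier after adjustment

def control_step(readings, force_limit, tasks):
    """One control cycle: check measured forces against the collision upper
    limit; if the touch-stop instruction is not triggered, reprioritize the
    remaining subtasks and continue the work task."""
    touch_stop = any(force > force_limit for _, force in readings)
    if touch_stop:
        return "halt", tasks
    # No touch-stop: reorder remaining subtasks by priority and continue.
    adjusted = sorted(tasks, key=lambda t: t.priority, reverse=True)
    return "continue", adjusted

# (position label, measured force in N) pairs from the position detection device
readings = [("joint_2", 3.1), ("end_effector", 5.4)]
tasks = [Subtask("reach_end_pose", "attitude", 1),
         Subtask("move_to_xyz", "position", 2)]
state, plan = control_step(readings, force_limit=10.0, tasks=tasks)
```

With forces below the limit, `state` is `"continue"` and the position subtask (higher priority here) is moved to the front of the plan.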
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, an apparatus, or a computer program product. Accordingly, the present application may take the form of an entirely software embodiment, an entirely hardware embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. Such computer-usable storage media include, but are not limited to: various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk memory, a Compact Disc Read-Only Memory (CD-ROM), and an optical memory.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system which implements the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. A machine-vision-based manipulator control method, the method being applied to a machine-vision-based manipulator control system, the system being communicatively connected to a position detection device, the method comprising:
step S100: acquiring mechanical structure information of a first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information and base structure information;
step S200: according to a first image acquisition device, carrying out image acquisition and attribute information input on an execution object of the first manipulator to obtain a first image information base;
step S300: obtaining a first feature set based on the first image information base, wherein the first feature set is an execution association feature set of the first manipulator;
step S400: constructing a touch detection model according to the mechanical structure information and basic data generated from the first feature set, wherein the touch detection model comprises a first collision upper limit detection rule, and the first collision upper limit detection rule refers to a maximum collision event, namely a maximum collision force, that can be suffered by the first manipulator in the process of executing the current work task;
step S500: obtaining a real-time position data set of the first manipulator according to the position detection device;
step S600: based on the real-time position data set, detecting the forces applied to different positions of the first manipulator to form a real-time stress detection set, inputting the real-time stress detection set into the touch detection model, and judging whether a first touch-stop instruction is triggered according to detection information output by the touch detection model;
step S700: if the first touch-stop instruction is not triggered, performing priority adjustment on an initial control task of the first manipulator, and controlling the first manipulator according to the adjustment control task;
wherein, step S700 further includes:
step S710: if the first touch-stop instruction is not triggered, a first disassembly instruction is obtained;
step S720: disassembling the initial control task according to the first disassembling instruction to obtain N subtasks, wherein the initial control task is a task from a first collision position of the first manipulator to a preset end position;
step S730: obtaining a first category subtask and a second category subtask by performing category division on the N subtasks, wherein the first category subtask is a position task, and the second category subtask is an attitude task;
step S740: obtaining the adjustment control task by performing priority adjustment on the first category subtasks and the second category subtasks;
step S740 further includes:
step S741: acquiring task attribute information of the execution object according to the initial control task;
step S742: respectively taking the position information as an x axis and the attitude information as a y axis to construct a two-dimensional coordinate axis;
step S743: inputting the task attribute information into the two-dimensional coordinate axis, and obtaining a plurality of relative coordinate sets according to the two-dimensional coordinate axis;
step S744: inputting the relative coordinate sets into a coordinate processing module for calculation, and generating a preset adjustment rule according to a calculation result, wherein the preset adjustment rule is a priority adjustment rule;
step S744 further includes:
step S7441: the coordinate processing module comprises a coordinate conversion module, a coordinate comparison module and a coordinate calculation module;
step S7442: inputting each coordinate in the relative coordinate sets into the coordinate conversion module for information entropy calculation to obtain a plurality of information entropy coordinates;
step S7443: inputting the information entropy coordinates into the coordinate comparison module to obtain a plurality of comparison results of the x-axis information entropy and the y-axis information entropy;
step S7444: inputting the information entropy coordinates into the coordinate calculation module to obtain a plurality of difference results of the x-axis information entropy and the y-axis information entropy;
step S7445: obtaining a first relative coefficient of an x axis and a second relative coefficient of a y axis according to the comparison results and the difference results;
step S7446: when the first relative coefficient is larger than the second relative coefficient, generating a first constraint condition, wherein the first constraint condition is that the position priority is larger than the posture priority;
step S7447: when the first relative coefficient is smaller than the second relative coefficient, generating a second constraint condition, wherein the second constraint condition is that the posture priority is larger than the position priority;
step S7448: and generating the preset adjusting rule according to the first constraint condition and the second constraint condition.
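Steps S7441-S7448 can be illustrated with the following sketch. The Shannon-entropy formula and the choice of the per-axis mean entropy as the "relative coefficient" are assumptions introduced here; the claim does not fix these definitions.

```python
import math

def entropy(values):
    """Shannon entropy (bits) of the distribution obtained by normalizing
    the non-negative coordinate values along one axis."""
    total = sum(values)
    probs = [v / total for v in values if v > 0]
    return -sum(p * math.log2(p) for p in probs)

def priority_rule(coord_sets):
    """coord_sets: list of (x_values, y_values) relative-coordinate groups,
    where x carries position information and y carries attitude information."""
    hx = [entropy(xs) for xs, _ in coord_sets]   # x-axis information entropies
    hy = [entropy(ys) for _, ys in coord_sets]   # y-axis information entropies
    # Comparison (S7443) and difference (S7444) results are folded here into
    # per-axis relative coefficients (S7445), taken as simple means.
    kx = sum(hx) / len(hx)
    ky = sum(hy) / len(hy)
    if kx > ky:
        return "position_first"    # first constraint condition (S7446)
    if kx < ky:
        return "attitude_first"    # second constraint condition (S7447)
    return "equal"
```

When the x-axis (position) entropies dominate, the generated rule ranks position subtasks above attitude subtasks, and vice versa.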
2. The method of claim 1, wherein the step S600 further comprises:
step S610: acquiring a real-time stress detection set of the first manipulator, wherein the real-time position data set corresponds to the real-time stress detection set one-to-one;
step S620: generating a plurality of touch detection sets according to the real-time position data set and the real-time stress detection set;
step S630: inputting the plurality of touch detection sets into the touch detection model as input information, and identifying first abnormal stress information;
step S640: inputting the first abnormal stress information into the first collision upper limit detection rule, and judging whether the first abnormal stress information is larger than a first upper limit threshold value;
step S650: and if the first abnormal stress information is larger than the first upper limit threshold value, obtaining the first touch-stop instruction.
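The detection flow of steps S610-S650 can be sketched as follows. The criterion used to flag a stress value as "abnormal" (exceeding twice the mean of the current readings) is an illustrative assumption; the claim only requires that abnormal stress be identified and compared against the first upper-limit threshold.

```python
def touch_stop_triggered(positions, stresses, upper_limit):
    """positions and stresses correspond one-to-one (S610); returns True
    when the first touch-stop instruction should be triggered (S650)."""
    detection_sets = list(zip(positions, stresses))               # S620
    baseline = sum(stresses) / len(stresses)
    # S630: flag readings far above the baseline as abnormal stress
    abnormal = [s for _, s in detection_sets if s > 2 * baseline]
    # S640/S650: any abnormal stress above the upper limit triggers touch-stop
    return any(s > upper_limit for s in abnormal)
```

A sharp spike at one position (e.g. 50 N against a ~1 N baseline and a 10 N limit) triggers the touch-stop instruction; uniform low readings do not.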
3. The method of claim 1, wherein the step S400 further comprises:
step S410: performing collision tolerance evaluation on the mechanical structure information serving as basic structure information to obtain first evaluation data, wherein the first evaluation data is the collision tolerance of the first manipulator;
step S420: carrying out bearing quality analysis by taking the first feature set as stress basic information to obtain second evaluation data;
step S430: generating basic stress data and upper limit stress data according to the first evaluation data and the second evaluation data;
step S440: and adjusting the basic stress data and the upper limit stress data by constructing a first preset threshold, and constructing the first collision upper limit detection rule according to the adjusted data.
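Steps S410-S440 can be illustrated with the following sketch of how the first collision upper-limit detection rule might be assembled. The specific combination (scaling the structural tolerance by a bearing-quality factor, then applying a safety margin as the first preset threshold) is an assumption for illustration; the claim does not prescribe these formulas.

```python
def build_upper_limit_rule(tolerance_n, bearing_quality, safety_margin=0.8):
    """tolerance_n: maximum force (N) the structure tolerates (first
    evaluation data, S410); bearing_quality: 0..1 factor from the feature
    set (second evaluation data, S420); safety_margin: the first preset
    threshold used to adjust the data (S440)."""
    base_stress = tolerance_n * bearing_quality                  # S430: basic stress data
    upper_stress = tolerance_n                                   # S430: upper limit stress data
    threshold = min(upper_stress, base_stress / safety_margin)   # S440: adjusted threshold
    return lambda force: force > threshold   # rule: does this force exceed the limit?

rule = build_upper_limit_rule(tolerance_n=100.0, bearing_quality=0.6)
# base = 60 N, upper = 100 N, threshold = min(100, 60/0.8) = 75 N
```

With these numbers, an 80 N collision exceeds the rule's threshold while a 70 N collision does not.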
4. A machine vision based manipulator control system, the system comprising:
a first obtaining unit: the first obtaining unit is used for obtaining mechanical structure information of the first manipulator, wherein the mechanical structure information comprises joint structure information, execution structure information and base structure information;
a second obtaining unit: the second obtaining unit is used for carrying out image acquisition and attribute information input on an execution object of the first manipulator according to the first image acquisition device to obtain a first image information base;
a third obtaining unit: the third obtaining unit is configured to obtain a first feature set based on the first image information base, where the first feature set is an execution association feature set of the first manipulator;
a first building unit: the first construction unit is configured to construct a touch detection model according to the mechanical structure information and basic data generated by the first feature set, where the touch detection model includes a first collision upper limit detection rule, and the first collision upper limit detection rule refers to a maximum collision event, that is, a maximum collision force, which can be suffered by the first manipulator during the process of executing the current work task;
a fourth obtaining unit: the fourth obtaining unit is configured to obtain a real-time position data set of the first manipulator according to the position detection device;
a fifteenth obtaining unit: the fifteenth obtaining unit is configured to obtain a real-time stress detection set of the first manipulator, where the real-time position data set corresponds to the real-time stress detection set one to one;
a first judgment unit: the first judgment unit is used for inputting the real-time stress detection set corresponding to the real-time position data set into the touch detection model and judging whether a first touch-stop instruction is triggered according to detection information output by the touch detection model;
a first execution unit: the first execution unit is used for performing priority adjustment on an initial control task of the first manipulator and controlling the first manipulator according to the adjustment control task if the first touch-stop instruction is not triggered;
a fifth obtaining unit: the fifth obtaining unit is used for obtaining a first disassembling instruction if the first touch-stop instruction is not triggered;
a sixth obtaining unit: the sixth obtaining unit is configured to disassemble the initial control task according to the first disassembling instruction to obtain N subtasks, where the initial control task is a task from a first collision position of the first manipulator to a preset end position;
a seventh obtaining unit: the seventh obtaining unit is configured to obtain a first category subtask and a second category subtask by performing category division on the N subtasks, where the first category subtask is a position task and the second category subtask is an attitude task;
an eighth obtaining unit: the eighth obtaining unit is configured to obtain the adjustment control task by performing priority adjustment on the first category subtask and the second category subtask;
a ninth obtaining unit: the ninth obtaining unit is configured to obtain task attribute information of the execution object according to the initial control task;
a second building element: the second construction unit is used for constructing a two-dimensional coordinate axis by taking the position information as an x axis and the attitude information as a y axis respectively;
a tenth obtaining unit: the tenth obtaining unit is configured to input the task attribute information into the two-dimensional coordinate axis, and obtain a plurality of relative coordinate sets according to the two-dimensional coordinate axis;
a first generation unit: the first generating unit is used for inputting the relative coordinate sets into a coordinate processing module for calculation and generating a preset adjusting rule according to a calculation result, wherein the preset adjusting rule is a priority adjusting rule;
a first defining unit: the first defining unit is used for defining that the coordinate processing module comprises a coordinate conversion module, a coordinate comparison module and a coordinate calculation module;
an eleventh obtaining unit: the eleventh obtaining unit is configured to input each coordinate in the plurality of relative coordinate sets into the coordinate conversion module for information entropy calculation, so as to obtain a plurality of information entropy coordinates;
a twelfth obtaining unit: the twelfth obtaining unit is configured to input the multiple information entropy coordinates into the coordinate comparison module, and obtain multiple comparison results of the x-axis information entropy and the y-axis information entropy;
a thirteenth obtaining unit: the thirteenth obtaining unit is configured to input the multiple information entropy coordinates into the coordinate calculation module, and obtain multiple difference results of the x-axis information entropy and the y-axis information entropy;
a fourteenth obtaining unit: the fourteenth obtaining unit is configured to obtain a first relative coefficient of an x-axis and a second relative coefficient of a y-axis according to the plurality of comparison results and the plurality of difference results;
a second generation unit: the second generating unit is used for generating a first constraint condition when the first relative coefficient is larger than the second relative coefficient, wherein the first constraint condition is that the position priority is larger than the posture priority;
a third generation unit: the third generating unit is configured to generate a second constraint condition when the first relative coefficient is smaller than the second relative coefficient, where the second constraint condition is that the posture priority is greater than the position priority;
a fourth generation unit: the fourth generating unit is configured to generate the preset adjustment rule according to the first constraint condition and the second constraint condition.
5. A machine vision based manipulator control system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the steps of the method of any one of claims 1 to 3.
6. A computer-readable storage medium, characterized in that a computer program is stored on the storage medium, which computer program, when being executed by a processor, carries out the method according to any one of claims 1-3.
CN202210664422.3A 2022-06-14 2022-06-14 Mechanical arm control method and system based on machine vision Active CN114750168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210664422.3A CN114750168B (en) 2022-06-14 2022-06-14 Mechanical arm control method and system based on machine vision


Publications (2)

Publication Number Publication Date
CN114750168A CN114750168A (en) 2022-07-15
CN114750168B CN114750168B (en) 2022-09-20

Family

ID=82336580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210664422.3A Active CN114750168B (en) 2022-06-14 2022-06-14 Mechanical arm control method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN114750168B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116111885B (en) * 2023-03-10 2023-11-24 苏州上舜精密工业科技有限公司 Rotating speed control method and system of brushless direct current motor

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104985598A (en) * 2015-06-24 2015-10-21 南京埃斯顿机器人工程有限公司 Industrial robot collision detection method
CN105786605A (en) * 2016-03-02 2016-07-20 中国科学院自动化研究所 Task management method and system in robot
CN109318232A (en) * 2018-10-22 2019-02-12 佛山智能装备技术研究院 A kind of polynary sensory perceptual system of industrial robot
CN110497405A (en) * 2019-08-14 2019-11-26 深圳市烨嘉为技术有限公司 For controling the force feedback man-machine collaboration anticollision detection method and module of integral control system
CN112757345A (en) * 2021-01-27 2021-05-07 上海节卡机器人科技有限公司 Cooperative robot collision detection method, device, medium and electronic equipment
WO2021195916A1 (en) * 2020-03-31 2021-10-07 西门子股份公司 Dynamic hand simulation method, apparatus and system


Also Published As

Publication number Publication date
CN114750168A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
Roser et al. A practical bottleneck detection method
US11440183B2 (en) Hybrid machine learning-based systems and methods for training an object picking robot with real and simulated performance data
CN114750168B (en) Mechanical arm control method and system based on machine vision
JP7262847B2 (en) System Identification of Industrial Robot Dynamics for Safety-Critical Applications
CN111702760B (en) Internet of things mechanical arm cooperative operation system and method
Nyhuis et al. Applying simulation and analytical models for logistic performance prediction
WO2021085345A1 (en) Machine learning data generation device, machine learning device, work system, computer program, machine learning data generation method, and method for manufacturing work machine
CN114037673B (en) Hardware connection interface monitoring method and system based on machine vision
US20220339787A1 (en) Carrying out an application using at least one robot
CN114227685A (en) Mechanical arm control method and device, computer readable storage medium and mechanical arm
Magistris et al. Dynamic digital human models for ergonomic analysis based on humanoid robotics techniques
Glorieux et al. Quality and productivity driven trajectory optimisation for robotic handling of compliant sheet metal parts in multi-press stamping lines
US20180311821A1 (en) Synchronization of multiple robots
CN114331114A (en) Intelligent supervision method and system for pipeline safety risks
US6963827B1 (en) System and method for performing discrete simulation of ergonomic movements
CN111015667B (en) Robot arm control method, robot, and computer-readable storage medium
CN115329942A (en) Bolt fastening device and method based on artificial intelligence
JP7229338B2 (en) Inspection device and inspection method
Komenda et al. ema-a Software Tool for Planning Human-Machine-Collaboration.
CN116109080B (en) Building integrated management platform based on BIM and AR
WO2024070189A1 (en) Factor analysis device and factor analysis method
Benter et al. Derivation of MTM-HWD® analyses from digital human motion data
CN117601127A (en) Robot evaluation method and device, electronic equipment and storage medium
JP7208254B2 (en) Work optimization system and work optimization device
CN115741713A (en) Robot working state determination method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 199, Beihuan Road, Tangqiao town, Zhangjiagang City, Suzhou City, Jiangsu Province

Patentee after: Suzhou Shangshun Technology Co.,Ltd.

Country or region after: China

Address before: No. 199, Beihuan Road, Tangqiao town, Zhangjiagang City, Suzhou City, Jiangsu Province

Patentee before: SMOOTH MACHINE SYSTEMS Co.,Ltd.

Country or region before: China