CN104298169A - Data converting method of intelligent vision numerical control system - Google Patents

Data converting method of intelligent vision numerical control system

Info

Publication number
CN104298169A
CN104298169A (application CN201410436235.5A; granted as CN104298169B)
Authority
CN
China
Prior art keywords
pose
machine vision
robot
unit
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410436235.5A
Other languages
Chinese (zh)
Other versions
CN104298169B (en)
Inventor
王高
柳宁
叶文生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan University
Original Assignee
Jinan University Shaoguan Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan University Shaoguan Institute
Priority to CN201410436235.5A priority Critical patent/CN104298169B/en
Publication of CN104298169A publication Critical patent/CN104298169A/en
Application granted granted Critical
Publication of CN104298169B publication Critical patent/CN104298169B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems, electric
    • G05B19/18: Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/19: Numerical control [NC] characterised by positioning or contouring control systems, e.g. to control position from one programmed point to another or to control movement along a programmed continuous path

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

The invention relates to a data conversion method for an intelligent-vision numerical control (NC) system. The method comprises the following steps: a machine vision unit is connected to a robot NC unit; the imaging-coordinate pose of a workpiece is extracted by the machine vision unit and sent to a pose standardization module; the pose standardization module converts the imaging-coordinate pose into a world-space coordinate pose and sends it to the NC unit. Compared with the prior art, the processing algorithm is fixed inside the vision subsystem and the NC subsystem; specifically, it is encapsulated in the pose standardization module. When developing a project, an application engineer therefore does not need to engage with the theory-heavy computations and only has to attend to the application development of the NC system and the parameter settings of the robot body and the machine vision unit. This both satisfies the confidentiality requirements for the core algorithm and shortens the development cycle.

Description

Data conversion method for an intelligent-vision numerical control system
Technical field
The invention belongs to the field of advanced manufacturing and robot control, and relates to a data conversion method, in particular to a data conversion method for an intelligent-vision numerical control system.
Background technology
The application of machine vision to robot control is an important milestone in the current industrialization of robotics. Stand-alone industrial robot applications are subject to multiple constraints of site, machining target, processing mode and development efficiency, so their range of application is narrow and their volume small. Numerical control (NC) equipment or industrial robots integrated with an intelligent-vision NC system can perform more complex and refined actions in a larger workspace: they can locate a workpiece within a plane, transform the workpiece coordinates into robot coordinates, and then execute a grasping action with the robot.
A machine vision system can be used to locate the boundary and centroid of a workpiece to be sprayed, thereby solving the problem of excessive error between the actual and desired positions of a spraying robot's end effector. Machine vision can be used for product quality inspection, with intelligent NC equipment sorting good parts from defective ones. Binocular machine vision can be used for the recognition and inspection of stereoscopic targets in three-dimensional space, so that an intelligent-vision industrial robot can grasp and track three-dimensional targets. The advantages of applying machine vision in robotics are the large amount of information acquired, fast processing, high precision and automation, and moderate cost.
At present, the integration of machine vision systems with robot controllers is increasingly applied in engineering, but such applications remain difficult to develop: extensibility and development efficiency are low, and engineering reliability and applicability are limited. Among these issues, the calibration of the target workpiece by the vision-equipped robot is the key link in applying intelligent-vision NC technology.
In theory, universities and research institutes have solved the error-accumulation problem of homogeneous matrix transformation equations, guaranteeing the calibration accuracy of vision systems and clearing the way for machine vision to enter industrial robot applications. The theoretical analyses and published results on the relevant calibration methods concentrate on the optimal solution of the intrinsic and extrinsic parameters of robot hand-eye camera systems. These methods, however, impose requirements on the setup conditions or state constraints, and therefore remain some distance from practical application.
In the prior art, ABB disclosed a document on calibrating the robot end-effector TCP in 2004 (see "Product Manuals, ABB Robot Document IRB 1410 M2004, 2004"). Specifically, the coordinate frame established on the target workpiece during robot processing is the workpiece coordinate system; the robot usually operates in the workpiece coordinate system, which can be established by a "three-point" calibration. The workpiece coordinate system is set up freely according to the pose of the target workpiece, and the method of establishing it is not limited to the three-point method; even the robot base frame can serve as the workpiece coordinate system. Its purpose is to facilitate the robot's processing of the workpiece.
The essence of these prior-art methods is to unify the coordinate systems of the workpiece, the robot and the camera imaging system while ignoring the camera's imaging depth information, which simplifies calibration, reduces the amount of computation, and eases field deployment. The integrated development pattern they embody is: select the robot body and machine vision system according to the specific robot application target, then develop the robot controller and the vision system separately. Because the development process adopts a PC architecture and involves different system platforms and hence different development modes, data transmission, sharing and exchange between the controller and the vision system, along with the unified processing of data, coordinate transformation and the fusion of process data, are rather difficult.
Consequently, limited by the features of the basic development tools, the modelling, solving, transformation and output steps of the development process have no standard form and must rely on custom development for each controlled object. Once the object's configuration, parameters, pose or setup changes, calibration must be redone and even the processing algorithm may need corresponding adjustment. This development mode demands highly qualified personnel, consumes substantial development resources with low efficiency, and makes subsequent improvement and maintenance difficult. Differences between development schemes lead to differences in engineering efficiency and implementation quality, which appear in engineering as problems of product yield, accuracy, surface roughness and throughput.
Summary of the invention
The aim of the invention is to overcome the shortcomings and deficiencies of the prior art by providing a data conversion method for an intelligent-vision numerical control system.
The invention is realized by the following technical scheme. A data conversion method for an intelligent-vision NC system comprises the following steps:
connecting a machine vision unit to a robot NC unit;
capturing the imaging coordinates of a workpiece with the machine vision unit, and sending the imaging-coordinate data to a pose standardization module;
converting the imaging-coordinate data into a world-space coordinate pose with the pose standardization module, and sending the pose to the NC unit.
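As one illustration of the final conversion step, the mapping from imaging coordinates to a world-space position can be sketched as a chain of homogeneous transforms. This is a minimal pinhole-camera sketch: the intrinsic matrix, camera pose and depth value below are invented for the example and are not specified by the patent.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, T_world_cam):
    """Back-project a pixel (u, v) at a known depth into world coordinates.

    K            3x3 camera intrinsic matrix (assumed already calibrated)
    T_world_cam  4x4 homogeneous pose of the camera in the world frame
    """
    # Ray through the pixel in the camera frame, scaled by depth (pinhole model)
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Lift to homogeneous coordinates and map into the world frame
    p_world = T_world_cam @ np.append(p_cam, 1.0)
    return p_world[:3]

# Illustrative values: focal length 800 px, principal point (320, 240),
# camera axis-aligned with the world frame, offset 0.5 m along world z.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)
T[2, 3] = 0.5
print(pixel_to_world(320, 240, 0.5, K, T))  # point on the optical axis: [0. 0. 1.]
```

A full implementation would compose further transforms (camera-to-end, end-to-base, base-to-world), which is exactly the role the patent assigns to the pose standardization module.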
Compared with the prior art, the invention integrates the machine vision subsystem with the robot NC system, forming an NC system with intelligent vision. With a unified hardware platform, development is convenient, data are easy to manage, the system itself is the processing terminal, and the PC serves only as a development platform without taking part in engineering execution.
Further, the processing algorithm is fixed inside the vision subsystem and the NC subsystem; specifically, it is encapsulated in a pose standardization module. In project development, an application engineer therefore need not engage with the theory-heavy computations and only has to attend to the application development of the NC system and the parameter settings of the robot body and the machine vision unit, which both satisfies the confidentiality requirements for the core algorithm and shortens the development cycle. Moreover, development and updates of this standardized module target only the hardware platform, and this refined standard development flow is suitable for bringing intelligent-vision robots into applications across many industries.
As a further improvement of the invention, the conversion of the imaging-coordinate data into a world-space coordinate pose by the pose standardization module specifically comprises the following steps:
fixing the transformation method in the pose standardization module;
inputting information to the pose standardization module, the input comprising: the mounting-mode parameter of the machine vision unit and the robot body, the pose of the robot body, the matrix formed by the joints of the robot body, the position offset parameters of the robot body, the number of calibrations, the calibration data, and the calibration-mode parameter;
solving the transformation of the input information according to the transformation method;
outputting information from the pose standardization module, the output comprising: the solution-state parameter, the solution result, the spatial pose data of the machine vision unit, and the transformation matrix used by the transformation method.
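The module's input and output bundles enumerated above can be summarized as plain data records. The field names and types below are hypothetical, chosen only to mirror the enumeration in the text:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PoseModuleInput:
    mounting_mode: int            # 1 = camera on the robot end, 0 = fixed in the workcell
    base_pose: List[float]        # (x_b, y_b, z_b, theta_b, alpha_b, gamma_b) of the robot base
    dh_matrix: List[List[float]]  # Denavit-Hartenberg parameters, one row per joint
    offset: List[float]           # position offset parameters of the robot body
    n_calibrations: int           # >= 2 for eye-in-hand, >= 1 for a fixed camera
    calibration_data: list        # end poses plus workpiece pixel coordinates
    calibration_mode: str         # "three-point" or "two-point"

@dataclass
class PoseModuleOutput:
    solved: int                   # 1 = solution found, 0 = no solution
    workpiece_pose: List[float]   # workpiece pose in world coordinates
    camera_poses: list            # spatial pose data of the machine vision unit
    transform: list               # transformation matrix used by the method
```

Bundling the parameters this way reflects the patent's point that the application engineer only supplies settings, while the transformation itself stays hidden inside the module.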
As a further improvement of the invention, the mounting modes of the machine vision unit relative to the robot body are: mounting the machine vision unit on the robot end, or mounting it at a fixed point in the working environment;
if the machine vision unit is mounted on the robot end, the input mounting-mode parameter value is 1, the input number of calibrations is at least 2, the calibration mode is the three-point method, and the spatial pose data of the machine vision unit in the output are the spatial pose data of the robot end at the different measurement points;
if the machine vision unit is mounted at a fixed point in the working environment, the input mounting-mode parameter value is 0, the number of calibrations is at least 1, the calibration method is the two-point method, and the spatial coordinate pose data of the machine vision unit are a fixed value.
As a further improvement of the invention, the solution states of the output include a solved state and an unsolved state; the parameter value of the solved state is 1, and the parameter value of the unsolved state is 0.
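The branching between the two mounting modes and the 1/0 solution-state convention can be sketched as a small validation routine; the function and argument names are illustrative, not from the patent:

```python
def check_calibration_config(mounting_mode, n_calibrations, calibration_mode):
    """Validate a calibration configuration against the rules stated above.

    Returns 1 (valid configuration) or 0, mirroring the module's
    solution-state parameter convention.
    """
    if mounting_mode == 1:        # camera mounted on the robot end
        ok = n_calibrations >= 2 and calibration_mode == "three-point"
    elif mounting_mode == 0:      # camera fixed in the working environment
        ok = n_calibrations >= 1 and calibration_mode == "two-point"
    else:                         # unknown mounting mode
        ok = False
    return 1 if ok else 0

print(check_calibration_config(1, 2, "three-point"))  # 1: eye-in-hand, enough calibrations
print(check_calibration_config(1, 1, "three-point"))  # 0: eye-in-hand needs at least 2
```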
As a further improvement of the invention, when the data conversion method of the invention is applied to robot system development, it comprises the following steps:
configuring the processing flow in the machine vision unit;
developing the application program in the robot NC unit;
according to the needs of visual processing, calling the pose standardization module to perform the data transformation computation, and transmitting the results to the machine vision unit and the robot NC unit respectively for target control.
For better understanding and implementation, the invention is described in detail below with reference to the accompanying drawings.
Description of the drawings
Fig. 1 is the flowchart of the data conversion method of the intelligent-vision control system of the invention.
Fig. 2 is the pose conversion flowchart of the pose standardization module of the invention.
Fig. 3 is a schematic diagram of the machine vision unit mounted on the robot body.
Fig. 4 is a schematic diagram of the machine vision unit mounted at a fixed point in the working environment.
Fig. 5 is a schematic diagram of the intelligent-vision NC system of the invention.
Fig. 6 is a schematic diagram of the pose standardization module of the invention.
Fig. 7 is a schematic diagram of the input and output information of the pose standardization module of the invention.
Detailed description of the embodiments
Referring to Fig. 1, the flowchart of the data conversion method of the intelligent-vision control system of the invention.
In a data conversion method for an intelligent-vision NC system, the intelligent-vision NC system comprises a machine vision unit, an NC unit and a pose standardization module. The data conversion method comprises the following steps:
S1: connecting the machine vision unit to the robot NC unit;
S2: capturing the imaging-coordinate pose of a workpiece with the machine vision unit, and sending the imaging-coordinate data to the pose standardization module;
S3: converting the imaging-coordinate pose into a world-space coordinate pose with the pose standardization module, and sending it to the NC unit.
Referring to Fig. 2, the pose conversion flowchart of the pose standardization module. Further, step S3 specifically comprises the following steps:
S31: fixing the transformation method in the pose standardization module;
S32: inputting information to the pose standardization module, the input comprising: the mounting-mode parameter of the machine vision unit and the robot body, the pose of the robot body, the matrix formed by the joints of the robot body, the position offset parameters of the robot body, the number of calibrations, the calibration data, and the calibration-mode parameter;
S33: solving the transformation of the input information according to the transformation method;
S34: outputting information from the pose standardization module, the output comprising: the solution-state parameter, the solution result, the spatial pose data of the machine vision unit, and the transformation matrix used by the transformation method.
Further, the mounting modes of the machine vision unit relative to the robot body are: mounting the machine vision unit on the robot end, or mounting it at a fixed point in the working environment.
Referring to Fig. 3, a schematic diagram of the machine vision unit mounted on the robot body.
If the machine vision unit is mounted on the robot end, the input information is specifically:
the mounting-mode parameter value is 1;
the robot body pose consists of 6 floating-point numbers, namely the pose data (x_b, y_b, z_b, θ_b, α_b, γ_b) of the robot base relative to the world coordinate system of the working environment;
the matrix formed by the joints of the robot body is the robot's Denavit-Hartenberg (D-H) matrix, representing the composition of the robot's joints;
the position offset parameters of the robot body are input as 6 floating-point coordinate values (x_bo, y_bo, z_bo, θ_bo, α_bo, γ_bo);
the input number of calibrations is at least 2; calibrations beyond this number are redundant data, used by the processing module for reference and verification;
the format of the calibration data is: end pose (x_ti, y_ti, z_ti, θ_ti, α_ti, γ_ti) + workpiece position coordinates (x_mi, y_mi), where i = 1, 2, 3, …;
the calibration mode is the three-point method, i.e. three points on the workpiece are photographed simultaneously.
The output information is specifically:
the solution-state parameter of the transformation method, whose value is 1 if a solution exists and 0 if no solution exists;
the transformation solution result, namely the workpiece spatial pose (x_w, y_w, z_w, θ_w, α_w, γ_w);
the spatial pose data of the machine vision unit, namely the camera pose data corresponding to the robot body's successive measurement points;
the transformation matrix output, which corresponds to the particular transformation method adopted.
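The D-H matrix above encodes the joint composition of the robot. As a sketch of how such a matrix is typically used (the two-link parameters below are invented for the example; the patent gives no concrete link values), the end-effector pose follows from chaining one standard D-H transform per joint:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 homogeneous matrix."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Compose one D-H transform per joint row (theta, d, a, alpha)."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Invented 2-link planar arm: both links 1 m long, both joints at 0 rad.
T_end = forward_kinematics([(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 0.0)])
print(T_end[:3, 3])  # end-effector position: [2. 0. 0.]
```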
Referring to Fig. 4, a schematic diagram of the machine vision unit mounted at a fixed point in the working environment.
If the machine vision unit is mounted at a fixed point in the working environment, the input information is specifically:
the mounting-mode parameter value is 0;
the robot body pose consists of 6 floating-point numbers, namely the pose data (x_b, y_b, z_b, θ_b, α_b, γ_b) of the robot base relative to the world coordinate system of the working environment;
the matrix formed by the joints of the robot body is the robot's Denavit-Hartenberg (D-H) matrix, representing the composition of the robot's joints;
the position offset parameters of the robot body are input as 6 floating-point coordinate values (x_bo, y_bo, z_bo, θ_bo, α_bo, γ_bo);
the input number of calibrations is at least 1; calibrations beyond this number are redundant data, used by the processing module for reference and verification;
the format of the calibration data is: fixed-point pose + workpiece position coordinates;
the calibration mode is the two-point method, i.e. two points on the workpiece are photographed simultaneously.
The output information is specifically:
the solution-state parameter of the transformation method, whose value is 1 if a solution exists and 0 if no solution exists;
the transformation solution result, which is a fixed value (x_ci, y_ci, z_ci, θ_ci, α_ci, γ_ci), i = 1, 2;
the spatial pose data of the machine vision unit, namely the camera pose data corresponding to the measurement points;
the transformation matrix output, which corresponds to the particular transformation method adopted.
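For the fixed-camera case, where the camera pose is constant, a two-point calibration can fix a planar mapping from pixel coordinates to world-plane coordinates. The following is one plausible construction (a planar similarity transform); the patent does not disclose its exact two-point computation, and all names and sample values here are illustrative.

```python
import numpy as np

def two_point_planar_transform(px, world):
    """Estimate a planar similarity transform (scale, rotation, translation)
    from two corresponding points, mapping pixel coords to world-plane coords.

    px, world: each a sequence of two (x, y) pairs.
    """
    p = np.asarray(px, float)
    w = np.asarray(world, float)
    dp, dw = p[1] - p[0], w[1] - w[0]
    scale = np.linalg.norm(dw) / np.linalg.norm(dp)
    ang = np.arctan2(dw[1], dw[0]) - np.arctan2(dp[1], dp[0])
    R = np.array([[np.cos(ang), -np.sin(ang)],
                  [np.sin(ang),  np.cos(ang)]])
    t = w[0] - scale * R @ p[0]
    return lambda q: scale * R @ np.asarray(q, float) + t

# Two invented calibration correspondences: pixel -> world-plane.
to_world = two_point_planar_transform([(0, 0), (100, 0)], [(10, 20), (10, 120)])
print(to_world((0, 0)))   # first calibration point maps back to (10, 20)
print(to_world((50, 0)))  # midpoint maps to the world-segment midpoint (10, 70)
```

Note a similarity transform cannot recover a reflection or out-of-plane tilt; a real deployment would need the camera mounted perpendicular to the work plane, consistent with the fixed-mount assumption.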
An intelligent-vision NC system for implementing the above data conversion method is provided below.
Further, when the data conversion method of the invention is applied to robot system development, the following steps can be adopted:
configuring the processing flow in the machine vision unit;
developing the application program in the robot NC unit;
according to the needs of visual processing, calling the pose standardization module to perform the data transformation computation, and transmitting the results to the machine vision unit and the robot NC unit respectively for target control.
Referring to Fig. 5, the schematic diagram of the intelligent-vision NC system of the invention. The intelligent-vision NC system of the invention comprises a machine vision unit 1, an NC unit 2, a robot body 3, a light source 4, a monitor 5, a switch 6, a remote server 7 and a pose standardization module 10.
The machine vision unit 1 photographs a workpiece 9 and captures the imaging-coordinate pose of the workpiece 9. Preferably, in this embodiment, the machine vision unit 1 is a camera.
The pose standardization module 10 receives the imaging-coordinate pose from the machine vision unit 1, converts it into a world-space coordinate pose, and sends the world-space coordinate pose to the NC unit 2.
The robot body 3 is provided with a motor control module 31 and a motion control module. The motor control module controls the joint motors of the robot body 3 and thus the actions of the robot body 3, while the motion control module controls the overall movement of the robot body 3.
The NC unit 2 comprises a motion control subunit 21 and a logic control subunit 22. The motion control subunit 21 controls the actions of the robot body 3; specifically, it is connected to the motor control module 31 and thereby actuates the robot body 3. The logic control subunit 22 controls motion and processes the imaging information of the machine vision unit 1; specifically, it is connected to the motion I/O port 32 of the motion control module to control the robot's motion, and to the I/O port 11 of the machine vision unit 1 to process the imaging information of the machine vision unit 1.
The light source 4 illuminates the workpiece 9 to assist the machine vision unit 1 in photographing the workpiece 9.
The monitor 5 is connected to the machine vision unit 1 through the switch 6 to display the video information of the machine vision unit 1.
The remote server 7 is connected to the machine vision unit 1 and the NC unit 2 respectively through the switch 6 to monitor the production process and perform its computations.
Further, the invention uses an integrated development and debugging environment 8 on the PC side to program and debug the NC unit 2 of the robot. Specifically, a local USB or Ethernet connection can be used between the PC and the NC unit 2. Meanwhile, the PC can also remotely debug the machine vision unit 1 through the switch 6.
Referring to Figs. 6 and 7, respectively the schematic diagram of the pose standardization module of the invention and the schematic diagram of its input and output information.
Further, the pose standardization module 10 comprises an input 102, a transformation-method submodule 101 and an output 103.
The input 102 receives the mounting-mode parameter of the machine vision unit 1 and the robot body 3, the pose of the robot body 3, the matrix formed by the joints of the robot body 3, the position offset parameters of the robot body 3, the number of calibrations, the calibration data, and the calibration-mode parameter.
The transformation-method submodule 101 solves the transformation according to the input information and sends the result to the output 103.
The output 103 receives the solution result of the transformation-method submodule and outputs the solution-state parameter of the transformation-method submodule, the solution result, the spatial pose data of the machine vision unit 1, and the transformation matrix used by the transformation-method submodule.
Specifically, the mounting modes of the machine vision unit 1 relative to the robot body 3 can be divided into: the machine vision unit 1 mounted on the robot body 3, and the machine vision unit 1 mounted at a fixed point in the working environment.
The configuration and working process of the intelligent-vision NC system of the invention are introduced in detail below:
(1) Set up a public data area comprising system data and user data, and configure shared transmission entries. This step is completed in the integrated development environment on the PC side: the settings of the NC unit 2 are made through the relevant options of the Motion and PLC programming environments, and those of the machine vision unit through the relevant options of the SC (Smart Camera) programming environment. Communication takes the form of broadcasting, so that each subsystem can obtain the system data and user data of the other subsystems.
(2) Design the robot motion and logic programs, implementing the Motion, PLC and SC programming in the integrated development environment on the PC side. No code needs to be written: one only determines the workflow, draws the corresponding sequential function chart (SFC), PLC ladder diagram and SC operation flowchart, and downloads them to each subsystem's hardware platform after compilation/interpretation.
(3) According to the task assignment of the machine vision unit, configure and calibrate the pose standardization module, and embed it in the robot motion and logic programs. The robot and camera data are imported, and automatic transformation is realized during process execution. The data transformation results, namely the workpiece pose and the camera pose, can be shared with each subsystem.
(4) The machine vision unit photographs 3 marked points on the target workpiece on a horizontal conveyor belt from 2 different poses. Each shot must be centred within the field of view, i.e. the marked point lies at the centre of the field of view, from which the relevant camera pose is obtained. Real-time workpiece-pose data output can then be obtained from the pixel-coordinate input of each job according to the transformation relations among the camera, the robot and the target workpiece.
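Step (4) relies on the three-point calibration of the workpiece. A common way to realize a three-point construction is to build an orthonormal workpiece frame from three non-collinear points; this is a generic sketch, since the patent does not spell out its exact variant, and the sample points are invented:

```python
import numpy as np

def frame_from_three_points(p0, p1, p2):
    """Build a workpiece frame from three non-collinear points:
    p0 is the origin, p0->p1 defines the x-axis, and p2 fixes the xy-plane.

    Returns the 4x4 homogeneous pose of the workpiece frame in world coordinates.
    """
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)
    z = np.cross(x, p2 - p0)       # normal to the plane of the three points
    z /= np.linalg.norm(z)
    y = np.cross(z, x)             # completes the right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T

# Invented marker points on a flat workpiece lying in the world xy-plane.
T = frame_from_three_points((1, 0, 0), (2, 0, 0), (1, 1, 0))
print(T[:3, 3])  # workpiece-frame origin: [1. 0. 0.]
```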
Compared with the prior art, the invention integrates the machine vision subsystem with an open NC system, forming an NC system with intelligent vision. With a unified hardware platform, development is convenient, data are easy to manage, the system itself is the processing terminal, and the PC serves only as a development platform without taking part in engineering execution.
Further, the processing algorithm is fixed inside the vision subsystem and the NC subsystem; specifically, it is encapsulated in a pose standardization module. In project development, an application engineer therefore need not engage with the theory-heavy computations and only has to attend to the application development of the NC system and the parameter settings of the robot body and the machine vision unit, which both satisfies the confidentiality requirements for the core algorithm and shortens the development cycle. Moreover, development and updates of this standardized module target only the hardware platform, and this refined standard development flow is suitable for bringing intelligent-vision robots into applications across many industries.
The invention is not limited to the above embodiments. Any change or variation of the invention that does not depart from the spirit and scope of the invention, provided it falls within the claims of the invention and the scope of equivalent technology, is also intended to be covered by the invention.

Claims (5)

1. A data conversion method for an intelligent-vision numerical control system, characterized in that the intelligent-vision NC system comprises a machine vision unit, a robot NC unit and a pose standardization module, and the data conversion method comprises the following steps:
connecting the machine vision unit to the robot NC unit;
capturing the imaging coordinates of a workpiece with the machine vision unit, and sending the imaging-coordinate data to the pose standardization module;
converting the imaging-coordinate data into a world-space coordinate pose with the pose standardization module, and sending the pose to the NC unit.
2. The data conversion method of the intelligent-vision NC system according to claim 1, characterized in that the conversion of the imaging-coordinate pose into a world-space coordinate pose by the pose standardization module specifically comprises the following steps:
fixing the transformation method in the pose standardization module;
inputting information to the pose standardization module, the input comprising: the mounting-mode parameter of the machine vision unit and the robot body, the pose of the robot body, the matrix formed by the joints of the robot body, the position offset parameters of the robot body, the number of calibrations, the calibration data, and the calibration-mode parameter;
solving the transformation of the input information according to the transformation method;
outputting information from the pose standardization module, the output comprising: the solution-state parameter, the solution result, the spatial pose data of the machine vision unit, and the transformation matrix used by the transformation method.
3. The data conversion method of the intelligent-vision NC system according to claim 2, characterized in that the mounting modes of the machine vision unit relative to the robot body are: mounting the machine vision unit on the robot end, or mounting it at a fixed point in the working environment;
if the machine vision unit is mounted on the robot end, the input mounting-mode parameter value is 1; the input number of calibrations is at least 2; the calibration mode is the three-point method, i.e. three points on the workpiece are photographed simultaneously; and the spatial pose data of the machine vision unit in the output are the spatial pose data of the robot end at the different measurement points;
if the machine vision unit is mounted at a fixed point in the working environment, the input mounting-mode parameter value is 0; the number of calibrations is at least 1; the calibration method is the two-point method, i.e. two points on the workpiece are photographed simultaneously; and the spatial coordinate pose data of the machine vision unit are a fixed value.
4. the data conversion method of Visual intelligent digital control system according to claim 3, is characterized in that: the solving state of described output includes solution state and without solution state; Described have the parameter value of solution state to be 1, and the parameter value without the state of solution is 0.
5. The data conversion method of an intelligent vision numerical control system according to claim 3, characterized in that, when this data conversion method is applied to robot system development, it comprises the following steps:
Configuring the processing flow in the machine vision unit;
Developing the application program in the robot numerical control unit;
Calling the pose standardization module as required by the visual processing, performing the data transformation computation, and transmitting the results respectively to the machine vision unit and the robot numerical control unit for target control.
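As an illustration of the kind of solve the pose standardization module performs, the following minimal sketch takes matched points in the vision frame and the robot frame, solves a rigid transform with the Kabsch/SVD method, and reports the 1/0 solving-state flag of claim 4. The function name, the dictionary fields, and the choice of the Kabsch solver are assumptions for illustration, not the patent's actual method.

```python
# Minimal sketch (assumed names, Kabsch/SVD solver) of a pose-transform solve.
import numpy as np

def solve_pose_transform(vision_pts, robot_pts):
    P = np.asarray(vision_pts, float)  # N x 3 points in the vision frame
    Q = np.asarray(robot_pts, float)   # N x 3 matching points in the robot frame
    if P.shape != Q.shape or P.shape[0] < 3:
        # Unsolvable: mismatched or too few point pairs (claim 4: state = 0)
        return {"solve_state": 0, "transform": None}
    # Centre both point sets, then solve the rotation with SVD (Kabsch method)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    T = np.eye(4)                                # 4x4 homogeneous transform
    T[:3, :3], T[:3, 3] = R, t
    return {"solve_state": 1, "transform": T}    # claim 4: solvable state = 1
```

A caller would apply `transform` to vision-frame poses only when `solve_state` is 1, mirroring the solvable/unsolvable split in claim 4.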
CN201410436235.5A 2014-08-29 2014-08-29 Data conversion method of an intelligent vision numerical control system Active CN104298169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410436235.5A CN104298169B (en) 2014-08-29 2014-08-29 Data conversion method of an intelligent vision numerical control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410436235.5A CN104298169B (en) 2014-08-29 2014-08-29 Data conversion method of an intelligent vision numerical control system

Publications (2)

Publication Number Publication Date
CN104298169A true CN104298169A (en) 2015-01-21
CN104298169B CN104298169B (en) 2017-11-28

Family

ID=52317948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410436235.5A Active CN104298169B (en) Data conversion method of an intelligent vision numerical control system

Country Status (1)

Country Link
CN (1) CN104298169B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1419104A (en) * 2002-12-26 2003-05-21 Beihang University Object space position detector
CN101630409A (en) * 2009-08-17 2010-01-20 Beihang University Hand-eye vision calibration method for a robot hole-boring system
CN101726296A (en) * 2009-12-22 2010-06-09 Harbin Institute of Technology Integrated vision measurement, path planning and GNC simulation system for a space robot
CN201945293U (en) * 2010-10-29 2011-08-24 University of Science and Technology of China Flexible stereoscopic vision measurement device for target spatial coordinates
CN102435188A (en) * 2011-09-15 2012-05-02 Nanjing University of Aeronautics and Astronautics Monocular vision/inertial autonomous navigation method for indoor environments
US20120154571A1 (en) * 2010-12-17 2012-06-21 Mitutoyo Corporation Edge detection using structured illumination
CN102607526A (en) * 2012-01-03 2012-07-25 Xidian University Target pose measurement method based on binocular vision across two media
CN103471500A (en) * 2013-06-05 2013-12-25 Jiangnan University Conversion method between plane coordinates and spatial three-dimensional coordinate points in monocular machine vision


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIONG CHUNSHAN ET AL.: "Algorithms and implementation of hand-eye stereo vision", ROBOT *
WANG HONGTAO: "Research on vision-based target recognition and localization for industrial robots", CHINA MASTER'S THESES FULL-TEXT DATABASE (ELECTRONIC JOURNAL), INFORMATION SCIENCE AND TECHNOLOGY *
XIE YANGMIN ET AL.: "Research on the positioning algorithm of the vision system of a high-precision automatic chip mounter", OPTICAL TECHNIQUE *
CHEN FAMING: "Research on an integrated development environment for an open vision-intelligent special-purpose NC system", CHINA MASTER'S THESES FULL-TEXT DATABASE (ELECTRONIC JOURNAL), ENGINEERING SCIENCE AND TECHNOLOGY I *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104898489A (en) * 2015-05-29 2015-09-09 Shanghai-Fanuc Robotics Co., Ltd. Visual positioning system connection structure
CN108011589A (en) * 2017-12-14 2018-05-08 Hangzhou Dianzi University Intelligent photovoltaic cell position detection and feedback control correction apparatus
CN108011589B (en) * 2017-12-14 2023-09-29 Hangzhou Dianzi University Intelligent photovoltaic cell position detection and feedback control correction device
CN112558545A (en) * 2020-11-17 2021-03-26 Shenji (Shanghai) Intelligent System R&D and Design Co., Ltd. Interactive system and method based on machine tool machining, and storage medium
CN112558545B (en) * 2020-11-17 2022-07-15 Shenji (Shanghai) Intelligent System R&D and Design Co., Ltd. Interactive system and method based on machine tool machining, and storage medium
CN112859742A (en) * 2021-01-12 2021-05-28 Guangzhou Shengshuo Technology Co., Ltd. Numerical control machine tool control system based on visual recognition
CN112859742B (en) * 2021-01-12 2021-08-06 Guangzhou Shengshuo Technology Co., Ltd. Numerical control machine tool control system based on visual recognition

Also Published As

Publication number Publication date
CN104298169B (en) 2017-11-28

Similar Documents

Publication Publication Date Title
CN109571476A (en) The twin real time job control of industrial robot number, monitoring and precision compensation method
CN104298169A (en) Data converting method of intelligent vision numerical control system
CN113379849B (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN104123412A (en) Method for detecting collision of curtain wall through BIM technology
CN104385282A (en) Visual intelligent numerical control system and visual measuring method thereof
CN103760976A (en) Kinect based gesture recognition smart home control method and Kinect based gesture recognition smart home control system
CN104259839A (en) Automatic screw machine based on visual identification and screw installation method thereof
Chung et al. Smart facility management systems utilizing open BIM and augmented/virtual reality
CN103612262B (en) The Remote Control Automatic attending device of target body and maintaining method thereof under the environment of a kind of hot cell
CN114935916A (en) Method for realizing industrial meta universe by using Internet of things and virtual reality technology
Ben et al. Research on visual orientation guidance of industrial robot based on cad model under binocular vision
Wang Cyber manufacturing: research and applications
CN111352398A (en) Intelligent precision machining unit
CN110175648B (en) Non-invasive information communication method for equipment by applying artificial intelligent cloud computing
Caiza et al. Digital twin for monitoring an industrial process using augmented reality
Xia et al. Tool wear image on-machine detection based on trajectory planning of 6-DOF serial robot driven by digital twin
Müller et al. The assist-by-X system: calibration and application of a modular production equipment for visual assistance
CN1256990A (en) Intelligent locating working method
Liu et al. Research on real-time monitoring technology of equipment based on augmented reality
Juhás et al. Key components of the architecture of cyber-physical manufacturing systems
CN104932407A (en) Modular robot driving control system and method based on PLC
Secco et al. An Integrated Method for the Geometric Inspection of Wind Turbine Hubs with Industrial Robot
CN103679345A (en) Method and system for processing full life cycle of embedded part used for hydraulic structure
Andonovski et al. Towards a Development of Robotics Tower Crane System
Farisi [Retracted] Motion Control System of IoT Intelligent Robot Based on Improved ResNet Model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180417

Address after: 510000 West Whampoa Road, Guangdong, Guangzhou, No. 601

Patentee after: Jinan University

Address before: 512026 Guangdong, Shaoguan, Wujiang District, Dongguan (Shaoguan) industrial transfer industrial park, high tech pioneering service center, third floor East

Patentee before: JINAN UNIVERSITY SHAOGUAN INSTITUTE