CN114170314A - 3D glasses process track execution method based on intelligent 3D vision processing - Google Patents

3D glasses process track execution method based on intelligent 3D vision processing Download PDF

Info

Publication number
CN114170314A
CN114170314A (application CN202111512132.9A; granted as CN114170314B)
Authority
CN
China
Prior art keywords
glasses
processing
intelligent
execution
track
Prior art date
Legal status
Granted
Application number
CN202111512132.9A
Other languages
Chinese (zh)
Other versions
CN114170314B (en)
Inventor
姚绪松
陈方
卢绍粦
席豪圣
代勇
刘聪
蓝猷凤
Current Assignee
Qunbin Intelligent Manufacturing Technology Suzhou Co ltd
Original Assignee
Shenzhen Qb Precision Industrial Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Qb Precision Industrial Co ltd filed Critical Shenzhen Qb Precision Industrial Co ltd
Priority to CN202111512132.9A
Publication of CN114170314A
Application granted
Publication of CN114170314B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/18 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/19 - Numerical control [NC] characterised by positioning or contouring control systems, e.g. to control position from one programmed point to another or to control movement along a programmed continuous path
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a 3D glasses process track execution method based on intelligent 3D vision processing, which comprises the following steps: calculating a spatial position relationship of the execution device and the camera; scanning the 3D glasses and generating a corresponding standard model; dividing the standard model into a plurality of areas, and grabbing feature points in each area, wherein the range of each area is 1 mm-5 mm; connecting the feature points within each region to form a process treatment trajectory for the 3D glasses; and controlling the execution equipment to run according to the process processing track. According to the intelligent 3D vision processing-based 3D glasses process track execution method, after the 3D glasses are scanned and the corresponding standard model is generated, the standard model is divided into the plurality of areas, so that the 3D glasses are planar in the areas, the teaching path can be suitable for the areas, and the product yield is improved.

Description

3D glasses process track execution method based on intelligent 3D vision processing
[ technical field ]
The application relates to the technical field of 3D vision, in particular to a 3D glasses process track executing method based on intelligent 3D vision processing.
[ background of the invention ]
3D glasses currently adopt the time-division (active shutter) method, realized by a signal that synchronizes the glasses with the display. When the display outputs the left-eye image, the left lens is transparent and the right lens is opaque; when the display outputs the right-eye image, the right lens is transparent and the left lens is opaque, so the two eyes see different images. In the production of 3D glasses, to obtain glasses that match the standard model without deviation, the existing approach usually applies overall deviation correction: a 3D camera with a large field of view captures one image of the product, the first image is taken as a template once the overall spatial position of the product is confirmed, and an execution track that realizes the process is taught from this template. The product is then imaged a second time by the 3D camera, the second image is compared with the first, the three-dimensional spatial deviation between the two images is calculated, and this deviation is applied to the taught template track to correct the process track. However, most existing 3D glasses are made of non-metallic materials such as plastics, which can deform globally or locally after manufacture. Because the taught path is rigid, it cannot adapt to products with many curved surfaces or local deformation, so the yield of products manufactured by this process is low.
[ summary of the invention ]
In view of the above, there is a need to provide a method for executing a process trajectory of 3D glasses based on intelligent 3D vision processing, which can ensure product yield, so as to solve the above problems.
The embodiment of the application provides a 3D glasses process track executing method based on intelligent 3D vision processing, which comprises the following steps:
calculating a spatial position relationship of the execution device and the camera;
scanning the 3D glasses and generating a corresponding standard model;
dividing the standard model into a plurality of areas, and grabbing feature points in each area, wherein the range of each area is 1 mm-5 mm;
connecting the feature points within each region to form a process treatment trajectory for the 3D glasses;
and controlling the execution equipment to run according to the process processing track.
In at least one embodiment of the present application, the step of "dividing the standard model into a plurality of regions and grasping feature points in each region, wherein each region ranges from 1mm to 5 mm" comprises the steps of:
selecting the characteristics of the 3D glasses and dividing the regions on the characteristics.
In at least one embodiment of the present application, the feature is a curved surface or a flat surface.
In at least one embodiment of the present application, the step of "dividing the standard model into a plurality of regions and grasping feature points in each region, wherein each region ranges from 1mm to 5 mm" comprises the steps of:
generating a three-dimensional space measuring box in each divided region;
and calculating corresponding characteristic points by using the three-dimensional space measuring box according to the geometric characteristics of the 3D glasses.
In at least one embodiment of the present application, the geometric features of the 3D glasses are a combination of one or more of triaxial features, normal vectors, state quantities, and velocity quantities.
In at least one embodiment of the present application, the step of "connecting the feature points in each region to form the process processing trajectory of the 3D glasses" further comprises the steps of:
and replacing the coordinate system of the process processing track into the coordinate system of the execution equipment.
In at least one embodiment of the present application, the step of replacing the coordinate system of the process track into the coordinate system of the execution device further includes:
and replacing the process processing track into the coordinates in the execution equipment and uploading the coordinates to the execution equipment.
In at least one embodiment of the present application, the step of "calculating a spatial positional relationship of the execution apparatus with the camera" includes the steps of:
installing a calibration block on the execution equipment on the robot;
moving the robot to a position right below the camera to acquire image information;
and calculating the spatial position relation between the camera and the execution equipment according to the image information.
In at least one embodiment of the present application, the step of "moving the robot to just below the camera to capture image information" comprises the steps of:
moving the robot to different positions under the 3D camera for multiple times;
and respectively acquiring image information of the robot at a plurality of different positions.
In at least one embodiment of the present application, the number of movements of the robot is 4 to 9.
According to the intelligent 3D vision processing-based 3D glasses process track execution method, after the 3D glasses are scanned and the corresponding standard model is generated, the standard model is divided into the plurality of areas, so that the 3D glasses are planar in the areas, the teaching path can be suitable for the areas, and the product yield is improved.
[ description of the drawings ]
Fig. 1 is a block flow diagram of a 3D glasses process trajectory execution method based on intelligent 3D vision processing in an embodiment of the present application.
[ detailed description ]
The embodiments of the present application will be described in conjunction with the drawings in the embodiments of the present application, and it is to be understood that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
It will be understood that when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. When an element is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. The terms "top," "bottom," "upper," "lower," "left," "right," "front," "rear," and the like as used herein are for illustrative purposes only.
The embodiment of the application provides a 3D glasses process track executing method based on intelligent 3D vision processing, which comprises the following steps:
calculating a spatial position relationship of the execution device and the camera;
scanning the 3D glasses and generating a corresponding standard model;
dividing the standard model into a plurality of areas, and grabbing feature points in each area, wherein the range of each area is 1 mm-5 mm;
connecting the feature points within each region to form a process treatment trajectory for the 3D glasses;
and controlling the execution equipment to run according to the process processing track.
According to the intelligent 3D vision processing-based 3D glasses process track execution method, after the 3D glasses are scanned and the corresponding standard model is generated, the standard model is divided into the plurality of areas, so that the 3D glasses are planar in the areas, the teaching path can be suitable for the areas, and the product yield is improved.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, a method for executing a process track of 3D glasses based on intelligent 3D vision processing includes the following steps:
s10: the spatial position relationship of the execution device and the camera is calculated.
S20: the 3D glasses are scanned and the corresponding standard model is generated.
S30: dividing the standard model into a plurality of areas, and grabbing feature points in each area, wherein the range of each area is 1 mm-5 mm.
S40: the feature points within each region are connected to form a process trace for the 3D glasses.
S50: and controlling the execution equipment to run according to the process processing track.
It should be noted that, in the production of 3D glasses, a process execution track must first be established, and the corresponding execution equipment is operated to process along this track. In existing practice, the track is usually produced by teaching, which yields a rigid path: when the product deforms locally, the path cannot follow the deformed curved surface, so products with curved surfaces made in this way have shape defects and a higher reject rate. In this scheme, a plurality of areas are divided on the generated standard model and each area is kept within a certain size, so that the product forms a nearly planar structure within each divided area; normal teaching can then be performed inside each area, reducing defects caused by a teaching path that does not fit the product.
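The region-division idea above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation: it assumes the scan is available as an N x 3 NumPy array of points in millimetres, buckets the points into cubic cells of a few millimetres, and uses a PCA eigenvalue ratio to confirm each cell is nearly planar (the property that lets a rigid taught path be reused inside the cell).

```python
import numpy as np

def split_into_regions(points, cell_mm=3.0):
    """Bucket scanned surface points (N x 3, millimetres) into cubic cells
    of a few millimetres, so each region can be treated as locally planar."""
    keys = np.floor(points / cell_mm).astype(int)
    regions = {}
    for key, p in zip(map(tuple, keys), points):
        regions.setdefault(key, []).append(p)
    return {k: np.asarray(v) for k, v in regions.items()}

def planarity(region_points):
    """Ratio of the smallest PCA eigenvalue to the total variance:
    values near 0 mean the region is almost flat."""
    centered = region_points - region_points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))
    return eigvals[0] / max(eigvals.sum(), 1e-12)
```

The function names and the cell-based partition are assumptions for illustration; the patent only specifies that each region spans 1 mm to 5 mm.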
In one embodiment, the 3D glasses include, but are not limited to, AR and VR glasses, i.e., any glasses providing a 3D viewing angle.
In one embodiment, the region size is 3 mm, but it is obviously not limited thereto; in another embodiment, the region size may also be 1 mm, 2 mm, 4 mm, 5 mm, etc.
Step S10 includes the steps of:
s11: and installing a calibration block on the execution equipment on the robot.
S12: and moving the robot to a position right below the camera to acquire image information.
S13: and calculating the spatial position relation between the camera and the execution equipment according to the image information.
The calibration block is provided on the execution device. To determine the spatial position relationship between the execution device and the 3D camera, the calibration block on the execution device is mounted on the robot, and the robot is moved to a position directly below the camera to capture image information. At the same time, the execution device records a group of position information; by combining this position information with the image information, the spatial position relationship between the execution device and the camera can be calculated with a corresponding algorithm. The robot is then controlled to move in a specific direction according to this spatial position relationship, so as to manufacture the 3D glasses.
Further, step S12 includes the steps of:
s121: the robot is moved to different positions under the 3D camera a number of times.
S122: respectively collecting image information of the robot at a plurality of different positions,
it should be noted that, in order to ensure the accuracy of collecting the image information of the calibration block, the robot is moved to different positions of the 3D camera for multiple times to obtain the positions of the calibration block on the robot at the multiple different positions, and the positions are combined with each other to determine the specific position of the calibration block, so that the accuracy is higher, and the error of collecting the image information is reduced.
Further, the number of times the robot moves is 4 to 9 times. In one embodiment, the number of robot movements is 4, but obviously not limited thereto, as in another embodiment the number of robot movements is 9.
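As a hedged sketch of how the several robot poses could be combined into the camera-to-executor spatial relationship, a classic least-squares rigid alignment (the Kabsch/SVD method) can be applied to the paired positions of the calibration block: its location reported by the camera versus the position recorded by the execution device at each of the 4 to 9 stops. The function name and data layout are assumptions for illustration, not the patent's specific algorithm.

```python
import numpy as np

def camera_to_robot_transform(pts_cam, pts_robot):
    """Least-squares rigid transform (Kabsch) mapping camera-frame
    calibration-block positions onto the matching robot-frame positions.
    Using 4 to 9 poses, as the method suggests, averages out per-shot noise."""
    cc, cr = pts_cam.mean(axis=0), pts_robot.mean(axis=0)
    H = (pts_cam - cc).T @ (pts_robot - cr)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cc
    return R, t
```

Given the returned rotation R and translation t, a camera-frame point p maps to the robot frame as R @ p + t.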
Preferably, in step S20, the 3D glasses are divided into regions, a high-precision camera captures a plurality of high-precision images of the different regions, and the images are stitched together to improve the precision and definition of the generated standard model, as described below:
In the 3D vision calibration process, the 3D camera must scan and photograph the device to be calibrated. When four or fewer pictures are taken, the conventional stitching process does not produce errors exceeding the project requirements, so stitching completes normally. However, the 3D glasses in this application have many curved surfaces: if the number of scans is kept at four or fewer, the scanned images are not clear enough, while if more than four scans are taken, conventional stitching produces errors exceeding the project requirements, which affects the precision of the 3D vision calibration. Therefore, in this scheme a standard model is created and divided into different scanning areas; each scanning area is photographed separately at high precision multiple times, and the multiple images are stitched by means of feature points, so that image-acquisition precision is guaranteed and the problem of stitching errors exceeding the project requirements is resolved.
The standard model includes a base (not shown) and a plurality of positioning posts (not shown) fixed on the base. The base carries the positioning posts; the posts are fixed at positions on the base determined by the specific shape of the 3D glasses, so that the posts fixed on the base enclose a model consistent with the shape of the 3D glasses, which is used to calibrate the equipment that processes the 3D glasses.
In one embodiment, the positioning columns are adhered to the base through glue, wherein a part of the positioning columns are perpendicular to the base, and the other part of the positioning columns and the base are arranged at an angle. It can be understood that the fixing manner of the positioning posts and the base is not limited thereto, and in another embodiment, the base is provided with a plurality of mounting holes, and the positioning posts are inserted and fixed in the mounting holes.
In one embodiment, the base is a substantially rectangular block-like structure, but obviously is not limited thereto, and in another embodiment, the base may also be an ellipsoid structure or the like.
In order to obtain a model whose shape matches that of the 3D glasses, the external contour of each part of the 3D glasses and the height and size information of each part are selected, and a plurality of positioning posts of different lengths are fixed at different positions on the base, so that the end faces of the posts far from the base jointly enclose a model consistent with the shape of the 3D glasses.
In order to ensure the accuracy of the model formed by the positioning posts, the formed model must be inspected to prevent deviations that would cause calibration errors when the execution equipment is calibrated. Preferably, the center hole of a positioning post is selected as a feature point, since hole positions are easy to machine into the posts and the post positions can be detected well through the center holes. Further, in this scheme, center holes are formed in one part of the positioning posts, and cross marks are set on the center lines of the remaining posts, to position and mark the plurality of posts. When the model precision is inspected, the standard model is placed in a specified calibration instrument, and the center holes and cross marks are captured by a computer algorithm to obtain the specific positions of the posts; the precision of the standard model can thus be judged, and the calibration precision of the execution equipment better guaranteed.
It should be understood that the feature points are not limited to this, and in another embodiment, a mark such as "-" or other shape may be further disposed on the positioning pillars to match with the corresponding algorithm to obtain the positions of the positioning pillars.
It should be noted that, the plurality of positioning columns arranged on the base are used for dividing the area, so that the complicated operation of marking on the base can be reduced, and the area division is facilitated. And through the mode of dividing the positioning columns, after the areas are divided, obvious positioning column distribution difference exists in each area, so that the areas are divided more obviously, and the problem of poor detection precision caused by the fact that the areas are not obvious is solved.
Further, because the standard model is divided into a plurality of different regions and the 3D camera scans each region separately to form a corresponding model, the field covered by each formed model is relatively small, which maximizes the sharpness of the model at the same camera focal length.
Furthermore, at least three groups of positioning posts are arranged in each scanning area, so that the different images formed share more overlapping portions; stitching on overlapping portions with more shared features best guarantees the quality of image stitching in the subsequent process.
It should be noted that the 3D camera used has a certain scanning range. To ensure that at least three positioning posts are in the field of view (to facilitate stitching two images) while reducing the number of scans as much as possible, it is preferable to reduce the diameters of the posts so that three of them fall into the camera's scanning range at the same time.
It should be noted that, during the scanning and photographing of the 3D camera, the image obtained the first time is image A (not shown); when the 3D camera photographs the second time, a partial area of image A is used as an overlapping area and is scanned into the second image, so the second scanned image shares some overlapping areas with the first, which facilitates subsequent stitching.
By comparing the two pieces of image information through their shared overlapping part and merging the identical parts, different parts of images scanned from different angles can be fully matched, and the calibration image is stitched from the combination of multiple images; a calibration image stitched in this way is consistent with the 3D glasses in shape and size.
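A minimal sketch of the overlap-based stitching described above, under the assumption that matching feature points (e.g., the same positioning posts) have already been identified in both scans. For brevity it solves only for a translation between the two point clouds; a full implementation would also estimate rotation. All names are illustrative.

```python
import numpy as np

def stitch_scans(cloud_a, cloud_b, matches):
    """Merge two overlapping scans into one point cloud.

    `matches` lists (index_in_a, index_in_b) pairs for feature points seen
    in both scans.  The mean offset of the matched features aligns cloud_b
    to cloud_a; this sketch fits translation only."""
    ia, ib = np.asarray(matches).T
    offset = (cloud_a[ia] - cloud_b[ib]).mean(axis=0)
    merged = np.vstack([cloud_a, cloud_b + offset])
    return merged, offset
```

With more matched features in the overlap (hence the preference for at least three positioning posts per area), the averaged offset becomes less sensitive to noise in any single feature.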
In order to facilitate the construction of a more accurate process track, step S30 includes the steps of:
s31: selecting the characteristics of the 3D glasses and dividing the regions on the characteristics.
It should be noted that the above-mentioned feature is the outline of the 3D glasses, and in one embodiment, the above-mentioned feature is a curved surface, but obviously, the above-mentioned feature is not limited thereto, and in another embodiment, the above-mentioned feature may also be a flat surface.
Step S30 includes the steps of:
s32: a three-dimensional space measuring box is generated in each of the divided regions.
S33: and calculating corresponding characteristic points by using the three-dimensional space side measuring box according to the geometric characteristics of the 3D glasses.
It should be noted that a three-dimensional space measuring box is generated in each divided region to measure specific points in that region; after a feature point has been produced in every region, the process track of the product is obtained by connecting all the feature points.
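One plausible reading of steps S32 and S33, sketched with assumed names: an axis-aligned 3D measuring box selects the scan points of one region, and a feature point is computed from them (here the centroid, together with a PCA surface normal as one of the "geometric features" the patent mentions). This is an illustration, not the patent's exact computation.

```python
import numpy as np

def feature_point_in_box(points, box_min, box_max):
    """Clip the scan to one region's axis-aligned 3D measuring box and
    reduce the points inside to a single feature point: their centroid,
    plus a PCA surface normal."""
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    inside = points[mask]
    if len(inside) < 3:          # not enough points to define a surface
        return None, None
    centroid = inside.mean(axis=0)
    # normal = eigenvector of the smallest covariance eigenvalue
    eigvals, eigvecs = np.linalg.eigh(np.cov((inside - centroid).T))
    return centroid, eigvecs[:, 0]
```

Connecting the per-region feature points in order then yields the process track of step S40.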
Preferably, the geometric feature of the 3D glasses is a combination of one or more of a triaxial feature, a normal vector, a state quantity and a speed quantity.
Further, step S40 is followed by the step of:
s41: and replacing the coordinate system of the process processing track into the coordinate system of the execution equipment.
It should be noted that the coordinate system of the process track is replaced by the coordinate system of the execution device, so that the execution device can accurately locate the specific process position, which facilitates processing the product and increases processing accuracy.
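The coordinate replacement of step S41 amounts to applying the calibrated camera-to-executor transform to every trajectory point. A minimal sketch, assuming the transform is available as a 4 x 4 homogeneous matrix (executor frame from camera frame); the function name is illustrative.

```python
import numpy as np

def to_executor_frame(trajectory_cam, T_exec_cam):
    """Re-express an N x 3 process trajectory, given in the camera frame,
    in the execution device's frame via a 4 x 4 homogeneous transform."""
    homo = np.hstack([trajectory_cam, np.ones((len(trajectory_cam), 1))])
    return (homo @ T_exec_cam.T)[:, :3]
```

In practice the matrix would come from the spatial relationship calibrated in step S10, and the transformed points are what get uploaded to the execution device in step S411.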
Still further, step S41 is followed by the step of:
s411: and replacing the process processing track into the coordinates in the execution equipment and uploading the coordinates to the execution equipment.
It should be noted that the process track, converted into the execution device's coordinates, is uploaded to the execution device so that the data are stored and cannot be lost; at the next run, the execution device reuses the previous data to complete the processing, which avoids the tedious operation of re-teaching, simplifies the process steps, and increases processing efficiency.
Further, after the process track is uploaded into the execution equipment, the execution equipment runs along the process track, and is thereby controlled to produce the corresponding product according to the track.
According to the intelligent 3D vision processing-based 3D glasses process track execution method, after the 3D glasses are scanned and the corresponding standard model is generated, the standard model is divided into the plurality of areas, so that the 3D glasses are planar in the areas, the teaching path can be suitable for the areas, and the product yield is improved.
While the foregoing is directed to embodiments of the present application, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims.

Claims (10)

1. A3D glasses process track execution method based on intelligent 3D vision processing is characterized by comprising the following steps:
calculating a spatial position relationship of the execution device and the camera;
scanning the 3D glasses and generating a corresponding standard model;
dividing the standard model into a plurality of areas, and grabbing feature points in each area, wherein the range of each area is 1 mm-5 mm;
connecting the feature points within each region to form a process treatment trajectory for the 3D glasses;
and controlling the execution equipment to run according to the process processing track.
2. The 3D glasses process track execution method based on intelligent 3D vision processing according to claim 1, wherein the step of dividing the standard model into a plurality of areas and grabbing feature points in each area, wherein each area ranges from 1 mm to 5 mm, comprises the steps of:
selecting the characteristics of the 3D glasses and dividing the regions on the characteristics.
3. The 3D glasses process track execution method based on intelligent 3D vision processing according to claim 2, wherein the feature is a curved surface or a flat surface.
4. The 3D glasses process track execution method based on intelligent 3D vision processing according to claim 1, wherein the step of dividing the standard model into a plurality of areas and grabbing feature points in each area, wherein each area ranges from 1 mm to 5 mm, comprises the steps of:
generating a three-dimensional space measuring box in each divided region;
and calculating corresponding characteristic points by using the three-dimensional space measuring box according to the geometric characteristics of the 3D glasses.
5. The 3D glasses process track execution method based on intelligent 3D vision processing according to claim 4, wherein the geometric features of the 3D glasses are a combination of one or more of triaxial features, normal vectors, state quantities and speed quantities.
6. The 3D glasses process track execution method based on intelligent 3D vision processing according to claim 4, wherein the step of connecting the feature points in each region to form the process processing track of the 3D glasses is further followed by the step of:
and replacing the coordinate system of the process processing track into the coordinate system of the execution equipment.
7. The 3D glasses process track execution method based on intelligent 3D vision processing according to claim 6, wherein the step of replacing the coordinate system of the process processing track into the coordinate system of the execution device is further followed by the step of:
and replacing the process processing track into the coordinates in the execution equipment and uploading the coordinates to the execution equipment.
8. The 3D glasses process track execution method based on intelligent 3D vision processing according to claim 1, wherein the step of calculating the spatial position relationship of the execution device and the camera comprises the steps of:
installing a calibration block on the execution equipment on the robot;
moving the robot to a position right below the camera to acquire image information;
and calculating the spatial position relation between the camera and the execution equipment according to the image information.
9. The 3D glasses process trajectory execution method based on intelligent 3D vision processing according to claim 8, wherein the step of moving the robot to a position directly below the camera to acquire image information comprises the steps of:
moving the robot to a plurality of different positions below the 3D camera; and
acquiring image information of the robot at each of the different positions.
10. The 3D glasses process trajectory execution method based on intelligent 3D vision processing according to claim 9, wherein the number of robot movements is 4 to 9.
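The calibration in claims 8-10 amounts to estimating the rigid transform between the camera frame and the robot (execution device) frame from the calibration block observed at several robot positions. One common way to solve this, given corresponding 3D positions in both frames, is the SVD-based Kabsch method, sketched below; this is an assumed approach for illustration, not necessarily the patent's exact procedure:

```python
import numpy as np

def fit_camera_to_robot(cam_pts, rob_pts):
    """Estimate the rotation R and translation t mapping calibration-block
    positions observed in the camera frame onto the same positions in the
    robot frame, from the 4-9 observations the claims call for.

    cam_pts, rob_pts: (N, 3) arrays of corresponding positions, N >= 4.
    Returns (R, t) such that rob ~= R @ cam + t.
    """
    cam_pts = np.asarray(cam_pts, dtype=float)
    rob_pts = np.asarray(rob_pts, dtype=float)
    cc, rc = cam_pts.mean(axis=0), rob_pts.mean(axis=0)   # centroids
    H = (cam_pts - cc).T @ (rob_pts - rc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # avoid reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = rc - R @ cc
    return R, t
```

Using 4 to 9 well-spread positions, as claim 10 specifies, over-determines the transform and averages out per-observation measurement noise.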
CN202111512132.9A 2021-12-07 2021-12-07 Intelligent 3D vision processing-based 3D glasses process track execution method Active CN114170314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111512132.9A CN114170314B (en) 2021-12-07 2021-12-07 Intelligent 3D vision processing-based 3D glasses process track execution method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111512132.9A CN114170314B (en) 2021-12-07 2021-12-07 Intelligent 3D vision processing-based 3D glasses process track execution method

Publications (2)

Publication Number Publication Date
CN114170314A true CN114170314A (en) 2022-03-11
CN114170314B CN114170314B (en) 2023-05-26

Family

ID=80485665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111512132.9A Active CN114170314B (en) 2021-12-07 2021-12-07 Intelligent 3D vision processing-based 3D glasses process track execution method

Country Status (1)

Country Link
CN (1) CN114170314B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4338672A (en) * 1978-04-20 1982-07-06 Unimation, Inc. Off-line teach assist apparatus and on-line control apparatus
US20030035100A1 (en) * 2001-08-02 2003-02-20 Jerry Dimsdale Automated lens calibration
CN104281098A (en) * 2014-10-27 2015-01-14 南京航空航天大学 Modeling method for dynamic machining features of complex curved surface
CN104408408A (en) * 2014-11-10 2015-03-11 杭州保迪自动化设备有限公司 Extraction method and extraction device for robot spraying track based on curve three-dimensional reconstruction
CN104634276A (en) * 2015-02-12 2015-05-20 北京唯创视界科技有限公司 Three-dimensional measuring system, photographing device, photographing method, depth calculation method and depth calculation device
CN105118088A (en) * 2015-08-06 2015-12-02 曲阜裕隆生物科技有限公司 3D imaging and fusion method based on pathological slice scanning device
CN105354880A (en) * 2015-10-15 2016-02-24 东南大学 Line laser scanning-based sand blasting robot automatic path generation method
US20160140713A1 (en) * 2013-07-02 2016-05-19 Guy Martin System and method for imaging device modelling and calibration
CN106354098A (en) * 2016-11-04 2017-01-25 大连理工大学 Method for forming cutter machining tracks on NURBS combined curved surface
CN107767414A (en) * 2017-10-24 2018-03-06 林嘉恒 The scan method and system of mixed-precision
CN108527319A (en) * 2018-03-28 2018-09-14 广州瑞松北斗汽车装备有限公司 The robot teaching method and system of view-based access control model system
CN109454642A (en) * 2018-12-27 2019-03-12 南京埃克里得视觉技术有限公司 Robot coating track automatic manufacturing method based on 3D vision
CN109822550A (en) * 2019-02-21 2019-05-31 华中科技大学 A kind of complex-curved robot high-efficiency high-accuracy teaching method
CN210452170U (en) * 2019-06-18 2020-05-05 蓝点触控(北京)科技有限公司 Flexible intelligent polishing system of robot based on six-dimensional force sensor
CN111462253A (en) * 2020-04-23 2020-07-28 深圳群宾精密工业有限公司 Three-dimensional calibration plate, system and calibration method suitable for laser 3D vision
CN111496786A (en) * 2020-04-15 2020-08-07 武汉海默机器人有限公司 Point cloud model-based mechanical arm operation processing track planning method
CN111823734A (en) * 2020-09-10 2020-10-27 季华实验室 Positioning calibration assembly, device, printer and jet printing point coordinate positioning calibration method
CN112435350A (en) * 2020-11-19 2021-03-02 深圳群宾精密工业有限公司 Processing track deformation compensation method and system
CN112489195A (en) * 2020-11-26 2021-03-12 新拓三维技术(深圳)有限公司 Rapid machine adjusting method and system for pipe bender
CN112486098A (en) * 2020-11-20 2021-03-12 张均 Computer-aided machining system and computer-aided machining method
CN112497192A (en) * 2020-11-25 2021-03-16 广州捷士电子科技有限公司 Method for improving teaching programming precision by adopting automatic calibration mode
CN113103226A (en) * 2021-03-08 2021-07-13 同济大学 Visual guide robot system for ceramic biscuit processing and manufacturing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Wei: "Research on Three-Dimensional Reconstruction and Evaluation Technology for Aero-Engine Blades" *

Also Published As

Publication number Publication date
CN114170314B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
EP1378790B1 (en) Method and device for correcting lens aberrations in a stereo camera system with zoom
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN102376089B (en) Target correction method and system
US8564655B2 (en) Three-dimensional measurement method and three-dimensional measurement apparatus
CN109859272B (en) Automatic focusing binocular camera calibration method and device
JP4418841B2 (en) Working device and calibration method thereof
US20060291719A1 (en) Image processing apparatus
JP2008014940A (en) Camera calibration method for camera measurement of planar subject and measuring device applying same
JP2004127239A (en) Method and system for calibrating multiple cameras using calibration object
US20210291376A1 (en) System and method for three-dimensional calibration of a vision system
CN111707187B (en) Measuring method and system for large part
CN101726246A (en) Correcting sheet and correcting method
CN113198692B (en) High-precision dispensing method and device suitable for batch products
CN114331924B (en) Large workpiece multi-camera vision measurement method
CN112082480A (en) Method and system for measuring spatial orientation of chip, electronic device and storage medium
CN110519586B (en) Optical equipment calibration device and method
US20200007843A1 (en) Spatiotemporal calibration of rgb-d and displacement sensors
CN112415010A (en) Imaging detection method and system
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN112254638B (en) Intelligent visual 3D information acquisition equipment that every single move was adjusted
US20120056999A1 (en) Image measuring device and image measuring method
CN115289997B (en) Binocular camera three-dimensional contour scanner and application method thereof
CN114170314A (en) 3D glasses process track execution method based on intelligent 3D vision processing
CN111145247A (en) Vision-based position detection method, robot and computer storage medium
CN113888651B (en) Dynamic and static visual detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230509

Address after: Building A3, 3E Digital Smart Building, No. 526, Fangqiao Road, Caohu Street, Suzhou City, Jiangsu Province, 215000

Applicant after: Qunbin Intelligent Manufacturing Technology (Suzhou) Co.,Ltd.

Address before: 518000 room 314, 3 / F, 39 Queshan new village, Gaofeng community, Dalang street, Longhua District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN QB PRECISION INDUSTRIAL CO.,LTD.

GR01 Patent grant