WO2022261962A1 - Augmented reality-based machining accuracy evaluation method and device - Google Patents

Augmented reality-based machining accuracy evaluation method and device

Info

Publication number
WO2022261962A1
Authority
WO
WIPO (PCT)
Prior art keywords
workpiece
dimensional model
model
dimensional
rendered
Prior art date
Application number
PCT/CN2021/101026
Other languages
English (en)
French (fr)
Inventor
沈轶轩
徐蔚峰
卢超
傅玲
Original Assignee
Siemens Aktiengesellschaft
Siemens Ltd., China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft and Siemens Ltd., China
Priority to CN202180096942.2A (publication CN117222949A)
Priority to PCT/CN2021/101026 (publication WO2022261962A1)
Publication of WO2022261962A1

Links

Images

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/18 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/23 - Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]

Definitions

  • The present disclosure relates to the technical field of machining and, more specifically, to an augmented reality-based machining accuracy evaluation method, device, computing device, computer-readable storage medium, and program product.
  • On machining equipment (such as lathes and grinders), a center is usually used to hold the workpiece. The center keeps the workpiece in place so that it can be machined accordingly, for example by turning or grinding.
  • The magnitude of the holding force plays a crucial role in machining the workpiece. If the holding force is too small, the center cannot hold the workpiece in place or drive it in rotation, and the machining surface of the workpiece cannot be machined at all. If the holding force is too large, however, the workpiece deforms considerably, so that the machining equipment cannot cut at the predetermined positions on the machining surface, and machining accuracy drops. The magnitude of the holding force is therefore directly related to the machining accuracy of the workpiece: it should be determined according to the required accuracy, or at least kept within a reasonable range.
  • In the past, the operator of the machining equipment usually determined from personal experience how much holding force to apply to a workpiece.
  • The machining equipment itself also has a rated maximum holding force and a rated machining accuracy, which can serve as references for workpiece machining.
  • However, the rated maximum holding force is merely the maximum force that the headstock and tailstock of the machine can themselves withstand; it does not apply to the workpiece being machined. Likewise, the rated machining accuracy is only a design value for a workpiece of a particular material, so it is neither accurate enough nor suitable for all situations.
  • A first embodiment of the present disclosure proposes an augmented reality-based machining accuracy evaluation method, including: constructing a three-dimensional model of a workpiece based on the image data and position data of a set of images of the workpiece; using the three-dimensional model to train a recognition model of the workpiece, and providing the recognition model to recognize the position and orientation of the workpiece in the real environment in real time; using the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result, the simulation result indicating the deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with the machining accuracy of the workpiece; and generating a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
  • A second embodiment of the present disclosure proposes an augmented reality-based machining accuracy evaluation device, including: a three-dimensional model construction module configured to construct a three-dimensional model of a workpiece based on the image data and position data of a set of images of the workpiece; a recognition model training module configured to use the three-dimensional model to train a recognition model of the workpiece and to provide the recognition model to recognize the position and orientation of the workpiece in the real environment in real time; a three-dimensional model simulation module configured to use the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result, the simulation result indicating the deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with the machining accuracy of the workpiece; and a simulation result mapping module configured to generate a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
  • A third embodiment of the present disclosure proposes a computing device including: a processor; and a memory for storing computer-executable instructions that, when executed, cause the processor to perform the method of the first embodiment.
  • A fourth embodiment of the present disclosure proposes a computer-readable storage medium having computer-executable instructions stored thereon for performing the method of the first embodiment.
  • A fifth embodiment of the present disclosure proposes a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the method of the first embodiment.
  • Fig. 1 shows a flowchart of an augmented reality-based machining accuracy evaluation method according to an embodiment of the present disclosure;
  • Fig. 2 shows a flowchart of constructing a three-dimensional model of a workpiece according to an embodiment of the present disclosure;
  • Fig. 3 shows a schematic diagram of a set of images of a workpiece acquired from different directions;
  • Fig. 4 shows a schematic block diagram of an augmented reality-based machining accuracy evaluation system according to an embodiment of the present disclosure;
  • Fig. 5 shows a flowchart of the augmented reality-based machining accuracy evaluation method in the embodiment of Fig. 4; and
  • Fig. 6 shows a schematic block diagram of a computing device for machining accuracy evaluation according to an embodiment of the present disclosure.
  • Fig. 1 shows a flowchart of an augmented reality (AR)-based machining accuracy evaluation method according to an embodiment of the present disclosure.
  • Referring to Fig. 1, method 100 starts at step 11.
  • In step 11, a three-dimensional model of the workpiece is constructed based on the image data and position data of a set of images of the workpiece. At least one frame of a depth image of the workpiece is captured using a depth camera or video camera; for example, multiple frames of depth images may be captured from various azimuths around the workpiece.
  • An inertial measurement unit (such as a gyroscope) records the relative position between the depth camera and the workpiece at the moment each depth image is captured, and this relative position serves as the position data of that depth image.
  • Images and positions can be captured directly using the depth camera and gyroscope integrated on an augmented reality (AR) device, or a separate depth camera or video camera and another inertial measurement unit can be used to capture the images and record the positions. Thus, for each frame of the depth image, in addition to the image data comprising the three-dimensional coordinates and color information of each pixel, there is corresponding position data. A 3D model of the workpiece can then be constructed from these image data and position data.
  • In this embodiment, images of the workpiece to be machined are captured in real time and the corresponding position data are recorded for use in constructing the three-dimensional model. It can be understood that the more images are captured, the more accurate the resulting three-dimensional model.
  • In other embodiments, the constructed three-dimensional model may be stored in memory. If the same workpiece needs to be modeled again in the future, the stored 3D model can be read directly from memory without rebuilding it.
  • Next, in step 12, the three-dimensional model is used to train a recognition model of the workpiece, and the recognition model is provided to recognize the position and orientation of the workpiece in the real environment in real time.
  • In order to superimpose the simulated three-dimensional model on the workpiece in the real environment, the position and orientation of the workpiece in the real environment must be recognized; the generated three-dimensional model can therefore be used to train the recognition model of the workpiece.
  • The recognition model can be trained using any suitable algorithm.
  • The trained recognition model is provided to the augmented reality device.
  • The camera of the augmented reality device captures a real-time image of the workpiece in the real environment and feeds this image to the recognition model to obtain the real-time position and orientation of the workpiece.
  • Then, in step 13, the three-dimensional model is used to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result. The simulation result indicates the deformation distribution of each region of the three-dimensional model under the holding force, and this deformation distribution is associated with the machining accuracy of the workpiece.
  • When a holding force is applied to both ends of the workpiece, a slight deformation of the workpiece occurs.
  • When the deformation is large, the machining equipment cannot cut at the predetermined positions on the machining surface of the workpiece, and machining accuracy drops. The greater the deformation of the machining surface, the lower the machining accuracy; the smaller the deformation, the higher the accuracy.
  • The deformation distribution indicated by the simulation result can therefore be used to judge the machining accuracy of the workpiece under the specific holding force.
  • In this embodiment, the simulation is performed using the finite element analysis (FEA) method: the three-dimensional model of the workpiece is meshed, and the transfer and loss of the externally provided holding force between the mesh nodes is computed for the preset workpiece material.
  • The simulation result can be an FEA node list containing node coordinates and node deformation values (such as percentage values), which can be displayed on a display interface in the form of a contour (cloud) plot.
  • In this embodiment, the simulation of the specific holding force applied to both ends of the workpiece is performed in real time.
  • In other embodiments, the simulation results may also be stored in memory: if the same holding force later needs to be simulated on the two ends of the same workpiece, the stored result can be read directly without re-simulating. In yet other embodiments, other suitable simulation methods may be used with the three-dimensional model.
  • Finally, in step 14, a rendered 3D model is generated by processing the simulation result and performing color rendering according to the deformation distribution; the rendered model is superimposed on the workpiece according to the recognized position and orientation of the workpiece.
  • Processing the simulation result includes rebuilding, from the node coordinates in the FEA node list, a three-dimensional model in the format required for display on the augmented reality device.
  • For example, FEA simulation uses a solid 3D model, whereas the display of the augmented reality device requires a surface 3D model; the node coordinates lying on the surface of the model must therefore be extracted from the FEA node list to reconstruct the surface model.
  • Meanwhile, nodes whose deformation values fall into different numerical ranges are rendered in different colors. For example, several value ranges can be defined and mapped, from largest to smallest, to red, yellow, green, blue, and so on; a node whose deformation value lies in a given range is rendered in that range's color.
  • The rendered three-dimensional model thus intuitively indicates the deformation distribution of each region of the model under the simulated holding force.
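A minimal sketch of this deformation-to-color mapping is given below, in Python with NumPy. The bin edges and the four-color palette are illustrative assumptions; the patent only specifies that larger deformation ranges map to colors such as red and smaller ones to colors such as blue.

```python
import numpy as np

# Illustrative thresholds on node deformation values (percentages from the
# FEA node list); the patent does not fix specific bin edges.
EDGES = np.array([0.05, 0.2, 0.5])
PALETTE = np.array([
    (0.0, 0.0, 1.0),   # blue   : smallest deformation range
    (0.0, 1.0, 0.0),   # green
    (1.0, 1.0, 0.0),   # yellow
    (1.0, 0.0, 0.0),   # red    : largest deformation range
])

def node_colors(deformation):
    """Assign one RGB color per FEA node according to its deformation bin."""
    return PALETTE[np.digitize(deformation, EDGES)]
```

A renderer can then paint each reconstructed surface node (or the vertices of the surface mesh) with these per-node colors before the model is handed to the AR device.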
  • The rendered three-dimensional model is then provided to the augmented reality device so that it can be superimposed on the workpiece in the real environment according to the recognized position and orientation of the workpiece.
  • Those skilled in the art will understand that the above augmented reality-based machining accuracy evaluation method is applicable to any machining equipment that uses a center as the workpiece fixing device, including but not limited to lathes, grinders, drilling machines, and the like.
  • Fig. 2 shows a flowchart of constructing a three-dimensional model of a workpiece according to one embodiment of the present disclosure.
  • In Fig. 2, step 11 includes sub-steps 111 to 114.
  • In sub-step 111, a three-dimensional point cloud of the workpiece is reconstructed from the image data and position data.
  • As described above, each frame of the depth image has image data comprising the three-dimensional coordinates and color information of each pixel, together with position data.
  • For each frame, a point cloud is generated from the pixels' 3D coordinates.
  • The position data of each frame is then used for coordinate transformation from the local coordinate system to the world coordinate system, and the generated point clouds are fused. Since some regions in different depth images share the same color information, that color information can be used to calibrate the point clouds, finally yielding the 3D point cloud of the workpiece.
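As one possible realization of this sub-step (a sketch only: the intrinsic matrix K, the camera-to-world pose, and all names below are assumptions, since the patent does not prescribe an implementation), each depth frame can be back-projected through a pinhole model and moved into the world frame using its recorded position data:

```python
import numpy as np

def depth_frame_to_world_points(depth, rgb, K, T_cam_to_world):
    """Back-project one depth frame into a colored point cloud in world coordinates.

    depth: (H, W) metric depth values; rgb: (H, W, 3) per-pixel colors;
    K: 3x3 pinhole intrinsics; T_cam_to_world: 4x4 pose built from the
    position data recorded for this frame.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0                        # skip pixels without a depth reading
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]   # X = (u - cx) * Z / fx
    y = (v[valid] - K[1, 2]) * z / K[1, 1]   # Y = (v - cy) * Z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # homogeneous
    pts_world = (T_cam_to_world @ pts_cam.T).T[:, :3]        # local -> world
    return pts_world, rgb[valid]

# Fusion is then concatenation in the shared world frame, e.g.:
# cloud = np.vstack([depth_frame_to_world_points(d, c, K, T)[0]
#                    for d, c, T in frames])
```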
  • Next, in sub-step 112, the 3D point cloud is preprocessed.
  • Preprocessing can include downsampling and point cloud filtering.
  • To reduce the amount of computation and speed up processing, the 3D point cloud can be downsampled so that operations on the full point set are instead carried out on the downsampled points. Any suitable sampling scheme and sampling algorithm may be used.
  • The downsampled point cloud is then filtered to remove outliers, which can be done with any known point cloud filtering method (such as statistical or geometric methods).
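Downsampling and statistical outlier filtering could look as follows with the Open3D library. Open3D is an assumed tool choice (the patent names no library), and the voxel size and filter parameters are example values only.

```python
import open3d as o3d

# pts_world and colors as produced by the back-projection sketch above,
# with 8-bit colors rescaled into [0, 1] as Open3D expects.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts_world)
pcd.colors = o3d.utility.Vector3dVector(colors / 255.0)

# Voxel-grid downsampling: keep one representative point per 2 mm voxel.
down = pcd.voxel_down_sample(voxel_size=0.002)

# Statistical outlier filter: drop points whose mean distance to their 20
# nearest neighbors deviates by more than 2 standard deviations.
filtered, kept_idx = down.remove_statistical_outlier(nb_neighbors=20,
                                                     std_ratio=2.0)
```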
  • Then, in sub-step 113, the three-dimensional point cloud is converted into a mesh using a mesh generation algorithm.
  • The points in the point cloud are regarded as points on the surface of the 3D model. First, the k nearest neighbors of each point are selected to determine the point's normal vector. Then, suitable neighboring points are found according to the normal vector, and every three points form a triangular facet. Finally, the triangular facets make up the surface of the 3D model, i.e., the mesh.
  • In this embodiment, the mesh generation algorithm includes the Poisson surface reconstruction algorithm or the VRIP algorithm; in other embodiments, any other mesh generation algorithm may be used.
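Continuing the Open3D sketch above (again an assumed tool choice, with example parameters), Poisson surface reconstruction needs per-point normals, which matches the k-nearest-neighbor normal estimation just described:

```python
# Estimate each point's normal from its 30 nearest neighbors.
filtered.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))

# Poisson surface reconstruction; a higher octree depth gives a finer
# (and slower) mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    filtered, depth=9)
```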
  • Finally, in sub-step 114, post-processing and format conversion are performed on the mesh to obtain the three-dimensional model.
  • Post-processing of the mesh includes removing isolated triangular facets.
  • An isolated facet is a triangle that shares no vertex or edge with any other triangle in the mesh.
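Since a triangle that shares no vertex with any other triangle cannot share an edge either, isolation can be tested on vertices alone. A small sketch (a hypothetical helper operating on an (N, 3) array of vertex indices, such as np.asarray(mesh.triangles) from the sketch above):

```python
import numpy as np

def drop_isolated_triangles(triangles):
    """Remove triangles whose three vertices are each used by no other triangle."""
    counts = np.bincount(triangles.ravel())          # triangle count per vertex
    isolated = np.all(counts[triangles] == 1, axis=1)
    return triangles[~isolated]
```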
  • In this embodiment, the 3D model format required for simulation and for recognition model training differs from the mesh (for example, they require a solid 3D model), so after post-processing the mesh must be converted to the required format. In other embodiments, whether to perform format conversion can be determined according to actual needs.
  • In an embodiment according to the present disclosure, the set of images is captured by the depth camera of the augmented reality device, the position data is determined by the gyroscope of the augmented reality device, and the augmented reality device also uses the recognition model to recognize the position and orientation of the workpiece and superimposes the rendered 3D model on the workpiece according to that position and orientation.
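The superposition step can be reduced to a standard rigid transform plus pinhole projection. The sketch below assumes the recognition model outputs the workpiece pose as a rotation matrix R and a translation t in the camera frame; the patent does not specify the pose parameterization.

```python
import numpy as np

def overlay_pixels(vertices, R, t, K):
    """Project the rendered model's vertices into the AR camera image.

    vertices: (N, 3) model-frame coordinates; R (3x3), t (3,): recognized
    workpiece orientation and position; K: camera intrinsic matrix.
    """
    cam = vertices @ R.T + t          # model frame -> camera frame
    uv = cam @ K.T                    # pinhole projection
    return uv[:, :2] / uv[:, 2:3]     # perspective divide -> pixel coordinates
```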
  • Fig. 3 shows a schematic diagram of a set of images of a workpiece acquired from different directions. As shown in Fig. 3, the augmented reality device captures four frames of depth images of the workpiece from four directions A, B, C, and D. It should be noted that Fig. 3 is only an example; those skilled in the art will understand that any number of depth images of the workpiece may be acquired from other positions and/or directions.
  • Because image capture and position recording are performed by the depth camera and gyroscope of the augmented reality device, and the rendered 3D model is superimposed after the position and orientation of the workpiece in the real environment are recognized, the operator can complete 3D model construction, simulation, and inspection of simulation results on demand, without having to model the workpiece in advance, which greatly improves convenience and operability.
  • In an embodiment according to the present disclosure, when the rendered three-dimensional model is superimposed on the workpiece, it indicates whether the machining surface of the workpiece meets the required machining accuracy.
  • After the simulation, the entire 3D model is rendered in different colors to represent different deformations. Since the machining equipment only machines the machining surface of the workpiece, usually only the degree of deformation of the machining surface needs to be considered when evaluating machining accuracy.
  • After the rendered 3D model is superimposed on the workpiece, the operator can directly see the color corresponding to the machining surface and, by checking whether that color meets expectations, judge whether the applied holding force satisfies the required machining accuracy and hence decide on machining feasibility.
  • In an embodiment according to the present disclosure, step 12 further includes: extracting two-dimensional features of the three-dimensional model at multiple sets of virtual positions and virtual orientations of the model to obtain multiple sets of two-dimensional features; and training the recognition model with these sets of two-dimensional features as training samples.
  • Each set of virtual position and virtual orientation simulates one viewing angle from which a virtual camera captures the three-dimensional model.
  • At each such viewpoint, one set of two-dimensional features of the three-dimensional model is extracted.
  • The extracted sets of two-dimensional features serve as the training samples for the recognition model.
  • The more virtual positions and orientations are selected, the more accurate the trained recognition model.
  • The trained recognition model can output the position and orientation of the workpiece from an input workpiece image.
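One plausible form of this training-sample extraction is sketched below. Here render_view is a hypothetical renderer that rasterizes the 3D model from a virtual camera pose into a grayscale image, and ORB is used as one possible 2D feature; the patent leaves both the renderer and the feature algorithm open.

```python
import cv2

def training_features(render_view, poses):
    """Extract one set of 2D features per virtual (position, orientation) pair."""
    orb = cv2.ORB_create(nfeatures=500)
    samples = []
    for pose in poses:
        view = render_view(pose)                 # uint8 grayscale rendering
        keypoints, descriptors = orb.detectAndCompute(view, None)
        samples.append((pose, keypoints, descriptors))
    return samples
```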
  • In an embodiment according to the present disclosure, step 13 further includes: receiving the specific holding force provided externally; identifying the two force-bearing end faces of the three-dimensional model; and computing the deformation distribution of each region of the three-dimensional model under the simulated condition that one of the two force-bearing end faces is constrained and the specific holding force is applied to the other.
  • In this embodiment, the operator may input the holding force to be simulated via the augmented reality device and may obtain its magnitude from a force sensor. If the deformation of the workpiece under several holding forces needs to be examined, they are input and simulated one by one. In other embodiments, the value of the holding force may be received directly from the force sensor.
  • During simulation, the two force-bearing end faces of the 3D model are first identified. It can be understood that the workpiece in the present disclosure has an elongated shape, so the surfaces at the two ends along its length serve as the two force-bearing end faces. As mentioned above, the simulation is performed using the finite element analysis (FEA) method.
  • During finite element analysis, the three-dimensional model of the workpiece is meshed; with one of the two force-bearing end faces constrained and the specified holding force applied to the other, the transfer and loss of the holding force between mesh nodes is computed for the preset workpiece material, yielding an FEA node list containing node coordinates and node deformation values (such as percentage values).
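For intuition only, a drastically simplified one-dimensional stand-in for this meshed simulation is sketched below: an elastic bar fixed at one end face with the holding force applied at the other. The geometry, the material constant E, and the element count are example inputs, and a real implementation would use a 3D FEA solver as the patent describes.

```python
import numpy as np

def bar_deformation(length, area, E, n_elems, holding_force):
    """Nodal displacements of an axially loaded bar (1D FEA sketch).

    One end face (node 0) is constrained; the holding force acts on the
    other end face (the last node).
    """
    n_nodes = n_elems + 1
    k = E * area / (length / n_elems)            # axial stiffness per element
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elems):                     # assemble global stiffness
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n_nodes)
    f[-1] = holding_force                        # load on the free end face
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])    # solve with node 0 fixed
    return u                                     # deformation along the bar

# Example: a 0.3 m steel bar with a 2 cm^2 cross-section under 500 N:
# bar_deformation(0.3, 2e-4, 210e9, 50, holding_force=500.0)
```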
  • The application scenario and workflow of the machining accuracy evaluation method of the present disclosure are illustrated below with a specific embodiment. Fig. 4 shows a schematic block diagram of an augmented reality-based machining accuracy evaluation system 400 according to an embodiment of the present disclosure.
  • Fig. 5 shows a flowchart of the augmented reality-based machining accuracy evaluation method 500 in the embodiment of Fig. 4.
  • As shown in Fig. 4, the machining accuracy evaluation system 400 comprises two parts: an augmented reality device 41 and an edge device or cloud device 42.
  • The augmented reality device 41 integrates a depth camera, a gyroscope, and a display or lens.
  • The edge device or cloud device 42 may be any computing device that is physically separate from the augmented reality device 41.
  • More specifically, referring to Fig. 4, the augmented reality device 41 includes an image capture module 411, an object recognition module 412, a display module 413, and a communication module (not shown in Fig. 4).
  • The image capture module 411 includes the depth camera and gyroscope.
  • The display module 413 includes the display or lens.
  • The edge device or cloud device 42 includes a 3D model construction module 421, a recognition model training module 422, a 3D model simulation module 423, a simulation result mapping module 424, and a communication module (not shown in Fig. 4).
  • The augmented reality device 41 and the edge device or cloud device 42 communicate via the communication modules to transmit data.
  • The augmented reality-based machining accuracy evaluation system 400 can be applied in a scenario such as this: in the design stage of a new product, evaluating whether the machining equipment can achieve the designed machining accuracy of a certain workpiece, or determining how much holding force is required to achieve that accuracy.
  • However, the situations in which the machining accuracy evaluation system 400 can be used are not limited to the above scenario; it can be used in any scenario where the machining accuracy of a workpiece needs to be evaluated.
  • When an operator needs to evaluate the machining accuracy of a workpiece under a specific holding force, he or she first moves a head-mounted or handheld augmented reality device 41 around the workpiece to capture a set of depth images for constructing the 3D model, and inputs the holding force to be simulated.
  • The operator can obtain the magnitude of the holding force from a force sensor.
  • The augmented reality device 41 sends the image data and position data of the depth images to the edge device or cloud device 42.
  • The operator then uses the augmented reality device 41 to capture a real-time image of the workpiece from a certain viewing angle.
  • The edge device or cloud device 42 constructs the three-dimensional model of the workpiece from the image data and position data, simulates the input holding force applied to both ends of the workpiece to generate a rendered three-dimensional model, and at the same time trains the recognition model of the workpiece; both the rendered 3D model and the recognition model are sent to the augmented reality device 41.
  • The augmented reality device 41 uses the recognition model to identify the position and orientation of the workpiece in the real-time image and superimposes the rendered three-dimensional model on the real workpiece.
  • The operator inspects the simulated deformation of the workpiece through the display or lens, i.e., the colors of the 3D model superimposed on the workpiece, especially the color corresponding to the machining surface, and evaluates the machining accuracy by judging whether that color matches the expected color, thereby deciding machining feasibility. For example, when a specific holding force is applied to both ends of the workpiece and the color corresponding to a machining surface is red, the deformation of that surface is relatively large; the operator judges that it does not match the expected blue, so applying that holding force cannot satisfy the required machining accuracy. The operator then adjusts the holding force and inputs it again.
  • The edge device or cloud device 42 simulates again with the 3D model of the workpiece and generates a new rendered 3D model, which the augmented reality device 41 superimposes on the real workpiece; the operator inspects it again and judges the machining accuracy.
  • More specifically, referring to Figs. 4 and 5 together, in step 51 the image capture module 411 captures a set of depth images of the workpiece, records the image positions, and sends the image data and position data to the 3D model construction module 421 via the communication module.
  • The communication module also sends the holding force input by the operator to the three-dimensional model simulation module 423.
  • In step 52, the 3D model construction module 421 constructs a 3D model of the workpiece from the image data and position data of the set of depth images and provides the constructed model to the recognition model training module 422 and the 3D model simulation module 423 respectively.
  • The process of constructing the three-dimensional model has been described in the above embodiments and is not repeated here.
  • Next, the recognition model of the workpiece is trained and the simulation is performed using the 3D model.
  • In step 53, the recognition model training module 422 uses the constructed three-dimensional model to train the recognition model of the workpiece and sends the trained model to the object recognition module 412 via the communication module.
  • Then, in step 54, the object recognition module 412 feeds the real-time image of the workpiece to the recognition model to obtain the current position and orientation of the workpiece in the real environment and provides these data to the display module 413.
  • Meanwhile, in step 55, the 3D model simulation module 423 uses the constructed 3D model to simulate, by the finite element analysis method, the situation in which the input holding force is applied to the workpiece, obtaining a simulation result.
  • The simulation result is an FEA node list containing node coordinates and node deformation values (such as percentage values). The simulation process has been described in the above embodiments and is not repeated here.
  • Then, in step 56, the simulation result mapping module 424 generates a rendered 3D model by processing the simulation result and performing color rendering according to the deformation distribution, and sends the rendered model to the display module 413 via the communication module.
  • Finally, in step 57, the display module 413 superimposes the rendered 3D model on the real workpiece according to the position and orientation of the workpiece in the real environment.
  • Through the display module 413, the operator sees the different colors on the real workpiece, which represent the deformation distribution under the input holding force.
  • As noted above, the deformation of a machining surface is associated with the machining accuracy of that surface. The operator can focus only on the color corresponding to the machining surface of the workpiece, ignore the colors of non-machining surfaces, evaluate the machining accuracy by judging whether the machining surface's color matches the expected color, and then decide machining feasibility.
  • If the color corresponding to the machining surface matches the expected color, the deformation of that surface is consistent with the required machining accuracy.
  • In some cases, the workpiece has multiple machining surfaces, some with a higher tolerance for machining accuracy and some with stricter requirements. In such cases, the deformations of the multiple machining surfaces can be weighed together according to the actual situation.
  • It should be noted that, in the above steps, steps 53 and 54 are executed sequentially, and steps 55 and 56 are executed sequentially, but steps 53 and 55 do not depend on each other, so they may be executed in parallel or sequentially in any order, as long as the rendered three-dimensional model can be superimposed on the workpiece in step 57.
  • If the operator wishes to know the machining accuracy of the workpiece under another holding force, that force must be input anew: the three-dimensional model simulation module 423 simulates again, the simulation result mapping module 424 generates a new rendered three-dimensional model, and the display module 413 superimposes it on the workpiece. If the operator wishes to know the machining accuracy of another workpiece, the whole process must be repeated, starting from capturing a set of depth images of that workpiece with the augmented reality device 41.
  • The present disclosure also proposes an augmented reality-based machining accuracy evaluation device.
  • Each module of the machining accuracy evaluation device can be realized in software, in hardware (such as an integrated circuit or FPGA), or in a combination of software and hardware.
  • Referring to Fig. 4, the device includes a 3D model construction module 421, a recognition model training module 422, a 3D model simulation module 423, and a simulation result mapping module 424.
  • The three-dimensional model construction module 421 is configured to construct a three-dimensional model of the workpiece based on the image data and position data of a set of images of the workpiece.
  • The recognition model training module 422 is configured to use the three-dimensional model to train a recognition model of the workpiece and to provide the recognition model to recognize the position and orientation of the workpiece in the real environment in real time.
  • The three-dimensional model simulation module 423 is configured to use the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result, the simulation result indicating the deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with the machining accuracy of the workpiece.
  • The simulation result mapping module 424 is configured to generate a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
  • Optionally, the set of images is captured by the depth camera of the augmented reality device, the position data is determined by the gyroscope of the augmented reality device, and the augmented reality device also uses the recognition model to recognize the position and orientation of the workpiece and superimposes the rendered 3D model on the workpiece according to that position and orientation.
  • Optionally, the 3D model construction module 421 is further configured to: reconstruct the 3D point cloud of the workpiece based on the image data and position data; preprocess the 3D point cloud; convert the 3D point cloud into a mesh using a mesh generation algorithm; and perform post-processing and format conversion on the mesh to obtain the 3D model.
  • Optionally, the mesh generation algorithm includes the Poisson surface reconstruction algorithm or the VRIP algorithm.
  • Optionally, the recognition model training module is further configured to: extract two-dimensional features of the three-dimensional model at multiple sets of virtual positions and virtual orientations of the model to obtain multiple sets of two-dimensional features; and train the recognition model with these sets as training samples.
  • Optionally, the 3D model simulation module is further configured to: receive the specific holding force provided externally; identify the two force-bearing end faces of the 3D model; and compute the deformation distribution of each region of the 3D model under the simulated condition that one of the two force-bearing end faces is constrained and the specific holding force is applied to the other.
  • Optionally, the simulation is performed based on a finite element analysis method.
  • Optionally, when the rendered three-dimensional model is superimposed on the workpiece, it indicates whether the machining surface of the workpiece meets the required machining accuracy.
  • Fig. 6 shows that a computing device 600 for machining accuracy evaluation includes a central processing unit (CPU) 601 (e.g., a processor) and a memory 602 coupled to the CPU 601.
  • The memory 602 stores computer-executable instructions that, when executed, cause the CPU 601 to perform the methods of the above embodiments.
  • The CPU 601 and the memory 602 are connected to each other via a bus, to which an input/output (I/O) interface is also connected.
  • The computing device 600 may also include components (not shown in Fig. 6) connected to the I/O interface, including but not limited to: input units such as a keyboard or mouse; output units such as various types of displays and speakers; storage units such as magnetic disks and optical discs; and communication units such as network cards, modems, and wireless communication transceivers.
  • The communication unit allows the computing device 600 to exchange information and data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • Alternatively, the above methods may be realized by a computer-readable storage medium carrying computer-readable program instructions for implementing the various embodiments of the present disclosure.
  • A computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • It may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • More specific (non-exhaustive) examples of computer-readable storage media include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the above.
  • Computer-readable storage media, as used here, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • Thus, in another embodiment, the present disclosure provides a computer-readable storage medium having computer-executable instructions stored thereon for performing the methods in the various embodiments of the present disclosure.
  • In another embodiment, the present disclosure provides a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the methods in the various embodiments of the present disclosure.
  • In general, the various example embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, firmware, logic, or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software executed by a controller, microprocessor, or other computing device.
  • While aspects of the embodiments of the present disclosure are illustrated or described as block diagrams, flowcharts, or some other graphical representation, it will be understood that the blocks, devices, systems, techniques, or methods described herein may be implemented, as non-limiting examples, in hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware, controllers or other computing devices, or some combination thereof.
  • The computer-readable program instructions or computer program products for executing the various embodiments of the present disclosure can also be stored in the cloud; when they are needed, a user can access them through the mobile Internet, a fixed network, or another network to execute the instructions of an embodiment of the present disclosure, thereby implementing the technical solutions disclosed in the various embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality-based machining accuracy evaluation method, comprising: constructing a three-dimensional model of a workpiece based on the image data and position data of a set of images of the workpiece (11); using the three-dimensional model to train a recognition model of the workpiece, and providing the recognition model to recognize the position and orientation of the workpiece in the real environment in real time (12); using the three-dimensional model to simulate a specific holding force applied to the workpiece to obtain a simulation result, the simulation result indicating the deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with machining accuracy (13); and generating a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation (14). The method is independent of human experience, requires little manual input, and is applicable to a wide variety of workpieces; it is convenient and fast and improves the accuracy of machining accuracy evaluation.

Description

Augmented reality-based machining accuracy evaluation method and device
Technical Field
The present disclosure relates to the technical field of machining and, more specifically, to an augmented reality-based machining accuracy evaluation method, device, computing device, computer-readable storage medium, and program product.
Background Art
On machining equipment (such as lathes and grinders), a center is usually used to hold the workpiece. The center keeps the workpiece in place so that it can be machined accordingly, for example by turning or grinding. The magnitude of the holding force plays a crucial role in machining the workpiece. If the holding force is too small, the center cannot hold the workpiece in place or drive it in rotation, and the machining surface cannot be machined. If the holding force is too large, however, the workpiece deforms considerably, so that the machining equipment cannot cut at the predetermined positions on the machining surface and machining accuracy drops. The magnitude of the holding force is therefore directly related to the machining accuracy of the workpiece; it should be determined according to the required accuracy, or at least kept within a reasonable range.
In the past, the operator of the machining equipment usually determined from personal experience how much holding force to apply to a workpiece. In addition, the machining equipment itself has a rated maximum holding force and machining accuracy that serve as references for workpiece machining. However, as requirements on machining accuracy keep rising, it is difficult to determine the exact value or value range of the holding force that satisfies the required accuracy.
Summary of the Invention
In the prior art, determining the magnitude of the holding force from the operator's experience depends heavily on human experience; it is neither universal nor easy to standardize. Moreover, even if an experienced operator knows how much holding force should be applied to the workpiece to be machined, he or she cannot evaluate the actual machining accuracy, which may therefore fall far short of the required accuracy; as a result, the life cycle of new product design is usually rather long. Furthermore, the rated maximum holding force of the machining equipment is only the maximum force that its headstock and tailstock can themselves withstand and does not apply to the workpiece being machined, while the rated machining accuracy is only a design value for a workpiece of a particular material, so it is neither accurate enough nor suitable for all situations.
In view of the above technical problems, a first embodiment of the present disclosure proposes an augmented reality-based machining accuracy evaluation method, including: constructing a three-dimensional model of a workpiece based on the image data and position data of a set of images of the workpiece; using the three-dimensional model to train a recognition model of the workpiece, and providing the recognition model to recognize the position and orientation of the workpiece in the real environment in real time; using the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result, the simulation result indicating the deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with the machining accuracy of the workpiece; and generating a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
In this embodiment, by building a three-dimensional model of the workpiece in real time, simulating a specific holding force applied to both ends of the workpiece, and superimposing the generated rendered three-dimensional model on the workpiece, the operator can quickly and intuitively inspect the deformation of the workpiece under that holding force, evaluate the machining accuracy, adjust the simulated holding force as appropriate, judge the machining feasibility of the workpiece, and shorten the product design life cycle. The method requires no prior three-dimensional model of the workpiece, is independent of human experience, and needs little manual input; it is therefore applicable to a wide variety of workpieces, is convenient and fast, and improves the accuracy of machining accuracy evaluation.
A second embodiment of the present disclosure proposes an augmented reality-based machining accuracy evaluation device, including: a three-dimensional model construction module configured to construct a three-dimensional model of a workpiece based on the image data and position data of a set of images of the workpiece; a recognition model training module configured to use the three-dimensional model to train a recognition model of the workpiece and to provide the recognition model to recognize the position and orientation of the workpiece in the real environment in real time; a three-dimensional model simulation module configured to use the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result, the simulation result indicating the deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with the machining accuracy of the workpiece; and a simulation result mapping module configured to generate a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
In this embodiment, by building a three-dimensional model of the workpiece in real time, simulating a specific holding force applied to both ends of the workpiece, and superimposing the generated rendered three-dimensional model on the workpiece, the operator can quickly and intuitively inspect the deformation of the workpiece under that holding force, evaluate the machining accuracy, adjust the simulated holding force as appropriate, judge the machining feasibility of the workpiece, and shorten the product design life cycle. The method requires no prior three-dimensional model of the workpiece, is independent of human experience, and needs little manual input; it is therefore applicable to a wide variety of workpieces, is convenient and fast, and improves the accuracy of machining accuracy evaluation.
A third embodiment of the present disclosure proposes a computing device including: a processor; and a memory for storing computer-executable instructions that, when executed, cause the processor to perform the method of the first embodiment.
A fourth embodiment of the present disclosure proposes a computer-readable storage medium having computer-executable instructions stored thereon for performing the method of the first embodiment.
A fifth embodiment of the present disclosure proposes a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the method of the first embodiment.
Brief Description of the Drawings
The features, advantages, and other aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, which show, by way of example and not limitation, several embodiments of the present disclosure. In the drawings:
Fig. 1 shows a flowchart of an augmented reality-based machining accuracy evaluation method according to an embodiment of the present disclosure;
Fig. 2 shows a flowchart of constructing a three-dimensional model of a workpiece according to an embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of a set of images of a workpiece acquired from different directions;
Fig. 4 shows a schematic block diagram of an augmented reality-based machining accuracy evaluation system according to an embodiment of the present disclosure;
Fig. 5 shows a flowchart of the augmented reality-based machining accuracy evaluation method in the embodiment of Fig. 4; and
Fig. 6 shows a schematic block diagram of a computing device for machining accuracy evaluation according to an embodiment of the present disclosure.
Detailed Description of Embodiments
Various exemplary embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. Although the exemplary methods and devices described below include, among other components, software and/or firmware executed on hardware, it should be noted that these examples are merely illustrative and should not be regarded as limiting. For example, it is contemplated that any or all of the hardware, software, and firmware components could be implemented exclusively in hardware, exclusively in software, or in any combination of hardware and software. Therefore, although exemplary methods and devices are described below, those skilled in the art will readily appreciate that the examples provided are not intended to limit the ways in which these methods and devices can be implemented.
Furthermore, the flowcharts and block diagrams in the drawings illustrate the architecture, functionality, and operation of possible implementations of methods and systems according to various embodiments of the present disclosure. It should be noted that the functions noted in the blocks may also occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the flowcharts and/or block diagrams, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
As used herein, the terms "include", "comprise", and similar terms are open-ended, i.e., "including/comprising but not limited to", meaning that other content may also be included. The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; and so on.
The present disclosure is described below by way of several embodiments. Fig. 1 shows a flowchart of an augmented reality (AR)-based machining accuracy evaluation method according to an embodiment of the present disclosure. Referring to Fig. 1, method 100 starts at step 11. In step 11, a three-dimensional model of the workpiece is constructed based on the image data and position data of a set of images of the workpiece. At least one frame of a depth image of the workpiece is captured using a depth camera or video camera; for example, multiple frames of depth images may be captured from various azimuths around the workpiece. An inertial measurement unit (such as a gyroscope) records the relative position between the depth camera and the workpiece at the moment each depth image is captured, and this relative position serves as the position data of that depth image. Images and positions can be captured directly using the depth camera and gyroscope integrated on an augmented reality (AR) device, or a separate depth camera or video camera and another inertial measurement unit can be used. Thus, for each frame of the depth image, in addition to image data comprising the three-dimensional coordinates and color information of each pixel, there is corresponding position data, from which the three-dimensional model of the workpiece can then be constructed. It can be understood that the more images are captured, the more accurate the resulting three-dimensional model. In this embodiment, images of the workpiece to be machined are captured in real time and the corresponding position data are recorded for constructing the three-dimensional model. In other embodiments, the constructed three-dimensional model may be stored in memory; if the same workpiece needs to be modeled again in the future, the stored model can be read directly from memory without rebuilding.
Next, in step 12, the three-dimensional model is used to train a recognition model of the workpiece, and the recognition model is provided to recognize the position and orientation of the workpiece in the real environment in real time. In order to superimpose the simulated three-dimensional model on the workpiece in the real environment, the position and orientation of the workpiece in the real environment must be recognized. The generated three-dimensional model can therefore be used to train the recognition model of the workpiece, using any suitable algorithm. The trained recognition model is provided to the augmented reality device, whose camera captures a real-time image of the workpiece in the real environment and feeds it to the recognition model to obtain the real-time position and orientation of the workpiece.
Then, in step 13, the three-dimensional model is used to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result; the simulation result indicates the deformation distribution of each region of the three-dimensional model under the holding force, and the deformation distribution is associated with the machining accuracy of the workpiece. When a holding force is applied to both ends of the workpiece, a slight deformation occurs. When the deformation is large, the machining equipment cannot cut at the predetermined positions on the machining surface, and machining accuracy drops; the greater the deformation of the machining surface, the lower the machining accuracy, and the smaller the deformation, the higher the accuracy. The deformation distribution indicated by the simulation result can therefore be used to judge the machining accuracy of the workpiece under the specific holding force. In this embodiment, the simulation is performed using the finite element analysis (FEA) method: the three-dimensional model of the workpiece is meshed, and the transfer and loss of the externally provided holding force between mesh nodes is computed for the preset workpiece material to obtain the simulation result. The simulation result can be an FEA node list containing node coordinates and node deformation values (such as percentage values), which can be displayed on a display interface as a contour (cloud) plot. In this embodiment, the simulation of the specific holding force applied to both ends of the workpiece is performed in real time. In other embodiments, the simulation result may also be stored in memory; if the same holding force later needs to be simulated on the two ends of the same workpiece, the stored result can be read directly without re-simulating. In yet other embodiments, other suitable simulation methods may be used with the three-dimensional model.
Finally, in step 14, a rendered three-dimensional model is generated by processing the simulation result and performing color rendering according to the deformation distribution; the rendered model is superimposed on the workpiece according to the recognized position and orientation of the workpiece. Processing the simulation result includes rebuilding, from the node coordinates in the FEA node list, a three-dimensional model in the format required for display on the augmented reality device. For example, FEA simulation uses a solid three-dimensional model, whereas the display of the augmented reality device requires a surface model, so the node coordinates lying on the model surface must be extracted from the FEA node list to reconstruct the surface model. Meanwhile, nodes whose deformation values fall into different numerical ranges are rendered in different colors: for example, several value ranges are defined and mapped, from largest to smallest, to red, yellow, green, blue, and so on, and a node whose deformation value lies in a given range is rendered in that range's color. The rendered three-dimensional model thus intuitively indicates the deformation distribution of each region of the model under the simulated holding force. The rendered model is then provided to the augmented reality device so as to be superimposed on the workpiece in the real environment according to the recognized position and orientation of the workpiece.
Those skilled in the art will understand that the above augmented reality-based machining accuracy evaluation method is applicable to any machining equipment that uses a center as the workpiece fixing device, including but not limited to lathes, grinders, drilling machines, and the like.
In the above embodiment, by building a three-dimensional model of the workpiece in real time, simulating a specific holding force applied to both ends of the workpiece, and superimposing the generated rendered three-dimensional model on the workpiece, the operator can quickly and intuitively inspect the deformation of the workpiece under that holding force, evaluate the machining accuracy, adjust the simulated holding force as appropriate, judge the machining feasibility of the workpiece, and shorten the product design life cycle. The method requires no prior three-dimensional model of the workpiece, is independent of human experience, and needs little manual input; it is therefore applicable to a wide variety of workpieces, is convenient and fast, and improves the accuracy of machining accuracy evaluation.
The specific process of constructing a three-dimensional model of a workpiece according to an embodiment of the present disclosure is described below with reference to Fig. 2. Fig. 2 shows a flowchart of constructing a three-dimensional model of a workpiece according to an embodiment of the present disclosure. In Fig. 2, step 11 includes sub-steps 111 to 114. In sub-step 111, a three-dimensional point cloud of the workpiece is reconstructed from the image data and position data. As described above, each frame of the depth image has image data comprising the three-dimensional coordinates and color information of each pixel, together with position data. For each frame, a point cloud is generated from the pixels' three-dimensional coordinates. The position data of each frame is then used for coordinate transformation from the local coordinate system to the world coordinate system, and the generated point clouds are fused. Since some regions in different depth images share the same color information, that color information can be used to calibrate the point clouds, finally yielding the three-dimensional point cloud of the workpiece.
Then, in sub-step 112, the three-dimensional point cloud is preprocessed. Preprocessing can include downsampling and point cloud filtering. To reduce the amount of computation and speed up processing, the point cloud can be downsampled so that operations on all points are instead carried out on the downsampled points; any suitable sampling scheme and algorithm may be used. The downsampled point cloud is then filtered to remove outliers, which can be done with any known point cloud filtering method (such as statistical or geometric methods).
Next, in sub-step 113, the three-dimensional point cloud is converted into a mesh using a mesh generation algorithm. The points in the point cloud are regarded as points on the surface of the three-dimensional model. First, the k nearest neighbors of each point are selected to determine the point's normal vector; then suitable neighboring points are found according to the normal vector, and every three points form a triangular facet; finally, the triangular facets make up the surface of the three-dimensional model, i.e., the mesh. In this embodiment, the mesh generation algorithm includes the Poisson surface reconstruction algorithm or the VRIP algorithm; in other embodiments, any other mesh generation algorithm may be used.
Finally, in sub-step 114, post-processing and format conversion are performed on the mesh to obtain the three-dimensional model. Post-processing includes removing isolated triangular facets, i.e., triangles that share no vertex or edge with any other triangle in the mesh. In this embodiment, since the model format required for simulation and recognition model training differs from the mesh (for example, a solid three-dimensional model is required), the mesh must be format-converted after post-processing to obtain the required model. In other embodiments, whether to perform format conversion can be determined according to actual needs.
In an embodiment according to the present disclosure, the set of images is captured by the depth camera of the augmented reality device, the position data is determined by the gyroscope of the augmented reality device, and the augmented reality device also uses the recognition model to recognize the position and orientation of the workpiece and superimposes the rendered three-dimensional model on the workpiece accordingly. Fig. 3 shows a schematic diagram of a set of images of a workpiece acquired from different directions. As shown in Fig. 3, the augmented reality device captures four frames of depth images of the workpiece from four directions A, B, C, and D. It should be noted that Fig. 3 is only an example; those skilled in the art will understand that any number of depth images of the workpiece may be acquired from other positions and/or directions. Because image capture and position recording are performed by the depth camera and gyroscope of the augmented reality device, and the rendered three-dimensional model is superimposed after the position and orientation of the workpiece in the real environment are recognized, the operator can complete model construction, simulation, and inspection of simulation results on demand, without modeling the workpiece in advance, which greatly improves convenience and operability.
In an embodiment according to the present disclosure, when the rendered three-dimensional model is superimposed on the workpiece, it indicates whether the machining surface of the workpiece meets the required machining accuracy. After simulation, the entire model is rendered in different colors to represent different deformations. Since the machining equipment only machines the machining surface of the workpiece, usually only the deformation of the machining surface needs to be considered when evaluating machining accuracy. After the rendered model is superimposed on the workpiece, the operator can directly see the color corresponding to the machining surface and, by checking whether that color meets expectations, judge whether applying the holding force satisfies the required machining accuracy and hence decide machining feasibility.
In an embodiment according to the present disclosure, step 12 further includes: extracting two-dimensional features of the three-dimensional model at multiple sets of virtual positions and virtual orientations of the model to obtain multiple sets of two-dimensional features; and training the recognition model with these sets as training samples. Each set of virtual position and orientation simulates one viewing angle from which a virtual camera captures the model; at each such viewpoint, one set of two-dimensional features is extracted. The extracted sets of features serve as the training samples of the recognition model. The more virtual positions and orientations are selected, the more accurate the trained recognition model. The trained recognition model can output the position and orientation of the workpiece from an input workpiece image.
In an embodiment according to the present disclosure, step 13 further includes: receiving the specific holding force provided externally; identifying the two force-bearing end faces of the three-dimensional model; and computing the deformation distribution of each region of the model under the simulated condition that one of the two end faces is constrained and the specific holding force is applied to the other. In this embodiment, the operator may input the holding force to be simulated via the augmented reality device and may obtain its magnitude from a force sensor; if the deformation under several holding forces needs to be examined, they are input and simulated one by one. In other embodiments, the value of the holding force may be received directly from the force sensor. During simulation, the two force-bearing end faces of the model are first identified. It can be understood that the workpiece in the present disclosure has an elongated shape, so the surfaces at the two ends along its length serve as the two force-bearing end faces. As mentioned above, the simulation is performed using the finite element analysis (FEA) method: the three-dimensional model of the workpiece is meshed and, with one end face constrained and the specified holding force applied to the other, the transfer and loss of the holding force between mesh nodes is computed for the preset workpiece material, yielding an FEA node list containing node coordinates and node deformation values (such as percentage values).
The application scenario and workflow of the machining accuracy evaluation method of the present disclosure are illustrated below with a specific embodiment. Fig. 4 shows a schematic block diagram of an augmented reality-based machining accuracy evaluation system 400 according to an embodiment of the present disclosure. Fig. 5 shows a flowchart of the augmented reality-based machining accuracy evaluation method 500 in the embodiment of Fig. 4. As shown in Fig. 4, the machining accuracy evaluation system 400 comprises two parts: an augmented reality device 41 and an edge device or cloud device 42. The augmented reality device 41 integrates a depth camera, a gyroscope, and a display or lens. The edge device or cloud device 42 may be any computing device physically separate from the augmented reality device 41.
More specifically, referring to Fig. 4, the augmented reality device 41 includes an image capture module 411, an object recognition module 412, a display module 413, and a communication module (not shown in Fig. 4). The image capture module 411 includes the depth camera and gyroscope, and the display module 413 includes the display or lens. The edge device or cloud device 42 includes a three-dimensional model construction module 421, a recognition model training module 422, a three-dimensional model simulation module 423, a simulation result mapping module 424, and a communication module (not shown in Fig. 4). The augmented reality device 41 and the edge device or cloud device 42 communicate via the communication modules to transmit data.
The augmented reality-based machining accuracy evaluation system 400 can be applied in a scenario such as this: in the design stage of a new product, evaluating whether the machining equipment can achieve the designed machining accuracy of a certain workpiece, or determining how much holding force is required to achieve that accuracy. However, the situations in which the system 400 can be used are not limited to this scenario; it can be used in any scenario where the machining accuracy of a workpiece needs to be evaluated.
When an operator needs to evaluate the machining accuracy of a workpiece under a specific holding force, he or she first moves a head-mounted or handheld augmented reality device 41 around the workpiece to capture a set of depth images for constructing the three-dimensional model, and inputs the holding force to be simulated. The operator can obtain the magnitude of the holding force from a force sensor. The augmented reality device 41 sends the image data and position data of the depth images to the edge device or cloud device 42. The operator then uses the device 41 to capture a real-time image of the workpiece from a certain viewing angle. The edge device or cloud device 42 constructs the three-dimensional model of the workpiece from the image data and position data, simulates the input holding force applied to both ends of the workpiece to generate a rendered three-dimensional model, and at the same time trains the recognition model of the workpiece; both the rendered model and the recognition model are sent to the augmented reality device 41. The device 41 uses the recognition model to identify the position and orientation of the workpiece in the real-time image and superimposes the rendered model on the real workpiece. The operator inspects the simulated deformation through the display or lens, i.e., the colors of the model superimposed on the workpiece, especially the color corresponding to the machining surface, and evaluates the machining accuracy by judging whether that color matches the expected color, thereby deciding machining feasibility. For example, when a specific holding force is applied to both ends of the workpiece and the color of a machining surface is red, the deformation of that surface is relatively large; the operator judges that it does not match the expected blue, so applying that holding force cannot satisfy the required machining accuracy. The operator adjusts the holding force and inputs it again; the edge device or cloud device 42 simulates again with the three-dimensional model and generates a new rendered model, which the augmented reality device 41 superimposes on the real workpiece; the operator inspects it again and judges the machining accuracy.
More specifically, referring to Figs. 4 and 5 together, in step 51 the image capture module 411 captures a set of depth images of the workpiece, records the image positions, and sends the image data and position data to the three-dimensional model construction module 421 via the communication module. The communication module also sends the holding force input by the operator to the three-dimensional model simulation module 423. In step 52, the module 421 constructs the three-dimensional model of the workpiece from the image data and position data of the set of depth images and provides it to the recognition model training module 422 and the three-dimensional model simulation module 423 respectively. The process of constructing the model has been described in the above embodiments and is not repeated here.
Next, the recognition model of the workpiece is trained and the simulation is performed using the three-dimensional model. In step 53, the recognition model training module 422 uses the constructed model to train the recognition model of the workpiece and sends the trained model to the object recognition module 412 via the communication module. Then, in step 54, the object recognition module 412 feeds the real-time image of the workpiece to the recognition model to obtain the current position and orientation of the workpiece in the real environment and provides these data to the display module 413.
Meanwhile, in step 55, the three-dimensional model simulation module 423 uses the constructed model to simulate, by the finite element analysis method, the situation in which the input holding force is applied to the workpiece, obtaining a simulation result in the form of an FEA node list containing node coordinates and node deformation values (such as percentage values); the simulation process has been described above and is not repeated here. Then, in step 56, the simulation result mapping module 424 generates a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, and sends it to the display module 413 via the communication module. Finally, in step 57, the display module 413 superimposes the rendered model on the real workpiece according to the position and orientation of the workpiece in the real environment. Through the display module 413, the operator sees the different colors on the real workpiece, representing the deformation distribution under the input holding force. As noted above, the deformation of a machining surface is associated with its machining accuracy. The operator can focus only on the color of the machining surface, ignore the colors of non-machining surfaces, evaluate the machining accuracy by judging whether the machining surface's color matches the expected color, and then decide machining feasibility. If the color matches, the deformation of that surface is consistent with the required machining accuracy. In some cases, the workpiece has multiple machining surfaces, some with a higher tolerance for machining accuracy and some with stricter requirements; in such cases the deformations of the multiple surfaces can be weighed together according to the actual situation.
It should be noted that, in the above steps, steps 53 and 54 are executed sequentially, and steps 55 and 56 are executed sequentially, but steps 53 and 55 do not depend on each other and may be executed in parallel or sequentially in any order, as long as the rendered three-dimensional model can be superimposed on the workpiece in step 57.
If the operator wishes to know the machining accuracy of the workpiece under another holding force, that force must be input anew; the three-dimensional model simulation module 423 simulates again, the simulation result mapping module 424 generates a new rendered model, and the display module 413 superimposes it on the workpiece. If the operator wishes to know the machining accuracy of another workpiece, the whole process must be repeated, starting from capturing a set of depth images of that workpiece with the augmented reality device 41.
In the above embodiment, by building a three-dimensional model of the workpiece in real time, simulating a specific holding force applied to both ends of the workpiece, and superimposing the generated rendered three-dimensional model on the workpiece, the operator can quickly and intuitively inspect the deformation of the workpiece under that holding force, evaluate the machining accuracy, adjust the simulated holding force as appropriate, judge the machining feasibility of the workpiece, and shorten the product design life cycle. The method requires no prior three-dimensional model of the workpiece, is independent of human experience, and needs little manual input; it is therefore applicable to a wide variety of workpieces, is convenient and fast, and improves the accuracy of machining accuracy evaluation.
The present disclosure also proposes an augmented reality-based machining accuracy evaluation device. Each module of the device can be realized in software, in hardware (such as an integrated circuit or FPGA), or in a combination of both. Referring to Fig. 4, the device includes a three-dimensional model construction module 421, a recognition model training module 422, a three-dimensional model simulation module 423, and a simulation result mapping module 424. The module 421 is configured to construct a three-dimensional model of the workpiece based on the image data and position data of a set of images of the workpiece. The module 422 is configured to use the three-dimensional model to train a recognition model of the workpiece and to provide the recognition model to recognize the position and orientation of the workpiece in the real environment in real time. The module 423 is configured to use the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result indicating the deformation distribution of each region of the model under the holding force, the deformation distribution being associated with the machining accuracy of the workpiece. The module 424 is configured to generate a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
Optionally, in an embodiment according to the present disclosure, the set of images is captured by the depth camera of the augmented reality device, the position data is determined by the gyroscope of the augmented reality device, and the augmented reality device also uses the recognition model to recognize the position and orientation of the workpiece and superimposes the rendered three-dimensional model on the workpiece accordingly.
Optionally, in an embodiment according to the present disclosure, the three-dimensional model construction module 421 is further configured to: reconstruct the three-dimensional point cloud of the workpiece based on the image data and position data; preprocess the point cloud; convert the point cloud into a mesh using a mesh generation algorithm; and perform post-processing and format conversion on the mesh to obtain the three-dimensional model.
Optionally, in an embodiment according to the present disclosure, the mesh generation algorithm includes the Poisson surface reconstruction algorithm or the VRIP algorithm.
Optionally, in an embodiment according to the present disclosure, the recognition model training module is further configured to: extract two-dimensional features of the three-dimensional model at multiple sets of virtual positions and virtual orientations of the model to obtain multiple sets of two-dimensional features; and train the recognition model with these sets as training samples.
Optionally, in an embodiment according to the present disclosure, the three-dimensional model simulation module is further configured to: receive the specific holding force provided externally; identify the two force-bearing end faces of the model; and compute the deformation distribution of each region of the model under the simulated condition that one of the two end faces is constrained and the specific holding force is applied to the other.
Optionally, in an embodiment according to the present disclosure, the simulation is performed based on a finite element analysis method.
Optionally, in an embodiment according to the present disclosure, when the rendered three-dimensional model is superimposed on the workpiece, it indicates whether the machining surface of the workpiece meets the required machining accuracy.
Fig. 6 shows a schematic block diagram of a computing device for machining accuracy evaluation according to an embodiment of the present disclosure. As can be seen from Fig. 6, the computing device 600 for machining accuracy evaluation includes a central processing unit (CPU) 601 (e.g., a processor) and a memory 602 coupled to the CPU 601. The memory 602 stores computer-executable instructions that, when executed, cause the CPU 601 to perform the methods of the above embodiments. The CPU 601 and the memory 602 are connected to each other via a bus, to which an input/output (I/O) interface is also connected. The computing device 600 may also include components (not shown in Fig. 6) connected to the I/O interface, including but not limited to: input units such as a keyboard or mouse; output units such as various types of displays and speakers; storage units such as magnetic disks and optical discs; and communication units such as network cards, modems, and wireless communication transceivers. The communication unit allows the computing device 600 to exchange information and data with other devices over a computer network such as the Internet and/or various telecommunication networks.
In addition, the above methods may alternatively be realized by a computer-readable storage medium carrying computer-readable program instructions for implementing the various embodiments of the present disclosure. A computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device; it may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, or semiconductor storage device, or any suitable combination of the foregoing. More specific (non-exhaustive) examples include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the above. Computer-readable storage media, as used here, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
Therefore, in another embodiment, the present disclosure proposes a computer-readable storage medium having computer-executable instructions stored thereon for performing the methods in the various embodiments of the present disclosure.
In another embodiment, the present disclosure proposes a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the methods in the various embodiments of the present disclosure.
In general, the various example embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, firmware, logic, or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software executable by a controller, microprocessor, or other computing device. While aspects of the embodiments of the present disclosure are illustrated or described as block diagrams, flowcharts, or some other graphical representation, it will be understood that the blocks, devices, systems, techniques, or methods described herein may be implemented, as non-limiting examples, in hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware, controllers or other computing devices, or some combination thereof.
The computer-readable program instructions or computer program products for executing the various embodiments of the present disclosure can also be stored in the cloud; when they are needed, a user can access them through the mobile Internet, a fixed network, or another network and execute the instructions of an embodiment of the present disclosure, thereby implementing the technical solutions disclosed in the various embodiments of the present disclosure.
Although the embodiments of the present disclosure have been described with reference to several specific embodiments, it should be understood that they are not limited to the specific embodiments disclosed. The embodiments of the present disclosure are intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims, whose scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (19)

  1. An augmented reality-based machining accuracy evaluation method, comprising:
    constructing a three-dimensional model of a workpiece based on image data and position data of a set of images of the workpiece;
    using the three-dimensional model to train a recognition model of the workpiece, and providing the recognition model to recognize the position and orientation of the workpiece in a real environment in real time;
    using the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result, the simulation result indicating a deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with a machining accuracy of the workpiece; and
    generating a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
  2. The method according to claim 1, wherein the set of images is captured by a depth camera of an augmented reality device, the position data is determined by a gyroscope of the augmented reality device, and the augmented reality device further uses the recognition model to recognize the position and orientation of the workpiece and superimposes the rendered three-dimensional model on the workpiece according to the position and orientation.
  3. The method according to claim 1, wherein constructing the three-dimensional model of the workpiece based on the image data and position data of the set of images further comprises:
    reconstructing a three-dimensional point cloud of the workpiece based on the image data and position data;
    preprocessing the three-dimensional point cloud;
    converting the three-dimensional point cloud into a mesh using a mesh generation algorithm; and
    performing post-processing and format conversion on the mesh to obtain the three-dimensional model.
  4. The method according to claim 3, wherein the mesh generation algorithm comprises a Poisson surface reconstruction algorithm or a VRIP algorithm.
  5. The method according to claim 1, wherein using the three-dimensional model to train the recognition model of the workpiece further comprises:
    extracting two-dimensional features of the three-dimensional model at multiple sets of virtual positions and virtual orientations of the three-dimensional model to obtain multiple sets of two-dimensional features; and
    training the recognition model with the multiple sets of two-dimensional features as training samples.
  6. The method according to claim 1, wherein using the three-dimensional model to simulate the specific holding force applied to both ends of the workpiece further comprises:
    receiving the specific holding force provided externally;
    identifying two force-bearing end faces of the three-dimensional model; and
    computing the deformation distribution of each region of the three-dimensional model under a simulated condition in which one of the two force-bearing end faces is constrained and the specific holding force is applied to the other.
  7. The method according to claim 1, wherein the simulation is performed based on a finite element analysis method.
  8. The method according to claim 1, wherein, when the rendered three-dimensional model is superimposed on the workpiece, the rendered three-dimensional model indicates whether a machining surface of the workpiece meets a required machining accuracy.
  9. An augmented reality-based machining accuracy evaluation device, comprising:
    a three-dimensional model construction module configured to construct a three-dimensional model of a workpiece based on image data and position data of a set of images of the workpiece;
    a recognition model training module configured to use the three-dimensional model to train a recognition model of the workpiece and to provide the recognition model to recognize the position and orientation of the workpiece in a real environment in real time;
    a three-dimensional model simulation module configured to use the three-dimensional model to simulate a specific holding force applied to both ends of the workpiece to obtain a simulation result, the simulation result indicating a deformation distribution of each region of the three-dimensional model under the holding force, the deformation distribution being associated with a machining accuracy of the workpiece; and
    a simulation result mapping module configured to generate a rendered three-dimensional model by processing the simulation result and performing color rendering according to the deformation distribution, the rendered three-dimensional model being superimposed on the workpiece according to the recognized position and orientation of the workpiece.
  10. The device according to claim 9, wherein the set of images is captured by a depth camera of an augmented reality device, the position data is determined by a gyroscope of the augmented reality device, and the augmented reality device further uses the recognition model to recognize the position and orientation of the workpiece and superimposes the rendered three-dimensional model on the workpiece according to the position and orientation.
  11. The device according to claim 9, wherein the three-dimensional model construction module is further configured to:
    reconstruct a three-dimensional point cloud of the workpiece based on the image data and position data;
    preprocess the three-dimensional point cloud;
    convert the three-dimensional point cloud into a mesh using a mesh generation algorithm; and
    perform post-processing and format conversion on the mesh to obtain the three-dimensional model.
  12. The device according to claim 11, wherein the mesh generation algorithm comprises a Poisson surface reconstruction algorithm or a VRIP algorithm.
  13. The device according to claim 9, wherein the recognition model training module is further configured to:
    extract two-dimensional features of the three-dimensional model at multiple sets of virtual positions and virtual orientations of the three-dimensional model to obtain multiple sets of two-dimensional features; and
    train the recognition model with the multiple sets of two-dimensional features as training samples.
  14. The device according to claim 9, wherein the three-dimensional model simulation module is further configured to:
    receive the specific holding force provided externally;
    identify two force-bearing end faces of the three-dimensional model; and
    compute the deformation distribution of each region of the three-dimensional model under a simulated condition in which one of the two force-bearing end faces is constrained and the specific holding force is applied to the other.
  15. The device according to claim 9, wherein the simulation is performed based on a finite element analysis method.
  16. The device according to claim 9, wherein, when the rendered three-dimensional model is superimposed on the workpiece, the rendered three-dimensional model indicates whether a machining surface of the workpiece meets a required machining accuracy.
  17. A computing device, comprising:
    a processor; and
    a memory for storing computer-executable instructions which, when executed, cause the processor to perform the method according to any one of claims 1-8.
  18. A computer-readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions being for performing the method according to any one of claims 1-8.
  19. A computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions which, when executed, cause at least one processor to perform the method according to any one of claims 1-8.
PCT/CN2021/101026 2021-06-18 2021-06-18 Augmented reality-based machining accuracy evaluation method and device WO2022261962A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180096942.2A 2021-06-18 2021-06-18 Augmented reality-based machining accuracy evaluation method and device
PCT/CN2021/101026 2021-06-18 2021-06-18 Augmented reality-based machining accuracy evaluation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/101026 2021-06-18 2021-06-18 Augmented reality-based machining accuracy evaluation method and device

Publications (1)

Publication Number Publication Date
WO2022261962A1 (zh)

Family

ID=84526636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101026 WO2022261962A1 (zh) 2021-06-18 2021-06-18 基于增强现实的加工精度评估方法及装置

Country Status (2)

Country Link
CN (1) CN117222949A (zh)
WO (1) WO2022261962A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630338A (zh) * 2009-02-18 2010-01-20 University of Shanghai for Science and Technology Dynamic mechanical performance simulation method for a large crankshaft lathe
US20140088746A1 (en) * 2012-09-26 2014-03-27 Apple Inc. Contact patch simulation
US20170324947A1 (en) * 2013-12-27 2017-11-09 Google Inc. Systems and Devices for Acquiring Imagery and Three-Dimensional (3D) Models of Objects
CN111095139A (zh) * 2017-07-20 2020-05-01 Siemens Aktiengesellschaft A method and system for detecting abnormal states of a machine
CN110069972A (zh) * 2017-12-11 2019-07-30 Hexagon Technology Center Automatically detecting real-world objects
CN108555908A (zh) * 2018-04-12 2018-09-21 Tongji University Stacked-workpiece pose recognition and picking method based on an RGB-D camera

Also Published As

Publication number Publication date
CN117222949A (zh) 2023-12-12

Similar Documents

Publication Publication Date Title
US11153553B2 (en) Synthesis of transformed image views
WO2018119889A1 (zh) Three-dimensional scene positioning method and device
US20150062123A1 (en) Augmented reality (ar) annotation computer system and computer-readable medium and method for creating an annotated 3d graphics model
EP2700040B1 (en) Color channels and optical markers
EP3101624A1 (en) Image processing method and image processing device
WO2021017471A1 (zh) Point cloud filtering method and device based on image processing, and storage medium
EP3633606B1 (en) Information processing device, information processing method, and program
US9996947B2 (en) Monitoring apparatus and monitoring method
JP2017182695A (ja) Information processing program, information processing method, and information processing device
CN110573992B (zh) Editing augmented reality experiences using augmented reality and virtual reality
JP6293386B2 (ja) Data processing device, data processing method, and data processing program
EP4178194A1 (en) Video generation method and apparatus, and readable medium and electronic device
JP6031819B2 (ja) Image processing device and image processing method
TWI669683B (zh) Three-dimensional image reconstruction method and device, and non-transitory computer-readable storage medium thereof
WO2023093739A1 (zh) Multi-view three-dimensional reconstruction method
CN103973976A (zh) Saliency extraction device and method using optical imaging
CN106570482A (zh) Human action recognition method and device
US20190273845A1 (en) Vibration monitoring of an object using a video camera
WO2022261962A1 (zh) Augmented reality-based machining accuracy evaluation method and device
CN105378573A (zh) Information processing device, and method and program for calculating an inspection range
WO2020155908A1 (zh) Method and apparatus for generating information
CN112381929A (zh) Three-dimensional power equipment model modeling method
JP2023021469A (ja) Positioning method, positioning device, and visual map generation method and device therefor
CN112652056B (zh) 3D information display method and device
CN112634439B (zh) 3D information display method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21945545

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180096942.2

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21945545

Country of ref document: EP

Kind code of ref document: A1