CN115294308A - Augmented reality auxiliary assembly operation guiding system based on deep learning - Google Patents

Publication number
CN115294308A
CN115294308A
Authority: CN (China)
Prior art keywords: assembly, augmented reality, virtual, real, enhanced
Legal status: Pending (assumption only; Google has not performed a legal analysis, and the listed status is not a legal conclusion)
Application number: CN202210972333.5A
Original language: Chinese (zh)
Inventors: 李旺, 魏明, 余军, 吴振威
Assignee (current and original): Wuhan Fiberhome Technical Services Co Ltd (listed assignee may be inaccurate)
Application filed by Wuhan Fiberhome Technical Services Co Ltd

Classifications

    • G06T19/006: Mixed reality (manipulating 3D models or images for computer graphics)
    • G06F16/367: Ontology (creation of semantic tools for information retrieval)
    • G06N3/08: Neural networks, learning methods
    • G06T7/0004: Industrial image inspection
    • G06T7/50: Depth or shape recovery
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/80: Camera calibration (intrinsic or extrinsic parameters from captured images)
    • G06T2207/10024: Color image (acquisition modality)
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30108: Industrial image inspection (subject of image)


Abstract

The invention provides a deep-learning-based augmented reality assisted assembly work guidance system. Knowledge graph technology is applied to the organization and management of information in the augmented reality assisted assembly process: an augmented reality assisted assembly knowledge graph is established as the data support of the guidance system, generating adaptive visual-content presentation suited to the assembly scene and meeting operators' personalized visual-content needs. A deep-learning-based method for augmented reality virtual-real registration and occlusion achieves timely, stable, and accurate registration and occlusion handling during augmented reality visual guidance. A convolutional neural network performs integrated detection of missing and wrong assembly during the assembly process, improving operators' assembly quality in the augmented reality assisted assembly process.

Description

Augmented reality auxiliary assembly operation guiding system based on deep learning
Technical Field
The invention belongs to the technical field of assembly and manufacturing, and particularly relates to an augmented reality auxiliary assembly operation guiding system based on deep learning.
Background
Assembly is an important stage in the production of a product, and the assembly of many complex products (such as automobiles, engines, and aerospace equipment) is still difficult to automate and continues to depend on manual work. In the traditional assembly process, an operation manual guides the operator through the assembly operation, describing the process with two-dimensional text, symbols, and pictures. The operator must consult the manual repeatedly and can obtain the required assembly information only after understanding and memorizing it; the whole process is time-consuming, working efficiency is low, and a considerable burden is placed on workers' memory.
The environment and personnel of current assembly work are experiencing two major trends: increasingly complex work processes and an aging workforce. Regarding process complexity, personalized and diversified market demands drive rapid product modification, upgrading, and replacement; multi-variety, variable-batch production has become mainstream, the content of manual assembly work for complex products changes continuously with product demands, and the memory burden on workers keeps increasing. Moreover, production and assembly data have grown greatly in the industrial internet environment, further increasing workers' information-acquisition burden and creating hidden risks for assembly quality. Regarding human resources, the world's population is gradually aging, and research on manufacturing and assembly activities under an aging workforce receives increasing attention. Although older workers are generally rich in experience, they show markedly reduced learning ability, accelerated forgetting, and weakened memory, which potentially affects assembly efficiency and quality.
These changes pose new challenges for assembly guidance under manual-operation conditions, so timely, simplified, and accurate assistance with manual assembly information has important value and application prospects. With the development of Industry 4.0, Augmented Reality (AR) technology has begun to be applied to the assisted assembly process. AR technology converts complex operation instructions into virtual models or animations in three-dimensional space and superimposes them at specific positions in the real assembly scene to guide the assembly operation visually. Compared with a traditional paper manual, AR-assisted assembly can effectively improve guidance efficiency and reduce the operator's cognitive burden. It lowers the work-experience requirements placed on young operators and relieves the memory burden of older operators who are rich in experience but whose memory has declined, helping to alleviate the labor shortage in the assembly and manufacturing field.
Augmented reality Assisted Assembly (AA) means that, during product assembly, technologies such as AR three-dimensional registration, virtual-real occlusion, and real-time interaction are used to superimpose assembly guidance information onto the real assembly scene as a visual presentation, forming a virtual-real fused assembly environment that is displayed through a handheld mobile device, a fixed screen, a helmet or glasses, or projection, thereby providing assembly guidance to the operator, improving the operator's cognitive efficiency, and helping the operator complete assembly tasks quickly and with high quality. Research on AR-assisted assembly at home and abroad has made some progress, but problems remain in information organization and management, adaptive presentation of visual content, virtual-real registration and occlusion, assembly detection feedback, and other aspects. With the development of big data processing, artificial intelligence, and deep learning, data-processing technology represented by the Knowledge Graph (KG) has achieved fruitful results in knowledge organization, knowledge presentation, and knowledge reasoning in the assembly field.
Disclosure of Invention
Deep-learning-based image processing has achieved results far beyond traditional vision methods in assembly-object recognition, assembly-object detection, and related tasks, and the application and popularization of these methods can promote the development of and breakthroughs in AR-assisted assembly technology. In view of the above defects or improvement requirements of the prior art, the present invention provides a deep-learning-based augmented reality assisted assembly work guidance system, which aims to realize assembly-object recognition, assembly-object detection, and related functions through deep-learning-based image processing, improve assembly efficiency, and reduce assembly cost.
In order to achieve the above object, the present invention provides an augmented reality auxiliary assembly work guidance system based on deep learning, which comprises a software platform and a hardware platform, wherein:
the software platform consists of an enhanced assembly information extraction system, an enhanced assembly content editing system, an enhanced assembly guidance system, and an enhanced assembly knowledge graph module; the three subsystems are applied to different stages of the enhanced assembly implementation process, and data from these stages are organized and managed through the enhanced assembly knowledge graph module; wherein:
the enhanced assembly information extraction system, used in the information extraction stage, converts models from CAD software into lightweight models, extracts model pose information for constructing the virtual scene, and stores the data in the enhanced assembly knowledge graph module;
the enhanced assembly content editing system, applied in the content editing stage, plans and lays out the different types of visual elements provided by the enhanced assembly knowledge graph module according to the description in the assembly process file, so as to generate an enhanced assembly guidance file;
the enhanced assembly guidance system, applied in the on-site guidance stage, fuses the enhanced assembly guidance file with the real scene image, outputs a virtual-real fused video to guide the operator through the assembly operation, and detects and feeds back missing-assembly or wrong-assembly problems during the operation;
the hardware platform comprises an arithmetic processing device, an image acquisition device, and an interaction device.
In one embodiment of the invention, the enhanced assembly information extraction system extracts original data from different data carriers, performs lightweight processing on the original data to generate lightweight data applicable to the augmented reality system, and stores the lightweight data in the enhanced assembly knowledge graph module.
In one embodiment of the invention, the enhanced assembly content editing system lays out and plans the visual elements provided by the enhanced assembly knowledge graph module according to the operation-information requirements; it forms the front-end link of enhanced assembly visual guidance and, in the content editing stage, creates augmented reality visual content that expresses the assembly operation information.
In one embodiment of the invention, the enhanced assembly content editing system plans the time, position, and type of the visual content displayed in the virtual-real fused video, and the enhanced assembly guidance file generated by content editing is stored in the enhanced assembly knowledge graph module and used by the enhanced assembly guidance system.
In one embodiment of the invention, the enhanced assembly on-site guidance system is applied at the assembly site, and its user is the on-site assembly operator; the on-site guidance system fuses the enhanced assembly guidance file in the enhanced assembly knowledge graph module with the real assembly scene, provides visual guidance information adapted to the operator's individual needs according to the operator's state, and performs quality detection on the result after the assembly operation is finished.
In one embodiment of the invention, when the enhanced assembly on-site guidance system is applied at the assembly site, a monocular camera acquires a video image of the real assembly scene, virtual information is fused with the real scene through the augmented reality virtual-real registration processing method and the virtual-real occlusion processing method to guide the assembly operation, and assembly detection feedback is performed after each assembly step is completed; that is, the on-site guidance system detects the assembly result from the assembly-scene image captured by the camera and feeds the detection result back to the on-site operator.
In one embodiment of the present invention, the enhanced assembly knowledge graph module includes an ontology submodule, an instance node submodule, a relation node submodule, and a data storage submodule, wherein:
the ontology submodule includes an operator information ontology, a visual content information ontology, an enhanced assembly guidance information ontology, an assembly scene information ontology, and a set of relations among the ontology concepts, where ontology extraction adopts a top-down method;
the instance node submodule extracts and stores instance data according to the ontology concepts; text data are extracted with an LSTM neural network, while other, non-text data are extracted manually;
the relation node submodule creates the relations between ontology concepts and the relations between instance data;
and the data storage submodule stores the data of the enhanced assembly knowledge graph module for different application scenarios.
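The division of labor among these submodules can be sketched in code. The following is a minimal, hypothetical data model (class names, relation names, and example values are illustrative, not taken from the patent): ontology concepts, instance nodes extracted under them, and relations stored as triples.

```python
from dataclasses import dataclass, field

@dataclass
class InstanceNode:
    concept: str      # ontology concept the instance falls under
    value: str        # instance data (text extracted via LSTM, or entered manually)

@dataclass
class KnowledgeGraph:
    triples: list = field(default_factory=list)   # (head, relation, tail)

    def add_relation(self, head, relation, tail):
        self.triples.append((head, relation, tail))

kg = KnowledgeGraph()
node = InstanceNode(concept="Operator", value="operator_01")
# a relation between ontology concepts (created by the relation node submodule)
kg.add_relation("Operator", "performs", "AssemblyStep")
# a relation between instance data and its ontology concept
kg.add_relation(node.value, "instance_of", node.concept)
print(kg.triples)
```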
In an embodiment of the invention, the virtual-real registration processing method performs registration with a fully convolutional neural network whose input is an RGB image and whose output is the image coordinates of key points in the image.
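The network's key-point output can be decoded as in the following sketch, which assumes the fully convolutional network emits one heatmap per key point and takes each heatmap's peak as that key point's image coordinate. In a full pipeline these 2D-3D correspondences would typically feed a PnP solver to recover the camera pose; the function name and toy data are illustrative.

```python
def decode_keypoints(heatmaps):
    """Take per-keypoint heatmaps (lists of 2D lists, values in [0, 1])
    and return the (row, col) image coordinate of each peak."""
    coords = []
    for hm in heatmaps:
        best, best_rc = -1.0, (0, 0)
        for r, row in enumerate(hm):
            for c, v in enumerate(row):
                if v > best:
                    best, best_rc = v, (r, c)
        coords.append(best_rc)
    return coords

# two toy 3x3 heatmaps with peaks at (0, 2) and (2, 1)
hms = [
    [[0.0, 0.1, 0.9], [0.0, 0.2, 0.1], [0.0, 0.0, 0.0]],
    [[0.0, 0.0, 0.0], [0.1, 0.3, 0.0], [0.0, 0.8, 0.1]],
]
print(decode_keypoints(hms))  # [(0, 2), (2, 1)]
```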
In an embodiment of the invention, the virtual-real occlusion processing method performs occlusion handling with a fully convolutional neural network whose input is an RGB image and whose output is a depth value for each pixel of the image.
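Given the per-pixel depth output, occlusion can be resolved by comparing the estimated real-scene depth with the rendered virtual model's depth at each pixel. A minimal sketch, under the assumption that a virtual pixel is drawn only when it lies in front of (closer than) the real surface:

```python
def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel test: True where the virtual object should be rendered,
    i.e. where it is closer to the camera than the real scene."""
    return [
        [v < r for r, v in zip(rrow, vrow)]
        for rrow, vrow in zip(real_depth, virtual_depth)
    ]

real = [[2.0, 2.0], [0.5, 2.0]]      # real-scene depth estimated by the network
virt = [[1.0, 3.0], [1.0, 1.5]]      # depth of the rendered virtual model
print(occlusion_mask(real, virt))    # [[True, False], [False, True]]
```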
In one embodiment of the present invention, the assembly detection feedback process includes: according to the requirements of the enhanced assembly visual guidance operation, missing-assembly judgment is performed using the assembly-object confidence value obtained from convolutional-neural-network detection; an improved YOLOv3 convolutional neural network detects the assembly object, and wrong-assembly detection is performed using the similarity obtained by matching the contour inside the 2D detection box produced by the network against the assembly object's reference contour; finally, the detection result is fed back to the operator and saved as an assembly record. The improvement to YOLOv3 raises assembly-object detection accuracy by adding an attention mechanism and increasing the number of output branches.
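The two checks can be sketched together as follows. The thresholds and the detection format are assumptions for illustration, not values from the patent: a part whose detection confidence falls below the cutoff is reported as missing, and a detected part whose contour-match similarity is too low is reported as wrongly assembled.

```python
CONF_THRESHOLD = 0.5     # assumed confidence cutoff for "part present"
SIM_THRESHOLD = 0.8      # assumed contour-similarity cutoff for "correct part"

def check_step(expected_parts, detections):
    """detections: {part_name: (confidence, contour_similarity)}.
    Returns (missing_parts, wrongly_assembled_parts) for one assembly step."""
    missing, wrong = [], []
    for part in expected_parts:
        conf, sim = detections.get(part, (0.0, 0.0))
        if conf < CONF_THRESHOLD:
            missing.append(part)          # missing-assembly judgment
        elif sim < SIM_THRESHOLD:
            wrong.append(part)            # wrong-assembly judgment
    return missing, wrong

dets = {"bolt_M6": (0.92, 0.95), "bracket": (0.10, 0.0), "cover": (0.88, 0.40)}
print(check_step(["bolt_M6", "bracket", "cover"], dets))
# (['bracket'], ['cover'])
```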
In general, compared with the prior art, the technical scheme conceived by the invention has the following beneficial effects:
(1) In adaptive presentation of visual content, an augmented reality assisted assembly knowledge graph is constructed and used to organize and manage the complex, multi-source, heterogeneous information of the AR-assisted assembly process. The knowledge graph drives visual presentation suited to the current assembly situation according to the operator's working state, improving visual guidance efficiency.
(2) In virtual-real registration, a deep learning method solves the marker-free registration problem of the AR-assisted assembly process and achieves high-precision, low-delay, high-stability registration. The fully convolutional neural network designed for AR-assisted assembly registration suits complex assembly scenes and can be trained on synthetic images, avoiding the manual cost of data acquisition and labeling and improving the usability of the registration algorithm.
(3) In virtual-real occlusion, a deep learning method performs AR-assisted assembly occlusion handling on monocular images, solving the occlusion problem of the AR-assisted assembly process and improving the realism of the virtual-real fused image. No depth camera is needed for scene depth acquisition, which reduces the intrusive effect of hardware on operators and lightens their usage burden.
(4) In assembly detection, a convolutional neural network performs integrated detection of missing and wrong assembly, solving the missing- and wrong-assembly detection problem of the AR-assisted assembly process. The operator completes the assembly under AR visual guidance and receives feedback on the result, avoiding the repeated disassembly and reassembly caused by missing or wrong assembly and improving AR-assisted assembly efficiency and quality.
Drawings
FIG. 1 is the general framework of the deep-learning-based augmented reality assisted assembly work guidance system in an embodiment of the present invention;
FIG. 2 is the hierarchy of the deep-learning-based augmented reality assisted assembly work guidance system in an embodiment of the present invention;
FIG. 3 is the functional model of the enhanced assembly knowledge graph module in an embodiment of the present invention;
FIG. 4 is the functional model of the virtual-real registration processing module in an embodiment of the present invention;
FIG. 5 is the functional model of the virtual-real occlusion processing module in an embodiment of the present invention;
FIG. 6 is the functional model of the assembly detection feedback processing module in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In order to solve the problems in the prior art, the invention provides a deep-learning-based augmented reality assisted assembly work guidance system. Through knowledge graph technology, the organization and management of information in the AR-assisted assembly process is studied, and an AR-assisted assembly knowledge graph is established as the data support of the guidance system, generating adaptive visual-content presentation that fits the assembly scene and the operators' requirements. A deep-learning-based AR virtual-real registration and occlusion method achieves timely, stable, and accurate registration and occlusion during AR visual guidance. A convolutional neural network performs integrated detection of missing and wrong assembly, improving operators' assembly quality in the AR-assisted assembly process. The main content of the invention is as follows:
1. system platform frame
The deep-learning-based augmented reality assisted assembly work guidance system consists of two main parts: a software platform and a hardware platform. The software platform mainly comprises the enhanced assembly information extraction system, the enhanced assembly content editing system, and the enhanced assembly guidance system; the three subsystems are applied to different stages of the enhanced assembly implementation process. The enhanced assembly information extraction system, used in the information extraction stage, converts models from CAD software into lightweight models, extracts the models' pose information for constructing the virtual scene, and stores the data in the enhanced assembly knowledge graph module. The enhanced assembly content editing system, applied in the content editing stage, plans and lays out the different types of visual elements provided by the enhanced assembly knowledge graph module according to the description in the assembly process file, generating an enhanced assembly guidance file. The enhanced assembly guidance system, applied in the on-site guidance stage, fuses the enhanced assembly guidance file with the real scene image, outputs a virtual-real fused video to guide the operator through the assembly operation, and detects and feeds back missing- and wrong-assembly problems during the operation. The data of the different stages are organized and managed mainly through the enhanced assembly knowledge graph module. The hardware platform mainly comprises an arithmetic processing device, an image acquisition device, and interaction devices. The enhanced assembly system can be applied in scenarios such as on-site assembly and pre-job training, providing visual guidance for assembly operators.
The platform framework of the enhanced assembly work guidance system, designed around the enhanced assembly visual guidance technologies described above, is shown in FIG. 1.
(1) Information extraction
The enhanced assembly information extraction system is mainly responsible for extracting original data from different data carriers, performing lightweight processing on it, generating lightweight data applicable to the augmented reality system, and storing that data in the enhanced assembly knowledge graph module. For example, the three-dimensional design process of a product generates a CAD model file containing rich vector information and physical attributes. If this CAD model file were used directly for virtual scene construction in augmented reality, the enhanced assembly visual guidance system would consume a large amount of memory, affecting its timeliness. A lightweight three-dimensional model contains only the facet information and pose information of the model, which effectively improves the timeliness of virtual-real registration and occlusion rendering. The user of the enhanced assembly information extraction system is the assembly process designer, who collects and organizes the data.
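The lightweighting step can be illustrated with a toy sketch that keeps only the facet and pose information and drops the heavy CAD attributes; the field names here are hypothetical, not the patent's actual data format.

```python
def lightweight(cad_model):
    """Keep only the facet (triangle mesh) and pose information needed for
    virtual-real registration and occlusion rendering; drop heavy CAD
    attributes such as parametric features and material properties."""
    return {
        "facets": cad_model["facets"],
        "pose":   cad_model["pose"],
    }

cad = {
    "facets": [((0, 0, 0), (1, 0, 0), (0, 1, 0))],          # one triangle
    "pose":   {"position": (0, 0, 0), "rotation": (0, 0, 0, 1)},
    "parametric_features": ["extrude_1", "fillet_2"],       # dropped
    "material": "steel",                                    # dropped
}
lw = lightweight(cad)
print(sorted(lw.keys()))   # ['facets', 'pose']
```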
(2) Content editing
The enhanced assembly content editing module is responsible for laying out and planning the visual elements provided by the enhanced assembly knowledge graph module according to the operation-information requirements, and forms the front-end link of enhanced assembly visual guidance. The main task of the content editing stage is to create augmented reality visual content that expresses the assembly operation information. Enhanced assembly content editing is the connecting link between enhanced assembly information extraction and enhanced assembly visual guidance. A content editing tool plans the time, position, and type of the visual content displayed in the virtual-real fused video. The enhanced assembly guidance file generated by content editing is stored in the enhanced assembly knowledge graph module and used by the enhanced assembly on-site visual guidance system.
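One entry of such a guidance file might record when a visual element appears, where it is anchored, and what type it is. A hypothetical sketch (the field names and JSON serialization are illustrative assumptions, not the patent's actual file format):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GuidanceStep:
    step_no: int
    element_type: str        # e.g. "model", "animation", "text"
    anchor_position: tuple   # position in the assembly coordinate frame
    show_at_s: float         # display start time within the step
    duration_s: float

plan = [
    GuidanceStep(1, "animation", (0.10, 0.05, 0.30), 0.0, 5.0),
    GuidanceStep(1, "text",      (0.10, 0.25, 0.30), 0.0, 5.0),
]
# serialize the plan as one candidate form of the guidance file
guide_file = json.dumps([asdict(s) for s in plan], indent=2)
print(len(plan))
```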
(3) In-situ boot
The enhanced assembly on-site guidance system is applied at the assembly site, and its user is the on-site assembly operator. The on-site guidance system fuses the enhanced assembly guidance file in the enhanced assembly knowledge graph module with the real assembly scene, provides visual guidance information adapted to the operator's individual needs according to the operator's state, and performs quality detection on the result after the assembly operation is finished. In use, a monocular camera first acquires a video image of the real assembly scene; then the proposed augmented reality virtual-real registration and occlusion algorithms fuse the virtual information with the real scene to guide the assembly operation. After each assembly step is completed, the on-site guidance system detects the assembly result from the assembly-scene image captured by the camera and feeds the detection result back to the on-site operator, improving assembly quality.
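The per-step flow of the on-site stage can be sketched with stub functions standing in for the deep-learning components; all names and return values here are illustrative placeholders, not the patent's interfaces.

```python
def register(frame):
    # stub: fully convolutional network key points -> camera pose
    return {"pose": "T_cam"}

def occlude(frame, reg):
    # stub: per-pixel depth map -> occlusion-aware virtual-real fusion
    return {"fused_frame": frame, "pose": reg["pose"]}

def detect(frame):
    # stub: CNN missing/wrong-assembly check on the finished step
    return {"missing": [], "wrong": []}

def guide_step(frames):
    """Fuse every frame for display, then run detection feedback once
    the step is complete (on the last captured frame)."""
    fused = [occlude(f, register(f)) for f in frames]
    feedback = detect(frames[-1])
    return fused, feedback

fused, fb = guide_step(["frame0", "frame1"])
print(len(fused), fb)
```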
2. System hierarchy
Using the graphical interactive interface of each subsystem of the deep-learning-based augmented reality assisted assembly work guidance system, a user completes the tasks of original-data extraction, visual editing and planning of assembly operation information, and on-site guidance in the information extraction, content editing, and on-site guidance stages, respectively. The hierarchy of the system designed by the invention is shown in FIG. 2.
(1) Original data layer
The original data layer serves as the data source layer of the deep-learning-based augmented reality assisted assembly work guidance system; the collected data are stored in the enhanced assembly knowledge graph module and supplied to the different subsystems. Part model data and part pose data are mainly used by the enhanced assembly information extraction system. The assembly operation information file and visual element data are mainly used by the enhanced assembly content editing system. The enhanced assembly guidance file, registration data, detection data, and the like are mainly used by the enhanced assembly on-site guidance system. In addition, data flow and sharing between the different subsystems take place through the enhanced assembly knowledge graph module.
(2) Functional layer
The functional layer of the deep-learning-based augmented reality assisted assembly work guidance system mainly comprises three functional modules: enhanced assembly information extraction, enhanced assembly content editing, and enhanced assembly on-site guidance; during system development, a corresponding subsystem is developed for each of the three modules. The enhanced assembly information extraction system mainly provides CAD data import and lightweight processing of three-dimensional models. The enhanced assembly content editing system plans and arranges the visual elements provided by the knowledge graph according to the information described in the assembly operation manual and generates the enhanced assembly guidance step file. The enhanced assembly on-site guidance system accurately fuses the edited enhanced assembly guidance file with the on-site operation video through virtual-real registration and occlusion, guides the operator through the assembly operation, and detects missing and wrong assembly during the process.
(3) External device layer
This is the hardware support layer of the deep-learning-based augmented reality assembly guidance system. The external equipment mainly comprises: a camera for image acquisition; a processor for algorithm execution and running the software; a display, AR glasses or a portable tablet for showing the virtual-real fused images; and a mouse, keyboard and microphone for human-computer interaction.
(4) Interface layer
The interface layer of the deep-learning-based augmented reality assembly guidance system is the channel through which the user exchanges information with the different subsystems. The interactive interface mainly comprises a three-dimensional scene display window, a content editing window, a virtual-real fusion video display window, and a structure tree window.
3. Main function and implementation of system
1) Enhanced assembly knowledge graph module
The enhanced assembly knowledge graph module organizes and manages the multi-source heterogeneous information produced during the enhanced assembly process and provides data support for the whole system. Its function model is shown in fig. 3. The raw data of the enhanced assembly process mainly come from the assembly operation manual, lightweight CAD models, assembly work record files, multimedia files, operation records, the assembly scene, and so on. An ontology model of the enhanced assembly domain is built with Protégé, and instance data are extracted from the raw data carriers using this ontology model. The extracted data are then converted into the format required by the Neo4j graph database and stored as triples, yielding the enhanced assembly knowledge graph module, which completes the organization and management of information related to the enhanced assembly process.
Specifically, as shown in fig. 3, the enhanced assembly knowledge graph module comprises an ontology sub-module, an instance node sub-module, a relationship node sub-module and a data storage sub-module, where: the ontology sub-module mainly comprises an operator information ontology, a visual content information ontology, an enhanced assembly guidance information ontology, an assembly scene information ontology, and the set of relations among these ontology concepts; the ontology is extracted with a top-down method. The instance node sub-module extracts and stores instance data using the ontology concepts: text data are extracted with an LSTM neural network, while other, non-text data are extracted manually. The relationship node sub-module creates the relations between ontology concepts and between instance data. The data storage sub-module stores the knowledge graph data for different application scenarios.
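As an illustration of the triple storage described above, the sketch below turns extracted (head, relation, tail) triples into Cypher `MERGE` statements of the kind a Neo4j client would execute. The entity label, relation names and instance values are hypothetical examples, not data from the patent.

```python
def triple_to_cypher(head, relation, tail):
    """Build a MERGE statement that stores one knowledge-graph triple in Neo4j."""
    return (
        f"MERGE (h:Entity {{name: '{head}'}}) "
        f"MERGE (t:Entity {{name: '{tail}'}}) "
        f"MERGE (h)-[:{relation}]->(t)"
    )

# Hypothetical instance data extracted from an assembly operation manual.
triples = [
    ("step_03", "USES_TOOL", "torque_wrench"),
    ("step_03", "SHOWS_MODEL", "bracket_cad_lite"),
    ("bracket_cad_lite", "PART_OF", "gearbox_assembly"),
]

statements = [triple_to_cypher(*t) for t in triples]
for s in statements:
    print(s)
```

In a deployed system these statements would be sent through a Neo4j driver session; `MERGE` (rather than `CREATE`) keeps nodes unique when the same part or step appears in several triples.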
2) Virtual-real registration module
The augmented reality virtual-real registration module is mainly used in the enhanced assembly field guidance system. It guarantees geometric consistency between the virtual scene and the real scene, and is the basis for virtual-real fusion of the visual content. A virtual-real registration algorithm based on a fully convolutional neural network over the assembly scene solves the pose matrices R and T of the real camera, and the resulting pose is assigned to the virtual camera, so that the virtual scene captured by the virtual camera stays aligned with the real scene captured by the real camera. This ensures that the imaging position of the virtual model on the virtual image plane coincides with the imaging position of the real part on the real image plane. The function model of the virtual-real registration module is shown in fig. 4. The fully convolutional network used for registration takes an RGB picture as input and outputs the image coordinates of key points in the image. The network is trained with the registration training data set stored in the knowledge graph, and the trained model is called to perform registration during field guidance.
Implementing the virtual-real registration module requires not only the algorithm that solves the real camera pose matrix, but also the parameter settings of the virtual camera, the determination of the virtual model pose, and the fused rendering of virtual and real images. The virtual camera parameters comprise extrinsic and intrinsic parameters: the extrinsics are determined by the enhanced assembly registration algorithm, while the intrinsics are solved by camera calibration. The virtual camera intrinsics must match those of the real camera, so the intrinsics obtained by calibrating the real camera are assigned to the virtual camera. In the enhanced assembly system, the pose coordinates of the virtual model can be derived from the pose coordinates of the corresponding real object in real space. A 3D rendering engine is used to build the graphic display and interactive scenes of the augmented reality system; the construction of the virtual scene is described in detail in the following sections.
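The geometric-consistency requirement above can be sketched numerically: once the solved real-camera pose (R, T) and the calibrated intrinsics K are copied to the virtual camera, a virtual model point placed at the real part's world position must project to the same pixel. All numeric values below are made-up stand-ins for calibration and registration results, not data from the patent.

```python
import numpy as np

# Hypothetical intrinsics K from calibration (focal lengths and principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical pose (R, T) as would be solved by the registration step.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.05, -0.02, 1.5])

def project(K, R, T, X_world):
    """Pinhole projection of a 3-D world point to 2-D pixel coordinates."""
    X_cam = R @ X_world + T        # world frame -> camera frame
    x = K @ X_cam                  # camera frame -> homogeneous pixels
    return x[:2] / x[2]            # perspective divide

X_part = np.array([0.1, 0.05, 0.0])        # real part location in the world frame
px_real = project(K, R, T, X_part)         # where the real part images
px_virtual = project(K, R, T, X_part)      # virtual camera reuses the same K, R, T
print(px_real, np.allclose(px_real, px_virtual))
```

Because the virtual camera shares both intrinsics and extrinsics with the real one, the two projections coincide, which is exactly the alignment condition the registration module enforces.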
3) Virtual-real occlusion module
The augmented reality virtual-real occlusion module is mainly used in the enhanced assembly field guidance system. It ensures that, after fusion, the virtual model exhibits a realistic occlusion relationship with the physical objects in the scene. Virtual-real occlusion is handled with a deep learning algorithm: the depth of the real scene is obtained after densification through a convolutional neural network, while the depth of the virtual scene is obtained by rendering the virtual scene model built in the 3D rendering engine. The fully convolutional network used for occlusion takes an RGB picture as input and outputs a depth value for each pixel of the image. The depth-prediction network is trained with the virtual-real occlusion training data set stored in the knowledge graph, and the trained model is called to predict depth for virtual-real occlusion during field guidance.
As shown in fig. 5, the function model of the virtual-real occlusion module comprises convolutional neural network depth prediction, virtual scene construction, and virtual model depth acquisition. Convolutional neural network depth prediction estimates the depth image of the real scene. Virtual scene construction builds, in the 3D rendering engine, a virtual scene identical to the real one. Virtual model depth acquisition extracts the depth information of the relevant three-dimensional models from the rendering engine's scene model through a depth extraction algorithm.
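The per-pixel occlusion decision the module makes can be sketched with a depth-buffer comparison: wherever the predicted real-scene depth is closer to the camera than the rendered virtual-model depth, the real pixel stays on top. The tiny synthetic depth and color maps below stand in for the CNN prediction and the 3D-engine render described above.

```python
import numpy as np

H, W = 4, 4

# Stand-in for the CNN-predicted real-scene depth (meters); a real object
# sits closer to the camera in the central 2x2 region.
real_depth = np.full((H, W), 2.0)
real_depth[1:3, 1:3] = 0.8

# Stand-in for the rendered virtual-model depth; inf marks pixels with
# no virtual content at all.
virtual_depth = np.full((H, W), np.inf)
virtual_depth[0:3, 0:3] = 1.2

# Toy color buffers: real scene is red, virtual model is blue.
real_rgb = np.zeros((H, W, 3)); real_rgb[..., 0] = 1.0
virtual_rgb = np.zeros((H, W, 3)); virtual_rgb[..., 2] = 1.0

# A virtual pixel wins only where virtual content exists AND is nearer
# than the real surface at that pixel.
virtual_visible = virtual_depth < real_depth
fused = np.where(virtual_visible[..., None], virtual_rgb, real_rgb)

print(fused[0, 0], fused[1, 1])  # virtual shown at (0,0); real object occludes at (1,1)
```

The central region demonstrates the effect the module is built for: although the virtual part is rendered there, the nearer real object correctly hides it in the fused frame.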
4) Assembly detection module
The assembly state detection and feedback module is used in the enhanced assembly field guidance system. It inspects the assembly state image after each assembly step is completed, evaluates the single-step assembly quality, and feeds the result back to the operator, preventing missing and incorrect assembly. The function model of the assembly detection feedback module is shown in fig. 6. The module is implemented with an image-feature assembly state detection method based on deep learning. According to the requirements of enhanced assembly visual guidance, missing-assembly judgment uses the assembly object confidence value obtained by convolutional neural network detection. Assembly objects are detected with an improved YOLOv3 convolutional network, whose accuracy is raised mainly by adding an attention mechanism and increasing the number of output branches of YOLOv3. Incorrect-assembly detection uses the similarity obtained by matching the 2D detection box from the convolutional network against the contour of the assembly object. Finally, the detection result is fed back to the operator and stored as an assembly record.
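The two checks described above, a confidence threshold for missed parts and a contour-similarity score for wrongly fitted parts, reduce to a small decision rule. The threshold values, part names, and the flat similarity score below are illustrative assumptions, not the patent's actual parameters or network outputs.

```python
CONF_THRESHOLD = 0.5   # hypothetical cut-off for "part is present"
SIM_THRESHOLD = 0.7    # hypothetical cut-off for "contour matches the expected part"

def check_step(detections, expected_part):
    """Judge one assembly step.

    detections: list of (part_name, confidence, contour_similarity) tuples,
    standing in for the detector's output for the current scene image.
    """
    for name, conf, sim in detections:
        if name == expected_part and conf >= CONF_THRESHOLD:
            if sim >= SIM_THRESHOLD:
                return "ok"
            return "wrong part"    # detected confidently, but contour does not match
    return "missing part"          # expected part never detected above threshold

print(check_step([("bracket", 0.92, 0.88)], "bracket"))  # ok
print(check_step([("bracket", 0.31, 0.88)], "bracket"))  # missing part
print(check_step([("bracket", 0.92, 0.40)], "bracket"))  # wrong part
```

The verdict for each step would then be shown to the operator in the fused video and appended to the assembly record, mirroring the feedback loop in fig. 6.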
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A deep-learning-based augmented reality assembly work guidance system, characterized by comprising a software platform and a hardware platform, wherein:
the software platform consists of an enhanced assembly information extraction system, an enhanced assembly content editing system, an enhanced assembly guidance system and an enhanced assembly knowledge graph module; the three subsystems are applied to different stages of the enhanced assembly process, and the data of these stages are organized and managed through the enhanced assembly knowledge graph module; wherein:
the enhanced assembly information extraction system, used in the information extraction stage, converts models from the CAD software into lightweight models, extracts the pose information of the models for constructing the virtual scene, and stores the data in the enhanced assembly knowledge graph module;
the enhanced assembly content editing system, used in the content editing stage, plans and lays out the different types of visual elements provided by the enhanced assembly knowledge graph module according to the description in the assembly process file, so as to generate the enhanced assembly guidance file;
the enhanced assembly guidance system, used in the field guidance stage, fuses the enhanced assembly guidance file with real scene images, outputs a virtual-real fused video to guide the operator through the assembly operation, and detects and reports missing or incorrect assembly during the operation;
the hardware platform comprises computing devices, image acquisition devices and interaction devices.
2. The deep-learning-based augmented reality assembly work guidance system of claim 1, wherein the enhanced assembly information extraction system extracts raw data from different data carriers, performs lightweight processing on the raw data to generate lightweight data suitable for the augmented reality system, and stores the lightweight data in the enhanced assembly knowledge graph module.
3. The deep-learning-based augmented reality assembly work guidance system of claim 1 or 2, wherein the enhanced assembly content editing system lays out and plans the visual elements provided by the enhanced assembly knowledge graph module according to the operation information requirements, constituting the front-end link of enhanced assembly visual guidance; in the content editing stage, augmented reality visual content expressing the assembly operation information is created from that information.
4. The deep-learning-based augmented reality assembly work guidance system of claim 3, wherein the enhanced assembly content editing system plans the time, position and type of the visual content displayed in the virtual-real fused video, and the enhanced assembly guidance file generated by content editing is stored in the enhanced assembly knowledge graph module and used by the enhanced assembly field guidance system.
5. The deep-learning-based augmented reality assembly work guidance system of claim 1 or 2, wherein the enhanced assembly field guidance system is deployed on the assembly site and its user is the on-site assembly operator; the field guidance system fuses the enhanced assembly guidance file from the enhanced assembly knowledge graph module with the real assembly scene, provides visual guidance information adapted to the operator's individual needs according to the operator's state, and performs quality detection on the result after the assembly operation is completed.
6. The deep-learning-based augmented reality assembly work guidance system of claim 5, wherein, when the enhanced assembly field guidance system is applied on the assembly site, a monocular camera first acquires a video image of the real assembly scene; virtual information and the real scene are then fused to guide the assembly work through the augmented reality virtual-real registration processing method and virtual-real occlusion processing method; and assembly detection feedback processing is performed after each assembly step is completed, that is, the field guidance system detects the assembly result from an assembly scene image captured by the camera and feeds the detection result back to the on-site operator.
7. The deep-learning-based augmented reality assembly work guidance system of claim 1 or 2, wherein the enhanced assembly knowledge graph module comprises an ontology sub-module, an instance node sub-module, a relationship node sub-module and a data storage sub-module, wherein:
the ontology sub-module comprises an operator information ontology, a visual content information ontology, an enhanced assembly guidance information ontology, an assembly scene information ontology, and the set of relations among these ontology concepts, the ontology being extracted with a top-down method;
the instance node sub-module extracts and stores instance data using the ontology concepts, extracting text data with an LSTM neural network and other, non-text data manually;
the relationship node sub-module creates the relations between ontology concepts and between instance data;
the data storage sub-module stores the knowledge graph data for different application scenarios.
8. The deep-learning-based augmented reality assembly work guidance system of claim 6, wherein the virtual-real registration processing method performs registration with a fully convolutional neural network whose input is an RGB picture and whose output is the image coordinates of key points in the image; the fully convolutional network model is trained with the registration training data set in the enhanced assembly knowledge graph module, and the trained model is called to perform registration during field guidance.
9. The deep-learning-based augmented reality assembly work guidance system of claim 6, wherein the virtual-real occlusion processing method performs occlusion handling with a fully convolutional neural network whose input is an RGB picture and whose output is a depth value for each pixel of the image; the depth-prediction network model is trained with the virtual-real occlusion training data set in the enhanced assembly knowledge graph module, and the trained model is called to predict depth for virtual-real occlusion during field guidance.
10. The deep-learning-based augmented reality assembly work guidance system of claim 6, wherein the assembly detection feedback processing comprises: according to the requirements of enhanced assembly visual guidance, performing missing-assembly judgment with the assembly object confidence value obtained by convolutional neural network detection; detecting assembly objects with an improved YOLOv3 convolutional network; performing incorrect-assembly detection with the similarity obtained by matching the 2D detection box from the convolutional network against the contour of the assembly object; and finally feeding the detection result back to the operator and storing the result as an assembly record; the improvement to YOLOv3 raises the accuracy of assembly object detection by adding an attention mechanism and increasing the number of output branches.
CN202210972333.5A 2022-08-15 2022-08-15 Augmented reality auxiliary assembly operation guiding system based on deep learning Pending CN115294308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210972333.5A CN115294308A (en) 2022-08-15 2022-08-15 Augmented reality auxiliary assembly operation guiding system based on deep learning

Publications (1)

Publication Number Publication Date
CN115294308A true CN115294308A (en) 2022-11-04

Family

ID=83830595


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115570572A (en) * 2022-11-09 2023-01-06 北京工业大学 Complex assembly task action sequence planning method based on hierarchical knowledge graph
CN116678348A (en) * 2023-07-31 2023-09-01 无锡黎曼机器人科技有限公司 Method and device for detecting missing parts of whole diesel engine
CN116778119A (en) * 2023-06-26 2023-09-19 中国信息通信研究院 Man-machine cooperative assembly system based on augmented reality



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination