CN117541754A - Hydraulic motor assembly guidance system and method based on mixed reality technology - Google Patents

Info

Publication number: CN117541754A
Application number: CN202311541120.8A
Authority: CN (China)
Prior art keywords: hydraulic motor, module, target detection, detection result, camera
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 张睿昊, 刘二腾, 吴连秋, 钟钊瑜
Current Assignee / Original Assignee: Zhejiang Yufeng Information Technology Co ltd
Priority date / Filing date: 2023-11-17
Publication date: 2024-02-09
Application filed by Zhejiang Yufeng Information Technology Co ltd; priority to CN202311541120.8A; published as CN117541754A

Classifications

    • G06T19/006 — Mixed reality (manipulating 3D models or images for computer graphics)
    • G06F9/453 — Help systems (execution arrangements for user interfaces)
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/096 — Transfer learning (neural network learning methods)
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/776 — Validation; performance evaluation
    • G06V10/82 — Image or video recognition or understanding using neural networks
    • G06V20/20 — Scenes; scene-specific elements in augmented reality scenes
    • G06T2207/10016 — Video; image sequence (image acquisition modality)
    • G06T2207/20081 — Training; learning (special algorithmic details)
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30244 — Camera pose
    • G06V2201/07 — Target detection


Abstract

The invention discloses a hydraulic motor assembly guidance system based on mixed reality technology. The system comprises a server side and a device terminal. A target detection model for hydraulic motor parts is built on the server side, so that a target detection result for a hydraulic motor part can be generated directly from acquired image information of the part. The device terminal receives the target detection result, obtains the real position of the hydraulic motor part by applying a coordinate transformation to the detection result, and guides assembly at the part's assembly position based on that real position. Given the many parts and complex procedures involved in assembling a hydraulic motor, the system overcomes the inconvenience of traditional paper or electronic work instructions by combining assembly guidance with mixed reality technology.

Description

Hydraulic motor assembly guidance system and method based on mixed reality technology
Technical Field
The invention relates to the technical field of hydraulic motor assembly, in particular to a hydraulic motor assembly guidance system and method based on a mixed reality technology.
Background
Hydraulic motors are a common type of mechanical equipment whose assembly requires a high degree of expertise and precise steps. Existing assembly guidance methods rely on paper or electronic operating manuals, which are limited in the step-by-step detail they can provide. In a real working environment, consulting a paper or electronic manual while both hands are occupied with assembly work is inconvenient. To address this, the prior art discloses a method that identifies assembled parts through on-device processing on a HoloLens 2. However, because that method runs the target detection deep learning network directly on the HoloLens 2, it consumes a large amount of hardware resources, and program performance degrades, especially in more complex assembly guidance systems.
The above problems remain to be solved.
Disclosure of Invention
In order to solve at least one of the technical problems in the prior art, in a first aspect, an embodiment of the present invention provides a hydraulic motor assembly guidance system based on mixed reality technology. The system includes a server side and a device terminal. The server side includes a hydraulic motor data set building module, a target detection model training module, a target detection result generating module, and a first data transmission module; the hydraulic motor data set building module is used for building a hydraulic motor data set based on image information of hydraulic motor parts; the target detection model training module is used for performing transfer learning training on the hydraulic motor data set to obtain a hydraulic motor part target detection model; the target detection result generating module is used for generating a target detection result based on the hydraulic motor part target detection model; the first data transmission module is used for enabling the server side to communicate with the device terminal. The device terminal includes a target image information acquisition module, a target detection result receiving module, a coordinate conversion module, a hydraulic motor part real position generation module, and a hydraulic motor assembly guidance module; the target image information acquisition module is used for acquiring image information of hydraulic motor parts; the target detection result receiving module is used for receiving the target detection result sent by the server side, the target detection result including a category confidence and a normalized bounding box; the coordinate conversion module is used for converting the image coordinate points in the target detection result into camera world coordinates; the hydraulic motor part real position generation module is used for generating the real position of the hydraulic motor part based on the camera world coordinates; and the hydraulic motor assembly guidance module is used for guiding assembly at the assembly positions of the hydraulic motor parts based on their real positions.
Further, the hydraulic motor data set building module comprises an image receiving module, an image labeling module, an image dividing module and an image deriving module; the image receiving module is used for receiving image information of the hydraulic motor parts; the image labeling module is used for labeling the position information and the category information of the hydraulic motor parts by applying a data set labeling tool to the image information of the hydraulic motor parts; the image dividing module is used for dividing the image information of the hydraulic motor parts into a training set and a verification set; the image deriving module is used for exporting the image information of the hydraulic motor parts to the YOLO dataset format.
Further, the target detection model training module comprises a pre-training model building module and a target detection model generating module; the pre-training model building module is used for building a deep learning target detection network model based on the hydraulic motor data set; the target detection model generating module is used for training the deep learning target detection network model with a transfer learning algorithm to obtain the hydraulic motor part target detection model, the network model being a Yolov8-n network model.
Further, the target detection result generation module is used for loading network parameters of the trained hydraulic motor part target detection model and generating a target detection result based on the received picture information of the hydraulic motor part sent by the equipment terminal.
Further, the equipment terminal also comprises a second data transmission module, which is used for transmitting the image information of the hydraulic motor parts and the target detection result generated by the server.
Further, the normalized bounding box has the format (x₀, y₀, w, h), where x₀ and y₀ represent the coordinates of the center point of the bounding box, and w and h represent the width and height of the bounding box, respectively.
Further, the coordinate conversion module is further configured to: convert the imaging point p(x₀, y₀) into the camera projection coordinates p(x, y) according to a first formula; let the camera projection coordinate p have coordinates p(x, y) in the camera coordinate system C, and generate the target point P(c_xP, c_yP, c_zP) of the imaging point, the projection-matrix form of the camera imaging being given by a second formula; obtain, through the locatable camera of the HoloLens 2, the projection matrix [PM] of the camera and the position matrix [W] of the camera in reality at the time of photographing; convert a world coordinate point obtained via the projection matrix onto the camera imaging plane according to a third formula; and substitute the image imaging point p(x, y) into the third formula to obtain the position T(x, y, z) of the imaging point in world coordinates, the world coordinate position C(x, y, z) of the camera being computed from the camera position matrix stored by the locatable camera.
Furthermore, the real position generation module of the hydraulic motor part is further configured to obtain a ray starting from the camera and directed toward the imaging point by subtracting the camera world coordinates from the imaging point world coordinates; the ray intersects the HoloLens 2 spatial-perception mesh, and the intersection point is the real position of the hydraulic motor part.
Further, the hydraulic motor assembly guidance module comprises a hydraulic motor assembly position determining module, a hydraulic motor assembly flow guiding module, a hydraulic motor part identification module and a display module; the hydraulic motor assembly position determining module is used for determining an assembly guidance demonstration position; the hydraulic motor assembly flow guiding module is used for providing one or a combination of text, picture and audio/video introductions for hydraulic motor part assembly guidance through the display panel; the hydraulic motor part identification module is used for obtaining a target detection result by sending image information of unidentifiable parts to the server side; and the display module is used for displaying the part assembly demonstration animation after the real part is detected, the motion trail of the animation being a Bezier curve.
In a second aspect, an embodiment of the present invention provides a hydraulic motor assembly guidance method based on a mixed reality technology, the method including: at the server side: receiving image information of hydraulic motor parts; creating a hydraulic motor dataset based on the image information; performing transfer learning training on the hydraulic motor data set to obtain a hydraulic motor part target detection model; generating a target detection result based on the hydraulic motor part target detection model; transmitting the target detection result to a device terminal; at the device terminal: acquiring image information of hydraulic motor parts; receiving a target detection result sent by a server, wherein the target detection result comprises category confidence and a normalized bounding box; converting the image coordinate points in the target detection result into camera world coordinates; generating a real position of a hydraulic motor component based on the camera world coordinates; and performing assembly guidance on the assembly positions of the hydraulic motor parts based on the actual positions of the hydraulic motor parts.
In a third aspect, an embodiment of the present invention provides a computer readable storage medium storing one or more instructions which, when executed by a computer, cause the computer to perform the hydraulic motor assembly guidance method based on mixed reality technology described above.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor; at least one program instruction is stored in the memory; the processor loads and executes the at least one program instruction to implement the hydraulic motor assembly guidance method based on mixed reality technology.
The technical solution provided by the embodiment of the invention has the following beneficial effects. The embodiment provides a hydraulic motor assembly guidance system based on mixed reality technology, comprising a server side and a device terminal. A target detection model for hydraulic motor parts is built on the server side, so that a target detection result for a hydraulic motor part can be generated directly from acquired image information of the part. The device terminal receives the target detection result, obtains the real position of the hydraulic motor part by applying a coordinate transformation to the detection result, and guides assembly at the part's assembly position based on that real position. Given the many parts and complex procedures involved in assembling a hydraulic motor, the system overcomes the inconvenience of traditional paper or electronic work instructions by combining assembly guidance with mixed reality technology.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a hydraulic motor assembly guidance system based on a mixed reality technology according to an embodiment of the present invention.
Fig. 2 is a schematic diagram showing an internal structure of a hydraulic motor data set creation module provided in an embodiment of the present invention.
Fig. 3a-3b are schematic diagrams of the coordinate systems used during coordinate transformation provided in one embodiment of the invention.
Fig. 4 is a schematic view of the actual positions of components provided in one embodiment of the present invention.
Fig. 5 is a flowchart of a hydraulic motor assembly guidance method based on a mixed reality technology according to another embodiment of the present invention at a server side.
Fig. 6 is a flow chart of a hydraulic motor assembly guidance method based on a mixed reality technology at a device terminal according to still another embodiment of the present invention.
Fig. 7 is a partial block diagram of an electronic device provided by an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs; the terms used in the specification are used herein for the purpose of describing particular embodiments only and are not intended to limit the present invention, for example, the orientations or positions indicated by the terms "length", "width", "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. are orientations or positions based on the drawings, which are merely for convenience of description and are not to be construed as limiting the present invention.
The terms "comprising" and "having" and any variations thereof in the description of the invention and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion; the terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. The meaning of "a plurality of" is two or more, unless specifically defined otherwise.
Furthermore, references herein to "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, a schematic diagram of a hydraulic motor assembly guidance system based on mixed reality technology according to the present invention is shown.
As an example, the system comprises a server side 1 and an equipment terminal 2, the server side 1 comprising a hydraulic motor dataset creation module 110, a target detection model training module 120, a target detection result generation module 130 and a first data transmission module 140.
Optionally, the hydraulic motor data set creating module 110 is configured to create a hydraulic motor data set based on image information of hydraulic motor components.
Optionally, as shown in fig. 2, the hydraulic motor data set creating module 110 includes an image receiving module 1101, an image labeling module 1102, an image dividing module 1103, and an image deriving module 1104; the image receiving module 1101 is configured to receive image information of the hydraulic motor component; the image labeling module 1102 is configured to label the position information and the category information of the hydraulic motor component by applying a dataset labeling tool to the image information of the hydraulic motor component; the image dividing module 1103 is configured to divide the image information of the hydraulic motor component into a training set and a verification set; the image derivation module 1104 is configured to export the image information of the hydraulic motor component to the YOLO dataset format. Specifically, the training set and the verification set are divided as follows: using the hold-out method, a user-defined picture-extraction ratio is set according to the constructed network dataset and the conditions of on-site picture acquisition, and the training set and the verification set are then randomly divided according to that ratio, providing data input support for subsequent network model training.
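As an illustration of the hold-out division into a YOLO-style dataset layout, a minimal sketch follows; the 0.8 ratio, file extensions and directory names are assumptions, not values from the filing:

```python
import random
import shutil
from pathlib import Path

def holdout_split(image_dir: str, label_dir: str, out_dir: str,
                  train_ratio: float = 0.8, seed: int = 0) -> None:
    """Randomly split YOLO-format images/labels into train/ and val/ subsets."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_train = int(len(images) * train_ratio)
    for subset, subset_images in (("train", images[:n_train]), ("val", images[n_train:])):
        for img in subset_images:
            label = Path(label_dir) / (img.stem + ".txt")  # one YOLO .txt per image
            img_out = Path(out_dir) / "images" / subset
            lbl_out = Path(out_dir) / "labels" / subset
            img_out.mkdir(parents=True, exist_ok=True)
            lbl_out.mkdir(parents=True, exist_ok=True)
            shutil.copy(img, img_out / img.name)
            if label.exists():
                shutil.copy(label, lbl_out / label.name)

holdout_split("raw/images", "raw/labels", "hydraulic_motor_dataset")
```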
Optionally, the target detection model training module 120 is configured to perform transfer learning training on the hydraulic motor data set to obtain a hydraulic motor component target detection model. The target detection model training module 120 includes a pre-training model building module and a target detection model generating module; the pre-training model building module is used for building a deep learning target detection network model based on the hydraulic motor data set; the target detection model generating module is used for training the deep learning target detection network model with a transfer learning algorithm to obtain the hydraulic motor part target detection model, the network model being a Yolov8-n network model. Specifically, a Yolov8 model with a CSPDarkNet-53 backbone is used as the hydraulic motor part target detection network; part of the backbone parameters are frozen, and transfer learning training is carried out on the data set described above to obtain the hydraulic motor part target detection model. The YOLO network is a neural network capable of processing more than 60 frames per second and is characterized by high recognition efficiency and high recognition accuracy. Transfer learning trains a model for a related task starting from a pre-trained network model, and here adapts the model to the scenes and part-feature mappings recognized in this embodiment. In this embodiment, a pre-trained YOLOv8 weight model is loaded together with the training set and the verification set, and the model is trained with related parameters such as the number of iterations, step size, learning rate and number of categories set, yielding the target detection model.
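A minimal transfer-learning sketch using the Ultralytics YOLOv8 API is given below; the dataset YAML path, the number of frozen layers, and the hyperparameter values are illustrative assumptions rather than values from the filing:

```python
from ultralytics import YOLO

# Start from pre-trained YOLOv8-n weights rather than training from scratch.
model = YOLO("yolov8n.pt")

# Freeze the first backbone layers and fine-tune the rest on the
# hydraulic-motor dataset (exported in YOLO format, described by a YAML file).
model.train(
    data="hydraulic_motor.yaml",  # paths to train/val images and class names
    epochs=100,                   # number of training iterations over the dataset
    imgsz=640,                    # input image size
    lr0=0.01,                     # initial learning rate
    freeze=10,                    # freeze the first 10 layers (backbone)
)

metrics = model.val()  # evaluate on the verification set
```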
Optionally, the target detection result generating module 130 is configured to generate a target detection result based on the hydraulic motor component target detection model. The target detection result generating module 130 is configured to load network parameters of a trained hydraulic motor component target detection model, and generate a target detection result based on the received picture information of the hydraulic motor component sent by the equipment terminal.
Optionally, the first data transmission module 140 is configured to enable communication between the server side and the device terminal. Specifically, a web service is written with the Flask framework; it receives the transmitted picture information and returns the target detection result.
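As a hedged sketch of such a service (the `/detect` route, port, and JSON field names are illustrative assumptions, and the Ultralytics API is one plausible way to load the trained weights):

```python
import io

from flask import Flask, jsonify, request
from PIL import Image
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("runs/detect/train/weights/best.pt")  # trained part-detection weights

@app.route("/detect", methods=["POST"])
def detect():
    # The device terminal POSTs a captured photo of a hydraulic motor part.
    image = Image.open(io.BytesIO(request.files["image"].read()))
    result = model(image)[0]
    detections = [
        {
            "class": result.names[int(box.cls)],
            "confidence": float(box.conf),
            # Normalized (x0, y0, w, h): center point plus width and height.
            "bbox": [float(v) for v in box.xywhn[0]],
        }
        for box in result.boxes
    ]
    return jsonify(detections)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```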
As an example, the device terminal 2 includes a target image information acquisition module 210, a target detection result reception module 220, a coordinate conversion module 230, a hydraulic motor part real position generation module 240, and a hydraulic motor assembly guidance module 250.
Optionally, the target image information acquisition module 210 is configured to acquire image information of hydraulic motor components. Specifically, pictures of each hydraulic motor part taken by the camera from various angles serve as the image information of the part, with no fewer than five pictures per part.
Optionally, the target detection result receiving module 220 is configured to receive a target detection result sent by the server side 1, where the target detection result includes a category confidence and a normalized bounding box. Specifically, the target detection result returned from the server side 1 is received; it includes a category confidence and a normalized bounding box in the format (x₀, y₀, w, h), where x₀ and y₀ represent the coordinates of the center point of the bounding box, and w and h represent its width and height, respectively. Normalization means the original pixel value divided by the size of the input image.
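For illustration, pixel coordinates can be recovered from the normalized box by multiplying back by the image size; a minimal sketch, in which the image dimensions are assumptions:

```python
def denormalize_bbox(x0: float, y0: float, w: float, h: float,
                     img_w: int, img_h: int) -> tuple:
    """Convert a normalized (center-x, center-y, width, height) box to pixels."""
    return (x0 * img_w, y0 * img_h, w * img_w, h * img_h)

# Center of a detection in pixel coordinates for an assumed 1920x1080 capture.
center_px = denormalize_bbox(0.5, 0.4, 0.1, 0.2, 1920, 1080)[:2]
```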
Optionally, the coordinate transformation module 230 is configured to transform the image coordinate points in the target detection result into camera world coordinates. Specifically, as shown in figs. 3a-3b, fig. 3a is a schematic diagram of the coordinate system for converting picture coordinates to camera projection coordinates, and fig. 3b is a schematic diagram of the coordinate system for converting projection coordinates to real coordinates. The imaging point p(x₀, y₀) is converted into the camera projection coordinates p(x, y) according to a first formula. Let the camera projection coordinate P have coordinates P(x, y) in the camera coordinate system C, and generate the target point P(c_xP, c_yP, c_zP) of the imaging point; the projection-matrix form of the camera imaging is given by a second formula. Through the locatable camera of the HoloLens 2, the projection matrix [PM] of the camera and the position matrix [W] of the camera in reality at the time of photographing are obtained. The formula converting a world coordinate point, obtained via the projection matrix, onto the camera imaging plane is a third formula. Substituting the image imaging point p(x, y) into the third formula yields the position T(x, y, z) of the imaging point in world coordinates, and the world coordinate position C(x, y, z) of the camera is computed from the camera position matrix stored by the locatable camera. The projection matrix [PM] and the position matrix [W] of the camera in reality are acquired directly from the HoloLens device.
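The three formulas are given as figure images in the original filing and are not reproduced in this text; a hedged LaTeX reconstruction, assuming the standard HoloLens 2 locatable-camera pinhole model and homogeneous coordinates, is:

```latex
% First formula (assumed): normalized image point (x_0, y_0), origin at the
% top-left, mapped to projection-plane coordinates in [-1, 1] (y flipped):
\[ x = 2x_0 - 1, \qquad y = 1 - 2y_0 \]

% Second formula (assumed): projection-matrix form of camera imaging, taking
% the camera-space target point P(c_{xP}, c_{yP}, c_{zP}) to the projection
% plane up to a scale factor s:
\[ s \begin{bmatrix} x \\ y \\ 1 \\ 1 \end{bmatrix}
   = [PM] \begin{bmatrix} c_{xP} \\ c_{yP} \\ c_{zP} \\ 1 \end{bmatrix} \]

% Third formula (assumed): with [W] the camera-to-world position matrix,
% a world point T projects onto the imaging plane via
\[ s \begin{bmatrix} x \\ y \\ 1 \\ 1 \end{bmatrix}
   = [PM]\,[W]^{-1} \begin{bmatrix} T_x \\ T_y \\ T_z \\ 1 \end{bmatrix} \]
% so the imaging point is lifted back to world coordinates (up to scale) by
% T \propto [W][PM]^{-1}(x, y, 1, 1)^T, and the camera's world position is
% C = [W](0, 0, 0, 1)^T.
```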
Optionally, the hydraulic motor part real position generation module 240 is configured to generate the real position of the hydraulic motor part based on the camera world coordinates. Specifically, as shown in fig. 4, the module obtains a ray starting from the camera and directed toward the imaging point by subtracting the camera world coordinates from the imaging point world coordinates; the ray intersects the HoloLens 2 spatial-perception mesh, and the intersection point is the real position of the hydraulic motor part. More specifically, since the camera, the imaging point and the real object are collinear, the real object also lies on this ray; the point where the ray collides with the HoloLens 2 spatial-perception mesh is therefore the real position of the hydraulic motor part in the real world.
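For illustration only, the ray construction can be sketched in numpy; the mesh raycast itself is performed by the HoloLens 2 spatial-perception system, so `raycast_spatial_mesh` below is a hypothetical stand-in, not an API from the filing:

```python
import numpy as np

def part_world_position(imaging_point_world: np.ndarray,
                        camera_world: np.ndarray,
                        raycast_spatial_mesh) -> np.ndarray:
    """Cast a ray from the camera through the imaging point onto the
    spatial-perception mesh; the hit point is the part's real position."""
    direction = imaging_point_world - camera_world  # camera -> imaging point
    direction = direction / np.linalg.norm(direction)
    # The camera, imaging point and real part are collinear, so the part
    # lies where this ray first hits the spatial mesh.
    return raycast_spatial_mesh(origin=camera_world, direction=direction)
```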
Optionally, the hydraulic motor assembly guidance module 250 is configured to guide the assembly of the hydraulic motor components based on the actual positions of the hydraulic motor components. Specifically, the hydraulic motor assembly guidance module 250 includes a hydraulic motor assembly position determination module, a hydraulic motor assembly flow guidance module, a hydraulic motor component identification module, and a reminder module.
More specifically, the hydraulic motor assembly position determination module is configured to determine the assembly guidance demonstration position, i.e. the virtual hydraulic motor body is dragged near to, or coincident with, the real object to serve as the assembly guidance demonstration position. The hydraulic motor assembly flow guiding module is configured to provide one or a combination of text, picture and audio/video introductions for hydraulic motor part assembly guidance through the display panel, i.e. the panel presents the text and picture introductions for assembly guidance, including basic information about the hydraulic motor parts, assembly notes and assembly position pictures, while playing the assembly demonstration video of each step. The hydraulic motor part identification module is configured to obtain the target detection result for a part the user cannot identify: the user photographs the unrecognized part via a button, the photo is sent to the server side, the target detection result of the hydraulic motor part is received, and the virtual body is then placed on the real part in the operation step with a sound prompt. The display module is configured to display the part assembly demonstration animation after the real part is detected, the motion trail of the animation being a Bezier curve. In addition, controls such as the next step and photographing can be operated by voice, and a text-reading function is provided that plays the textual assembly introduction of the hydraulic motor parts aloud.
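As an illustration of a Bezier-curve motion trail, a cubic Bezier between the part's detected position and its assembly position can be evaluated as in the following sketch; the control-point values are assumptions:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t: float) -> np.ndarray:
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Sample the demonstration animation trail from the part's detected position
# to its assembly position through two lifted control points.
trail = [cubic_bezier([0, 0, 0], [0, 0.3, 0.1], [0.2, 0.3, 0.4], [0.2, 0, 0.5], t)
         for t in np.linspace(0.0, 1.0, 30)]
```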
Optionally, the device terminal 2 further includes a second data transmission module 260, configured to transmit the image information of the hydraulic motor parts and to receive the target detection result generated by the server side. Specifically, HTTP communication with the server is performed through a WebAPI: the photographed picture data is sent to the server, and the target detection result data returned by the server is received.
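Although the device terminal is a HoloLens 2 client, the HTTP round trip can be illustrated with an equivalent Python sketch; the URL and field names simply match the assumed Flask service above:

```python
import requests

def request_detection(image_path: str,
                      server_url: str = "http://192.168.1.10:5000/detect"):
    """Send a captured photo to the server and return its detection list."""
    with open(image_path, "rb") as f:
        response = requests.post(server_url, files={"image": f}, timeout=10)
    response.raise_for_status()
    return response.json()  # [{"class": ..., "confidence": ..., "bbox": [...]}, ...]

detections = request_detection("capture.jpg")
```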
In this embodiment, given the many parts and complex procedures involved in assembling a hydraulic motor, the inconvenience of traditional paper or electronic work instructions is overcome by combining assembly guidance with mixed reality technology. Because the computing resources of the terminal are limited, a client-server communication pattern is used so that back-end computing power can be fully exploited together with deep learning technology. By using a three-dimensional positioning method that does not require enabling research mode, the difficulty of popularizing the completed assembly guidance system is avoided, the computing requirements of the system are reduced, and the program load is lightened. The method provides rich and varied guidance modes, including text, pictures, video and voice, and vivid forms of expression such as part recognition and assembly animation, to better help assembly staff understand a complex assembly process, thereby guiding the whole assembly process more efficiently while making assembly operations faster and more accurate.
Example 2
Referring to fig. 5-6, a flow chart of a hydraulic motor assembly guidance method based on mixed reality technology is shown, according to one embodiment of the invention.
As an example, the method comprises: at the server side:
s510: image information of hydraulic motor components is received.
S520: a hydraulic motor dataset is established based on the image information.
S530: and performing transfer learning training on the hydraulic motor data set to obtain a hydraulic motor part target detection model.
S540: and generating a target detection result based on the hydraulic motor part target detection model.
S550: and transmitting the target detection result to the equipment terminal.
At the device terminal:
s610: image information of the hydraulic motor components is acquired.
S620: and receiving a target detection result sent by the server, wherein the target detection result comprises category confidence and a normalized bounding box.
S630: and converting the image coordinate points in the target detection result into camera world coordinates.
S640: and generating the real position of the hydraulic motor part based on the world coordinates of the camera.
S650: and performing assembly guidance on the assembly positions of the hydraulic motor parts based on the actual positions of the hydraulic motor parts.
Example 3
The embodiment of the invention also provides a storage medium, on which a hydraulic motor assembly guidance program based on mixed reality technology is stored; when executed by a processor, the program implements the steps of the hydraulic motor assembly guidance method based on mixed reality technology described above. Because the storage medium adopts all the technical solutions of all the embodiments above, it has at least all the beneficial effects brought by those technical solutions, which are not repeated here.
Example 4
Referring to fig. 7, an embodiment of the present invention further provides an electronic device, including: a memory and a processor; at least one program instruction is stored in the memory; the processor implements the hydraulic motor assembly guidance method based on the mixed reality technology provided in embodiment 2 by loading and executing the at least one program instruction.
The memory 702 and the processor 701 are connected by a bus, which may include any number of interconnected buses and bridges linking together various circuits of the one or more processors 701 and the memory 702. The bus may also connect various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore not described further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be a single element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 701 is transmitted over a wireless medium via an antenna, which also receives data and forwards it to the processor 701.
The processor 701 is responsible for managing the bus and general processing, and may provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions. The memory 702 may be used to store data used by the processor 701 in performing operations.
The foregoing is merely an embodiment of the present invention. Specific structures and characteristics that are common knowledge in the art are not described in detail here; a person of ordinary skill in the art knows the prior art as of the application date or the priority date, has access to all of that prior art, and is capable of applying the routine experimental means of that date, so that, in the light of this application, such a person could complete and implement the present embodiment. Modifications and improvements made by those skilled in the art without departing from the structure of the present invention should also be considered within the scope of the invention and do not affect the effect of its implementation or the utility of the patent. The scope of protection of this application is defined by the claims; the detailed description and other content of the specification may be used to interpret the claims.

Claims (10)

1. The hydraulic motor assembly guidance system based on the mixed reality technology comprises a server side and an equipment terminal, and is characterized in that the server side comprises a hydraulic motor data set building module, a target detection model training module, a target detection result generating module and a first data transmission module;
the hydraulic motor data set building module is used for building a hydraulic motor data set based on image information of hydraulic motor parts;
the target detection model training module is used for performing transfer learning training on the hydraulic motor data set to obtain a hydraulic motor part target detection model;
the target detection result generation module is used for generating a target detection result based on the hydraulic motor part target detection model;
the first data transmission module is used for enabling the server side to communicate with the equipment terminal;
the equipment terminal comprises a target image information acquisition module, a target detection result receiving module, a coordinate conversion module, a hydraulic motor part real position generation module and a hydraulic motor assembly guidance module;
the target image information acquisition module is used for acquiring image information of hydraulic motor parts;
the target detection result receiving module is used for receiving a target detection result sent by the server, wherein the target detection result comprises category confidence and a normalized bounding box;
the coordinate conversion module is used for converting the image coordinate points in the target detection result into camera world coordinates;
the hydraulic motor part real position generation module is used for generating the real position of the hydraulic motor part based on the world coordinates of the camera;
the hydraulic motor assembly guidance module is used for guiding the assembly positions of the hydraulic motor parts based on the actual positions of the hydraulic motor parts.
2. The hydraulic motor assembly guidance system based on mixed reality technology of claim 1, wherein the hydraulic motor dataset creation module comprises an image receiving module, an image labeling module, an image dividing module, and an image deriving module;
the image receiving module is used for receiving image information of the hydraulic motor parts;
the image labeling module is used for labeling the position information and the category information of the hydraulic motor parts by applying a data set labeling tool to the image information of the hydraulic motor parts;
the image dividing module is used for dividing the image information of the hydraulic motor parts into a training set and a verification set;
the image deriving module is used for exporting the image information of the hydraulic motor parts to the YOLO dataset format.
3. The hydraulic motor assembly guidance system based on mixed reality technology of claim 1, wherein the target detection model training module comprises a pre-training model building module and a target detection model generating module;
the pre-training model building module is used for building a deep learning target detection network model based on the hydraulic motor data set;
the target detection model generating module is used for training the deep learning target detection network model with a transfer learning algorithm to obtain the hydraulic motor part target detection model, the network model being a Yolov8-n network model.
4. The hydraulic motor assembly guidance system based on the mixed reality technology according to claim 1, wherein the target detection result generating module is configured to load network parameters of a trained hydraulic motor component target detection model, and generate a target detection result based on the received picture information of the hydraulic motor component sent by the equipment terminal.
5. The hydraulic motor assembly guidance system based on mixed reality technology according to claim 1, wherein the equipment terminal further comprises a second data transmission module for transmitting image information of hydraulic motor parts and a target detection result generated by a server side.
6. The hydraulic motor assembly guidance system based on mixed reality technology of claim 1, wherein the normalized bounding box has the format (x₀, y₀, w, h), where x₀ and y₀ represent the coordinates of the center point of the bounding box, and w and h represent the width and height of the bounding box, respectively.
7. The hydraulic motor assembly guidance system based on mixed reality technology of claim 1, wherein the coordinate transformation module is further configured to:
convert the imaging point p(x₀, y₀) into the camera projection coordinates p(x, y) according to a first formula;
let the camera projection coordinate p have coordinates p(x, y) in the camera coordinate system C, and generate the target point P(c_xP, c_yP, c_zP) of the imaging point, the projection-matrix form of the camera imaging being given by a second formula;
obtain, through the locatable camera of the HoloLens 2, the projection matrix [PM] of the camera and the position matrix [W] of the camera in reality at the time of photographing;
convert a world coordinate point obtained via the projection matrix onto the camera imaging plane according to a third formula; and
substitute the image imaging point p(x, y) into the third formula to obtain the position T(x, y, z) of the imaging point in world coordinates, and compute the world coordinate position C(x, y, z) of the camera from the camera position matrix stored by the locatable camera.
8. The hydraulic motor assembly guidance system based on mixed reality technology according to claim 1, wherein the hydraulic motor component real position generating module is further configured to obtain a ray from the camera toward the imaging point by subtracting the camera world coordinates from the imaging point world coordinates, where the ray intersects the HoloLens 2 spatial-perception mesh, and the intersection point is the real position of the hydraulic motor component.
9. The hydraulic motor assembly guidance system based on mixed reality technology of claim 1, wherein the hydraulic motor assembly guidance module comprises a hydraulic motor assembly position determination module, a hydraulic motor assembly flow guidance module, a hydraulic motor component identification module and a display module;
the hydraulic motor assembly position determining module is used for determining an assembly guidance demonstration position;
the hydraulic motor assembly flow guiding module is used for providing one or a combination of text introduction, picture introduction and audio/video introduction of hydraulic motor part assembly guidance through the display panel;
the hydraulic motor part identification module is used for acquiring a target detection result by sending image information of unidentifiable parts to a server side;
and the display module is used for displaying the component assembly demonstration animation after detecting the actual object of the component, and the motion trail of the component assembly demonstration animation is a Bezier curve.
10. A hydraulic motor assembly guidance method based on mixed reality technology, the method comprising:
at the server side:
receiving image information of hydraulic motor parts;
creating a hydraulic motor dataset based on the image information;
performing transfer learning training on the hydraulic motor data set to obtain a hydraulic motor part target detection model;
generating a target detection result based on the hydraulic motor part target detection model;
transmitting the target detection result to a device terminal;
at the device terminal:
acquiring image information of hydraulic motor parts;
receiving a target detection result sent by a server, wherein the target detection result comprises category confidence and a normalized bounding box;
converting the image coordinate points in the target detection result into camera world coordinates;
generating a real position of a hydraulic motor component based on the camera world coordinates;
and performing assembly guidance on the assembly positions of the hydraulic motor parts based on the actual positions of the hydraulic motor parts.
CN202311541120.8A — filed 2023-11-17, priority date 2023-11-17 — Hydraulic motor assembly guidance system and method based on mixed reality technology — Pending — CN117541754A (en)

Priority Applications (1)

Application number: CN202311541120.8A — Priority date: 2023-11-17 — Filing date: 2023-11-17 — Title: Hydraulic motor assembly guidance system and method based on mixed reality technology

Publications (1)

Publication number: CN117541754A — Publication date: 2024-02-09

Family

ID=89793335 — Family application: CN202311541120.8A (CN, filed 2023-11-17, pending) — Country: CN — Publication: CN117541754A (en)


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination