CN114521911A - Augmented reality display method and system based on lateral position of skull and storage medium - Google Patents

Augmented reality display method and system based on lateral position of skull and storage medium

Info

Publication number
CN114521911A
Authority
CN
China
Prior art keywords
lateral
cranial
ray
augmented reality
target person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210164032.XA
Other languages
Chinese (zh)
Inventor
岳江
魏海林
郑旭
盘景松
王冠
孙靖超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Alemu Health Technology Co ltd
Original Assignee
Shanghai Alemu Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Alemu Health Technology Co ltd filed Critical Shanghai Alemu Health Technology Co ltd
Priority to CN202210164032.XA
Publication of CN114521911A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50: Apparatus or devices for radiation diagnosis specially adapted for specific body parts or specific clinical applications
    • A61B 6/51: Apparatus or devices for radiation diagnosis specially adapted for dentistry
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/46: Arrangements for interfacing with the operator or the patient
    • A61B 6/461: Displaying means of special interest
    • A61B 6/463: Displaying means of special interest characterised by displaying multiple images, or images and diagnostic data, on one display
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5229: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/5247: Devices using data or image processing specially adapted for radiation diagnosis involving combining images from an ionising-radiation diagnostic technique and a non-ionising-radiation diagnostic technique, e.g. X-ray and ultrasound
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5294: Devices using data or image processing specially adapted for radiation diagnosis involving using additional data, e.g. patient information, image labeling, acquisition parameters

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides an augmented reality display method, system, and storage medium based on the lateral cranial position. The method comprises the following steps: acquiring an X-ray film feature set formed from a plurality of X-ray film features of a target person; determining a lateral facial feature set based on an image of the target person's real head displayed by an augmented reality device; registering the X-ray film feature set with the lateral facial feature set to determine a mapping relation between the lateral cranial augmented image and the target person's real head; and, based on this mapping relation, displaying the lateral cranial augmented image overlaid on the image of the target person's real head in the augmented reality device. The method can produce a rich, intuitive, dynamic, and linked display of the lateral tissue structures of the craniofacial region, and assists cephalometric analysis in obtaining measurement results of more dimensions.

Description

Augmented reality display method and system based on lateral position of skull and storage medium
Technical Field
The present application relates to the fields of image processing and augmented reality display, and in particular to an augmented reality display method, system, and storage medium based on the lateral cranial position.
Background
X-rays, also called Roentgen rays, are invisible to the naked eye. They can cause certain compounds to fluoresce or expose photographic film, are not deflected by electric or magnetic fields, and can penetrate matter, with a penetrating power that varies between substances. Because the density and thickness of human tissues differ, X-rays can form an image of the human body on a screen or film.
X-ray cephalometry is the quantitative measurement and analysis of images of the soft and hard tissues of the head, obtained on X-ray film under standardized positioning, in order to understand the structures of the soft and hard tissues of the jaws and cranium. X-ray cephalometric analysis is important to research in stomatology and is a key way of obtaining information in the clinical diagnosis, treatment planning, and research of orthodontics, maxillofacial surgery, and related fields.
Fig. 1 and Fig. 2 each show a specific X-ray lateral cranial film. For orthodontic use, such a film must be taken under the strict control of a cephalostat to eliminate errors caused by incorrect head position. It is a grayscale image, containing only black-and-white tones, with 256 gray levels from 0 to 255. Fig. 2 is the inverted version of Fig. 1; the two are equivalent in use, but the inverted image makes soft tissue regions easier to identify visually and has distinctive features and advantages when superimposed on other elements. For clearer demonstration, the specific embodiments in this specification use X-ray lateral cranial films of the type shown in Fig. 2.
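The black/white inversion relating Fig. 1 and Fig. 2 is a per-pixel complement of the 8-bit gray values. A minimal illustrative sketch (not part of the patent) in Python:

```python
import numpy as np

def invert_cephalogram(img: np.ndarray) -> np.ndarray:
    """Return the black/white-inverted version of an 8-bit grayscale image."""
    if img.dtype != np.uint8:
        raise ValueError("expected an 8-bit grayscale image")
    # Gray level g maps to 255 - g, swapping dark and bright regions.
    return 255 - img

# Tiny synthetic 2x2 "image" standing in for a cephalogram.
demo = np.array([[0, 128], [200, 255]], dtype=np.uint8)
inverted = invert_cephalogram(demo)
```

Either rendering carries the same information; as the text above notes, the choice only affects visual legibility of soft tissue regions.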
There are many existing cephalometric methods. Traditional manual X-ray cephalometry, for example, requires tracing with paper and pencil and then measuring with a protractor and similar tools; this approach is cumbersome, has poor measurement precision and low repeatability, and places high demands on the operator.
With the development of computer technology, performing manual cephalometric measurement on a computer screen with dedicated software has become common practice; in recent years, software incorporating artificial intelligence to place cephalometric landmarks automatically has also appeared. Existing computer cephalometry greatly improves efficiency, but it is not perfect. The measurement is based on an X-ray film, which can only reflect the subject's state at the instant of exposure; it cannot further show the relative motion between the soft and hard craniofacial tissue structures while the subject's mouth is moving (for example, during various types of occlusal actions), and so cannot provide richer information for judging the type, severity, and other aspects of a malocclusion ("malocclusion" is an orthodontic term referring to the bite relationship of the upper and lower jaws).
With the continuous development of image processing technology, augmented reality (AR) display technology has gradually matured, and its application in medicine brings many benefits. It makes stereoscopic, intuitive observation of each anatomical structure, especially internal structures, very convenient; individual parts such as bones and muscles can be separated and examined carefully without mutual interference, which greatly improves diagnostic efficiency. With the support of AR display equipment, doctors can observe as if their eyes could see through the body.
Existing augmented reality display schemes in orthodontics usually focus on the augmented display of a virtual 3D digital model, for example overlaying a 3D model of teeth, an implant, or an oral appliance on the patient's mouth. There is currently no ideal solution for using augmented reality display technology to obtain a rich, intuitive, dynamic, and linked display of the lateral tissue structures of the craniofacial region and to assist cephalometric analysis in obtaining measurement results of more dimensions.
Disclosure of Invention
In order to solve the above problems and drawbacks of the prior art, an object of the present application is to provide an augmented reality display method, system, and storage medium based on the lateral cranial position, so as to display and measure the cranial features of a subject more clearly, overlaid on an image of the subject's real head.
One aspect of the application provides an augmented reality display method based on lateral cranial position, comprising the following steps:
s100: acquiring an X-ray film feature set formed by a plurality of X-ray film features of a target person, wherein the X-ray film features are determined based on an X-ray cranial film of the target person;
s200: determining a lateral facial feature set based on an image of a real head of a target person displayed by an augmented reality device, the lateral facial feature set comprising a plurality of lateral facial features;
s300: registering and positioning the X-ray film feature set and the lateral facial feature set to determine a mapping relation between the lateral skull enhancement image and the real head of the target person;
s400: and displaying the cranially-side position augmented image on the image of the real head of the target person in an augmented reality device in an overlaying way based on the mapping relation.
Further, the plurality of X-ray film features are in a plane parallel to a photographing plane of the X-ray cranial film.
Further, the X-ray film feature set comprises a plurality of lateral cranial landmark points.
Preferably, the plurality of lateral cranial landmark points includes at least 2 landmark points whose relative positions remain unchanged and a plurality of landmark points whose relative positions are variable.
Preferably, the lateral cranial landmark points comprise hard tissue landmark points and soft tissue landmark points;
the hard tissue landmark points are determined based on hard tissue structure images on the X-ray cranial film;
the soft tissue landmark points are determined based on soft tissue structure images on the X-ray cranial film.
Preferably, the X-ray film feature set further comprises one or both of the following features:
at least one lateral cranial tracing line and at least one lateral cranial feature line;
the lateral cranial tracing line is determined based on the X-ray cranial film;
the lateral cranial feature line is determined based on the lateral cranial landmark points.
Preferably, the relative positional relationship between the plurality of X-ray film features has a first state matching an X-ray cranial film and a second state matching a real-time lateral facial state of the target person.
Preferably, the registration positioning is performed when the real-time lateral facial state of the target person matches the X-ray cranial slice.
Further, the step S300 specifically includes the following steps:
s310: determining n first positioning feature points according to the X-ray film feature set, wherein n is greater than or equal to 2;
s320: determining n second positioning feature points according to the lateral facial feature set, wherein each second positioning feature point corresponds to the same part of the real head of the target person as the corresponding first positioning feature point;
s330: and determining the mapping relation between the skull side position enhanced image and the real head of the target person based on the first positioning feature point and the second positioning feature point.
Preferably, the step S310 further comprises the following steps:
s308: displaying an X-ray lateral facial contour on augmented reality equipment, wherein the X-ray lateral facial contour is determined based on an X-ray cranial side slice;
s309: and adjusting the relative position of the augmented reality equipment and the target person to ensure that the current visual angle of the augmented reality equipment is the same as the shooting angle of the X-ray skull side film, and the side position face outline of the target person displayed on the augmented reality equipment is superposed with the X-ray side position face outline.
Preferably, at least one of said first location feature points is located on a tooth of the target person.
Preferably, the step S310 further comprises the following step:
attaching a marker to the side of the target person's face that faces the augmented reality device.
Preferably, the lateral cranial augmented image comprises one or more of the following items:
the X-ray cranial film, the hard tissue structure image on the X-ray cranial film, the soft tissue structure image on the X-ray cranial film, the X-ray film feature set, and the lateral facial feature set.
Preferably, the augmented reality display method based on the lateral cranial position further comprises the following steps:
S500: performing a cephalometric measurement based on the image of the real head of the target person and the lateral cranial augmented image;
S600: displaying the cephalometric data overlaid on the image of the real head of the target person in the augmented reality device.
Preferably, the cephalometric measurement is performed based on at least one measurement point taken from the image of the real head and at least one measurement point taken from the lateral cranial augmented image.
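Once the real-head image and the lateral cranial augmented image share one coordinate frame, a cephalometric quantity can mix points from both sources. A sketch using an angle measurement; the coordinates, and the use of an SNA-style angle as the example, are illustrative assumptions rather than the patent's own:

```python
import numpy as np

def angle_at(vertex, p_a, p_b) -> float:
    """Angle in degrees at `vertex` between the rays toward p_a and p_b."""
    u = np.asarray(p_a, float) - np.asarray(vertex, float)
    v = np.asarray(p_b, float) - np.asarray(vertex, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against rounding drift outside arccos's domain.
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# One point may come from the real-head image (e.g. a soft tissue point)
# and the others from the augmented image; coordinates are fabricated.
S, N, A = (0.0, 10.0), (0.0, 0.0), (3.0, -10.0)
sna_like = angle_at(N, S, A)
```

Distances between mixed measurement points follow the same pattern with `np.linalg.norm`.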
Another aspect of the present application provides an augmented reality display system based on lateral skull position, which is used to implement the above augmented reality display method based on lateral skull position, and includes:
the first acquisition module is used for acquiring an X-ray film feature set formed by a plurality of X-ray film features of a target person, wherein the X-ray film features are determined based on an X-ray cranial film of the target person;
the second acquisition module is used for determining a lateral facial feature set based on an image of the real head of a target person displayed by the augmented reality equipment, wherein the lateral facial feature set comprises a plurality of lateral facial features;
the registration module is used to register the X-ray film feature set with the lateral facial feature set so as to determine the mapping relation between the lateral cranial augmented image and the real head of the target person;
and the augmented display module is used to display, based on the mapping relation, the lateral cranial augmented image overlaid on the image of the real head of the target person in the augmented reality device.
Yet another aspect of the present application provides a computer-readable storage medium storing a program that when executed by a processor implements the above-described lateral-cranial-position-based augmented reality display method.
The augmented reality display method, system, and storage medium based on the lateral cranial position provided by the invention have at least the following beneficial effects:
(1) The invention enables dynamic cephalometry: the cephalometric data of the target person can be observed and computed simultaneously in different states, such as several different jaw positions, and the pattern of change can be observed intuitively.
(2) The invention is realized with augmented reality technology, displaying the augmented image reflecting the lateral cranial structures superimposed on the real head image of the target person, which makes observation and measurement easy and is convenient for demonstrating and explaining to the patient.
(3) The lateral cranial augmented image combines not only multiple images but also feature information, and these can be combined in many ways, broadening the range of application: it allows global, multi-information observation and measurement without switching screens, which facilitates macroscopic analysis and presentation, and it also allows observation and measurement of specific local tissues, which facilitates focused examination.
(4) The invention operates on the real jaws and real jaw-position changes; compared with an articulator, or a virtual articulator simulating the real jaw relation, it has higher accuracy and extensibility, and it enables linked measurement and display of hard tissue structures invisible to the naked eye, such as the bones and teeth of the skull, together with the facial soft tissue structures, greatly extending the application of the X-ray cranial film.
Drawings
FIG. 1 is an exemplary X-ray cranial slice image;
FIG. 2 is a black and white inverted image of the image shown in FIG. 1;
FIG. 3 is a flow chart of a lateral cranial position based augmented reality display method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of determining a feature set of an X-ray film based on an X-ray cranial film according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a separately displayed feature set of an X-ray film according to an embodiment of the present application;
FIG. 6 is an image of the real head of the target person in the X-ray cranial film shooting state, with the X-ray cranial film taken in that state superimposed on it;
FIG. 7 shows a plurality of X-ray film features determined from the X-ray cranial film of FIG. 6, superimposed on the real head of the target person, who remains in the shooting posture;
FIG. 8 is a schematic diagram of real-time observation and measurement of lateral features of a target person in an arbitrary lateral facial state;
fig. 9 is an image of a real head of a target person displayed by an augmented reality device according to an embodiment of the present application;
FIG. 10 is a schematic view of a plurality of lateral facial features according to an embodiment of the present application;
fig. 11 is a schematic diagram of registration positioning using a first positioning feature point and a second positioning feature point according to an embodiment of the present application;
FIG. 12 is a schematic representation of an X-ray cranial film and an X-ray lateral facial contour made therefrom according to an embodiment of the present application;
FIG. 13 is a cranially enhanced image superimposed on a real head image of a target person according to an embodiment of the present application;
fig. 14 is an image from a current perspective of an augmented reality device according to an embodiment of the present application;
FIG. 15 is a schematic view of a laterally cranially enhanced image with a superimposed display of cephalometric data according to an embodiment of the present application;
FIG. 16 is a schematic view of a laterally cranially enhanced image with a superimposed display of cephalometric data according to an embodiment of the present application;
fig. 17 is a block diagram of an augmented reality display system based on the lateral cranial position according to an embodiment of the present application.
Detailed Description
Hereinafter, the present application will be further described based on preferred embodiments with reference to the accompanying drawings.
In addition, various components on the drawings are enlarged or reduced for convenience of understanding, but this is not intended to limit the scope of the present application.
Singular references also include plural references and vice versa.
In the description of the embodiments of the present application, it should be noted that, if the terms "upper", "lower", "inner", "outer", etc. are used to indicate an orientation or a positional relationship based on an orientation or a positional relationship shown in the drawings, or an orientation or a positional relationship which is usually placed when a product of the embodiments of the present application is used, it is only for convenience of description and simplification of the description, but does not indicate or imply that the device or element referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the present application cannot be construed as being limited. Furthermore, the terms first, second, etc. may be used herein to distinguish between various elements, but these should not be limited by the order of manufacture or by importance to indicate or imply relative importance, and their names may differ from the descriptions and claims provided herein.
The terminology used in the description is for the purpose of describing the embodiments of the application and is not intended to be limiting of the application. It is also to be understood that, unless otherwise expressly stated or limited, the terms "disposed," "connected," and "connected" are intended to be open-ended, i.e., may be fixedly connected, detachably connected, or integrally connected; they may be mechanically coupled, directly coupled, indirectly coupled through intervening media, or may be interconnected between two elements. The specific meaning of the above terms in the present application will be specifically understood by those skilled in the art.
An aspect of the embodiments of the present application provides an augmented reality display method based on the lateral cranial position. Fig. 3 shows a flowchart of the method, which, as shown in fig. 3, comprises the following steps:
s100: acquiring an X-ray film feature set formed by a plurality of X-ray film features of a target person, wherein the X-ray film features are determined based on an X-ray cranial film of the target person;
s200: determining a lateral facial feature set based on an image of a real head of a target person displayed by an augmented reality device, the lateral facial feature set comprising a plurality of lateral facial features;
s300: registering and positioning the X-ray film feature set and the lateral facial feature set to determine a mapping relation between the lateral skull enhancement image and the real head of the target person;
s400: and displaying the cranially-side position augmented image on the image of the real head of the target person in an augmented reality device in an overlaying way based on the mapping relation.
According to the method provided by this embodiment, an X-ray film feature set is first obtained from the target person's X-ray cranial film. A lateral facial feature set of the target person is then determined from the image of the target person's real head displayed in the augmented reality device (including but not limited to monocular or binocular augmented reality display devices such as AR glasses and AR helmets). Because the lateral facial feature set corresponds to the X-ray film feature set, the corresponding features in the two sets are used for registration, yielding the mapping relation that must be followed when the lateral cranial augmented image is overlaid on the image of the target person's real head. Finally, the lateral cranial augmented image is displayed, according to this mapping relation, overlaid on the image of the real head shown by the augmented reality device. Specific embodiments of steps S100 to S400 are described in detail below with reference to the drawings.
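The flow just described can be summarized in a deliberately simplified sketch. It assumes both feature sets are dictionaries of named 2-D landmark positions and models the mapping of step S300 as a plain translation; the names and coordinates are fabricated, and the patent's mapping is more general:

```python
import numpy as np

def acquire_xray_features():    # S100 (fabricated values)
    return {"N": np.array([10.0, 40.0]), "Pog": np.array([12.0, 5.0])}

def acquire_facial_features():  # S200: the same anatomy as seen by the
    # AR device, here simply shifted by (+100, +20)
    return {"N": np.array([110.0, 60.0]), "Pog": np.array([112.0, 25.0])}

def register(xray, facial):     # S300: estimate the mapping relation
    # Simplest possible model: a rigid translation equal to the mean
    # offset of corresponding landmarks.
    offsets = [facial[k] - xray[k] for k in xray if k in facial]
    t = np.mean(offsets, axis=0)
    return lambda p: p + t

def overlay(xray, mapping):     # S400: project every X-ray feature into
    # the AR device's image frame for superimposed display
    return {name: mapping(p) for name, p in xray.items()}

mapping = register(acquire_xray_features(), acquire_facial_features())
overlaid = overlay(acquire_xray_features(), mapping)
```

In the registered result, each X-ray landmark lands on its counterpart in the AR view, which is exactly the condition the superimposed display relies on.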
Step S100 obtains the X-ray film feature set of the target person. Methods for tracing and calibrating the soft and hard tissue structures of the skull from an X-ray cranial film to obtain various X-ray film features are well known to those skilled in the art. Fig. 4 shows a specific example of determining and displaying multiple X-ray film features on an X-ray cranial film. As shown in fig. 4, one may first trace the soft tissue structures (such as the forehead skin, nose skin, and upper and lower lips), the hard tissue structures (such as the cranium, upper and lower jaws, and the crowns and roots of the teeth), or their contours as displayed on the film, obtaining a plurality of lateral cranial tracing lines (the curves in fig. 4). Calibration on these tracing lines then yields a plurality of lateral cranial landmark points (Table 1 below lists the landmark points of fig. 4 and their specific meanings). Further, according to the needs of the cephalometric analysis, different landmark points are connected to obtain a plurality of lateral cranial feature lines (the straight lines in fig. 4; Table 2 below lists the specific meanings of the tracing lines and feature lines). The tracing lines, landmark points, and feature lines are all X-ray film features that characterize the craniofacial features of the target person, and their combination forms the target person's X-ray film feature set.
The tracing and calibration used to obtain the X-ray film feature set can be performed manually by an experienced operator, semi-automatically by combining algorithmic tracing and calibration with manual correction, or fully automatically by a computer algorithm based on artificial intelligence.
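As a purely illustrative example of the fully automatic end of that spectrum (the patent leaves the algorithm open, and practical systems use trained landmark detectors rather than anything this crude): hard tissue appears bright on the film, so a first approximation to tracing is intensity thresholding followed by boundary extraction:

```python
import numpy as np

def crude_hard_tissue_mask(ceph: np.ndarray, threshold: int = 180) -> np.ndarray:
    """Binary mask of bright (dense, hard-tissue-like) pixels."""
    return (ceph >= threshold).astype(np.uint8)

def boundary_pixels(mask: np.ndarray) -> np.ndarray:
    """Pixels set in `mask` that have at least one unset 4-neighbour."""
    padded = np.pad(mask, 1)
    # A pixel is interior when all four of its neighbours are set.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & (1 - interior)

# Synthetic 5x5 "film" with a single bright 3x3 region; its outline is
# the 8 edge pixels of the region.
toy = np.zeros((5, 5), dtype=np.uint8)
toy[1:4, 1:4] = 200
outline = boundary_pixels(crude_hard_tissue_mask(toy))
```

The extracted outline plays the role of a tracing line; in the semi-automatic mode described above, an operator would then correct it by hand.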
TABLE 1 lateral cranium marking points and their concrete meanings
(Table 1 is reproduced as an image in the original publication.)
TABLE 2 craniofacial delineation and their specific meanings
(Table 2 is reproduced as an image in the original publication.)
The X-ray film feature set can be displayed superimposed on the image of the X-ray cranial film as in fig. 4, or displayed separately; fig. 5 shows the separately displayed feature set according to an embodiment of the application.
In some preferred embodiments of the present application, the X-ray film feature set comprises a plurality of lateral cranial landmark points, which may be used for subsequent matching with the image of the real head of the target person.
In some preferred embodiments of the present application, the lateral cranial landmark points include hard tissue landmark points and soft tissue landmark points. The hard tissue landmark points are determined from the hard tissue structure images on the X-ray cranial film and characterize hard tissue structures such as the cranium, jaw bones, and the crowns and roots of the teeth; the cranium, jaw bones, and the like are tissues invisible to the naked eye when the face is observed from the outside. The soft tissue landmark points are determined from the soft tissue structure images on the X-ray cranial film and characterize the lateral profile of skin tissues such as the forehead, nose, and upper and lower lips. Specifically, as shown in figs. 4 and 5, in some embodiments of the present application the hard tissue landmark points may include the following lateral cranial landmark points: S, N, P, Ba, Bo, W, Or (or O), Ptm, ANS, PNS, A, UI, UIA, LI, LIA, B, Pog, Me, Gn, Go, Ar, Co, D; and the soft tissue landmark points may include the following lateral cranial landmark points: G, Ns, Prn, Cm, Sn, A', UL', LL', UL, LL, B', Pos, Gn', Mes.
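Since the patent's Table 1 is reproduced only as an image, the following dictionary gives conventional cephalometric meanings for a subset of the hard tissue abbreviations listed above. These follow standard orthodontic nomenclature and may differ in detail from the patent's own table:

```python
# Standard cephalometric nomenclature (may differ from the patent's Table 1).
HARD_TISSUE_LANDMARKS = {
    "S": "sella, center of the sella turcica",
    "N": "nasion, most anterior point of the frontonasal suture",
    "Or": "orbitale, lowest point of the infraorbital rim",
    "ANS": "anterior nasal spine",
    "PNS": "posterior nasal spine",
    "A": "point A (subspinale), deepest point of the anterior maxilla",
    "B": "point B (supramentale), deepest point of the anterior mandible",
    "Pog": "pogonion, most anterior point of the bony chin",
    "Me": "menton, lowest point of the mandibular symphysis",
    "Gn": "gnathion, midpoint between pogonion and menton",
    "Go": "gonion, point at the angle of the mandible",
    "Ar": "articulare",
    "Co": "condylion, most superior point of the mandibular condyle",
}
```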
In some embodiments of the present application, the X-ray film feature set further comprises one or both of the following features: at least one lateral cranial tracing line and at least one lateral cranial feature line. The specific methods for determining the tracing lines and feature lines have been described in detail above and are not repeated here.
As can be seen from fig. 5, the plurality of X-ray film features in the feature set are abstract information extracted from the soft and hard tissue structure features of the target person's craniofacial region, and can supply the information required by the subsequent steps of measurement, analysis, and diagnosis in orthodontic work.
It should be noted that the various X-ray film features determined in the above manner stand in a positional relationship that matches the static X-ray cranial lateral film image (defined herein as the first state); that is, the plurality of X-ray film features represent the positions of the tissue structures of the target person's craniofacial region at the moment the X-ray cranial lateral film was taken. However, when actually observing, measuring and diagnosing the craniofacial tissue structures of the target person, and in particular when observing and analyzing the dynamic mandibular position, it is often desirable to acquire and measure, in motion, the anatomical structures of the oral cavity (in particular the maxilla, mandible and condyles), whose X-ray film features change in real time as the mandible moves to different positions and in different directions.
To achieve the above object, in some preferred embodiments of the present application, the plurality of cranial marker points include at least 2 marker points whose relative positions are kept constant and a plurality of marker points whose relative positions are variable.
To achieve the above objects, in some preferred embodiments of the present application, the relative positional relationship between the plurality of X-ray film features has a first state matching an X-ray cranial film and a second state matching a real-time lateral facial state of the target person.
The following describes, with reference to figs. 6 to 8 as an example, the specific meanings of the first state and the second state of the relative positional relationship among the plurality of X-ray film features, as well as of the landmark points whose relative positions remain constant and those whose relative positions are variable.
Fig. 6 shows an image of the real head of a target person viewed at an angle approximately perpendicular to the median sagittal plane of the head (i.e., the X-ray cranial lateral film shooting angle), with the X-ray cranial lateral film taken in this state superimposed on it.
Fig. 7 illustrates a plurality of X-ray film features determined from the X-ray cranial lateral film of fig. 6 and superimposed on the image of the real head of the target person, the head still being in the state in which the X-ray cranial lateral film was taken. As shown in fig. 7, these X-ray film features are in the first state, matching the X-ray cranial lateral film, and reflect the state of the target person's craniofacial region at the moment of shooting. The Z line shown in the figure is a refinement of the H line: a tangent from the soft tissue chin to the most protrusive lip point is advocated as the reference line, also called the profile line, for evaluating lip protrusion. The Z angle is the inferior-posterior angle formed between this profile line and the orbital-ear plane (FH plane), and has established normal and ideal value ranges; in the ideal state, the upper and lower lips should be tangent to the profile line or lie slightly behind it.
In the orthodontic process, it is often necessary to observe and measure the occlusal state of the target person's mandible in motion, in order to determine the type and severity of a malocclusion. Take the measurement of the Z angle in diagnosing functional Class III malocclusion (also called pseudo Class III malocclusion) as an example. Poor habits such as tongue thrusting, finger sucking, lower-lip biting and incorrect bottle-feeding posture, as well as dental factors that cause occlusal interference and premature contact — such as early loss or retention of one or more maxillary deciduous molars, insufficient wear of the deciduous canines, multiple unilateral caries, or occlusal interference during the tooth-replacement period — are likely to alter the direction of the mandibular closing path and thereby produce a functional Class III malocclusion. These factors give rise to false occlusal relationships and false mandibular positions, so a diagnosis based only on static cephalometric measurement is of questionable accuracy, and the static image data are easily confounding, increasing the probability of misdiagnosis. Therefore, in diagnosing functional Class III malocclusion it is preferable to take each measurement item in the real-time occlusal state of the target person. The measurement of the Z angle is described below as an example.
Fig. 8 shows a schematic diagram of observing and measuring, in real time, the lateral features of the target person in an arbitrary lateral facial state (it is important to point out that fig. 8 represents an ideal observation and measurement approach desired by those skilled in the art, not one already realized by the prior art). Clearly, as the lateral facial state of the target person gradually deviates from the state in which the X-ray cranial lateral film was taken, the relative positional relationship among the plurality of X-ray film features also changes in real time. The reference plane for the Z angle, the orbital-ear (FH) plane, is determined by landmark points whose relative positions remain constant (the P and O points) and is a relatively static unit. The only hard tissue in the cranial structure that can move at any time is the mandible, together with the teeth on it, and its motion drives part of the craniofacial soft tissue (such as the upper and lower lips). Accordingly, the landmark points corresponding to these moving soft and hard tissues — points such as B, Po, Gn, Me, D, B', Pos, Gn', Mes, LL' and UL' — are landmark points with variable relative positions, and their corresponding tracing lines and feature lines undergo corresponding position changes as well, so that the relative positional relationship among the plurality of X-ray film features enters a state matching the real-time lateral facial state of the target person (defined herein as the second state).
As shown in fig. 8, as the mandible moves, the hard tissue tracing line of the target person changes dynamically (in fig. 8, the tracing lines in the first state and the second state are drawn as a dotted line and a solid line, respectively), and the corresponding soft tissue features, such as the lip contour line, the Z line and the Z angle, change with it; the Z line and the Z angle then reflect the craniofacial features of the target person in an arbitrary occlusal state.
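The Z angle discussed above is the angle between two lines, each defined by a pair of landmark points: the FH plane (through the fixed P and O points) and the profile line (through the soft tissue chin and the most protrusive lip point, which move with the mandible). A minimal sketch of that angular measurement, assuming 2-D coordinates and treating both lines as undirected (the function name and the acute-angle convention are illustrative assumptions):

```python
import math

def angle_between(p1, p2, q1, q2):
    """Acute angle in degrees between the undirected line p1-p2
    (e.g. the FH plane through P and O) and the undirected line q1-q2
    (e.g. the Z/profile line through the soft tissue chin and the
    most protrusive lip point)."""
    v = (p2[0] - p1[0], p2[1] - p1[1])  # direction of first line
    w = (q2[0] - q1[0], q2[1] - q1[1])  # direction of second line
    dot = v[0] * w[0] + v[1] * w[1]
    cross = v[0] * w[1] - v[1] * w[0]
    ang = abs(math.degrees(math.atan2(cross, dot)))
    return min(ang, 180.0 - ang)  # undirected lines: report the acute angle
```

Recomputing this function on every frame, with the profile-line endpoints taken from the real-time facial contour, yields the dynamically changing Z angle described above.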
To achieve such ideal display and measurement, the prior-art approach of measuring X-ray film features in a static state must be changed. To this end, the embodiments of the present application use augmented reality display technology: through steps S200 to S400, the X-ray film features acquired from the static X-ray cranial lateral film are associated with the craniofacial features of the target person as they change in real time, and their relative positional relationship changes in linkage, so that the real-time craniofacial features of the target person can be displayed and measured more clearly.
Specifically, step S200 determines a plurality of lateral facial features from the image of the real head of the target person displayed by the augmented reality device and configures a lateral facial feature set.
Augmented Reality (AR) technology is a technology for fusing virtual information with the real world, and by applying various technical means such as three-dimensional spatial information reconstruction, real-time tracking, intelligent interaction and the like, the virtual information such as digitized characters, images, videos, three-dimensional models and the like is fused and displayed in the real world, so that the real world is enhanced.
In some optional embodiments of the present application, the augmented reality device acquires real-space data in real time through a camera, sensors, a gyroscope and the like, processes the data with its processor, and registers the virtual model to the three-dimensional space, thereby obtaining a mapping relationship between the virtual model and the real three-dimensional space. As the camera, attitude sensor and other devices update the wearer's position changes in the real three-dimensional space in real time, the processor can superimpose the virtual model in real time on the image of the real scene shown on a monocular or binocular display screen.
In some optional embodiments of the present application, the augmented reality device further includes an interactive operation device, such as an eye tracker, a motion capture device, and the like, and the interactive operation device captures various motions of the wearer, transmits the motions to the processor for analysis and processing, and feeds back corresponding instructions, so as to implement corresponding human-computer interaction operations. The related technologies of augmented reality are all means of the prior art, and are not described herein again.
In some embodiments of the present application, in step S200, by using the above augmented reality device, three-dimensional information of a real head of a target person can be obtained, and further, a plurality of lateral facial features of the target person are determined through interactive operation on an image of the real head of the target person displayed on a display screen of the augmented reality device, so as to form a lateral facial feature set.
Fig. 9 illustrates an image of a real head of a target person displayed by an augmented reality device according to an embodiment of the present application, and fig. 10 illustrates a schematic diagram of a plurality of lateral facial features determined based on the image of the real head of the target person displayed by the augmented reality device according to an embodiment of the present application.
In some preferred embodiments of the present application, as shown in fig. 10, the lateral facial features are a plurality of landmark points reflecting the soft tissue structure of the face of the target person, and have a one-to-one correspondence relationship with the cranial landmark points in the X-ray slice feature set (the definition of which is also shown in table 1), and the two sets of landmark points having the correspondence relationship can be used for the subsequent positioning operation.
In some preferred embodiments of the present application, similar to the above-mentioned X-ray film feature set, the image of the real head of the target person may also be traced by using the interactive operation device of the augmented reality device, so that the lateral facial feature may also include a plurality of tracing lines reflecting the facial tissue contour.
In some preferred embodiments of the present application, in order to obtain the lateral facial features more accurately and automatically, marker stickers may be attached to the respective landmark points on the side of the target person's face that faces the augmented reality device, so that the lateral facial features can be determined more accurately through human-computer interaction or automatically through image recognition.
In step S300, the X-ray film feature set and the lateral facial feature set are registered, thereby obtaining the mapping relationship between the lateral cranial enhancement image and the real head of the target person. The lateral cranial enhancement image is the image, superimposed on the image of the target person's real head by the method of the present application, that reflects the craniofacial features of the target person.
Unlike the way of registering a virtual model with a real scene or object, which is well known to those skilled in the art in the conventional augmented reality technology, the registration positioning in the technical solution of the present application faces at least the following difficulties:
(1) The X-ray film feature set is determined based on a planar X-ray cranial lateral film. The shooting angle of the X-ray cranial lateral film is strictly specified, and the film must be taken under the strict control of a cephalostat to eliminate errors caused by incorrect head positioning. At the same time, because the depth information of the three-dimensional head along the shooting direction is compressed onto a single shooting plane (such as the median sagittal plane of the target person), registration cannot be performed in the conventional way, which relies on mutually corresponding anchor points in two three-dimensional point sets;
(2) The static X-ray cranial lateral film features reflect the lateral craniofacial features of the target person at the moment of shooting. During registration, the real-time craniofacial state of the target person may have changed from the shooting state; in particular, as analyzed above, a lateral craniofacial feature with a variable relative position may have deviated non-negligibly from the X-ray film features of the first state, making accurate registration impossible.
To address the above challenges in registration, embodiments of the present application propose various optimization measures, which are described in detail below.
In some preferred embodiments of the present application, the registration is performed while the real-time lateral facial state of the target person matches the X-ray cranial lateral film. As described above, for more accurate registration, the relative positional relationship among the lateral facial features of the target person should coincide, during registration, with the relative positional relationship among the X-ray film features in the first state. To achieve this, the facial state of the target person during registration, especially the occlusal state of the upper and lower jaws, should be kept as consistent as possible with the state during X-ray cranial lateral film shooting. For example, in some specific embodiments, the target person may, through prior communication, try to hold the same occlusal posture when the X-ray cranial lateral film is taken and when registration is performed. In other specific embodiments, the impression material used to take dental impressions in orthodontic treatment can be used to obtain an impression of a specific occlusal state of the target person, and the target person wears this impression both during X-ray cranial lateral film shooting and during registration, to ensure consistency of the lateral facial state.
In some embodiments of the present application, step S300 further performs registration positioning by:
s310: determining n first positioning feature points according to the X-ray film feature set, wherein n is greater than or equal to 2;
s320: determining n second positioning feature points according to the lateral facial feature set, wherein each second positioning feature point corresponds to the same part of the real head of the target person as the corresponding first positioning feature point;
s330: and determining the mapping relation between the skull side position enhanced image and the real head of the target person based on the first positioning feature point and the second positioning feature point.
Specifically, the first positioning feature points and the second positioning feature points are feature points of the same anatomical parts taken from the X-ray film feature set and the lateral facial feature set, respectively. For example, in some embodiments of the present application, the G, Ns, Prn and Pos points in the X-ray film feature set may be selected as the first positioning feature points, and the G, Ns, Prn and Pos points corresponding to the same parts in the lateral facial feature set may be selected as the second positioning feature points; registration is then performed using any of the conventional registration algorithms of existing augmented reality technology, yielding the mapping relationship between the lateral cranial enhancement image and the real head of the target person. Fig. 11 shows a schematic diagram of registration using the first and second positioning feature points according to an embodiment of the present application.
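With n ≥ 2 corresponding point pairs, one conventional way to obtain the mapping of step S330 is a least-squares 2-D similarity transform (scale, rotation, translation) fitted between the first and second positioning feature points. The patent does not prescribe a particular algorithm; the closed-form fit below is one standard choice, sketched under that assumption:

```python
import math

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform (scale s, rotation theta,
    translation t) mapping src points onto dst points.
    Needs n >= 2 non-coincident corresponding pairs."""
    n = len(src)
    assert n >= 2 and n == len(dst)
    pcx = sum(p[0] for p in src) / n; pcy = sum(p[1] for p in src) / n
    qcx = sum(q[0] for q in dst) / n; qcy = sum(q[1] for q in dst) / n
    sxx = sxy = norm = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - pcx, py - pcy  # src, centered
        bx, by = qx - qcx, qy - qcy  # dst, centered
        sxx += ax * bx + ay * by     # "cosine" accumulator
        sxy += ax * by - ay * bx     # "sine" accumulator
        norm += ax * ax + ay * ay
    theta = math.atan2(sxy, sxx)
    s = math.hypot(sxx, sxy) / norm
    c, si = math.cos(theta), math.sin(theta)
    tx = qcx - s * (c * pcx - si * pcy)
    ty = qcy - s * (si * pcx + c * pcy)
    return s, theta, (tx, ty)

def apply_transform(s, theta, t, p):
    """Map a point p from film coordinates into head-image coordinates."""
    c, si = math.cos(theta), math.sin(theta)
    return (s * (c * p[0] - si * p[1]) + t[0],
            s * (si * p[0] + c * p[1]) + t[1])
```

Once fitted from the positioning feature points, `apply_transform` can carry every remaining X-ray film feature onto the image of the real head for superimposed display.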
In the above embodiments, the positioning feature points used for registration are all taken from soft facial tissue. In other preferred embodiments of the present application, in order to make registration more accurate, at least one of the first positioning feature points is located on the teeth of the target person; that is, at least one of the positioning feature points used for registration is taken from a hard tissue structure of the target person. Obviously, in these embodiments, the upper and lower lips of the target person should be opened, and kept in the same state, during both X-ray film shooting and registration, so that the teeth used for positioning are exposed in the image of the real head of the target person.
In some preferred embodiments of the present application, in order to further improve the accuracy of registration positioning, the following steps are further included before the step S310:
s308: displaying an X-ray lateral facial contour on augmented reality equipment, wherein the X-ray lateral facial contour is determined based on an X-ray cranial side slice;
s309: and adjusting the relative position of the augmented reality equipment and the target person to ensure that the current visual angle of the augmented reality equipment is the same as the shooting angle of the X-ray skull side film, and the side position face outline of the target person displayed on the augmented reality equipment is superposed with the X-ray side position face outline.
Specifically, fig. 12 shows the X-ray cranial lateral film and a schematic diagram of the X-ray lateral facial contour drawn from it; the X-ray lateral facial contour may be drawn by the tracing method based on the X-ray cranial lateral film. Before registration, the relative position between the augmented reality device and the target person is adjusted so that the current viewing angle of the augmented reality device is the same as the shooting angle of the X-ray cranial lateral film and the lateral facial contour of the target person displayed on the augmented reality device coincides with the X-ray lateral facial contour. At that point, both the lateral facial state of the target person and the current viewing angle of the augmented reality device match those at the time the X-ray cranial lateral film was taken, ensuring the accuracy of the subsequent registration.
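The adjustment of step S309 amounts to a coincidence test between the displayed lateral profile and the X-ray lateral facial contour. A minimal sketch, assuming both contours are resampled to the same number of corresponding points and using an illustrative tolerance (the function name and the tolerance value are assumptions, not from the patent):

```python
import math

def contours_aligned(live_contour, xray_contour, tol=2.0):
    """Return True when the mean point-to-point distance between the
    target person's displayed lateral profile and the X-ray lateral
    facial contour is within tol (same units as the contour points).
    Both contours must be resampled to the same number of points."""
    assert len(live_contour) == len(xray_contour) and live_contour
    d = [math.dist(p, q) for p, q in zip(live_contour, xray_contour)]
    return sum(d) / len(d) <= tol
```

The device (or operator) would keep adjusting the relative position until this test passes, and only then trigger the registration of step S310.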
After the registration and positioning are performed in step S300, the mapping relationship between the cranial position enhancement image and the real head of the target person is obtained, and the cranial position enhancement information is superimposed and displayed on the image of the real head of the target person displayed by the augmented reality device in step S400 according to the mapping relationship.
Specifically, in some embodiments of the present application, the cranially enhanced images include one or more of the following: the X-ray skull side film, the hard tissue structure image on the X-ray skull side film, the soft tissue structure image on the X-ray skull side film, the X-ray film feature set and the side face feature set.
Fig. 13 illustrates lateral cranial enhancement images superimposed on the image of the real head of a target person according to some embodiments of the present application. It should be noted that, for clearer illustration, the several lateral cranial enhancement images are each offset along the normal direction of the median sagittal plane (the rectangular frames represent planes parallel to the median sagittal plane), forming several mutually spaced layers; those skilled in the art will understand, however, that in the actual display process the layers may lie in the same plane and be displayed semi-transparently, superimposed on one another. The images of the layers shown in fig. 13 are arranged in order from back to front. It is easy to see that, in a specific embodiment of the present application, the appropriate lateral cranial enhancement images may be selected according to actual needs and superimposed on the image of the real head of the target person; for example, fig. 14 shows, according to some embodiments of the present application, the view from the current perspective of the augmented reality device, in which an X-ray cranial lateral film and a lateral cranial tracing line are each superimposed on the image of the real head of the target person.
Through step S400, the superimposed display of the various X-ray film features on the image of the real head of the target person is realized. In particular, the X-ray film features with variable relative positions — such as the mandibular teeth, the jaw bone, the soft tissues, and their tracing lines and feature lines — change in linkage with the movement of the target person's mandible. Compared with the display of the existing static X-ray cranial lateral film, this truly realizes real-time dynamic display of the abstract lateral craniofacial features, and thus meets the need to track and display the real-time state of the various invisible tissue structures of the oral cavity during orthodontic treatment.
In some preferred embodiments of the present application, the lateral-cranial-position-based augmented reality display method of the present application further comprises the steps of:
s500: performing a cephalogram measurement based on the image of the real head of the target person and the enhanced image of the lateral position of the head;
s600: and displaying the data of the head shadow measurement on the image of the real head of the target person in an overlaying way in the augmented reality equipment.
Specifically, through the interactive operation device of the augmented reality equipment, cephalometric measurement can conveniently be performed based on the image of the real head of the target person and the lateral cranial enhancement image, and the cephalometric data are superimposed on the image of the real head of the target person. Figs. 15 and 16 respectively show schematic diagrams of the lateral cranial enhancement image with the cephalometric data superimposed; in fig. 16 the X-ray cranial lateral film is removed from the display, making the displayed information more concise and clearer.
In some preferred embodiments of the present application, the cephalometric measurement is performed based on at least one measurement point taken from an image of a real head and at least one measurement point taken from a cranially enhanced image.
In cephalometric measurement in the field of orthodontics, a considerable proportion of measurement items require both hard tissue and soft tissue landmark points of the craniofacial region. Taking the analysis of fig. 8 as an example again: during the real-time superimposed display of the lateral cranial enhancement image, the dynamic change of the hard tissue is a simulated motion inferred from the change of the soft tissue, and is not one-hundred-percent accurate, whereas the soft tissue — the lateral profile contour — is a contour of real dynamic motion. Accordingly, the Z line during dynamic change, and the landmark points that define it, are all taken from the real facial contour, while the reference plane for measuring the Z angle, the orbital-ear plane, is defined by relatively immovable points and is taken from the X-ray cranial lateral film. That is, the measurement of this embodiment is realized based on at least one measurement point taken from the image of the real head and at least one measurement point taken from the lateral cranial enhancement image, combining the virtual and the real, completing dynamic cephalometric measurement based on augmented reality, and displaying the measurement result in the augmented reality device.
In addition, as analyzed above, obtaining the Z angle requires determining hard tissue landmark points (the P and O points) and soft tissue landmark points (the Pos, UL and LL points, or tangent points located according to the lip contour shape). Although some soft tissue landmark points can be determined accurately from the X-ray cranial lateral film, others may appear blurred in the image because of the relatively low density of some soft tissues or because of the radiation dose, which impairs their accurate positioning. With the augmented reality display method of the present application, since registration can be performed using the more accurately locatable soft tissue landmark points, the soft tissue measurement points that cannot be accurately located on the X-ray cranial lateral film can, during cephalometric measurement, be directly replaced by measurement points that can be accurately located in the image of the real head. In this way, cephalometric measurement is performed by combining the lateral cranial enhancement image with the accurately locatable measurement points in the real head image, improving the accuracy of the cephalometric measurement.
Another aspect of the embodiments of the present application provides an augmented reality display system based on lateral skull position, which is used for implementing the above augmented reality display method based on lateral skull position, as shown in fig. 17, and includes:
the first acquisition module is used for acquiring an X-ray film feature set formed by a plurality of X-ray film features of a target person, wherein the X-ray film features are determined based on an X-ray cranial film of the target person;
the second acquisition module is used for determining a lateral facial feature set based on an image of the real head of the target person displayed by the augmented reality equipment, wherein the lateral facial feature set comprises a plurality of lateral facial features;
the registration positioning module is used for carrying out registration positioning on the X-ray film feature set and the lateral facial feature set so as to determine the mapping relation between the lateral skull enhancement image and the real head of the target person;
and the augmented display module is used for displaying the skull-side augmented image on the image of the real head of the target person in an overlaying manner in the augmented reality equipment based on the mapping relation.
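The four modules of fig. 17 can be sketched as a thin pipeline that chains steps S100 to S400. All names and call signatures below are illustrative assumptions about how such modules might be wired together, not the patent's API:

```python
class CephalometricARPipeline:
    """Sketch of the four modules of the display system wired as one pipeline.
    Each module is injected as a callable so concrete implementations
    (firmware, chip, software) can be swapped in."""

    def __init__(self, acquire_xray_features, acquire_facial_features,
                 register, render_overlay):
        self.acquire_xray_features = acquire_xray_features      # first acquisition module
        self.acquire_facial_features = acquire_facial_features  # second acquisition module
        self.register = register                                # registration module
        self.render_overlay = render_overlay                    # augmented display module

    def run(self, cephalogram, head_image):
        xray_set = self.acquire_xray_features(cephalogram)      # step S100
        face_set = self.acquire_facial_features(head_image)     # step S200
        mapping = self.register(xray_set, face_set)             # step S300
        return self.render_overlay(head_image, xray_set, mapping)  # step S400
```

In a real system the four callables would wrap, for instance, film digitization, AR landmark capture, the point-pair registration described earlier, and the device's rendering API.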
In some specific embodiments of the present application, the modules may be integrated in the augmented reality device in a form of software such as firmware and code capable of implementing the corresponding functions thereof, or in a form of hardware such as a chip capable of implementing the corresponding functions thereof. The working principle, the implemented functions and the specific implementation of the above modules are described in detail in the foregoing description of the method for displaying augmented reality based on lateral skull position, and are not described herein again.
Yet another aspect of the present application provides a computer-readable storage medium storing a program that when executed by a processor implements the above-described lateral-cranial-position-based augmented reality display method.
While the foregoing has described the specific embodiments of the present application in detail, it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the principles of the application, and all such changes and modifications are intended to fall within the scope of the appended claims.

Claims (17)

1. An augmented reality display method based on lateral skull position is characterized by comprising the following steps:
s100: acquiring an X-ray film feature set formed by a plurality of X-ray film features of a target person, wherein the X-ray film features are determined based on an X-ray cranial film of the target person;
s200: determining a lateral facial feature set based on an image of a real head of a target person displayed by an augmented reality device, the lateral facial feature set comprising a plurality of lateral facial features;
s300: registering and positioning the X-ray film feature set and the lateral facial feature set to determine a mapping relation between the lateral skull enhancement image and the real head of the target person;
s400: and displaying the cranially-side position augmented image on the image of the real head of the target person in an augmented reality device in an overlaying way based on the mapping relation.
2. The lateral-cranial-position-based augmented reality display method according to claim 1, wherein:
the plurality of X-ray films are characterized by a plane parallel to the shooting plane of the X-ray cranial films.
3. The lateral-cranial-position-based augmented reality display method according to claim 1, wherein:
the X-ray film feature set comprises a plurality of cranially-lateral position marking points.
4. The lateral-cranial-position-based augmented reality display method according to claim 3, wherein:
the plurality of cranial marker points comprises at least 2 marker points with constant relative positions and a plurality of marker points with variable relative positions.
5. The lateral-cranial-position-based augmented reality display method according to claim 3, wherein:
the lateral skull marking points comprise hard tissue marking points and soft tissue marking points;
the hard tissue landmark points are determined based on hard tissue structure images on an X-ray cranial slice;
the soft tissue landmark points are determined based on soft tissue structure images on an X-ray cranial slice.
6. The lateral-cranial-position-based augmented reality display method of claim 3, wherein the set of X-ray film features further comprises one or more of the following features:
at least one cranial delineation line and at least one cranial characteristic line;
the cranial mapping line is determined based on an X-ray cranial slice;
the lateral cranial feature line is determined based on the lateral cranial landmark points.
7. The lateral-cranial-position-based augmented reality display method according to claim 1, wherein:
the relative positional relationship between the plurality of X-ray film features has a first state matching an X-ray cranial film and a second state matching a real-time lateral facial state of the target person.
8. The lateral-cranial-position-based augmented reality display method according to claim 1, wherein:
the registration positioning is performed when the real-time lateral facial state of the target person matches the state in which the X-ray lateral cranial film was taken.
9. The lateral-cranial-position-based augmented reality display method according to claim 1, wherein the step S300 further comprises the steps of:
S310: determining n first positioning feature points from the X-ray film feature set, where n is greater than or equal to 2;
S320: determining n second positioning feature points from the lateral facial feature set, each second positioning feature point corresponding to the same part of the real head of the target person as its corresponding first positioning feature point;
S330: determining the mapping relation between the lateral cranial augmented image and the real head of the target person based on the first positioning feature points and the second positioning feature points.
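Two point pairs suffice in step S310 because a 2-D similarity transform (uniform scale, rotation, translation) has only four degrees of freedom. One way to realize step S330 is a least-squares fit in the complex form w = a·z + b; this is a hedged sketch of that technique, not the patent's actual algorithm, and all names are illustrative:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform from n >= 2 corresponding
    point pairs (first positioning points -> second positioning points).
    Each point (x, y) is encoded as a complex number z = x + iy, and the
    transform is w = a*z + b, where a encodes rotation + uniform scale."""
    z = np.asarray(src, dtype=float) @ np.array([1.0, 1.0j])
    w = np.asarray(dst, dtype=float) @ np.array([1.0, 1.0j])
    design = np.stack([z, np.ones_like(z)], axis=1)
    (a, b), *_ = np.linalg.lstsq(design, w, rcond=None)
    return a, b

def apply_similarity(a, b, points):
    """Map points from the X-ray film frame into the live-view frame."""
    z = np.asarray(points, dtype=float) @ np.array([1.0, 1.0j])
    w = a * z + b
    return np.stack([w.real, w.imag], axis=1)
```

With exactly two pairs the fit is exact; with more, lstsq spreads the residual over all pairs, which is what makes redundant landmark points useful for registration.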
10. The lateral-cranial-position-based augmented reality display method according to claim 9, further comprising the following steps before the step S310:
S308: displaying an X-ray lateral facial contour on the augmented reality device, the X-ray lateral facial contour being determined based on the X-ray lateral cranial film;
S309: adjusting the relative position of the augmented reality device and the target person so that the current viewing angle of the augmented reality device is the same as the shooting angle of the X-ray lateral cranial film, and the lateral facial contour of the target person displayed on the augmented reality device coincides with the X-ray lateral facial contour.
11. The lateral-cranial-position-based augmented reality display method according to claim 9, wherein:
at least one of the first location feature points is located on a tooth of the target person.
12. The lateral-cranial-position-based augmented reality display method according to claim 9, further comprising the following step before the step S310:
attaching a label to the side of the face of the target person that faces the augmented reality device.
13. The lateral-cranial-position-based augmented reality display method according to claim 1, wherein the lateral cranial augmented image includes one or more of the following items:
the X-ray lateral cranial film, the hard tissue structure image on the X-ray lateral cranial film, the soft tissue structure image on the X-ray lateral cranial film, the X-ray film feature set, and the lateral facial feature set.
14. The lateral-cranial-position-based augmented reality display method according to claim 1, further comprising the steps of:
S500: performing a cephalometric measurement based on the image of the real head of the target person and the lateral cranial augmented image;
S600: overlaying the cephalometric measurement data on the image of the real head of the target person in the augmented reality device.
15. The lateral-cranial-position-based augmented reality display method according to claim 14, wherein:
the cephalometric measurement is based on at least one measurement point taken from the image of the real head and at least one measurement point taken from the lateral cranial augmented image.
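Once measurement points from the real-head image and the lateral cranial augmented image are expressed in the common frame established by the mapping, a cephalometric measurement reduces to angles and distances between landmarks. A minimal sketch of those two primitives (the vertex-angle form covers common cephalometric angles; function names are illustrative, not from the patent):

```python
import numpy as np

def angle_deg(p1, vertex, p2):
    """Angle in degrees at `vertex` between rays vertex->p1 and vertex->p2."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(vertex, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(vertex, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def distance(p1, p2):
    """Euclidean distance between two measurement points."""
    return float(np.linalg.norm(np.asarray(p1, dtype=float)
                                - np.asarray(p2, dtype=float)))
```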
16. A lateral-cranial-position-based augmented reality display system for implementing the lateral-cranial-position-based augmented reality display method according to claim 1, comprising:
a first acquisition module, configured to acquire an X-ray film feature set formed by a plurality of X-ray film features of a target person, the X-ray film features being determined based on an X-ray lateral cranial film of the target person;
a second acquisition module, configured to determine a lateral facial feature set based on an image of the real head of the target person displayed by the augmented reality device, the lateral facial feature set comprising a plurality of lateral facial features;
a registration positioning module, configured to perform registration positioning on the X-ray film feature set and the lateral facial feature set to determine a mapping relation between the lateral cranial augmented image and the real head of the target person;
and an augmented display module, configured to overlay, based on the mapping relation, the lateral cranial augmented image on the image of the real head of the target person in the augmented reality device.
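The four claimed modules form a straightforward pipeline: film features in, facial features in, a registration step producing the mapping, and a display step applying it. A structural sketch with the modules injected as callables (all class and parameter names are illustrative assumptions, not from the patent):

```python
class LateralCranialARSystem:
    """Wires the four claimed modules together: first acquisition (X-ray
    film features), second acquisition (lateral facial features),
    registration positioning, and augmented display."""

    def __init__(self, acquire_film_features, acquire_facial_features,
                 register, render_overlay):
        self.acquire_film_features = acquire_film_features
        self.acquire_facial_features = acquire_facial_features
        self.register = register
        self.render_overlay = render_overlay

    def run(self, xray_film, head_image):
        film_features = self.acquire_film_features(xray_film)
        facial_features = self.acquire_facial_features(head_image)
        mapping = self.register(film_features, facial_features)
        return self.render_overlay(mapping, head_image)
```

Injecting the modules as callables keeps each one independently replaceable, which mirrors how the claim enumerates them as separate functional units.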
17. A computer-readable storage medium storing a program, characterized in that the program, when executed by a processor, implements the lateral-cranial-position-based augmented reality display method according to any one of claims 1 to 16.
CN202210164032.XA 2022-02-22 2022-02-22 Augmented reality display method and system based on lateral position of skull and storage medium Pending CN114521911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210164032.XA CN114521911A (en) 2022-02-22 2022-02-22 Augmented reality display method and system based on lateral position of skull and storage medium

Publications (1)

Publication Number Publication Date
CN114521911A 2022-05-24

Family

ID=81625173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210164032.XA Pending CN114521911A (en) 2022-02-22 2022-02-22 Augmented reality display method and system based on lateral position of skull and storage medium

Country Status (1)

Country Link
CN (1) CN114521911A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080020586A (en) * 2007-11-20 2008-03-05 오스템임플란트 주식회사 Method of assigning a landmark on a cephalometric radiograph
JP2012196316A (en) * 2011-03-22 2012-10-18 Asahi Roentgen Kogyo Kk Medical radiography device
WO2020121270A1 (en) * 2018-12-14 2020-06-18 Farronato Marco System and method for dynamic augmented reality imaging of an anatomical site
CN113823000A (en) * 2021-09-26 2021-12-21 上海交通大学医学院附属第九人民医院 Enhanced display method, system, device and storage medium based on head


Similar Documents

Publication Publication Date Title
US8253778B2 (en) Three-dimensional digital magnifier operation supporting system
EP1124487B1 (en) Dental image processing method and system
US10265149B2 (en) Method and system for modeling the mandibular kinematics of a patient
Chapuis et al. A new system for computer-aided preoperative planning and intraoperative navigation during corrective jaw surgery
US7133042B2 (en) Systems and methods for generating an appliance with tie points
US9922454B2 (en) Method for designing an orthodontic appliance
CN112826615B (en) Display method of fluoroscopy area based on mixed reality technology in orthodontic treatment
US6621491B1 (en) Systems and methods for integrating 3D diagnostic data
US20070207441A1 (en) Four dimensional modeling of jaw and tooth dynamics
JP2017525522A (en) Visualization device in the patient's mouth
CN112451151B (en) Orthodontic model establishing method utilizing mixed reality technology
KR101831514B1 (en) The augmented reality system reflected estimation of movement of maxillary
CN113260335B (en) System for measuring periodontal pocket depth
CN112885436B (en) Dental surgery real-time auxiliary system based on augmented reality three-dimensional imaging
CN112972027A (en) Orthodontic micro-implant implantation positioning method using mixed reality technology
Kusnoto Two-dimensional cephalometry and computerized orthognathic surgical treatment planning
Almulla et al. Evaluating the accuracy of facial models obtained from volume wrapping: 2D images on CBCT versus 3D on CBCT
WO2001080763A2 (en) Systems and methods for generating an appliance with tie points
CN112932703A (en) Orthodontic bracket bonding method utilizing mixed reality technology
RU2610911C1 (en) System and method of virtual smile prototyping based on tactile computer device
CN115886863A (en) Tooth and facial bone three-dimensional overlapping measurement method and device with total skull base as datum plane
CN114521911A (en) Augmented reality display method and system based on lateral position of skull and storage medium
CN114022611B (en) Morphological measurement and analysis system and method for unilateral positive locking mandible functional unit and application
JP3757160B2 (en) 3D facial diagram display method for orthodontics
CN114664454A (en) Simulation method of jaw face soft tissue three-dimensional model in orthodontic process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination