CN117503043B - OCT-based defocus amount intelligent identification method and device - Google Patents


Info

Publication number
CN117503043B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN202410021060.5A
Other languages
Chinese (zh)
Other versions
CN117503043A (en)
Inventor
安林
秦嘉
吴小翠
叶欣荣
陈咏然
Current Assignee
Weiren Medical Foshan Co ltd
Weizhi Medical Technology Foshan Co ltd
Guangdong Weiren Medical Technology Co ltd
Original Assignee
Weiren Medical Foshan Co ltd
Weizhi Medical Technology Foshan Co ltd
Guangdong Weiren Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Weiren Medical Foshan Co ltd, Weizhi Medical Technology Foshan Co ltd, Guangdong Weiren Medical Technology Co ltd
Priority to CN202410021060.5A
Publication of CN117503043A
Application granted
Publication of CN117503043B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/102: Objective types for optical coherence tomography [OCT]
    • A61B3/103: Objective types for determining refraction, e.g. refractometers, skiascopes


Abstract

The invention discloses an OCT-based defocus amount intelligent identification method and device. The method comprises: obtaining a retinal signal of a target user through an OCT system, and determining an inspection target according to the retinal signal; controlling a target motor to execute a moving operation, and determining, while the target motor executes the moving operation, a position change parameter of the target motor and a signal change parameter of the signal value corresponding to the inspection target; and determining the target defocus amount according to the position change parameter and the signal change parameter. By implementing the method and the device, the defocus amount can be identified intelligently, which helps improve the accuracy, reliability, and efficiency with which the defocus amount is obtained.

Description

OCT-based defocus amount intelligent identification method and device
Technical Field
The invention relates to the technical field of intelligent identification, in particular to an OCT-based defocus amount intelligent identification method and device.
Background
In the prior art, the retina responds to an optical defocus stimulus with rapid changes in the choroid, which can drive continuous remodelling of the sclera and thereby accelerate or slow the elongation of the eye, producing myopia or hyperopia. Retinal imaging is currently an important technique in fundus disease diagnosis and vision correction, and its key step is determining the best focus level and the defocus amount required for an individual retina, so that high-quality retinal images can be obtained for diagnosis and treatment by medical personnel. However, in the prior art the defocus amount is mostly obtained manually. Because the retinal tissue is complex and the eyeball moves continuously during focusing, manual focusing is difficult to control: it requires repeated adjustment, the procedure is cumbersome, the quality of the retinal images is hard to guarantee, and the obtained defocus amount has low accuracy. It is therefore important to provide a new defocus amount identification method to improve the accuracy of the obtained defocus amount.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an OCT-based defocus amount intelligent identification method and device that can identify the defocus amount intelligently, which helps improve the accuracy and reliability of the obtained defocus amount as well as the efficiency with which it is obtained.
In order to solve the technical problems, the first aspect of the invention discloses an OCT-based defocus amount intelligent identification method, which comprises the following steps:
obtaining a retina signal of a target user through an OCT system, and determining an inspection target according to the retina signal;
controlling a target motor to execute a moving operation, and determining a position change parameter of the target motor and a signal change parameter of a signal value corresponding to the inspection target in the process of executing the moving operation by the target motor; the target motors are motors corresponding to at least two focusing lenses;
and determining the target defocus amount according to the position change parameter and the signal change parameter.
As an alternative embodiment, in the first aspect of the present invention, the determining an inspection target according to the retinal signal includes:
generating a retinal structure of the target user from the retinal signal, wherein the retinal structure includes at least one RPE layer;
Determining a reflection value of each RPE layer of the target user according to the retina structure of the target user;
determining a target reflection value from all the reflection values according to the reflection values of all the RPE layers, and determining the RPE layer corresponding to the target reflection value as the inspection target;
and the determining a reflection value of each RPE layer of the target user according to the retina structure of the target user includes:
according to the retina structure of the target user, calculating a target spectrogram corresponding to the retina structure of the target user through a preset transformation calculation algorithm;
and determining the reflection value of each RPE layer of the target user according to the target spectrogram.
As an optional implementation manner, in the first aspect of the present invention, the determining, during the process of the target motor performing the moving operation, a position change parameter of the target motor, and determining a signal change parameter of a signal value corresponding to the inspection target, includes:
determining a movement change parameter of the target motor in the process of executing the movement operation by the target motor, and generating a position change parameter of the target motor according to all the movement change parameters, wherein the movement change parameter comprises one or more of a direction change parameter, an angle change parameter, a distance change parameter and a displacement change parameter;
Determining the signal variation of the signal value corresponding to the inspection target, generating the refraction variation of the inspection target according to the signal variation, and determining the signal variation parameter of the signal value corresponding to the inspection target according to the refraction variation.
In an optional implementation manner, in a first aspect of the present invention, the determining the target defocus amount according to the position change parameter and the signal change parameter includes:
generating a change relation parameter according to the position change parameter and the signal change parameter, wherein the change relation parameter comprises a relation between the position change of the target motor and the signal value change of the inspection target;
determining the target defocus amount of the target user according to the change relation parameters; wherein the target defocus amount comprises a defocus value;
and generating a retinal structure of the target user from the retinal signal, comprising:
scanning the retina signal to obtain retina image information corresponding to the target user;
and generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, and generating a retina structure of the target user based on the retina structure tomogram.
In an optional implementation manner, in a first aspect of the present invention, the generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user includes:
determining a confidence parameter of each piece of retina image information according to each piece of retina image information corresponding to the target user;
according to the confidence parameters of each piece of retina image information, determining all confidence parameters meeting preset confidence conditions as target confidence parameters, and determining the retina image information corresponding to all the target confidence parameters as target retina image information;
generating a retina structure tomogram of the target user based on all the determined target retina image information and a predetermined image processing algorithm;
wherein the retina structure tomogram comprises one or more of a retina overall outline map and a retina boundary layer map of the target user.
As an optional implementation manner, in the first aspect of the present invention, after the obtaining, by the OCT system, a retinal signal of the target user, before the determining, according to the retinal signal, an inspection target, the method further includes:
Based on all the obtained retina signals of the target user, performing preset signal processing operation on each retina signal to obtain a signal processing result of each retina signal;
for each retina signal, judging whether the retina signal meets a preset signal identification condition according to a signal processing result of the retina signal;
for each of the retinal signals, determining the retinal signal as a target retinal signal when it is determined that the retinal signal satisfies a preset signal recognition condition;
wherein said determining an inspection target from said retinal signal comprises:
determining the inspection target based on all of the target retinal signals.
As an alternative embodiment, in the first aspect of the present invention, the method further includes:
generating multi-focus fitting information of the target user according to the target defocus amount, wherein the multi-focus fitting information comprises one or more of multi-focus soft lens curvature information and multi-focus soft lens thickness information;
acquiring fitting requirement information of the target user; the fitting requirement information comprises one or more of eye treatment requirement information, eye correction requirement information, naked-eye fitting requirement information and spectacle-wearing fitting requirement information of the target user;
determining target fitting information of the target user according to the multi-focus fitting information of the target user and the fitting requirement information of the target user;
and generating a target fitted soft lens corresponding to the target user according to the target fitting information.
The second aspect of the invention discloses an OCT-based defocus amount intelligent recognition device, which comprises:
an acquisition module for acquiring retinal signals of the target user through the OCT system;
a determination module for determining an inspection target from the retinal signal;
the control module is used for controlling the target motor to execute moving operation; the target motors are motors corresponding to at least two focusing lenses;
the determining module is further configured to determine a position change parameter of the target motor and determine a signal change parameter of a signal value corresponding to the inspection target during the process of executing the moving operation by the target motor; and determining the target defocus amount according to the position change parameter and the signal change parameter.
As an alternative embodiment, in the second aspect of the present invention, the specific manner in which the determining module determines the inspection target according to the retinal signal includes:
Generating a retinal structure of the target user from the retinal signal, wherein the retinal structure includes at least one RPE layer;
determining a reflection value of each RPE layer of the target user according to the retina structure of the target user;
determining a target reflection value from all the reflection values according to the reflection values of all the RPE layers, and determining the RPE layer corresponding to the target reflection value as the inspection target;
and the specific mode of determining the reflection value of each RPE layer of the target user according to the retina structure of the target user comprises the following steps:
according to the retina structure of the target user, calculating a target spectrogram corresponding to the retina structure of the target user through a preset transformation calculation algorithm;
and determining the reflection value of each RPE layer of the target user according to the target spectrogram.
As an optional implementation manner, in the second aspect of the present invention, the specific manner of determining, by the determining module, the position change parameter of the target motor during the process of performing the moving operation by the target motor, and determining the signal change parameter of the signal value corresponding to the inspection target includes:
Determining a movement change parameter of the target motor in the process of executing the movement operation by the target motor, and generating a position change parameter of the target motor according to all the movement change parameters, wherein the movement change parameter comprises one or more of a direction change parameter, an angle change parameter, a distance change parameter and a displacement change parameter;
determining the signal variation of the signal value corresponding to the inspection target, generating the refraction variation of the inspection target according to the signal variation, and determining the signal variation parameter of the signal value corresponding to the inspection target according to the refraction variation.
In a second aspect of the present invention, as an optional implementation manner, the determining module determines the target defocus amount according to the position change parameter and the signal change parameter includes:
generating a change relation parameter according to the position change parameter and the signal change parameter, wherein the change relation parameter comprises a relation between the position change of the target motor and the signal value change of the inspection target;
determining the target defocus amount of the target user according to the change relation parameters; wherein the target defocus amount comprises a defocus value;
And the specific mode of generating the retina structure of the target user according to the retina signal by the determining module comprises the following steps:
scanning the retina signal to obtain retina image information corresponding to the target user;
and generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, and generating a retina structure of the target user based on the retina structure tomogram.
In a second aspect of the present invention, as an optional implementation manner, the determining module generates the retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, where the specific manner includes:
determining a confidence parameter of each piece of retina image information according to each piece of retina image information corresponding to the target user;
according to the confidence parameters of each piece of retina image information, determining all confidence parameters meeting preset confidence conditions as target confidence parameters, and determining the retina image information corresponding to all the target confidence parameters as target retina image information;
Generating a retina structure tomogram of the target user based on all the determined target retina image information and a predetermined image processing algorithm;
wherein the retina structure tomogram comprises one or more of a retina overall outline map and a retina boundary layer map of the target user.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further includes:
the processing module is used for executing preset signal processing operation on each retina signal based on all the obtained retina signals of the target user before the determining module determines the inspection target according to the retina signals after the obtaining module obtains the retina signals of the target user through the OCT system, so as to obtain a signal processing result of each retina signal;
the judging module is used for judging whether the retina signal meets the preset signal identification condition or not according to the signal processing result of the retina signal for each retina signal;
the determining module is further configured to determine, for each of the retinal signals, the retinal signal as a target retinal signal when it is determined that the retinal signal meets a preset signal recognition condition;
Wherein, the specific manner in which the determining module determines the inspection target according to the retinal signal comprises:
determining the inspection target based on all of the target retinal signals.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further includes:
the generation module is used for generating multi-focus fitting information of the target user according to the target defocus amount, wherein the multi-focus fitting information comprises one or more of multi-focus soft lens curvature information and multi-focus soft lens thickness information;
the acquisition module is also used for acquiring the fitting requirement information of the target user; the fitting requirement information comprises one or more of eye treatment requirement information, eye correction requirement information, naked-eye fitting requirement information and spectacle-wearing fitting requirement information of the target user;
the determining module is further configured to determine target fitting information of the target user according to the multi-focus fitting information of the target user and the fitting requirement information of the target user;
the generating module is further used for generating a target fitted soft lens corresponding to the target user according to the target fitting information.
The third aspect of the invention discloses another OCT-based defocus amount intelligent recognition device, which comprises:
A memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to execute the OCT-based defocus amount intelligent recognition method disclosed in the first aspect of the present invention.
A fourth aspect of the present invention discloses a computer storage medium storing computer instructions for performing the OCT-based defocus amount intelligent recognition method disclosed in the first aspect of the present invention when the computer instructions are called.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the retinal signal of a target user is acquired through an OCT system, and an inspection target is determined according to the retinal signal; the target motor is controlled to execute a moving operation, and a position change parameter of the target motor and a signal change parameter of the signal value corresponding to the inspection target are determined during the movement; and the target defocus amount is determined according to the position change parameter and the signal change parameter. Implementing the method and the device thus identifies the defocus amount intelligently, which helps improve the accuracy, reliability, and efficiency with which the defocus amount is obtained.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an intelligent identification method of defocus amount based on OCT according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another OCT-based defocus amount intelligent recognition method disclosed in the embodiment of the invention;
fig. 3 is a schematic structural diagram of an intelligent identification device for defocus amount based on OCT according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another intelligent identification device for defocus amount based on OCT according to the present invention;
fig. 5 is a schematic structural diagram of another intelligent identification device for defocus amount based on OCT according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first", "second" and the like in the description, in the claims, and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, or product that comprises a list of steps or elements is not limited to those steps or elements listed, but may optionally include other steps or elements not listed or inherent to such process, method, apparatus, or product.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses an OCT-based defocus amount intelligent identification method and device, which can be used for intelligently identifying defocus amount, and are beneficial to improving the accuracy and reliability of the obtained defocus amount and improving the efficiency of the obtained defocus amount. The following will describe in detail.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of an OCT-based defocus amount intelligent identification method disclosed in an embodiment of the present invention. The OCT-based defocus amount intelligent identification method described in fig. 1 may be applied to an OCT-based defocus amount intelligent identification device, and the device may be integrated in a cloud server or a local server, which is not limited in the embodiment of the present invention. As shown in fig. 1, the OCT-based defocus amount intelligent identification method may include the following operations:
101. The retinal signal of the target user is acquired through the OCT system, and the inspection target is determined according to the retinal signal.
In an embodiment of the present invention, OCT (optical coherence tomography) is a non-invasive tomographic imaging technique applied in ophthalmology; it generates high-resolution volumetric histological images of tissue and uses long-wavelength near-infrared light to penetrate deep into biological tissue.
In an embodiment of the present invention, optionally, the acquiring, by the OCT system, a retinal signal of the target user may include:
and scanning eyes of the target user through the OCT system by using a two-dimensional scanning mirror to obtain retina signals of the target user.
In an embodiment of the present invention, optionally, the retinal signal of the target user includes one or more of a vitreous cavity signal, a retinal neurosensory layer signal, a retinal pigment epithelium-Bruch's membrane signal, a choroid signal, a posterior vitreous cortex signal, a nerve fiber layer signal, a ganglion cell layer signal, an inner plexiform layer signal, an inner nuclear layer signal, an outer plexiform layer signal, an outer nuclear layer signal, an external limiting membrane signal, a myoid zone signal, an ellipsoid zone signal, a photoreceptor outer segment signal, an interdigitation zone signal, and a choriocapillaris layer signal of the target user.
In an embodiment of the present invention, optionally, the inspection target includes an RPE layer.
102. Controlling the target motor to execute the moving operation, and determining, while the target motor executes the moving operation, a position change parameter of the target motor and a signal change parameter of the signal value corresponding to the inspection target.
In the embodiment of the invention, the target motors are motors corresponding to at least two focusing lenses.
In an embodiment of the present invention, further optionally, the focusing lens may be a liquid lens. A liquid lens uses a liquid as the lens medium and changes its focal length by changing the curvature of the liquid; applying a voltage changes the shape of the liquid droplet and therefore the focal length.
In an embodiment of the present invention, optionally, the controlling the target motor to perform the moving operation may include:
and generating motor movement parameters of the target motor according to the OCT system and the retina signal, and controlling the target motor to execute movement operation matched with the motor movement parameters, wherein the motor movement parameters comprise one or more of movement direction parameters, movement speed parameters, movement distance parameters, movement duration parameters and movement acceleration parameters of the target motor. Further alternatively, the control motor performs a movement operation to control the lens movement, thereby controlling the change in the light focusing position.
103. And determining the target defocus amount according to the position change parameter and the signal change parameter.
In an embodiment of the present invention, optionally, the target defocus amount includes a defocus value. The degree of defocus refers to the ratio, during imaging, of the distance between the object and the imaging plane to the distance between the imaging plane and the lens; it is an important parameter for measuring the imaging capability of a lens and is commonly used in optical design. Light imaged on the retina is in focus, whereas light not imaged on the retina is out of focus (defocused). Further optionally, the defocus surface comprises a refractive surface.
In an embodiment of the present invention, optionally, for example, the position change parameter includes the position change of the target motor; moving the target motor shifts the light focusing position of the focusing lens and thereby changes the optical power, so the defocus amount can be quantified from the amount of motor position movement and the corresponding refractive power change, giving the target defocus amount.
Therefore, implementing the OCT-based defocus amount intelligent identification method described in fig. 1 can acquire the retinal signal of the target user through the OCT system and determine the inspection target, control the target motor to execute the moving operation and determine the position change parameter of the target motor and the signal change parameter of the inspection target during the movement, and determine the target defocus amount according to the position change parameter and the signal change parameter. The defocus amount is thus identified intelligently, the focal plane of the target imaging can be measured and the defocus plane identified, and the accuracy, reliability, and efficiency of the obtained defocus amount are improved.
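To make the overall flow of steps 101 to 103 concrete, the following minimal Python sketch strings the three steps together under simplified assumptions; every function name (acquire_retinal_signal, find_inspection_target, signal_at_target) and every constant (the simulated best-focus position, the diopter-per-millimetre factor) is an invented placeholder rather than the embodiment's actual implementation.

```python
import numpy as np


def acquire_retinal_signal(rng: np.random.Generator) -> np.ndarray:
    # Placeholder for an OCT A-scan: a depth profile with one bright RPE-like peak.
    depth = np.linspace(0.0, 1.0, 512)
    return np.exp(-((depth - 0.7) / 0.02) ** 2) + 0.05 * rng.random(512)


def find_inspection_target(signal: np.ndarray) -> int:
    # Use the most strongly reflecting depth sample as the inspection target.
    return int(np.argmax(signal))


def signal_at_target(signal: np.ndarray, target_idx: int, motor_pos_mm: float) -> float:
    # Toy focus model: the target reflectance falls off as the lens moves away
    # from an assumed best-focus motor position of 0.3 mm.
    return float(signal[target_idx]) * float(np.exp(-((motor_pos_mm - 0.3) / 0.2) ** 2))


def identify_defocus(diopters_per_mm: float = 1.0) -> float:
    rng = np.random.default_rng(0)
    signal = acquire_retinal_signal(rng)            # step 101: retinal signal
    target = find_inspection_target(signal)         # step 101: inspection target
    positions = np.arange(0.0, 1.0, 0.05)           # step 102: motor sweep (mm)
    values = [signal_at_target(signal, target, p) for p in positions]
    best_pos = positions[int(np.argmax(values))]    # focal-plane motor position
    # Step 103: defocus quantified from motor displacement times diopters per mm.
    return (best_pos - positions[0]) * diopters_per_mm


if __name__ == "__main__":
    print("estimated defocus (D):", round(identify_defocus(), 2))
```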
In an alternative embodiment, determining an inspection target from the retinal signal includes:
generating a retina structure of the target user according to the retina signal, wherein the retina structure comprises at least one RPE layer;
Determining a reflection value of each RPE layer of the target user according to the retina structure of the target user;
and determining a target reflection value from all the reflection values according to the reflection values of all the RPE layers, and determining the RPE layer corresponding to the target reflection value as the inspection target.
In this alternative embodiment, optionally, the retinal pigment epithelium (RPE) is a primary component of the retina, and the RPE layer consists of a single layer of regular polygonal cells arranged in the outermost layer of the retina. The outer side of the RPE adjoins Bruch's membrane and the choroid, and the inner side adjoins the outer segments of the photoreceptor cells. The outer (basal) side presents basal infoldings that increase the cell surface area and promote material exchange, and the hemidesmosomes there attach tightly to the basement membrane, which forms the innermost layer of Bruch's membrane. The inner (apical) side of the RPE cells bears microvilli that extend between the photoreceptor outer segments (POS) and participate in the phagocytic function of the RPE. The tight junctions and gap junctions formed within the RPE monolayer control the movement of substances and, together with Bruch's membrane and the choroid outside the retina, form the blood-retinal barrier. Because of its melanin content the retinal pigment epithelium is dark brown, which reduces damage to the retina and the underlying nerves from ultraviolet light. The RPE also contains a complex metabolic system that reduces excessive accumulation of reactive oxygen species (ROS) and the resulting oxidative damage.
In this alternative embodiment, optionally, the retinal structure of the target user may be generated from the retinal signal based on the reflection characteristics of the retina.
In this optional embodiment, optionally, determining the target reflection value from all reflection values according to the reflection values of all RPE layers may include:
and determining the highest reflection value from all the reflection values according to the reflection values of all the RPE layers, and determining the highest reflection value as a target reflection value.
Therefore, implementing this alternative embodiment can generate the retinal structure of the target user from the retinal signal, determine the reflection value of each RPE layer of the target user from that structure, determine a target reflection value from all the reflection values, and determine the RPE layer corresponding to the target reflection value as the inspection target. Using the reflection characteristics of the retina, the most strongly reflecting RPE layer is selected as the inspection target, which improves the accuracy, reliability, and intelligence of determining the inspection target and, in turn, the accuracy and reliability of the target defocus amount subsequently determined from the signal change parameter and the position change parameter of the inspection target.
In another alternative embodiment, determining the reflectance value of each RPE layer of the target user based on the target user's retinal structure comprises:
according to the retina structure of the target user, calculating to obtain a target spectrogram corresponding to the retina structure of the target user through a preset transformation calculation algorithm;
and determining the reflection value of each RPE layer of the target user according to the target spectrogram.
In this alternative embodiment, optionally, the preset transform calculation algorithm comprises a Fourier transform calculation.
In this optional embodiment, optionally, the calculating, according to the retina structure of the target user by a preset transformation calculation algorithm, the target spectrogram corresponding to the retina structure of the target user may include:
and determining a spectrometer receiving signal of the target user according to the retina structure of the target user, and calculating to obtain a target spectrogram corresponding to the retina structure of the target user through a preset transformation calculation algorithm.
In this alternative embodiment, optionally, for example, the spectrum obtained by applying the Fourier transform calculation to the signal received by the spectrometer is the target spectrogram.
In this alternative embodiment, further optionally, the target spectrogram includes a signal value for each RPE layer. Further, the signal value for each RPE layer includes a reflection value for that RPE layer.
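A hedged sketch of this step is given below, assuming a spectral-domain OCT arrangement in which the spectrometer fringe is Fourier-transformed into a depth profile and the reflection value of each layer is read at its depth index; the synthetic fringe and the layer depth indices are assumptions made only for the example.

```python
import numpy as np


def depth_profile_from_spectrum(fringe: np.ndarray) -> np.ndarray:
    """Fourier-transform the spectrometer fringe into a depth-resolved profile."""
    profile = np.abs(np.fft.fft(fringe))
    return profile[: len(profile) // 2]          # keep positive depths only


def layer_reflection_values(profile: np.ndarray,
                            layer_depths: dict[str, int]) -> dict[str, float]:
    """Read the reflection value of each labelled layer at its depth index."""
    return {name: float(profile[idx]) for name, idx in layer_depths.items()}


if __name__ == "__main__":
    k = np.linspace(0, 2 * np.pi, 1024)                          # wavenumber samples
    fringe = 1.0 + 0.8 * np.cos(150 * k) + 0.3 * np.cos(90 * k)  # two toy reflectors
    profile = depth_profile_from_spectrum(fringe)
    # Hypothetical depth indices for two layers of interest.
    print(layer_reflection_values(profile, {"RPE": 150, "Bruch": 90}))
```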
It can be seen that implementing this alternative embodiment can calculate the target spectrogram of the target user by combining the preset transform calculation algorithm with the retinal structure of the target user, and determine the reflection value of each RPE layer of the target user from the target spectrogram. Combining the transform calculation algorithm with the retinal structure improves the accuracy, reliability, intelligence, and efficiency of obtaining the target spectrogram, which in turn improves the accuracy and reliability of the reflection value determined for each RPE layer, of the inspection target determined from those reflection values, and of the target defocus amount subsequently determined from the signal change parameter and the position change parameter of the inspection target.
In yet another alternative embodiment, determining a position change parameter of the target motor and determining a signal change parameter of a signal value corresponding to the inspection target during the moving operation of the target motor includes:
Determining a movement change parameter of the target motor in the process of executing the movement operation by the target motor, and generating a position change parameter of the target motor according to all the movement change parameters, wherein the movement change parameter comprises one or more of a direction change parameter, an angle change parameter, a distance change parameter and a displacement change parameter;
determining the signal variation of the signal value corresponding to the inspection target, generating the refraction variation of the inspection target according to the signal variation, and determining the signal variation parameter of the signal value corresponding to the inspection target according to the refraction variation.
In this alternative embodiment, further optionally, before the target motor performs the moving operation, the method may further include:
determining a movement control parameter of a target motor, wherein the movement control parameter of the target motor comprises one or more of a motor movement speed control parameter, a motor movement direction control parameter, a motor movement acceleration control parameter, a motor movement angle control parameter, a motor movement duration control parameter and a motor movement path control parameter;
controlling the target motor to execute a movement operation matched with the movement control parameters.
In this alternative embodiment, optionally, the position change parameters of the target motor include all movement change parameters of the target motor.
In this alternative embodiment, optionally, the signal variation of the signal value corresponding to the inspection target includes a magnitude variation of the signal reflection value. Further optionally, the determining the signal change parameter of the signal value corresponding to the inspection target according to the refraction change amount may include:
and determining a refractive power change value according to the refractive power change amount, and determining a signal change parameter of a signal value corresponding to the inspection target according to the refractive power change value.
In this alternative embodiment, optionally, for example, the lens position is changed by driving the motor, and the imaging focal plane or defocus plane is determined by tracking the change in magnitude of the signal value at the inspection target; when the signal peak is highest, the motor position T0 at the current focal plane is recorded as the focal plane and the refractive power of the current system is recorded as the best vision zone.
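A minimal sketch of that peak search follows, assuming the signal value at the inspection target can be sampled at each motor position; the sampling function and the sweep range are invented for the example.

```python
import numpy as np


def find_focal_plane(motor_positions_mm, sample_signal):
    """Sweep the motor, record the target signal value at each position,
    and return (T0, peak value) where the signal peaks."""
    values = np.array([sample_signal(p) for p in motor_positions_mm])
    best = int(np.argmax(values))
    return motor_positions_mm[best], float(values[best])


if __name__ == "__main__":
    def toy_signal(p: float) -> float:
        # Toy focus curve peaking at a (hypothetical) best-focus position of 0.42 mm.
        return float(np.exp(-((p - 0.42) / 0.1) ** 2))

    positions = np.linspace(0.0, 1.0, 101)
    t0, peak = find_focal_plane(positions, toy_signal)
    print(f"focal-plane motor position T0 = {t0:.2f} mm, peak signal = {peak:.3f}")
```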
Therefore, implementing this alternative embodiment can determine the movement change parameters of the target motor while it executes the moving operation and generate the position change parameter of the target motor from all of them, determine the signal variation of the signal value corresponding to the inspection target, generate the refraction variation of the inspection target from that signal variation, and determine the signal change parameter from the refraction variation. Determining the position change parameter of the target motor and the signal change parameter of the inspection target separately and in a targeted manner improves the accuracy and reliability of both parameters, which in turn improves the accuracy, reliability, intelligence, and efficiency of the target defocus amount generated from them.
In yet another alternative embodiment, determining the target defocus amount from the position variation parameter and the signal variation parameter comprises:
generating a change relation parameter according to the position change parameter and the signal change parameter, wherein the change relation parameter comprises a relation between the position change of the target motor and the signal value change of the inspection target;
determining a target defocus amount of a target user according to the change relation parameters; wherein the target defocus amount comprises a defocus value.
In this optional embodiment, optionally, generating the change relation parameter according to the location change parameter and the signal change parameter may include:
and inputting the position change parameters and the signal change parameters into a predetermined relation determination model to obtain a model output result, and generating change relation parameters according to the model output result.
In this alternative embodiment, the variable relation parameter may optionally include one or more of a relation between a position of the target motor and a signal value of the inspection target, and a relation between a position variation value of the target motor and a signal variation value of the inspection target.
In this alternative embodiment, optionally, the target defocus amount comprises the defocus amount corresponding to a defocus plane, where the defocus plane can be understood as a diopter value obtained by controlling the motor movement position. Further, the relationship between the position change of the target motor and the signal value change of the inspection target may include, for example, a diopter change of 1 D per 1 mm of motor movement.
In this optional embodiment, optionally, determining the target defocus amount of the target user according to the change relation parameter may include:
inputting the change relation parameters into a predetermined defocus measurement model to obtain a model measurement result, and determining the target defocus amount of a target user according to the model measurement result; wherein the model measurement includes defocus.
In this alternative embodiment, optionally, the defocus plane is controlled, for example, through the optical power change caused by the change in motor position, and the defocus plane is quantified from the amount of motor position movement and the corresponding refraction change.
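Under the example relation quoted above (a diopter change of 1 D per 1 mm of motor movement), the defocus amount could be quantified as in the following sketch; the conversion factor and the positions are illustrative assumptions only.

```python
def defocus_from_motor_positions(current_pos_mm: float,
                                 focal_plane_pos_mm: float,
                                 diopters_per_mm: float = 1.0) -> float:
    """Quantify the defocus amount (in diopters) from the motor displacement
    between the current plane and the recorded focal plane T0."""
    return (current_pos_mm - focal_plane_pos_mm) * diopters_per_mm


# Example: the lens sits at 0.75 mm while the recorded focal plane is at 0.50 mm.
print(defocus_from_motor_positions(0.75, 0.50))  # -> 0.25 D of defocus
```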
Therefore, implementing this alternative embodiment can generate the change relation parameter from the position change parameter and the signal change parameter and determine the target defocus amount of the target user from that relation. Because the relation between the position change of the target motor and the signal value change of the inspection target is derived comprehensively from multiple kinds of data, the accuracy, reliability, and intelligence of the obtained change relation parameter are improved, so the defocus amount is identified intelligently and its accuracy, reliability, and efficiency are improved.
In yet another alternative embodiment, generating a retinal structure of a target user from the retinal signal includes:
scanning retina signals to obtain retina image information corresponding to a target user;
and generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, and generating a retina structure of the target user based on the retina structure tomogram.
In this alternative embodiment, the predetermined image processing algorithm may optionally include one or more of an image transformation algorithm, an image enhancement algorithm, an image segmentation algorithm, and an image classification algorithm; the image transformation algorithm comprises one or more of a geometric transformation algorithm, a scale transformation algorithm, and a transformation between the spatial domain and the frequency domain; the image enhancement algorithm comprises one or more of a gray-level transformation enhancement algorithm, a histogram enhancement algorithm, an image smoothing algorithm, an image noise reduction algorithm and an image sharpening algorithm; the image segmentation algorithm comprises one or more of a threshold segmentation algorithm, a boundary segmentation algorithm, a Hough transform algorithm, a region segmentation algorithm and a color segmentation algorithm.
In this optional embodiment, optionally, the scanning the retinal signal to obtain retinal image information corresponding to the target user may include:
and scanning the retina signals through a two-dimensional scanning mirror to obtain retina image information corresponding to the target user.
Therefore, implementing this alternative embodiment can scan the retinal signal to obtain the retinal image information corresponding to the target user, and combine the image processing algorithm with that retinal image information to generate the retinal structure tomogram and, from it, the retinal structure of the target user. Generating the tomogram in this way improves its accuracy, reliability, intelligence, and efficiency, which in turn improves the accuracy of the generated retinal structure, of the subsequently determined inspection target, and of the target defocus amount determined from the signal change parameter and the position change parameter of the inspection target.
In yet another alternative embodiment, generating a retina structure tomogram of the target user based on a predetermined image processing algorithm and retina image information corresponding to the target user includes:
determining a confidence parameter of each piece of retina image information according to each piece of retina image information corresponding to the target user;
according to the confidence parameters of each piece of retina image information, determining all confidence parameters meeting preset confidence conditions as target confidence parameters, and determining the retina image information corresponding to all target confidence parameters as target retina image information;
generating a retina structure tomogram of the target user based on all the determined target retina image information and a predetermined image processing algorithm;
wherein the retina structure tomogram comprises one or more of a retina overall outline map and a retina boundary layer map of the target user.
In this optional embodiment, optionally, determining the confidence parameter of each retinal image information according to each retinal image information corresponding to the target user may include:
determining an image parameter of each piece of retina image information corresponding to the target user according to each piece of retina image information corresponding to the target user, wherein the image parameter comprises one or more of a definition parameter, a sharpness parameter, a brightness parameter, a quality parameter and a size parameter of the retina image information;
Determining a confidence parameter of each piece of retina image information based on all image parameters of each piece of retina image information corresponding to the target user; wherein the confidence parameter for each retinal image information includes a confidence level for that retinal image information.
In this optional embodiment, optionally, determining, according to the confidence parameters of each retinal image information, all confidence parameters that meet the preset confidence condition as target confidence parameters may include:
determining the confidence coefficient of each piece of retina image information according to the confidence parameter of each piece of retina image information;
for each piece of retina image information, judging whether the confidence coefficient of the retina image information is larger than or equal to a preset confidence coefficient threshold value;
for each piece of retina image information, when judging that the confidence coefficient of the retina image information is larger than or equal to a preset confidence coefficient threshold value, determining the confidence parameter corresponding to the retina image information as a target confidence parameter;
for each retinal image information, when it is determined that the confidence level of the retinal image information is less than the preset confidence level threshold value, the present flow may be ended.
In this optional embodiment, optionally, generating the retina structure tomogram of the target user based on the determined all target retina image information and the predetermined image processing algorithm may include:
And executing processing operation matched with the image processing algorithm on all target retina image information through a predetermined image processing algorithm to obtain a retina structure tomogram of the target user.
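A minimal sketch of the confidence-based selection follows, assuming each piece of retinal image information carries a scalar confidence parameter and a pixel array; the threshold value and the frame averaging used as a stand-in for the image processing algorithm are assumptions.

```python
import numpy as np


def select_target_images(images, confidences, threshold=0.8):
    """Keep only the retinal image information whose confidence parameter
    meets the preset confidence condition (>= threshold)."""
    return [img for img, c in zip(images, confidences) if c >= threshold]


def build_tomogram(target_images):
    """Stand-in image processing step: average the selected frames."""
    return np.mean(np.stack(target_images), axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = [rng.random((32, 32)) for _ in range(5)]
    confs = [0.95, 0.60, 0.88, 0.91, 0.40]     # hypothetical confidence parameters
    kept = select_target_images(frames, confs)
    print("frames kept:", len(kept), "| tomogram shape:", build_tomogram(kept).shape)
```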
In this alternative embodiment, optionally, the retinal boundary layer map of the target user comprises one or more of a vitreous boundary map, a retinal interface map, a retinal nerve fiber layer map, a ganglion cell layer map, an inner plexiform layer map, an inner nuclear layer map, an outer plexiform layer map, an outer nuclear layer map, an external limiting membrane map, a pigment epithelial layer map, and a choroidal layer map.
It can be seen that implementing this alternative embodiment can determine the confidence parameter of each piece of retinal image information corresponding to the target user, determine all confidence parameters satisfying the preset confidence condition as target confidence parameters together with the corresponding target retinal image information, and generate the retinal structure tomogram of the target user based on all target retinal image information and the image processing algorithm. Selecting the target retinal image information by its confidence parameters improves the accuracy and reliability of that information and, in turn, of the retinal structure tomogram subsequently generated from it.
In yet another alternative embodiment, after the retinal signal of the target user is obtained by the OCT system and before the inspection target is determined from the retinal signal, the method further comprises:
based on all the obtained retina signals of the target user, performing preset signal processing operation on each retina signal to obtain a signal processing result of each retina signal;
for each retina signal, judging whether the retina signal meets a preset signal identification condition according to a signal processing result of the retina signal;
for each retinal signal, determining the retinal signal as a target retinal signal when it is determined that the retinal signal satisfies a preset signal recognition condition;
wherein determining the inspection target from the retinal signal comprises:
determining the inspection target based on all target retinal signals.
In this optional embodiment, optionally, performing a preset signal processing operation on each retinal signal based on all acquired retinal signals of the target user to obtain a signal processing result of each retinal signal may include:
judging whether a preset impurity signal exists in each retina signal, and when judging that the preset impurity signal exists in the retina signal, performing signal filtering operation on the retina signal to obtain a signal processing result of the retina signal;
Wherein the signal processing result of each retinal signal includes a retinal signal obtained by filtering the impurity signal in the retinal signal.
In this optional embodiment, optionally, for each retinal signal, determining whether the retinal signal meets a preset signal identifying condition according to a signal processing result of the retinal signal may include:
for each retina signal, determining the signal interference content of the retina signal according to the signal processing result of the retina signal, and judging whether the signal interference content of the retina signal is smaller than a preset signal interference content threshold value;
for each retina signal, when judging that the signal interference content of the retina signal is smaller than a preset signal interference content threshold value, determining that the retina signal meets a preset signal identification condition; when the signal interference content of the retina signal is larger than or equal to the preset signal interference content threshold value, determining that the retina signal does not meet the preset signal identification condition.
In this alternative embodiment, further alternatively, when it is determined that the retinal signal does not satisfy the preset signal recognition condition, the present flow may be ended.
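As an illustration only, the following sketch estimates a signal interference content as the fraction of signal energy remaining after a simple moving-average filter and applies a threshold as the preset signal recognition condition; the filter, the threshold, and the synthetic signals are assumptions rather than the embodiment's actual processing.

```python
import numpy as np


def interference_content(signal: np.ndarray, window: int = 9) -> float:
    """Estimate the signal interference content as the fraction of energy left
    after subtracting a moving-average (low-pass) version of the signal."""
    kernel = np.ones(window) / window
    smooth = np.convolve(signal, kernel, mode="same")
    residual = signal - smooth
    return float(np.sum(residual ** 2) / (np.sum(signal ** 2) + 1e-12))


def is_target_retinal_signal(signal: np.ndarray, threshold: float = 0.05) -> bool:
    """Preset signal recognition condition: interference content below threshold."""
    return interference_content(signal) < threshold


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 1000)
    clean = np.sin(2 * np.pi * 5 * t) + 2.0
    noisy = clean + 0.8 * rng.standard_normal(t.size)
    print("clean signal accepted:", is_target_retinal_signal(clean))
    print("noisy signal accepted:", is_target_retinal_signal(noisy))
```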
Therefore, implementing this alternative embodiment can perform the signal processing operation on each acquired retinal signal of the target user to obtain its signal processing result, judge whether each retinal signal satisfies the preset signal recognition condition, determine the signals that do as target retinal signals, and determine the inspection target from all target retinal signals. Processing the retinal signals and filtering them by the recognition condition yields target retinal signals of higher signal purity, which improves the accuracy and reliability of each target retinal signal, of the inspection target subsequently determined from them, and of the target defocus amount subsequently determined from the signal change parameter and the position change parameter of the inspection target.
Example two
Referring to fig. 2, fig. 2 is a flow chart of another OCT-based defocus amount intelligent identification method according to an embodiment of the present invention. The OCT-based defocus amount intelligent identification method described in fig. 2 may be applied to an OCT-based defocus amount intelligent identification device, and the device may be integrated in a cloud server or a local server, which is not limited by the embodiment of the present invention. As shown in fig. 2, the OCT-based defocus amount intelligent recognition method may include the following operations:
201. The retina signal of the target user is acquired through the OCT system, and the inspection target is determined according to the retina signal.
202. The target motor is controlled to execute a moving operation; during the moving operation, a position change parameter of the target motor and a signal change parameter of the signal value corresponding to the inspection target are determined.
203. The target defocus amount is determined according to the position change parameter and the signal change parameter.
In the embodiment of the present invention, for the detailed descriptions of step 201 to step 203, please refer to the descriptions of step 101 to step 103 in the first embodiment; details are not repeated here.
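As a rough, non-authoritative sketch of steps 201 to 203, the snippet below steps the target motor through a range of positions, records the signal value of the inspection target at each position, and converts the offset of the best-focus position from a reference position into a defocus estimate. The helper read_rpe_signal_value and the calibration constant DIOPTERS_PER_MM are hypothetical; the actual relation between motor travel and defocus would come from the system's own calibration.

```python
import numpy as np

DIOPTERS_PER_MM = 2.5  # hypothetical calibration factor, not from the patent

def estimate_defocus(motor_positions_mm, read_rpe_signal_value, reference_position_mm):
    """Return (best_focus_position_mm, defocus_estimate_in_diopters).

    read_rpe_signal_value(position) is assumed to move the motor to `position`
    and return the signal value of the inspection target (the selected RPE layer).
    """
    signal_values = np.array([read_rpe_signal_value(p) for p in motor_positions_mm])
    best_index = int(np.argmax(signal_values))        # position with the sharpest RPE signal
    best_position = motor_positions_mm[best_index]    # focal-plane motor position
    defocus = (best_position - reference_position_mm) * DIOPTERS_PER_MM
    return best_position, defocus
```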
204. Multi-focus fitting information of the target user is generated according to the target defocus amount.
In the embodiment of the invention, the multi-focus fitting information comprises one or more of multi-focus soft lens curvature information and multi-focus soft lens thickness information.
In an embodiment of the present invention, optionally, the target defocus amount further includes a defocus value, where the defocus value refers to the ratio of the distance between the object and the imaging plane to the distance between the imaging plane and the lens during imaging. It is one of the important parameters for measuring the imaging capability of a lens and is also a parameter commonly used in optical design.
In an embodiment of the present invention, optionally, generating the multi-focus fitting information of the target user according to the target defocus amount may include:
determining eye defocus information of the target user according to the defocus value in the target defocus amount, wherein the eye defocus information comprises retinal defocus value difference information of the target user;
and determining multi-focus fitting information of the target user according to the eye defocus information of the target user.
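For illustration, a minimal sketch of the defocus value as defined above (a ratio of two distances) and of a possible "retinal defocus value difference" is given below; the eye_defocus_info structure is an assumption introduced here, since the embodiment does not specify how the difference information is composed.

```python
def defocus_value(object_to_image_plane_mm: float, image_plane_to_lens_mm: float) -> float:
    """Defocus value as defined above: the ratio of the object-to-imaging-plane
    distance to the imaging-plane-to-lens distance."""
    return object_to_image_plane_mm / image_plane_to_lens_mm

def eye_defocus_info(central_defocus: float, peripheral_defocus: float) -> dict:
    """Hypothetical 'retinal defocus value difference information': the difference
    between peripheral and central defocus values (composition not specified in the text)."""
    return {"defocus_difference": peripheral_defocus - central_defocus}
```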
205. Fitting requirement information of the target user is acquired.
In the embodiment of the invention, the fitting requirement information comprises one or more of eye treatment requirement information, eye correction requirement information, naked-eye fitting requirement information and lens-wearing fitting requirement information of the target user.
In the embodiment of the present invention, optionally, the fitting requirement information of the target user may be acquired in real time, or may be acquired when the target user needs to perform lens fitting.
206. Target fitting information of the target user is determined according to the multi-focus fitting information of the target user and the fitting requirement information of the target user.
In the embodiment of the present invention, optionally, determining the target fitting information of the target user according to the multi-focus fitting information of the target user and the fitting requirement information of the target user may include:
extracting first key information from the multi-focus fitting information of the target user, extracting second key information from the fitting requirement information of the target user, and determining the target fitting information of the target user according to the first key information and the second key information.
207. A target fitting soft lens corresponding to the target user is generated according to the target fitting information.
In the embodiment of the present invention, optionally, generating the target fitting soft lens corresponding to the target user according to the target fitting information may include:
inputting the target fitting information into a predetermined soft lens fitting model to obtain a fitting output result, and generating the target fitting soft lens corresponding to the target user according to the fitting output result;
wherein the soft lens fitting parameters include one or more of an ocular axis length parameter, a corneal curvature parameter, a vitreous cavity volume parameter, and a pupil size parameter.
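The following sketch only illustrates how the listed fitting parameters could be packaged and passed to a predetermined soft lens fitting model; the SoftLensFittingInput container and the model's predict interface are assumptions for the example, not part of the disclosed method.

```python
from dataclasses import dataclass

@dataclass
class SoftLensFittingInput:
    """Illustrative container for the fitting parameters listed above."""
    ocular_axis_length_mm: float
    corneal_curvature_mm: float
    vitreous_cavity_volume_mm3: float
    pupil_size_mm: float

def generate_target_fitting_soft_lens(fitting_info: SoftLensFittingInput, soft_lens_fitting_model):
    """Feed the target fitting information into a (predetermined) fitting model and
    return its output, e.g. base-curve / thickness recommendations for the soft lens."""
    return soft_lens_fitting_model.predict(fitting_info)
```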
Therefore, implementing the OCT-based defocus amount intelligent recognition method described in fig. 2 can generate multi-focus fitting information of the target user according to the target defocus amount, acquire the fitting requirement information of the target user, determine the target fitting information of the target user by combining the fitting requirement information with the multi-focus fitting information, and generate a target fitting soft lens for the target user according to the target fitting information. A corresponding eye treatment operation can also be performed on the target user in combination with the target defocus amount; for example, the result can be used for myopia correction training in which the focus is placed in front of the retina to form peripheral myopic defocus, thereby delaying eye axis growth, so that myopia correction assistance or myopia prevention and control can be achieved for the target user. This is beneficial to improving the accuracy and intelligence of the generated target fitting soft lens, the degree of matching between the generated soft lens and the target user, and the accuracy and convenience of vision correction for the target user.
Example III
Referring to fig. 3, fig. 3 is a schematic structural diagram of an intelligent identification device for defocus amount based on OCT according to an embodiment of the present invention. As shown in fig. 3, the OCT-based defocus amount intelligent recognition apparatus may include:
an acquisition module 301 for acquiring a retinal signal of a target user through the OCT system;
a determination module 302 for determining an inspection target from the retinal signal;
a control module 303 for controlling the target motor to perform a moving operation; the target motor is a motor corresponding to the positions of the at least two focusing lenses;
the determining module 302 is further configured to determine a position change parameter of the target motor and determine a signal change parameter of a signal value corresponding to the inspection target during the moving operation performed by the target motor; and determining the target defocus amount according to the position change parameter and the signal change parameter.
It can be seen that the apparatus described in fig. 3 can acquire the retinal signal of the target user through the OCT system and determine the inspection target, control the target motor to perform the moving operation, determine the position change parameter of the target motor and the signal change parameter during the moving operation, and determine the target defocus amount according to the position change parameter and the signal change parameter, so that the defocus amount can be identified intelligently, the focal plane of the target imaging can be measured and the defocus plane can be identified, which is beneficial to improving the accuracy, reliability, and efficiency of defocus amount acquisition.
In an alternative embodiment, as shown in FIG. 3, the specific manner in which the determination module 302 determines the inspection target from the retinal signal includes:
generating a retina structure of the target user according to the retina signal, wherein the retina structure comprises at least one RPE layer;
determining a reflection value of each RPE layer of the target user according to the retina structure of the target user;
determining a target reflection value from all the reflection values according to the reflection values of all the RPE layers, and determining the RPE layer corresponding to the target reflection value as the inspection target;
and, the specific manner in which the determining module 302 determines the reflection value of each RPE layer of the target user according to the retina structure of the target user includes:
according to the retina structure of the target user, calculating a target spectrogram corresponding to the retina structure through a preset transformation calculation algorithm;
and determining the reflection value of each RPE layer of the target user according to the target spectrogram.
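Assuming a spectral-domain OCT system in which the "preset transformation calculation algorithm" is a Fourier transform of the spectral interferogram, the reflection value of each candidate RPE layer can be sketched as below. The use of a plain FFT and the depth ranges of the candidate layers are assumptions for this example; the embodiment itself does not fix the transformation or the layer segmentation.

```python
import numpy as np

def a_scan_profile(spectral_interferogram: np.ndarray) -> np.ndarray:
    """Depth reflectivity profile of one A-scan: magnitude of the FFT of the
    spectral interferogram (one possible reading of the transformation algorithm)."""
    return np.abs(np.fft.fft(spectral_interferogram))[: spectral_interferogram.size // 2]

def rpe_reflection_values(profile: np.ndarray, rpe_layer_ranges: list[tuple[int, int]]) -> list[float]:
    """Mean reflectivity inside each candidate RPE-layer depth range."""
    return [float(profile[lo:hi].mean()) for lo, hi in rpe_layer_ranges]

def select_inspection_target(profile: np.ndarray, rpe_layer_ranges: list[tuple[int, int]]) -> int:
    """Index of the RPE layer with the strongest reflection, taken as the inspection target."""
    return int(np.argmax(rpe_reflection_values(profile, rpe_layer_ranges)))
```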
It can be seen that implementing the device described in fig. 3 can generate the retinal structure of the target user according to the retinal signal, determine the reflection value of each RPE layer of the target user according to that retinal structure, determine a target reflection value from all the reflection values, and determine the RPE layer corresponding to the target reflection value as the inspection target. By using the reflection characteristics of the retina, the RPE layer with the strongest reflection is selected as the inspection target, so the inspection target is determined by combining the reflection values of all RPE layers, which is beneficial to improving the accuracy and reliability of the inspection target and, consequently, the accuracy, reliability, intelligence, and efficiency of determining the target defocus amount from the signal change parameters and position change parameters of the inspection target. In addition, calculating the target spectrogram corresponding to the retinal structure of the target user through the preset transformation calculation algorithm and determining the reflection value of each RPE layer from the target spectrogram is beneficial to improving the accuracy and reliability of the obtained reflection values.
In yet another alternative embodiment, as shown in fig. 3, the determining module 302 determines a position change parameter of the target motor during the moving operation performed by the target motor, and the specific manner of determining the signal change parameter of the signal value corresponding to the inspection target includes:
determining a movement change parameter of the target motor in the process of executing the movement operation by the target motor, and generating a position change parameter of the target motor according to all the movement change parameters, wherein the movement change parameter comprises one or more of a direction change parameter, an angle change parameter, a distance change parameter and a displacement change parameter;
determining the signal variation of the signal value corresponding to the inspection target, generating the refraction variation of the inspection target according to the signal variation, and determining the signal variation parameter of the signal value corresponding to the inspection target according to the refraction variation.
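A minimal sketch of the signal change parameter described above is given below, assuming a linear calibration between the signal variation of the inspection target and the resulting refraction variation; the calibration factor and the SignalChangeParameter container are hypothetical and are not specified by the text.

```python
from dataclasses import dataclass

@dataclass
class SignalChangeParameter:
    signal_variation: float      # change of the inspection target's signal value
    refraction_variation: float  # refraction change derived from the signal variation

def signal_change_parameter(signal_before: float, signal_after: float,
                            refraction_per_signal_unit: float) -> SignalChangeParameter:
    """Assumed linear mapping from signal variation to refraction variation."""
    delta = signal_after - signal_before
    return SignalChangeParameter(delta, delta * refraction_per_signal_unit)
```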
Therefore, the device described in fig. 3 can determine the movement change parameter of the target motor and generate the position change parameter of the target motor in the process of executing the movement operation by the target motor, determine the signal change amount of the signal value corresponding to the inspection target and generate the refraction change amount of the inspection target, determine the signal change parameter of the signal value corresponding to the inspection target according to the refraction change amount, respectively determine the position change parameter of the target motor and determine the signal change parameter corresponding to the inspection target, and realize the targeted determination of the position change parameter of the target motor and the signal change parameter of the inspection target, thereby being beneficial to improving the accuracy and reliability of the target defocus amount generated by combining the position change parameter and the signal change parameter together, and further being beneficial to improving the intelligence and efficiency of generating the target defocus amount.
In yet another alternative embodiment, as shown in fig. 3, the determining module 302 determines the target defocus amount according to the position change parameter and the signal change parameter in a specific manner including:
generating a change relation parameter according to the position change parameter and the signal change parameter, wherein the change relation parameter comprises a relation between the position change of the target motor and the signal value change of the inspection target;
determining a target defocus amount of a target user according to the change relation parameters; wherein the target defocus amount comprises a defocus value;
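One way to realize the change relation parameter is sketched below, under the assumption that the signal value of the inspection target varies roughly quadratically with motor position near focus: the fitted coefficients serve as the relation parameter, and the peak of the fit gives the best-focus position from which a defocus value can be derived. The quadratic model and the diopters-per-millimetre factor are assumptions, not requirements of this embodiment.

```python
import numpy as np

def change_relation_parameter(positions_mm: np.ndarray, signal_values: np.ndarray) -> np.ndarray:
    """Quadratic fit of signal value versus motor position; the coefficients act
    as the 'change relation parameter' in this sketch."""
    return np.polyfit(positions_mm, signal_values, deg=2)

def defocus_from_relation(coeffs: np.ndarray, reference_position_mm: float,
                          diopters_per_mm: float) -> float:
    """Peak of the fitted quadratic = best-focus position; its offset from the
    reference position, scaled by a calibration factor, gives the defocus value.
    Assumes the leading coefficient is negative (a concave peak near focus)."""
    a, b, _ = coeffs
    best_position = -b / (2 * a)
    return (best_position - reference_position_mm) * diopters_per_mm
```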
the specific ways of generating the retinal structure of the target user by the determination module 302 from the retinal signals include:
scanning retina signals to obtain retina image information corresponding to a target user;
and generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, and generating a retina structure of the target user based on the retina structure tomogram.
It can be seen that the device described in fig. 3 can generate a change relation parameter according to the position change parameter and the signal change parameter and determine the target defocus amount of the target user according to the change relation parameter. Because the change relation parameter captures the relation between the position change of the target motor and the signal value change of the inspection target, it is determined comprehensively from multiple aspects of data, which is beneficial to improving the accuracy, reliability, and intelligence of the obtained change relation parameter, so that the defocus amount can be identified intelligently with improved accuracy, reliability, and efficiency. The device can also scan the retinal signal to obtain the retinal image information corresponding to the target user and generate the retinal structure of the target user by combining the predetermined image processing algorithm with that retinal image information, which is beneficial to improving the accuracy and reliability of the generated retinal structure and of the subsequent processing performed on the basis of it.
In yet another alternative embodiment, as shown in fig. 3, the determining module 302 generates the retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, and the specific manner includes:
determining a confidence parameter of each piece of retina image information according to each piece of retina image information corresponding to the target user;
according to the confidence parameters of each piece of retina image information, determining all confidence parameters meeting preset confidence conditions as target confidence parameters, and determining the retina image information corresponding to all target confidence parameters as target retina image information;
generating a retina structure tomogram of the target user based on all the determined target retina image information and a predetermined image processing algorithm;
wherein the retina structure tomogram comprises one or more of a retina overall outline map and a retina boundary layer map of the target user.
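The confidence-based selection and the assembly of the retina structure tomogram might look like the sketch below, where the confidence parameter is approximated by a simple contrast measure and the "predetermined image processing algorithm" is reduced to frame averaging; both choices, the threshold value, and the requirement that all frames share one shape are placeholders and are not specified by this embodiment.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # hypothetical "preset confidence condition"

def frame_confidence(b_scan: np.ndarray) -> float:
    """Placeholder confidence parameter: normalised contrast of one B-scan frame."""
    return float(b_scan.std() / (b_scan.mean() + 1e-12))

def build_structure_tomogram(b_scans: list[np.ndarray]) -> np.ndarray:
    """Keep frames whose confidence satisfies the preset condition (target retinal
    image information) and average them into a single retina structure tomogram."""
    kept = [frame for frame in b_scans if frame_confidence(frame) >= CONFIDENCE_THRESHOLD]
    if not kept:
        raise ValueError("no retinal image information met the confidence condition")
    return np.mean(np.stack(kept), axis=0)  # frames assumed to have identical shape
```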
It can be seen that the device described in fig. 3 is implemented to determine the confidence parameter of each retinal image information according to each retinal image information corresponding to the target user, determine all confidence parameters meeting the preset confidence conditions as target confidence parameters and determine target retinal image information, generate a retinal structure tomogram of the target user based on all target retinal image information and an image processing algorithm, and determine the target retinal image information in combination with the confidence parameters in each retinal image information, so as to be beneficial to improving the accuracy and reliability of the determined target retinal image information, thereby being beneficial to improving the accuracy and reliability of the subsequent retinal structure tomogram generated based on the target retinal image information and the image processing algorithm, and further being beneficial to improving the accuracy and reliability of the retinal structure tomogram of the generated target user.
In yet another alternative embodiment, as shown in fig. 4, the apparatus further comprises:
a processing module 304, configured to perform a preset signal processing operation on each retinal signal based on all the acquired retinal signals of the target user after the acquisition module 301 acquires the retinal signals of the target user through the OCT system, and before the determination module 302 determines the inspection target according to the retinal signals, to obtain a signal processing result of each retinal signal;
a judging module 305, configured to judge, for each retinal signal, whether the retinal signal meets a preset signal recognition condition according to a signal processing result of the retinal signal;
the determining module 302 is further configured to determine, for each retinal signal, the retinal signal as a target retinal signal when it is determined that the retinal signal meets a preset signal identification condition;
the specific manner of determining the test target by the determining module 302 according to the retina signal includes:
from all target retinal signals, a test target is determined.
As can be seen, implementing the apparatus described in fig. 4 can perform a signal processing operation on each retinal signal based on all acquired retinal signals of the target user to obtain a signal processing result of each retinal signal, and determine whether each retinal signal meets a preset signal recognition condition, if so, determine the retinal signal as a target retinal signal, determine an inspection target according to all target retinal signals, process the retinal signal and determine whether the signal recognition condition is met, obtain a target retinal signal with higher signal purity, and improve the accuracy and reliability of each obtained target retinal signal, thereby being beneficial to improving the accuracy and reliability of determining the inspection target based on all target retinal signals, and being beneficial to improving the accuracy and reliability of jointly determining the target defocus amount based on the signal change parameters and the position change parameters of the inspection target, and being beneficial to improving the intelligence of subsequently determining the target defocus amount.
In yet another alternative embodiment, as shown in fig. 4, the apparatus further comprises:
the generating module 306 is configured to generate multi-focus fitting information of the target user according to the target defocus amount, where the multi-focus fitting information includes one or more of multi-focus soft lens curvature information and multi-focus soft lens thickness information;
the acquiring module 301 is further configured to acquire fitting requirement information of the target user; the fitting requirement information comprises one or more of eye treatment requirement information, eye correction requirement information, naked-eye fitting requirement information and lens-wearing fitting requirement information of the target user;
the determining module 302 is further configured to determine target fitting information of the target user according to the multi-focus fitting information of the target user and the fitting requirement information of the target user;
the generating module 306 is further configured to generate a target fitting soft lens corresponding to the target user according to the target fitting information.
As can be seen, implementing the apparatus described in fig. 4 can generate multi-focus fitting information of the target user according to the target defocus amount, acquire the fitting requirement information of the target user, determine the target fitting information of the target user by combining the fitting requirement information with the multi-focus fitting information, and generate a target fitting soft lens for the target user according to the target fitting information. A corresponding eye treatment operation can also be performed on the target user in combination with the target defocus amount, for example myopia correction training in which the focus is placed in front of the retina to form peripheral myopic defocus and thereby delay eye axis growth, so that myopia correction assistance or myopia prevention and control can be achieved. This is beneficial to improving the accuracy and intelligence of the generated target fitting soft lens, the degree of matching between the generated soft lens and the target user, and the accuracy and convenience of vision correction for the target user.
Example IV
Referring to fig. 5, fig. 5 is a schematic structural diagram of another intelligent identification device for defocus amount based on OCT according to an embodiment of the present invention. As shown in fig. 5, the OCT-based defocus amount intelligent recognition apparatus may include:
a memory 401 storing executable program codes;
a processor 402 coupled with the memory 401;
the processor 402 invokes executable program codes stored in the memory 401 to execute the steps in the OCT-based defocus amount intelligent recognition method described in the first or second embodiment of the present invention.
Example five
The embodiment of the invention discloses a computer storage medium which stores computer instructions for executing the steps in the OCT-based defocus amount intelligent recognition method described in the first or second embodiment of the invention when the computer instructions are called.
Example six
An embodiment of the present invention discloses a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, wherein the computer program is operable to cause a computer to perform the steps in the OCT-based defocus amount intelligent recognition method described in the first or second embodiment.
The apparatus embodiments described above are merely illustrative, wherein the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially or in part in the form of a software product that may be stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disc memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the OCT-based defocus amount intelligent identification method and device disclosed in the embodiments of the present invention are only used to illustrate the technical solutions of the present invention and are not intended to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the various embodiments can still be modified, or some of their technical features can be replaced equivalently, and such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (7)

1. An OCT-based defocus amount intelligent identification method is characterized by comprising the following steps:
obtaining a retina signal of a target user through an OCT system, and determining an inspection target according to the retina signal;
controlling a target motor to execute a moving operation, and determining a position change parameter of the target motor and a signal change parameter of a signal value corresponding to the inspection target in the process of executing the moving operation by the target motor; the target motor is a motor corresponding to at least two focusing lenses;
Determining a target defocus amount according to the position change parameter and the signal change parameter;
said determining an inspection target from said retinal signal comprising:
generating a retinal structure of the target user from the retinal signal, wherein the retinal structure includes at least one RPE layer;
determining a reflection value of each RPE layer of the target user according to the retina structure of the target user;
determining a target reflection value from all the reflection values according to the reflection values of all the RPE layers, and determining the RPE layer corresponding to the target reflection value as the inspection target;
wherein the determining the target defocus amount according to the position change parameter and the signal change parameter includes:
generating a change relation parameter according to the position change parameter and the signal change parameter, wherein the change relation parameter comprises a relation between the position change of the target motor and the signal value change of the inspection target;
determining the target defocus amount of the target user according to the change relation parameters; wherein the target defocus amount comprises a defocus value;
and generating a retinal structure of the target user from the retinal signal, comprising:
Scanning the retina signal to obtain retina image information corresponding to the target user;
generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, and generating a retina structure of the target user based on the retina structure tomogram;
the generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user comprises the following steps:
determining a confidence parameter of each piece of retina image information according to each piece of retina image information corresponding to the target user;
according to the confidence parameters of each piece of retina image information, determining all confidence parameters meeting preset confidence conditions as target confidence parameters, and determining the retina image information corresponding to all the target confidence parameters as target retina image information;
generating a retina structure tomogram of the target user based on all the determined target retina image information and a predetermined image processing algorithm;
wherein the retina structure tomogram comprises one or more of a retina overall outline image and a retina boundary layer image of the target user;
After the obtaining of the retinal signal of the target user by the OCT system and before the determining of the inspection target from the retinal signal, the method further includes:
based on all the obtained retina signals of the target user, performing preset signal processing operation on each retina signal to obtain a signal processing result of each retina signal;
for each retina signal, judging whether the retina signal meets a preset signal identification condition according to a signal processing result of the retina signal;
for each of the retinal signals, determining the retinal signal as a target retinal signal when it is determined that the retinal signal satisfies a preset signal recognition condition;
wherein said determining an inspection target from said retinal signal comprises:
determining the inspection target based on all of the target retinal signals.
2. The OCT-based defocus amount intelligent recognition method according to claim 1, wherein the determining a reflection value of each of the RPE layers of the target user according to the retinal structure of the target user comprises:
according to the retina structure of the target user, calculating a target spectrogram corresponding to the retina structure of the target user through a preset transformation calculation algorithm;
And determining the reflection value of each RPE layer of the target user according to the target spectrogram.
3. The OCT-based defocus amount intelligent recognition method according to claim 1 or 2, wherein the determining a position change parameter of the target motor and determining a signal change parameter of a signal value corresponding to the inspection target during the moving operation performed by the target motor includes:
determining a movement change parameter of the target motor in the process of executing the movement operation by the target motor, and generating a position change parameter of the target motor according to all the movement change parameters, wherein the movement change parameter comprises one or more of a direction change parameter, an angle change parameter, a distance change parameter and a displacement change parameter;
determining the signal variation of the signal value corresponding to the inspection target, generating the refraction variation of the inspection target according to the signal variation, and determining the signal variation parameter of the signal value corresponding to the inspection target according to the refraction variation.
4. The OCT-based defocus amount intelligent recognition method according to claim 1, further comprising:
Generating multi-focus fitting information of the target user according to the target defocus amount, wherein the multi-focus fitting information comprises one or more of multi-focus soft lens curvature information and multi-focus soft lens thickness information;
acquiring fitting requirement information of the target user; the fitting requirement information comprises one or more of eye treatment requirement information, eye correction requirement information, naked-eye fitting requirement information and lens-wearing fitting requirement information of the target user;
determining target fitting information of the target user according to the multi-focus fitting information of the target user and the fitting requirement information of the target user;
and generating a target fitting soft lens corresponding to the target user according to the target fitting information.
5. OCT-based defocus amount intelligent recognition device is characterized in that the device comprises:
an acquisition module for acquiring retinal signals of the target user through the OCT system;
a determination module for determining an inspection target from the retinal signal;
the control module is used for controlling the target motor to execute a moving operation; the target motor is a motor corresponding to at least two focusing lenses;
The determining module is further configured to determine a position change parameter of the target motor and determine a signal change parameter of a signal value corresponding to the inspection target during the process of executing the moving operation by the target motor; determining a target defocus amount according to the position change parameter and the signal change parameter;
the specific mode of determining the inspection target according to the retina signal by the determining module comprises the following steps:
generating a retinal structure of the target user from the retinal signal, wherein the retinal structure includes at least one RPE layer;
determining a reflection value of each RPE layer of the target user according to the retina structure of the target user;
determining a target reflection value from all the reflection values according to the reflection values of all the RPE layers, and determining the RPE layer corresponding to the target reflection value as the inspection target;
the specific mode of determining the target defocus amount by the determining module according to the position change parameter and the signal change parameter comprises the following steps:
generating a change relation parameter according to the position change parameter and the signal change parameter, wherein the change relation parameter comprises a relation between the position change of the target motor and the signal value change of the inspection target;
Determining the target defocus amount of the target user according to the change relation parameters; wherein the target defocus amount comprises a defocus value;
and the specific mode of generating the retina structure of the target user according to the retina signal by the determining module comprises the following steps:
scanning the retina signal to obtain retina image information corresponding to the target user;
generating a retina structure tomogram of the target user according to a predetermined image processing algorithm and retina image information corresponding to the target user, and generating a retina structure of the target user based on the retina structure tomogram;
the specific mode of generating the retina structure tomogram of the target user according to the predetermined image processing algorithm and retina image information corresponding to the target user by the determining module comprises the following steps:
determining a confidence parameter of each piece of retina image information according to each piece of retina image information corresponding to the target user;
according to the confidence parameters of each piece of retina image information, determining all confidence parameters meeting preset confidence conditions as target confidence parameters, and determining the retina image information corresponding to all the target confidence parameters as target retina image information;
Generating a retina structure tomogram of the target user based on all the determined target retina image information and a predetermined image processing algorithm;
wherein the retina structure tomogram comprises one or more of a retina overall outline image and a retina boundary layer image of the target user;
the processing module is used for executing preset signal processing operation on each retina signal based on all the obtained retina signals of the target user before the determining module determines the inspection target according to the retina signals after the obtaining module obtains the retina signals of the target user through the OCT system, so as to obtain a signal processing result of each retina signal;
the judging module is used for judging whether the retina signal meets the preset signal identification condition or not according to the signal processing result of the retina signal for each retina signal;
the determining module is further configured to determine, for each of the retinal signals, the retinal signal as a target retinal signal when it is determined that the retinal signal meets a preset signal recognition condition;
Wherein, the specific mode of determining the inspection target according to the retina signal by the determining module comprises:
determining the inspection target based on all of the target retinal signals.
6. OCT-based defocus amount intelligent recognition device is characterized in that the device comprises:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the OCT-based defocus amount intelligent recognition method of any one of claims 1-4.
7. A computer storage medium storing computer instructions for performing the OCT-based defocus amount intelligent recognition method according to any one of claims 1 to 4 when called.