CN117314822A - Visual defect identification method and system - Google Patents

Visual defect identification method and system

Info

Publication number
CN117314822A
CN117314822A
Authority
CN
China
Prior art keywords
user
information
target object
object picture
eyeball
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311036527.5A
Other languages
Chinese (zh)
Inventor
蔡啸谷
底马可
阿金卡
崔煜
胡浩
普拉文
德尔文
尼莱什
Current Assignee
Beijing Nuotong Yimu Medical Technology Co ltd
Original Assignee
Beijing Nuotong Yimu Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Nuotong Yimu Medical Technology Co ltd
Priority to CN202311036527.5A
Publication of CN117314822A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The application relates to a visual defect identification method and system. The method comprises the following steps: acquiring eyeball data information of a user and visual perception information of the user, and identifying cornea information of the user based on the eyeball data information; acquiring a sight line variation track of the user through a gaze tracking algorithm based on the eyeball data information, and determining eyeball focusing information of the user based on the sight line variation track; identifying the visual perception spectrum of the user through a spectrum perception strategy based on the visual perception information, and identifying color vision defect information of the user based on the visual perception spectrum; calculating vision defect information of the user based on the cornea information and the eyeball focusing information, and determining visual defect information of the user based on the vision defect information and the color vision defect information. By adopting the method, the accuracy of identifying the visual defects of different users can be improved.

Description

Visual defect identification method and system
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a visual defect identification method and a visual defect identification system.
Background
In recent years, significant progress has been made in the study and treatment of vision-related diseases. Various techniques and devices have been developed to address vision disorders and enhance the visual acuity of patients with myopia, hyperopia, astigmatism, and other vision-related conditions. Conventional methods of improving vision include the use of corrective lenses, such as spectacles or contact lenses, to provide refractive correction that compensates for the focusing error of the eye. In addition, certain vision therapies include exercises and visual stimuli to improve the coordination and focusing capabilities of the eye. However, these methods take effect slowly and require long-term cooperation from the user, so improving the efficiency of vision correction is a focus of current research.
Current vision correction methods typically rely on multiple separate devices or specialized hardware for vision correction and eye tracking. Although this improves the user's vision correction efficiency, the visual defect identification methods applied in this way are universal identification methods that do not take the individual characteristics of the user into account, so the accuracy of identifying the visual defects of different users is low.
Disclosure of Invention
Based on the foregoing, it is necessary to provide a visual defect recognition method and system for solving the above-mentioned technical problems.
In a first aspect, the present application provides a visual defect recognition method. The method comprises the following steps:
acquiring eyeball data information of a user and visual perception information of the user, and identifying cornea information of the user based on the eyeball data information of the user;
based on the eyeball data information of the user, acquiring a sight line variation track of the user through a gaze tracking algorithm, and determining eyeball focusing information of the user based on the sight line variation track of the user;
identifying the visual perception spectrum of the user through a spectrum perception strategy based on the visual perception information of the user, and identifying the color vision defect information of the user based on the visual perception spectrum of the user;
calculating vision defect information of the user based on cornea information of the user and eyeball focusing information of the user, and determining vision defect information of the user based on the vision defect information of the user and color vision defect information of the user.
Optionally, the acquiring eyeball data information of the user and visual perception information of the user includes:
acquiring eyeball three-dimensional structure information of a user, and identifying eyeball sight information of the user based on the eyeball three-dimensional structure information of the user;
performing data processing on the eyeball three-dimensional structure information of the user and the eyeball sight information of the user to obtain the eyeball data information of the user;
and acquiring perception information of the user on each color in the spectrum and identification information of the user on each color in the spectrum, and performing data processing on the perception information and the identification information to obtain visual perception information of the user.
Optionally, the identifying the cornea information of the user based on the eyeball data information of the user includes:
based on the eyeball three-dimensional structure information of the user, analyzing the three-dimensional boundary range of each eyeball structure in the eyeball three-dimensional structure information through an edge recognition algorithm;
and screening the three-dimensional boundary range of the cornea structure in the three-dimensional boundary range of each eyeball structure, and taking the sub-three-dimensional structure information in the three-dimensional boundary range of the cornea structure as the cornea information of the user.
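The screening step above can be sketched as follows, assuming the eyeball structures have already been segmented into a labelled 3D point cloud; the label names, array layout, and the axis-aligned notion of a "boundary range" are illustrative assumptions, not the patent's actual edge-recognition algorithm:

```python
import numpy as np

def cornea_information(points, labels, cornea_label="cornea"):
    """Screen the 3D boundary range of the cornea structure from a
    labelled eyeball point cloud, and return the sub-structure points
    inside that range as the user's cornea information."""
    mask = np.array([lab == cornea_label for lab in labels])
    cornea_pts = points[mask]
    # Axis-aligned 3D boundary range of the cornea structure.
    lo, hi = cornea_pts.min(axis=0), cornea_pts.max(axis=0)
    return (lo, hi), cornea_pts
```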
Optionally, the collecting, based on the eyeball data information of the user, the gaze variation track of the user through a gaze tracking algorithm includes:
acquiring target variation video information composed of target object pictures played according to the playing sequence; the target object picture comprises position information of a target object in the target object picture;
Based on the target variation video information and the eyeball line-of-sight information of the user, acquiring line-of-sight focusing position information of each time point where the user stays in each target object picture of the target variation video information through a gaze tracking algorithm, and connecting the line-of-sight focusing position information according to the time sequence of the time points to obtain the line-of-sight variation track of the user.
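Connecting the line-of-sight focusing positions in time order can be sketched minimally as follows; the `(time_point, (x, y))` sample format is an assumption for illustration:

```python
def sight_line_track(fixations):
    """fixations: iterable of (time_point, (x, y)) gaze-focus samples
    collected by the gaze tracking algorithm. Sorting by time point and
    connecting the positions yields the line-of-sight variation track."""
    return [pos for _, pos in sorted(fixations, key=lambda s: s[0])]
```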
Optionally, the determining, based on the gaze movement track of the user, eyeball focusing information of the user includes:
dividing the sight line focusing position information of each time point where the user stays in each target object picture of the target variation video information according to each target object picture to obtain the sight line focusing position information of the user in different target object pictures;
for each target object picture, screening sub-sight line variation tracks of the user in the target object picture based on a time point corresponding to each sight line focusing position information of the user in the target object picture and the sight line variation track of the user, and calculating each sight line deviation variation information of the user in the target object picture based on each sight line focusing position information of the user in the target object picture and the position information of the target object picture;
And determining sub-eyeball focusing information of the user in the target object picture according to the sub-sight-line variation track of the user in the target object picture and the sight line deviation variation information of the user in the target object picture, and determining the eyeball focusing information of the user based on the sub-eyeball focusing information of the user in all the target object pictures.
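The per-time-point line-of-sight deviation used above can be illustrated under the assumption that deviation is measured as the Euclidean distance between each gaze-focus position and the target object's position in the picture (the patent does not fix a specific distance measure):

```python
import math

def sight_line_deviations(focus_positions, target_position):
    """Line-of-sight deviation per time point: assumed here to be the
    Euclidean distance between each gaze-focus position and the target
    object's position in the target object picture."""
    return [math.dist(p, target_position) for p in focus_positions]
```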
Optionally, the identifying, based on the visual perception information of the user, the visual perception spectrum of the user through a spectrum perception policy includes:
analyzing the perception degree information of the user on each color based on the perception information of the user on each color, and determining the recognition accuracy of the user on each color based on the recognition information of the user on each color;
and calculating the sensitivity of the user to the color according to the perception degree information of the user to the color and the recognition accuracy of the user to the color by a color sensitivity algorithm for each color, and distributing and arranging the sensitivity of the user to the colors according to the sequence of the colors in the spectrum to obtain the visual perception spectrum of the user.
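A minimal sketch of the spectrum arrangement above, assuming purely for illustration that the color sensitivity algorithm combines perception degree and recognition accuracy as a simple product (the patent does not specify the exact form):

```python
# Illustrative spectrum ordering; the actual color set is unspecified.
SPECTRUM_ORDER = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

def visual_perception_spectrum(perception_degree, recognition_accuracy):
    """Assumed color-sensitivity form: the product of the user's
    perception degree and recognition accuracy for each color,
    arranged in the order the colors appear in the spectrum."""
    return [(c, perception_degree[c] * recognition_accuracy[c])
            for c in SPECTRUM_ORDER]
```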
Optionally, the calculating the vision defect information of the user based on the cornea information of the user and the eyeball focusing information of the user includes:
for each target object picture, identifying a target time point corresponding to line-of-sight deviation variation information that is lower than a deviation variation threshold value among the line-of-sight deviation variation information of the user in the target object picture;
calculating the accurate focusing time of the user on the target object picture based on the time point corresponding to each line of sight deviation variation information of the user in the target object picture and the target time point of the user in the target object picture;
in the sub-sight line variation tracks of the user in the target object picture, screening target sight line variation tracks of the user corresponding to all time points before the target time point in each time point of the target object picture according to the time sequence of the user corresponding to each sight line deviation variation information in the target object picture, and identifying the sight line focusing range of the user on the target object picture based on the target sight line variation tracks;
and analyzing the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user and the sight focusing range of the user on each target object picture.
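The threshold-and-timing logic above can be sketched as follows, assuming deviation samples arrive as `(time_point, deviation)` pairs per target object picture:

```python
def accurate_focusing_time(deviation_series, threshold):
    """deviation_series: list of (time_point, deviation) for one target
    object picture. The target time point is the first at which the
    deviation falls below the threshold; the accurate focusing time is
    how long the user took to reach it from the picture's first sample."""
    series = sorted(deviation_series)
    for t, dev in series:
        if dev < threshold:
            return t, t - series[0][0]
    return None, None  # the user never focused accurately on the target
```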
Optionally, the analyzing the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user, and the sight focusing range of the user on each target object picture includes:
acquiring the identification difficulty of each target object picture, and carrying out normalization processing on the identification difficulty of each target object picture to obtain a weight value of each target object picture;
for each target object picture, analyzing the vision defect range of the user on the target object of the target object picture and the zoom defect information of the user on the target object of the target object picture through a vision defect judging strategy based on cornea information of the user, accurate focusing time of the user on the target object picture and the sight focusing range of the user on the target object picture;
weighting the vision defect range of the user in each target object picture based on the weight value of each target object picture to obtain the comprehensive vision defect range of the user, and weighting the zoom defect information of the target object of each target object picture based on the weight value of each target object picture to obtain the comprehensive zoom defect information of the user;
And taking the comprehensive vision defect range of the user and the comprehensive zoom defect information of the user as the vision defect information of the user.
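The normalization and weighting steps above can be sketched as follows, assuming a simple sum-normalization of the recognition difficulties (the patent does not name a specific normalization):

```python
def difficulty_weights(difficulties):
    """Normalize the recognition difficulty of each target object
    picture into weight values that sum to 1."""
    total = sum(difficulties)
    return [d / total for d in difficulties]

def comprehensive_defect(per_picture_values, weights):
    """Weight each picture's defect measurement by its weight value and
    combine them into the user's comprehensive value."""
    return sum(v * w for v, w in zip(per_picture_values, weights))
```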
Optionally, the identifying color vision defect information of the user based on the visual perception spectrum of the user includes:
based on the visual perception spectrum of the user, identifying the color ranges corresponding to the user in which color perception is normal and the color ranges in which color perception is abnormal, and screening the color ranges meeting the defect abnormality condition from the abnormal color ranges as the color vision defect information of the user.
In a second aspect, the present application provides a visual defect recognition system. The system comprises:
the device comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring eyeball data information of a user and visual perception information of the user and identifying cornea information of the user based on the eyeball data information of the user;
the analysis module is used for acquiring the sight line variation track of the user through a gaze tracking algorithm based on the eyeball data information of the user, and determining eyeball focusing information of the user based on the sight line variation track of the user;
The identification module is used for identifying the visual perception spectrum of the user through a spectrum perception strategy based on the visual perception information of the user, and identifying the color vision defect information of the user based on the visual perception spectrum of the user;
the determining module is used for calculating vision defect information of the user based on cornea information of the user and eyeball focusing information of the user, and determining vision defect information of the user based on the vision defect information of the user and color vision defect information of the user.
Optionally, the acquiring module is specifically configured to:
acquiring eyeball three-dimensional structure information of a user, and identifying eyeball sight information of the user based on the eyeball three-dimensional structure information of the user;
performing data processing on the eyeball three-dimensional structure information of the user and the eyeball sight information of the user to obtain the eyeball data information of the user;
and acquiring perception information of the user on each color in the spectrum and identification information of the user on each color in the spectrum, and performing data processing on the perception information and the identification information to obtain visual perception information of the user.
Optionally, the acquiring module is specifically configured to:
based on the eyeball three-dimensional structure information of the user, analyzing the three-dimensional boundary range of each eyeball structure in the eyeball three-dimensional structure information through an edge recognition algorithm;
and screening the three-dimensional boundary range of the cornea structure in the three-dimensional boundary range of each eyeball structure, and taking the sub-three-dimensional structure information in the three-dimensional boundary range of the cornea structure as the cornea information of the user.
Optionally, the analysis module is specifically configured to:
acquiring target variation video information composed of target object pictures played according to the playing sequence; the target object picture comprises position information of a target object in the target object picture;
based on the target variation video information and the eyeball line-of-sight information of the user, acquiring line-of-sight focusing position information of each time point where the user stays in each target object picture of the target variation video information through a gaze tracking algorithm, and connecting the line-of-sight focusing position information according to the time sequence of the time points to obtain the line-of-sight variation track of the user.
Optionally, the analysis module is specifically configured to:
Dividing the sight line focusing position information of each time point where the user stays in each target object picture of the target variation video information according to each target object picture to obtain the sight line focusing position information of the user in different target object pictures;
for each target object picture, screening sub-sight line variation tracks of the user in the target object picture based on a time point corresponding to each sight line focusing position information of the user in the target object picture and the sight line variation track of the user, and calculating each sight line deviation variation information of the user in the target object picture based on each sight line focusing position information of the user in the target object picture and the position information of the target object picture;
and determining sub-eyeball focusing information of the user in the target object picture according to the sub-sight-line variation track of the user in the target object picture and the sight line deviation variation information of the user in the target object picture, and determining the eyeball focusing information of the user based on the sub-eyeball focusing information of the user in all the target object pictures.
Optionally, the identification module is specifically configured to:
analyzing the perception degree information of the user on each color based on the perception information of the user on each color, and determining the recognition accuracy of the user on each color based on the recognition information of the user on each color;
and calculating the sensitivity of the user to the color according to the perception degree information of the user to the color and the recognition accuracy of the user to the color by a color sensitivity algorithm for each color, and distributing and arranging the sensitivity of the user to the colors according to the sequence of the colors in the spectrum to obtain the visual perception spectrum of the user.
Optionally, the identification module is specifically configured to:
for each target object picture, identifying a target time point corresponding to line-of-sight deviation variation information that is lower than a deviation variation threshold value among the line-of-sight deviation variation information of the user in the target object picture;
calculating the accurate focusing time of the user on the target object picture based on the time point corresponding to each line of sight deviation variation information of the user in the target object picture and the target time point of the user in the target object picture;
In the sub-sight line variation tracks of the user in the target object picture, screening target sight line variation tracks of the user corresponding to all time points before the target time point in each time point of the target object picture according to the time sequence of the user corresponding to each sight line deviation variation information in the target object picture, and identifying the sight line focusing range of the user on the target object picture based on the target sight line variation tracks;
and analyzing the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user and the sight focusing range of the user on each target object picture.
Optionally, the identification module is specifically configured to:
acquiring the identification difficulty of each target object picture, and carrying out normalization processing on the identification difficulty of each target object picture to obtain a weight value of each target object picture;
for each target object picture, analyzing the vision defect range of the user on the target object of the target object picture and the zoom defect information of the user on the target object of the target object picture through a vision defect judging strategy based on cornea information of the user, accurate focusing time of the user on the target object picture and the sight focusing range of the user on the target object picture;
Weighting the vision defect range of the user in each target object picture based on the weight value of each target object picture to obtain the comprehensive vision defect range of the user, and weighting the zoom defect information of the target object of each target object picture based on the weight value of each target object picture to obtain the comprehensive zoom defect information of the user;
and taking the comprehensive vision defect range of the user and the comprehensive zoom defect information of the user as the vision defect information of the user.
Optionally, the determining module is specifically configured to:
based on the visual perception spectrum of the user, identifying the color ranges corresponding to the user in which color perception is normal and the color ranges in which color perception is abnormal, and screening the color ranges meeting the defect abnormality condition from the abnormal color ranges as the color vision defect information of the user.
According to the visual defect identification method and system, eyeball data information of a user and visual perception information of the user are acquired, and cornea information of the user is identified based on the eyeball data information; a sight line variation track of the user is acquired through a gaze tracking algorithm based on the eyeball data information, and eyeball focusing information of the user is determined based on the sight line variation track; the visual perception spectrum of the user is identified through a spectrum perception strategy based on the visual perception information, and color vision defect information of the user is identified based on the visual perception spectrum; vision defect information of the user is calculated based on the cornea information and the eyeball focusing information, and visual defect information of the user is determined based on the vision defect information and the color vision defect information. The sight line variation track of the user and the corresponding eyeball focusing information are identified from the eyeball data information through the gaze tracking algorithm, so the vision defect information of the user can be analyzed; the color vision defect information of the user is then analyzed from the visual perception information through the spectrum perception strategy, so the visual defect information of the user is identified. This avoids the problem that a universal method cannot take the individual characteristics of the user into account, and improves the accuracy of identifying the visual defects of different users.
Drawings
FIG. 1 is a flow chart of a visual defect recognition method according to an embodiment;
FIG. 2 is a block diagram of a visual defect recognition system in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The visual defect identification method provided by the embodiments of the application can be applied to a terminal, a server, or a system comprising a terminal and a server, and is realized through interaction between the terminal and the server. The terminal may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, and the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers. The terminal identifies the sight line variation track of the user and the corresponding eyeball focusing information from the eyeball data information of the user through a gaze tracking algorithm, and thereby analyzes the vision defect information of the user; it then analyzes the color vision defect information of the user from the visual perception information of the user through a spectrum perception strategy, and thereby identifies the visual defect information of the user. This avoids the problem that a universal method cannot take the individual characteristics of the user into account, and improves the accuracy of identifying the visual defects of different users.
In one embodiment, as shown in FIG. 1, a visual defect recognition method is provided. The method is described here as applied to a terminal by way of illustration, and includes the following steps:
step S101, acquiring eyeball data information of a user and visual perception information of the user, and identifying cornea information of the user based on the eyeball data information of the user.
In this embodiment, when the terminal obtains authorization of the user, the three-dimensional scanning device scans three-dimensional eyeball structure information of both eyes of the user, identifies sight line information of the user based on the three-dimensional eyeball structure information of both eyes of the user, and uses the three-dimensional eyeball structure information of the user and the sight line information of the user as eyeball data information of the user. And then, the terminal responds to the perception information of the user on each color of the spectrum, which is uploaded by the user, and obtains the visual perception information of the user. The terminal uses three-dimensional structure information corresponding to the cornea structure in the eyeball three-dimensional structure information of the user as cornea information of the user.
Step S102, based on eyeball data information of a user, acquiring a sight line variation track of the user through a gaze tracking algorithm, and determining eyeball focusing information of the user based on the sight line variation track of the user.
In this embodiment, the terminal collects a sub-sight-line variation track of the user for each target object picture based on target variation video information composed of a plurality of target object pictures and a gaze tracking algorithm, and arranges the sub-sight-line variation tracks according to the playing order to obtain the sight line variation track of the user. The terminal then analyzes the eyeball focusing information of the user based on the sight line variation track, wherein the gaze tracking algorithm is the CamShift (Continuously Adaptive Mean Shift) visual tracking algorithm. The eyeball focusing information is used to represent the eyeball focal length variation of the user's eyeball when tracking the target object in each target object picture. The specific acquisition process and determination process will be described in detail later.
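CamShift builds on the mean-shift procedure: a search window is repeatedly re-centred on the centroid of the back-projected probability mass inside it, and CamShift additionally adapts the window size and orientation (omitted here). Below is a minimal NumPy sketch of that mean-shift core, purely to illustrate the tracking step; it is not the OpenCV implementation (`cv2.CamShift`) a terminal would likely use in practice:

```python
import numpy as np

def mean_shift_window(prob, window, max_iters=20):
    """Re-centre window = (x, y, w, h) on the centroid of the
    probability mass (back-projection) it covers, until it stops
    moving. This is the mean-shift core that CamShift extends."""
    x, y, w, h = window
    for _ in range(max_iters):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break  # no probability mass under the window
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        cx = x + float((xs * roi).sum()) / total  # centroid column
        cy = y + float((ys * roi).sum()) / total  # centroid row
        nx = max(int(cx - w / 2 + 0.5), 0)        # round half up
        ny = max(int(cy - h / 2 + 0.5), 0)
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return x, y, w, h
```

Given a back-projection with a bright blob, the window drifts toward and settles on the blob's centroid; CamShift repeats this per frame, yielding the per-time-point focus positions that form the sight line variation track.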
Step S103, based on the visual perception information of the user, identifying the visual perception spectrum of the user through a spectrum perception strategy, and based on the visual perception spectrum of the user, identifying the color vision defect information of the user.
In this embodiment, the terminal identifies the sensitivity of each color in the visual perception information of the user through the spectrum sensing policy, and sorts the sensitivity of each color according to the spectrum sorting, so as to obtain the visual perception spectrum of the user. And then the terminal identifies the defect information corresponding to the color with the abnormal identification in the colors based on the visual perception spectrum of the user to obtain the color vision defect information of the user. The specific identification process will be described in detail later.
Step S104, calculating vision defect information of the user based on cornea information of the user and eyeball focusing information of the user, and determining vision defect information of the user based on the vision defect information of the user and color vision defect information of the user.
In this embodiment, the terminal calculates the vision defect information of the user based on the cornea information of the user and the eyeball focusing information of the user. The vision defect information comprises a comprehensive vision defect range of the user and comprehensive zooming defect information of the user, wherein the comprehensive vision defect range is the range corresponding to the user's vision detection abnormal region within the vision detection range. The terminal then takes the vision defect information of the user together with the color vision defect information of the user as the visual defect information of the user. The specific calculation process and determination process will be described in detail later.
Based on the scheme, the sight line variation track of the user is identified by using the eyeball data information of the user and the gaze tracking algorithm, and the eyeball focusing information corresponding to that track is used to analyze the vision defect information of the user; the color vision defect information of the user is then analyzed through the visual perception information of the user and the spectrum perception strategy, so that the visual defect information of the user is identified. This avoids the problem that a one-size-fits-all method cannot take the individual characteristics of each user into account, and improves the accuracy of identifying the visual defects of different users.
Optionally, acquiring eyeball data information of the user and visual perception information of the user includes: acquiring eyeball three-dimensional structure information of the user, and identifying eyeball sight information of the user based on the eyeball three-dimensional structure information of the user; performing data processing on the eyeball three-dimensional structure information of the user and the eyeball sight information of the user to obtain eyeball data information of the user; and acquiring perception information of the user on each color in the spectrum and identification information of the user on each color in the spectrum, and performing data processing on the perception information and the identification information to obtain visual perception information of the user.
In this embodiment, the terminal collects the eyeball three-dimensional structure information of the user through a three-dimensional scanning device, and identifies the eyeball sight information of the user based on the eyeball three-dimensional structure information of the user. The eyeball sight information of the user is the vertical connecting line between the focusing point after the eyes of the user focus and the face orientation of the user. Then, the terminal performs data processing on the eyeball three-dimensional structure information of the user and the eyeball sight information of the user to obtain the eyeball data information of the user. The terminal also collects the perception information of the user on each color in the spectrum and the identification information of the user on each color in the spectrum, and performs data processing on both to obtain the visual perception information of the user.
Based on the scheme, the eyeball data information of the user and the visual perception information of the user are obtained by carrying out data processing on the eyeball three-dimensional structure information, the eyeball sight information, the identification information and the perception information, so that dimension differences among different data are eliminated, and the uniformity of each data information is improved.
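The description leaves the data processing step unspecified; one common way to eliminate dimension differences between heterogeneous fields, as the summary above intends, is min-max scaling. A minimal sketch under that assumption, with hypothetical field names and values:

```python
def min_max_scale(values):
    """Rescale one data field to [0, 1] so fields measured in
    different units become directly comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant field carries no spread
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical fields on different scales: corneal curvature radii
# in millimetres and gaze angles in degrees.
curvature_mm = [7.6, 7.8, 8.0]
gaze_deg = [-15.0, 0.0, 30.0]
print(min_max_scale(curvature_mm))  # [0.0, 0.5, 1.0] up to float rounding
print(min_max_scale(gaze_deg))
```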
Optionally, identifying cornea information of the user based on the eyeball data information of the user includes: based on the eyeball three-dimensional structure information of the user, analyzing the three-dimensional boundary range of each eyeball structure in the eyeball three-dimensional structure information through an edge recognition algorithm; and screening the three-dimensional boundary range of the cornea structure in the three-dimensional boundary range of each eyeball structure, and taking the sub-three-dimensional structure information in the three-dimensional boundary range of the cornea structure as cornea information of a user.
In this embodiment, the terminal analyzes the three-dimensional boundary range of each eyeball structure in the eyeball three-dimensional structure information based on the eyeball three-dimensional structure information of the user through an edge recognition algorithm. And then, the terminal screens the three-dimensional boundary range corresponding to the cornea structure of the user in the three-dimensional boundary range of each eyeball structure. And finally, the terminal takes the sub-three-dimensional structure information in the three-dimensional boundary range of the cornea structure as cornea information of the user. Wherein the cornea information is cornea information of the user's eye. The edge recognition algorithm is a wavelet edge detection algorithm corresponding to the wavelet-based image edge detection mode.
Based on the scheme, the cornea information of the user is determined by identifying the three-dimensional boundary range of each eyeball structure in the eyeball three-dimensional structure information of the user, so that the determination accuracy of the cornea information is improved.
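The wavelet edge detection algorithm mentioned above can be reduced, at one decomposition level, to Haar detail coefficients: differences of adjacent pixels whose magnitude peaks on structural boundaries. A simplified sketch on a hypothetical two-dimensional intensity slice (real eyeball scans would be three-dimensional):

```python
def haar_edge_map(img):
    """One-level Haar wavelet details: horizontal/vertical differences
    of adjacent pixels; large magnitude marks a structural boundary."""
    h, w = len(img), len(img[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = img[y][x + 1] - img[y][x]  # horizontal detail
            dy = img[y + 1][x] - img[y][x]  # vertical detail
            edges[y][x] = (dx * dx + dy * dy) ** 0.5
    return edges

# Hypothetical 6x6 slice: bright "cornea" region (value 9) on the
# right half, darker surrounding tissue (value 1) on the left.
img = [[1, 1, 1, 9, 9, 9] for _ in range(6)]
edges = haar_edge_map(img)
boundary_cols = {x for row in edges for x, v in enumerate(row) if v > 0}
print(boundary_cols)  # the edge sits between columns 2 and 3
```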
Optionally, based on eyeball data information of the user, collecting a sight line variation track of the user through a gaze tracking algorithm includes: acquiring target variation video information composed of target object pictures played according to the playing sequence, wherein each target object picture comprises the position information of the target object in that picture; and, based on the sample target variation video information and the eyeball sight information of the user, collecting through a gaze tracking algorithm the sight line focusing position information of each time point at which the user stays in each target object picture of the target variation video information, and connecting the sight line focusing position information according to the time sequence of the time points to obtain the sight line variation track of the user.
In this embodiment, the terminal acquires target variation video information composed of target object pictures played according to the playing order. The target object in each target object picture is different, and the target objects differ in size, range and color, so the recognition difficulty varies between pictures. Each target object picture comprises the position information of the target object in that picture. Then, based on the sample target variation video information and the eyeball sight information of the user, the terminal collects through the gaze tracking algorithm the sight line focusing position information of each time point at which the user stays in each target object picture of the target variation video information, and connects the sight line focusing position information according to the time sequence of the time points to obtain the sight line variation track of the user.
Based on the scheme, the sight line focusing position information of each time point at which the user stays in each target object picture of the target variation video information is identified through the gaze tracking algorithm, so that the accuracy of the determined sight line focusing position information is improved.
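The step of connecting the sight line focusing positions according to the time sequence of their time points can be sketched as follows; the sample timestamps and coordinates are hypothetical:

```python
def build_gaze_track(focus_points):
    """Sort (timestamp, x, y) gaze samples by time and connect
    consecutive fixations into a sight line variation track."""
    ordered = sorted(focus_points)           # by timestamp
    track = []
    for (t0, x0, y0), (t1, x1, y1) in zip(ordered, ordered[1:]):
        track.append(((x0, y0), (x1, y1)))   # one track segment
    return track

# Hypothetical fixation samples collected out of time order.
samples = [(0.2, 5, 5), (0.0, 1, 1), (0.1, 3, 2)]
print(build_gaze_track(samples))
# [((1, 1), (3, 2)), ((3, 2), (5, 5))]
```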
Optionally, determining the eyeball focusing information of the user based on the sight line variation track of the user includes: dividing the sight line focusing position information of each time point at which the user stays in each target object picture of the target variation video information according to the target object pictures, to obtain the sight line focusing position information of the user in the different target object pictures; for each target object picture, screening the sub-sight line variation track of the user in the target object picture based on the time points corresponding to the sight line focusing position information of the user in the target object picture and the sight line variation track of the user, and calculating the sight line deviation variation information of the user in the target object picture based on the sight line focusing position information of the user in the target object picture and the position information of the target object in the target object picture; and determining the sub-eyeball focusing information of the user in the target object picture according to the sub-sight line variation track of the user in the target object picture and the sight line deviation variation information of the user in the target object picture, and determining the eyeball focusing information of the user based on the sub-eyeball focusing information of the user in all the target object pictures.
In this embodiment, the terminal divides the line-of-sight focusing position information of each time point where the user stays in each target object picture of the target variable video information according to each target object picture, and obtains the line-of-sight focusing position information of the user in different target object pictures. And the focusing position information of each line of sight in different object pictures corresponds to the same two-dimensional coordinate system with the position information of the object in the object picture.
For each target object picture, the terminal screens sub-sight line change tracks of the user in the target object picture based on time points corresponding to the sight line focusing position information of the user in the target object picture and the sight line change tracks of the user. Then, the terminal calculates each line of sight deviation variation information of the user in the target object picture based on each line of sight focusing position information of the user in the target object picture and the position information of the target object in the target object picture. The position information of the target object is position information corresponding to the outline of the target object. The line-of-sight deviation variation information is linear distance information between line-of-sight focus position information of the user and position information of the target object in the target object picture.
And the terminal determines sub-eyeball focusing information of the user in the target object picture according to the sub-sight line variation track of the user in the target object picture and the sight line deviation variation information of the user in the target object picture. And finally, the terminal takes the sub-eyeball focusing information of the user in all the target object pictures as the eyeball focusing information of the user.
Based on the scheme, the vision line variation track is divided into the sub vision line variation tracks in each target object picture, so that the accuracy of the vision line variation track analysis of the user is improved, and the focusing information of the user in each target object picture is represented by calculating the vision line deviation variation information of the user in each target object picture, so that the recognition degree of the focusing information of the user is improved.
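The sight line deviation defined above, the straight-line distance between a gaze focusing position and the position information (contour) of the target object, can be sketched as a nearest-contour-point distance; the contour and gaze samples below are hypothetical:

```python
import math

def gaze_deviation(focus, contour):
    """Sight line deviation: straight-line distance from the gaze
    focusing position to the nearest point on the target's contour."""
    return min(math.dist(focus, p) for p in contour)

# Hypothetical square target contour and two gaze samples in the
# same 2D picture coordinate system.
contour = [(10, 10), (10, 20), (20, 20), (20, 10)]
print(gaze_deviation((10, 13), contour))  # 3.0, nearest corner (10, 10)
print(gaze_deviation((25, 20), contour))  # 5.0, nearest corner (20, 20)
```

A denser contour sampling would tighten the estimate; using only corner points keeps the sketch short.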
Optionally, based on the visual perception information of the user, identifying, by the spectrum sensing policy, a visual perception spectrum of the user includes: analyzing the perception degree information of the user on each color based on the perception information of the user on each color, and determining the recognition accuracy of the user on each color based on the recognition information of the user on each color; and calculating the sensitivity of the user to the color according to the perception degree information of the user to the color and the recognition accuracy of the user to the color aiming at each color through a color sensitivity algorithm, and distributing and arranging the sensitivity of the user to each color according to the sequence of each color in a spectrum to obtain the visual perception spectrum of the user.
In this embodiment, the terminal determines the recognition accuracy of the user for each color based on the identification information of the user for each color. The identification information is the color the user reports when identifying a presented color. For example, if the user identifies red as red, identifies green as cyan, and identifies purple as black, then the recognition accuracy of the user for red is high, the recognition accuracy for green is moderate, and the recognition accuracy for purple is low.
For each color, the terminal calculates the sensitivity of the user to the color through a color sensitivity algorithm based on the perception degree information of the user for the color and the recognition accuracy of the user for the color. The color sensitivity algorithm performs normalization processing on the recognition accuracy of the user for the color to obtain the recognition weight of the user for the color, and performs a weighted calculation on the perception degree information of the user for the color based on the recognition weight, to obtain the sensitivity of the user to the color. The terminal then arranges the sensitivities of the user to the colors according to the order of the colors in the spectrum, to obtain the visual perception spectrum of the user.
Based on the scheme, the sensitivity of the user to the colors is determined through the perception degree information of the user to the colors and the recognition accuracy of the user to the colors, so that the sensitivity analysis accuracy of the user to various colors is improved.
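A minimal sketch of the color sensitivity calculation described above — normalizing the recognition accuracies into weights, weighting the perception degrees, and laying the results out in spectral order — with hypothetical per-color scores on a 0-1 scale:

```python
def perception_spectrum(perception, accuracy, spectral_order):
    """Weight each colour's perception degree by its normalised
    recognition accuracy, then arrange sensitivities spectrally."""
    total = sum(accuracy.values())
    weights = {c: a / total for c, a in accuracy.items()}  # normalise
    sensitivity = {c: perception[c] * weights[c] for c in perception}
    return [(c, sensitivity[c]) for c in spectral_order]

# Hypothetical per-colour scores.
perception = {"red": 0.9, "green": 0.6, "violet": 0.5}
accuracy = {"red": 1.0, "green": 0.6, "violet": 0.4}
spectrum = perception_spectrum(perception, accuracy,
                               ["red", "green", "violet"])
print(spectrum)  # red carries the highest sensitivity
```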
Optionally, calculating the vision defect information of the user based on the cornea information of the user and the eyeball focusing information of the user includes: for each target object picture, identifying the target time point corresponding to the sight line deviation variation information of the user that is lower than a deviation variation threshold, among the sight line deviation variation information in the target object picture; calculating the accurate focusing time of the user on the target object picture based on the time points corresponding to the sight line deviation variation information of the user in the target object picture and the target time point of the user in the target object picture; in the sub-sight line variation track of the user in the target object picture, screening, according to the time sequence of the time points corresponding to the sight line deviation variation information of the user in the target object picture, the target sight line variation track of the user corresponding to all time points before the target time point, and identifying the sight line focusing range of the user on the target object picture based on the target sight line variation track; and analyzing the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user and the sight line focusing range of the user on each target object picture.
In this embodiment, for each target object picture, the terminal screens, among the sight line deviation variation information of the user at the time points in the target object picture, the target time point corresponding to the sight line deviation variation information that is lower than a deviation variation threshold preset in the terminal. Then, the terminal calculates the accurate focusing time of the user on the target object picture based on the time points corresponding to the sight line deviation variation information of the user in the target object picture and the target time point of the user in the target object picture. The accurate focusing time is the period from the first time point of the target object picture to the target time point.
In the sub-sight line variation tracks of the user in the target object picture, the terminal screens the target sight line variation tracks of the user corresponding to all time points before the target time point in each time point of the target object picture according to the time sequence of the time points corresponding to the sight line deviation variation information of the user in the target object picture. And the terminal identifies the range included in the target sight line variation track of the user to the target object picture based on the target sight line variation track, and the range is used as the sight line focusing range of the user to the target object picture. And finally, the terminal analyzes the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user and the sight focusing range of the user on each target object picture. The specific vision defect analysis process will be described in detail later.
Based on the scheme, the accurate focusing time of the user on the target object picture and the sight focusing range of the user on the target object picture are determined by searching the target time point, so that the accuracy of analyzing the vision defect information of the user is improved, the program operation amount is reduced, and the efficiency of analyzing the vision defect information of the user is improved.
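The target time point, accurate focusing time and sight line focusing range described above can be sketched as follows, assuming the target time point is the first sample whose deviation drops below the threshold and the focusing range is the bounding box of the trajectory points up to that moment (both readings are assumptions; the text leaves the exact definitions open):

```python
def focus_metrics(trajectory, threshold):
    """trajectory: (t, x, y, deviation) samples for one picture.
    Returns (accurate focusing time, bounding-box focusing range)."""
    samples = sorted(trajectory)  # by timestamp
    # Target time point: first sample with deviation under threshold.
    target_t = next(t for t, x, y, d in samples if d < threshold)
    focus_time = target_t - samples[0][0]
    # Focusing range: bounding box of gaze points up to the target point.
    before = [(x, y) for t, x, y, d in samples if t <= target_t]
    xs, ys = [p[0] for p in before], [p[1] for p in before]
    return focus_time, (min(xs), min(ys), max(xs), max(ys))

# Hypothetical samples: the gaze homes in on the target and the
# deviation drops below the threshold at t = 0.6.
traj = [(0.0, 0, 0, 9.0), (0.3, 4, 2, 4.0),
        (0.6, 6, 5, 0.5), (0.9, 6, 5, 0.4)]
print(focus_metrics(traj, threshold=1.0))
# (0.6, (0, 0, 6, 5))
```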
Optionally, analyzing the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user and the sight focusing range of the user on each target object picture includes: acquiring the identification difficulty of each target object picture, and performing normalization processing on the identification difficulty of each target object picture to obtain a weight value for each target object picture; for each target object picture, analyzing, through a vision defect judging strategy, the vision defect range of the user on the target object picture and the zoom defect information of the user on the target object of the target object picture, based on the cornea information of the user, the accurate focusing time of the user on the target object picture and the sight focusing range of the user on the target object picture; weighting the vision defect range of the user in each target object picture based on the weight value of each target object picture to obtain the comprehensive vision defect range of the user, and weighting the zoom defect information for the target object of each target object picture based on the weight value of each target object picture to obtain the comprehensive zoom defect information of the user; and taking the comprehensive vision defect range of the user and the comprehensive zoom defect information of the user as the vision defect information of the user.
In this embodiment, the terminal responds to the identification difficulty information of each target object picture uploaded by the user, and obtains the identification difficulty of each target object picture. And then, the terminal performs normalization processing on the identification difficulty of each target object picture to obtain the weight value of each target object picture. The weight value is used for representing the recognition easiness weight of the target object picture, and the higher the weight is, the smaller the recognition difficulty is.
For each target object picture, the terminal analyzes the vision defect range of the user in the target object picture based on the cornea information of the user and the sight focusing range of the user on the target object picture. The vision defect range is a two-dimensional position range used for representing the range of position information in the target object picture that the user identifies incorrectly. Then, the terminal analyzes the zoom defect information of the user for the target object of the target object picture based on the cornea information of the user, the accurate focusing time of the user on the target object picture and the sight focusing range of the user on the target object picture. The zoom defect information is used for representing the zoom definition of the user for the target object of the target object picture: the higher the zoom definition, the smaller the zoom defect information, and the lower the zoom definition, the larger the zoom defect information.
The terminal performs weighting processing on the vision defect range of the user in each target object picture based on the weight value of each target object picture to obtain the comprehensive vision defect range of the user, and performs weighting processing on the zoom defect information of the target object of each target object picture based on the weight value of each target object picture to obtain the comprehensive zoom defect information of the user. And finally, the terminal takes the comprehensive vision defect range of the user and the comprehensive zoom defect information of the user as the vision defect information of the user.
Based on the scheme, the terminal analyzes the vision defect range of the user in each target object picture and the zoom defect information of the user on the target object of each target object picture through the vision defect judging strategy, and then weights both by the identification difficulty of each target object picture to obtain the comprehensive vision defect range and the comprehensive zoom defect information of the user, so that the accuracy of the obtained vision defect information of the user is improved.
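A sketch of the weighting step, assuming inverse-difficulty weights so that, as stated above, a higher weight corresponds to a smaller recognition difficulty; the scores are hypothetical, and only the zoom defect aggregation is shown (the defect-range aggregation would follow the same pattern):

```python
def combine_defects(difficulty, zoom_defect):
    """Turn per-picture recognition difficulty into normalised
    weights (easier picture -> larger weight) and fold each
    picture's zoom defect score into one comprehensive value."""
    ease = [1.0 / d for d in difficulty]       # invert: easier is larger
    total = sum(ease)
    weights = [e / total for e in ease]        # normalise to sum to 1
    return sum(w * z for w, z in zip(weights, zoom_defect))

# Hypothetical scores for three target object pictures.
difficulty = [1.0, 2.0, 1.0]    # picture 2 is twice as hard
zoom_defect = [0.2, 0.5, 0.4]   # higher = blurrier zooming
print(combine_defects(difficulty, zoom_defect))  # ~0.34
```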
Optionally, identifying color vision defect information of the user based on the visual perception spectrum of the user includes: identifying, based on the visual perception spectrum of the user, the color range in which each color is perceived normally and the color range in which each color is perceived abnormally, and screening, from the abnormally perceived color ranges, the color ranges meeting the defect abnormality condition as the color vision defect information of the user.
In this embodiment, the terminal identifies, based on the visual perception spectrum of the user, the color range in which each color is perceived normally and the color range in which each color is perceived abnormally. The abnormal color ranges include color ranges in which the sensitivity to the identified color is greater than a sensitivity threshold preset in the terminal, and color ranges in which the sensitivity is less than that threshold. The terminal then screens, among the abnormally perceived color ranges, the color ranges whose sensitivity is less than the preset sensitivity threshold as the color vision defect information of the user.
Based on the scheme, the color vision defect information of the user is determined through sensitivity, so that the accuracy of identifying the color vision defect information of the user is improved.
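The screening step can be sketched as follows, reading the abnormality test as a band with a lower and an upper sensitivity threshold (an assumption; the text names a single threshold) and keeping only the under-sensitive ranges as defects:

```python
def colour_defects(spectrum, low, high):
    """Split the visual perception spectrum into normal and abnormal
    colours, then keep the under-sensitive ones as colour vision
    defect information."""
    # Abnormal: sensitivity outside the [low, high] band.
    abnormal = {c: s for c, s in spectrum.items() if s < low or s > high}
    # Defects: only the abnormally low side qualifies.
    return {c: s for c, s in abnormal.items() if s < low}

# Hypothetical sensitivities on a 0-1 scale with thresholds 0.3 / 0.9.
spectrum = {"red": 0.8, "green": 0.1, "blue": 0.95, "violet": 0.2}
print(colour_defects(spectrum, low=0.3, high=0.9))
# {'green': 0.1, 'violet': 0.2}
```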
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, an embodiment of the application also provides a visual defect recognition system for implementing the visual defect recognition method above. The implementation of the solution provided by the system is similar to that described in the above method, so for the specific limitations in the embodiments of the visual defect recognition system provided below, reference may be made to the limitations of the visual defect recognition method above, which are not repeated here.
In one embodiment, as shown in FIG. 2, there is provided a visual defect recognition system comprising: an acquisition module 210, an analysis module 220, an identification module 230, and a determination module 240, wherein:
an acquisition module 210, configured to acquire eyeball data information of a user and visual perception information of the user, and identify cornea information of the user based on the eyeball data information of the user;
the analysis module 220 is configured to collect a gaze variation track of the user through a gaze tracking algorithm based on the eyeball data information of the user, and determine eyeball focusing information of the user based on the gaze variation track of the user;
the identifying module 230 is configured to identify, based on the visual perception information of the user, a visual perception spectrum of the user through a spectrum perception policy, and identify, based on the visual perception spectrum of the user, color vision defect information of the user;
A determining module 240, configured to calculate vision defect information of the user based on cornea information of the user and eyeball focusing information of the user, and determine vision defect information of the user based on vision defect information of the user and color vision defect information of the user.
Optionally, the acquiring module 210 is specifically configured to:
acquiring eyeball three-dimensional structure information of a user, and identifying eyeball sight information of the user based on the eyeball three-dimensional structure information of the user;
performing data processing on the three-dimensional structure information of the user and the eyeball sight information of the user to obtain eyeball data information of the user;
and acquiring perception information of the user on each color in the spectrum and identification information of the user on each color in the spectrum, and performing data processing on the perception information and the identification information to obtain visual perception information of the user.
Optionally, the acquiring module 210 is specifically configured to:
based on the eyeball three-dimensional structure information of the user, analyzing the three-dimensional boundary range of each eyeball structure in the eyeball three-dimensional structure information through an edge recognition algorithm;
And screening the three-dimensional boundary range of the cornea structure in the three-dimensional boundary range of each eyeball structure, and taking the sub-three-dimensional structure information in the three-dimensional boundary range of the cornea structure as the cornea information of the user.
Optionally, the analysis module 220 is specifically configured to:
acquiring target variation video information composed of target object pictures played according to the playing sequence; the target object picture comprises position information of a target object in the target object picture;
based on the sample target variation video information and eyeball line-of-sight information of the user, acquiring line-of-sight focusing position information of each time point where the user stays in each target object picture of the target variation video information through a gaze tracking algorithm, and connecting the line-of-sight focusing position information according to the time sequence of each time point to obtain a line-of-sight variation track of the user.
Optionally, the analysis module 220 is specifically configured to:
dividing the sight line focusing position information of each time point where the user stays in each target object picture of the target variable video information according to each target object picture to obtain the sight line focusing position information of the user in different target object pictures;
For each target object picture, screening sub-sight line variation tracks of the user in the target object picture based on a time point corresponding to each sight line focusing position information of the user in the target object picture and the sight line variation track of the user, and calculating each sight line deviation variation information of the user in the target object picture based on each sight line focusing position information of the user in the target object picture and the position information of the target object in the target object picture;
and determining sub-eyeball focusing information of the user in the target object picture according to the sub-eye variation track of the user in the target object picture and the variation information of each eye deviation of the user in the target object picture, and determining the eyeball focusing information of the user based on the sub-eyeball focusing information of the user in all the target object pictures.
Optionally, the identifying module 230 is specifically configured to:
analyzing the perception degree information of the user on each color based on the perception information of the user on each color, and determining the recognition accuracy of the user on each color based on the recognition information of the user on each color;
And calculating the sensitivity of the user to the color according to the perception degree information of the user to the color and the recognition accuracy of the user to the color by a color sensitivity algorithm for each color, and distributing and arranging the sensitivity of the user to the colors according to the sequence of the colors in the spectrum to obtain the visual perception spectrum of the user.
Optionally, the identifying module 230 is specifically configured to:
for each target object picture, identifying a target time point corresponding to the sight line deviation variation information of the user that is lower than a deviation variation threshold, among the sight line deviation variation information in the target object picture;
calculating the accurate focusing time of the user on the target object picture based on the time point corresponding to each line of sight deviation variation information of the user in the target object picture and the target time point of the user in the target object picture;
in the sub-sight line variation tracks of the user in the target object picture, screening, according to the time sequence of the time points corresponding to each sight line deviation variation information of the user in the target object picture, target sight line variation tracks of the user corresponding to all time points before the target time point, and identifying the sight line focusing range of the user on the target object picture based on the target sight line variation tracks;
And analyzing the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user and the sight focusing range of the user on each target object picture.
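One natural reading of the "accurate focusing time" step above is the elapsed time until the line of sight deviation first drops below the deviation variation threshold; the application does not state the formula, so the helper below is only an assumed sketch (names are illustrative):

```python
# Hypothetical sketch of the accurate-focusing-time calculation.
# timestamps: time points of the deviation samples, in order;
# deviations: line-of-sight deviation at each time point.

def precise_focusing_time(timestamps, deviations, threshold):
    """Elapsed time until the deviation first falls below the threshold
    (the assumed 'target time point'), or None if it never does."""
    for t, d in zip(timestamps, deviations):
        if d < threshold:
            return t - timestamps[0]
    return None
```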
Optionally, the identifying module 230 is specifically configured to:
acquiring the identification difficulty of each target object picture, and carrying out normalization processing on the identification difficulty of each target object picture to obtain a weight value of each target object picture;
for each target object picture, analyzing the vision defect range of the user on the target object of the target object picture and the zoom defect information of the user on the target object of the target object picture through a vision defect judging strategy based on cornea information of the user, accurate focusing time of the user on the target object picture and the sight focusing range of the user on the target object picture;
weighting the vision defect range of the user in each target object picture based on the weight value of each target object picture to obtain the comprehensive vision defect range of the user, and weighting the zoom defect information of the target object of each target object picture based on the weight value of each target object picture to obtain the comprehensive zoom defect information of the user;
And taking the comprehensive vision defect range of the user and the comprehensive zoom defect information of the user as the vision defect information of the user.
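The normalization and weighting steps above can be sketched directly. Normalizing the recognition difficulties by their sum is an assumption (min-max scaling would be another valid reading); the function names are illustrative:

```python
# Hypothetical sketch: picture weights from recognition difficulty, then a
# weighted combination of per-picture defect values.

def picture_weights(difficulties):
    """Normalize per-picture recognition difficulties so the weights sum to 1
    (sum normalization is assumed, not stated in the application)."""
    total = sum(difficulties)
    return [d / total for d in difficulties]

def weighted_defect(values, weights):
    """Comprehensive defect value as the weighted sum over all pictures."""
    return sum(v * w for v, w in zip(values, weights))
```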
Optionally, the determining module 240 is specifically configured to:
based on the visual perception spectrum of the user, identifying the color ranges in which the color perception of the user is normal and the color ranges in which the color perception is abnormal, and screening, from the color ranges in which color perception is abnormal, the color ranges meeting the defect abnormality condition as the color vision defect information of the user.
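The screening above reduces to splitting the perception spectrum by a normality threshold and keeping the abnormal colors that also satisfy a stricter defect condition. Both thresholds and all names below are assumptions, since the application leaves the "defect abnormal condition" unspecified:

```python
# Hypothetical sketch of color vision defect screening.
# spectrum: (color, sensitivity) pairs as produced by a perception-spectrum step.

def color_vision_defects(spectrum, normal_threshold, defect_threshold):
    """Colors whose sensitivity is abnormal (below normal_threshold) and
    additionally satisfies the assumed defect condition (below
    defect_threshold)."""
    abnormal = [(c, s) for c, s in spectrum if s < normal_threshold]
    return [c for c, s in abnormal if s < defect_threshold]
```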
Each of the modules in the visual defect recognition apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware, may be independent of the processor in the computer device, or may be stored as software in a memory in the computer device, so that the processor can call and execute the operations corresponding to each of the above modules.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration, and not limitation, RAM can take a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this description.
The above examples represent only a few embodiments of the present application, which are described in detail but are not to be construed as limiting the scope of the present application. It should be noted that various modifications and improvements could be made by those skilled in the art without departing from the spirit of the present application, and such modifications and improvements fall within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of visual defect identification, the method comprising:
acquiring eyeball data information of a user and visual perception information of the user, and identifying cornea information of the user based on the eyeball data information of the user;
based on the eyeball data information of the user, acquiring a sight line variation track of the user through a gaze tracking algorithm, and determining eyeball focusing information of the user based on the sight line variation track of the user;
Identifying the visual perception spectrum of the user through a spectrum perception strategy based on the visual perception information of the user, and identifying the color vision defect information of the user based on the visual perception spectrum of the user;
calculating vision defect information of the user based on cornea information of the user and eyeball focusing information of the user, and determining visual defect information of the user based on the vision defect information of the user and the color vision defect information of the user.
2. The method of claim 1, wherein the obtaining eyeball data information of a user and visual perception information of the user comprises:
acquiring eyeball three-dimensional structure information of a user, and identifying eyeball sight information of the user based on the eyeball three-dimensional structure information of the user;
performing data processing on the eyeball three-dimensional structure information of the user and the eyeball sight information of the user to obtain eyeball data information of the user;
and acquiring perception information of the user on each color in the spectrum and identification information of the user on each color in the spectrum, and performing data processing on the perception information and the identification information to obtain visual perception information of the user.
3. The method of claim 2, wherein the identifying cornea information of the user based on the eyeball data information of the user comprises:
based on the eyeball three-dimensional structure information of the user, analyzing the three-dimensional boundary range of each eyeball structure in the eyeball three-dimensional structure information through an edge recognition algorithm;
and screening the three-dimensional boundary range of the cornea structure in the three-dimensional boundary range of each eyeball structure, and taking the sub-three-dimensional structure information in the three-dimensional boundary range of the cornea structure as the cornea information of the user.
4. The method according to claim 2, wherein the acquiring, based on the eyeball data information of the user, the gaze movement trajectory of the user through a gaze tracking algorithm includes:
acquiring target variation video information composed of target object pictures played according to the playing sequence; the target object picture comprises position information of a target object in the target object picture;
based on the target variation video information and the eyeball sight information of the user, acquiring line-of-sight focusing position information of each time point where the user stays in each target object picture of the target variation video information through a gaze tracking algorithm, and connecting the line-of-sight focusing position information according to the time sequence of each time point to obtain the sight line variation track of the user.
5. The method of claim 4, wherein the determining eyeball focus information of the user based on the gaze trajectory of the user comprises:
dividing the sight line focusing position information of each time point where the user stays in each target object picture of the target variable video information according to each target object picture to obtain the sight line focusing position information of the user in different target object pictures;
for each target object picture, screening sub-sight line variation tracks of the user in the target object picture based on a time point corresponding to each sight line focusing position information of the user in the target object picture and the sight line variation track of the user, and calculating each sight line deviation variation information of the user in the target object picture based on each sight line focusing position information of the user in the target object picture and the position information of the target object picture;
and determining sub-eyeball focusing information of the user in the target object picture according to the sub-sight line variation track of the user in the target object picture and each line of sight deviation variation information of the user in the target object picture, and determining the eyeball focusing information of the user based on the sub-eyeball focusing information of the user in all the target object pictures.
6. The method of claim 2, wherein the identifying the user's visual perception spectrum by a spectrum perception policy based on the user's visual perception information comprises:
analyzing the perception degree information of the user on each color based on the perception information of the user on each color, and determining the recognition accuracy of the user on each color based on the recognition information of the user on each color;
and calculating the sensitivity of the user to the color according to the perception degree information of the user to the color and the recognition accuracy of the user to the color by a color sensitivity algorithm for each color, and distributing and arranging the sensitivity of the user to the colors according to the sequence of the colors in the spectrum to obtain the visual perception spectrum of the user.
7. The method of claim 5, wherein the calculating vision defect information of the user based on cornea information of the user and eyeball focus information of the user comprises:
for each target object picture, identifying a target time point corresponding to the line of sight deviation variation information of the user that is lower than a deviation variation threshold value, among the line of sight deviation variation information of the user in the target object picture;
Calculating the accurate focusing time of the user on the target object picture based on the time point corresponding to each line of sight deviation variation information of the user in the target object picture and the target time point of the user in the target object picture;
in the sub-sight line variation tracks of the user in the target object picture, screening, according to the time order of the time points corresponding to each line of sight deviation variation information of the user in the target object picture, the target sight line variation track of the user corresponding to all the time points before the target time point, and identifying the sight line focusing range of the user on the target object picture based on the target sight line variation track;
and analyzing the vision defect information of the user based on the accurate focusing time of the user in each target object picture, the cornea information of the user and the sight focusing range of the user on each target object picture.
8. The method of claim 1, wherein analyzing the vision defect information of the user based on the user's precise focusing time in each target picture, the user's cornea information, and the user's line-of-sight focusing range for each target picture, comprises:
Acquiring the identification difficulty of each target object picture, and carrying out normalization processing on the identification difficulty of each target object picture to obtain a weight value of each target object picture;
for each target object picture, analyzing the vision defect range of the user on the target object of the target object picture and the zoom defect information of the user on the target object of the target object picture through a vision defect judging strategy based on cornea information of the user, accurate focusing time of the user on the target object picture and the sight focusing range of the user on the target object picture;
weighting the vision defect range of the user in each target object picture based on the weight value of each target object picture to obtain the comprehensive vision defect range of the user, and weighting the zoom defect information of the target object of each target object picture based on the weight value of each target object picture to obtain the comprehensive zoom defect information of the user;
and taking the comprehensive vision defect range of the user and the comprehensive zoom defect information of the user as the vision defect information of the user.
9. The method of claim 7, wherein the identifying color vision deficiency information for the user based on the visual perception spectrum of the user comprises:
based on the visual perception spectrum of the user, identifying the color ranges in which the color perception of the user is normal and the color ranges in which the color perception is abnormal, and screening, from the color ranges in which color perception is abnormal, the color ranges meeting the defect abnormality condition as the color vision defect information of the user.
10. A visual defect recognition system, the system comprising:
the device comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring eyeball data information of a user and visual perception information of the user and identifying cornea information of the user based on the eyeball data information of the user;
the analysis module is used for acquiring the sight line variation track of the user through a gaze tracking algorithm based on the eyeball data information of the user, and determining eyeball focusing information of the user based on the sight line variation track of the user;
the identification module is used for identifying the visual perception spectrum of the user through a spectrum perception strategy based on the visual perception information of the user, and identifying the color vision defect information of the user based on the visual perception spectrum of the user;
The determining module is used for calculating vision defect information of the user based on cornea information of the user and eyeball focusing information of the user, and determining visual defect information of the user based on the vision defect information of the user and the color vision defect information of the user.
CN202311036527.5A 2023-08-16 2023-08-16 Visual defect identification method and system Pending CN117314822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311036527.5A CN117314822A (en) 2023-08-16 2023-08-16 Visual defect identification method and system


Publications (1)

Publication Number Publication Date
CN117314822A true CN117314822A (en) 2023-12-29

Family

ID=89296129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311036527.5A Pending CN117314822A (en) 2023-08-16 2023-08-16 Visual defect identification method and system

Country Status (1)

Country Link
CN (1) CN117314822A (en)

Similar Documents

Publication Publication Date Title
CN103353677B (en) Imaging device and method thereof
US10109056B2 (en) Method for calibration free gaze tracking using low cost camera
CN108305240B (en) Image quality detection method and device
Zhu et al. Viewing behavior supported visual saliency predictor for 360 degree videos
US20130004082A1 (en) Image processing device, method of controlling image processing device, and program for enabling computer to execute same method
CN110807427B (en) Sight tracking method and device, computer equipment and storage medium
CN105224285A (en) Eyes open and-shut mode pick-up unit and method
JP2022527818A (en) Methods and systems for estimating geometric variables related to the user's eye
CN112784810A (en) Gesture recognition method and device, computer equipment and storage medium
CN105373767A (en) Eye fatigue detection method for smart phones
Hu et al. A proto-object based saliency model in three-dimensional space
CN115965653B (en) Light spot tracking method and device, electronic equipment and storage medium
CN110313006A (en) A kind of facial image detection method and terminal device
CN111967592A (en) Method for generating counterimage machine recognition based on positive and negative disturbance separation
CN111784658B (en) Quality analysis method and system for face image
WO2020156823A1 (en) A method and system for predicting an eye gazing parameter and an associated method for recommending visual equipment
CN113179421A (en) Video cover selection method and device, computer equipment and storage medium
CN115019382A (en) Region determination method, apparatus, device, storage medium, and program product
Banitalebi-Dehkordi et al. Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video
Chen et al. Learning to rank retargeted images
Hassan et al. SIPFormer: Segmentation of multiocular biometric traits with transformers
CN106033613B (en) Method for tracking target and device
CN117314822A (en) Visual defect identification method and system
CN112200109A (en) Face attribute recognition method, electronic device, and computer-readable storage medium
CN116934747A (en) Fundus image segmentation model training method, fundus image segmentation model training equipment and glaucoma auxiliary diagnosis system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination