CN110458829B - Image quality control method, device, equipment and storage medium based on artificial intelligence

Image quality control method, device, equipment and storage medium based on artificial intelligence

Info

Publication number
CN110458829B
CN110458829B (application number CN201910745023.8A)
Authority
CN
China
Prior art keywords
image
fundus image
module
target
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910745023.8A
Other languages
Chinese (zh)
Other versions
CN110458829A (en)
Inventor
边成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd
Priority to CN201910745023.8A
Publication of CN110458829A
Application granted
Publication of CN110458829B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The embodiment of the application discloses an image quality control method based on artificial intelligence, which comprises the following steps: acquiring a target fundus image to be quality controlled; obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the prediction result comprises mutually exclusive probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a constraint module; the discrimination module extracts image features through at least two discrimination models and outputs them to the attention mechanism module; the attention mechanism module extracts attention features from the image features through an attention mechanism network and outputs them to the constraint module; the constraint module fuses the attention features through a constraint model and outputs the mutually exclusive probabilities that the image belongs to the different quality types; and determining whether the target fundus image is qualified according to the quality type prediction result. In this way, quality control of fundus images is achieved at the front end.

Description

Image quality control method, device, equipment and storage medium based on artificial intelligence
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an image quality control method, device, equipment and storage medium based on artificial intelligence.
Background
With the development of deep learning for images, demand for image-based screening systems has become increasingly prominent in many fields, for example image-based disease screening in the medical field and image-based quality control in product manufacturing.
The screening accuracy of an image-based screening system depends on the performance of the screening system itself, and even more on the quality of the images supplied by the front end. In application scenarios of image-based screening systems, quality control therefore needs to be applied to the images input at the front end.
At present, a solution is needed that meets this practical requirement: one that performs quality control on images and ensures the accuracy of that quality control.
Disclosure of Invention
The embodiment of the application provides an image quality control method, device, equipment and storage medium based on artificial intelligence, which can perform quality control on images and ensure the accuracy of the quality control.
In view of this, the first aspect of the present application provides an image quality control method based on artificial intelligence, including:
acquiring a target fundus image to be quality controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutually exclusive probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a constraint module, trained based on a fundus image training sample set; the discrimination module is configured to extract image features through at least two discrimination models and output the image features to the attention mechanism module; the attention mechanism module is configured to extract attention features from the image features through an attention mechanism network and output the attention features to the constraint module; and the constraint module is configured to fuse the attention features through a constraint model and output the mutually exclusive probabilities that the image belongs to the different quality types;
and determining whether the target fundus image is qualified or not according to a quality type prediction result of the target fundus image.
A second aspect of the present application provides an image quality control method based on artificial intelligence, including:
acquiring a target image to be quality controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutually exclusive probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a constraint module, trained based on an image training sample set; the discrimination module is configured to extract image features through at least two discrimination models and output the image features to the attention mechanism module; the attention mechanism module is configured to extract attention features from the image features through an attention mechanism network and output the attention features to the constraint module; and the constraint module is configured to fuse the attention features through a constraint model and output the mutually exclusive probabilities that the image belongs to the different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
A third aspect of the present application provides an image quality control apparatus based on artificial intelligence, including:
the acquisition module is configured to acquire a target fundus image to be quality controlled;
the processing module is configured to obtain a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutually exclusive probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module, an attention mechanism module and a constraint module, trained based on a fundus image training sample set; the discrimination module is configured to extract image features through at least two discrimination models and output the image features to the attention mechanism module; the attention mechanism module is configured to extract attention features from the image features through an attention mechanism network and output the attention features to the constraint module; and the constraint module is configured to fuse the attention features through a constraint model and output the mutually exclusive probabilities that the image belongs to the different quality types;
And the determining module is used for determining whether the target fundus image is qualified or not according to the quality type prediction result of the target fundus image.
A fourth aspect of the present application provides an image quality control apparatus based on artificial intelligence, including:
the acquisition module is configured to acquire a target image to be quality controlled;
the processing module is configured to obtain a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutually exclusive probabilities that the target image belongs to different quality types; the image quality control system comprises a discrimination module, an attention mechanism module and a constraint module, trained based on an image training sample set; the discrimination module is configured to extract image features through at least two discrimination models and output the image features to the attention mechanism module; the attention mechanism module is configured to extract attention features from the image features through an attention mechanism network and output the attention features to the constraint module; and the constraint module is configured to fuse the attention features through a constraint model and output the mutually exclusive probabilities that the image belongs to the different quality types;
And the determining module is used for determining whether the target image is qualified or not according to the quality type prediction result of the target image.
A fifth aspect of the present application provides an apparatus comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to execute the steps of the artificial intelligence based image quality control method according to the first or second aspect described above according to the computer program.
A fourth aspect of the present application provides a computer readable storage medium storing a computer program for executing the artificial intelligence based image quality control method according to the first or second aspect.
A fifth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the artificial intelligence based image quality control method of the first or second aspect above.
From the above technical solutions, the embodiments of the present application have the following advantages:
the embodiment of the application provides an image quality control method based on artificial intelligence, which uses a fundus image quality control system trained with a machine learning algorithm to judge the quality of fundus images at the front end, so that only qualified fundus images are provided to the fundus artificial intelligence (Artificial Intelligence, AI) screening system at the back end, improving the diagnostic accuracy of the fundus AI screening system. Specifically, in the image quality control method provided by the embodiment of the application, after a target fundus image to be quality controlled is acquired, a quality type prediction result of the target fundus image is determined through the fundus image quality control system, the prediction result comprising mutually exclusive probabilities that the target fundus image belongs to different quality types. The fundus image quality control system comprises a discrimination module, an attention mechanism module and a constraint module obtained by training on a fundus image training sample set: the discrimination module extracts image features through at least two discrimination models and outputs them to the attention mechanism module; the attention mechanism module extracts attention features from the input image features through an attention mechanism network and outputs them to the constraint module; and the constraint module fuses the input attention features through a constraint model and outputs the mutually exclusive probabilities of the different quality types. Further, whether the target fundus image is qualified is determined based on its quality type prediction result. In this way, the fundus image quality control system, comprising a plurality of discrimination models, an attention mechanism network and a constraint model, intelligently and accurately judges whether the target fundus image to be quality controlled is a qualified image, realizing quality control of fundus images at the front end.
Drawings
Fig. 1 is a schematic view of an application scenario of an image quality control method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an image quality control method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a fundus image quality control system according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a MobileNet network structure according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart of a training method of a fundus image quality control system according to an embodiment of the present application;
fig. 6 is a schematic diagram of an operation architecture of a fundus image quality control system according to an embodiment of the present application;
fig. 7 is a schematic diagram of an experimental result of an image quality control method according to an embodiment of the present application;
fig. 8 is a flowchart of another image quality control method according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an image quality control device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another image quality control device according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of another image quality control device according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of still another image quality control apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the medical field, fundus AI screening systems have gradually come into wide use; however, many doctors report low confidence in them. This is because fundus AI screening systems generally screen directly on manually captured fundus images, and most manually captured fundus images suffer from problems such as incorrect exposure and contamination; screening directly on fundus images with such quality problems seriously undermines the reliability of the screening and leads to many ineffective screenings.
In order to improve the confidence of the fundus AI screening system, the embodiment of the application provides an image quality control method based on artificial intelligence. The method judges the quality of fundus images at the front end, so that only qualified fundus images are provided to the fundus AI screening system at the back end, improving the confidence of the fundus AI screening system. Specifically, the image quality control method provided by the embodiment of the application uses a fundus image quality control system trained with a machine learning algorithm to judge a target fundus image to be quality controlled and determine its quality type prediction result, that is, the mutually exclusive probabilities that the target fundus image belongs to different quality types; based on this prediction result, whether the target fundus image is qualified can be determined accordingly. The fundus image quality control system, which comprises a plurality of discrimination models, an attention mechanism network and a constraint model, can intelligently and accurately identify whether each fundus image to be quality controlled is a qualified image, realizing accurate quality control of fundus images at the front end.
It should be noted that the artificial intelligence based image quality control method provided in the embodiment of the present application can be applied not only to quality control of fundus images but also to other scenarios in which images need quality control, for example product quality monitoring based on images in the manufacturing field, or quality control of images of other organs in the medical field; the application scenarios to which the image quality control method provided in the embodiment of the present application is applicable are not limited here.
It should be understood that the artificial intelligence based image quality control method provided in the embodiments of the present application may be applied to any device with data processing capability, such as a terminal device or a server; the terminal device may be a computer, a personal digital assistant (Personal Digital Assistant, PDA) or the like; the server may be an application server or a Web server, and in actual deployment may be an independent server or a cluster of servers.
To facilitate understanding of the technical solution provided by the embodiments of the present application, the application scenario to which the image quality control method for fundus images is applicable is described below, taking as an example the case in which the method is applied to a server.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the artificial intelligence based image quality control method according to an embodiment of the present application. As shown in fig. 1, the application scenario includes: a fundus image photographing apparatus 110, a quality control server 120, and a screening server 130. The fundus image photographing apparatus 110 may photograph a fundus image of a patient under the correct operation of an operator and upload the fundus image to the quality control server 120; the quality control server 120, which runs a fundus image quality control system, is configured to execute the image quality control method provided in the embodiment of the present application and determine whether the fundus image uploaded by the fundus image photographing apparatus 110 is a qualified image; the screening server 130 is configured to acquire fundus images judged qualified by the quality control server 120, screen based on those qualified fundus images, and accordingly generate a diagnosis report that provides the doctor with a reference opinion.
In a particular application, the fundus image photographing apparatus 110 uploads the fundus image it has photographed to the quality control server 120. After receiving the fundus image, the quality control server 120 inputs it, as a target fundus image to be quality controlled, to the fundus image quality control system running on the quality control server 120, and uses the fundus image quality control system to determine the quality type prediction result of the target fundus image, which comprises the mutually exclusive probabilities that the target fundus image belongs to different quality types. The quality control server 120 may then determine, according to this prediction result, whether the fundus image uploaded by the fundus image photographing apparatus 110 is qualified. If so, the quality control server 120 may further transmit the qualified fundus image to the screening server 130, so that the screening server 130 performs screening based on the fundus image and generates the related diagnosis report.
It should be noted that the fundus image quality control system running in the quality control server 120 is obtained by training on a fundus image training sample set and comprises a discrimination module, an attention mechanism module and a constraint module. The discrimination module is configured to extract image features of the target fundus image through at least two discrimination models and output the image features extracted by each discrimination model to the attention mechanism module; the attention mechanism module is configured to extract attention features from the input image features through an attention mechanism network and output the extracted attention features to the constraint module; the constraint module is configured to fuse the input attention features through a constraint model, thereby generating the mutually exclusive probabilities that the target fundus image belongs to different quality types, that is, the quality type prediction result of the target fundus image.
In this way, based on the fundus image quality control system comprising a plurality of discrimination models, an attention mechanism network and a constraint model, each fundus image to be quality controlled is accurately identified as qualified or not, so that quality control of fundus images is realized at the front end (i.e. at the quality control server 120), only qualified fundus images are provided to the back end (i.e. the screening server 130), and the screening confidence of the back end is improved.
It should be understood that the application scenario shown in fig. 1 is only an example, and in practical application, the image quality control method based on artificial intelligence provided in the embodiment of the present application may be applied to not only quality control of fundus images, but also other application scenarios requiring quality control of images, where the application scenario applicable to the image quality control method based on artificial intelligence provided in the embodiment of the present application is not limited in any way.
The image quality control method based on artificial intelligence provided by the application is described below by way of example.
Referring to fig. 2, fig. 2 is a schematic flow chart of an image quality control method based on artificial intelligence according to an embodiment of the present application, where the image quality control method is suitable for quality control of fundus images. For convenience of description, the following embodiments will describe the image quality control method by taking a server as an execution subject. As shown in fig. 2, the image quality control method includes the following steps:
step 201: and acquiring a target fundus image to be controlled.
In the scene of quality control of the fundus image, the fundus image photographing device can photograph the fundus image of a patient under the operation of an operator, and in order to ensure the screening confidence of the fundus AI screening system at the rear end, the server can intercept the fundus image photographed by the fundus image photographing device as a target fundus image to be quality controlled before the fundus image photographing device transmits the fundus image photographed by the fundus image photographing device to the fundus AI screening system, and control the quality of the target fundus image to judge whether the target fundus image is a qualified image.
In practical application, the fundus image photographing apparatus may transmit only one fundus image to the server at a time, or may transmit a plurality of fundus images to the server at a time, without any limitation on the number of target fundus images acquired by the server at a time.
It will be appreciated that in some cases, because of factors such as irregular operation by the operator or the patient not being properly positioned, the image captured by the fundus image photographing apparatus may not actually be a fundus image; after such an image is uploaded to the server, the server still treats it as a target fundus image and performs quality control on it.
Step 202: obtain a quality type prediction result of the target fundus image through the fundus image quality control system, wherein the quality type prediction result comprises mutually exclusive probabilities that the target fundus image belongs to different quality types.
After acquiring the target fundus image, the server inputs it into the fundus image quality control system it runs, analyses and processes the target fundus image with the fundus image quality control system, and takes the system's output as the quality type prediction result of the target fundus image. The prediction result comprises mutually exclusive probabilities that the target fundus image belongs to different quality types, i.e. the probabilities that the target fundus image belongs to each of the quality types, and the sum of these probabilities is 1.
It should be noted that the fundus image quality control system is trained end to end on a fundus image training sample set and comprises a discrimination module, an attention mechanism module and a constraint module. The discrimination module comprises at least two discrimination models; each discrimination model extracts the corresponding image features from the input target fundus image, and the discrimination module then outputs the image features extracted by each discrimination model to the attention mechanism module. The attention mechanism module is composed of an attention mechanism network, which extracts attention features from the image features output by the discrimination module; specifically, the attention mechanism network adaptively strengthens the weights of the image features that have a larger influence on the prediction result to obtain the attention features, and the attention mechanism module outputs these attention features to the constraint module. The constraint module is composed of a constraint model, which fuses the attention features output by the attention mechanism module, enlarges the distance between image features and draws image features from different distributions towards similar distributions, thereby obtaining the mutually exclusive probabilities that the target fundus image belongs to the different quality types.
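The patent text contains no source code; purely as an illustration of how the three modules described above could be wired together, the following PyTorch-style sketch assumes that each discrimination model exposes a feature map and that the constraint module ends in a softmax over the N quality types. All class and variable names are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class FundusQualityControlSystem(nn.Module):
    """Illustrative wiring of the three modules described above (not the patented implementation)."""

    def __init__(self, discriminators, attention_module, constraint_module):
        super().__init__()
        # one discriminator per quality type, e.g. clear / turbidity / ... / non-fundus
        self.discriminators = nn.ModuleList(discriminators)
        self.attention = attention_module      # attention mechanism module
        self.constraint = constraint_module    # constraint module

    def forward(self, image):
        # each discrimination model extracts image features for its own quality type
        features = [d(image) for d in self.discriminators]
        stacked = torch.cat(features, dim=1)   # concatenate along the channel axis
        # the attention network re-weights the features with larger influence on the prediction
        attended = self.attention(stacked)
        # the constraint model fuses the attention features into mutually exclusive probabilities
        logits = self.constraint(attended)
        return torch.softmax(logits, dim=1)    # probabilities over the N quality types sum to 1
```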
To facilitate further understanding of the fundus image quality control system in the embodiment of the present application, the discrimination module, the attention mechanism module and the constraint module are each described in detail below.
In practical applications, the discrimination module in the fundus image quality control system may specifically include the following six discrimination models: a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model. The clear discrimination model extracts, from the input image, image features for judging that the image belongs to the clear fundus image type; the refractive interstitial turbidity discrimination model extracts image features for judging that the image belongs to the refractive interstitial turbidity type; the global exposure discrimination model extracts image features for judging that the image belongs to the global exposure type; the local exposure discrimination model extracts image features for judging that the image belongs to the local exposure type; the large-area contamination discrimination model extracts image features for judging that the image belongs to the large-area contamination type; and the other-category discrimination model extracts image features for judging that the image is not a fundus image.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an exemplary fundus image quality control system according to an embodiment of the present application. As shown in fig. 3, the discrimination module includes a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model. After a target fundus image is input into the fundus image quality control system, each discrimination model in the discrimination module extracts the image features for judging that the target fundus image belongs to its corresponding quality type: the clear discrimination model extracts features for judging that the target fundus image is a clear fundus image, the refractive interstitial turbidity discrimination model extracts features for the refractive interstitial turbidity type, the global exposure discrimination model for the global exposure type, the local exposure discrimination model for the local exposure type, the large-area contamination discrimination model for the large-area contamination type, and the other-category discrimination model extracts features for judging that the target fundus image is not a fundus image. The discrimination module then outputs the image features extracted by these six discrimination models to a feature constraint framework based on an attention mechanism.
In practical applications, the discrimination module may include the six discrimination models above, namely the clear, refractive interstitial turbidity, global exposure, local exposure, large-area contamination and other-category discrimination models; it may also include further types of discrimination models according to actual requirements, and more or fewer discrimination models may be set in the discrimination module as required. The types and number of the discrimination models included in the discrimination module are not limited here.
In practical applications, the at least two discrimination models in the discrimination module may all adopt a lightweight MobileNet network structure, which accelerates the network and meets the requirements of an online fundus image quality control system. Referring to fig. 4, fig. 4 is a schematic diagram of an exemplary MobileNet network structure provided in an embodiment of the present application; this structure may be used for the six discrimination models in the discrimination module shown in fig. 3.
In practical applications, a discrimination model in the discrimination module may adopt the MobileNet network structure or other network structures, such as a deep residual network (ResNet), DenseNet or VGGNet (Visual Geometry Group Network); the network structure adopted by the discrimination models is not limited here. In addition, the network structures adopted by the discrimination models in the discrimination module may be the same as or different from each other.
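Fig. 4 itself is not reproduced here. As a rough illustration of why a MobileNet-style backbone is lightweight, the sketch below shows the depthwise-separable convolution block that MobileNet variants are built from, written in PyTorch; it is a generic building block and not the exact network of fig. 4.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Generic MobileNet-style block: depthwise 3x3 followed by pointwise 1x1 convolution."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            # depthwise: one 3x3 filter per input channel (groups=in_ch)
            nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            # pointwise: 1x1 convolution mixes channels at low cost
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```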
It should be noted that, in addition to the image features extracted from the input target fundus image, each discrimination model in the discrimination module may also output a confidence that the target fundus image belongs to the quality type corresponding to that discrimination model. For example, the clear discrimination model may output, besides the image features for judging that the target fundus image is a clear fundus image, a confidence that the target fundus image belongs to the clear fundus image type. The confidence ranges from 0% to 100%, and when the fundus image quality control system is trained, the model parameters of each discrimination model can be adjusted through a binary cross-entropy loss based on the confidence it outputs.
In order to make effective use of the image features extracted by each discrimination model, enlarge the differences between the image features and reduce the intra-group error between models, the fundus image quality control system in the embodiment of the present application further uses a feature constraint framework based on an attention mechanism to process the image features extracted by the discrimination models; this framework comprises the attention mechanism module and the constraint module.
In one possible implementation, the attention mechanism network used by the attention mechanism module includes a first network branch and a second network branch. The first network branch comprises a convolution layer, a global pooling layer and fully connected layers, and is used to extract attention weights from the input image features; the second network branch comprises a channel multiplier that multiplies, channel by channel, the attention weights extracted by the first network branch with the input image features to obtain the attention features.
In a specific application, after the discrimination module outputs the image features extracted by the discrimination models to the attention mechanism module, the first network branch processes the stacked image features with its convolution layer to obtain the features to be processed, further processes them through a compression channel formed by the global pooling layer to obtain compressed features, and then processes the compressed features through an extraction layer formed by two fully connected layers to obtain the attention weights. The second network branch then multiplies the attention weights with the input image features channel by channel to obtain the attention features; in other words, the attention weights extracted by the first network branch are used to weight the image features extracted by the different discrimination models, adaptively strengthening the weights of the image features that have a larger influence on the prediction result.
It should be understood that the structure of the attention mechanism network is merely an example, and in practical applications, a network model with other structures may be used as the attention mechanism network according to practical requirements, and the structure of the attention mechanism network is not limited in any way.
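As one concrete reading of the two-branch attention network described above (a squeeze-and-excitation-style design), the sketch below derives per-channel attention weights with a convolution, global pooling and two fully connected layers, and then multiplies the weights channel by channel onto the input features. The layer sizes and the reduction ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """First branch extracts attention weights; second branch multiplies them onto the features."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)   # features to be processed
        self.pool = nn.AdaptiveAvgPool2d(1)                        # compression channel (global pooling)
        self.fc = nn.Sequential(                                   # extraction layer: two fully connected layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                          # one attention weight per channel
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.pool(self.conv(x)).view(b, c)   # first branch: convolution then global pooling
        w = self.fc(w).view(b, c, 1, 1)          # two fully connected layers yield the attention weights
        return x * w                             # second branch: channel-wise multiplication
```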
In one possible implementation, the constraint model in the constraint module may adopt four densely connected BottleNeck network structures, which strengthens gradient back-propagation during training and improves the accuracy of the constraint model.
In a specific application, after the attention mechanism module generates the attention features, it outputs them to the constraint module. The constraint model in the constraint module is composed of four densely connected BottleNeck network structures; the BottleNeck structure is a widely used technique for compressing a model and improving its running speed, which uses the channel-compression property of 1x1 convolution to replace an original 3x3 convolution with a 1x1 -> 3x3 -> 1x1 convolution sequence, greatly reducing the amount of computation in the network. In addition, the densely connected structure helps strengthen the back-propagation of gradients during training and improves the accuracy of the constraint model. Finally, a global pooling layer is added to obtain the probability corresponding to each quality type, and the model is further constrained with an N-class cross-entropy loss, where N equals the number of discrimination models; this enlarges the distance between image features, draws image features from different distributions towards similar distributions, and improves the model's ability to classify ambiguous images.
It should be noted that the above model structure of the constraint model is only an example. In practical applications, the number of convolution layers in a BottleNeck structure may be increased or decreased, the constraint module may contain more or fewer BottleNeck structures, and the global pooling layer in the last layer may be replaced by a fully connected layer or the like; the model structure of the constraint model is not specifically limited here.
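A minimal sketch of the constraint model under the assumptions above: four densely connected BottleNeck blocks (1x1 -> 3x3 -> 1x1 convolutions) followed by a global pooling head that yields one logit per quality type. The channel sizes and growth pattern are illustrative and not taken from the patent.

```python
import torch
import torch.nn as nn

class BottleNeck(nn.Module):
    """1x1 -> 3x3 -> 1x1 convolutions; the 1x1 layers compress channels to cut computation."""

    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class ConstraintModel(nn.Module):
    """Four BottleNeck blocks with dense connections, then global pooling to N class logits."""

    def __init__(self, in_ch, growth=32, num_types=6):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for _ in range(4):
            self.blocks.append(BottleNeck(ch, growth, growth))
            ch += growth                          # dense connection: each block sees all previous outputs
        self.head = nn.Conv2d(ch, num_types, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)       # global pooling produces one logit per quality type

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        out = self.pool(self.head(torch.cat(feats, dim=1)))
        return out.flatten(1)                     # shape: (batch, num_types)
```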
Step 203: and determining whether the target fundus image is qualified or not according to a quality type prediction result of the target fundus image.
After the server obtains the quality type prediction result determined based on the fundus image quality control system, whether the target fundus image is qualified or not can be further determined according to the quality type prediction result. Specifically, since the quality type prediction result includes mutually exclusive probabilities that the target fundus image belongs to different quality types, the server may determine the quality type corresponding to the maximum probability in the quality type prediction result, that is, the quality type corresponding to the target fundus image, and then determine whether the target fundus image is qualified according to the quality type corresponding to the target fundus image.
In particular implementation, the server may select a quality type corresponding to a maximum probability from the quality type prediction result of the target fundus image, determine that the target fundus image is qualified when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, and determine that the target fundus image is unqualified when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type.
Taking as an example a discrimination module that comprises the clear discrimination model, refractive interstitial turbidity discrimination model, global exposure discrimination model, local exposure discrimination model, large-area contamination discrimination model and other-category discrimination model, the quality type prediction result finally output by the fundus image quality control system comprises a first probability that the target fundus image belongs to the clear fundus image type, a second probability that it belongs to the refractive interstitial turbidity type, a third probability that it belongs to the global exposure type, a fourth probability that it belongs to the local exposure type, a fifth probability that it belongs to the large-area contamination type, and a sixth probability that it is not a fundus image; these six probabilities are mutually exclusive and sum to 1.
The quality type corresponding to the maximum probability in the quality type prediction result is determined; when that quality type is any one of the clear fundus image type, the local exposure type and the large-area contamination type, the target fundus image is determined to be qualified, and when it is any one of the refractive interstitial turbidity type, the global exposure type and the non-fundus-image type, the target fundus image is determined to be unqualified.
It should be understood that when the quality type that can be discriminated by the fundus image quality control system is other types, the server may set a qualified quality type and an unqualified quality type for these types in advance, and further, on the basis of this, determine whether the target fundus image is a qualified image in combination with the quality type prediction result of the target fundus image; the quality type that can be discriminated by the fundus image quality control system in the present application is not limited at all.
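To make the decision rule concrete, the snippet below takes the mutually exclusive probabilities, selects the quality type with the maximum probability and maps it to qualified or unqualified. The six type names and the qualified/unqualified grouping follow the example in this description and should be treated as configurable.

```python
QUALITY_TYPES = [
    "clear", "refractive interstitial turbidity", "global exposure",
    "local exposure", "large-area contamination", "not a fundus image",
]
# grouping used in the example above; a deployment would configure this as needed
QUALIFIED = {"clear", "local exposure", "large-area contamination"}

def judge(probabilities):
    """probabilities: list of six mutually exclusive probabilities summing to 1."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    quality_type = QUALITY_TYPES[best]
    qualified = quality_type in QUALIFIED
    reason = None if qualified else quality_type   # unqualified images carry their failure reason
    return qualified, quality_type, reason

# e.g. judge([0.05, 0.80, 0.05, 0.04, 0.03, 0.03]) -> (False, "refractive interstitial turbidity", ...)
```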
Optionally, the image quality control method provided by the embodiment of the present application may further prompt the operator of the fundus image photographing apparatus, according to the quality control result, whether the target fundus image is qualified, and, when the target fundus image is determined to be unqualified, give the reason it is unqualified, so that the operator can recapture a qualified fundus image.
Specifically, when the target fundus image is unqualified, the server may obtain the reason the target fundus image is unqualified and issue a prompt accordingly, informing the user that the target fundus image is unqualified and why.
Still taking the quality types distinguishable by the above fundus image quality control system as an example, the preset qualified quality types comprise the clear fundus image type, the local exposure type and the large-area contamination type, and the preset unqualified quality types comprise the refractive interstitial turbidity type, the global exposure type and the non-fundus-image type; when the server judges that the target fundus image belongs to the refractive interstitial turbidity type, it may prompt that the reason the target fundus image is unqualified is refractive interstitial turbidity.
The artificial intelligence based image quality control method described above judges the quality of fundus images at the front end, so that only qualified fundus images are provided to the fundus AI screening system at the back end, improving the confidence of the fundus AI screening system. Specifically, the method uses a fundus image quality control system trained with a machine learning algorithm to judge the target fundus image to be quality controlled and determine its quality type prediction result, that is, the mutually exclusive probabilities that the target fundus image belongs to different quality types; whether the target fundus image is qualified can then be determined accordingly from this prediction result. The fundus image quality control system, which comprises a plurality of discrimination models, an attention mechanism network and a constraint model, can intelligently and accurately identify whether each fundus image to be quality controlled is a qualified image, realizing accurate quality control of fundus images at the front end.
It should be understood that in practical application, whether the image quality control method provided in the embodiment of the present application can accurately control the quality of the target fundus image to be controlled mainly depends on the working performance of the fundus image quality control system, and the working performance of the fundus image quality control system is closely related to the training process of the fundus image quality control system. The training method of the fundus image quality control system provided by the embodiment of the application is described below by way of example.
Referring to fig. 5, fig. 5 is a flowchart of a training method of a fundus image quality control system according to an embodiment of the present application. For convenience of description, the following embodiments will describe a training method of the fundus image quality control system, taking a server as an execution subject. As shown in fig. 5, the training method of the fundus image quality control system includes the following steps:
step 501: the fundus image training sample set is obtained and comprises a plurality of fundus image samples and labeling quality types corresponding to each fundus image sample.
Before training the fundus image quality control system, a large number of fundus image training samples are generally required to be acquired to form a fundus image training sample set, and each fundus image training sample comprises a fundus image sample and a labeling quality type corresponding to the fundus image sample.
In practical application, after approval by related institutions, the server can collect fundus image samples from databases corresponding to various hospitals or community hospitals, and then marks the quality types of the collected fundus image samples in a manual marking mode, so that fundus image training samples are obtained.
It should be noted that, in order to obtain a better training effect, the server may preprocess each fundus image sample it has collected, for example adjusting each sample to a preset size (for example 512×512) and normalizing each sample (for example subtracting the image mean and dividing by the image variance).
In order to increase the amount of fundus image training data, the server may also apply operations such as random horizontal flipping, random elastic deformation and random addition of color spots (speckle) to the collected fundus image samples, and label the quality type of each fundus image obtained in this way, thereby increasing the number of fundus image training samples and enlarging the fundus image training sample set.
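A sketch of the preprocessing and augmentation described in this step, assuming a PyTorch/torchvision-style pipeline. The 512×512 size and the mean/variance normalization come from the text; the speckle helper is an illustrative stand-in for the random color-spot augmentation, and random elastic deformation is only indicated by a comment.

```python
import torch
import torchvision.transforms as T

def normalize_per_image(img: torch.Tensor) -> torch.Tensor:
    # subtract the image mean and divide by the image variance, as described above
    return (img - img.mean()) / (img.var() + 1e-6)

def add_speckle(img: torch.Tensor, strength: float = 0.05) -> torch.Tensor:
    # illustrative stand-in for the random color-spot (speckle) augmentation
    return (img + strength * torch.randn_like(img) * img).clamp(0.0, 1.0)

train_transform = T.Compose([
    T.Resize((512, 512)),            # preset size from the description
    T.RandomHorizontalFlip(p=0.5),   # random horizontal flipping
    # random elastic deformation would be inserted here as a further augmentation
    T.ToTensor(),
    T.Lambda(add_speckle),
    T.Lambda(normalize_per_image),
])
```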
Step 502: and initializing parameters of a pre-constructed fundus image quality control system.
According to actual requirements, a specific network model is adopted to construct each module in the fundus image quality control system, namely a judging module, an attention mechanism module and a limiting and restraining module in the fundus image quality control system; and initializing parameters of each module in the fundus image quality control system, namely initializing parameters of each discrimination model in the discrimination module, the attention mechanism network in the attention mechanism module and the restriction model in the restriction module.
In practical applications, the server may execute step 501 first, then execute step 502, execute step 502 first, then execute step 501, execute step 501 and step 502 simultaneously, and the execution sequence of step 501 and step 502 is not limited.
Step 503: and training parameters on each model in the fundus image quality control system with initialized parameters according to the fundus image training sample set until the fundus image quality control system meeting the training ending condition is obtained through training.
After the fundus image training sample set is obtained and the parameter initialization processing is completed on the fundus image quality control system, the server can further train the parameters on the models in the fundus image quality control system after the parameter initialization by using the obtained fundus image training sample set, namely train the parameters of the discrimination models, the attention mechanism network and the restriction model after the parameter initialization until the training is completed to obtain the fundus image quality control system meeting the training ending condition.
During specific training, the server can input fundus image samples in a fundus image training sample set to a fundus image quality control system after parameter initialization, then acquire probabilities that fundus image samples output by all discrimination models in the fundus image quality control system belong to different quality types, and acquire mutual exclusion probabilities that fundus image samples output by a constraint model in the fundus image quality control system belong to different quality types; further, according to the probability that the fundus image sample belongs to different quality types, parameters on each discrimination model are adjusted through two classification cross entropy losses respectively; according to the mutual exclusion probability that the fundus image samples belong to different quality types, parameters on each model in the fundus image quality control system are adjusted through N (N takes a value equal to the number of the judging models in the judging module) classification cross entropy loss, and iterative training is repeated until training is performed to obtain the fundus image quality control system meeting the training ending condition.
Because each discriminant model is only used for judging whether the input fundus image sample belongs to the quality type corresponding to the discriminant model, the output probability of each discriminant model only represents the probability that the fundus image sample belongs to the quality type corresponding to the discriminant model, and therefore, when the discriminant model is correspondingly trained based on the probability that the discriminant model is output, the parameters of the discriminant model can be directly adjusted based on the cross entropy loss of the two classes. The constraint model is used for judging the possibility that the input fundus image sample belongs to various quality types, the output probability of the constraint model comprises the probability that the fundus image sample belongs to various quality types, and the sum of the probabilities is 1, so that when the fundus image quality control system is trained based on the mutual exclusion probability output by the constraint model, parameters of various models in the fundus image quality control system are required to be adjusted correspondingly through N (N is equal to the number of quality types which can be judged by the fundus quality control model, namely the number of judging models in the judging module) classification cross entropy loss.
Taking a discrimination module composed of the clear discrimination model, the refractive interstitial turbidity discrimination model, the global exposure discrimination model, the local exposure discrimination model, the large-area contamination discrimination model and the other-category discrimination model as an example, when the server trains a fundus image quality control system containing these discrimination models, it obtains the probability output by each of the six models and then adjusts the parameters of each model with a binary cross-entropy loss computed from that model's own output: the parameters of the clear discrimination model are adjusted based on the probability it outputs, the parameters of the refractive interstitial turbidity discrimination model based on the probability it outputs, the parameters of the global exposure discrimination model based on the probability it outputs, the parameters of the local exposure discrimination model based on the probability it outputs, the parameters of the large-area contamination discrimination model based on the probability it outputs, and the parameters of the other-category discrimination model based on the probability it outputs.
In addition, the server also needs to obtain the mutually exclusive probabilities output by the constraint model in the fundus image quality control system; these comprise the probabilities that the fundus image sample belongs to the clear fundus image type, the refractive interstitial turbidity type, the global exposure type, the local exposure type, the large-area contamination type and the non-fundus-image type, and these probabilities sum to 1. Based on these mutually exclusive probabilities, the server adjusts the parameters of every model in the fundus image quality control system with a six-class cross-entropy loss.
When judging whether the fundus image quality control system meets the training end condition, a test sample can be used to verify the first system, where the first system is the model obtained by performing the first round of training on the fundus image quality control system with the fundus image training samples in the fundus image training sample set. Specifically, the server inputs a fundus image sample from the test sample into the first system and processes it to obtain the mutually exclusive probabilities that the sample belongs to each quality type; the prediction accuracy of the first system is then determined from the labeled quality types of the fundus image samples in the test sample and the outputs of the first system. When the prediction accuracy exceeds a preset threshold, the first system is considered to perform well enough to meet the requirement and is determined to be a fundus image quality control system meeting the training end condition.
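A minimal sketch of this accuracy check, assuming the system returns the mutually exclusive probabilities as a tensor and assuming an example threshold of 0.9 (the embodiment does not fix the threshold value):

    import torch

    ACCURACY_THRESHOLD = 0.9   # assumed example value of the preset threshold

    def prediction_accuracy(system, test_images, test_labels):
        # Fraction of test samples whose highest-probability quality type
        # matches the labeled quality type.
        with torch.no_grad():
            probs = system(test_images)        # mutually exclusive probabilities, shape (B, N)
            predicted = probs.argmax(dim=1)
        return (predicted == test_labels).float().mean().item()

    def meets_training_end_condition(system, test_images, test_labels):
        return prediction_accuracy(system, test_images, test_labels) > ACCURACY_THRESHOLD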
Alternatively, when judging whether the fundus image quality control system meets the training end condition, whether to continue training may be decided according to the several systems obtained from multiple rounds of training, so as to obtain the fundus image quality control system with the best performance. Specifically, the test sample can be used to verify the fundus image quality control systems obtained from the different rounds of training. If the differences between their prediction accuracies are small, the system is considered to have little room for further improvement, and the system with the highest prediction accuracy can be selected as the one meeting the training end condition; if the differences between the prediction accuracies of the systems obtained in the different rounds are large, the system is considered to still have room for improvement, and training can continue until the most stable, best-performing fundus image quality control system is obtained.
It should be noted that the test sample may be obtained from the fundus image training sample set; for example, a number of fundus image training samples may be extracted from the set at a preset ratio to serve as the test sample.
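For instance, such an extraction at a preset ratio could be sketched as follows; the 10% ratio and the fixed random seed are assumptions for the example only:

    import random

    def split_test_samples(samples, labels, test_ratio=0.1, seed=0):
        # Hold out a fraction of the fundus image training samples as test samples.
        indices = list(range(len(samples)))
        random.Random(seed).shuffle(indices)
        n_test = int(len(indices) * test_ratio)
        test_idx, train_idx = indices[:n_test], indices[n_test:]
        test_set = ([samples[i] for i in test_idx], [labels[i] for i in test_idx])
        train_set = ([samples[i] for i in train_idx], [labels[i] for i in train_idx])
        return train_set, test_set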
With the above training method of the fundus image quality control system, the parameters of each model in the parameter-initialized fundus image quality control system are iteratively trained with the previously obtained fundus image training sample set until a fundus image quality control system meeting the training end condition is obtained. This ensures that the trained fundus image quality control system has good working performance and, in practical applications, can accurately perform quality control on the fundus images to be quality controlled.
To facilitate further understanding of the image quality control method provided in the embodiments of the present application, the method is now described as a whole, taking as an example a fundus image quality control system whose determinable quality types are the clear fundus image type, the refractive interstitial turbidity type, the global exposure type, the local exposure type, the large-area contamination type and the non-fundus-image type.
Referring to fig. 6, fig. 6 is a schematic diagram of the operation architecture of an exemplary fundus image quality control system according to an embodiment of the present application. The fundus image quality control system comprises a discrimination module, an attention mechanism module and a constraint module. The discrimination module comprises a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model, and all six discrimination models may adopt a mobile-Net network structure. The attention mechanism module includes an attention mechanism network comprising a compression extraction branch (corresponding to the first network branch above) and a residual scaling branch (corresponding to the second network branch above). The constraint module comprises a constraint model that adopts four densely connected BottleNeck network structures.
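A high-level wiring of the three modules could be sketched as follows; the class names, the exact MobileNet variant used as a backbone and the feature shapes are assumptions of the sketch rather than details fixed by fig. 6:

    import torch
    import torch.nn as nn

    class FundusQualityControlSystem(nn.Module):
        # Illustrative assembly: six discriminators -> attention module -> constraint module.
        def __init__(self, discriminators, attention_module, constraint_module):
            super().__init__()
            self.discriminators = nn.ModuleList(discriminators)  # six MobileNet-style backbones
            self.attention = attention_module                    # compression extraction + residual scaling
            self.constraint = constraint_module                  # four densely connected BottleNecks

        def forward(self, image):
            # Each discrimination model extracts features for its own quality type.
            features = [d(image) for d in self.discriminators]   # each of shape (B, C, H, W)
            # The attention module fuses and re-weights the stacked features.
            attention_feature = self.attention(features)
            # The constraint module outputs the mutually exclusive quality type scores.
            return self.constraint(attention_feature)            # shape (B, 6)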
In a specific application, the server may acquire a fundus image uploaded by the fundus image photographing apparatus and input it, as the target fundus image to be quality controlled, into the fundus image quality control system shown in fig. 6. The fundus image quality control system can effectively exploit the information shared among the six discrimination models and enlarge the differences between the image features output by the discrimination models, thereby reducing the within-group error between the models and improving their discrimination scores.
After the target fundus image is input into the fundus image quality control system, each discrimination model in the discrimination module analyzes the input image and extracts the image features used to judge whether the target fundus image belongs to its corresponding quality type. Specifically, the clear discrimination model outputs image features for judging whether the target fundus image belongs to the clear fundus image type, the refractive interstitial turbidity discrimination model outputs image features for judging whether it belongs to the refractive interstitial turbidity type, the global exposure discrimination model outputs image features for judging whether it belongs to the global exposure type, the local exposure discrimination model outputs image features for judging whether it belongs to the local exposure type, the large-area contamination discrimination model outputs image features for judging whether it belongs to the large-area contamination type, and the other-category discrimination model outputs image features for judging whether it is not a fundus image at all. The discrimination module then forwards the image features output by these discrimination models to the attention mechanism module.
In the attention mechanism module, the attention mechanism network first superimposes the image features; the superimposed features pass through a convolution layer with a 1x1 kernel in the compression extraction branch to obtain the feature to be processed, then through the compression channel (formed by a global pooling layer) of the compression extraction branch to obtain the compressed feature (1x1xC), and finally through the extraction layer (formed by two fully connected layers) to generate the attention weights. The attention weights are then channel-wise multiplied with the feature to be processed in the residual scaling branch to obtain the attention feature, which is output to the constraint module.
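A squeeze-and-excitation-style sketch of this attention mechanism network is given below; the channel counts, the reduction ratio of the two fully connected layers and the use of concatenation to superimpose the six feature maps are assumptions of the sketch:

    import torch
    import torch.nn as nn

    class AttentionMechanismNetwork(nn.Module):
        # Compression extraction branch plus residual scaling branch (illustrative only).
        def __init__(self, in_channels, channels=256, reduction=16):
            super().__init__()
            self.conv1x1 = nn.Conv2d(in_channels, channels, kernel_size=1)   # feature to be processed
            self.squeeze = nn.AdaptiveAvgPool2d(1)                            # compression channel -> 1x1xC
            self.excite = nn.Sequential(                                      # extraction layer: two FC layers
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid())

        def forward(self, features):
            x = torch.cat(features, dim=1)                    # superimpose the six image feature maps
            x = self.conv1x1(x)                               # (B, C, H, W): feature to be processed
            w = self.squeeze(x).flatten(1)                    # (B, C): compressed feature
            w = self.excite(w).view(x.size(0), -1, 1, 1)      # (B, C, 1, 1): attention weights
            return x * w                                      # residual scaling branch: channel multiplication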
The constraint model in the constraint module consists of four densely connected BottleNeck network structures. By exploiting the channel-compression property of the 1x1 convolution, the original 3x3 convolution is replaced with a 1x1->3x3->1x1 convolution sequence, which greatly reduces the computational cost of the network; a global pooling module is added at the end to obtain the prediction probability for each quality type, and the model is further constrained with a six-class cross-entropy loss. This increases the distance between features and pulls features from different distributions toward similar distributions, thereby improving the ability of the fundus image quality control system to classify ambiguous fundus images.
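A minimal sketch of such a constraint model, assuming DenseNet-style dense connections between the four BottleNeck blocks and assuming the channel sizes shown (none of which are fixed by this embodiment):

    import torch
    import torch.nn as nn

    class BottleNeck(nn.Module):
        # 1x1 -> 3x3 -> 1x1 convolution; the 1x1 convolutions compress the channels.
        def __init__(self, in_channels, mid_channels, out_channels):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_channels, mid_channels, 1), nn.ReLU(inplace=True),
                nn.Conv2d(mid_channels, mid_channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(mid_channels, out_channels, 1), nn.ReLU(inplace=True))

        def forward(self, x):
            return self.block(x)

    class ConstraintModel(nn.Module):
        # Four densely connected BottleNecks, then global pooling and a six-way classifier.
        def __init__(self, in_channels=256, mid_channels=64, growth=64, num_types=6):
            super().__init__()
            self.blocks = nn.ModuleList()
            channels = in_channels
            for _ in range(4):
                self.blocks.append(BottleNeck(channels, mid_channels, growth))
                channels += growth                 # dense connection: all earlier outputs are concatenated
            self.pool = nn.AdaptiveAvgPool2d(1)    # global pooling module
            self.classifier = nn.Linear(channels, num_types)

        def forward(self, x):
            feats = [x]
            for block in self.blocks:
                feats.append(block(torch.cat(feats, dim=1)))
            out = self.pool(torch.cat(feats, dim=1)).flatten(1)
            return self.classifier(out)            # logits for the six-class cross-entropy loss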
After obtaining the quality type prediction result output by the fundus image quality control system, the server can determine the quality type corresponding to the maximum probability in the prediction result; this is the quality type of the target fundus image. If this quality type belongs to the preset qualified quality types, the target fundus image is determined to be qualified; if it belongs to the preset unqualified quality types, the target fundus image is determined to be unqualified. When the target fundus image is determined to be unqualified, the server may also prompt the relevant staff with the reason for the failure.
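The decision itself reduces to taking the quality type with the maximum probability and looking it up in the set of qualified types; in the sketch below the type names and the assumption that only the clear type counts as qualified are illustrative choices, not requirements of this embodiment:

    QUALITY_TYPES = ["clear", "refractive_interstitial_turbidity", "global_exposure",
                     "local_exposure", "large_area_contamination", "not_fundus_image"]
    QUALIFIED_TYPES = {"clear"}   # assumed set of preset qualified quality types

    def judge_fundus_image(probabilities):
        # probabilities: the mutually exclusive probabilities for the six quality types.
        best = max(range(len(probabilities)), key=lambda i: probabilities[i])
        predicted_type = QUALITY_TYPES[best]
        qualified = predicted_type in QUALIFIED_TYPES
        reason = None if qualified else predicted_type   # reported to the photographer as the failure reason
        return qualified, reason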
Experiments show that the image quality control method provided in the embodiments of the present application can effectively help a fundus image photographing technician to acquire high-quality fundus images. Before the method was used, the technician needed to upload 5 fundus images on average before the doctor obtained a fundus image usable as a diagnostic reference; after the method was used, only 1 fundus image needs to be uploaded for the doctor to obtain a usable fundus image, which greatly improves the doctor's working efficiency.
The experimental results obtained with the image quality control method provided in the embodiments of the present application are shown in fig. 7. The method effectively improves the per-class classification quality score (F1); in particular, in the clear and refractive interstitial turbidity categories the score can be raised above 0.8 while a recall of about 90% is obtained, and in the local exposure and non-fundus categories, which have fewer samples, an accuracy of more than 86% is obtained. In addition, unlike a conventional classification network, the image quality control method provided in the embodiments of the present application achieves an improvement of at least 20% in classification accuracy for the other categories.
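For reference, the per-class quality score (F1) reported in fig. 7 is the harmonic mean of precision and recall for that class; an illustrative helper for computing it from predicted and labeled quality types might be:

    def per_class_scores(predictions, labels, class_id):
        # Precision, recall and F1 for a single quality type.
        tp = sum(1 for p, l in zip(predictions, labels) if p == class_id and l == class_id)
        fp = sum(1 for p, l in zip(predictions, labels) if p == class_id and l != class_id)
        fn = sum(1 for p, l in zip(predictions, labels) if p != class_id and l == class_id)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1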
It should be noted that the image quality control method provided in the embodiments of the present application may be used not only for quality control of fundus images but also in other scenes that require quality control of images, such as quality control of products based on images in the field of product manufacturing, or quality control of images of other organs in the medical field. The application of the image quality control method to such other scenes is described below by way of example.
Referring to fig. 8, fig. 8 is a schematic flow chart of an image quality control method based on artificial intelligence according to an embodiment of the present application. For convenience of description, the following embodiments will be described taking a server as an execution subject. As shown in fig. 8, the method includes the steps of:
Step 801: obtaining a target image to be quality controlled.
In scenes where the quality of other images is controlled, the images are captured by the corresponding image capture equipment, which sends the captured images to the server; after receiving an image from the image capture equipment, the server treats it as a target image to be quality controlled and judges whether the target image is a qualified image.
Step 802: obtaining a quality type prediction result of the target image through an image quality control system, where the quality type prediction result of the target image comprises the mutually exclusive probabilities that the target image belongs to different quality types.
After acquiring the target image, the server inputs it into the image quality control system it runs, analyzes it with the system, and takes the output of the system as the quality type prediction result of the target image. The prediction result contains the mutually exclusive probabilities that the target image belongs to the different quality types, that is, the probabilities that the target image belongs to each quality type, and these probabilities sum to 1.
It should be noted that the image quality control system is trained end-to-end on an image training sample set and comprises a discrimination module, an attention mechanism module and a constraint module. The discrimination module comprises at least two discrimination models, each of which extracts the corresponding image features from the input target image, and the discrimination module outputs the image features extracted by each discrimination model to the attention mechanism module. The attention mechanism module is composed of an attention mechanism network that extracts attention features from the image features output by the discrimination module; specifically, the attention mechanism network adaptively increases the weight of the image features that have a larger influence on the prediction result so as to obtain the attention features, and the attention mechanism module outputs the attention features extracted by the attention mechanism network to the constraint module. The constraint module is composed of a constraint model that fuses the attention features output by the attention mechanism module, increases the distance between the image features and pulls image features from different distributions toward similar distributions, thereby obtaining the mutually exclusive probabilities that the target image belongs to the different quality types.
In one possible implementation, the discrimination module may include a first discrimination model and a second discrimination model: the first discrimination model is used to extract, from the target image, image features for judging that the target image belongs to a qualified quality type; the second discrimination model is used to extract, from the target image, image features for judging that the target image belongs to an unqualified quality type.
It should be noted that in different application scenarios the specific criteria for the qualified quality type and the unqualified quality type differ; accordingly, the image features actually extracted by the first and second discrimination models also differ, and no limitation is placed here on the image features they actually extract.
It should be understood that, in practical applications, the first and second discriminant models may have the same network structure, or may have different network structures, and the network structures specifically adopted by the first and second discriminant models are not limited. In addition, the discrimination module may include a first discrimination model and a second discrimination model, and may include a larger number of discrimination models, and the number of discrimination models included in the discrimination module is not limited in any way.
It should be noted that the structure of the image quality control system in this embodiment is similar to that of the fundus image quality control system in the embodiment shown in fig. 2; for details of its specific structure, reference may be made to the related description of the fundus image quality control system in the embodiment shown in fig. 2, which is not repeated here. The training process of the image quality control system in this embodiment is likewise similar to that of the fundus image quality control system in the embodiment shown in fig. 5, except that the sample images on which training is based differ; for the training process, reference may be made to the flow shown in fig. 5 above, which is not repeated here.
Step 803: determining whether the target image is qualified according to the quality type prediction result of the target image.
After the server obtains the quality type prediction result determined by the image quality control system, whether the target image is qualified or not can be further determined according to the quality type prediction result. Specifically, since the quality type prediction result includes mutual exclusion probabilities that the target image belongs to different quality types, the server may determine the quality type corresponding to the maximum probability in the quality type prediction result, where the quality type is the quality type corresponding to the target image, and then determine whether the target image is qualified according to the quality type corresponding to the target image.
In specific implementation, the server can select a quality type corresponding to the maximum probability from the quality type prediction result of the target image, and when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, the server can determine that the target image is qualified; when the quality type corresponding to the maximum probability belongs to a preset disqualified quality type, the server can determine that the target image is disqualified.
In the case of determining that the target image is not acceptable, the server may further make a relevant prompt, that is, prompt the photographer of the target image that the target image is not acceptable and the reason for the failure thereof, so that the photographer of the target image can re-photograph the acceptable image based on the prompt.
The artificial-intelligence-based image quality control method described above uses an image quality control system trained with a machine learning algorithm to judge the target image to be quality controlled and determine its quality type prediction result, that is, the mutually exclusive probabilities that the target image belongs to different quality types, from which it can be determined whether the target image is qualified. The image quality control system comprising the plurality of discrimination models, the attention mechanism network and the constraint model can intelligently and accurately identify whether each image to be quality controlled is a qualified image, realizing accurate quality control of images at the front end.
In view of the artificial-intelligence-based image quality control method described above, the present application also provides a corresponding artificial-intelligence-based image quality control apparatus, so that the method can be applied and realized in practice.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an image quality control apparatus 900 based on artificial intelligence corresponding to the image quality control method based on artificial intelligence shown in fig. 2, where the apparatus includes:
an acquisition module 901, configured to acquire a target fundus image to be quality controlled;
a processing module 902, configured to obtain, by using a fundus image quality control system, a quality type prediction result of the target fundus image, where the quality type prediction result of the target fundus image includes mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a judging module and an attention mechanism module which are trained based on a fundus image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
A determining module 903, configured to determine whether the target fundus image is qualified according to a quality type prediction result of the target fundus image.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the determining module 903 is specifically configured to:
selecting a quality type corresponding to the maximum probability from the quality type prediction result of the target fundus image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target fundus image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target fundus image is unqualified.
Optionally, referring to fig. 10, fig. 10 is a schematic structural diagram of another image quality control device 1000 according to an embodiment of the present application, on the basis of the image quality control device shown in fig. 9, where the device further includes:
and a prompting module 1001, configured to, when it is determined that the target fundus image is unqualified, acquire the reason why the target fundus image is unqualified and display information according to that reason, so as to prompt the user that the target fundus image is unqualified and why.
Optionally, referring to fig. 11 on the basis of the image quality control apparatus shown in fig. 9, fig. 11 is a schematic structural diagram of another image quality control apparatus 1100 provided in an embodiment of the present application, where the apparatus further includes:
A sample obtaining module 1101, configured to obtain the fundus image training sample set, where the fundus image training sample set includes a plurality of fundus image samples and a labeling quality type corresponding to each fundus image sample;
an initialization module 1102, configured to perform parameter initialization on a pre-constructed fundus image quality control system;
and the training module 1103 is configured to train parameters on each model in the fundus image quality control system initialized by parameters according to the fundus image training sample set until training results in the fundus image quality control system that meets the training ending condition.
Optionally, on the basis of the image quality control apparatus shown in fig. 11, the training module 1103 is specifically configured to:
inputting fundus image samples in the fundus image training sample set into a fundus image quality control system initialized by the parameters, acquiring probabilities that the fundus image samples output by the at least two discriminant models in the fundus image quality control system belong to different quality types, and acquiring mutual exclusion probabilities that the fundus images output by the limiting constraint model in the fundus image quality control system belong to different quality types;
according to the probability that the fundus image sample belongs to different quality types, adjusting parameters on the at least two discriminant models through two classification cross entropy losses respectively;
And adjusting parameters on each model in the fundus image quality control system through N classification cross entropy loss according to the mutual exclusion probability that the fundus image sample belongs to different quality types, and repeatedly iterating training until the fundus image quality control system meeting the training ending condition is obtained through training, wherein the N value is the number of the judging models.
Optionally, on the basis of the image quality control device shown in fig. 9, the at least two discrimination models in the discrimination module adopt a mobile-Net network structure.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the attention mechanism network adopted by the attention mechanism module includes a first network branch and a second network branch, where the first network branch includes a convolution layer and a global pooling layer, and a full connection layer, and is used for extracting attention weights for input image features; the second network branch comprises a channel multiplier for carrying out channel multiplication on the attention weight and the input image characteristic to obtain the attention characteristic.
Optionally, on the basis of the image quality control device shown in fig. 9, the constraint restriction model adopts four densely connected BottleNeck network structures.
Optionally, on the basis of the image quality control apparatus shown in fig. 9, the discrimination module includes six discrimination models, the six discrimination models comprising: a clear discrimination model, a refractive interstitial turbidity discrimination model, a global exposure discrimination model, a local exposure discrimination model, a large-area contamination discrimination model and an other-category discrimination model. The clear discrimination model is used for extracting, from the input image, image features for judging that the image belongs to the clear fundus image type; the refractive interstitial turbidity discrimination model is used for extracting, from the input image, image features for judging that the image belongs to the refractive interstitial turbidity type; the global exposure discrimination model is used for extracting, from the input image, image features for judging that the image belongs to the global exposure type; the local exposure discrimination model is used for extracting, from the input image, image features for judging that the image belongs to the local exposure type; the large-area contamination discrimination model is used for extracting, from the input image, image features for judging that the image belongs to the large-area contamination type; and the other-category discrimination model is used for extracting, from the input image, image features for judging that the image does not belong to the fundus image type.
The image quality control device based on artificial intelligence can judge the quality of fundus images at the front end, so that qualified fundus images are provided for a fundus AI screening system at the rear end, and the confidence of the fundus AI screening system is improved. Specifically, the image quality control device provided by the embodiment of the application uses the fundus image quality control system obtained based on machine learning algorithm training to judge the target fundus image to be quality controlled, determines the quality type prediction result of the target fundus image, namely determines the mutual exclusion probability that the target fundus image belongs to different quality types, and can correspondingly determine whether the target fundus image is qualified or not based on the quality type prediction result; the fundus image quality control system comprising the plurality of judging models, the attention mechanism network and the limiting constraint model can intelligently and accurately identify whether each fundus image to be quality controlled is a qualified image, and accurate quality control of the fundus image at the front end is realized.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an image quality control apparatus 1200 based on artificial intelligence corresponding to the image quality control method based on artificial intelligence shown in fig. 8, which includes:
An acquisition module 1201, configured to acquire a target image to be controlled;
a processing module 1202, configured to obtain, by using an image quality control system, a quality type prediction result of the target image, where the quality type prediction result of the target image includes mutual exclusion probabilities that the target image belongs to different quality types; the image quality control system comprises a judging module and an attention mechanism module which are trained based on an image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
a determining module 1203 is configured to determine whether the target image is qualified according to the quality type prediction result of the target image.
Optionally, on the basis of the image quality control apparatus shown in fig. 12, the determining module 1203 is specifically configured to:
Selecting a quality type corresponding to the maximum probability from the quality type prediction result of the target image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target image is unqualified.
Optionally, on the basis of the image quality control device shown in fig. 12, the discrimination module comprises a first discrimination model and a second discrimination model, where the first discrimination model is used for extracting, from the input image, image features for judging that the image belongs to a qualified quality type, and the second discrimination model is used for extracting, from the input image, image features for judging that the image belongs to an unqualified quality type.
The image quality control device based on artificial intelligence uses the image quality control system trained based on the machine learning algorithm to judge the target image to be controlled, determines the quality type prediction result of the target image, namely determines the mutual exclusion probability that the target image belongs to different quality types, and can correspondingly determine whether the target image is qualified or not based on the quality type prediction result; the image quality control system comprising a plurality of judging models, the attention mechanism network and the limiting constraint model can intelligently and accurately identify whether each image to be quality controlled is a qualified image, and accurate quality control of the image at the front end is realized.
The embodiments of the present application also provide a server and a terminal device for performing quality control on images; the server and the terminal device provided in the embodiments of the present application are introduced below from the perspective of their hardware implementation.
Referring to fig. 13, fig. 13 is a schematic diagram of a server structure provided in an embodiment of the present application. The server 1300 may vary considerably in configuration or performance and may include one or more central processing units (CPU) 1322 (e.g., one or more processors), a memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) storing application programs 1342 or data 1344. The memory 1332 and the storage medium 1330 may be transitory or persistent. The program stored on the storage medium 1330 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 1322 may be configured to communicate with the storage medium 1330 and execute, on the server 1300, the series of instruction operations in the storage medium 1330.
The server 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 13.
Wherein CPU 1322 is configured to perform the following steps:
acquiring a target fundus image to be controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a judging module and an attention mechanism module which are trained based on a fundus image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
And determining whether the target fundus image is qualified or not according to a quality type prediction result of the target fundus image.
Alternatively, the following steps are performed:
acquiring a target image to be controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutual exclusion probabilities of the target image belonging to different quality types; the image quality control system comprises a judging module and an attention mechanism module which are trained based on an image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
Optionally, CPU 1322 may also perform the method steps of any specific implementation of the image quality control method in the embodiments of the present application.
Referring to fig. 14, fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown; for specific technical details that are not disclosed, refer to the method portions of the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a computer and the like; the following description takes a computer as an example:
Fig. 14 is a block diagram showing part of the structure of a computer related to the terminal provided in an embodiment of the present application. Referring to fig. 14, the computer includes: a radio frequency (RF) circuit 1410, a memory 1420, an input unit 1430, a display unit 1440, a sensor 1450, an audio circuit 1460, a wireless fidelity (WiFi) module 1470, a processor 1480, and a power supply 1490. Those skilled in the art will appreciate that the computer architecture shown in fig. 14 is not limiting and that more or fewer components than shown may be included, certain components may be combined, or different arrangements of components may be utilized.
The memory 1420 may be used to store software programs and modules, and the processor 1480 performs various functional applications of the computer and data processing by executing the software programs and modules stored in the memory 1420. The memory 1420 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data created according to the use of the computer (such as audio data, phonebooks, etc.), and the like. In addition, memory 1420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 1480 is a control center of the computer, connects various parts of the entire computer using various interfaces and lines, and performs various functions of the computer and processes data by running or executing software programs and/or modules stored in the memory 1420, and calling data stored in the memory 1420, thereby performing overall monitoring of the computer. In the alternative, processor 1480 may include one or more processing units; preferably, the processor 1480 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1480.
In the embodiment of the present application, the processor 1480 included in the terminal further has the following functions:
acquiring a target fundus image to be controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a judging module and an attention mechanism module which are trained based on a fundus image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
and determining whether the target fundus image is qualified or not according to a quality type prediction result of the target fundus image.
Alternatively, the following functions are provided:
Acquiring a target image to be controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutual exclusion probabilities of the target image belonging to different quality types; the image quality control system comprises a judging module and an attention mechanism module which are trained based on an image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
Optionally, the processor 1480 is further configured to perform steps of any implementation of the image quality control method provided in the embodiments of the present application.
The embodiments of the present application further provide a computer readable storage medium storing a computer program for executing any one of the image quality control methods based on artificial intelligence described in the foregoing embodiments.
The present embodiments also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform any one of the above-described image quality control methods based on artificial intelligence.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing a computer program.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (15)

1. An image quality control method based on artificial intelligence, which is characterized by comprising the following steps:
acquiring a target fundus image to be controlled;
obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutual exclusion probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a judging module and an attention mechanism module which are trained based on a fundus image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
And determining whether the target fundus image is qualified or not according to a quality type prediction result of the target fundus image.
2. The method according to claim 1, wherein determining whether the target fundus image is acceptable based on the quality type prediction result of the target fundus image includes:
selecting a quality type corresponding to the maximum probability from the quality type prediction result of the target fundus image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target fundus image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target fundus image is unqualified.
3. The method according to claim 1, wherein the method further comprises:
and when the target fundus image is determined to be unqualified, acquiring the reason why the target fundus image is unqualified, and displaying an information prompt according to the reason to prompt the user that the target fundus image is unqualified and the reason why it is unqualified.
4. The method according to claim 1, wherein the method further comprises:
acquiring the fundus image training sample set, wherein the fundus image training sample set comprises a plurality of fundus image samples and labeling quality types corresponding to each fundus image sample;
Initializing parameters of a pre-constructed fundus image quality control system;
and training parameters on each model in the fundus image quality control system with initialized parameters according to the fundus image training sample set until the fundus image quality control system meeting the training ending condition is obtained through training.
5. The method according to claim 4, wherein the training parameters on each model in the fundus image quality control system initialized by parameters according to the fundus image training sample set until training results in the fundus image quality control system satisfying a training end condition, comprises:
inputting fundus image samples in the fundus image training sample set into a fundus image quality control system initialized by the parameters, acquiring probabilities that the fundus image samples output by the at least two discriminant models in the fundus image quality control system belong to different quality types, and acquiring mutual exclusion probabilities that the fundus images output by the limiting constraint model in the fundus image quality control system belong to different quality types;
according to the probability that the fundus image sample belongs to different quality types, adjusting parameters on the at least two discriminant models through two classification cross entropy losses respectively;
And adjusting parameters on each model in the fundus image quality control system through N classification cross entropy loss according to the mutual exclusion probability that the fundus image sample belongs to different quality types, and repeatedly iterating training until the fundus image quality control system meeting the training ending condition is obtained through training, wherein the N value is the number of the judging models.
6. The method according to any one of claims 1 to 4, wherein the at least two discriminant models in the discriminant module employ a mobile-Net network structure.
7. The method according to any one of claims 1 to 4, wherein the attention mechanism network employed by the attention mechanism module comprises a first network branch and a second network branch, the first network branch comprising a convolution layer and a global pooling layer and a fully connected layer for extracting attention weights for input image features; the second network branch comprises a channel multiplier for carrying out channel multiplication on the attention weight and the input image characteristic to obtain the attention characteristic.
8. The method according to any one of claims 1 to 4, wherein the constraint restriction model employs four densely connected BottleNeck network structures.
9. The method of any one of claims 1 to 4, wherein the at least two discriminant models comprise six discriminant models comprising: clear discrimination model, refractive interstitial turbidity discrimination model, global exposure discrimination model, local exposure discrimination model, large-area contamination discrimination model and other types of discrimination model; wherein,
the clear distinguishing model is used for extracting image features for distinguishing that the image belongs to the clear type of the fundus image aiming at the input image; the refraction interstitial turbidity distinguishing model is used for extracting image features for distinguishing that the images belong to refraction interstitial turbidity types aiming at the input images; the global exposure distinguishing model is used for extracting image features for distinguishing that the image belongs to a global exposure type aiming at the input image; the local exposure distinguishing model is used for extracting image features for distinguishing that the image belongs to a local exposure type aiming at the input image; the large-area contamination discrimination model is used for extracting image features for judging that the image belongs to a large-area contamination type aiming at the input image; the other category discrimination model is used for extracting image features for discriminating that the image does not belong to the fundus image type for the input image.
10. An image quality control method based on artificial intelligence, which is characterized by comprising the following steps:
acquiring a target image to be controlled;
obtaining a quality type prediction result of the target image through an image quality control system, wherein the quality type prediction result of the target image comprises mutual exclusion probabilities of the target image belonging to different quality types; the image quality control system comprises a judging module and an attention mechanism module which are trained based on an image training sample set, and a limiting and restraining module; the judging module is used for extracting image features through at least two judging models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention characteristics aiming at the image characteristics through an attention mechanism network and outputting the attention characteristics to the restriction constraint module; the limiting and constraining module is used for fusing the attention characteristics through a limiting and constraining model and outputting the mutual exclusion probability of the images belonging to different quality types;
and determining whether the target image is qualified or not according to the quality type prediction result of the target image.
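Purely as an illustration of the data flow described in this claim (discrimination models -> attention mechanism -> constraint model -> mutually exclusive probabilities), a compact self-contained sketch follows; the layer shapes, the final softmax and every class and parameter name are assumptions, not the patented implementation.

```python
# Illustrative data-flow sketch only; shapes, layers and the final softmax are assumptions.
import torch
import torch.nn as nn

class QualityControlSystem(nn.Module):
    def __init__(self, num_models: int = 2, feat_channels: int = 32, num_types: int = 2):
        super().__init__()
        # Discrimination module: at least two feature extractors.
        self.discriminators = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, feat_channels, 3, stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(num_models)
        )
        # Attention mechanism module: per-channel weights over the stacked features.
        total = num_models * feat_channels
        self.attn_pool = nn.AdaptiveAvgPool2d(1)
        self.attn_fc = nn.Sequential(nn.Linear(total, total), nn.Sigmoid())
        # Constraint module: fuses attention features into mutually exclusive probabilities.
        self.constraint = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(total, num_types))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([d(image) for d in self.discriminators], dim=1)   # image features
        b, c, _, _ = feats.shape
        weights = self.attn_fc(self.attn_pool(feats).view(b, c)).view(b, c, 1, 1)
        attn_feats = feats * weights                                        # attention features
        return torch.softmax(self.constraint(attn_feats), dim=1)            # quality-type probabilities

# Example: predict quality-type probabilities for a single 224x224 RGB image.
probs = QualityControlSystem()(torch.randn(1, 3, 224, 224))
```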
11. The method according to claim 10, wherein determining whether the target image is qualified according to the quality type prediction result of the target image comprises:
selecting the quality type corresponding to the maximum probability from the quality type prediction result of the target image;
when the quality type corresponding to the maximum probability belongs to a preset qualified quality type, determining that the target image is qualified;
and when the quality type corresponding to the maximum probability belongs to a preset unqualified quality type, determining that the target image is unqualified.
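A minimal sketch of the decision rule in this claim, assuming the prediction result is a mapping from quality-type names to probabilities and that the set of preset qualified quality types is configurable (both are assumptions for illustration):

```python
# Illustrative sketch; the type names and the QUALIFIED set are assumptions, not taken from the patent.
QUALIFIED = {"clear"}  # preset qualified quality types

def is_qualified(prediction: dict[str, float]) -> bool:
    """Pick the quality type with the maximum probability and check
    whether it belongs to the preset qualified quality types."""
    best_type = max(prediction, key=prediction.get)
    return best_type in QUALIFIED

# Example usage with mutually exclusive probabilities:
print(is_qualified({"clear": 0.7, "global_exposure": 0.2, "other": 0.1}))  # True
```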
12. The method according to claim 10, wherein the discrimination module comprises a first discrimination model and a second discrimination model; the first discrimination model is used for extracting, from the input image, image features for discriminating that the image belongs to a qualified quality type; the second discrimination model is used for extracting, from the input image, a probability for discriminating that the image belongs to an unqualified quality type.
13. An image quality control device based on artificial intelligence, characterized by comprising:
an acquisition module, used for acquiring a target fundus image to be quality-controlled;
a processing module, used for obtaining a quality type prediction result of the target fundus image through a fundus image quality control system, wherein the quality type prediction result of the target fundus image comprises mutually exclusive probabilities that the target fundus image belongs to different quality types; the fundus image quality control system comprises a discrimination module and an attention mechanism module, which are trained based on a fundus image training sample set, and a constraint module; the discrimination module is used for extracting image features through at least two discrimination models and outputting the image features to the attention mechanism module; the attention mechanism module is used for extracting attention features from the image features through an attention mechanism network and outputting the attention features to the constraint module; the constraint module is used for fusing the attention features through a constraint model and outputting the mutually exclusive probabilities that the image belongs to the different quality types;
and a determining module, used for determining whether the target fundus image is qualified according to the quality type prediction result of the target fundus image.
14. An apparatus comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to perform the method of any one of claims 1 to 12 according to the computer program.
15. A computer readable storage medium, characterized in that the computer readable storage medium is for storing a computer program for executing the method of any one of claims 1 to 12.
CN201910745023.8A 2019-08-13 2019-08-13 Image quality control method, device, equipment and storage medium based on artificial intelligence Active CN110458829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910745023.8A CN110458829B (en) 2019-08-13 2019-08-13 Image quality control method, device, equipment and storage medium based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN110458829A (en) 2019-11-15
CN110458829B (en) 2024-01-30

Family

ID=68486249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910745023.8A Active CN110458829B (en) 2019-08-13 2019-08-13 Image quality control method, device, equipment and storage medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN110458829B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112690809B (en) * 2020-02-04 2021-09-24 首都医科大学附属北京友谊医院 Method, device, server and storage medium for determining equipment abnormality reason
CN111815606B (en) * 2020-07-09 2023-09-01 浙江大华技术股份有限公司 Image quality evaluation method, storage medium, and computing device
CN113128373B (en) * 2021-04-02 2024-04-09 西安融智芙科技有限责任公司 Image processing-based color spot scoring method, color spot scoring device and terminal equipment
CN113449774A (en) * 2021-06-02 2021-09-28 北京鹰瞳科技发展股份有限公司 Fundus image quality control method, device, electronic apparatus, and storage medium
CN113487608B (en) * 2021-09-06 2021-12-07 北京字节跳动网络技术有限公司 Endoscope image detection method, endoscope image detection device, storage medium, and electronic apparatus
CN114612389B (en) * 2022-02-21 2022-09-06 浙江大学 Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
CN115953622B (en) * 2022-12-07 2024-01-30 广东省新黄埔中医药联合创新研究院 Image classification method combining attention mutual exclusion rules
CN117455970B (en) * 2023-12-22 2024-05-10 山东科技大学 Airborne laser sounding and multispectral satellite image registration method based on feature fusion

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229580A (en) * 2018-01-26 2018-06-29 浙江大学 Sugared net ranking of features device in a kind of eyeground figure based on attention mechanism and Fusion Features
WO2018121690A1 (en) * 2016-12-29 2018-07-05 北京市商汤科技开发有限公司 Object attribute detection method and device, neural network training method and device, and regional detection method and device
CN109146856A (en) * 2018-08-02 2019-01-04 深圳市华付信息技术有限公司 Picture quality assessment method, device, computer equipment and storage medium
CN109191457A (en) * 2018-09-21 2019-01-11 中国人民解放军总医院 A kind of pathological image quality validation recognition methods
CN109285149A (en) * 2018-09-04 2019-01-29 杭州比智科技有限公司 Appraisal procedure, device and the calculating equipment of quality of human face image
CN109360178A (en) * 2018-10-17 2019-02-19 天津大学 Based on blending image without reference stereo image quality evaluation method
CN109389587A (en) * 2018-09-26 2019-02-26 上海联影智能医疗科技有限公司 A kind of medical image analysis system, device and storage medium
CN109543606A (en) * 2018-11-22 2019-03-29 中山大学 A kind of face identification method that attention mechanism is added
CN109543719A (en) * 2018-10-30 2019-03-29 浙江大学 Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model
CN109685116A (en) * 2018-11-30 2019-04-26 腾讯科技(深圳)有限公司 Description information of image generation method and device and electronic device
CN109815965A (en) * 2019-02-13 2019-05-28 腾讯科技(深圳)有限公司 A kind of image filtering method, device and storage medium
CN109978165A (en) * 2019-04-04 2019-07-05 重庆大学 A kind of generation confrontation network method merged from attention mechanism
CN110009614A (en) * 2019-03-29 2019-07-12 北京百度网讯科技有限公司 Method and apparatus for output information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8605254B2 (en) * 2009-10-26 2013-12-10 International Business Machines Corporation Constrained optimization of lithographic source intensities under contingent requirements
US10095050B2 (en) * 2016-12-02 2018-10-09 Carl Zeiss Vision International Gmbh Method, a system and a computer readable medium for optimizing an optical system, and a method of evaluating attentional performance

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"A Parallel Feature Expansion Classification Model with Feature-based Attention Mechanism"; Yingchao Yu et al.; 2018 IEEE 7th Data Driven Control and Learning Systems Conference (DDCLS); full text *
"Boundary Regularized Convolutional Neural Network for Layer Parsing of Breast Anatomy in Automated Whole Breast Ultrasound"; Cheng Bian et al.; Medical Image Computing and Computer Assisted Intervention - MICCAI 2017; full text *
"Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis"; Haohan Wang et al.; arXiv; full text *
"Research on Multilingual Short Text Classification Methods Based on Deep Learning"; Liu Jiao; CNKI Outstanding Master's Theses Full-text Database; full text *
"Retinal Fundus Image Quality Assessment Based on FA-Net"; Wan Cheng; You Qijing; Sun Jing; Shen Jianxin; Yu Qiuli; Chinese Journal of Experimental Ophthalmology; Vol. 37, No. 8; full text *
"No-Reference Image Quality Assessment Method Combining Visual Attention Mechanism and Image Sharpness"; Wang Fan; Ni Jinping; Dong Tao; Guo Rongli; Journal of Applied Optics, No. 1; full text *

Also Published As

Publication number Publication date
CN110458829A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN110458829B (en) Image quality control method, device, equipment and storage medium based on artificial intelligence
US20240081618A1 (en) Endoscopic image processing
US11526983B2 (en) Image feature recognition method and apparatus, storage medium, and electronic apparatus
CN109919928B (en) Medical image detection method and device and storage medium
CN108389201B (en) Lung nodule benign and malignant classification method based on 3D convolutional neural network and deep learning
US11984225B2 (en) Medical image processing method and apparatus, electronic medical device, and storage medium
US11129591B2 (en) Echocardiographic image analysis
CN110517759B (en) Method for determining image to be marked, method and device for model training
Mishra et al. Diabetic retinopathy detection using deep learning
CN110837803B (en) Diabetic retinopathy grading method based on depth map network
CN109346159B (en) Case image classification method, device, computer equipment and storage medium
US20180260954A1 (en) Method and apparatus for providing medical information service on basis of disease model
CN111353352B (en) Abnormal behavior detection method and device
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CN112435341B (en) Training method and device for three-dimensional reconstruction network, and three-dimensional reconstruction method and device
CN111597946B (en) Processing method of image generator, image generation method and device
CN112101424B (en) Method, device and equipment for generating retinopathy identification model
CN110197474B (en) Image processing method and device and training method of neural network model
US11830187B2 (en) Automatic condition diagnosis using a segmentation-guided framework
CN113132717A (en) Data processing method, terminal and server
CN111080592A (en) Rib extraction method and device based on deep learning
TWM586599U (en) System for analyzing skin texture and skin lesion using artificial intelligence cloud based platform
Jayageetha et al. Medical image quality assessment using CSO based deep neural network
US11875898B2 (en) Automatic condition diagnosis using an attention-guided framework
CN109886916B (en) Capsule mirror image screening method and device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant