CN113808068A - Image detection method and device - Google Patents

Image detection method and device

Info

Publication number
CN113808068A
CN113808068A (application CN202011241166.4A)
Authority
CN
China
Prior art keywords
image
disease
detected
category
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011241166.4A
Other languages
Chinese (zh)
Inventor
刘潇龙
文豪
匡哲祥
赵俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Tuoxian Technology Co Ltd
Original Assignee
Beijing Jingdong Tuoxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Tuoxian Technology Co Ltd filed Critical Beijing Jingdong Tuoxian Technology Co Ltd
Priority to CN202011241166.4A priority Critical patent/CN113808068A/en
Publication of CN113808068A publication Critical patent/CN113808068A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image detection method, which includes: acquiring an image to be detected; inputting the image to be detected into a preset detection model to obtain an image detection result, where the result indicates a predicted disease category and, for at least one image region in the image to be detected, a prediction probability for that category; and displaying the disease region in the image to be detected according to the prediction probability associated with each image region, and outputting the image detection result. The present disclosure also provides an image detection apparatus, an electronic device, and a computer-readable storage medium.

Description

Image detection method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image detection method, an image detection apparatus, an electronic device, and a computer-readable storage medium.
Background
As computer technology has matured, image detection technology has developed rapidly. It is widely applied in the field of medical imaging, where computer-aided diagnosis based on image detection plays an important role.
In the course of realizing the inventive concept of this disclosure, the inventors found that in the related art, when image detection is used to identify a disease, the disease region in the image to be detected is marked with a bounding box, so the marking of the disease region suffers from poor readability and a poor marking effect.
Disclosure of Invention
In view of this, the present disclosure provides an image detection method and apparatus that mark the disease region with high readability and an effectively improved marking effect.
One aspect of the present disclosure provides an image detection method, including: acquiring an image to be detected; inputting the image to be detected into a preset detection model to obtain an image detection result, where the result indicates a predicted disease category and, for at least one image region in the image to be detected, a prediction probability for that category; and displaying the disease region in the image to be detected according to the prediction probability associated with each image region, and outputting the image detection result.
Optionally, displaying the disease region in the image to be detected according to the prediction probability associated with each image region includes: determining a display parameter for each image region according to its associated prediction probability; and displaying each image region according to its display parameter, so as to display the disease region in the image to be detected.
Optionally, the determining the display parameter for each of the image regions according to the prediction probability associated with each of the image regions includes: the display luminance for each of the image regions is determined based on a prediction probability associated with each of the image regions, wherein the higher the prediction probability associated with the image region, the higher the display luminance for the image region.
Optionally, the above inputting the image to be detected into a preset detection model to obtain an image detection result includes: extracting the characteristics of the image to be detected through the separable convolution layer of the detection model to obtain at least one image characteristic associated with the image to be detected; splicing the at least one image characteristic to obtain a spliced image characteristic; and identifying the spliced image features to obtain the image detection result.
Optionally, the method further includes: smoothing the disease regions with different smoothing coefficients, so as to display predicted diseases of different disease grades.
Optionally, the training method of the detection model includes: acquiring an original image sample with a disease category label; determining sparse disease categories with the sample number lower than a preset threshold according to the disease category labels; performing enhancement processing on the original image sample associated with the sparse disease category to obtain an enhanced image sample; and performing model training by using the enhanced image sample associated with the sparse disease category and the original image sample associated with the non-sparse disease category to obtain the detection model.
Optionally, the enhancement processing on the original image sample associated with the sparse disease category includes at least one of: filling the disease regions associated with the sparse disease category with random values; applying affine transformation to the disease regions associated with the sparse disease category; and fusing the original image sample associated with the sparse disease category with other original image samples.
Another aspect of the present disclosure provides an image detection apparatus including: the acquisition module is used for acquiring an image to be detected; a first processing module, configured to input the to-be-detected image into a preset detection model to obtain an image detection result, where the image detection result indicates a predicted disease category and indicates a prediction probability that at least one image region in the to-be-detected image is based on the predicted disease category; and the second processing module is used for displaying the disease areas in the image to be detected according to the prediction probability associated with each image area and outputting the image detection result.
Optionally, the second processing module includes: a first processing sub-module for determining display parameters for each of the image regions based on the prediction probability associated with each of the image regions; and the second processing submodule is used for displaying each image area according to the display parameters aiming at each image area so as to display the disease area in the image to be detected.
Optionally, the first processing sub-module includes: and a first processing unit configured to determine a display luminance for each of the image regions based on a prediction probability associated with each of the image regions, wherein the higher the prediction probability associated with the image region is, the higher the display luminance for the image region is.
Optionally, the first processing module includes: the third processing submodule is used for extracting the characteristics of the image to be detected through the separable convolution layer of the detection model to obtain at least one image characteristic associated with the image to be detected; the fourth processing submodule is used for carrying out splicing processing on the at least one image characteristic to obtain a spliced image characteristic; and the fifth processing submodule is used for identifying the spliced image features to obtain the image detection result.
Optionally, the second processing module further includes: a sixth processing submodule, configured to smooth the disease regions with different smoothing coefficients so as to display predicted diseases of different disease grades.
Optionally, the training process of the detection model includes: acquiring an original image sample with a disease category label; determining sparse disease categories with the sample number lower than a preset threshold according to the disease category labels; performing enhancement processing on the original image sample associated with the sparse disease category to obtain an enhanced image sample; and performing model training by using the enhanced image sample associated with the sparse disease category and the original image sample associated with the non-sparse disease category to obtain the detection model.
Optionally, the enhancement processing performed on the original image sample associated with the sparse disease category includes at least one of: filling the disease regions associated with the sparse disease category with random values; applying affine transformation to the disease regions associated with the sparse disease category; and fusing the original image sample associated with the sparse disease category with other original image samples.
Another aspect of the present disclosure provides an electronic device. The electronic device includes at least one processor and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to implement the method of the embodiment of the disclosure.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, implement the method of embodiments of the present disclosure.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions that when executed perform the method of embodiments of the present disclosure.
Embodiments of the present disclosure acquire an image to be detected; input it into a preset detection model to obtain an image detection result, where the result indicates a predicted disease category and, for at least one image region in the image to be detected, a prediction probability for that category; and display the disease regions according to the prediction probability associated with each image region while outputting the detection result. This scheme at least partially solves the technical problems of weak readability and poor marking effect of disease-region marks in the related art, effectively improving the readability of disease-region marks and the confidence of the image detection result as an aid to diagnosis.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1 schematically illustrates an image detection system architecture according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an image detection method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of image feature extraction using a detection model according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of another image detection method according to an embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of an image detection apparatus according to an embodiment of the present disclosure;
fig. 6 schematically illustrates a block diagram of an electronic device suitable for implementing the image detection method and apparatus according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It is to be understood that such description is merely illustrative and not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
Various embodiments of the present disclosure provide an image detection method and a detection apparatus to which the method can be applied. The method comprises the steps of obtaining an image to be detected, inputting the image to be detected into a preset detection model to obtain an image detection result, wherein the image detection result indicates a prediction disease category and indicates the prediction probability of at least one image area in the image to be detected based on the prediction disease category, then displaying the disease area in the image to be detected according to the prediction probability associated with each image area, and outputting the image detection result.
As shown in fig. 1, the system architecture 100 includes at least one terminal (several are shown, such as terminals 101, 102, 103, which may include, for example, a user terminal and a medical device) and a server 104 (specifically a service server for performing image detection, or a service server cluster, not shown in the figure). In the system architecture 100, the server 104 obtains an image to be detected from a terminal (e.g., terminal 101, 102, or 103) and inputs it into a preset detection model to obtain an image detection result, where the result indicates a predicted disease category and, for at least one image region in the image, a prediction probability for that category; the server then displays the disease region in the terminal (e.g., terminal 101, 102, or 103) according to the prediction probability associated with each image region, and outputs the image detection result.
The present disclosure will be described in detail below with reference to the drawings and specific embodiments.
Fig. 2 schematically shows a flowchart of an image detection method according to an embodiment of the present disclosure, which may be applied to a detection platform.
As shown in fig. 2, the method may include operations S210 to S230, for example.
In operation S210, an image to be detected is acquired.
In this embodiment, an image to be detected is acquired from a terminal, which may include a user terminal and a medical device. The user terminal may be a terminal device with data-processing and display functions, such as a smartphone, tablet, laptop, or desktop computer. One or more images may be detected, and each may contain a skin-lesion region of the user. A user can photograph a body part through an application in the terminal as the image to be detected, then upload it through an image-upload page to the service server (the server performing image detection).
The service server collects the images to be detected uploaded by the user through the background server corresponding to the user terminal or the medical device. The service server determines an image detection result for the image to be detected and returns it to the background server, so that the user can view the predicted disease category indicated by the result, together with the related disease region, through the display page of the user terminal or medical device.
Next, in operation S220, the image to be detected is input into a preset detection model to obtain an image detection result, where the result indicates a predicted disease category and, for at least one image region in the image to be detected, a prediction probability for that category.
In this embodiment, a preset detection model is used to recognize the image to be detected, specifically to recognize and localize the skin-lesion region in it, obtaining an image detection result that indicates a predicted disease category and, for at least one image region, a prediction probability for that category. The detection model may be a convolutional neural network, for example an Xception network, which builds on a residual-learning network (ResNet) and replaces its convolutional layers with separable convolutional layers to improve the classification accuracy and performance of the detection model.
Specifically, the image to be detected is input into the preset detection model, which performs feature extraction and feature recognition; the extracted image features may be initial skin-lesion features of the lesion region. The initial features are converted into lesion feature information at different granularities, the predicted disease category for the lesion region is determined from that multi-granularity information, and the prediction probabilities of the different image regions for the predicted category are then determined.
Optionally, feature extraction is performed on the image to be detected through the separable convolutional layers of the detection model to obtain at least one image feature associated with it. Fig. 3 schematically illustrates this extraction: in block 300, the input feature is channel-decomposed by a 1 × 1 convolution kernel (301 in fig. 3), each channel of the 1 × 1 output is filtered by its own 3 × 3 convolution kernel (302 to 306 in fig. 3), and the outputs of the 3 × 3 kernels are concatenated to form the output image feature.
Furthermore, the output feature obtained above can be fed back as the input feature, passing through the 1 × 1 convolution kernel (301 in fig. 3) again and repeating the operation of block 300 to obtain a new concatenated feature. That feature is then down-sampled by a pooling layer, and the down-sampled result is added to the input of the previous round of block 300 to form this round's output. The output of one round of block 300 serves as the input of the next; feature extraction is repeated in this way, and recognition is finally performed on the resulting features to obtain the image detection result.
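As a non-limiting illustration only (not the patent's implementation; function names, kernel values, and shapes are hypothetical), the separable-convolution block described above, a 1 × 1 channel-mixing convolution followed by per-channel 3 × 3 filtering with a residual add, can be sketched in NumPy:

```python
import numpy as np

def pointwise_conv(x, w):
    # 1x1 convolution: a per-pixel linear mix of channels
    # x: (H, W, C_in), w: (C_in, C_out)
    return x @ w

def depthwise_conv3x3(x):
    # 3x3 depthwise convolution (here a simple averaging kernel),
    # applied independently to each channel, with "same" padding
    H, W, C = x.shape
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += padded[i:i + H, j:j + W, :]
    return out / 9.0

def separable_block(x, w):
    # channel decomposition by the 1x1 kernel, per-channel 3x3
    # filtering, and a residual add as in the ResNet backbone
    mixed = pointwise_conv(x, w)
    return depthwise_conv3x3(mixed) + mixed

features = separable_block(np.random.rand(8, 8, 4), np.random.rand(4, 4))
```

In a real network each 3 × 3 kernel would be learned rather than fixed, and the block would be stacked with pooling as described above.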
The image detection result includes the predicted disease category indicated by the image to be detected and the prediction probabilities of the different image regions for that category. Those probabilities can be determined from the weight values of the different image features for the predicted category; the weights are obtained by backward derivation from the classification result of the detection model and represent each feature's contribution to recognizing the predicted category, i.e. they indicate each feature's prediction probability for that category. Because different image regions may have different image features, the prediction probability associated with each region can be determined from the weight values of its features.
Next, in operation S230, a disease region in the image to be detected is displayed according to the prediction probability associated with each image region, and an image detection result is output.
In the embodiment of the present disclosure, specifically, according to the prediction probability associated with each image region, the display parameter for each image region is determined to realize the display of different image regions based on the prediction probability. The greater the prediction probability based on the predicted disorder category, the higher the similarity of the image region to the lesion region of the predicted disorder category, and the higher the likelihood that the image region is a lesion region. According to the prediction probability associated with each image area, the display parameters aiming at different image areas are determined so as to display the disease areas in the image to be detected, which is favorable for improving the accuracy and readability of disease area display and improving the confidence coefficient of image detection as auxiliary diagnosis.
In this embodiment, the image to be detected is acquired; it is input into a preset detection model to obtain an image detection result, where the result indicates a predicted disease category and, for at least one image region, a prediction probability for that category; the disease regions are then displayed according to the prediction probability associated with each image region, and the detection result is output. Displaying different image regions based on their prediction probabilities, on the one hand, visually conveys through the display parameters how strongly each region correlates with the predicted disease category, which enhances the readability of the disease-region marks; on the other hand, compared with the related art that frames the disease region with a bounding box, this scheme can effectively display irregular disease regions, which improves the accuracy of disease-region display and raises the confidence of the auxiliary diagnosis.
The training method of the detection model includes: obtaining original image samples with disease-category labels; determining, from the labels, sparse disease categories whose sample counts fall below a preset threshold; enhancing the original image samples associated with the sparse categories to obtain enhanced image samples; and finally training the model with the enhanced samples of the sparse categories together with the original samples of the non-sparse categories. Because different disease categories have different incidence rates, the image samples associated with them may be unevenly distributed. To alleviate the prediction bias caused by this uneven distribution, the number of image samples for sparse categories can be increased by enhancing the original samples associated with them.
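A minimal sketch of the sparse-category selection step described above (the threshold value and the label names are invented for illustration):

```python
from collections import Counter

def find_sparse_categories(labels, threshold):
    # categories whose sample count falls below the preset
    # threshold are treated as sparse and marked for enhancement
    counts = Counter(labels)
    return {category for category, n in counts.items() if n < threshold}

labels = ["eczema"] * 50 + ["psoriasis"] * 3 + ["acne"] * 40
sparse = find_sparse_categories(labels, threshold=10)
```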
The enhancement processing on the original image samples associated with a sparse disease category may include, for example: filling the disease regions associated with the sparse category with random values, applying affine transformation to those disease regions, fusing the original image samples associated with the sparse category with other original image samples, and so on. In particular, filling the disease regions associated with sparse categories with random values increases the model's ability to recognize a disease category from different image features. Alternatively, the number of image samples associated with a sparse category is increased by applying flipping, scaling, rotation, affine transformation, lighting changes, projection changes, and similar operations to the disease regions associated with it.
Alternatively, different image samples are fused, and the fused image is used as a new training sample. The fused samples may contain the same or different disease categories, including both sparse and non-sparse categories. Before fusion, each image sample contains the lesion region of its own disease category, with a weight of 1 for that category; after fusion, the fused sample may contain lesion regions of several categories, with category weights that change accordingly and sum to 1. Fusing different image samples both improves sample diversity and adds samples of varying disease severity, which improves the robustness of the trained detection model. These diversified enhancement methods effectively enlarge the sample set, and in particular greatly increase the number of image samples associated with sparse disease categories. A sufficient number of image samples across disease categories makes it possible to train a multi-disease classification model.
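The fusion of two image samples with weighted labels can be sketched as follows (a hypothetical helper in the style of mixup-like blending; the blending ratio and label names are illustrative):

```python
import numpy as np

def fuse_samples(img_a, label_a, img_b, label_b, alpha=0.5):
    # blend two image samples; the fused label assigns each disease
    # category a weight, and the weights always sum to 1
    fused_img = alpha * img_a + (1 - alpha) * img_b
    fused_label = {label_a: alpha}
    fused_label[label_b] = fused_label.get(label_b, 0.0) + (1 - alpha)
    return fused_img, fused_label

fused_img, fused_label = fuse_samples(
    np.ones((2, 2)), "eczema", np.zeros((2, 2)), "acne", alpha=0.7)
```

If both samples carry the same category, the weights collapse to a single entry with weight 1, matching the unfused case.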
Fig. 4 schematically shows a flow chart of another image detection method according to an embodiment of the present disclosure.
As shown in fig. 4, operation S230 may include, for example, operations S410 to S420.
In operation S410, display parameters for each image region are determined according to the prediction probability associated with each image region.
In this embodiment, display parameters for each image region are determined from the prediction probability, based on the predicted disease category, of at least one image region in the image to be detected, so that the disease region can be displayed. Different display parameters produce different display effects, tying each region's appearance to its prediction probability; this improves both the readability of the disease-region display and its accuracy and confidence.
Alternatively, the display brightness of each image region may be determined from its associated prediction probability: the higher the prediction probability, the greater the display brightness. Disease regions are thus shown bright and non-disease regions dark, which accurately localizes the disease region and makes even irregular skin-disease regions stand out.
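The probability-to-brightness mapping can be sketched as follows (hypothetical helpers, assuming probabilities in [0, 1] and 8-bit grayscale output; the gamma parameter is an illustrative extension, not from the patent):

```python
import numpy as np

def probability_to_brightness(prob_map, gamma=1.0):
    # higher prediction probability -> brighter display: likely
    # lesion regions appear bright, non-lesion regions dark
    prob = np.clip(np.asarray(prob_map, dtype=float), 0.0, 1.0)
    return np.round(255.0 * prob ** gamma).astype(np.uint8)

def overlay(image_gray, prob_map):
    # darken non-lesion areas by scaling grayscale pixels with the
    # probability-derived brightness mask
    mask = probability_to_brightness(prob_map) / 255.0
    return (np.asarray(image_gray, dtype=float) * mask).astype(np.uint8)

shown = overlay([100, 100, 100], [0.0, 0.5, 1.0])
```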
Different image regions may have different image features. When determining the prediction probability associated with each image region, the prediction probabilities associated with the different image features may be determined according to the weight values of those features for the predicted disease category. These weight values are obtained by back-deriving from the classification result of the detection model and represent the contribution of each image feature to the prediction result; the back-derivation can be implemented with existing methods (for example, the Grad-CAM++ algorithm) and is not described further here. The different image regions are then fused according to the corresponding weight values, so that the disease region in the image to be detected is displayed in the form of a heat map.
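The weighted fusion of feature maps into a heat map can be sketched as follows. The weights are supplied directly here rather than derived by back-propagation with Grad-CAM++, which would require a trained network; the toy maps and weight values are assumptions:

```python
import numpy as np

def class_activation_heatmap(feature_maps, weights):
    """Fuse feature maps into a class-activation heat map.

    `feature_maps` has shape (C, H, W); `weights` has shape (C,) and
    represents each feature's contribution to the predicted disease
    category (in practice obtained by back-deriving the classification
    result, e.g. with Grad-CAM++).
    """
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                         # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

fm = np.stack([np.eye(3), np.ones((3, 3))])  # two toy 3x3 feature maps
w = np.array([0.8, 0.2])                     # assumed contribution weights
heat = class_activation_heatmap(fm, w)
```

The resulting `heat` array can be colorized and alpha-blended over the image to be detected to produce the thermodynamic-diagram display.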
Next, in operation S420, each image region is displayed according to the display parameters for each image region to realize displaying the disease region in the image to be detected.
In the embodiment of the present disclosure, specifically, each image region is displayed according to the determined display parameters for that region, so as to display the disease region in the image to be detected. Optionally, when a disease region in the image to be detected is displayed, smoothing processing is performed on the disease region with different smoothing coefficients so as to display predicted diseases of different disease grades. Specifically, when the change of a disease region predicted to improve needs to be displayed, the disease region is gradually faded using an image smoothing technique; increasing the smoothing coefficient over time visualizes the disease region evolving in the direction of improvement. When the change of a disease region predicted to deteriorate needs to be displayed, the displayed disease region is gradually sharpened using an image filtering technique; increasing the sharpening coefficient over time visualizes the disease region evolving in the direction of deterioration. By visualizing the evolution of the disease region, this design improves the readability of the disease-region display and increases its reference value for auxiliary diagnosis.
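A toy illustration of the two display directions, assuming a box blur for the smoothing (improvement) direction and unsharp masking for the sharpening (deterioration) direction; the specific kernels and coefficients are assumptions, not from the patent:

```python
import numpy as np

def smooth(region, coeff):
    """Box-blur `region`; the kernel grows with the smoothing coefficient,
    so increasing `coeff` over time gradually fades the disease region."""
    k = 2 * int(coeff) + 1
    pad = k // 2
    padded = np.pad(region, pad, mode="edge")
    out = np.zeros_like(region, dtype=np.float64)
    for i in range(region.shape[0]):
        for j in range(region.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def sharpen(region, coeff):
    """Unsharp masking: boost high-frequency detail by `coeff`, so
    increasing `coeff` over time gradually hardens the region's edges."""
    blurred = smooth(region, 1)
    return region + coeff * (region - blurred)

lesion = np.zeros((5, 5)); lesion[2, 2] = 1.0   # toy lesion response
faded = smooth(lesion, 1)      # improvement: peak softens toward background
crisped = sharpen(lesion, 2.0) # deterioration: peak is emphasized
```

Rendering successive frames with a growing coefficient produces the fading or sharpening animation described above.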
Fig. 5 schematically shows a block diagram of an image detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus may include an acquisition module 501, a first processing module 502, and a second processing module 503.
Specifically, the obtaining module 501 is configured to obtain an image to be detected; the first processing module 502 is configured to input the image to be detected into a preset detection model to obtain an image detection result, where the image detection result indicates a predicted disease category and indicates the prediction probability of at least one image region in the image to be detected under the predicted disease category; and the second processing module 503 is configured to display the disease region in the image to be detected according to the prediction probability associated with each image region, and to output the image detection result.
In the embodiment of the disclosure, an image to be detected is obtained; the image to be detected is input into a preset detection model to obtain an image detection result, where the image detection result indicates a predicted disease category and indicates the prediction probability of at least one image region in the image to be detected under the predicted disease category; and the disease region in the image to be detected is displayed according to the prediction probability associated with each image region, and the image detection result is output. Because different image regions in the image to be detected are displayed based on the prediction probability, on the one hand the degree of correlation between different image regions and the predicted disease category is presented intuitively through the display parameters, which enhances the readability of the disease-region marking; on the other hand, compared with related techniques that frame the disease region in the image to be detected with a bounding box, this scheme can effectively display irregular disease regions, which improves the accuracy of the disease-region display and increases the confidence of the auxiliary diagnosis.
As an alternative embodiment, the second processing module comprises: a first processing sub-module for determining display parameters for each image region in dependence on the prediction probabilities associated with each image region; and the second processing submodule is used for displaying each image area according to the display parameters aiming at each image area so as to display the disease area in the image to be detected.
As an alternative embodiment, the first processing submodule includes: a first processing unit for determining a display luminance for each image region according to a prediction probability associated with the image region, wherein the higher the prediction probability associated with the image region, the greater the display luminance for the image region.
As an alternative embodiment, the first processing module comprises: the third processing submodule is used for extracting the characteristics of the image to be detected through the separable convolution layer of the detection model to obtain at least one image characteristic associated with the image to be detected; the fourth processing submodule is used for carrying out splicing processing on at least one image characteristic to obtain spliced image characteristics; and the fifth processing submodule is used for identifying the spliced image features to obtain an image detection result.
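A numerical sketch of the third-to-fifth processing sub-modules: depthwise-separable feature extraction followed by splicing (concatenation) of the extracted features. The toy kernel values and shapes are assumptions; a production model would use a deep-learning framework rather than explicit loops:

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_w):
    """Depthwise-separable convolution: per-channel 3x3 filtering followed
    by a 1x1 cross-channel mix. x: (C, H, W); depthwise_k: (C, 3, 3);
    pointwise_w: (C_out, C). 'valid' padding, stride 1."""
    C, H, W = x.shape
    Hd, Wd = H - 2, W - 2
    depth = np.zeros((C, Hd, Wd))
    for c in range(C):                       # depthwise stage: each channel
        for i in range(Hd):                  # convolved with its own kernel
            for j in range(Wd):
                depth[c, i, j] = (x[c, i:i + 3, j:j + 3] * depthwise_k[c]).sum()
    # pointwise stage: 1x1 convolution mixes channels
    return np.tensordot(pointwise_w, depth, axes=([1], [0]))

# Toy: two feature branches extracted, then spliced before a classifier head
x = np.random.default_rng(0).standard_normal((3, 8, 8))
dk = np.ones((3, 3, 3)) / 9.0
pw = np.eye(2, 3)
feat_a = depthwise_separable_conv(x, dk, pw)        # (2, 6, 6)
feat_b = depthwise_separable_conv(x, dk * 0.5, pw)  # (2, 6, 6)
spliced = np.concatenate([feat_a.ravel(), feat_b.ravel()])  # splicing step
```

The `spliced` vector would then be fed to the recognition stage (the fifth processing submodule) to produce the image detection result.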
As an alternative embodiment, the second processing module further includes: a sixth processing submodule, configured to perform smoothing processing on the disease region with different smoothing coefficients so as to display predicted diseases of different disease grades.
As an alternative embodiment, the training process of the detection model includes: acquiring an original image sample with a disease category label; determining sparse disease categories with the number of samples lower than a preset threshold according to the disease category labels; performing enhancement processing on an original image sample associated with the sparse disease category to obtain an enhanced image sample; and performing model training by using the enhanced image sample associated with the sparse disease category and the original image sample associated with the non-sparse disease category to obtain a detection model.
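The sparse-category determination step can be sketched directly; the label names and the threshold value below are hypothetical:

```python
from collections import Counter

def find_sparse_categories(labels, threshold):
    """Return the disease categories whose sample count falls below
    `threshold` (these are the categories selected for enhancement)."""
    counts = Counter(labels)
    return {cat for cat, n in counts.items() if n < threshold}

# Hypothetical per-sample disease-category labels
labels = ["eczema"] * 50 + ["psoriasis"] * 40 + ["vitiligo"] * 3
sparse = find_sparse_categories(labels, threshold=10)
```

Only the samples labeled with a category in `sparse` are passed to the enhancement step; non-sparse categories keep their original samples.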
As an alternative embodiment, the enhancement processing is performed on the original image sample associated with the sparse disorder category, and includes at least one of the following: random value filling processing is carried out on disease areas associated with sparse disease categories; performing affine transformation processing on disease condition regions associated with sparse disease condition categories; and carrying out fusion processing on the original image sample associated with the sparse disease category and other original image samples.
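Two of the listed enhancement options, sketched minimally. The random seed, value range, and the flip-plus-rotation stand-in for a full affine warp are assumptions; a real pipeline would typically use OpenCV or scipy.ndimage for general affine transforms:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_fill(image, region_mask, low=0, high=256):
    """Random-value filling of the lesion region (first listed option)."""
    out = image.copy()
    out[region_mask] = rng.integers(low, high, size=int(region_mask.sum()))
    return out

def affine_lesion(image, flip=True, k_rot=1):
    """Minimal affine-style transform (second listed option): horizontal
    flip plus a 90-degree rotation. Scale/shear components of a general
    affine warp are omitted in this sketch."""
    out = np.fliplr(image) if flip else image
    return np.rot90(out, k_rot)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True  # toy lesion mask
aug_fill = random_fill(img, mask)      # pixels outside the mask are untouched
aug_affine = affine_lesion(img)
```

Each augmented sample keeps its original disease-category label, while the fusion option (the third listed option) would instead redistribute label weights as described earlier for sample fusion.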
Any plurality of the obtaining module 501, the first processing module 502 and the second processing module 503 may be combined and implemented in one module, or any one of them may be split into a plurality of modules; at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. Alternatively, at least one of the obtaining module 501, the first processing module 502 and the second processing module 503 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package or an Application Specific Integrated Circuit (ASIC); by any other reasonable manner of integrating or packaging circuits in hardware or firmware; by any one of the three implementation manners of software, hardware and firmware; or by a suitable combination of any of them. Alternatively, at least one of the obtaining module 501, the first processing module 502 and the second processing module 503 may be implemented at least partially as a computer program module which, when executed, performs the corresponding functions.
Fig. 6 schematically illustrates a block diagram of an electronic device suitable for implementing the image detection method and apparatus according to an embodiment of the present disclosure. The computer system illustrated in FIG. 6 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 6, a computer system 600 according to an embodiment of the present disclosure includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 601 may also include onboard memory for caching purposes. The processor 601 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 603, various programs and data necessary for the operation of the system 600 are stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 602 and/or RAM 603. It is to be noted that the programs may also be stored in one or more memories other than the ROM 602 and RAM 603. The processor 601 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Optionally, the system 600 may also include an input/output (I/O) interface 605, which is also connected to the bus 604. The system 600 may also include one or more of the following components connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as necessary.
Alternatively, the method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the processor 601, performs the above-described functions defined in the system of the embodiments of the present disclosure. Alternatively, the systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
Alternatively, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, a computer-readable storage medium may include the ROM 602 and/or RAM 603 described above, and/or one or more memories other than the ROM 602 and RAM 603.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. An image detection method, comprising:
acquiring an image to be detected;
inputting the image to be detected into a preset detection model to obtain an image detection result, wherein the image detection result indicates a prediction disease category and indicates the prediction probability of at least one image area in the image to be detected based on the prediction disease category;
and displaying the disease areas in the image to be detected according to the prediction probability associated with each image area, and outputting the image detection result.
2. The method of claim 1, wherein the displaying the disease areas in the image to be detected according to the prediction probability associated with each image area comprises:
determining display parameters for each of the image regions based on the prediction probability associated with each of the image regions;
and displaying each image area according to the display parameters aiming at each image area so as to display the disease area in the image to be detected.
3. The method of claim 2, wherein said determining display parameters for each of said image regions based on a prediction probability associated with each of said image regions comprises:
determining a display brightness for each of the image regions based on a prediction probability associated with the image region, wherein the higher the prediction probability associated with the image region, the greater the display brightness for the image region.
4. The method of claim 1, wherein inputting the image to be detected into a preset detection model to obtain an image detection result comprises:
performing feature extraction on the image to be detected through the separable convolution layer of the detection model to obtain at least one image feature associated with the image to be detected;
splicing the at least one image feature to obtain a spliced image feature;
and identifying the spliced image features to obtain the image detection result.
5. The method of claim 1, further comprising:
and smoothing the disease region by using different smoothing coefficients to realize the display of the predicted diseases with different disease grades.
6. The method of any of claims 1 to 5, wherein the training method of the detection model comprises:
acquiring an original image sample with a disease category label;
determining a sparse disease category with the sample number lower than a preset threshold value according to the disease category label;
performing enhancement processing on the original image sample associated with the sparse disease category to obtain an enhanced image sample;
and performing model training by using the enhanced image sample associated with the sparse disease category and the original image sample associated with the non-sparse disease category to obtain the detection model.
7. The method of claim 6, wherein the enhancing the original image samples associated with the sparse condition category comprises at least one of:
performing random value filling processing on the disease region associated with the sparse disease category;
performing affine transformation processing on a disease region associated with the sparse disease category;
and performing fusion processing on the original image sample associated with the sparse disease category and other original image samples.
8. An image detection apparatus comprising:
the acquisition module is used for acquiring an image to be detected;
the first processing module is used for inputting the image to be detected into a preset detection model to obtain an image detection result, wherein the image detection result indicates a prediction disease category and indicates the prediction probability of at least one image area in the image to be detected based on the prediction disease category;
and the second processing module is used for displaying the disease areas in the image to be detected according to the prediction probability associated with each image area and outputting the image detection result.
9. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
CN202011241166.4A 2020-11-09 2020-11-09 Image detection method and device Pending CN113808068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011241166.4A CN113808068A (en) 2020-11-09 2020-11-09 Image detection method and device


Publications (1)

Publication Number Publication Date
CN113808068A true CN113808068A (en) 2021-12-17

Family

ID=78943497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011241166.4A Pending CN113808068A (en) 2020-11-09 2020-11-09 Image detection method and device

Country Status (1)

Country Link
CN (1) CN113808068A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190540A (en) * 2018-06-06 2019-01-11 腾讯科技(深圳)有限公司 Biopsy regions prediction technique, image-recognizing method, device and storage medium
CN110148192A (en) * 2019-04-18 2019-08-20 上海联影智能医疗科技有限公司 Medical image imaging method, device, computer equipment and storage medium
CN110689525A (en) * 2019-09-09 2020-01-14 上海中医药大学附属龙华医院 Method and device for recognizing lymph nodes based on neural network
WO2020215557A1 (en) * 2019-04-24 2020-10-29 平安科技(深圳)有限公司 Medical image interpretation method and apparatus, computer device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination