CN114627067A - Wound area measurement and auxiliary diagnosis and treatment method based on image processing - Google Patents
- Publication number
- CN114627067A CN114627067A CN202210220853.0A CN202210220853A CN114627067A CN 114627067 A CN114627067 A CN 114627067A CN 202210220853 A CN202210220853 A CN 202210220853A CN 114627067 A CN114627067 A CN 114627067A
- Authority
- CN
- China
- Prior art keywords
- wound
- image
- outline
- area
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0002, G06T7/0012 — Image analysis; inspection of biomedical images
- A61B5/1072, A61B5/1075, A61B5/1079 — Measuring physical dimensions of the body (distances on the body; non-invasive methods; optical or photographic means)
- G06F18/24 — Pattern recognition; classification techniques
- G06N3/045 — Neural networks; combinations of networks
- G06T5/20, G06T5/30, G06T5/40, G06T5/70, G06T5/73, G06T5/90 — Image enhancement or restoration (local operators; erosion or dilatation; histogram techniques; denoising/smoothing; deblurring/sharpening; dynamic range modification)
- G06T7/13, G06T7/136 — Segmentation; edge detection (including thresholding)
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T2207/20004, /20081, /20084, /20092, /20104, /20112, /20132, /20172, /20192 — Algorithmic details (adaptive processing; training/learning; artificial neural networks; interactive processing; region of interest; segmentation details; cropping; enhancement details; edge enhancement/preservation)
- G06T2207/30004, /30088 — Biomedical image processing; skin/dermal
Abstract
The invention discloses a wound area measurement and auxiliary diagnosis and treatment method based on image processing. By applying image processing, in particular edge detection of the wound texture, the method makes the calculated area equivalent to the pixel level of the image, so the result is more accurate. After the image recognition module accurately identifies the wound type, it can further identify whether the wound is a composite wound and, if so, which wound types it combines; a sufficiently accurate recognition result provides an effective reference for clinical medication and nursing.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a wound area measurement and auxiliary diagnosis and treatment method based on image processing.
Background
Traditionally, a wound is measured with a ruler: the maximum width and maximum length are measured and their product is taken. Because many wounds are extremely irregular in shape, this inevitably introduces large errors.
At present, methods already exist that identify and calculate wound areas through machine vision by processing wound pictures. However, the area estimated by the algorithm model is in essence the number of pixels the wound occupies in the picture, not the real-world wound area. Some methods do map the wound area in the picture to the real wound area, but they ignore a problem: the wound picture is taken by a device at some distance from the wound, so the wound area in the picture is not equal to the real wound area. That distance is difficult to control, and different pictures are certainly taken at different distances and angles from the camera, so errors are difficult to avoid.
Fundamentally, the aim of measuring the wound area is to provide, together with the patient's chief complaint, a reference for subsequent clinical medication and treatment. The traditional approach ignores the possibility that the patient's wound is a composite wound (that is, wounds caused by multiple overlapping factors); the chief complaint alone cannot establish this with certainty, yet the detailed causes of the wound must be clear in clinical treatment to guide care.
Disclosure of Invention
The invention mainly aims to provide a wound area measuring and auxiliary diagnosis and treatment method based on image processing, which can effectively solve the problems in the background technology.
In order to achieve the purpose, the invention adopts the technical scheme that:
the utility model provides a wound area measurement and supplementary method of diagnosing based on image processing, includes and establishes data set module, image recognition module, image enhancement module, image segmentation and contour extraction module, area calculation module, its characterized in that includes following concrete step:
step 1, establishing a data set module
First, for the patient's wound, draw a rectangle on the skin in a color R such that the rectangle just contains the whole wound; measure the length and width of the rectangle with a flexible rule and calculate its real area A1, then photograph the rectangle as the basis of the data set;
Drawing the rectangular outer frame in R is equivalent to manually creating a contour: a contour is in essence an abrupt change of pixels in a local area, and since R differs from both the normal skin color and the wound color, the rectangular frame acts as a peripheral contour;
Data collection: the data generally comprise 2 parts, wounds caused by a single factor and composite wounds caused by multiple factors; the data are sourced mainly from hospital surgical departments;
Establish the 1st data set: divide patients' wounds into 5 types (skin and soft tissue suture incision, suppurative infection, open injury, burn and scald, and sting), label the images with these 5 classes, and include them in data set dataset1;
Establish the 2nd data set: if a patient's wound is of composite type it is included in this data set. Identification rule: based on the 5 classes above, only composite wounds in which exactly 2 wound types overlap are considered, giving 4 + 3 + 2 + 1 = 10 composite classes; the images are labeled accordingly and included in data set dataset2;
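The combinatorial count in the rule above can be checked directly: 2-type composites over 5 base classes are the unordered pairs C(5, 2) = 4 + 3 + 2 + 1 = 10. A minimal sketch (class names paraphrased from this step):

```python
from itertools import combinations

# The 5 base wound types of dataset 1 (names paraphrased from the text)
WOUND_TYPES = [
    "suture incision",
    "suppurative infection",
    "open injury",
    "burn and scald",
    "sting",
]

# dataset 2 considers only composites of exactly 2 overlapping wound types,
# i.e. the unordered pairs: C(5, 2) = 4 + 3 + 2 + 1 = 10 classes
COMPOSITE_CLASSES = list(combinations(WOUND_TYPES, 2))
```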
step 2, image recognition module
Construct a convolutional neural network model with ResNet101 as the backbone and train a deep learning model for image classification. To address the small amount of wound image data and enhance the network's classification ability, apply data augmentation to each wound image: cropping, flipping, and mirroring;
step 3, image enhancement module (for image enhancement processing of output picture of image identification module)
A. Denoising, namely, because the wound picture is shot by hardware equipment, and an imaging sensor of the equipment is influenced by the environment, the finally shot picture contains interference information, and the noise of the image is reduced by using a self-adaptive local denoising filter;
B. histogram equalization, namely performing histogram equalization on the image obtained by noise reduction to enhance the contrast of the image, then setting a threshold value, performing binarization processing to highlight the outline of the image,
C. and the image sharpening process is carried out, so that the edge contour of the wound is further enhanced,
D. performing morphological processing on an image, namely expanding and corroding, wherein the middle part of a wound possibly has a wound scab condition, the color of the scab part can be slightly different from that of the periphery, the edge of wound scab tissues can be highlighted in the middle part of the wound through the processing of the steps, so that the edge lines of small areas need to be removed, the expansion and the corrosion are actually closed operation of image morphology, the outline can be smoother, narrow gaps can be closed, small holes and small grooves are filled, and the most obvious edge outline of the outermost periphery of the wound image is finally reserved through the processing;
step 4, image segmentation and contour extraction module
A. First, segment the normal region and the contour with an adaptive threshold segmentation algorithm;
B. Next, extract the contour by calling OpenCV's findContours() function with mode RETR_EXTERNAL, so that only the outermost contour, i.e. the rectangular frame, is extracted;
C. Draw the contour in color, i.e. mark and highlight it, by calling the OpenCV function drawContours();
D. Calculate the area inside the contour by calling the OpenCV function contourArea(), obtaining the rectangle area A2;
E. Crop the image, discarding the outermost rectangular frame, so that the outermost remaining contour is the wound contour;
F. Call findContours() with the same mode again to extract the outermost contour, which is now the edge contour of the wound; mark it with a specific color and calculate the area inside the wound contour as B2;
step 5, area calculation module
The above procedure has obtained A1, the real area of the rectangle whose outermost periphery contains the entire wound, and, on the image, B2, the area inside the wound contour, and A2, the area inside the peripheral rectangle contour;
the actual area of the wound is therefore (B2/A2) × A1.
Compared with the prior art, the invention has the following beneficial effects:
1. In the invention, through image processing, in particular edge detection of the wound texture, the calculated area is equivalent to the pixel level of the image, and the result is more accurate.
2. In the invention, after the image recognition module accurately identifies the wound type, it can further identify whether the wound is a composite wound and which wound types it combines; an accurate recognition result provides an effective reference for clinical medication and nursing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flow chart of a wound area measurement and auxiliary diagnosis and treatment method based on image processing according to the present invention.
Detailed Description
To make the technical means, creative features, objectives, and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
In the description of the present invention, it should be noted that the terms "upper", "lower", "inner", "outer", "front", "rear", "both ends", "one end", "the other end", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "disposed," "connected," and the like are to be construed broadly, such as "connected," which may be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in a specific case to those of ordinary skill in the art.
The technical scheme of the invention is further explained by combining the attached drawings.
Example 1
As shown in fig. 1, a wound area measurement and auxiliary diagnosis and treatment method based on image processing comprises a data set establishment module, an image recognition module, an image enhancement module, an image segmentation and contour extraction module, and an area calculation module, and is characterized by comprising the following specific steps:
step 1, establishing a data set module
First, for the patient's wound, draw a rectangle on the skin in a color R such that the rectangle just contains the whole wound; measure the length and width of the rectangle with a flexible rule and calculate its real area A1, then photograph the rectangle as the basis of the data set;
Drawing the rectangular outer frame in R is equivalent to manually creating a contour: a contour is in essence an abrupt change of pixels in a local area, and since R differs from both the normal skin color and the wound color, the rectangular frame acts as a peripheral contour;
Data collection: the data generally comprise 2 parts, wounds caused by a single factor and composite wounds caused by multiple factors; the data are sourced mainly from hospital surgical departments;
Establish the 1st data set: divide patients' wounds into 5 types (skin and soft tissue suture incision, suppurative infection, open injury, burn and scald, and sting), label the images with these 5 classes, and include them in data set dataset1;
Establish the 2nd data set: if a patient's wound is of composite type it is included in this data set. Identification rule: based on the 5 classes above, only composite wounds in which exactly 2 wound types overlap are considered, giving 4 + 3 + 2 + 1 = 10 composite classes; the images are labeled accordingly and included in data set dataset2;
In subsequent processing, the image is recognized in 2 stages. The stage-1 result is one of the 5 wound types; the stage-2 result is whether the wound is a composite wound that includes that type. For example, if a patient's wound is a burn overlapped with a sting and the burn area is larger, stage 1 will most likely recognize it as a burn; stage 2 then determines whether it is a composite wound including the burn category, and the result is yes.
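The 2-stage recognition described above can be sketched as a simple cascade; the two classifier arguments here are hypothetical stand-ins for the trained models of step 2, not part of the original disclosure:

```python
def two_stage_recognition(image, stage1_classifier, stage2_classifier):
    """Two-stage wound recognition.

    stage1_classifier: image -> one of the 5 base wound types (dominant type)
    stage2_classifier: image -> True if the wound is a composite wound
    Both classifiers are hypothetical stand-ins for the trained
    ResNet101-based models described in step 2.
    """
    wound_type = stage1_classifier(image)    # stage 1: dominant wound type
    is_composite = stage2_classifier(image)  # stage 2: composite or not
    return wound_type, is_composite
```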
Step 2, image recognition module
Construct a convolutional neural network model with ResNet101 as the backbone and train a deep learning model for image classification. To address the small amount of wound image data and enhance the network's classification ability, apply data augmentation to each wound image: cropping, flipping, and mirroring;
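The three augmentations named in this step (cropping, flipping/turning, mirroring) can be sketched with plain NumPy slicing; the crop fraction is an assumed parameter, not specified in the text:

```python
import numpy as np

def augment_wound_image(img, crop_frac=0.1):
    """Return cropped, flipped, and mirrored variants of one wound image.

    A minimal sketch of the augmentations named in step 2;
    crop_frac (fraction trimmed from each border) is an assumption.
    """
    h, w = img.shape[:2]
    dy, dx = int(h * crop_frac), int(w * crop_frac)
    cropped = img[dy:h - dy, dx:w - dx]   # central crop
    flipped = img[::-1, :]                # vertical flip ("turning")
    mirrored = img[:, ::-1]               # horizontal mirror
    return cropped, flipped, mirrored
```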
The purpose of image recognition is to determine which wound type a new patient's wound belongs to (one of the types described in module 1), and then whether the picture shows a composite wound.
Identifying the exact wound type provides a reference for clinical medication and subsequent treatment. Early in clinical treatment, medical staff generally ask the patient how the wound occurred, and the wound is usually one of the common types; however, it may also be a composite wound formed by several overlapping wounds, and the patient's account is not necessarily clear. For example, suppose an old wound X covers a large area and a new, smaller wound Y is later inflicted on top of it; the patient's complaint concerns Y, but clinical treatment cannot ignore X, because different wounds call for different medication, and possible adverse reactions of the old wound to a drug must also be considered. Through image recognition, the method can determine not only the wound type but also whether the wound is a composite of two types, providing a necessary reference for clinical treatment.
Feasibility of image recognition:
The first part, identifying the wound type, is certainly feasible. Is the second part, the composite wound, identifiable? If a wound is caused by a single factor, its surface color should not vary greatly and its edge contour should be relatively clear. Overlapping wounds caused by multiple factors, however, differ in color (the wounds occurred at different times, so their scabs differ in color), and the middle of the wound contains more than a single boundary contour. The two cases can therefore be distinguished: a convolutional neural network extracts image features, which are essentially the salient contours of the image, i.e. related to color and contour, and classifies images by the similarity of their edge structure.
Step 3, image enhancement module (performs image enhancement on the pictures output by the image recognition module)
A. Denoising: because the wound picture is captured by hardware equipment whose imaging sensor is affected by the environment, the picture contains interference information; reduce the image noise with an adaptive local denoising filter;
B. Histogram equalization: equalize the histogram of the denoised image to enhance its contrast, then set a threshold and binarize the image to highlight its outline;
C. Image sharpening, to further enhance the edge contour of the wound;
D. Morphological processing, i.e. dilation followed by erosion: the middle of the wound may be scabbed, and because the scab color can differ slightly from its surroundings, the preceding steps may highlight the edges of scab tissue inside the wound; these small-area edge lines need to be removed;
The image subsequently undergoes region segmentation and contour extraction, and image enhancement improves the segmentation accuracy; the main purpose of this enhancement preprocessing is to remove basic noise and make the edge contours prominent.
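Steps B and D above can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the patent's implementation: the adaptive denoising filter and sharpening are omitted, the threshold value is assumed, and closing uses an assumed 3x3 cross structuring element (with wrap-around at the borders, acceptable for a sketch):

```python
import numpy as np

def equalize_hist(gray):
    # Step B: map each intensity through the normalized cumulative histogram
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    return (cdf[gray] * 255).astype(np.uint8)

def binarize(gray, thresh=128):
    # Step B: threshold to highlight the outline (threshold is an assumption)
    return (gray >= thresh).astype(np.uint8)

def close_mask(mask, iterations=1):
    # Step D: morphological closing = dilation then erosion, 3x3 cross
    def neighbor_op(m, op):
        out = m.copy()
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            out = op(out, np.roll(m, shift, axis=(0, 1)))
        return out
    for _ in range(iterations):
        mask = neighbor_op(mask, np.maximum)   # dilate
    for _ in range(iterations):
        mask = neighbor_op(mask, np.minimum)   # erode
    return mask
```

Closing a binary mask this way fills small holes (e.g. scab-edge gaps inside the wound) while preserving the dominant outer contour.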
Step 4, image segmentation and contour extraction module
A. First, segment the normal region and the contour with an adaptive threshold segmentation algorithm;
B. Next, extract the contour by calling OpenCV's findContours() function with mode RETR_EXTERNAL, so that only the outermost contour, i.e. the rectangular frame, is extracted;
C. Draw the contour in color, i.e. mark and highlight it, by calling the OpenCV function drawContours();
D. Calculate the area inside the contour by calling the OpenCV function contourArea(), obtaining the rectangle area A2;
E. Crop the image, discarding the outermost rectangular frame, so that the outermost remaining contour is the wound contour;
F. Call findContours() with the same mode again to extract the outermost contour, which is now the edge contour of the wound; mark it with a specific color and calculate the area inside the wound contour as B2;
Find the main body contours of the image and mark them in a specified color to highlight them; after the contours are marked, statistically calculate the pixel area of the region enclosed by each contour.
Analysis: after the above image enhancement processing, there should be 2 main contours in the picture: the outermost rectangle, and the contour of the wound area inside the rectangle.
Step 5, area calculation module
The above procedure has obtained A1, the real area of the rectangle whose outermost periphery contains the entire wound, and, on the image, B2, the area inside the wound contour, and A2, the area inside the peripheral rectangle contour;
the true area of the wound is therefore (B2/A2) × A1.
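The final formula is a single proportional scaling, which can be sketched directly (a1 in cm², pixel areas as counts; the numeric values in the usage note are hypothetical):

```python
def true_wound_area(a1, b2_px, a2_px):
    """Scale the measured rectangle area A1 by the pixel ratio B2/A2.

    a1:    real area of the marked rectangle, from the flexible rule
    b2_px: pixel area inside the wound contour on the image (B2)
    a2_px: pixel area inside the peripheral rectangle contour (A2)
    """
    if a2_px <= 0:
        raise ValueError("rectangle pixel area must be positive")
    return a1 * b2_px / a2_px
```

For example, a 50 cm² marked rectangle whose wound occupies 2000 of 10000 rectangle pixels would yield a true wound area of 10 cm².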
In summary, the measurement method uses image processing to map the real wound area onto the pixel area of the image, making the measurement result more accurate. The method builds image data sets so that medical image data can be fully exploited: the image data serve both to measure the wound area and to identify the wound type, providing a reference for clinical diagnosis and treatment.
The foregoing shows and describes the general principles and main features of the present invention, together with its advantages. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
Claims (1)
1. A wound area measurement and auxiliary diagnosis and treatment method based on image processing, comprising a data set establishment module, an image recognition module, an image enhancement module, an image segmentation and contour extraction module, and an area calculation module, characterized in that the method comprises the following concrete steps:
step 1, establishing a data set module
Firstly, for a patient's wound, a rectangle is drawn on the skin with a color R such that the rectangle just contains the entire wound; the length and width of the rectangle are then measured with a flexible rule and its true area A1 is calculated; finally the rectangle is photographed as the basis of the data set;
Drawing the rectangular outer frame in R is equivalent to manually establishing a contour: the essence of a contour is an abrupt change of pixels within a local area, and since R differs from both the normal skin color and the wound color, the rectangular outer frame acts as a peripheral contour;
Data collection: the data generally comprise 2 parts, wounds caused by a single factor and composite wounds caused by multiple factors; the data mainly come from hospital surgery departments;
Establishing the 1st data set: patients' wounds are divided into 5 wound types (skin and soft tissue sutured incision, suppurative infection, open injury wound, burn and scald, and sting); the images are labeled with these 5 different classifications and incorporated into data set dataset1;
Establishing the 2nd data set: if a patient's wound is of composite type, it is included in this data set. Identification rule: classifying according to the above 5 classes and considering only composite wounds in which 2 wound types overlap, there are 4+3+2+1 = 10 types; the images are labeled accordingly and incorporated into data set dataset2;
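The 4+3+2+1 count is simply the number of unordered pairs of the 5 base classes; a quick check (class names abbreviated here for illustration):

```python
from itertools import combinations

# The 5 base wound types of dataset1 (names shortened for brevity).
types = ["sutured incision", "suppurative infection", "open injury",
         "burn and scald", "sting"]

# A composite wound overlapping exactly 2 of the 5 types: C(5, 2) pairs.
pairs = list(combinations(types, 2))
print(len(pairs))  # 10, i.e. 4 + 3 + 2 + 1
```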
step 2, image recognition module
A convolutional neural network model is constructed with ResNet101 as the basic network framework, and a deep learning model is trained for image classification; to address the small amount of wound image data and enhance the network's classification ability, each wound image is additionally processed with data enhancement methods, including image cropping, flipping and mirroring;
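A minimal sketch of this data-enhancement step, assuming images arrive as numpy arrays of shape (H, W, 3); the crop margin of one tenth per side is an illustrative choice, not specified by the patent:

```python
import numpy as np

def augment(image):
    """Return cropped, flipped, and mirrored variants of one wound image."""
    h, w = image.shape[:2]
    crop = image[h // 10: h - h // 10, w // 10: w - w // 10]  # central crop
    flip = image[::-1, :, :]    # vertical flip
    mirror = image[:, ::-1, :]  # horizontal mirror
    return [crop, flip, mirror]

img = np.zeros((100, 120, 3), dtype=np.uint8)
variants = augment(img)
print([v.shape for v in variants])  # [(80, 96, 3), (100, 120, 3), (100, 120, 3)]
```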
step 3, image enhancement module (for image enhancement processing of output picture of image identification module)
A. Denoising, namely, because the wound picture is shot by hardware equipment, and an imaging sensor of the equipment is influenced by the environment, the finally shot picture contains interference information, and the noise of the image is reduced by using a self-adaptive local denoising filter;
B. histogram equalization, namely performing histogram equalization on the image obtained by noise reduction to enhance the contrast of the image, then setting a threshold value, performing binarization processing to highlight the outline of the image,
C. and the image sharpening process is carried out, so that the edge contour of the wound is further enhanced,
D. performing morphological processing on an image, namely expanding and corroding, wherein the middle part of a wound possibly has a wound scab condition, the color of the scab part can be slightly different from that of the periphery, the edge of wound scab tissues can be highlighted in the middle part of the wound through the processing of the steps, so that the edge lines of small areas need to be removed, the expansion and the corrosion are actually closed operation of image morphology, the outline can be smoother, narrow gaps can be closed, small holes and small grooves are filled, and the most obvious edge outline of the outermost periphery of the wound image is finally reserved through the processing;
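The effect of the closing in step D (dilation, then erosion) can be sketched without OpenCV, using plain numpy shifts and a 3x3 square structuring element; both the element and the zero border padding are assumptions for illustration (zero padding slightly shrinks the image border on erosion, which is ignored here):

```python
import numpy as np

def _windows(img):
    """Stack the 3x3 neighborhood of every pixel (zero-padded borders)."""
    p = np.pad(img, 1, mode="constant", constant_values=0)
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def dilate(img):
    return _windows(img).max(axis=0)

def erode(img):
    return _windows(img).min(axis=0)

def close(img):
    # Morphological closing: dilation followed by erosion.
    return erode(dilate(img))

# A wound-like binary blob with a small interior hole (e.g. a scab edge):
img = np.zeros((7, 7), dtype=np.uint8)
img[1:6, 1:6] = 1
img[3, 3] = 0            # the small hole
closed = close(img)
print(closed[3, 3])      # 1: the hole is filled, only the outer contour remains
```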
step 4, image segmentation and contour extraction module
A. Firstly, the normal region and the contour are segmented with an adaptive threshold segmentation algorithm;
B. Next, contour extraction is carried out: OpenCV's findContours() function is called with mode RETR_EXTERNAL, so that only the outermost contour, i.e. the rectangular frame, is extracted;
C. The contour is drawn in color, i.e. marked and highlighted, by calling the OpenCV function drawContours();
D. The area inside the contour is calculated by calling the OpenCV function contourArea(), giving the rectangle's pixel area A2;
E. The image is cropped to discard the outermost rectangular frame, so that the outermost remaining contour is the wound's contour;
F. findContours() is called again with the same mode for outermost-contour extraction, i.e. the edge contour of the wound is extracted and marked with a specific color, and the pixel area inside the wound contour is calculated as B2;
step 5, area calculation module
The above procedure has obtained the true area of the outermost rectangle containing the entire wound as A1, the pixel area inside the wound contour on the image as B2, and the pixel area inside the peripheral rectangle contour as A2;
the true area of the wound is therefore (B2/A2) × A1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210220853.0A CN114627067B (en) | 2022-03-08 | 2022-03-08 | Wound area measurement and auxiliary diagnosis and treatment method based on image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114627067A true CN114627067A (en) | 2022-06-14 |
CN114627067B CN114627067B (en) | 2024-06-21 |
Family
ID=81900849
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210220853.0A Active CN114627067B (en) | 2022-03-08 | 2022-03-08 | Wound area measurement and auxiliary diagnosis and treatment method based on image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114627067B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023269A (en) * | 2016-05-16 | 2016-10-12 | 北京大学第医院 | Method and device for estimating wound area |
US20180098727A1 (en) * | 2015-12-30 | 2018-04-12 | James G. Spahn | System, apparatus and method for assessing wound and tissue conditions |
CN109685739A (en) * | 2018-12-25 | 2019-04-26 | 中国科学院苏州生物医学工程技术研究所 | Wound surface image processing method and the wound surface treatment system for using this method |
CN111311608A (en) * | 2020-02-05 | 2020-06-19 | 方军 | Method, apparatus and computer-readable storage medium for assessing wounds |
Non-Patent Citations (1)
Title |
---|
刘春晖;樊瑜波;许燕;: "Quantitative measurement method for three-dimensional body surface injury area based on a single camera" (基于单摄像头的三维体表损伤面积定量测量方法), Chinese Journal of Biomedical Engineering (中国生物医学工程学报), no. 01, 20 February 2018 (2018-02-20) *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116705325A (en) * | 2023-06-26 | 2023-09-05 | 国家康复辅具研究中心 | Wound infection risk assessment method and system |
CN116705325B (en) * | 2023-06-26 | 2024-01-19 | 国家康复辅具研究中心 | Wound infection risk assessment method and system |
CN116965843A (en) * | 2023-09-19 | 2023-10-31 | 南方医科大学南方医院 | Mammary gland stereotactic system |
CN116965843B (en) * | 2023-09-19 | 2023-12-01 | 南方医科大学南方医院 | Mammary gland stereotactic system |
CN117409002A (en) * | 2023-12-14 | 2024-01-16 | 常州漫舒医疗科技有限公司 | Visual identification detection system for wounds and detection method thereof |
CN117877691A (en) * | 2024-03-13 | 2024-04-12 | 四川省医学科学院·四川省人民医院 | Intelligent wound information acquisition system based on image recognition |
CN117877691B (en) * | 2024-03-13 | 2024-05-07 | 四川省医学科学院·四川省人民医院 | Intelligent wound information acquisition system based on image recognition |
CN118039087A (en) * | 2024-04-15 | 2024-05-14 | 青岛山大齐鲁医院(山东大学齐鲁医院(青岛)) | Breast cancer prognosis data processing method and system based on multidimensional information |
CN118039087B (en) * | 2024-04-15 | 2024-06-07 | 青岛山大齐鲁医院(山东大学齐鲁医院(青岛)) | Breast cancer prognosis data processing method and system based on multidimensional information |
Also Published As
Publication number | Publication date |
---|---|
CN114627067B (en) | 2024-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114627067A (en) | Wound area measurement and auxiliary diagnosis and treatment method based on image processing | |
CN109859203B (en) | Defect tooth image identification method based on deep learning | |
WO2021082691A1 (en) | Segmentation method and apparatus for lesion area of eye oct image, and terminal device | |
CN111798425B (en) | Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning | |
US20240074658A1 (en) | Method and system for measuring lesion features of hypertensive retinopathy | |
CN108765392B (en) | Digestive tract endoscope lesion detection and identification method based on sliding window | |
Al-Fahdawi et al. | A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology | |
CN111462049A (en) | Automatic lesion area form labeling method in mammary gland ultrasonic radiography video | |
CN113643353B (en) | Measurement method for enhancing resolution of vascular caliber of fundus image | |
CN112102332A (en) | Cancer WSI segmentation method based on local classification neural network | |
Hatanaka et al. | Improvement of automatic hemorrhage detection methods using brightness correction on fundus images | |
CN110189324A (en) | A kind of medical image processing method and processing unit | |
Mendonca et al. | Comparison of segmentation methods for automatic diagnosis of dermoscopy images | |
CN117877691B (en) | Intelligent wound information acquisition system based on image recognition | |
CN106960199A (en) | A kind of RGB eye is as the complete extraction method in figure white of the eye region | |
Ukil et al. | Smoothing lung segmentation surfaces in 3D X-ray CT images using anatomic guidance | |
CN112686897A (en) | Weak supervision-based gastrointestinal lymph node pixel labeling method assisted by long and short axes | |
CN109816665B (en) | Rapid segmentation method and device for optical coherence tomography image | |
KR20210050790A (en) | Apparatus and methods for classifying neurodegenerative diseases image of amyloid-positive based on deep-learning | |
CN116030042A (en) | Diagnostic device, method, equipment and storage medium for doctor's diagnosis | |
Ashame et al. | Abnormality Detection in Eye Fundus Retina | |
CN111768845B (en) | Pulmonary nodule auxiliary detection method based on optimal multi-scale perception | |
CN114343693A (en) | Aortic dissection diagnosis method and device | |
Athab et al. | Disc and Cup Segmentation for Glaucoma Detection | |
Lodin et al. | Retracted: Design of an iris-based medical diagnosis system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |