CN114862851A - Processing method based on tongue picture analysis - Google Patents

Processing method based on tongue picture analysis Download PDF

Info

Publication number
CN114862851A
Authority
CN
China
Prior art keywords
tongue
image
lab
pixel point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210787391.0A
Other languages
Chinese (zh)
Other versions
CN114862851B (en)
Inventor
叶展
杨剑波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yuandaomiao Medical Technology Co ltd
Original Assignee
Shenzhen Yuandaomiao Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yuandaomiao Medical Technology Co ltd filed Critical Shenzhen Yuandaomiao Medical Technology Co ltd
Priority to CN202210787391.0A priority Critical patent/CN114862851B/en
Publication of CN114862851A publication Critical patent/CN114862851A/en
Application granted granted Critical
Publication of CN114862851B publication Critical patent/CN114862851B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a processing method based on tongue picture analysis, and relates to the field of artificial intelligence. The method mainly comprises the following steps: extracting a tongue region image in an image containing a tongue part; converting the tongue body area image into a Lab image and taking two pixel points with the maximum pixel values in the channel a as initial clustering centers; respectively obtaining a symmetrical weight coefficient, chromatic aberration and distance of each pixel point in the Lab image relative to each initial clustering center so as to respectively obtain a clustering characteristic value of each pixel point in the Lab image relative to each initial clustering center; and finishing clustering all the pixel points in the Lab image according to the clustering characteristic value of each pixel point in the Lab image relative to each initial clustering center to obtain a separation result of the tongue coating and the tongue proper. The embodiment of the invention can effectively avoid subjectivity of manual visual observation, thereby improving the separation precision and efficiency of tongue proper and tongue coating.

Description

Processing method based on tongue picture analysis
Technical Field
The application relates to the field of artificial intelligence, in particular to a processing method based on tongue picture analysis.
Background
In traditional Chinese medicine, a syndrome describes the body's overall reaction state to a disease as determined by the diagnostic examinations, and tongue diagnosis is an important component of inspection. The tongue is connected with the five zang-organs and six fu-organs through the meridians and collaterals, so the tongue can reflect the functional status of each organ of the human body.
Therefore, by observing the tongue picture, the disease condition can be preliminarily judged from the states of the tongue coating and the tongue proper. In tongue picture analysis, it is first necessary to distinguish the tongue proper from the tongue coating.
At present, doctors generally distinguish the tongue proper from the tongue coating with the naked eye to judge the disease condition, a practice that is easily affected by subjectivity. A method capable of effectively separating the tongue proper and the tongue coating within the tongue body is therefore urgently needed to assist doctors in diagnosis.
Disclosure of Invention
Aiming at the above technical problem, the invention provides a processing method based on tongue picture analysis. A tongue body image is processed to extract a tongue body area image, the tongue body area image is converted into a Lab image, and the characteristics of the Lab image are used to cluster its pixel points, thereby separating the tongue proper from the tongue coating. This effectively avoids the subjectivity of manual visual observation, improves the precision and efficiency of separating the tongue proper and the tongue coating, and allows the separation result of the tongue coating and the tongue proper to assist doctors in their diagnostic work.
The embodiment of the invention provides a processing method based on tongue picture analysis, which comprises the following steps:
and acquiring a tongue body area image to be separated in the tongue body image.
And converting the tongue body area image into a Lab image, and taking two pixel points with the maximum pixel values in an a channel of the Lab image as initial clustering centers.
Determining the symmetry axis of the tongue body area according to the edge of the tongue body area; obtaining the included angle formed, in sequence, by each initial clustering center, the symmetric point of that clustering center about the symmetry axis, and each pixel point in the Lab image; and respectively obtaining the symmetric weight coefficient of each pixel point in the Lab image with respect to each initial clustering center according to the cosine value of each included angle, the distance from each pixel point to the clustering center, and the distance from each initial clustering center to its symmetric point.
And respectively obtaining a normalized color value of each pixel point in the Lab image after the normalization of the pixel value in the Lab three channels, and respectively taking the difference value between the normalized color value of each pixel point in the Lab image and the normalized color value of each initial clustering center as the color difference of each pixel point in the Lab image about each initial clustering center.
And respectively obtaining the clustering characteristic value of each pixel point in the Lab image relative to each initial clustering center according to the symmetrical weight coefficient, the chromatic aberration and the distance of each pixel point in the Lab image relative to each initial clustering center.
And clustering each pixel point in the Lab image to the category to which the initial clustering center with the largest clustering characteristic value belongs, thereby completing the clustering of all pixel points in the Lab image, and taking the clustering result as the separation result of the tongue coating and the tongue proper.
Further, in a processing method based on tongue analysis, according to a cosine value of each included angle, a distance from each pixel point to a cluster center, and a distance from each initial cluster center to a symmetric point thereof, a symmetric weight coefficient of each pixel point in a Lab image with respect to each initial cluster center is obtained, including:
(The formula is published only as an embedded image and is not reproduced here.) In the formula, the quantities are, respectively: the symmetric weight coefficient of each pixel point in the Lab image with respect to each initial clustering center; the included angle formed, in sequence, by each initial clustering center, the symmetric point of that clustering center about the symmetry axis, and each pixel point in the Lab image; the distance from each pixel point to the clustering center; the distance from each initial clustering center to its symmetric point; and the hyperbolic tangent function.
Further, in a processing method based on tongue analysis, according to a symmetric weight coefficient, a chromatic aberration, and a distance of each pixel point in a Lab image with respect to each initial clustering center, a clustering characteristic value of each pixel point in the Lab image with respect to each initial clustering center is respectively obtained, including:
and multiplying the symmetric weight coefficient and the distance of each pixel point in the Lab image relative to each initial clustering center, dividing the product result by the chromatic aberration of each pixel point in the Lab image relative to each initial clustering center, and taking the division result as the clustering characteristic value of each pixel point in the Lab image relative to each initial clustering center.
Further, in a processing method based on tongue analysis, after clustering each pixel point in a Lab image to a category to which an initial clustering center having a maximum clustering feature value belongs, the method further includes:
and respectively taking the centroids of the pixel points clustered to each initial clustering center as new clustering centers of each category.
Further, in a processing method based on tongue analysis, a process of obtaining a normalized color value after each pixel point in a Lab image is normalized by a pixel value in a Lab three channel includes:
and respectively normalizing the pixel value of each pixel point in the Lab image in each channel of the Lab.
And averaging the normalization results of the pixel values of each pixel point in the Lab image in each Lab channel, multiplying the average result by 255, and taking the multiplication result as the normalization color value of each pixel point in the Lab image normalized by the pixel value in each Lab channel.
Further, in a processing method based on tongue analysis, before acquiring a tongue region image to be separated in a tongue image, the method further includes: the tongue image of the patient is obtained.
Further, in a processing method based on tongue analysis, acquiring a tongue region image to be separated in a tongue image, the method includes:
and (3) utilizing DNN to segment the tongue body image, and taking a segmentation result as the extracted tongue body area image, wherein the pixel value of the pixel point which belongs to the tongue body area in the tongue body area image is 0.
Further, in a processing method based on tongue picture analysis, after obtaining a separation result of tongue coating and tongue nature, the method further comprises: the doctor gives corresponding syndrome labels to the tongue body images according to the separation result of the tongue coating and the tongue proper.
Further, in a processing method based on tongue analysis, after a corresponding syndrome label is given to a tongue image, the method further comprises:
and finishing the training of the neural network by taking the tongue body image as an input set of the neural network and taking the syndrome label corresponding to the tongue body image as an inspection set, and outputting the syndrome label corresponding to the tongue body image to be detected according to the tongue body image to be detected by utilizing the trained neural network.
Compared with the prior art, the embodiment of the invention provides a processing method based on tongue picture analysis, which has the beneficial effects that: the tongue body image is processed to extract the tongue body area image, the tongue body area image is converted into a Lab image, the characteristic of the Lab image is utilized to cluster pixel points in the Lab image, the separation of the tongue texture and the tongue coating is realized, the subjectivity of manual visual observation can be effectively avoided, the separation precision and efficiency of the tongue texture and the tongue coating are improved, and the separation result of the tongue coating and the tongue texture is utilized to provide assistance for the diagnosis work of a doctor.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a processing method based on tongue analysis according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The embodiment of the invention provides a processing method based on tongue picture analysis, as shown in fig. 1, comprising the following steps:
and step S101, acquiring a tongue body area image to be separated in the tongue body image.
And S102, converting the tongue body area image into a Lab image, and taking two pixel points with the maximum pixel values in an a channel of the Lab image as initial clustering centers.
Step S103, determining a symmetry axis of the tongue body region according to the edge of the tongue body region, obtaining an included angle formed by each initial clustering center and a symmetry point of the clustering center about the symmetry axis and each pixel point in the Lab image in sequence, and respectively obtaining a symmetric weight coefficient of each pixel point in the Lab image about each initial clustering center according to a cosine value of each included angle, a distance from each pixel point to the clustering center and a distance from each initial clustering center to the symmetry point.
And step S104, respectively obtaining a normalized color value of each pixel point in the Lab image after the normalization of the pixel value in the Lab three channels, and respectively taking the difference value of the normalized color value of each pixel point in the Lab image and the normalized color value of each initial clustering center as the color difference of each pixel point in the Lab image about each initial clustering center.
And S105, respectively obtaining a clustering characteristic value of each pixel point in the Lab image relative to each initial clustering center according to the symmetric weight coefficient, the chromatic aberration and the distance of each pixel point in the Lab image relative to each initial clustering center.
And S106, clustering each pixel point in the Lab image to the category to which the initial clustering center with the largest clustering characteristic value belongs, completing the clustering of all pixel points in the Lab image, and taking the clustering result as the separation result of the tongue coating and the tongue proper.
The specific scenario addressed by the invention is as follows: in the process of diagnosing a disease condition from the tongue picture, a doctor may misjudge the condition because there is no obvious boundary between the tongue coating and the tongue proper. Therefore, the embodiment of the invention takes the characteristics of the pixel points in the tongue body into consideration and clusters the pixel points in the tongue body area to obtain the separated tongue coating and tongue proper.
Further, in step S101, a tongue region image to be separated in the tongue image is acquired.
In the embodiment of the invention, a tongue image is segmented by using a Deep Neural Network (DNN), the segmentation result is used as the extracted tongue region image, and the pixel value of the pixel point outside the tongue region in the tongue region image is 0.
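As a rough illustration of this step, the following Python sketch applies a binary mask produced by a segmentation network to zero out the non-tongue pixels. The tongue_seg_model object and its predict() call are hypothetical placeholders, since the embodiment does not specify the network architecture or its interface.

    # Minimal sketch (assumption: a pre-trained semantic-segmentation DNN named
    # tongue_seg_model is available; the embodiment does not specify it, so this
    # interface is illustrative only).
    import cv2
    import numpy as np

    def extract_tongue_region(image_bgr, tongue_seg_model):
        """Return an image in which pixels outside the tongue region are set to 0."""
        # Hypothetical predict() call returning a binary mask (1 = tongue, 0 = background).
        mask = tongue_seg_model.predict(image_bgr).astype(np.uint8)   # shape (H, W)
        tongue_region = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
        return tongue_region, mask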
The tongue body image adopted in the embodiment of the invention can be obtained through image acquisition equipment. To make the tongue body area in the acquired image easier to process, the patient can be reminded not to eat or drink water within 2 hours before acquisition, and the patient's head is placed in a head-fixing area during acquisition, so that the acquisition conditions of the tongue body images of different patients are consistent. This facilitates standardized processing in the subsequent steps and avoids the increase in computation or the loss of precision in the processing result that different acquisition conditions would cause.
The image acquisition device used in the embodiment of the present invention may be a CCD (charge-coupled device) camera.
In addition, since differences in the camera device, the ambient light source, and the acquisition posture may distort the tongue color, color correction may be performed on the extracted tongue body region image before the tongue body region is analyzed.
The color of a collected picture varies with the acquisition environment, and pictures of the same sample collected in different environments also differ in color, so color correction can be performed on the collected sample. Color correction means correcting the color of a picture against a standard color plate; current color-correction algorithms mainly include the polynomial regression method, the artificial neural network method, and the SVR method, where SVR (Support Vector Regression) is the application of the SVM (Support Vector Machine) to regression problems.
As an example, in the embodiment of the present invention, the tongue region image is color-corrected by a polynomial regression algorithm.
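A minimal sketch of such a polynomial-regression correction is given below. It assumes that measured and reference are N x 3 arrays of RGB values sampled from the standard color plate in the captured image and the plate's known reference values; the quadratic term set used here is an illustrative choice rather than the embodiment's exact polynomial.

    # Polynomial-regression color correction (sketch; feature set is an assumption).
    import numpy as np

    def _poly_features(rgb):
        r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
        return np.column_stack([np.ones_like(r), r, g, b, r*g, r*b, g*b, r**2, g**2, b**2])

    def fit_color_correction(measured, reference):
        # Least-squares fit of one coefficient vector per output channel.
        X = _poly_features(measured.astype(np.float64))
        coeffs, _, _, _ = np.linalg.lstsq(X, reference.astype(np.float64), rcond=None)
        return coeffs                                   # shape (10, 3)

    def apply_color_correction(image_rgb, coeffs):
        h, w, _ = image_rgb.shape
        X = _poly_features(image_rgb.reshape(-1, 3).astype(np.float64))
        corrected = np.clip(X @ coeffs, 0, 255).astype(np.uint8)
        return corrected.reshape(h, w, 3)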
Further, step S102, the tongue region image is converted into a Lab image, and two pixel points with the largest pixel value in the a channel of the Lab image are used as an initial clustering center.
First, the tongue region image is converted into a Lab image.
It should be noted that Lab is a color mode that makes up for the deficiencies of the RGB and CMYK color modes. It is a device-independent color model and is also based on physiological characteristics. At the same time, the Lab color space is one of the most complete color models for describing all colors visible to the human eye, and it represents a more accurate color range than other color spaces. Therefore, in the embodiment of the present invention, the tongue region image is converted into a Lab image so that its color characteristics can be better captured in the subsequent steps.
Lab is composed of a lightness channel and two color channels. In the Lab color space, each color is represented by three values L, a and b: L represents lightness and takes values from 0 to 100; a represents the component from green to red and takes values from -128 to 127; b represents the component from blue to yellow and takes values from -128 to 127.
Secondly, the larger the pixel value of a pixel point in the a channel, the more likely that pixel point is to become a final clustering center. Meanwhile, since the embodiment of the invention classifies pixels into two categories, corresponding respectively to the tongue coating and the tongue proper, the two pixel points with the maximum pixel values in the a channel of the Lab image are taken as the initial clustering centers, which helps shorten the time required for subsequent clustering.
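The following sketch illustrates this step with OpenCV, converting the masked tongue region image to Lab and locating the two pixels with the largest a-channel values inside the tongue mask; restricting the search to the mask is an assumption made so that zeroed background pixels cannot be selected.

    # Lab conversion and initial clustering centers (sketch).
    import cv2
    import numpy as np

    def initial_cluster_centers(tongue_region_bgr, mask):
        lab = cv2.cvtColor(tongue_region_bgr, cv2.COLOR_BGR2LAB)     # 8-bit OpenCV Lab
        a_channel = lab[:, :, 1].astype(np.float64)
        a_channel[mask == 0] = -np.inf                               # ignore background pixels
        # Indices of the two pixels with the largest a-channel values.
        flat = np.argsort(a_channel, axis=None)[-2:]
        centers = [np.unravel_index(i, a_channel.shape) for i in flat]   # [(row, col), (row, col)]
        return lab, centers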
Further, step S103, the symmetry axis of the tongue region is determined according to the edge of the tongue region; the included angle formed, in sequence, by each initial clustering center, the symmetric point of that clustering center about the symmetry axis, and each pixel point in the Lab image is obtained; and the symmetric weight coefficient of each pixel point in the Lab image with respect to each initial clustering center is obtained according to the cosine value of each included angle, the distance from each pixel point to the clustering center, and the distance from each initial clustering center to its symmetric point.
In an actual tongue picture, the tongue coating and the tongue proper are closely connected and there is no obvious boundary dividing them, but their distribution usually follows certain rules: the tongue coating is generally distributed in the middle of the tongue body and the tongue proper around its periphery; the area of the tongue coating is generally larger than that of the tongue proper; and the color of the tongue proper is darker and redder than that of the tongue coating. The tongue body is generally symmetrical, and the tongue coating and the tongue proper are each symmetrically distributed on it.
Firstly, determining a symmetry axis of the tongue body region according to the edge of the tongue body region, wherein the obtaining process of the symmetry axis comprises the following steps:
and establishing a plane rectangular coordinate system by taking the lower left corner of the tongue body area image as a coordinate origin, obtaining the outer edge of the tongue body area image, obtaining points with the same longitudinal coordinate of the outer edge in the established plane rectangular coordinate system to form point pairs, respectively obtaining the middle point of each point pair, and taking the mode in each horizontal coordinate corresponding to each middle point of each point pair as the horizontal coordinate corresponding to the symmetry axis.
Secondly, the included angle formed, in sequence, by each initial clustering center, the symmetric point of that clustering center about the symmetry axis, and each pixel point in the Lab image is obtained, where the vertex of the included angle is the symmetric point of the clustering center about the symmetry axis.
Finally, according to the cosine value of each included angle, the distance from each pixel point to the clustering center and the distance from each initial clustering center to the symmetrical point thereof, respectively obtaining the symmetrical weight coefficient of each pixel point in the Lab image relative to each initial clustering center, comprising:
(The formula is published only as an embedded image and is not reproduced here.) In the formula, the quantities are, respectively: the symmetric weight coefficient of each pixel point in the Lab image with respect to each initial clustering center; the included angle formed, in sequence, by each initial clustering center, the symmetric point of that clustering center about the symmetry axis, and each pixel point in the Lab image; the distance from each pixel point to the clustering center; the distance from each initial clustering center to its symmetric point; and the hyperbolic tangent function.
It should be noted that, in the embodiment of the present invention, each pixel point in the Lab image has a corresponding symmetric weight coefficient with respect to two initial clustering centers.
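The geometric inputs of the weight, namely the angle whose vertex is the symmetric point, the pixel-to-center distance, and the center-to-symmetric-point distance, can be computed as in the sketch below. Because the published formula is available only as an image, the final combination used here (the hyperbolic tangent of the distance ratio scaled by the cosine of the angle) is an assumption for illustration, not the patented expression.

    # Symmetric weight coefficient (sketch; final combination is an ASSUMPTION).
    import numpy as np

    def symmetric_weight(pixel, center, axis_x):
        # Reflect the clustering center (row, col) about the vertical symmetry axis x = axis_x.
        center_sym = np.array([center[0], 2 * axis_x - center[1]], dtype=float)
        v1 = np.array(center, dtype=float) - center_sym          # symmetric point -> center
        v2 = np.array(pixel, dtype=float) - center_sym           # symmetric point -> pixel
        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        d_pixel = np.linalg.norm(np.array(pixel, dtype=float) - np.array(center, dtype=float))
        d_sym = np.linalg.norm(v1)
        # Assumed combination; substitute the patent's exact expression if available.
        return np.tanh(d_sym / (d_pixel + 1e-12)) * cos_angle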
Further, step S104, a normalized color value of each pixel in the Lab image normalized by the pixel value in the Lab three channel is obtained, and a difference between the normalized color value of each pixel in the Lab image and the normalized color value of each initial clustering center is used as a color difference of each pixel in the Lab image with respect to each initial clustering center.
Firstly, respectively obtaining a normalized color value of each pixel point in the Lab image after the normalization of the pixel value in the Lab three channels.
Because the value ranges of the pixel values in the three Lab channels differ, the pixel values of each pixel point in the three channels can first be normalized separately so that the normalized values all fall within [0, 1], in order to express the differences in color value between pixel points. The normalization results of the pixel values of each pixel point in each Lab channel are then averaged, the average is multiplied by 255, and the result is taken as the normalized color value of that pixel point.
Positions on the tongue also matter: the smaller the color difference between adjacent pixel points, the more likely they are to be classified into the same category. The tongue coating and the tongue proper of the human body each generally form contiguous regions; that is, the pixel points making up the tongue coating area are adjacent to one another.
And secondly, taking the difference value of the normalized color value of each pixel point in the Lab image and the normalized color value of each initial clustering center as the color difference of each pixel point in the Lab image relative to each initial clustering center. Therefore, the category to which the pixel point belongs can be conveniently determined by utilizing the color difference in the subsequent steps.
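The two quantities of step S104 could be computed as in the following sketch; min-max normalization of each Lab channel over the tongue-region pixels and the use of the absolute difference as the color difference are assumptions, since the text only requires normalized values in [0, 1] and a difference value.

    # Normalized color value and color difference (sketch).
    import numpy as np

    def normalized_color_values(lab, mask):
        lab = lab.astype(np.float64)
        norm = np.zeros_like(lab)
        for c in range(3):                                    # L, a, b channels
            ch = lab[:, :, c]
            lo, hi = ch[mask > 0].min(), ch[mask > 0].max()
            norm[:, :, c] = (ch - lo) / (hi - lo + 1e-12)     # assumed min-max normalization
        # Average the three normalized channels and scale by 255.
        return norm.mean(axis=2) * 255.0

    def color_difference(norm_color, center):
        # Absolute difference between each pixel's normalized color value and that of
        # the initial clustering center located at (row, col) = center.
        return np.abs(norm_color - norm_color[center[0], center[1]])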
Further, step S105, obtaining a clustering characteristic value of each pixel point in the Lab image with respect to each initial clustering center according to the symmetric weight coefficient, the chromatic aberration, and the distance of each pixel point in the Lab image with respect to each initial clustering center.
Specifically, the symmetric weight coefficient and the distance of each pixel point in the Lab image with respect to each initial clustering center are multiplied, the product is divided by the chromatic aberration of that pixel point with respect to that initial clustering center, and the quotient is taken as the clustering characteristic value of the pixel point with respect to that initial clustering center.
Therefore, the symmetrical weight coefficient, the chromatic aberration and the distance of each pixel point relative to each initial clustering center are comprehensively considered, and the clustering characteristic values of representative pixel points relative to each initial clustering center are obtained and used for determining the category of the pixel points in the subsequent steps.
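Expressed directly in code, the clustering characteristic value is the weight-distance product divided by the color difference; the small epsilon guarding against a zero color difference is an added assumption.

    # Clustering characteristic value as described: (symmetric weight x distance) / color difference.
    def clustering_feature_value(weight, distance, color_diff, eps=1e-12):
        return (weight * distance) / (color_diff + eps)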
Further, step S106, clustering each pixel point in the Lab image to the category to which the initial clustering center with the largest clustering characteristic value belongs, completing clustering of all pixel points in the Lab image, and taking the clustering result as the separation result of the tongue coating and the tongue proper.
And clustering each pixel point in the Lab image to the category to which the initial clustering center with the maximum clustering characteristic value belongs, and finishing clustering all the pixel points in the Lab image.
The embodiment of the invention builds on the k-means clustering algorithm, replacing the distance from a pixel point to the clustering center used in k-means with the clustering characteristic value defined above, so that the differences between pixel points are reflected more comprehensively and a more accurate clustering result is obtained.
Meanwhile, during the clustering process, the centroid of the pixel points assigned to each initial clustering center can be taken as the new clustering center of that category. This allows the clustering centers to adapt as clustering proceeds and accelerates the convergence of the clustering process.
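A compact sketch of the resulting clustering loop is shown below; the fixed iteration cap and the labels-unchanged stopping test are assumptions, as the embodiment does not state a convergence criterion.

    # Modified k-means loop using the clustering feature value (sketch).
    import numpy as np

    def separate_coating_and_proper(pixels, centers, feature_fn, max_iter=20):
        """pixels: list of (row, col) tongue pixels; centers: two (row, col) seed centers.
        feature_fn(pixel, center) returns the clustering characteristic value."""
        labels = np.zeros(len(pixels), dtype=int)
        for _ in range(max_iter):
            # Assign each pixel to the center giving the larger feature value.
            new_labels = np.array([
                int(feature_fn(p, centers[1]) > feature_fn(p, centers[0]))
                for p in pixels
            ])
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
            # Centroid of each category becomes the new clustering center.
            for k in (0, 1):
                members = np.array([p for p, l in zip(pixels, labels) if l == k])
                if members.size:
                    centers[k] = tuple(members.mean(axis=0).round().astype(int))
        return labels, centers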
And finally, taking the clustering result as a separation result of the tongue coating and the tongue proper to provide help for the diagnosis of doctors.
Optionally, on the basis of obtaining the separation result of the tongue coating and the tongue proper, a doctor can be arranged to give corresponding syndrome labels to the tongue body image according to the separation result of the tongue coating and the tongue proper.
It should be noted that syndrome is a term specific to traditional Chinese medicine. It refers to a group of interrelated symptoms, i.e., the body's reaction state and its movement and change expressed at the whole-body level in the course of a disease, as learned through the four clinical examinations of inspection, auscultation and olfaction, inquiry, and palpation; it is called a syndrome for short.
The syndromes obtainable from the separation of tongue coating and tongue proper can include yellow tongue coating, black tongue coating, cracks in the tongue, and the like, which can further assist doctors in diagnosis.
Further, in a processing method based on tongue analysis, after a corresponding syndrome label is given to a tongue image, the method further includes:
and finishing the training of the neural network by taking the tongue body image as an input set of the neural network and taking the syndrome label corresponding to the tongue body image as an inspection set, and outputting the syndrome label corresponding to the tongue body image to be detected according to the tongue body image to be detected by utilizing the trained neural network.
In summary, the embodiments of the present invention provide a processing method based on tongue analysis, in which a tongue region image is extracted by processing a tongue image, the tongue region image is converted into a Lab image, and the characteristics of the Lab image are used to cluster pixel points in the Lab image, so as to separate the tongue proper from the tongue coating, thereby effectively avoiding subjectivity of artificial visual observation, improving the separation accuracy and efficiency of the tongue proper and the tongue coating, and providing assistance for the diagnosis work of a doctor by using the separation result of the tongue coating and the tongue proper.
The use of words such as "including," "comprising," "having," and the like in this disclosure is an open-ended term that means "including, but not limited to," and is used interchangeably therewith. The words "or" and "as used herein mean, and are used interchangeably with, the word" and/or, "unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that the various components or steps may be broken down and/or re-combined in the methods and systems of the present invention. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The above-mentioned embodiments are merely examples for clearly illustrating the present invention and do not limit its scope. It will be apparent to those skilled in the art that other variations and modifications may be made on the basis of the foregoing description, and it is neither necessary nor possible to exhaustively enumerate all embodiments herein. All designs identical or similar to the present invention fall within the protection scope of the present invention.

Claims (9)

1. A processing method based on tongue analysis is characterized by comprising the following steps:
acquiring a tongue body area image to be separated in the tongue body image;
converting the tongue body area image into a Lab image, and taking two pixel points with the maximum pixel values in an a channel of the Lab image as initial clustering centers;
determining a symmetry axis of the tongue body region according to the edge of the tongue body region, obtaining an included angle formed by each initial clustering center and a symmetric point of the clustering center about the symmetry axis and each pixel point in the Lab image in sequence, and respectively obtaining a symmetric weight coefficient of each pixel point in the Lab image about each initial clustering center according to a cosine value of each included angle, a distance from each pixel point to the clustering center and a distance from each initial clustering center to the symmetric point;
respectively obtaining a normalized color value of each pixel point in the Lab image after the normalization of the pixel value in the Lab three channels, and respectively taking the difference value of the normalized color value of each pixel point in the Lab image and the normalized color value of each initial clustering center as the color difference of each pixel point in the Lab image about each initial clustering center;
respectively obtaining a clustering characteristic value of each pixel point in the Lab image relative to each initial clustering center according to the symmetrical weight coefficient, the chromatic aberration and the distance of each pixel point in the Lab image relative to each initial clustering center;
and clustering each pixel point in the Lab image to the category to which the initial clustering center with the largest clustering characteristic value belongs, completing the clustering of all pixel points in the Lab image, and taking the clustering result as the separation result of the tongue coating and the tongue proper.
2. The processing method according to claim 1, wherein the obtaining a symmetric weight coefficient of each pixel point in the Lab image with respect to each initial clustering center according to the cosine value of each included angle, the distance from each pixel point to the clustering center, and the distance from each initial clustering center to its symmetric point respectively comprises:
(The formula is published only as an embedded image and is not reproduced here.) In the formula, the quantities are, respectively: the symmetric weight coefficient of each pixel point in the Lab image with respect to each initial clustering center; the included angle formed, in sequence, by each initial clustering center, the symmetric point of that clustering center about the symmetry axis, and each pixel point in the Lab image; the distance from each pixel point to the clustering center; the distance from each initial clustering center to its symmetric point; and the hyperbolic tangent function.
3. The processing method based on tongue analysis of claim 1, wherein the obtaining of the clustering feature value of each pixel point in the Lab image with respect to each initial clustering center according to the symmetric weight coefficient, the chromatic aberration and the distance of each pixel point in the Lab image with respect to each initial clustering center comprises:
and multiplying the symmetric weight coefficient and the distance of each pixel point in the Lab image relative to each initial clustering center, dividing the product result by the chromatic aberration of each pixel point in the Lab image relative to each initial clustering center, and taking the division result as the clustering characteristic value of each pixel point in the Lab image relative to each initial clustering center.
4. The tongue analysis-based processing method according to claim 1, wherein after clustering each pixel in the Lab image into a category to which the initial clustering center with the largest clustering feature value belongs, the method further comprises:
and respectively taking the centroids of the pixel points clustered to each initial clustering center as new clustering centers of each category.
5. The processing method based on tongue analysis of claim 1, wherein the obtaining of the normalized color value after each pixel in the Lab image is normalized by the pixel value in the Lab three channels comprises:
respectively normalizing the pixel value of each pixel point in the Lab image in each channel of the Lab;
and averaging the normalization results of the pixel values of each pixel point in the Lab image in each Lab channel, multiplying the average result by 255, and taking the multiplication result as the normalization color value of each pixel point in the Lab image normalized by the pixel value in each Lab channel.
6. The tongue analysis-based processing method according to claim 1, wherein before acquiring the tongue region image to be separated in the tongue image, the method further comprises: the tongue image of the patient is obtained.
7. The tongue analysis-based processing method according to claim 1, wherein acquiring the tongue region image to be separated from the tongue image comprises:
and (3) utilizing DNN to segment the tongue body image, and taking a segmentation result as the extracted tongue body area image, wherein the pixel value of the pixel point which belongs to the tongue body area in the tongue body area image is 0.
8. The tongue analysis-based processing method according to claim 1, wherein after obtaining the tongue coating-tongue proper separation result, the method further comprises: the doctor gives corresponding syndrome labels to the tongue body images according to the separation result of the tongue coating and the tongue proper.
9. The tongue analysis-based processing method according to claim 8, wherein after the tongue image is given the corresponding syndrome label, the method further comprises:
and taking the tongue body image as an input set of the neural network, taking the syndrome label corresponding to the tongue body image as an inspection set, finishing the training of the neural network, and outputting the syndrome label corresponding to the tongue body image to be tested according to the tongue body image to be tested by utilizing the neural network finished by training.
CN202210787391.0A 2022-07-06 2022-07-06 Processing method based on tongue picture analysis Active CN114862851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210787391.0A CN114862851B (en) 2022-07-06 2022-07-06 Processing method based on tongue picture analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210787391.0A CN114862851B (en) 2022-07-06 2022-07-06 Processing method based on tongue picture analysis

Publications (2)

Publication Number Publication Date
CN114862851A true CN114862851A (en) 2022-08-05
CN114862851B CN114862851B (en) 2022-09-30

Family

ID=82627038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210787391.0A Active CN114862851B (en) 2022-07-06 2022-07-06 Processing method based on tongue picture analysis

Country Status (1)

Country Link
CN (1) CN114862851B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009038376A1 (en) * 2007-09-21 2009-03-26 Korea Institute Of Oriental Medicine Extraction method of tongue region using graph-based approach and geometric properties
CN104063562A (en) * 2014-07-14 2014-09-24 南京大学 Method used for generating bottom embroidery draft of disordered needlework and based on color clustering
WO2019052436A1 (en) * 2017-09-15 2019-03-21 Oppo广东移动通信有限公司 Image processing method, computer-readable storage medium and mobile terminal
CN110517270A (en) * 2019-07-16 2019-11-29 北京工业大学 A kind of indoor scene semantic segmentation method based on super-pixel depth network
CN111860538A (en) * 2020-07-20 2020-10-30 河海大学常州校区 Tongue color identification method and device based on image processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张彪: "基于颜色聚类的刺绣图案底稿生成方法研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797352A (en) * 2023-02-08 2023-03-14 长春中医药大学 Tongue picture image processing system for traditional Chinese medicine health-care physique detection
CN115797352B (en) * 2023-02-08 2023-04-07 长春中医药大学 Tongue picture image processing system for traditional Chinese medicine health-care physique detection

Also Published As

Publication number Publication date
CN114862851B (en) 2022-09-30

Similar Documents

Publication Publication Date Title
Pang et al. Tongue image analysis for appendicitis diagnosis
EP2888718B1 (en) Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation
CN104537373B (en) Sublingual vessel is diagnosed with multispectral sublingual image characteristic extracting method
CN103340598B (en) Colour atla for human body and preparation method thereof, using method
JP7187557B2 (en) MEDICAL IMAGE LEARNING APPARATUS, METHOD AND PROGRAM
CN111860538A (en) Tongue color identification method and device based on image processing
CN114862851B (en) Processing method based on tongue picture analysis
CN102117329B (en) Capsule endoscope image retrieval method based on wavelet transformation
CN109242792B (en) White balance correction method based on white object
Wang et al. Facial image medical analysis system using quantitative chromatic feature
Zhang et al. Computerized diagnosis from tongue appearance using quantitative feature classification
CN115631350B (en) Method and device for identifying colors of canned image
Nisar et al. A color space study for skin lesion segmentation
CN114511567B (en) Tongue body and tongue coating image identification and separation method
CN104933723B (en) Tongue image dividing method based on rarefaction representation
CN110648336B (en) Method and device for dividing tongue texture and tongue coating
Chen et al. Application of artificial intelligence in tongue diagnosis of traditional Chinese medicine: a review
US20130222767A1 (en) Methods and systems for detecting peripapillary atrophy
CN112560911B (en) Tongue image classification method and tongue image classification system for traditional Chinese medicine
CN110598533A (en) Tongue picture matching method, electronic device, computer device, and storage medium
CN102136077A (en) Method for automatically recognizing lip color based on support vector machine
CN104573723B (en) A kind of feature extraction and classifying method and system of " god " based on tcm inspection
CN112464871A (en) Deep learning-based traditional Chinese medicine tongue image processing method and system
Rao et al. Automatic Glottis Localization and Segmentation in Stroboscopic Videos Using Deep Neural Network.
CN108629780B (en) Tongue image segmentation method based on color decomposition and threshold technology

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant