CN114782444A - Auxiliary interpretation method, medium and electronic device for in vitro diagnosis of color development result - Google Patents
- Publication number: CN114782444A
- Application number: CN202210707883.4A
- Authority: CN (China)
- Prior art keywords: color, color correction, training, sample, image
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0012: Biomedical image inspection
- G06T 5/80: Geometric correction
- G06T 7/90: Determination of colour characteristics
- G06T 2207/10024: Color image
- G06T 2207/20081: Training; Learning
- G06T 2207/30004: Biomedical image processing
- G06T 2207/30024: Cell structures in vitro; Tissue sections in vitro
Abstract
The invention provides an auxiliary interpretation method, a storage medium, and an electronic device for in vitro diagnostic color development results. A color correction model between the real image information and the standard image information of the training samples is established using a nonlinear transformation function and a color correction matrix; the model converges stably and has high precision and stability. In addition, the color correction model is used to process the sample image to be tested, so that the degree of color development of the sample image is displayed more accurately, thereby avoiding misjudgment and misdiagnosis caused by visual interpretation by medical personnel.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an auxiliary interpretation method, a storage medium, and an electronic device for in vitro diagnostic color development results.
Background
In vitro diagnosis (IVD) refers to the process of obtaining clinical diagnostic information by testing human samples (blood, body fluid, tissue, etc.) outside the human body, and on that basis carrying out disease prevention, diagnosis, treatment monitoring, prognosis observation, and health assessment. Clinical in vitro diagnosis can help patients understand their physical condition and state of illness so that they can be treated as early as possible. However, in actual practice, in vitro diagnosis, especially test-kit detection, requires the color result to be interpreted by the naked eye of testing personnel; the judgment then depends on the professional experience and subjective judgment of those personnel, and the probability of error is relatively high.
With the development of the industry, there is a trend toward automatically reading the color information of in vitro diagnostic reagents. Color information is one of the most intuitive and basic elements of image information, but it is affected by the illumination during photographing, the reflection of objects, and the visual sensitivity of the observer. Therefore, to judge the color image of an in vitro diagnostic reagent, the image needs to be corrected. However, correction is affected by many factors, and no particularly good correction scheme exists in the related art, so the photographed color image deviates from the true color, thereby misleading the result.
Disclosure of Invention
The invention aims to provide an auxiliary interpretation method, a storage medium, and an electronic device for in vitro diagnostic color development results. The method can output the in vitro diagnostic color development result in the form of a score, avoiding misjudgment and misdiagnosis caused by reading the color development result with the naked eye.
In order to achieve the purpose, the invention adopts the following technical scheme:
The invention provides an auxiliary interpretation method for in vitro diagnostic color development results, which comprises the following steps:
obtaining an original image I₀, where the original image I₀ contains a color card image and an image of the sample to be tested;
identifying the position coordinates of the color card in the original image I₀ and obtaining the position coordinates of the color blocks in the color card image. As an embodiment of the present application, the detection process for the color block position coordinates is as follows: IDs can be provided at the four corners of the color card; preferably, the IDs are binary square markers arranged at the four corners, presented in the form of a binary matrix, and the position coordinates of the color card are determined by identifying the ID information; the position coordinates of the color blocks are then determined from the position coordinates of the color card. Since binary square markers serve as the ID information of the color card, different color cards can be selected for different samples to be tested; in addition, the color card can be placed at any position in the viewfinder frame, giving a high degree of freedom and convenient operation;
acquiring the pixel values of the color block images based on the position coordinates of the color blocks. As an embodiment of the present application, the pixel value may be computed from N pixel points at the middle position of the color block, the pixel value of the training sample being calculated from the pixel values of these N points; specifically, the average pixel value of the N points may be taken as the pixel value of the training sample.
Taking the color blocks as training samples, and establishing a color correction model;
identifying the position coordinates of the sample image to be tested in the original image I₀;
obtaining the pixel values (R_t0, G_t0, B_t0) of the sample image to be tested based on its position coordinates;
inputting the pixel values (R_t0, G_t0, B_t0) of the sample image to be tested into the color correction model to perform color correction on the sample image, obtaining corrected pixel values (R_t, G_t, B_t);
normalizing the corrected pixel values (R_t, G_t, B_t) and converting them into a score output in the range 0-100.
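The final step above maps corrected pixel values to a 0-100 score. The following sketch is illustrative only: the patent does not specify the normalization formula, so mean channel intensity is used here as a hypothetical color-development measure, and the function name is an assumption.

```python
def rgb_to_score(r, g, b, rgb_max=255.0):
    """Normalize a corrected RGB triple and map it to a 0-100 score.

    Hypothetical scoring rule: the text only states that corrected pixel
    values are normalized and converted to a 0-100 score, so the mean
    channel intensity stands in for the degree of colour development.
    """
    mean_intensity = (r + g + b) / 3.0
    # Clamp to [0, 1] so out-of-range corrected values cannot escape 0-100.
    normalized = min(max(mean_intensity / rgb_max, 0.0), 1.0)
    return round(normalized * 100.0, 1)

print(rgb_to_score(128, 64, 192))  # mean intensity 128 of 255
```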
Further, the training method of the color correction model comprises the following steps:
S1: acquiring a training sample image;
S2, training: inputting the pixel values (R_(i-1)j, G_(i-1)j, B_(i-1)j) of the j-th training sample image and performing a nonlinear transformation on them to obtain nonlinearly transformed pixel values. As an embodiment of the present application, the pixel values (R_(i-1)j, G_(i-1)j, B_(i-1)j) are calculated from N points at the middle position of the color block; this weakens interference from the external environment, such as printing, making the result more accurate. The nonlinear transformation is then performed, so that real-world colors are reduced more faithfully to the standard color space, making the result more accurate;
multiplying by the color correction matrix to correct the nonlinearly transformed pixel values, obtaining trained pixel values (R_ij, G_ij, B_ij), where i is the training iteration, i = 1, 2, 3, ..., m, and j = 1, 2, 3, ..., n. This further optimizes the picture toward the standard color space and reduces the color error;
S3, obtaining the color correction model: performing convergence calculation on the nonlinear transformation function f and the color correction matrix M, and outputting f and M to obtain the color correction model. According to the method, in the sample training stage, initial values for the parameters of the nonlinear transformation function f and for the color correction matrix M can be selected randomly when performing the nonlinear transformation; after the convergence condition is reached through continuous training and optimization, the parameters of f and M are determined and the color correction model is obtained. Processing a sample to be tested through the color correction model makes the result approach the real color information;
Furthermore, the training sample image is obtained by photographing a color chart with a visual sensor. The visual sensor can be any electronic device with a photographing function, giving a high degree of freedom and covering general-purpose visual sensors on the market; as examples, it can be a camera, a mobile phone, a tablet computer, or a PC with a photographing function, or an electronic probe;
Further, S3, obtaining the color correction model, specifically includes:
converting the trained pixel values (R_ij, G_ij, B_ij) to the standard color space and, combined with the pre-stored standard pixel values (R_std, G_std, B_std) of the training samples, calculating the average color error; using the average color error as the judgment basis, performing convergence calculation on the nonlinear transformation function f and the color correction matrix M by the gradient descent method;
Further, if over multiple consecutive training rounds the fluctuation amplitude of the average color error is not greater than a set threshold, the nonlinear transformation function f and the color correction matrix M have converged and the training of the color correction model is complete; otherwise, S2 and S3 are repeated until f and M converge. Specifically, the number of rounds may be 10 or more and the threshold may be set at 5%; that is, if training continues for at least 10 rounds and all calculated fluctuation amplitudes of the average color error are less than or equal to 5%, the parameters of the nonlinear transformation function f and the color correction matrix M can be determined, f and M have converged, and the training of the color correction model is complete; otherwise, the trained pixel values (R_ij, G_ij, B_ij) are fed back and S2 and S3 are repeated until f and M converge.
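The stopping rule described above (the fluctuation amplitude of the average color error stays within a threshold, e.g. 5%, over a window of consecutive rounds, e.g. 10) can be sketched as follows. The function name and the relative-fluctuation definition are assumptions, since the text does not define "fluctuation amplitude" precisely.

```python
def has_converged(errors, window=10, threshold=0.05):
    """Check a convergence rule of the kind described in the text.

    errors: average colour error per training round, oldest first.
    Converged when, over the last `window` rounds, every error deviates
    from the window mean by at most `threshold` (relative fluctuation).
    """
    if len(errors) < window:
        return False
    recent = errors[-window:]
    mean = sum(recent) / window
    if mean == 0:
        return True  # all-zero error is trivially converged
    return all(abs(e - mean) / mean <= threshold for e in recent)
```

With a 5% threshold, ten identical errors converge, while errors oscillating between 1.0 and 2.0 do not.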
Further, the original image I₀ is obtained by photographing with a visual sensor. The visual sensor can be any electronic device with a photographing function; as an embodiment, it can be a camera, a mobile phone, a tablet computer, or a PC with a photographing function, or an electronic probe.
Further, the sample to be tested is an in vitro diagnostic color development device; specifically, the in vitro diagnostic color development device is a color development reagent card, reagent strip, or reagent kit; more specifically, the reagent card, reagent strip, or reagent kit further comprises a control line.
Further, the output score is divided into segments. The segmentation standard can be set according to the correspondence between the degree of color development and the detection result. For example, the score can be divided into two segments: a score of no more than 5 means no color development, and a score greater than 5 means color development; it may also be divided into three or more segments. Such results help medical personnel make a more accurate judgment of the test result.
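The segmented scoring described above can be sketched as a simple threshold lookup. The default cutoff and labels below follow the two-segment example in the text; any other cutoffs or labels are illustrative assumptions set per assay.

```python
def interpret_score(score, cutoffs=(5,),
                    labels=("no color development", "color development")):
    """Map a 0-100 score into result segments.

    cutoffs are the inclusive upper bounds of each segment except the
    last; labels has one more entry than cutoffs. Defaults reproduce the
    two-segment example: score <= 5 means no colour development.
    """
    for cutoff, label in zip(cutoffs, labels):
        if score <= cutoff:
            return label
    return labels[-1]
```

A three-segment variant is just `interpret_score(s, cutoffs=(5, 50), labels=("negative", "weak", "strong"))`.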
In another aspect, the present invention further provides a storage medium on which is stored a software program implementing the above color correction model training method or the above auxiliary interpretation method for in vitro diagnostic color development results.
In another aspect, the present invention further provides an electronic device, comprising: a processor configured to execute the above color correction model training method or the above auxiliary interpretation method for in vitro diagnostic color development results; a memory for storing the instructions executed on the processor; and, optionally, a camera for acquiring the original image of the standard color chart and the sample to be tested.
Compared with the prior art, the technical solution of the invention has the following beneficial effects: the invention establishes a color correction model between the real image information and the standard image information of the training samples using a nonlinear transformation function and a color correction matrix; the model converges stably and has high precision and stability. In addition, the color correction model is used to process the sample image to be tested, so that the degree of color development of the sample image is displayed more accurately, further avoiding misjudgment and misdiagnosis caused by visual interpretation by medical personnel.
Drawings
FIG. 1 is a flow block diagram of an auxiliary interpretation method for in vitro diagnostic color development results in an exemplary embodiment of the present invention;
FIG. 2 is a block diagram of a process for building a color correction model according to an exemplary embodiment of the present invention;
FIG. 3 is a block diagram of an electronic device in an exemplary embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, some steps may be combined or partially combined, and thus the actual execution order may be changed according to the actual situation.
As shown in FIG. 1, the present invention provides an auxiliary interpretation method for in vitro diagnostic color development results, comprising:
110. obtaining an original image I₀, where the original image I₀ contains a color card image and a sample image to be tested. The original image I₀ is obtained by photographing with a visual sensor, which can be any electronic device with a photographing function; as an embodiment, it can be a camera, a mobile phone, a tablet computer, or a PC with a photographing function, or an electronic probe;
120. identifying the position coordinates of the color card in the original image I₀ and obtaining the position coordinates of the color blocks in the color card image. As an embodiment of the present application, the detection process for the color block position coordinates is as follows: IDs can be provided at the four corners of the color card; preferably, the IDs are binary square markers arranged at the four corners, presented in the form of a binary matrix, and the position coordinates of the color card are determined by identifying the ID information;
130. determining the position coordinates of the color blocks based on the position coordinates of the color card;
140. acquiring pixel values of the color blocks based on the position coordinates of the color blocks;
As an embodiment of the present application, the pixel value may be computed from N pixel points at the middle position of the color block, the pixel value of the color block (i.e., the training sample) being calculated from the pixel values of these N points; this weakens interference from the external environment, such as printing, making the result more accurate. Nonlinear transformation is then performed, so that real-world colors are reduced more faithfully to the standard color space, making the result more accurate.
In one embodiment, the pixel value of the color block (i.e., the training sample) calculated from the N pixel points may be obtained as follows (taking the R channel of the j-th color block as an example): the pixel values of the N points are R_1, R_2, ..., R_N, and the average pixel value of the N points is taken as R_j, with the specific calculation formula:

R_j = (1/N) · (R_1 + R_2 + ... + R_N)

As another embodiment, the pixel value of the color block (training sample) may also be calculated as follows (taking the R channel of the j-th color block as an example): the N pixel values R_1, R_2, ..., R_N are arranged in order of magnitude, the maximum and minimum pixel values are eliminated, and the average of the remaining N-2 pixel points is taken as R_j, with the calculation formula:

R_j = (1/(N-2)) · (sum of the remaining N-2 pixel values)

As another embodiment, the pixel value of the color block (training sample) may also be calculated as follows (taking the R channel of the j-th color block as an example): the N pixel values R_1, R_2, ..., R_N are arranged in order of magnitude and the median is taken as the pixel value R_j of the color block, i.e., the training sample.
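The three aggregation strategies above (plain mean; trimmed mean dropping the maximum and minimum; median) can be sketched in one helper. The function name and signature are assumptions for illustration.

```python
def patch_pixel_value(pixels, mode="mean"):
    """Aggregate the N centre pixels of a colour block into one channel value.

    mode="mean":    average of all N values
    mode="trimmed": drop the single largest and smallest, average the rest
    mode="median":  middle value (average of the two middle values if N is even)
    """
    vals = sorted(pixels)
    n = len(vals)
    if mode == "mean":
        return sum(vals) / n
    if mode == "trimmed":
        return sum(vals[1:-1]) / (n - 2)  # requires N >= 3
    if mode == "median":
        mid = n // 2
        return vals[mid] if n % 2 else (vals[mid - 1] + vals[mid]) / 2
    raise ValueError("unknown mode: " + mode)
```

The trimmed mean and median variants make the block value robust to a few outlier pixels, e.g. specks from printing.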
150. obtaining the standard pixel values (R_std, G_std, B_std) of the color blocks;
160. Establishing a color correction model, as shown in fig. 2, the training method is as follows:
S1: taking the color blocks as training samples and acquiring training sample images;
S2, training: inputting the pixel values (R_(i-1)j, G_(i-1)j, B_(i-1)j) of the j-th training sample image and performing a nonlinear transformation on them to obtain nonlinearly transformed pixel values, thereby expanding or compressing the low pixel values, or expanding or compressing the high pixel values;
multiplying by the color correction matrix M to correct the nonlinearly transformed pixel values, obtaining trained pixel values (R_ij, G_ij, B_ij), where i is the training iteration, i = 1, 2, 3, ..., m, and j = 1, 2, 3, ..., n. Specifically, the color correction matrix M may be a 3×3 matrix with randomly generated initial values. Color correcting the nonlinearly transformed pixel values further optimizes the picture toward the standard color space and reduces the color error;
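One training-style correction step of the kind described above, a channel-wise nonlinear transform followed by multiplication with a 3×3 color correction matrix M, can be sketched as follows. The power-law transform and the fixed parameter values are assumptions for illustration; in the text the initial parameters and M are randomly generated and then optimized.

```python
def correct_pixel(rgb, matrix, gamma=1.0, c=1.0):
    """Apply one nonlinear transform + matrix correction step.

    rgb:    [R, G, B] pixel values
    matrix: 3x3 colour correction matrix M (list of 3 rows)
    The nonlinear step here is a power-law v -> c * v**gamma per channel
    (an illustrative choice); the result is M applied to that vector.
    """
    nl = [c * (v ** gamma) for v in rgb]            # channel-wise nonlinear transform
    return [sum(matrix[row][k] * nl[k] for k in range(3))  # matrix-vector product
            for row in range(3)]
```

With the identity matrix and gamma = c = 1 the step is a no-op, which is a useful sanity check before optimization.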
As an exemplary embodiment, when performing the nonlinear transformation, a log-type nonlinear transformation function f may be selected (taking the R channel as an example):

R_i = c · log(1 + R_(i-1))

In the above formula, R_i is the R channel pixel value of the training sample image after the i-th nonlinear transformation; c is the scale comparison constant, which may take any natural number in the first training; R_(i-1) is the R channel pixel value after the (i-1)-th nonlinear transformation and color correction matrix correction.
As an exemplary embodiment, the nonlinear transformation function f may also be selected as a power-law form (taking the R channel as an example):

R_i = (R_(i-1))^γ

In the above formula, γ is an influence factor and γ ≠ 1. The value set during the initial training cannot be too high, otherwise the lower pixel values are compressed; nor can it be set too low, which would compress the higher pixel values.
In addition, when the nonlinear transformation is performed, the influence of image offset on the transformation process needs to be considered; therefore, in selecting the nonlinear transformation function, an offset constant can be introduced, with the specific calculation formula:

R_i = c · (R_(i-1) + ε)^γ

where ε is the offset constant.
As an exemplary embodiment, the nonlinear transformation function f may also be chosen as the first-order expansion of the above formula (taking the R channel as an example):

R_i = c · (R_(i-1))^γ + c · γ · ε · (R_(i-1))^(γ-1) + o(ε)

In the above formula, o(ε) is the remainder of this function, a higher-order infinitesimal of ε.
The calculation process of the G channel and the B channel is the same as above.
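The exemplary nonlinear transforms discussed above can be sketched as the standard log, power-law, and offset power-law point transforms. Since the original formula images are not reproduced in this text, these forms are reconstructions matching the described parameters (scale comparison constant c, influence factor γ, offset constant ε) and should be read as assumptions.

```python
import math

def log_transform(r, c=1.0):
    # s = c * log(1 + r): expands low pixel values, compresses high ones
    return c * math.log1p(r)

def gamma_transform(r, gamma=2.2, c=1.0):
    # s = c * r**gamma: gamma > 1 compresses low values, gamma < 1 compresses high values
    return c * (r ** gamma)

def gamma_offset_transform(r, gamma=2.2, c=1.0, eps=0.0):
    # s = c * (r + eps)**gamma: eps models a constant image offset
    return c * ((r + eps) ** gamma)
```

Each is applied per channel; the G and B channels use the same formulas as the R channel.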
According to the method, in the sample training stage, initial values for the parameters of the nonlinear transformation function f and for the color correction matrix M can be selected randomly when performing the nonlinear transformation. Through continuous training and optimization, the parameters of f and M are determined; that is, as described below, the nonlinear transformation function f and the color correction matrix M converge, and the color correction model is obtained. Processing a sample to be tested through the model makes the result approach the real color information;
S3, obtaining the color correction model: performing convergence calculation on the nonlinear transformation function f and the color correction matrix M using the trained pixel values (R_ij, G_ij, B_ij) and the pre-stored standard pixel values (R_std, G_std, B_std) of the training samples, and outputting f and M to obtain the color correction model. As mentioned previously, converging the nonlinear function f and the color correction matrix M can be regarded as the process of determining the function and matrix parameters. The specific process is as follows:
the trained pixel values (R_ij, G_ij, B_ij) are converted to the standard color space and, combined with the pre-stored standard pixel values (R_std, G_std, B_std), the average color error is calculated; with the average color error as the judgment basis, convergence calculation is performed on the nonlinear transformation function f and the color correction matrix M by the gradient descent method. If over multiple consecutive training rounds the fluctuation amplitudes of the average color error are all within the set threshold, the nonlinear transformation function f and the color correction matrix M have converged and the training of the color correction model is complete; otherwise, S2 and S3 are repeated until f and M converge. Specifically, the number of rounds may be 10 or more and the threshold may be set at 5%; that is, after at least 10 consecutive training rounds in which the calculated fluctuation amplitudes of the average color error are all less than or equal to 5%, the parameters of f and M can be determined, f and M have converged, and the training of the color correction model is complete; otherwise, the trained pixel values (R_ij, G_ij, B_ij) are fed back and S2 and S3 are repeated until f and M converge.
As an exemplary embodiment, the gradient descent convergence calculation process is as follows:
Taking the training sample set as the data set of color blocks, convergence training is performed on the nonlinear function f. Illustratively, taking as the object of convergence training the nonlinear function with the three parameters scale comparison constant c, offset constant ε, and influence factor γ, the loss function J(θ) of the three parameters may be expressed as:

J(θ) = (1/(2m)) · Σ_{k=1..m} (h_θ(x^(k)) − y^(k))²

In the formula, m is the number of color blocks; θ is the gradient weight along the gradient direction; h_θ(x) = θ_0·x_0 + θ_1·x_1 + θ_2·x_2, where x_0, x_1, x_2 are respectively c, ε, γ, and y^(k) is the corresponding target (standard) value of the k-th color block;
Since the direction of the maximum directional derivative on the nonlinear function surface is the direction of the gradient, when performing gradient descent on the data set, the opposite direction of the gradient is selected for updating the gradient weight θ. The update formula of the weight θ is:

θ^(j+1) = θ^(j) − α · ∇J(θ^(j))

In the above formula, j is the number of updates of the gradient weight θ, and α has the value range [-10, 10]; updating stops when J(θ) < e. In actual operation, e takes the value 3, and with this value the three parameters of the nonlinear function are determined: the scale comparison constant c, the offset constant ε, and the influence factor γ; the nonlinear function is thereby determined. Convergence training is then performed on the training samples with the determined nonlinear function, and the value of the average color error is calculated. If over multiple consecutive training rounds the fluctuation amplitudes of the average color error are not greater than the set threshold, the nonlinear transformation function f and the color correction matrix M have converged and the training of the color correction model is complete. In one embodiment, the number of rounds may be 10 or more and the threshold 5%: after at least 10 consecutive rounds in which all calculated fluctuation amplitudes of the average color error are less than or equal to 5%, the parameters of f and M can be determined, f and M have converged, and the training of the color correction model is complete; otherwise, e is gradually reduced until the above conditions are satisfied.
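One batch gradient-descent update of the form θ := θ − α·∇J(θ), for a linear hypothesis h_θ(x) over m samples as described above, can be sketched as follows in plain Python. The function name and the simple least-squares setup are illustrative assumptions.

```python
def gradient_descent_step(theta, xs, ys, alpha):
    """One batch update theta_j := theta_j - alpha * dJ/dtheta_j.

    Uses the linear hypothesis h(x) = sum_j theta_j * x_j and the loss
    J(theta) = 1/(2m) * sum_k (h(x_k) - y_k)**2, so the gradient for
    parameter j is (1/m) * sum_k (h(x_k) - y_k) * x_k[j].
    """
    m = len(xs)
    preds = [sum(t * xi for t, xi in zip(theta, x)) for x in xs]
    grads = [sum((p - y) * x[j] for p, y, x in zip(preds, ys, xs)) / m
             for j in range(len(theta))]
    return [t - alpha * g for t, g in zip(theta, grads)]
```

Iterating the step moves θ toward the least-squares fit; e.g. fitting a single weight to targets of 2.0 moves it from 0.0 toward 2.0.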
When the gradient descent method is used for carrying out convergence calculation on the nonlinear function, all training samples need to be traversed, if the number of color blocks is too large, the convergence speed is very slow, and therefore when convergence begins, a single sample can be randomly selected to carry out gradient weight calculationθAnd (4) updating. As an embodiment of the present application, the loss function of a certain training sample may be:
the gradient weight update formula is:whereinαthe value range is [ -10,10 [)](ii) a Updating the gradient weight of the training sampleθThe loss function of the next training sample is taken in. When in useAnd stopping updating. In the actual operation of the system,etakes 3 and determines three parameters of the non-linear function with this value: scale comparison constantcOffset constantεAnd an influencing factorγAnd then determining a non-linear function. Carrying out convergence training on the training samples by the determined nonlinear function, and calculating to obtain an average color errorThe value of (c). Average color error if training is repeatedIf the fluctuation amplitude is not greater than the set threshold, the nonlinear transformation functionfAnd a color correction matrixMAnd converging and finishing the training of the color correction model. In one embodiment, the number of training times may be 10 or more, the threshold is set to 5%, the training times are continued for at least 10 times, and the calculated average color error fluctuation amplitudes are all less than or equal to 5%, then the non-linear transformation function may be determinedfAnd a color correction matrixMOf (2), a non-linear transformation functionfAnd a color correction matrixMConvergence and finishing the training of the color correction model; otherwise, it is gradually reducedeUp toThe above conditions are satisfied.
As an exemplary embodiment of the present application, when the gradient descent method is used for the convergence calculation of the nonlinear function f, the method specifically includes: taking the nonlinear function as an example for convergence training, the loss function of its three parameters, namely the scale comparison constant c, the offset constant ε, and the influence factor γ, may be expressed as:
In the formula, m is the number of color blocks; θ is the gradient weight along the gradient direction; and x0, x1, x2 are c, ε, and γ, respectively.
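The loss over the m color blocks is the average color error between corrected and standard pixel values. The text names the three parameters c, ε and γ but not the expression of the nonlinear function, so the sketch below assumes a gamma-style curve c·x^γ + ε purely for illustration, with pixel values assumed normalized to [0, 1]:

```python
import numpy as np

def f_nonlinear(x, c, eps, gamma):
    """Hypothetical form of the nonlinear transform: the text names the
    parameters (c, eps, gamma) but not the expression, so a gamma-style
    curve is assumed here. Inputs are assumed normalized to [0, 1]."""
    return c * np.power(x, gamma) + eps

def average_color_error(params, raw_rgb, target_rgb, M):
    """Loss over m color blocks: apply the nonlinear transform, correct with
    the color correction matrix M, then take the mean Euclidean color error
    against the pre-stored standard pixel values."""
    c, eps, gamma = params
    corrected = f_nonlinear(raw_rgb, c, eps, gamma) @ M.T  # per-block (R,G,B)
    return float(np.mean(np.linalg.norm(corrected - target_rgb, axis=1)))
```

With the identity parameters (c = 1, ε = 0, γ = 1) and M the identity matrix the error is zero when the raw blocks already match the targets, which is the baseline the gradient descent then improves on for real captures.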
Since the direction of the maximum value of the directional derivative on the nonlinear function surface is the direction of the gradient, gradient descent over the data set updates the gradient weight θ in the direction opposite to the gradient. The update formula for the weight θ is as follows: θ(j+1) = θ(j) − α·∇Loss(θ(j)).
In the above formula, j is the number of updates of the gradient weight θ, and α takes values in the range [−10, 10]; when the loss function converges to within the tolerance e, updating stops. As before, e initially takes the value 3, with which the scale comparison constant c, the offset constant ε, and the influence factor γ, and hence the nonlinear function, are determined. Convergence training is carried out on the training samples with the determined nonlinear function and the average color error is calculated. If the fluctuation amplitude of the average color error over at least 10 successive trainings is no greater than the set threshold of 5%, the nonlinear transformation function f and the color correction matrix M have converged and training of the color correction model is complete; otherwise, e is gradually reduced until these conditions are satisfied.
One sample is selected from the training sample set for the gradient descent update, and its loss function is:
where bs is the length of the training sample set, and the update formula for the gradient weight θ is:
When the loss function converges to within the tolerance e, updating stops. The remaining procedure is as described above: e initially takes the value 3, determining the scale comparison constant c, the offset constant ε, and the influence factor γ, and hence the nonlinear function; convergence training is carried out on the training samples with the determined nonlinear function and the average color error is calculated. If the fluctuation amplitude of the average color error over at least 10 successive trainings is no greater than the set threshold of 5%, the nonlinear transformation function f and the color correction matrix M have converged and training of the color correction model is complete; otherwise, e is gradually reduced until these conditions are satisfied.
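The outer "gradually reduce e" procedure that all three embodiments share can be sketched as follows. The 10-run window and 5% threshold come from the text; the shrink factor and round limit are assumed details it does not give.

```python
def train_until_stable(train_once, e=3.0, runs=10, threshold=0.05,
                       shrink=0.5, max_rounds=20):
    """Outer loop from the text: train repeatedly with tolerance e; if the
    average color error fluctuates by more than `threshold` (5%) across
    `runs` (>= 10) successive trainings, reduce e and try again.
    `train_once(e)` returns one training's average color error (assumed);
    `shrink` and `max_rounds` are illustrative assumptions."""
    for _ in range(max_rounds):
        errors = [train_once(e) for _ in range(runs)]
        lo, hi = min(errors), max(errors)
        if hi - lo <= threshold * hi:   # fluctuation amplitude within 5%
            return errors[-1], e        # f and M have converged
        e *= shrink                     # otherwise gradually reduce e
    return errors[-1], e
```

Intuitively, a large e stops each inner training early and leaves run-to-run scatter in the average color error; shrinking e tightens each run until ten successive errors agree to within 5%, which is the convergence condition stated above.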
The parameter convergence process for the color correction matrix M is the same as above.
170. Identifying the position coordinates of the sample-to-be-tested image in the original image I0;
180. Obtaining the pixel values R_t0, G_t0, B_t0 of the sample-to-be-tested image based on its position coordinates;
190. Inputting the pixel values R_t0, G_t0, B_t0 of the sample-to-be-tested image into the color correction model to perform color correction on the image, obtaining the corrected pixel values R_t, G_t, B_t;
200. Normalizing the corrected pixel values R_t, G_t, B_t, converting them into a score in the range 0-100, and outputting the score;
The output score is segmented. The segmentation standard can be set according to the correspondence between the degree of color development and the detection result. For example, the score can be divided into two segments: a score no greater than 5 means no color development, and a score greater than 5 means color development; it can also be divided into three or more segments. This result helps medical staff make a more accurate judgment of the examination. For example, a score of 2 indicates that the sample to be tested is a negative sample, and a score of 7 indicates a positive sample. In addition, the detection result can also be reported by voice, for example "Your score is 1 point."
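A sketch of the normalization and segmentation step: the text only states that the corrected pixel value is normalized into a 0-100 score and then segmented (for example, a score no greater than 5 reads as negative), so the two reference colors and the linear, distance-based scaling below are illustrative assumptions.

```python
import numpy as np

def score_and_interpret(rgb_corrected, rgb_blank, rgb_max, cutoff=5):
    """Map a corrected pixel value to a 0-100 score between an assumed
    no-development reference color (rgb_blank) and an assumed full-development
    reference color (rgb_max), then segment at `cutoff` as described above.
    The reference colors and linear scaling are hypothetical details."""
    d_full = np.linalg.norm(np.subtract(rgb_max, rgb_blank))
    d = np.linalg.norm(np.subtract(rgb_corrected, rgb_blank))
    score = float(np.clip(100.0 * d / d_full, 0.0, 100.0))
    label = ("no color development (negative)" if score <= cutoff
             else "color development (positive)")
    return round(score), label
```

With a white blank reference and a red full-development reference, a pixel barely tinted toward red scores near 0 and reads as negative, while a fully developed pixel scores 100 and reads as positive, matching the two-segment example in the text.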
As an exemplary embodiment of the present application, the sample to be tested is an in vitro diagnostic color development device. Specifically, the in vitro diagnostic color development device may be a color development reagent card, a reagent strip, or a reagent kit; more specifically, the reagent card, reagent strip, or kit further comprises a quality control line.
In another aspect, the present invention further provides a storage medium on which is stored a software program implementing the above color correction model training method or the above in vitro diagnostic color development result auxiliary interpretation method.
In another aspect, the present invention further provides an electronic device comprising a processor configured with instructions for executing the above color correction model training method or the above in vitro diagnostic color development result auxiliary interpretation method, a memory for storing the instructions executed on the processor, and optionally a camera for acquiring the standard color card and the original image of the sample to be tested.
The processor and the memory may be connected by a bus or by other means; in FIG. 3, they are connected by a bus. The processor may be a central processing unit, or it may be another general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of the above.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as a program of instructions for performing a color correction model training method or an in vitro diagnostic color result interpretation method in an embodiment of the present invention. The processor executes various functional applications and data processing of the processor by running the non-transitory software program, instructions and modules stored in the memory, that is, the method for training the color correction model or the method for assisting interpretation of the in vitro diagnosis and color development result in the above method embodiments is implemented.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory and when executed by the processor, perform a color correction model training method as in the embodiment shown in FIG. 2 and the in vitro diagnostic color development result-aided interpretation method of FIG. 1.
The invention has been described above with a certain degree of particularity. It will be understood by those of ordinary skill in the art that the description of the embodiments is merely exemplary and that all changes that come within the true spirit and scope of the invention are desired to be protected.
Claims (10)
1. An interpretation-aiding method for in vitro diagnosis of a chromogenic result, comprising:
obtaining an original image I0, wherein the original image I0 comprises a color card and a sample to be tested;
identifying the position coordinates of the color card in the original image I0, and obtaining the position coordinates of the color blocks in the color card image;
acquiring pixel values of the color blocks based on the position coordinates of the color blocks;
taking the color blocks as training samples, and establishing a color correction model;
identifying the position coordinates of the sample-to-be-tested image in the original image I0;
obtaining the pixel values R_t0, G_t0, B_t0 of the sample-to-be-tested image based on the position coordinates of the sample-to-be-tested image;
inputting the pixel values R_t0, G_t0, B_t0 of the sample-to-be-tested image into the color correction model to perform color correction on the sample-to-be-tested image, obtaining the corrected pixel values R_t, G_t, B_t;
normalizing the corrected pixel values R_t, G_t, B_t, converting them into a score in the range 0-100, and outputting the score.
2. The method of claim 1, wherein the step of establishing a color correction model using the color block as a training sample comprises:
S1, acquiring training sample images;
S2, training: inputting the pixel values R(i-1)j, G(i-1)j, B(i-1)j of the j-th training sample image; carrying out nonlinear transformation on the pixel values R(i-1)j, G(i-1)j, B(i-1)j to obtain nonlinearly transformed pixel values; multiplying by the color correction matrix to modify the nonlinearly transformed pixel values, obtaining trained pixel values Rij, Gij, Bij, where i is the number of trainings, i = 1,2,3...m, and j = 1,2,3...n;
S3, obtaining a color correction model: performing convergence calculation on the nonlinear transformation function f and the color correction matrix M, and outputting the parameters of the nonlinear function f and the color correction matrix M to obtain the color correction model.
3. The method for assisting interpretation of in vitro diagnostic color development results according to claim 2, wherein the step S3 of obtaining a color correction model specifically comprises:
converting the trained pixel values Rij, Gij, Bij to a standard color space and, combined with the pre-stored standard pixel values R_std, G_std, B_std of the training samples, calculating the average color error; taking the average color error as the judgment basis, the gradient descent method is adopted to perform convergence calculation on the nonlinear transformation function f and the color correction matrix M.
4. The aided interpretation method for in vitro diagnostic chromogenic results according to claim 3, characterized in that:
after multiple successive trainings, if the fluctuation amplitude of the average color error is not greater than the set threshold, the nonlinear transformation function f and the color correction matrix M have converged, and training of the color correction model is complete; otherwise, S2 and S3 are repeated until the nonlinear transformation function f and the color correction matrix M converge.
5. The aided interpretation method for in vitro diagnostic chromogenic results according to claim 1, characterized in that: the sample to be detected is a display image of the in vitro diagnosis color development device.
6. The aided interpretation method for in vitro diagnostic chromogenic results according to claim 5, characterized in that: the in-vitro diagnosis color development device is a color development reagent card, a reagent strip or a reagent kit.
7. The aided interpretation method for in vitro diagnostic color development results according to claim 6, characterized in that: the reagent card, the reagent strip or the kit further comprises a quality control line.
8. The aided interpretation method for in vitro diagnostic chromogenic results according to claim 1, characterized in that: the output score is segmented.
9. A storage medium, characterized by: the storage medium is stored with a software program for implementing the in vitro diagnosis and color development result auxiliary interpretation method according to any one of claims 1 to 8.
10. An electronic device, characterized in that: it comprises a processor configured with execution instructions for executing the in vitro diagnostic color development result auxiliary interpretation method according to any one of claims 1 to 8, a memory for storing the execution instructions on the processor, and optionally a camera for acquiring the standard color card and the original image of the sample to be tested.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210707883.4A CN114782444B (en) | 2022-06-22 | 2022-06-22 | Auxiliary interpretation method, medium and electronic device for in vitro diagnosis color development result |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114782444A true CN114782444A (en) | 2022-07-22 |
CN114782444B CN114782444B (en) | 2022-09-02 |
Family
ID=82420993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210707883.4A Active CN114782444B (en) | 2022-06-22 | 2022-06-22 | Auxiliary interpretation method, medium and electronic device for in vitro diagnosis color development result |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782444B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103079076A (en) * | 2013-01-22 | 2013-05-01 | 无锡鸿图微电子技术有限公司 | Method and device for generating color calibration matrix of self-adaption gamma calibration curve |
CN109856133A (en) * | 2019-01-29 | 2019-06-07 | 深圳市象形字科技股份有限公司 | A kind of test paper detecting method illuminated using a variety of intensities of illumination, multicolour |
CN109903256A (en) * | 2019-03-07 | 2019-06-18 | 京东方科技集团股份有限公司 | Model training method, chromatic aberration calibrating method, device, medium and electronic equipment |
CN111292246A (en) * | 2018-12-07 | 2020-06-16 | 上海安翰医疗技术有限公司 | Image color correction method, storage medium, and endoscope |
CN111861922A (en) * | 2020-07-21 | 2020-10-30 | 浙江大华技术股份有限公司 | Method and device for adjusting color correction matrix and storage medium |
CN112164005A (en) * | 2020-09-24 | 2021-01-01 | Oppo(重庆)智能科技有限公司 | Image color correction method, device, equipment and storage medium |
US20210022646A1 (en) * | 2017-09-01 | 2021-01-28 | Chun S. Li | System and method for urine analysis and personal health monitoring |
WO2021228730A1 (en) * | 2020-05-11 | 2021-11-18 | F. Hoffmann-La Roche Ag | Method of evaluating the quality of a color reference card |
WO2021249895A1 (en) * | 2020-06-09 | 2021-12-16 | F. Hoffmann-La Roche Ag | Method of determining the concentration of an analyte in a sample of a bodily fluid, mobile device, kit, comuter program and computer-readable storage medium |
Non-Patent Citations (3)
Title |
---|
GABOR SZEDO 等: "使用Zynq-7000 All Programmable SoC实现图像传感器色彩校正", 《电子产品世界》 * |
MICHAEL J. PUGIA 等: "Technology Behind Diagnostic Reagent Strips", 《CE UPDATE—NTUMENTATIONII》 * |
杨任兵 等: "尿液试纸条的手机图像比色分析新方法的研究", 《影像科学与光化学》 * |
Also Published As
Publication number | Publication date |
---|---|
CN114782444B (en) | 2022-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104697938B (en) | A kind of test paper read method and test pregnant and ovulation test method using this method | |
WO2020199694A1 (en) | Spine cobb angle measurement method and apparatus, readable storage medium, and terminal device | |
CN102908120B (en) | Eye fundus image registration method, eye fundus image optic disk nerve and vessel measuring method and eye fundus image matching method | |
CN109523535B (en) | Pretreatment method of lesion image | |
CN104363815B (en) | Image processing apparatus and image processing method | |
WO2020118669A1 (en) | Student concentration detection method, computer storage medium, and computer device | |
CN110232326B (en) | Three-dimensional object recognition method, device and storage medium | |
CN115345819A (en) | Gastric cancer image recognition system, device and application thereof | |
CN112768065B (en) | Facial paralysis grading diagnosis method and device based on artificial intelligence | |
CN115713256A (en) | Medical training assessment and evaluation method and device, electronic equipment and storage medium | |
CN112801967A (en) | Sperm morphology analysis method and device | |
CN107832695A (en) | The optic disk recognition methods based on textural characteristics and device in retinal images | |
CN114782444B (en) | Auxiliary interpretation method, medium and electronic device for in vitro diagnosis color development result | |
Zestas et al. | Sollerman Hand Function Sub-Test “Write with a Pen”: A Computer-Vision-Based Approach in Rehabilitation Assessment | |
CN112168211B (en) | Fat thickness and muscle thickness measuring method and system for abdomen ultrasonic image | |
CN111281355B (en) | Method and equipment for determining pulse acquisition position | |
CN111369598B (en) | Deep learning model training method and device, and application method and device | |
CN116206733B (en) | Image cloud platform assessment method and system for fundus image | |
CN116228727A (en) | Image-based human back epidermis spinal cord detection method and device | |
CN111091910B (en) | Intelligent evaluation system based on painting clock test | |
CN111274953B (en) | Method and system for judging pain according to expression | |
CN110232690B (en) | Image segmentation method, system, equipment and computer readable storage medium | |
CN111724901B (en) | Structure parameter prediction method, system and device based on vision and storage medium | |
CN114694128A (en) | Pointer instrument detection method and system based on abstract metric learning | |
CN110363744B (en) | Lung age detection method and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||