CN118039088B - Artificial intelligence auxiliary diagnosis data processing system - Google Patents


Info

Publication number
CN118039088B
Authority
CN
China
Prior art keywords
focus, voice, sub-image, marking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410446277.0A
Other languages
Chinese (zh)
Other versions
CN118039088A (en)
Inventor
曾凡新
李诗林
Current Assignee
Dazhou Aijia Feishite Technology Co ltd
Original Assignee
Dazhou Aijia Feishite Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Dazhou Aijia Feishite Technology Co ltd
Priority to CN202410446277.0A
Publication of CN118039088A
Application granted
Publication of CN118039088B
Legal status: Active

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to the technical field of data processing and discloses an artificial intelligence auxiliary diagnosis data processing system. The system divides an ultrasonic image to generate sub-images, marks the focus area, acquires the focus features and focus voice of the focus area, judges whether to send out a sub-image integration prompt, and integrates the focus features and focus voice into the target sub-image to generate an ultrasonic diagnostic image. Compared with the prior art, the system can remove the regions corresponding to irrelevant normal tissue from the ultrasonic image, realizing precise identification of the focus area. At the same time, by matching and combining focus features with focus voice, the lesion information of the focus area is represented doubly, as text data and as voice data, forming an ultrasonic diagnostic image with voice annotation. This avoids the fatigue caused by medical staff repeatedly switching their gaze between the ultrasonic image and the text information, and provides convenience for their subsequent diagnosis.

Description

Artificial intelligence auxiliary diagnosis data processing system
Technical Field
The invention relates to the technical field of data processing, in particular to an artificial intelligence auxiliary diagnosis data processing system.
Background
With the development of information technology and the construction of medical informatization, many medical institutions are equipped with artificial intelligence diagnosis systems for medical images, so that medical staff can process and analyze diagnostic data with the help of artificial intelligence technology. In particular, when processing ultrasonic images, the lesion information of the focus areas in the images needs to be accurately identified by artificial intelligence technology, so as to provide an accurate auxiliary diagnosis basis for medical staff.
The patent application with publication number CN114491072A discloses a traditional Chinese medicine data processing system, method, storage medium and terminal. Response information of a patient is acquired through a dynamic questionnaire, and the questionnaire is updated in real time based on the patient's responses, making the system more specialized and targeted and the prediction result more accurate. The obtained prediction result can effectively assist doctors in diagnosing diseases and symptoms and improves the accuracy and efficiency of diagnosis. Further, based on a constructed traditional Chinese medicine knowledge graph, relevant condition information of diseases and symptoms is obtained, and the questionnaire is generated in combination with a preset rule template, further improving the relevance of the questionnaire content and its applicability to the field of traditional Chinese medicine. Open text information from the user is collected optionally, so the user retains autonomy of choice, which improves the user experience and accommodates users who feel unwell or whose ability to describe symptoms in text may be limited. The method has strong fitting capability and is particularly suitable for improving the professionalism of the questionnaire content and the accuracy of prediction.
The prior art has the following defects:
When an existing diagnostic data processing system processes the diagnostic data in an ultrasonic image, the focus information of the focus area is identified through medical image processing technology, recorded on the ultrasonic image as text, and provided for medical staff. Because the focus area and the recorded focus information are not in the same position in the ultrasonic image, a certain distance exists between them. When medical staff want to understand the focus area comprehensively and accurately, their gaze must switch and compare back and forth between the focus area and the focus information. This increases the time needed to diagnose the ultrasonic image and also increases the visual fatigue of the medical staff, which in turn raises the probability of misdiagnosis after long periods of diagnostic work.
In view of the above, the present invention proposes an artificial intelligence aided diagnosis data processing system to solve the above-mentioned problems.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides the following technical scheme: an artificial intelligence aided diagnosis data processing system, applied to an artificial intelligence server, comprising:
The sub-image dividing module is used for dividing the ultrasonic image based on an image dividing criterion to generate a sub-image;
The focus area marking module is used for screening out a target sub-image from the sub-images and marking out focus areas in the target sub-image based on marking criteria;
the voice data input module is used for acquiring focus characteristics of a focus area and generating focus voice based on the focus characteristics;
the matching judging module is used for carrying out consistency matching on the focus characteristics and focus voice, generating a successful matching value based on a matching result, and judging whether to send out a sub-image integration prompt;
and the image integration module is used for integrating the focus characteristics and focus voice into the target sub-image to generate an ultrasonic diagnosis image.
Further, the image division criteria are: the areas of two sub-images at horizontally adjacent positions are consistent in size, and every sub-image contains part of the ultrasound region;
the generation method of the sub-image comprises the following steps:
Scanning an ultrasonic image of a patient to obtain a scanned image;
marking midpoints of two horizontal boundaries in a scanned image, and connecting the two midpoints to obtain a vertical dividing line;
identifying a background area in a scanned image through a computer vision technology, drawing a broken line of a closed structure on the inner side boundary of the background area, and marking the area in the broken line as an ultrasonic area;
Randomly mark n base points on the vertical boundary on one side of the scanned image, and, taking the base points as starting points, draw n horizontal dividing lines perpendicular to the vertical dividing line;
the n horizontal dividing lines and the vertical dividing line divide the scanned image into 2(n+1) sub-images;
continuously adjust the positions of the n horizontal dividing lines along the vertical direction, stopping the adjustment when each of the 2(n+1) sub-images contains part of the ultrasound region.
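The division procedure above can be sketched in code. This is a minimal illustration, not the patented implementation: it assumes the scanned image is a NumPy array, places the n horizontal dividing lines evenly as a starting position (the subsequent adjustment step is only noted in a comment), and all function and variable names are hypothetical.

```python
import numpy as np

def divide_into_subimages(image, n):
    """Divide an image into 2*(n+1) sub-images using one vertical dividing
    line at the horizontal midpoint and n horizontal dividing lines.
    The n horizontal lines start evenly spaced; the patent then shifts them
    vertically until every sub-image contains part of the ultrasound region."""
    h, w = image.shape[:2]
    ys = [round(h * (i + 1) / (n + 1)) for i in range(n)]  # horizontal lines
    row_edges = [0] + ys + [h]
    col_edges = [0, w // 2, w]  # vertical line joins the two boundary midpoints
    return [image[r0:r1, c0:c1]
            for r0, r1 in zip(row_edges, row_edges[1:])
            for c0, c1 in zip(col_edges, col_edges[1:])]

# toy 8x6 "scan"; n = 2 horizontal lines -> 2*(2+1) = 6 sub-images
img = np.arange(48).reshape(8, 6)
subs = divide_into_subimages(img, 2)
assert len(subs) == 6
```

Because every pixel falls into exactly one grid cell, the sub-images tile the scan without overlap.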
Further, the screening method of the target sub-image comprises the following steps:
In each sub-image, randomly mark k points; taking each marked point as a circle centre and one fifth of the diagonal length of its sub-image as the radius, draw k marked circles that are not adjacent to one another;
measure the scan echo data of the k marked circles by medical image processing technology to obtain k echo values;
accumulate the k echo values of each sub-image and take the average to obtain one sub-echo value per sub-image;
the expression of the sub-echo value is:
D_i = (1/k) * Σ_{j=1..k} C_ij
where D_i is the sub-echo value of the i-th sub-image and C_ij is the echo value of the j-th marked circle in the i-th sub-image;
mark the sub-echo values larger than a preset echo threshold as target sub-echo values, and mark the sub-images where the target sub-echo values are located as target sub-images.
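Writing D_i for the sub-echo value of the i-th sub-image and C_ij for the echo value of its j-th marked circle, the averaging and threshold screening reduce to a few lines. A sketch with made-up echo values; names and data are illustrative only:

```python
def sub_echo_values(circle_echoes):
    """circle_echoes[i][j]: echo value C_ij of the j-th marked circle in the
    i-th sub-image.  Returns D_i, the mean of the k echo values per sub-image."""
    return [sum(c) / len(c) for c in circle_echoes]

def target_subimage_indices(circle_echoes, echo_threshold):
    """Indices of sub-images whose sub-echo value D_i exceeds the threshold."""
    return [i for i, d in enumerate(sub_echo_values(circle_echoes))
            if d > echo_threshold]

echoes = [[10, 12, 14],   # D_0 = 12
          [40, 44, 48],   # D_1 = 44
          [20, 22, 24]]   # D_2 = 22
assert sub_echo_values(echoes) == [12, 44, 22]
assert target_subimage_indices(echoes, 21) == [1, 2]
```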
Further, the marking criteria are: the degrees of the bending included angles of any two adjacent sub-boundary lines are smaller than 45 degrees;
the marking method of the focus area comprises the following steps:
With a preset length as the radius, draw a plurality of detection unit circles within the ultrasound region of each target sub-image;
measure the scan echo data of the detection unit circles one by one to obtain the detection echo values;
mark the detection unit circles whose detection echo value is larger than a preset echo safety value as abnormal unit circles;
construct an XY coordinate axis among the abnormal unit circles such that the number of abnormal unit circles in each of the first, second, third and fourth quadrants of the XY coordinate axis is larger than a preset abnormal minimum value;
taking the origin of the XY coordinate axis as the base point and a deflection angle of 15 degrees as the step, draw rays through the first, second, third and fourth quadrants in sequence, giving 24 rays in total;
on each ray, mark the abnormal unit circle at the maximum distance from the origin of the XY coordinate axis as a boundary circle, obtaining 24 boundary circles;
mark the centres of the 24 boundary circles as boundary points, and connect the 24 boundary points to obtain 24 sub-boundary lines;
measure the degree of the bending included angle between every two adjacent sub-boundary lines;
when the bending included angle between two adjacent sub-boundary lines is greater than or equal to 45 degrees, move one of the boundary points towards the origin of the XY coordinate axis until the angle between every two adjacent sub-boundary lines is less than 45 degrees;
connect the 24 sub-boundary lines in turn to form a boundary line, and mark the ultrasound region within the boundary line as the focus area.
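The ray construction can be approximated numerically. The sketch below is an assumption-laden stand-in for the patent's procedure: instead of testing whether a circle lies exactly on a ray, it bins the abnormal-circle centres into 15-degree angular sectors around the origin and keeps the centre farthest from the origin in each sector; connecting the survivors in sector order approximates the boundary line.

```python
import math

def boundary_points(centers, sector_deg=15):
    """Bin abnormal-circle centres into angular sectors of sector_deg around
    the coordinate origin and keep, per sector, the centre farthest from the
    origin; the surviving points, taken in sector order, can be joined into
    sub-boundary lines approximating the lesion boundary."""
    best = {}
    for x, y in centers:
        sector = int((math.degrees(math.atan2(y, x)) % 360) // sector_deg)
        dist = math.hypot(x, y)
        if sector not in best or dist > best[sector][0]:
            best[sector] = (dist, (x, y))
    return [pt for _, (_, pt) in sorted(best.items())]

centres = [(1, 0), (3, 0), (0, 2), (-1, 1), (0, -2)]
# (1,0) and (3,0) share sector 0; only the farther (3,0) survives
assert boundary_points(centres) == [(3, 0), (0, 2), (-1, 1), (0, -2)]
```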
Further, lesion characteristics include lesion extent, lesion shape, lesion sharpness, lesion blood flow signal, and lesion echo intensity;
The method for acquiring the sharpness of the focus comprises the following steps:
marking sharp angles on boundary lines of focus areas, and counting the total amount of the sharp angles;
Sequentially measuring the height of the bulges of the sharp angles through a scale, and recording the height as a bulge value;
marking a protrusion value larger than a preset protrusion height as a target protrusion value, marking a sharp angle corresponding to the target protrusion value as a target sharp angle, and counting the number of the target sharp angles;
marking a bulge value smaller than a preset bulge lower limit value as an invalid bulge value, marking a sharp angle corresponding to the invalid bulge value as an invalid sharp angle, and counting the number of the invalid sharp angles;
Subtract the number of invalid sharp angles from the total number of sharp angles, and divide the number of target sharp angles by the result to obtain the focal sharpness;
The expression of focal sharpness is:
R = a / (b - c)
where R is the focal sharpness, a is the number of target sharp angles, b is the total number of sharp angles, and c is the number of invalid sharp angles.
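With a for the number of target sharp angles, b for the total number of sharp angles and c for the number of invalid sharp angles, the sharpness reads R = a / (b - c). A one-function sketch with illustrative numbers:

```python
def focal_sharpness(total_angles, target_angles, invalid_angles):
    """R = a / (b - c): target sharp angles divided by the sharp angles that
    remain after discarding the invalid ones."""
    return target_angles / (total_angles - invalid_angles)

# 12 sharp angles on the boundary line, 4 above the preset protrusion height,
# 2 below the preset lower limit
assert focal_sharpness(12, 4, 2) == 0.4
```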
Further, the method for acquiring the focus echo intensity comprises the following steps:
Divide the focus area into g small squares according to a preset side length;
randomly mark h nodes in each of the g small squares and measure the echo values of the nodes;
mark nodes whose echo value is larger than a preset echo high value as target nodes, and count the number of target nodes;
compare the number of target nodes in each small square with the number h of nodes in that square to obtain g sub-intensities;
the expression of the sub-intensity is:
Q_i = e_i / h
where Q_i is the i-th sub-intensity, e_i is the number of target nodes in the i-th small square, and h is the number of nodes in each small square;
remove the maximum and the minimum of the g sub-intensities, accumulate the remaining g-2 sub-intensities and take the average to obtain the focus echo intensity;
the expression of the focus echo intensity is:
E = (1 / (g - 2)) * Σ Q_i, the sum taken over the remaining g - 2 sub-intensities
where E is the focus echo intensity and Q_i are the remaining sub-intensities.
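Writing Q_i = e_i / h for the sub-intensity of the i-th small square (e_i target nodes out of h nodes), the trimmed average can be sketched as follows; the data are invented for illustration:

```python
def lesion_echo_intensity(target_node_counts, nodes_per_square):
    """Q_i = e_i / h per small square; drop the single largest and smallest
    Q_i and average the remaining sub-intensities."""
    q = [e / nodes_per_square for e in target_node_counts]
    q.remove(max(q))
    q.remove(min(q))
    return sum(q) / len(q)

# 5 squares, 10 nodes each: Q = [0.2, 0.5, 0.8, 0.4, 0.6]
# drop 0.8 and 0.2, average the rest -> 0.5
assert abs(lesion_echo_intensity([2, 5, 8, 4, 6], 10) - 0.5) < 1e-9
```

Dropping the extreme values before averaging makes the intensity robust to a single unusually bright or dark square.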
Further, lesion voices include voice 1, voice 2, voice 3, voice 4, voice 5, and voice 6;
the generation method of the voice 1, the voice 2, the voice 3, the voice 4, the voice 5 and the voice 6 comprises the following steps:
Sequentially numbering the focus range, focus shape, focus sharpness, focus blood flow signal and focus echo intensity, and marking the numbered items as feature 1, feature 2, feature 3, feature 4, feature 5 and feature 6 respectively;
recognizing the semantics of the features 1, 2, 3, 4, 5 and 6 one by one through natural language processing technology;
inputting voice data which are consistent with the semantics of the features 1,2, 3, 4, 5 and 6 through voice equipment;
And numbering the input voice data in an ascending order in sequence to obtain voice 1, voice 2, voice 3, voice 4, voice 5 and voice 6.
Further, the method for generating the successful matching value comprises the following steps:
the semantics of the voice 1, the voice 2, the voice 3, the voice 4, the voice 5 and the voice 6 are respectively recognized through a natural language processing technology, voice keywords are obtained, and the voice keywords are numbered by taking the number of voice data as a reference;
Marking keywords of the feature 1, the feature 2, the feature 3, the feature 4, the feature 5 and the feature 6 respectively, obtaining character keywords, and numbering the character keywords by taking the number of the focus feature as a reference;
carrying out number consistency matching on the voice keywords with the same numbers and the corresponding text keywords;
And marking the voice keywords and the text keywords with the same numbers as successful matching groups, and counting the number of the successful matching groups to obtain successful matching values.
Further, the method for determining whether to send out the target sub-image integration prompt comprises the following steps:
when the successful matching value is 6, determining to send out a target sub-image integration prompt;
And when the successful matching value is smaller than 6, judging that the target sub-image integration prompt is not sent out.
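The matching and decision steps can be sketched as a dictionary comparison keyed by the shared numbers 1 through 6; the keyword strings and function names below are hypothetical:

```python
def successful_matching_value(voice_keywords, text_keywords):
    """Count the numbers whose voice keyword equals the text keyword
    with the same number."""
    return sum(1 for n, kw in voice_keywords.items()
               if text_keywords.get(n) == kw)

def send_integration_prompt(match_value, expected=6):
    """The target sub-image integration prompt is sent only when all
    six keyword pairs match."""
    return match_value == expected

voice = {n: f"keyword-{n}" for n in range(1, 7)}
text = dict(voice)
assert send_integration_prompt(successful_matching_value(voice, text))
text[3] = "mismatch"
assert not send_integration_prompt(successful_matching_value(voice, text))
```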
Further, the method for generating the ultrasonic diagnostic image comprises the following steps:
constructing six data sets, each having a first unit and a second unit, and numbering the six data sets in ascending order;
Respectively importing the feature 1, the feature 2, the feature 3, the feature 4, the feature 5 and the feature 6 into a first unit of a data set with corresponding numbers;
Respectively importing voice 1, voice 2, voice 3, voice 4, voice 5 and voice 6 into a second unit of the data set with the corresponding number;
The six data sets are annularly arranged in a numbering ascending order mode, and a head connection point and a tail connection point are marked on the six data sets respectively;
Connecting the head connection point and the tail connection point which are positioned at adjacent positions to generate a diagnosis data set;
And remarking the positions of the boundary lines of the focus areas on the diagnosis data set, and integrating the positions into the target sub-images to obtain an ultrasonic diagnosis image.
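One possible reading of the ring arrangement is a list of six numbered records whose tail connection points link each record to the head of the next, wrapping from set 6 back to set 1. The sketch below is an interpretation, not the patent's actual data layout; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class DiagnosisSet:
    number: int    # ascending data-set number
    feature: str   # first unit: text feature
    voice: bytes   # second unit: voice clip (placeholder)

def build_diagnosis_ring(features, voices):
    """Arrange the numbered data sets in a ring: each set's tail connection
    point joins the head of the next, and the last wraps around to the first."""
    sets = [DiagnosisSet(i + 1, f, v)
            for i, (f, v) in enumerate(zip(features, voices))]
    ring = {s.number: sets[(i + 1) % len(sets)].number
            for i, s in enumerate(sets)}
    return sets, ring

sets, ring = build_diagnosis_ring([f"feature {n}" for n in range(1, 7)],
                                  [b""] * 6)
assert ring == {1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 1}
```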
The technical effects and advantages of the artificial intelligence auxiliary diagnosis data processing system of the invention are as follows:
An ultrasonic image is divided based on an image division criterion to generate sub-images; target sub-images are screened out of the sub-images, and the focus area is marked in the target sub-images based on a marking criterion; the focus features of the focus area are acquired and focus voice is generated from them; the focus features and focus voice are matched for consistency, a successful matching value is generated from the matching result, and the system judges whether to send out a sub-image integration prompt; finally, the focus features and focus voice are integrated into the target sub-image to generate an ultrasonic diagnostic image. Compared with the prior art, the regions corresponding to irrelevant normal tissue can be removed from the ultrasonic image, yielding a small and accurate focus area, avoiding the negative influence of irrelevant data on the diagnosis of the focus area, and realizing its precise identification. At the same time, by matching and combining focus features with focus voice, the lesion information of the focus area is represented doubly, as text data and as voice data, forming an ultrasonic diagnostic image with voice annotation. Medical staff can observe the ultrasonic diagnostic image while learning the lesion information of the focus area through the focus voice data, avoiding the fatigue caused by repeatedly switching their gaze between the ultrasonic image and text information, and providing convenience for their subsequent diagnosis.
Drawings
FIG. 1 is a schematic diagram of an artificial intelligence aided diagnosis data processing system according to embodiment 1 of the present invention;
FIG. 2 is a schematic flow chart of an artificial intelligence aided diagnosis data processing method according to embodiment 2 of the present invention;
Fig. 3 is a logic schematic diagram of an artificial intelligence auxiliary diagnostic data processing method according to embodiment 2 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, an artificial intelligence auxiliary diagnosis data processing system according to the present embodiment is applied to an artificial intelligence server, and includes:
The sub-image dividing module is used for dividing the ultrasonic image based on an image dividing criterion to generate a sub-image;
The image division criterion is the basis for dividing the ultrasonic image into small independent areas; the ultrasonic image can be divided into a plurality of relatively independent but associated sub-images, breaking the whole into parts. This avoids the computational burden that a large, undivided ultrasonic image would impose on subsequent diagnostic data processing, and thereby improves the processing speed;
In order to divide the ultrasound images rapidly and accurately, it is necessary to ensure that the positions and the areas between the sub-images are kept reasonable and orderly, and each sub-image contains a part of ultrasound area, so as to improve the effect of dividing the sub-images;
the image division criteria are: the areas of two sub-images at horizontally adjacent positions are consistent in size, and every sub-image contains part of the ultrasound region;
the generation method of the sub-image comprises the following steps:
Scanning an ultrasonic image of a patient to obtain a scanned image;
marking midpoints of two horizontal boundaries in a scanned image, and connecting the two midpoints to obtain a vertical dividing line;
identifying a background area in a scanned image through a computer vision technology, drawing a broken line of a closed structure on the inner side boundary of the background area, and marking the area in the broken line as an ultrasonic area; the background area refers to an area without tissue parameters in the scanned image, and is usually black;
Randomly mark n base points on the vertical boundary on one side of the scanned image, and, taking the base points as starting points, draw n horizontal dividing lines perpendicular to the vertical dividing line;
the n horizontal dividing lines and the vertical dividing line divide the scanned image into 2(n+1) sub-images;
continuously adjust the positions of the n horizontal dividing lines along the vertical direction, stopping the adjustment when every sub-image contains part of the ultrasound region; by adjusting the positions of the horizontal dividing lines, every divided sub-image is guaranteed to contain part of the ultrasound region, so that each sub-image can serve as a screening basis for the subsequent target sub-images; useless sub-images are thus avoided and the dividing precision of the sub-images is improved;
after the scanned image has been divided into the 2(n+1) sub-images, they are numbered so that they can be conveniently identified and located later; the numbering proceeds in ascending order, starting from the sub-image in the upper-left corner of the scanned image and ending with the sub-image in the lower-right corner.
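The ascending numbering from the upper-left to the lower-right sub-image amounts to a row-major enumeration of the grid of sub-images; a small sketch, with the grid shape assumed for illustration:

```python
def number_subimages(rows, cols):
    """Ascending row-major numbering: the upper-left sub-image gets 1 and the
    lower-right sub-image gets rows*cols."""
    return {(r, c): r * cols + c + 1
            for r in range(rows) for c in range(cols)}

# e.g. 2 horizontal lines and one vertical line -> a 3x2 grid of sub-images
ids = number_subimages(3, 2)
assert ids[(0, 0)] == 1 and ids[(2, 1)] == 6
```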
The focus area marking module is used for screening out a target sub-image from the sub-images and marking out focus areas in the target sub-image based on marking criteria;
The target sub-image is a sub-image whose ultrasound region, when compared with a normal ultrasound region, yields an abnormal comparison result. Screening out the target sub-images gives the initial position of the sub-image where the focus area is located, narrowing the range of the focus area, removing the sub-images whose ultrasound regions are normal, and reducing the amount of data to be calculated;
The screening method of the target sub-images comprises the following steps:
In each sub-image, randomly mark k points; taking each marked point as a circle centre and one fifth of the diagonal length of its sub-image as the radius, draw k marked circles that are not adjacent to one another; keeping the circles non-adjacent ensures that the marked circles do not interfere with one another and improves the independence of each marked circle, preventing data superposition or interference between them;
measure the scan echo data of the k marked circles by medical image processing technology to obtain k echo values;
accumulate the k echo values of each sub-image and take the average to obtain one sub-echo value per sub-image;
the expression of the sub-echo value is:
D_i = (1/k) * Σ_{j=1..k} C_ij
where D_i is the sub-echo value of the i-th sub-image and C_ij is the echo value of the j-th marked circle in the i-th sub-image;
mark the sub-echo values larger than a preset echo threshold as target sub-echo values, and mark the sub-images where the target sub-echo values are located as target sub-images; the preset echo threshold limits the maximum echo value in a sub-image, that is, it allows the sub-images to be distinguished as target and non-target sub-images;
After the target sub-images have been screened out, the ultrasound regions within them must be further identified and judged, so that normal areas and focus areas are accurately distinguished within the ultrasound region and a focus area that directly and intuitively represents the focus position is obtained; this identification and marking must be carried out under a specific criterion, namely the marking criterion;
the marking criteria are: the degrees of the bending included angles of any two adjacent sub-boundary lines are smaller than 45 degrees;
The focus area refers to the position of a lesion area in an ultrasonic area in a target sub-image, and is used for accurately representing the focus position of a patient and accurately and detailedly representing the lesion condition of the patient;
the marking method of the focus area comprises the following steps:
With a preset length as the radius, draw a plurality of detection unit circles within the ultrasound region of each target sub-image; the preset length limits the radius of the detection unit circles so that enough of them can be drawn in the target sub-image, providing sufficient data support for what follows; the preset length is set according to the length or width of the actual target sub-image; illustratively, the preset length is one tenth of the length of the target sub-image;
measure the scan echo data of the detection unit circles one by one to obtain the detection echo values;
mark the detection unit circles whose detection echo value is larger than a preset echo safety value as abnormal unit circles; the preset echo safety value is the maximum echo value that belongs to normal human tissue in the ultrasound region, that is, it allows the echo values of the ultrasound region in a sub-image to be distinguished by size, providing data support for the screening of target sub-images; the preset echo safety value is obtained by collecting the maximum echo values of normal tissue from a large number of historical ultrasound regions and averaging them;
construct an XY coordinate axis among the abnormal unit circles such that the number of abnormal unit circles in each of the first, second, third and fourth quadrants of the XY coordinate axis is larger than a preset abnormal minimum value;
taking the origin of the XY coordinate axis as the base point and a deflection angle of 15 degrees as the step, draw rays through the first, second, third and fourth quadrants in sequence, giving 24 rays in total;
on each ray, mark the abnormal unit circle at the maximum distance from the origin of the XY coordinate axis as a boundary circle, obtaining 24 boundary circles;
mark the centres of the 24 boundary circles as boundary points, and connect the 24 boundary points to obtain 24 sub-boundary lines;
measure the degree of the bending included angle between every two adjacent sub-boundary lines;
when the bending included angle between two adjacent sub-boundary lines is greater than or equal to 45 degrees, move one of the boundary points towards the origin of the XY coordinate axis until the angle between every two adjacent sub-boundary lines is less than 45 degrees; keeping the angle between any two adjacent sub-boundary lines below 45 degrees ensures that the resulting focus area has no excessive bends and that its boundary stays within a reasonable angular range, providing an accurate basis for the subsequent data acquisition and calculation of the focus area;
connect the 24 sub-boundary lines in turn to form a boundary line, and mark the ultrasound region within the boundary line as the focus area.
The voice data input module is used for acquiring focus characteristics of a focus area and generating focus voice based on the focus characteristics;
The focus features refer to relevant ultrasonic scanning parameters of lesion tissues in a focus region, and are used for accurately representing the features of the lesion tissues corresponding to the focus region, so that the overall size, shape and other properties of the lesion tissues are represented, and the focus region is conveniently represented by comprehensive data;
Focal features include focal extent, focal shape, focal sharpness, focal blood flow signature, and focal echo intensity;
The focus range refers to the area occupied by the focus area corresponding to the position, and when the focus range is larger, the larger the focus area is, the larger the area occupied by the focus area is; the focus range consists of focus length, focus width and focus thickness, which are all obtained by measuring the length, width and thickness of focus area by a scale; illustratively, the lesion ranges are a lesion length of 32mm, a lesion width of 22mm, and a lesion thickness of 15mm;
The focus shape refers to an image shape similar to the overall shape of the focus area, and is used to represent the overall shape of the focus area, and the focus shape includes, but is not limited to, a circle, an ellipse, and a rectangle; cysts are generally circular or oval with clear boundaries; tumors may take on irregular shapes, with blurred boundaries; the focus shape is obtained by computer vision technology identification;
The focus sharpness is the ratio of the number of sharp angles on the boundary line of the focus area whose protrusion exceeds the preset protrusion height to the number of valid sharp angles; the larger the focus sharpness, the more sharp angles on the boundary line exceed the preset protrusion height;
The method for acquiring the sharpness of the focus comprises the following steps:
marking sharp angles on boundary lines of focus areas, and counting the total amount of the sharp angles;
Sequentially measuring the height of the bulges of the sharp angles through a scale, and recording the height as a bulge value;
Protrusion values greater than the preset protrusion height are marked as target protrusion values, the sharp angles corresponding to the target protrusion values are marked as target sharp angles, and the number of target sharp angles is counted. The preset protrusion height is the lower limit of protrusion height for a target sharp angle, so that target and non-target sharp angles can be effectively distinguished; it is obtained by collecting the historical minimum protrusion heights of sharp angles marked as target sharp angles and then optimizing them with a coefficient;
Protrusion values smaller than the preset protrusion lower limit are marked as invalid protrusion values, the sharp angles corresponding to the invalid protrusion values are marked as invalid sharp angles, and the number of invalid sharp angles is counted. The preset protrusion lower limit is the lowest protrusion height at which a sharp angle is included in the calculation, so that valid and invalid sharp angles can be accurately distinguished; it is obtained by collecting the historical maximum protrusion heights of invalid sharp angles and then averaging them;
The number of invalid sharp angles is subtracted from the total number of sharp angles, and the ratio of the number of target sharp angles to the difference is taken as the focus sharpness;
The expression for the focus sharpness is:

S = M / (Z - W)

where S is the focus sharpness, M is the number of target sharp angles, Z is the total number of sharp angles, and W is the number of invalid sharp angles;
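As an illustrative sketch (not part of the claimed system), the counting procedure above reduces to a few comparisons over the measured protrusion values. The function name is an assumption, and the thresholds are passed in directly, whereas the patent derives them from historical data:

```python
def focus_sharpness(protrusion_values, preset_height, preset_lower_limit):
    # S = M / (Z - W): target sharp angles over valid (non-invalid) sharp angles.
    z = len(protrusion_values)                                       # total sharp angles
    m = sum(1 for h in protrusion_values if h > preset_height)       # target sharp angles
    w = sum(1 for h in protrusion_values if h < preset_lower_limit)  # invalid sharp angles
    return m / (z - w)
```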
Focal blood flow signals refer to the intensity of blood flow signals in tissues at corresponding positions of focal areas, tumors usually have enhanced blood flow signals, and cysts usually have no obvious blood flow; focal blood flow signals include, but are not limited to, blood flow velocity, blood flow distribution, and blood flow direction;
The focus echo intensity is the ratio of the number of target nodes to the number of all nodes in the lesioned tissue at the corresponding position of the focus area; it identifies the tissue positions with high echo values within the focus area;
The method for acquiring the focus echo intensity comprises the following steps:
The focus area is divided into a number of small square cells according to a preset side length. The preset side length limits the side length of each cell so that the focus area is divided into a sufficient number of cells, providing an adequate basis for the subsequent calculation; it is set according to the width of the actual focus area, illustratively one-tenth of that width;
A number of nodes are randomly marked within each small square cell, and the echo values of those nodes are measured;
Nodes whose echo value is greater than the preset high echo value are marked as target nodes, and the number of target nodes is counted. The preset high echo value is the lower limit of the echo value for a target node, so that target and non-target nodes can be distinguished; it is obtained by collecting the historical minimum echo values of target nodes and then optimizing them with a coefficient;
The number of target nodes in each small square cell is divided by the number of nodes in that cell to obtain the sub-intensities;
The expression for the sub-intensity is:

Q_i = D_i / V_i

where Q_i is the i-th sub-intensity, D_i is the number of target nodes in the i-th cell, and V_i is the number of nodes in the i-th cell;
The maximum and minimum sub-intensities are removed, and the remaining sub-intensities are accumulated and averaged to obtain the focus echo intensity;
The expression for the focus echo intensity is:

E = (Σ Q_i) / (u - 2)

where E is the focus echo intensity, Q_i is the i-th remaining sub-intensity, and u is the total number of sub-intensities;
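As an illustrative sketch (not part of the claimed system), the grid procedure above can be run end-to-end once the node echo values of each cell have been measured. The function name and the data layout (one list of node echo values per cell) are assumptions:

```python
def focus_echo_intensity(cells, preset_high_value):
    # cells: one list of measured node echo values per small square cell.
    # Sub-intensity Q_i = target nodes / nodes in cell i; then the maximum and
    # minimum sub-intensities are dropped and the rest are averaged.
    subs = [sum(1 for e in cell if e > preset_high_value) / len(cell)
            for cell in cells]
    subs.remove(max(subs))   # drop one maximum sub-intensity
    subs.remove(min(subs))   # drop one minimum sub-intensity
    return sum(subs) / len(subs)
```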
After the focus features of the focus area are acquired, focus voice matching those features is generated from them, so that the focus voice represents the focus features of the focus area as voice data and makes up for the shortcomings of representing them as text data alone;
Focus speech includes speech 1, speech 2, speech 3, speech 4, speech 5, and speech 6;
the generation method of the voice 1, the voice 2, the voice 3, the voice 4, the voice 5 and the voice 6 comprises the following steps:
Sequentially numbering the focus range, focus shape, focus sharpness, focus blood flow signal and focus echo intensity, and respectively marking the numbered focus range, focus shape, focus sharpness, focus blood flow signal and focus echo intensity as a characteristic 1, a characteristic 2, a characteristic 3, a characteristic 4, a characteristic 5 and a characteristic 6;
recognizing the semantics of the features 1, 2, 3, 4, 5 and 6 one by one through natural language processing technology;
inputting voice data which are consistent with the semantics of the features 1,2, 3, 4, 5 and 6 through voice equipment;
And numbering the input voice data in an ascending order in sequence to obtain voice 1, voice 2, voice 3, voice 4, voice 5 and voice 6.
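As an illustrative sketch (not part of the claimed system), the numbering scheme above pairs each numbered feature with a same-numbered voice entry. The dictionary layout and the placeholder synthesizer are assumptions; the patent records the voice data through a voice device rather than synthesizing it:

```python
def number_features_and_voice(feature_values, synthesize=lambda text: f"<audio:{text}>"):
    # Number the features 1..n in ascending order, then produce a same-numbered
    # voice entry whose content matches the semantics of each feature.
    features = {i + 1: v for i, v in enumerate(feature_values)}
    voice = {i: synthesize(v) for i, v in features.items()}
    return features, voice
```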
The matching judging module is used for carrying out consistency matching on the focus characteristics and focus voice, generating a successful matching value based on a matching result, and judging whether to send out a target sub-image integration prompt;
After the focus features and focus voices are generated, they need to be consistency-matched. Consistency matching compares focus features and focus voices with corresponding numbers, so as to judge whether their numbers are consistent and hence whether their represented meanings match;
The successful matching value is a numerical representation of the result of the consistency matching between the focus features and the focus voice; it gives an intuitive view of how many feature-voice pairs matched successfully and serves as the basis for deciding whether to issue the target sub-image integration prompt;
The generation method of the successful matching value comprises the following steps:
the semantics of the voice 1, the voice 2, the voice 3, the voice 4, the voice 5 and the voice 6 are respectively recognized through a natural language processing technology, voice keywords are obtained, and the voice keywords are numbered by taking the number of voice data as a reference; the voice keywords comprise a first voice keyword, a second voice keyword, a third voice keyword, a fourth voice keyword, a fifth voice keyword and a sixth voice keyword;
Marking keywords of the feature 1, the feature 2, the feature 3, the feature 4, the feature 5 and the feature 6 respectively, obtaining character keywords, and numbering the character keywords by taking the number of the focus feature as a reference; the text keywords comprise a first text keyword, a second text keyword, a third text keyword, a fourth text keyword, a fifth text keyword and a sixth text keyword;
carrying out number consistency matching on the voice keywords with the same numbers and the corresponding text keywords;
Marking the voice keywords and the text keywords with the same numbers as successful matching groups, and counting the number of the successful matching groups to obtain successful matching values;
After the number consistency matching of the focus features and the focus voice, whether they match is judged from the size of the successful matching value, and whether to issue the target sub-image integration prompt is decided from that result;
the method for judging whether to send out the target sub-image integration prompt comprises the following steps:
when the successful matching value is 6, the number matching results of all focus voices and focus features are consistent, and it is judged that the target sub-image integration prompt is issued;
when the successful matching value is smaller than 6, some number matching results of the focus voices and focus features are inconsistent; since not all pairs match, it is judged that the target sub-image integration prompt is not issued;
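As an illustrative sketch (not part of the claimed system), the matching and decision steps can be expressed together: keywords are compared per number, and the prompt is issued only when all pairs agree. The function names are assumptions, and the keyword extraction is taken as already done (the patent obtains the keywords via natural language processing):

```python
def successful_match_value(voice_keywords, text_keywords):
    # Count the numbers whose voice keyword and text keyword agree.
    return sum(1 for n, kw in voice_keywords.items()
               if text_keywords.get(n) == kw)

def issue_integration_prompt(match_value, total_pairs=6):
    # The prompt is issued only when every feature/voice pair matched.
    return match_value == total_pairs
```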
the image integration module is used for integrating focus features and focus voice into the target sub-image to generate an ultrasonic diagnosis image;
When the sub-image integration prompt is issued, the number matching results of all focus voices and focus features are consistent; the focus features and focus voice for the focus area are then complete, and integrating them into the target sub-image yields the ultrasonic diagnostic image. The diagnostic image thus represents the focus area in both text and voice, providing a convenient and accurate basis for the diagnosis of subsequent medical staff;
the method for generating the ultrasonic diagnostic image comprises the following steps:
constructing six data sets with a first unit and a second unit, and numbering the six data sets in ascending order;
Respectively importing the feature 1, the feature 2, the feature 3, the feature 4, the feature 5 and the feature 6 into a first unit of a data set with corresponding numbers;
Respectively importing voice 1, voice 2, voice 3, voice 4, voice 5 and voice 6 into a second unit of the data set with the corresponding number;
The six data sets are arranged in a ring in ascending numerical order, and a head connection point and a tail connection point are marked on each of the six data sets;
The head connection points and tail connection points at adjacent positions are connected to generate a diagnostic data set. Connecting head and tail connection points builds a data transmission channel between adjacent data sets, so that data interaction between them is maintained and any one data set can be used to query the other data sets associated with it, improving the convenience of data queries within the data sets;
The position of the boundary line of the focus area is annotated on the diagnostic data set, which is then integrated into the target sub-image to obtain the ultrasonic diagnostic image.
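As an illustrative sketch (not part of the claimed system), the six head-to-tail connected data sets form a ring in which any set can reach its neighbours in either direction. The class and field names are assumptions:

```python
class DiagnosticSet:
    # One data set: the first unit holds the text feature, the second the voice entry.
    def __init__(self, number, feature, voice):
        self.number, self.feature, self.voice = number, feature, voice
        self.next = self.prev = None   # tail/head connection points

def build_diagnostic_ring(features, voice_entries):
    # Number the sets in ascending order and connect adjacent head/tail points.
    sets = [DiagnosticSet(i + 1, f, v)
            for i, (f, v) in enumerate(zip(features, voice_entries))]
    for i, ds in enumerate(sets):
        ds.next = sets[(i + 1) % len(sets)]   # tail point -> head of next set
        ds.prev = sets[i - 1]                 # head point -> tail of previous set
    return sets
```

Traversing six `next` links from any set returns to the starting set, which is what lets one data set query all the others.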
The technical solution of embodiment 1 was applied to the diagnosis of actual medical ultrasonic images. 100 groups of medical diagnosis experiments were performed on 100 ultrasonic images, yielding 100 groups of first medical diagnosis times and first diagnosis accuracies; the prior art was applied to the same 100 ultrasonic images to obtain 100 groups of second medical diagnosis times and second diagnosis accuracies. Comparing the two, the first diagnosis time was smaller than the second in 99 of the 100 groups, and the first diagnosis accuracy was higher than the second in all 100 groups. This shows that adopting the technical solution of embodiment 1 reduces the time medical staff spend diagnosing ultrasonic images and improves the accuracy of medical diagnosis.
In this embodiment, the ultrasonic image is divided based on an image division criterion to generate sub-images; a target sub-image is screened out of the sub-images and a focus area is marked in it based on a marking criterion; focus features of the focus area are acquired and focus voice is generated from them; the focus features and focus voice are consistency-matched, a successful matching value is generated from the matching result, and whether to issue the target sub-image integration prompt is judged; finally, the focus features and focus voice are integrated into the target sub-image to generate the ultrasonic diagnostic image. Compared with the prior art, regions corresponding to irrelevant normal tissue can be removed from the ultrasonic image, yielding a small and accurate focus area, avoiding the negative influence of irrelevant data on diagnosis and achieving precise identification of the focus area. Meanwhile, through the matched combination of focus features and focus voice, the lesion information of the focus area can be doubly represented as text data and voice data, forming an ultrasonic diagnostic image with voice annotation. Medical staff can learn the lesion information through the focus voice while observing the diagnostic image, avoiding the fatigue caused by repeatedly switching their gaze between the ultrasonic image and text information, providing convenience for their subsequent diagnosis and reducing the probability of misdiagnosis.
Example 2: referring to fig. 2-3, this embodiment provides, for what is not described in detail in embodiment 1, an artificial intelligence auxiliary diagnosis data processing method, applied to an artificial intelligence server and implemented on the basis of the artificial intelligence auxiliary diagnosis data processing system, including:
S1: dividing the ultrasonic image based on an image dividing criterion to generate a sub-image;
S2: screening a target sub-image from the sub-images, and marking a focus area in the target sub-image based on a marking criterion;
S3: focal features of the focal region are obtained, and focal voice is generated based on the focal features;
S4: consistency matching is carried out on the focus characteristics and focus voice, a successful matching value is generated based on a matching result, and whether a target sub-image integration prompt is sent out is judged; if the target sub-image integration prompt is sent out, executing S5; if the target sub-image integration prompt is not sent out, repeating the step S4;
S5: and integrating the focus characteristic and focus voice into the target sub-image to generate an ultrasonic diagnosis image.
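As an illustrative sketch (not part of the claimed method), the S1-S5 flow including the retry at S4 can be expressed with the individual steps injected as callables. All names and the retry cap are assumptions; here the retry re-derives the features and voice before re-matching, a practical reading of "repeat S4":

```python
def run_pipeline(image, divide, screen, mark, extract, synthesize,
                 match_value, integrate, max_attempts=3):
    sub_images = divide(image)                          # S1: divide the ultrasonic image
    target = mark(screen(sub_images))                   # S2: screen and mark the focus area
    for _ in range(max_attempts):
        features = extract(target)                      # S3: focus features
        voice = synthesize(features)                    #     and focus voice
        if match_value(features, voice) == 6:           # S4: integration prompt issued
            return integrate(target, features, voice)   # S5: ultrasonic diagnostic image
    return None                                         # S4 never issued the prompt
```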
The foregoing is merely illustrative of the present invention and does not limit it; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein falls within the scope of the present invention.

Claims (8)

1. An artificial intelligence aided diagnosis data processing system applied to an artificial intelligence server, comprising:
The sub-image dividing module is used for dividing the ultrasonic image based on an image dividing criterion to generate a sub-image;
the image division criteria are: the areas of two sub-images at horizontally adjacent positions are consistent in size, and each sub-image contains part of the ultrasonic area;
the generation method of the sub-image comprises the following steps:
Scanning an ultrasonic image of a patient to obtain a scanned image;
marking midpoints of two horizontal boundaries in a scanned image, and connecting the two midpoints to obtain a vertical dividing line;
identifying a background area in a scanned image through a computer vision technology, drawing a broken line of a closed structure on the inner side boundary of the background area, and marking the area in the broken line as an ultrasonic area;
a number of base points are randomly marked on a vertical boundary on one side of the scanned image, and horizontal dividing lines perpendicular to the vertical dividing line are drawn with the base points as starting points;
the scanned image is divided into a plurality of sub-images by the horizontal dividing lines and the vertical dividing line;
the positions of the horizontal dividing lines are continuously adjusted in the vertical direction, and the adjustment stops when every sub-image contains part of the ultrasonic area;
The focus area marking module is used for screening out a target sub-image from the sub-images and marking out focus areas in the target sub-image based on marking criteria;
the marking criteria are: the degrees of the bending included angles of any two adjacent sub-boundary lines are smaller than 45 degrees;
the marking method of the focus area comprises the following steps:
with a preset length as the radius, a plurality of detection unit circles are drawn within the ultrasonic area of each target sub-image;
the scanning echo data of each detection unit circle are measured one by one to obtain the detection echo values;
the detection unit circles whose detection echo value is greater than a preset echo safety value are marked as abnormal unit circles;
an XY coordinate system is constructed among the abnormal unit circles, such that the number of abnormal unit circles in each of the first, second, third and fourth quadrants of the XY coordinate system is greater than a preset abnormal minimum value;
drawing rays into a first quadrant, a second quadrant, a third quadrant and a fourth quadrant in sequence by taking an origin of an XY coordinate axis as a base point and a deflection angle of 15 degrees as a standard;
in each of the first, second, third and fourth quadrants, the abnormal unit circle on each ray farthest from the origin of the XY coordinate system is marked as a boundary circle, yielding the boundary circles;
the centers of the boundary circles are marked as boundary points, and the boundary points are connected to obtain the sub-boundary lines;
Respectively measuring the degrees of the bending included angles between any two adjacent sub-boundary lines;
When the degree of the bending included angle between any two adjacent sub-boundary lines is larger than or equal to 45 degrees, one boundary point is adjusted towards the direction of the origin of the XY coordinate axis until the degree of the bending included angle between any two adjacent sub-boundary lines is smaller than 45 degrees;
the sub-boundary lines are sequentially connected to form a boundary line, and the ultrasonic area within the boundary line is marked as the focus area;
the voice data input module is used for acquiring focus characteristics of a focus area and generating focus voice based on the focus characteristics;
the matching judging module is used for carrying out consistency matching on the focus characteristics and focus voice, generating a successful matching value based on a matching result, and judging whether to send out a sub-image integration prompt;
and the image integration module is used for integrating the focus characteristics and focus voice into the target sub-image to generate an ultrasonic diagnosis image.
2. The artificial intelligence aided diagnosis data processing system of claim 1, wherein the screening method of the target sub-image includes:
a number of marked points are randomly marked within each sub-image, and mutually non-adjacent marked circles are drawn with each marked point as the center and one-fifth of the diagonal length of the sub-image where it is located as the radius;
measurement by medical image processing technique Scan echo data of each marked circle to obtain/>The echo values of the circles;
for each sub-image, the echo values of its marked circles are accumulated and averaged to obtain the sub-echo value;
the expression for the sub-echo value is:

G_j = (1/q) Σ_i H_ji

where G_j is the sub-echo value of the j-th sub-image, H_ji is the echo value of the i-th marked circle in the j-th sub-image, and q is the number of marked circles in each sub-image;
the sub-echo values greater than a preset echo threshold are marked as target sub-echo values, and the sub-images where the target sub-echo values are located are marked as target sub-images, obtaining the target sub-images.
3. An artificial intelligence aided diagnosis data processing system according to claim 2, wherein the lesion characteristics include lesion extent, lesion shape, lesion sharpness, lesion blood flow signal and lesion echo intensity;
The method for acquiring the sharpness of the focus comprises the following steps:
marking sharp angles on boundary lines of focus areas, and counting the total amount of the sharp angles;
Sequentially measuring the height of the bulges of the sharp angles through a scale, and recording the height as a bulge value;
marking a protrusion value larger than a preset protrusion height as a target protrusion value, marking a sharp angle corresponding to the target protrusion value as a target sharp angle, and counting the number of the target sharp angles;
marking a bulge value smaller than a preset bulge lower limit value as an invalid bulge value, marking a sharp angle corresponding to the invalid bulge value as an invalid sharp angle, and counting the number of the invalid sharp angles;
the number of invalid sharp angles is subtracted from the total number of sharp angles, and the ratio of the number of target sharp angles to the difference is taken as the focus sharpness;
the expression for the focus sharpness is:

S = M / (Z - W)

where S is the focus sharpness, M is the number of target sharp angles, Z is the total number of sharp angles, and W is the number of invalid sharp angles.
4. An artificial intelligence aided diagnosis data processing system according to claim 3, wherein the method for acquiring the intensity of the echo of the lesion comprises:
the focus area is divided into a number of small square cells according to a preset side length;
a number of nodes are randomly marked within each small square cell, and the echo values of those nodes are measured;
Marking nodes with echo values larger than a preset echo high value as target nodes, and counting the number of the target nodes;
the number of target nodes in each small square cell is divided by the number of nodes in that cell to obtain the sub-intensities;
the expression for the sub-intensity is:

Q_i = D_i / V_i

where Q_i is the i-th sub-intensity, D_i is the number of target nodes in the i-th cell, and V_i is the number of nodes in the i-th cell;
the maximum and minimum sub-intensities are removed, and the remaining sub-intensities are accumulated and averaged to obtain the focus echo intensity;
the expression for the focus echo intensity is:

E = (Σ Q_i) / (u - 2)

where E is the focus echo intensity, Q_i is the i-th remaining sub-intensity, and u is the total number of sub-intensities.
5. The system of claim 4, wherein the lesion voices include voice 1, voice 2, voice 3, voice 4, voice 5, and voice 6;
the generation method of the voice 1, the voice 2, the voice 3, the voice 4, the voice 5 and the voice 6 comprises the following steps:
Sequentially numbering the focus range, focus shape, focus sharpness, focus blood flow signal and focus echo intensity, and respectively marking the numbered focus range, focus shape, focus sharpness, focus blood flow signal and focus echo intensity as a characteristic 1, a characteristic 2, a characteristic 3, a characteristic 4, a characteristic 5 and a characteristic 6;
recognizing the semantics of the features 1, 2, 3, 4, 5 and 6 one by one through natural language processing technology;
inputting voice data which are consistent with the semantics of the features 1,2, 3, 4, 5 and 6 through voice equipment;
And numbering the input voice data in an ascending order in sequence to obtain voice 1, voice 2, voice 3, voice 4, voice 5 and voice 6.
6. The artificial intelligence aided diagnosis data processing system of claim 5, wherein the generation method of the successful match value includes:
the semantics of the voice 1, the voice 2, the voice 3, the voice 4, the voice 5 and the voice 6 are respectively recognized through a natural language processing technology, voice keywords are obtained, and the voice keywords are numbered by taking the number of voice data as a reference;
Marking keywords of the feature 1, the feature 2, the feature 3, the feature 4, the feature 5 and the feature 6 respectively, obtaining character keywords, and numbering the character keywords by taking the number of the focus feature as a reference;
carrying out number consistency matching on the voice keywords with the same numbers and the corresponding text keywords;
And marking the voice keywords and the text keywords with the same numbers as successful matching groups, and counting the number of the successful matching groups to obtain successful matching values.
7. The system of claim 6, wherein the method for determining whether to issue the target sub-image integration prompt comprises:
when the successful matching value is 6, determining to send out a target sub-image integration prompt;
And when the successful matching value is smaller than 6, judging that the target sub-image integration prompt is not sent out.
8. The artificial intelligence aided diagnosis data processing system of claim 7, wherein the method of generating an ultrasonic diagnostic image includes:
constructing six data sets with a first unit and a second unit, and numbering the six data sets in ascending order;
Respectively importing the feature 1, the feature 2, the feature 3, the feature 4, the feature 5 and the feature 6 into a first unit of a data set with corresponding numbers;
Respectively importing voice 1, voice 2, voice 3, voice 4, voice 5 and voice 6 into a second unit of the data set with the corresponding number;
The six data sets are annularly arranged in a numbering ascending order mode, and a head connection point and a tail connection point are marked on the six data sets respectively;
Connecting the head connection point and the tail connection point which are positioned at adjacent positions to generate a diagnosis data set;
And remarking the positions of the boundary lines of the focus areas on the diagnosis data set, and integrating the positions into the target sub-images to obtain an ultrasonic diagnosis image.
CN202410446277.0A 2024-04-15 2024-04-15 Artificial intelligence auxiliary diagnosis data processing system Active CN118039088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410446277.0A CN118039088B (en) 2024-04-15 2024-04-15 Artificial intelligence auxiliary diagnosis data processing system


Publications (2)

Publication Number Publication Date
CN118039088A CN118039088A (en) 2024-05-14
CN118039088B true CN118039088B (en) 2024-06-07

Family

ID=90991817


Country Status (1)

Country Link
CN (1) CN118039088B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961207A (en) * 2018-05-02 2018-12-07 上海大学 Lymph node Malignant and benign lesions aided diagnosis method based on multi-modal ultrasound image
CN113034489A (en) * 2021-04-16 2021-06-25 南方医科大学第五附属医院 Artificial intelligence nasal sinus CT image processing system based on degree of depth learning
CN114491072A (en) * 2022-01-28 2022-05-13 上海国民集团健康科技有限公司 Traditional Chinese medicine data processing system, method, storage medium and terminal
CN117150057A (en) * 2022-05-15 2023-12-01 曹庆恒 Medical image storage method and device, computer equipment and storage medium
CN117274603A (en) * 2023-10-12 2023-12-22 飞依诺科技股份有限公司 Liver focus image description method based on semantic segmentation network
CN117438056A (en) * 2023-12-20 2024-01-23 达州市中心医院(达州市人民医院) Editing, screening and storage control method and system for digestive endoscopy image data

Family Cites Families (20)

Publication number Priority date Publication date Assignee Title
DE112019003022T5 (en) * 2018-06-15 2021-03-18 Canon Kabushiki Kaisha Medical image processing device, medical image processing method, and program
JP7229881B2 (en) * 2018-08-14 2023-02-28 キヤノン株式会社 MEDICAL IMAGE PROCESSING APPARATUS, TRAINED MODEL, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
US11375984B2 (en) * 2018-08-31 2022-07-05 Seno Medical Instruments, Inc. Method and system for managing feature reading and scoring in ultrasound and/or optoacoustic images
CN110490851B (en) * 2019-02-15 2021-05-11 腾讯科技(深圳)有限公司 Mammary gland image segmentation method, device and system based on artificial intelligence
US20200391053A1 (en) * 2019-06-13 2020-12-17 Canon Medical Systems Corporation Radiotherapy system, therapy planning support method, and therapy planning method
CN110211171A (en) * 2019-06-18 2019-09-06 华志微创医疗科技(北京)有限公司 The method and device of focal area is extracted from medical image
CA3156519A1 (en) * 2019-10-01 2021-04-08 Sirona Medical, Inc. Ai-assisted medical image interpretation and report generation
US11969280B2 (en) * 2020-01-07 2024-04-30 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
KR20220159402A (en) * 2020-03-27 2022-12-02 홀로직, 인크. Systems and methods for linkage of regions of interest in multiple imaging modalities
KR102150647B1 (en) * 2020-04-16 2020-09-02 주식회사 딥노이드 Ai-based cloud platform system for diagnosing medical image
CN114068002A (en) * 2020-08-07 2022-02-18 四川医枢科技股份有限公司 Auxiliary support system and method for clinical decision, teaching and scientific research of breast and thyroid tumors
CN112401915A (en) * 2020-11-19 2021-02-26 华中科技大学同济医学院附属协和医院 Image fusion comparison method for CT (computed tomography) reexamination of new coronary pneumonia
WO2022212771A2 (en) * 2021-03-31 2022-10-06 Sirona Medical, Inc. Systems and methods for artificial intelligence-assisted image analysis
CN117836870A (en) * 2021-07-04 2024-04-05 Ai瓦力股份有限公司 System and method for processing medical images in real time
CN116128783A (en) * 2021-11-08 2023-05-16 海信集团控股股份有限公司 Focus identification method of ultrasonic image and related device
CN115619760A (en) * 2022-11-02 2023-01-17 北京医银人工智能科技有限公司 Artificial intelligent thyroid ultrasonic diagnosis system and device
CN116740041B (en) * 2023-06-27 2024-04-26 新疆生产建设兵团医院 CTA scanning image analysis system and method based on machine vision
CN117727300A (en) * 2023-12-13 2024-03-19 四川大学华西医院 Voice information processing method and device
CN117454856B (en) * 2023-12-22 2024-04-16 达州爱迦飞诗特科技有限公司 Medical diagnosis data editing method and system based on-line point-to-point mode
CN117496277B (en) * 2024-01-02 2024-03-12 达州市中心医院(达州市人民医院) Rectal cancer image data modeling processing method and system based on artificial intelligence

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961207A (en) * 2018-05-02 2018-12-07 上海大学 Aided diagnosis method for benign and malignant lymph node lesions based on multi-modal ultrasound images
CN113034489A (en) * 2021-04-16 2021-06-25 南方医科大学第五附属医院 Artificial intelligence nasal sinus CT image processing system based on deep learning
CN114491072A (en) * 2022-01-28 2022-05-13 上海国民集团健康科技有限公司 Traditional Chinese medicine data processing system, method, storage medium and terminal
CN117150057A (en) * 2022-05-15 2023-12-01 曹庆恒 Medical image storage method and device, computer equipment and storage medium
CN117274603A (en) * 2023-10-12 2023-12-22 飞依诺科技股份有限公司 Liver focus image description method based on semantic segmentation network
CN117438056A (en) * 2023-12-20 2024-01-23 达州市中心医院(达州市人民医院) Editing, screening and storage control method and system for digestive endoscopy image data

Non-Patent Citations (6)

Title
Dai Fei, et al. Deep Learning Based Real-time Segmentation in Ultrasonic Imaging Following the Doctor's Voice Guide. International Ultrasonics Symposium (IEEE IUS 2021). 2022, 1-4. *
Chen LiFang, et al. TBUnet: A Pure Convolutional U-Net Capable of Multifaceted Feature Extraction for Medical Image Segmentation. Journal of Medical Systems. 2023-12-14, Vol. 47, No. 1, 1-16. *
Wang Linlu, et al. Research and Development Trends of Medical Ultrasound Image Segmentation Technology. Journal of Imaging Research and Medical Applications. 2018-12-15, Vol. 2, No. 24, 252-254. *
Zeng Fanxin. Automatic Segmentation and Quantitative Analysis of Hepatic Lobules Based on Deep Learning. China Master's Theses Full-text Database, Medicine and Health Sciences. 2023-01-15, No. 01, E064-118. *
Gong Xun; Yang Fei; Du Zhangjin; Shi En; Zhao Xu; Yang Ziqi; Zou Haipeng; Luo Jun. A Survey of Automatic Analysis Techniques for Thyroid and Breast Ultrasound Images. Journal of Software. 2020-07-15, No. 07, 2245-2282. *
Xiao Junjie. Life Intelligence. Shanghai University Press, 2022, 128-129. *

Also Published As

Publication number Publication date
CN118039088A (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN110060774B (en) Thyroid nodule identification method based on generative adversarial network
CN106909778B (en) Multimodal medical image recognition method and device based on deep learning
CN106102585B (en) Display processing method for three-dimensional imaging data, and three-dimensional ultrasonic imaging method and system
CN109190540B (en) Biopsy region prediction method, image recognition device, and storage medium
CN109858540B (en) Medical image recognition system and method based on multi-mode fusion
WO2021208739A1 (en) Method and apparatus for evaluating blood vessel in fundus color image, and computer device and medium
Turuwhenua et al. Fully automated montaging of laser scanning in vivo confocal microscopy images of the human corneal subbasal nerve plexus
CN111227864A (en) Method and apparatus for lesion detection using ultrasound image using computer vision
CN109117890A (en) Image classification method, device and storage medium
CN113040823B (en) Ultrasonic imaging equipment and ultrasonic image analysis method
CN115429325A (en) Ultrasonic imaging method and ultrasonic imaging equipment
CN112863699B (en) ESD preoperative discussion system based on mobile terminal
CN118039088B (en) Artificial intelligence auxiliary diagnosis data processing system
CN113485555A (en) Medical image reading method, electronic equipment and storage medium
US20230214970A1 (en) Skin surface analysis device and skin surface analysis method
CN110403631A (en) Non-invasive intracranial pressure measurement method based on continuous ultrasound images
Zhou et al. Computer aided diagnosis for diabetic retinopathy based on fundus image
CN112037218B (en) CT image-based quantitative analysis method for new coronary pneumonia
CN115063395A (en) Ultrasonic image processing method, device, equipment and medium
KV et al. Deep Learning Approach to Nailfold Capillaroscopy Based Diabetes Mellitus Detection
Xue et al. Spine X-ray image retrieval using partial vertebral boundaries
CN106204609B (en) Laser scanning confocal microscope lung image processing and analysis system and method
CN117982106B (en) MRI image-based breast cancer chemotherapy curative effect prediction system and method
CN116196034B (en) Ultrasonic imaging method and system
CN116188499B (en) Segmentation method for lung ultrasonic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant