CN117711581B - Method, system, electronic device and storage medium for automatically adding bookmarks


Info

Publication number: CN117711581B
Application number: CN202410166808.0A
Authority: CN (China)
Prior art keywords: feature, class, frame image, features, frame
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other versions: CN117711581A
Other language: Chinese (zh)
Inventors: 陈林, 古先毅, 邱若
Current and original assignee: Shenzhen Haoying Medical Technology Co., Ltd.
Application filed by Shenzhen Haoying Medical Technology Co., Ltd.; priority to CN202410166808.0A; publication of CN117711581A; application granted; publication of CN117711581B.


Abstract

The invention provides a method, a system, an electronic device and a storage medium for automatically adding bookmarks. During pullback, each time a single-frame image is acquired, feature recognition is performed on the single-frame image by a pre-trained feature recognition model to obtain its feature recognition result; when the pullback ends, the feature recognition results of the plurality of single-frame images have been obtained. A feature recognition result data set is generated according to the acquisition order of the plurality of single-frame images and their feature recognition results. A plurality of features having the same feature class are determined from the feature recognition result data set and analyzed to obtain an analysis result for that feature class. According to the target analysis results, among the analysis results of the feature classes, that match preset target feature classes, frame-of-interest images are determined from the single-frame images, and a corresponding bookmark is added at the sequence number corresponding to each frame-of-interest image so that a user can view the corresponding frame-of-interest image through the bookmark.

Description

Method, system, electronic device and storage medium for automatically adding bookmarks
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, a system, an electronic device, and a storage medium for automatically adding bookmarks.
Background
In the current intravascular ultrasound imaging workflow, pullback is one of the more important steps. While performing the pullback, the physician needs to watch the imaging at all times in order to manually mark the corresponding bookmark on a frame-of-interest image when one is found, or to notify an assistant to do so.
However, the pullback process cannot be interrupted, and a certain time lag exists between manually finding a frame of interest and manually marking the corresponding bookmark, so the frame-of-interest image actually bookmarked may not be consistent with the frame-of-interest image that was found; this situation is especially pronounced at high pullback speeds.
Therefore, how to provide a way of automatically adding bookmarks that ensures the actually bookmarked frame-of-interest image is consistent with the found frame-of-interest image is a problem this application urgently needs to solve.
Disclosure of Invention
In view of the foregoing, the present invention provides a method, system, electronic device and storage medium for automatically adding bookmarks, with the objective of ensuring that the actually bookmarked frame-of-interest image remains consistent with the found frame-of-interest image.
The first aspect of the present invention provides a method for automatically adding bookmarks, the method comprising:
During the pullback process, each time a single-frame image is acquired, performing feature recognition on the single-frame image through a pre-trained feature recognition model to obtain the feature recognition result of the single-frame image, until the pullback ends and the feature recognition results of a plurality of single-frame images are obtained; the feature recognition model is obtained by training a U-Net model to be trained with historical image data; the feature recognition result of a single-frame image comprises each feature of the single-frame image and its feature class;
generating a feature recognition result data set according to the acquisition order of the plurality of single-frame images and their feature recognition results; the feature recognition result data set comprises the feature recognition result and sequence number of each single-frame image;
determining a plurality of features with the same feature class from the feature recognition result data set;
analyzing a plurality of features with the same feature category to obtain an analysis result of the feature category;
determining frame-of-interest images from the single-frame images according to the target analysis results, among the analysis results of the feature classes, that match preset target feature classes;
and adding a corresponding bookmark at the sequence number corresponding to each frame-of-interest image so that a user can view the corresponding frame-of-interest image through the bookmark.
Optionally, if the feature class is a lumen cross-sectional area class;
The analyzing the plurality of features with the same feature class to obtain the analysis result of the feature class comprises the following steps:
acquiring, for each feature whose feature class is the lumen cross-sectional area class, the lumen cross-sectional area of that feature;
and determining, among the features of the lumen cross-sectional area class, the feature with the smallest lumen cross-sectional area, and generating the analysis result of the lumen cross-sectional area class according to the feature with the smallest lumen cross-sectional area.
Optionally, if the feature class is the stent class;
The analyzing the plurality of features with the same feature class to obtain the analysis result of the feature class comprises the following steps:
determining at least one first feature group from the features whose feature class is the stent class; wherein a first feature group comprises a plurality of consecutive features of the stent class;
determining, for each first feature group, a start frame feature and an end frame feature from the plurality of features of that first feature group;
and generating the analysis result of the stent class according to the start frame feature and the end frame feature of each first feature group.
Optionally, if the feature class is a plaque class;
The analyzing the plurality of features with the same feature class to obtain the analysis result of the feature class comprises the following steps:
determining at least one second feature group from the features whose feature class is the plaque class; wherein a second feature group comprises a plurality of consecutive features of the plaque class;
for each feature in each second feature group, acquiring the lumen cross-sectional area and the external elastic membrane cross-sectional area of the feature, and calculating the plaque area of the feature from its lumen cross-sectional area and external elastic membrane cross-sectional area;
and determining a target feature from each second feature group according to the plaque areas of the features in that group, and generating the analysis result of the plaque class according to each target feature.
Optionally, the method further comprises:
acquiring image data corresponding to the whole playback process; wherein the image data comprises a plurality of single-frame images arranged in order;
and performing feature recognition on each frame of the single-frame images through the feature recognition model to obtain the feature recognition result of each single-frame image.
Optionally, training the U-Net model to be trained with the historical image data to obtain the feature recognition model comprises:
acquiring historical image data; wherein the historical image data comprises a plurality of frames of historical images, and each frame of historical image carries the standard feature class of each of its historical features;
inputting, for each frame of historical image, the historical image into the U-Net model to be trained, so that the model performs feature recognition on the historical image to obtain each historical feature of the image and its feature class; taking, as the training target, that each recognized feature class approaches the standard feature class of the corresponding historical feature; and adjusting the parameters of the U-Net model to be trained until it converges, thereby obtaining the feature recognition model.
Optionally, after obtaining the feature recognition model, the method further includes:
Deploying the feature recognition model into an intravascular ultrasound software system, and configuring a model calling interface corresponding to the feature recognition model in the intravascular ultrasound software system.
In a second aspect, the invention provides a system for automatically bookmarking, the system comprising:
The first feature recognition unit is configured to, during the pullback process, perform feature recognition on each single-frame image through a pre-trained feature recognition model when the single-frame image is acquired, obtaining the feature recognition result of the single-frame image, until the pullback ends and the feature recognition results of a plurality of single-frame images are obtained; the feature recognition model is obtained by training a U-Net model to be trained with historical image data; the feature recognition result of a single-frame image comprises each feature of the single-frame image and its feature class;
The feature recognition result data set generating unit is configured to generate a feature recognition result data set according to the acquisition order of the plurality of single-frame images and their feature recognition results; the feature recognition result data set comprises the feature recognition result and sequence number of each single-frame image;
a first determining unit configured to determine a plurality of the features having the same feature class from the feature recognition result data set;
The analysis unit is used for analyzing a plurality of features with the same feature category to obtain an analysis result of the feature category;
The second determining unit is used for determining an interest frame image from each single frame image according to a target analysis result matched with a preset target feature class in the analysis results of each feature class;
And the bookmark adding unit is configured to add a corresponding bookmark at the sequence number corresponding to each frame-of-interest image so that a user can view the corresponding frame-of-interest image through the bookmark.
A third aspect of the present invention provides an electronic apparatus, comprising: the device comprises a processor and a memory, wherein the processor and the memory are connected through a communication bus; the processor is used for calling and executing the program stored in the memory; the memory is used for storing a program for implementing the method for automatically adding bookmarks as provided in the first aspect of the present invention.
A fourth aspect of the invention provides a computer readable storage medium having stored therein computer executable instructions for performing the method of automatically bookmarking as provided in the first aspect of the invention.
The invention provides a method, a system, an electronic device and a storage medium for automatically adding bookmarks. A U-Net model to be trained is trained in advance with historical image data to obtain a feature recognition model, so that during the pullback process of an intravascular ultrasound imaging system, each time a single-frame image is acquired, the feature recognition model performs feature recognition on it to obtain its feature recognition result; when the pullback ends, the feature recognition results of the plurality of single-frame images form a feature recognition result data set. The data set is comprehensively analyzed to find, among the plurality of feature recognition results, the target analysis results matching the target feature classes preset by the physician, and the frame-of-interest images the physician cares about are found from the single-frame images according to the target analysis results. A corresponding bookmark is automatically added at the sequence number of each frame-of-interest image, so that the physician no longer needs to watch the imaging at all times to find frames of interest and manually place bookmarks on them; this avoids missed bookmark placements and mismatches between the bookmarked frame of interest and the found frame of interest.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart illustrating a method for automatically adding bookmarks according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram of a method for automatically adding bookmarks according to an embodiment of the present invention;
FIG. 3 is an exemplary diagram of another method for automatically bookmarking provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a system for automatically adding bookmarks according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by these devices, modules, or units.
It should be noted that references to "a" or "an" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they are to be interpreted as "one or more" unless the context clearly indicates otherwise.
In the existing pullback workflow of an intravascular ultrasound imaging system, the physician generally has to watch the images throughout the whole pullback in order to screen out the frame-of-interest images, and a corresponding bookmark is manually placed on each screened frame-of-interest image. With this approach, not only can bookmark placements be missed and the actually bookmarked frame-of-interest image differ from the found one, but the process is also repetitive and time-consuming, which affects the efficiency of the procedure to some extent.
Therefore, the embodiment of the invention provides a method, a system, an electronic device and a storage medium for automatically adding bookmarks. A U-Net model to be trained is trained in advance with historical image data to obtain a feature recognition model, so that during the pullback process of an intravascular ultrasound imaging system, each time a single-frame image is acquired, the feature recognition model performs feature recognition on it to obtain its feature recognition result; when the pullback ends, the feature recognition results of the plurality of single-frame images form a feature recognition result data set. The data set is comprehensively analyzed to find, among the plurality of feature recognition results, the target analysis results matching the target feature classes preset by the physician, and the frame-of-interest images the physician cares about are found from the single-frame images according to the target analysis results, thereby realizing automatic placement of bookmarks during pullback. The physician does not need to watch the images throughout the pullback at all times, which reduces time consumption, lightens the physician's burden, and thus improves the efficiency of the procedure.
Referring to fig. 1, a flowchart of a method for automatically adding bookmarks according to an embodiment of the present invention is shown, where the method for automatically adding bookmarks may be applied to an intravascular ultrasound software system, and the method specifically includes the following steps:
S101: in the retracting process, when a single frame image is acquired, the single frame image is subjected to feature recognition through a pre-trained feature recognition model, so that a feature recognition result of the single frame image is obtained, and the feature recognition results of a plurality of single frame images are obtained until the retracting is finished.
In the embodiment of the application, the U-Net model to be trained can be trained in advance with the historical image data to obtain the corresponding feature recognition model.
Optionally, historical image data may be acquired, where the historical image data comprises a plurality of frames of historical images and each frame of historical image carries the standard feature class of each of its historical features. For each frame of historical image, the historical image is input into the U-Net model to be trained, so that the model performs feature recognition on the historical image to obtain each historical feature of the image and its feature class; taking, as the training target, that each recognized feature class approaches the standard feature class of the corresponding historical feature, the parameters of the U-Net model to be trained are adjusted until the model converges, yielding the feature recognition model.
It should be noted that, for each frame of history image in the history image data, each history feature in each frame of history image may be identified in advance by a person, and a standard feature class corresponding to each history feature may be set.
It should be further noted that the standard feature class of the history feature may be a lumen cross-sectional area class, a plaque class, a stent class, and the like, and the embodiment of the present invention is not limited herein.
Specifically, one may analyze which features the frame-of-interest images historically attended to by each user (physician) mainly contain, so as to collect and organize historical image data containing the relevant features; after the historical image data are collected, the historical features of interest to the user are annotated in each frame of historical image, and a corresponding standard feature class is set for each historical feature.
A corresponding training data set and test data set are generated from the frames of historical images and the standard feature classes of their historical features. Each frame of historical image in the training data set is input in turn into the U-Net model to be trained, which performs feature recognition on each frame to obtain its historical features and their feature classes; taking, as the training target, that each recognized feature class approaches the corresponding standard feature class, the parameters of the U-Net model to be trained are adjusted until the model converges, yielding an initial feature recognition model. Finally, the initial feature recognition model is verified with the test data set: if its feature recognition accuracy is not less than the preset feature recognition accuracy, the initial feature recognition model is determined to be the feature recognition model; otherwise, new historical image data are collected and training of the initial feature recognition model continues on the newly collected data, until the feature recognition accuracy of the resulting model is not less than the preset feature recognition accuracy.
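The train-validate-retrain cycle described above can be sketched as follows. This is a minimal sketch under stated assumptions: the trainer (`train_unet`), the accuracy evaluator (`evaluate_accuracy`), the data-collection hook (`collect_more_data`), and the 0.9 accuracy threshold are hypothetical stand-ins, since the patent does not specify concrete APIs or threshold values.

```python
def build_feature_recognition_model(train_set, test_set, collect_more_data,
                                    train_unet, evaluate_accuracy,
                                    required_accuracy=0.9, max_rounds=5):
    """Train an initial model, then keep collecting new historical image data
    and retraining until the model's accuracy on the test set reaches the
    required threshold (the accuracy gate described in the patent)."""
    model = train_unet(train_set)                  # initial feature recognition model
    for _ in range(max_rounds):
        if evaluate_accuracy(model, test_set) >= required_accuracy:
            return model                           # accuracy gate passed
        train_set = train_set + collect_more_data()  # newly collected historical images
        model = train_unet(train_set)              # continue training on the new data
    raise RuntimeError("model never reached the required accuracy")
```

The loop terminates either when the accuracy gate is passed or after a bounded number of retraining rounds; the bound is an added safety choice, not part of the patent's description.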
It should be noted that the historical image data may be historical clinical image data, and the embodiment of the present invention is not limited herein.
It should also be noted that the feature class of a feature may be a lumen cross-sectional area class, a plaque class, a stent class, etc., and the embodiments of the present invention are not limited herein.
In some embodiments, because recognizing and extracting the features in each frame of historical image is an image segmentation task, the neural network to be trained can be determined to be an image semantic segmentation model. Combining the characteristics of the medical images involved in the invention with the parameter characteristics of the evaluation function in the U-Net framework, the image semantic segmentation model to be trained is determined to be a U-Net model, i.e., a deep learning network model for image semantic segmentation.
Further, in the embodiment of the invention, after the corresponding feature recognition model is obtained, the obtained feature recognition model can be deployed in the intravascular ultrasound software system, and a model calling interface corresponding to the feature recognition model is configured in the intravascular ultrasound software system; meanwhile, model calling service corresponding to the feature recognition model can be added in the intravascular ultrasound software system.
In the embodiment of the invention, during the pullback process of the intravascular ultrasound software system, as soon as a single-frame image is acquired, it can be imported into the feature recognition model through the model-call service and the call interface, so that the feature recognition model performs feature recognition on the imported single-frame image to obtain its feature recognition result; when the pullback ends, the feature recognition result of every single-frame image acquired during the whole pullback has been obtained.
The feature recognition result of the single-frame image comprises each feature of the single-frame image and a feature category thereof.
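As a minimal sketch of this per-frame workflow, the loop below runs a recognizer on each single-frame image as soon as it arrives from the pullback, rather than waiting for the pullback to finish; `recognize` stands in for the deployed model's call interface and is an assumption, not the patent's actual API.

```python
def recognize_during_pullback(frame_stream, recognize):
    """frame_stream yields single-frame images in acquisition order;
    recognize(frame) returns that frame's feature recognition result,
    e.g. a mapping of feature name to feature class."""
    results = []
    for frame in frame_stream:            # one model call per acquired frame
        results.append(recognize(frame))  # result available immediately
    return results                        # one result per single-frame image
```

Because each frame is processed as it is acquired, the full list of per-frame results is already complete the moment the pullback ends.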
In some embodiments, the collected single-frame image may be preprocessed before being imported into the feature recognition model, so that the preprocessed single-frame image is imported into the feature recognition model and the model performs feature recognition on it to obtain the feature recognition result; in this case, the imported single-frame image is the preprocessed single-frame image.
It should be noted that, the process of preprocessing the single frame image may at least include denoising, geometric transformation, and so on, which are not limited in the embodiments of the present invention.
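The patent names denoising and geometric transformation as preprocessing steps but does not specify algorithms; the sketch below uses illustrative assumptions (a 3x3 median filter for denoising and a horizontal flip as a trivial geometric transform) on an image represented as a list of rows of pixel values.

```python
from statistics import median

def denoise_median3x3(img):
    """3x3 median filter; border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median(img[yy][xx]
                               for yy in (y - 1, y, y + 1)
                               for xx in (x - 1, x, x + 1))
    return out

def preprocess(img):
    """Denoise, then apply an illustrative geometric transform (horizontal flip)."""
    return [list(reversed(row)) for row in denoise_median3x3(img)]
```

An isolated noise spike surrounded by uniform pixels is replaced by the neighborhood median, while smooth regions pass through unchanged.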
S102: generating a characteristic recognition result data set according to the acquisition sequence of the plurality of single-frame images and the characteristic recognition result thereof; the feature recognition result data set comprises a feature recognition result and a serial number of each single frame image.
In the specific execution of step S102, when the feature recognition result of each single frame image in the whole retraction process is obtained, the acquisition sequence of each single frame image may be determined, so as to generate a feature recognition result data set according to the acquisition sequence of each single frame image and the feature recognition result thereof; the feature recognition result data set comprises a feature recognition result and a serial number of each single frame image.
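Step S102 can be sketched as pairing each frame's recognition result with a sequence number reflecting acquisition order, so that later analysis and bookmarking can refer back to a specific frame. The record field names are illustrative assumptions.

```python
def build_result_dataset(results_in_order):
    """results_in_order: feature recognition results, one per single-frame
    image, in acquisition order. Returns the feature recognition result
    data set as a list of records keyed by sequence number."""
    return [{"sequence_number": i, "features": r}
            for i, r in enumerate(results_in_order, start=1)]
```

Sequence numbers start at 1 here by choice; any stable numbering that preserves acquisition order would serve the same purpose.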
S103: and determining a plurality of features with the same feature category from the feature recognition result data set, and analyzing the plurality of features with the same feature category to obtain an analysis result of the feature category.
In the process of specifically executing step S103, after the feature recognition result data set is obtained, for each feature class set in advance, a plurality of features under each feature class may be determined from the feature recognition result data set, so as to analyze the plurality of features under each feature class, and obtain an analysis result of each feature class.
It should be noted that, the preset feature categories may include a stent category, a plaque category, a lumen cross-sectional area category, and the like, and the embodiment of the present invention is not limited herein.
As an implementation provided by the embodiment of the present invention, if the feature class is the lumen cross-sectional area class, analyzing the plurality of features having that feature class to obtain the analysis result of the feature class may specifically be: acquiring, for each feature whose feature class is the lumen cross-sectional area class, the lumen cross-sectional area of the feature; determining, among those features, the feature with the smallest lumen cross-sectional area; and generating the analysis result of the lumen cross-sectional area class according to the feature with the smallest lumen cross-sectional area.
In the embodiment of the invention, among the features whose feature class is the lumen cross-sectional area class, the feature with the smallest lumen cross-sectional area can be found, and the single-frame image corresponding to that feature is taken as the analysis result of the lumen cross-sectional area class. Thus, when the lumen cross-sectional area class is determined to be a target feature class of interest preset by the physician, the single-frame image with the smallest lumen cross-sectional area is taken as the frame-of-interest image, so the physician can quickly locate the frame with the smallest lumen cross-sectional area, which improves the efficiency of the procedure to some extent.
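The lumen cross-sectional-area analysis reduces to a minimum search over the frames whose recognition result contains a lumen measurement. The record shape (a `features` dict with a `lumen_csa` entry) is an illustrative assumption, not the patent's data model.

```python
def smallest_lumen_frame(dataset):
    """dataset: records with 'sequence_number' and a 'features' dict.
    Returns the record whose lumen cross-sectional area is smallest."""
    candidates = [r for r in dataset if "lumen_csa" in r["features"]]
    return min(candidates, key=lambda r: r["features"]["lumen_csa"])
```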
As another implementation provided by the embodiment of the present invention, if the feature class is the stent class, analyzing the plurality of features having that feature class to obtain the analysis result of the feature class may specifically be: determining at least one first feature group from the features whose feature class is the stent class, where a first feature group comprises a plurality of consecutive features of the stent class; determining, for each first feature group, a start frame feature and an end frame feature from the features of that group; and generating the analysis result of the stent class according to the start frame feature and the end frame feature of each first feature group.
It should be noted that, a stent exists in a single frame image corresponding to a feature whose feature class is a stent class.
It should be further noted that, for each first feature set, the start frame feature in the first feature set may be a first feature of a plurality of features that are consecutive in the first feature set, and the end frame feature is a last feature of the plurality of features that are consecutive in the first feature set.
In some embodiments, after determining that at least one first feature group of the continuous multiple features exists, a single frame image corresponding to a start frame feature and a single frame image corresponding to an end frame feature in each first feature group may be used as an analysis result of the stent class.
In other embodiments, the single-frame image corresponding to the start frame feature and the single-frame image corresponding to the end frame feature of any first feature group may be taken as the analysis result of the stent class. Thus, when the stent class is determined to be a target feature class of interest preset by the physician, the single-frame images where the stent begins and ends are taken as the frame-of-interest images, so the physician can quickly locate the start point and end point of the stent, which improves the efficiency of the procedure to some extent.
The foregoing is merely a preferred manner of generating the analysis result of the stent class provided by the embodiment of the present invention, and the embodiment of the present invention is not limited thereto.
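The grouping of consecutive stent-class features described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the function name and the data layout (each feature represented only by the sequence number of its single frame image, sorted in acquisition order) are assumptions made for clarity.

```python
def find_stent_groups(frame_numbers):
    """Group consecutive frame sequence numbers (frames whose feature
    class is the stent class) into runs, returning the (start, end)
    frame of each run -- the start frame feature and end frame feature
    of each first feature group.

    `frame_numbers` is assumed sorted in acquisition order."""
    groups = []
    start = prev = None
    for n in frame_numbers:
        if prev is None or n != prev + 1:
            # a gap: close the previous run, open a new one
            if start is not None:
                groups.append((start, prev))
            start = n
        prev = n
    if start is not None:
        groups.append((start, prev))
    return groups
```

For example, stent features on frames 3, 4, 5, 9 and 10 yield two groups, `(3, 5)` and `(9, 10)`: frames 3 and 9 are start frame features, frames 5 and 10 are end frame features.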
As another implementation manner provided in the embodiment of the present invention, if the feature class is the plaque class, the process of analyzing a plurality of features with the same feature class to obtain an analysis result of that feature class may specifically be: determining at least one second feature group from the features whose feature class is the plaque class, wherein each second feature group comprises a plurality of consecutive features of the plaque class; for each feature in each second feature group, acquiring the lumen cross-sectional area and the external elastic membrane cross-sectional area of the feature, and calculating the plaque area of the feature from those two areas; and determining a target feature from each second feature group according to the plaque areas of the features in that group, and generating the analysis result of the plaque class according to the target features.
In the embodiment of the invention, for each feature in a second feature group, the lumen cross-sectional area is subtracted from the external elastic membrane cross-sectional area of the feature to obtain the plaque area of the feature. For each second feature group, the feature with the largest plaque area in that group is determined as its target feature, and the single frame image corresponding to each target feature is used as the analysis result of the plaque class. In this way, when the plaque class is determined to be a target feature class of interest preset by the physician, the single frame image corresponding to each target feature, that is, the frame with the largest plaque area in its group, is used as an interest frame image, so that the physician can quickly find the frame with the largest plaque area when focusing on the interest frame images, improving operation efficiency to a certain extent.
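The plaque-area calculation and target-feature selection above can be sketched as follows. An illustrative Python sketch: function names and the tuple layout `(frame_no, lumen_csa, eem_csa)` are assumptions, and the plaque area follows the standard intravascular ultrasound definition (external elastic membrane cross-sectional area minus lumen cross-sectional area).

```python
def plaque_areas(features):
    """features: list of (frame_no, lumen_csa, eem_csa) tuples for one
    contiguous second feature group (areas in mm^2).
    Plaque area = EEM cross-sectional area - lumen cross-sectional area."""
    return [(frame, eem - lumen) for frame, lumen, eem in features]

def target_feature(features):
    """Return the frame number of the feature with the largest plaque
    area in the group -- the group's target feature."""
    return max(plaque_areas(features), key=lambda pair: pair[1])[0]
```

For a group with areas `(10, 5.0, 12.0)`, `(11, 4.5, 13.0)`, `(12, 6.0, 11.0)`, the plaque areas are 7.0, 8.5 and 5.0, so frame 11 is the target feature.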
S104: determining the interest frame images from the single frame images according to the target analysis result, among the analysis results of the feature classes, that matches a preset target feature class.
In the specific execution of step S104, the physician may preset at least one target feature class of interest. After the analysis result of each feature class is obtained, the target analysis result consistent with a target feature class can be determined from those analysis results, and the corresponding single frame images taken as the interest frame images of interest to the physician. The physician therefore does not need to watch the imaging throughout the pullback process, which reduces time consumption, lightens the physician's burden, and improves operation efficiency.
It should be noted that there may be one or more determined interest frame images; the embodiment of the present invention is not limited herein.
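Step S104 can be sketched as a simple matching against the preset target feature classes. Illustrative Python only: the dict layout (per-class analysis results as lists of frame sequence numbers) is an assumption, not from the patent.

```python
def interest_frames(analysis_results, target_classes):
    """analysis_results: dict mapping a feature class to the list of
    frame sequence numbers its analysis produced.
    target_classes: the feature classes the physician preset as being
    of interest.
    Returns the de-duplicated, sorted sequence numbers of the interest
    frame images."""
    frames = set()
    for cls in target_classes:
        frames.update(analysis_results.get(cls, []))
    return sorted(frames)
```

For example, with per-class results `{"lumen": [42], "stent": [10, 25], "plaque": [33]}` and target classes `["stent", "plaque"]`, the interest frames are 10, 25 and 33; a target class with no analysis result simply yields no interest frames.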
S105: and adding a corresponding bookmark on the serial number corresponding to the interest frame image so that a user can view the corresponding interest frame image through the bookmark.
In the specific execution of step S105, after an interest frame image is determined, the corresponding bookmark can be added automatically to the sequence number corresponding to that image, without manual addition. This avoids situations in which a bookmark is omitted, or in which the frame actually marked with a bookmark is inconsistent with the interest frame image that was found, and the physician can quickly open the corresponding interest frame image through the bookmark on the sequence number.
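A minimal sketch of the bookmark bookkeeping in step S105, including the manual deletion described later in the embodiment. The class and method names are illustrative assumptions; the essential point is only the mapping from a frame's sequence number to its bookmark.

```python
class BookmarkStore:
    """Sketch of automatic bookmark placement: one bookmark per
    interest-frame sequence number, so a later lookup by bookmark
    returns the frame it marks."""

    def __init__(self):
        self._marks = {}  # sequence number -> bookmark label

    def add(self, seq_no, label=None):
        # automatic addition in S105; label defaults to a generated name
        self._marks[seq_no] = label or f"bookmark-{seq_no}"

    def remove(self, seq_no):
        # manual deletion by the physician, as described in the embodiment
        self._marks.pop(seq_no, None)

    def frames(self):
        # sequence numbers the user can jump to via their bookmarks
        return sorted(self._marks)
```

Viewing an interest frame image then reduces to looking up its sequence number among the bookmarked frames.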
The embodiment of the invention provides a method for automatically adding bookmarks. A Unet model to be trained is trained in advance with historical image data to obtain a feature recognition model. During the pullback process of an intravascular ultrasound imaging system, each time a single frame image is acquired, the feature recognition model performs feature recognition on it to obtain a feature recognition result; when the pullback ends, the feature recognition results of the acquired single frame images are assembled into a feature recognition result data set. The data set is then analyzed comprehensively to find, among the analysis results, the target analysis result that matches a target feature class preset by the physician, and the interest frame images of interest to the physician are determined from the single frame images according to that target analysis result. Bookmarks are thus placed automatically during the pullback process: the physician does not need to watch the imaging at all times, time consumption is reduced, and the physician's burden is lightened, thereby improving operation efficiency.
For a better understanding of the contents provided by the above embodiment of the present invention, an explanation is given below with reference to fig. 2.
A1: and training the Unet model to be trained by using the historical image data to obtain a corresponding feature recognition model.
A2: when it is detected that a pullback is currently being performed, judging whether the pullback has finished; if not, executing A3; if so, executing A5.
A3: acquiring a single frame image, and importing it into the feature recognition model by invoking the model calling interface through the model calling service.
A4: performing feature recognition on the imported single frame image through the feature recognition model to obtain its feature recognition result, and returning to step A2.
A5: generating a feature recognition result data set according to the acquisition order of the plurality of single frame images and their feature recognition results; determining a plurality of features with the same feature class from the data set; analyzing those features to obtain the analysis result of each feature class; and determining the interest frame images from the single frame images according to the target analysis result that matches a preset target feature class.
A6: adding the corresponding bookmark to the sequence number corresponding to each interest frame image and displaying it.
Further, in the embodiment of the invention, when the physician views an interest frame image through its bookmark, the physician can analyze and measure the image. If the image is unsatisfactory, its bookmark can be deleted and a bookmark added manually to the appropriate interest frame image instead.
In the embodiment of the invention, if the pullback process has finished and a bookmark was not added to some interest frame image, the method for automatically adding bookmarks can be invoked when the corresponding image sequence is reviewed: each single frame image in the sequence is traversed and bookmarks are added again automatically. The review may be a playback process.
Optionally: acquiring the image data corresponding to the whole playback process, the image data comprising a plurality of single frame images arranged in order; performing, for each single frame image, feature recognition through the feature recognition model to obtain its feature recognition result; generating a feature recognition result data set according to the acquisition order of the plurality of single frame images and their feature recognition results; determining a plurality of features with the same feature class from the data set; analyzing those features to obtain the analysis result of the feature class; determining the interest frame images from the single frame images according to the target analysis result that matches a preset target feature class; and adding the corresponding bookmark to the sequence number corresponding to each interest frame image, so that the user can view the image through the bookmark. For the processes of analyzing the features with the same feature class, determining the interest frame images, and adding the bookmarks, reference may be made to the corresponding content of the embodiments provided above, which will not be repeated here.
Specifically, referring to fig. 3: first, the Unet model to be trained may be trained using historical image data to obtain the corresponding feature recognition model. Secondly, the image data (image sequence) corresponding to the whole playback process is acquired, the image data comprising a plurality of single frame images arranged in order, and each single frame image is imported frame by frame into the feature recognition model by invoking the model calling interface through the model calling service. Thirdly, feature recognition is performed on the imported single frame images in turn through the feature recognition model, and the feature recognition result of each single frame image is returned; a feature recognition result data set is generated according to the acquisition order of the plurality of single frame images and their feature recognition results, so as to determine a plurality of features with the same feature class from the data set; those features are analyzed to obtain the analysis result of each feature class, and the interest frame images are determined from the single frame images according to the target analysis result that matches a preset target feature class. Finally, the corresponding bookmark is added to the sequence number corresponding to each interest frame image and displayed.
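The playback flow of fig. 3 can be sketched end to end as follows. Everything here is illustrative: `recognize` stands in for the trained Unet feature recognition model (returning the feature classes present in a frame), `analyzers` maps each feature class to its per-class analysis function, and frames are identified only by sequence number — none of these names or signatures come from the patent.

```python
def auto_bookmark(frames, recognize, analyzers, target_classes):
    """End-to-end sketch: per-frame recognition, grouping by feature
    class, per-class analysis, matching against the preset target
    classes, and returning the sequence numbers to bookmark."""
    # recognition, kept in acquisition order (the result data set)
    dataset = [(seq, recognize(img)) for seq, img in enumerate(frames)]
    # collect, per feature class, the sequence numbers where it appears
    by_class = {}
    for seq, classes in dataset:
        for cls in classes:
            by_class.setdefault(cls, []).append(seq)
    # per-class analysis for the classes the physician preset
    bookmarks = set()
    for cls, seqs in by_class.items():
        if cls in target_classes:
            bookmarks.update(analyzers[cls](seqs))
    return sorted(bookmarks)
```

With toy data — frames that directly list their classes, a stent analyzer returning the first and last frame of its run, and `{"stent"}` as the target class — stent features on frames 1 and 2 yield bookmarks on those two frames.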
Based on the method for automatically adding bookmarks provided by the embodiment of the present invention, correspondingly, the embodiment of the present invention further provides a system for automatically adding bookmarks, as shown in fig. 4, where the system for automatically adding bookmarks includes:
The first feature recognition unit 41 is configured to, each time a single frame image is acquired during the pullback process, perform feature recognition on the single frame image through a pre-trained feature recognition model to obtain the feature recognition result of that image, so that when the pullback ends the feature recognition results of a plurality of single frame images have been obtained; the feature recognition model is obtained by a training unit training a Unet model to be trained with historical image data; the feature recognition result of a single frame image comprises each feature of the image and its feature class;
A feature recognition result data set generating unit 42 for generating a feature recognition result data set according to the acquisition order of the plurality of single frame images and the feature recognition result thereof; the feature recognition result data set comprises a feature recognition result and a sequence number of each single frame image;
A first determining unit 43 for determining a plurality of features of the same feature class from the feature recognition result data set;
an analysis unit 44, configured to analyze a plurality of features with the same feature class, to obtain an analysis result of the feature class;
A second determining unit 45, configured to determine an interest frame image from each single frame image according to a target analysis result matched with a preset target feature class in the analysis results of each feature class;
The bookmark adding unit 46 is configured to add a corresponding bookmark to the sequence number corresponding to the frame of interest image, so that the user views the corresponding frame of interest image through the bookmark.
The specific principle and execution process of each unit in the automatic bookmark adding system disclosed in the above embodiment of the present invention are the same as the automatic bookmark adding method disclosed in the above embodiment of the present invention, and may refer to the corresponding parts in the automatic bookmark adding method disclosed in the above embodiment of the present invention, and will not be described in detail here.
The embodiment of the invention provides a system for automatically adding bookmarks. A Unet model to be trained is trained in advance with historical image data to obtain a feature recognition model. During the pullback process of an intravascular ultrasound imaging system, each time a single frame image is acquired, the feature recognition model performs feature recognition on it to obtain a feature recognition result; when the pullback ends, the feature recognition results of the acquired single frame images are assembled into a feature recognition result data set. The data set is analyzed comprehensively to find the target analysis result that matches a target feature class preset by the physician, the interest frame images of interest to the physician are determined from the single frame images according to that target analysis result, and the corresponding bookmarks are added automatically to the sequence numbers of the interest frame images. The physician therefore does not need to watch the imaging at all times to find the interest frame images and place bookmarks on them manually, and the situations in which a bookmark is omitted or the frame marked with a bookmark is inconsistent with the interest frame image that was found are avoided.
Optionally, if the feature class is a lumen cross-sectional area class, the analysis unit includes:
A first acquisition unit configured to acquire a lumen cross-sectional area of a feature for each feature of which the feature class is a lumen cross-sectional area class;
and the first analysis result generation unit is used for determining the characteristic with the smallest lumen cross-sectional area in the characteristics with the characteristic category being the lumen cross-sectional area category and generating an analysis result of the lumen cross-sectional area category according to the characteristic with the smallest lumen cross-sectional area.
Optionally, if the feature class is a stent class, the analysis unit includes:
a third determining unit configured to determine at least one first feature group from among the features whose feature class is the stent class; wherein each first feature group comprises a plurality of consecutive features of the stent class;
a fourth determining unit configured to determine, for each of the first feature groups, a start frame feature and an end frame feature from a plurality of features of the first feature groups;
And the second analysis result generating unit is used for generating analysis results of the stent category according to the starting frame characteristics and the ending frame characteristics of each first characteristic group.
Optionally, if the feature class is a plaque class, the analysis unit includes:
a second obtaining unit configured to determine at least one second feature group from the features whose feature class is the plaque class; wherein each second feature group comprises a plurality of consecutive features of the plaque class;
A calculation unit configured to acquire, for each feature in each second feature group, the lumen cross-sectional area and the external elastic membrane cross-sectional area of the feature, and to calculate the plaque area of the feature from the lumen cross-sectional area and the external elastic membrane cross-sectional area;
And a third analysis result generation unit configured to determine target features from each of the second feature groups according to plaque areas of the respective features in each of the second feature groups, and generate an analysis result of plaque categories according to each of the target features.
Optionally, the system for automatically adding bookmarks provided by the embodiment of the present invention further includes:
an image data acquisition unit for acquiring image data corresponding to the whole playback process; wherein the image data comprises a plurality of single-frame images which are arranged in sequence;
The second feature recognition unit is used for carrying out feature recognition on the single-frame image through the feature recognition model aiming at each single-frame image to obtain a feature recognition result of the single-frame image.
Optionally, the training unit includes:
a history image data acquisition unit configured to acquire history image data; wherein the historical image data comprises a plurality of frames of historical images, and each frame of historical image comprises a standard feature class of each historical feature;
the training subunit is configured to input, for each frame of the historical images, the historical image into the Unet model to be trained, so that the model performs feature recognition on the historical image to obtain each historical feature of the image and its feature class; with making the feature class of each historical feature approach the standard feature class of that historical feature as the training target, the parameters of the Unet model to be trained are adjusted until the model converges, yielding the feature recognition model.
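The "adjust parameters until convergence" loop of the training subunit can be sketched generically as follows. This is only a scalar toy, not a Unet: the real model is a segmentation network with many parameters, and `step` stands in for whatever gradient-based update rule is used; the function name and tolerance are illustrative assumptions.

```python
def train_until_convergence(step, params, tol=1e-6, max_iter=10000):
    """Apply the parameter-update rule `step` repeatedly until the
    parameters stop changing (convergence), as in the training
    subunit's loop. `params` is a single float here for illustration."""
    for _ in range(max_iter):
        new_params = step(params)
        if abs(new_params - params) < tol:
            return new_params
        params = new_params
    return params
```

For example, gradient descent on the loss (x - 3)^2 with learning rate 0.1 uses the update `x - 0.2 * (x - 3)` and converges to 3, the parameter value that makes the prediction match the training target.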
Optionally, the system for automatically adding bookmarks provided by the embodiment of the present invention further includes:
the configuration unit is used for deploying the feature recognition model into the intravascular ultrasound software system and configuring a model calling interface corresponding to the feature recognition model in the intravascular ultrasound software system.
An embodiment of the present application provides an electronic device, as shown in fig. 5. The electronic device includes a processor 501 and a memory 502, where the memory 502 is configured to store program code and data for automatically adding bookmarks, and the processor 501 is configured to invoke the program instructions in the memory to execute the steps of the method for automatically adding bookmarks shown in the above embodiments.
An embodiment of the application provides a storage medium comprising a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to execute the method for automatically adding bookmarks shown in the above embodiments.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, for the system embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the corresponding parts of the method embodiments. The systems and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the invention without undue burden.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.

Claims (7)

1. A method of automatically bookmarking, the method comprising:
In the retracting process, when a single frame image is acquired, carrying out feature recognition on the single frame image through a pre-trained feature recognition model to obtain a feature recognition result of the single frame image until the retracting is finished, and obtaining feature recognition results of a plurality of single frame images; the feature recognition model is obtained by training a Unet model to be trained by utilizing historical image data; the feature recognition result of the single frame image comprises each feature of the single frame image and a feature category thereof;
generating a characteristic recognition result data set according to the acquisition sequence of the plurality of single-frame images and the characteristic recognition result thereof; the feature recognition result data set comprises a feature recognition result and a sequence number of each single-frame image;
determining a plurality of features with the same feature class from the feature recognition result data set;
analyzing a plurality of features with the same feature category to obtain an analysis result of the feature category;
According to the target analysis results matched with the preset target feature categories in the analysis results of the feature categories, determining an interest frame image from each single frame image;
Adding a corresponding bookmark on the serial number corresponding to the interest frame image so that a user can view the corresponding interest frame image through the bookmark;
if the feature class is the lumen cross-sectional area class, the analyzing a plurality of features with the same feature class to obtain the analysis result of the feature class comprises: acquiring, for each feature whose feature class is the lumen cross-sectional area class, the lumen cross-sectional area of the feature; determining, among those features, the feature with the smallest lumen cross-sectional area, and generating the analysis result of the lumen cross-sectional area class according to the feature with the smallest lumen cross-sectional area;
if the feature class is the stent class, the analyzing a plurality of features with the same feature class to obtain the analysis result of the feature class comprises: determining at least one first feature group from the features whose feature class is the stent class, wherein each first feature group comprises a plurality of consecutive features of the stent class; determining, for each first feature group, a start frame feature and an end frame feature from the plurality of features of the first feature group; and generating the analysis result of the stent class according to the start frame feature and the end frame feature of each first feature group;
if the feature class is the plaque class, the analyzing a plurality of features with the same feature class to obtain the analysis result of the feature class comprises: determining at least one second feature group from the features whose feature class is the plaque class, wherein each second feature group comprises a plurality of consecutive features of the plaque class; acquiring, for each feature in each second feature group, the lumen cross-sectional area and the external elastic membrane cross-sectional area of the feature, and calculating the plaque area of the feature according to the lumen cross-sectional area and the external elastic membrane cross-sectional area; and determining a target feature from each second feature group according to the plaque areas of the features in that group, and generating the analysis result of the plaque class according to the target features.
2. The method according to claim 1, wherein the method further comprises:
Acquiring image data corresponding to the whole playback process; wherein the image data comprises a plurality of single-frame images which are arranged in sequence;
and carrying out feature recognition on the single-frame image through the feature recognition model aiming at each frame of the single-frame image to obtain a feature recognition result of the single-frame image.
3. The method of claim 1, wherein training the Unet model to be trained using historical image data to obtain the feature recognition model comprises:
acquiring historical image data; wherein the historical image data comprises a plurality of frames of historical images, and each frame of historical image comprises a standard feature class of each historical feature;
Inputting the historical images into a Unet model to be trained aiming at each frame of the historical images, enabling the Unet model to be trained to conduct feature recognition on the historical images to obtain each historical feature and feature category of the historical images, taking the standard feature category of each historical feature, which is close to each historical feature, as a training target, and adjusting parameters of the Unet model to be trained until the Unet model to be trained achieves convergence to obtain the feature recognition model.
4. A method according to claim 3, wherein after deriving the feature recognition model, the method further comprises:
Deploying the feature recognition model into an intravascular ultrasound software system, and configuring a model calling interface corresponding to the feature recognition model in the intravascular ultrasound software system.
5. A system for automatically bookmarking, the system comprising:
The first feature recognition unit is used for carrying out feature recognition on the single-frame image through a pre-trained feature recognition model when the single-frame image is acquired in the retracting process, so as to obtain a feature recognition result of the single-frame image until the retracting is finished, and then obtaining feature recognition results of a plurality of single-frame images; the feature recognition model is obtained by training a Unet model to be trained by utilizing historical image data; the feature recognition result of the single frame image comprises each feature of the single frame image and a feature category thereof;
The characteristic recognition result data set generating unit is used for generating a characteristic recognition result data set according to the acquisition sequence of the plurality of single-frame images and the characteristic recognition results thereof; the feature recognition result data set comprises a feature recognition result and a sequence number of each single-frame image;
a first determining unit configured to determine a plurality of the features having the same feature class from the feature recognition result data set;
The analysis unit is used for analyzing a plurality of features with the same feature category to obtain an analysis result of the feature category;
The second determining unit is used for determining an interest frame image from each single frame image according to a target analysis result matched with a preset target feature class in the analysis results of each feature class;
the bookmark adding unit is used for adding a corresponding bookmark on the serial number corresponding to the interest frame image so that a user can view the corresponding interest frame image through the bookmark;
If the feature class is a lumen cross-sectional area class, the analysis unit comprises: a first acquisition unit and a first analysis result generation unit;
the first obtaining unit is used for obtaining the lumen cross-sectional area of each feature aiming at each feature with the feature class as the lumen cross-sectional area class;
The first analysis result generation unit is used for determining the characteristic with the smallest lumen cross-sectional area in the characteristics with the characteristic category being the lumen cross-sectional area category, and generating an analysis result of the lumen cross-sectional area category according to the characteristic with the smallest lumen cross-sectional area;
If the feature class is a stent class, the analysis unit includes: a third determination unit, a fourth determination unit, and a second analysis result generation unit;
The third determining unit is used for determining at least one first feature group from the features whose feature class is the stent class; wherein each first feature group comprises a plurality of consecutive features of the stent class;
The fourth determining unit is configured to determine, for each of the first feature groups, a start frame feature and an end frame feature from a plurality of features of the first feature groups;
The second analysis result generating unit is used for generating analysis results of the bracket categories according to the initial frame characteristics and the end frame characteristics of each first characteristic group;
If the feature class is a plaque class, the analysis unit comprises: a second acquisition unit, a computing unit, and a third analysis result generation unit;
The second acquisition unit is configured to determine at least one second feature group from the features whose feature class is the plaque class; wherein a second feature group comprises a plurality of consecutive features of the plaque class;
the computing unit is configured to acquire, for each feature in each second feature group, the lumen cross-sectional area and the external elastic membrane cross-sectional area of the feature, and to compute the plaque area of the feature according to the lumen cross-sectional area and the external elastic membrane cross-sectional area of the feature;
the third analysis result generation unit is configured to determine a target feature from each second feature group according to the plaque areas of the features in that second feature group, and to generate the analysis result of the plaque class according to the target features.
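In intravascular ultrasound analysis, plaque cross-sectional area is conventionally the external elastic membrane (EEM) cross-sectional area minus the lumen cross-sectional area. A sketch of the computing unit and third analysis result generation unit under that convention, taking the largest-plaque frame of each group as the group's target feature (the data layout and the max-plaque selection rule are illustrative assumptions):

```python
def plaque_targets(groups):
    """For each second feature group, compute each feature's plaque
    area as EEM CSA minus lumen CSA, and pick the frame with the
    largest plaque area as that group's target feature.

    groups: list of second feature groups; each group is a list of
    (frame_index, lumen_csa, eem_csa) triples for consecutive frames.
    Returns one (frame_index, plaque_area) pair per group.
    """
    targets = []
    for group in groups:
        best = max(group, key=lambda f: f[2] - f[1])
        frame, lumen, eem = best
        targets.append((frame, eem - lumen))
    return targets

groups = [
    [(20, 4.0, 10.0), (21, 3.5, 10.5), (22, 4.2, 9.8)],  # plaque: 6.0, 7.0, ~5.6
    [(40, 5.0, 9.0), (41, 4.5, 9.5)],                    # plaque: 4.0, 5.0
]
print(plaque_targets(groups))  # [(21, 7.0), (41, 5.0)]
```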
6. An electronic device, comprising: a processor and a memory, wherein the processor and the memory are connected through a communication bus; the processor is configured to call and execute a program stored in the memory; and the memory is configured to store a program for implementing the method of automatically adding bookmarks according to any one of claims 1-4.
7. A computer-readable storage medium having stored therein computer-executable instructions for performing the method of automatically adding bookmarks according to any one of claims 1-4.
CN202410166808.0A 2024-02-05 Method, system, electronic device and storage medium for automatically adding bookmarks Active CN117711581B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410166808.0A CN117711581B (en) 2024-02-05 Method, system, electronic device and storage medium for automatically adding bookmarks

Publications (2)

Publication Number Publication Date
CN117711581A CN117711581A (en) 2024-03-15
CN117711581B true CN117711581B (en) 2024-06-11

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143047A (en) * 2014-07-21 2014-11-12 North China Electric Power University (Baoding) Automatic tissue calibration method for IVUS gray-scale image
CN108182683A (en) * 2018-02-08 2018-06-19 山东大学 IVUS based on deep learning and transfer learning organizes mask method and system
CN110946619A (en) * 2019-11-27 2020-04-03 杨靖 Intravascular ultrasonic automatic imaging omics analysis system and analysis method
CN114494177A (en) * 2022-01-21 2022-05-13 天津大学 IVOCT (in-vivo visual optical coherence tomography) branch blood vessel identification method by utilizing longitudinal section and withdrawal property
CN115908330A (en) * 2022-11-23 2023-04-04 杭州脉流科技有限公司 DSA image-based coronary artery automatic frame selection classification recommendation method and device
WO2023118080A1 (en) * 2021-12-22 2023-06-29 Koninklijke Philips N.V. Intravascular ultrasound imaging for calcium detection and analysis
CN117392040A (en) * 2022-06-29 2024-01-12 无锡祥生医疗科技股份有限公司 Standard section identification method, system, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic tissue calibration of intravascular ultrasound gray-scale images; Sun Zheng; Wang Lixin; Zhou Ya; Journal of Biomedical Engineering; 2016-04-25 (02); 287-294 *

Similar Documents

Publication Publication Date Title
EP1092982A1 (en) Diagnostic system with learning capabilities
CN109168052B (en) Method and device for determining service satisfaction degree and computing equipment
CN109063433B (en) False user identification method and device and readable storage medium
CN110942447B (en) OCT image segmentation method, OCT image segmentation device, OCT image segmentation equipment and storage medium
CN111862020A (en) Method, device, server and storage medium for predicting physiological age of anterior segment
CN113378804A (en) Self-service sampling detection method and device, terminal equipment and storage medium
CN109711545A (en) Creation method, device, system and the computer-readable medium of network model
CN111178420A (en) Coronary segment labeling method and system on two-dimensional contrast image
CN110613417A (en) Method, equipment and storage medium for outputting upper digestion endoscope operation information
CN113658175A (en) Method and device for determining symptom data
CN117809124B (en) Medical image association calling method and system based on multi-feature fusion
CN116012568A (en) System for acquiring cardiac rhythm information through photographing electrocardiogram
CN117711581B (en) Method, system, electronic device and storage medium for automatically adding bookmarks
Poorjam et al. Quality control of voice recordings in remote Parkinson’s disease monitoring using the infinite hidden Markov model
CN110334107A (en) Qualification evaluation method, apparatus and server based on data analysis
CN117711581A (en) Method, system, electronic device and storage medium for automatically adding bookmarks
CN112801940A (en) Model evaluation method, device, equipment and medium
CN115588439B (en) Fault detection method and device of voiceprint acquisition device based on deep learning
CN114490344A (en) Software integration evaluation method based on machine learning and static analysis
CN113962216A (en) Text processing method and device, electronic equipment and readable storage medium
CN114708634A (en) Relative weight analysis method and device based on face image and electronic equipment
CN114464326A (en) Coronary heart disease prediction system based on multi-mode carotid artery data
CN113268419A (en) Method, device, equipment and storage medium for generating test case optimization information
CN111833993A (en) AI-based regional image remote quality control management system
CN113239075A (en) Construction data self-checking method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant