CN113505262B - Ultrasonic image searching method and device, ultrasonic equipment and storage medium - Google Patents


Info

Publication number
CN113505262B
CN113505262B
Authority
CN
China
Prior art keywords
target
search
searched
keyword
existing
Prior art date
Legal status
Active
Application number
CN202110945450.8A
Other languages
Chinese (zh)
Other versions
CN113505262A (en)
Inventor
董振鑫
Current Assignee
Shenzhen Wisonic Medical Technology Co ltd
Original Assignee
Shenzhen Wisonic Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Wisonic Medical Technology Co., Ltd.
Priority to CN202110945450.8A
Publication of CN113505262A
Application granted
Publication of CN113505262B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac


Abstract

The invention discloses an ultrasound image searching method and apparatus, an ultrasound device, and a storage medium. The method comprises the following steps: acquiring a target search request, wherein the target search request comprises a target search type and target search information corresponding to the target search type; executing a target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining target search features corresponding to the target search information; matching the target search features against the keyword association graphs corresponding to all existing ultrasound images in a system database to obtain at least one target ultrasound image; and sorting the at least one target ultrasound image based on a target sorting rule, and displaying a target search result. The method guarantees both the diversity of the search modes for target ultrasound images and the search efficiency.

Description

Ultrasonic image searching method and device, ultrasonic equipment and storage medium
Technical Field
The present invention relates to the field of ultrasound technology, and in particular, to an ultrasound image searching method, apparatus, ultrasound device, and storage medium.
Background
As ultrasound devices become more widely used, more and more medical institutions are equipped with them to scan and image the tissue of a target object, allowing doctors to read physiological data of the target object from the resulting ultrasound images and thereby assist in assessing the target object's health. Generally, the ultrasound device stores the ultrasound images produced during an examination in a system database, so that a physician can locate the desired ultrasound image by querying the database and generate an ultrasound analysis report based on it.
Existing ultrasound devices generally store a newly formed ultrasound image in the system database indexed only by an image name, determined by a fixed naming rule, and a generation time. This storage scheme causes the following drawbacks in subsequent ultrasound image searches. First, when the system database contains many ultrasound videos or ultrasound images, the required images cannot be found quickly using only the image name and generation time, so queries take a long time and ultrasound image analysis efficiency suffers. Second, for ultrasound videos and images that carry only partial annotations, the images can only be browsed in a fixed order, again making the required image slow to find and reducing ultrasound image analysis efficiency.
Disclosure of Invention
Embodiments of the invention provide an ultrasound image searching method and apparatus, an ultrasound device, and a storage medium, so as to solve the problem of low search efficiency for existing ultrasound images.
An ultrasound image searching method, comprising:
acquiring a target search request, wherein the target search request comprises a target search type and target search information corresponding to the target search type;
executing a target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining target search features corresponding to the target search information;
matching the target search features against keyword association graphs corresponding to all existing ultrasound images in a system database to obtain at least one target ultrasound image;
and sorting the at least one target ultrasound image based on a target sorting rule, and displaying a target search result.
An ultrasound image search apparatus, comprising:
a target search request acquisition module, configured to acquire a target search request, wherein the target search request comprises a target search type and target search information corresponding to the target search type;
a target search feature acquisition module, configured to execute a target feature extraction program corresponding to the target search type, perform feature extraction on the target search information, and determine target search features corresponding to the target search information;
a target ultrasound image acquisition module, configured to match the target search features against keyword association graphs corresponding to all existing ultrasound images in a system database to obtain at least one target ultrasound image;
and a target search result display module, configured to sort the at least one target ultrasound image based on a target sorting rule and display a target search result.
An ultrasound device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above ultrasound image searching method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the ultrasound image search method described above.
According to the ultrasound image searching method and apparatus, ultrasound device, and storage medium described above, target search features can be determined, for different target search types and the target search information in a target search request, by the target feature extraction program corresponding to each target search type; the target search features are then matched against the keyword association graphs corresponding to all existing ultrasound images in the system database, so that the target ultrasound images related to the target search information can be determined quickly and accurately, which guarantees both the diversity of search modes and the search efficiency for target ultrasound images; finally, the at least one target ultrasound image is sorted by a target sorting rule to obtain a target search result that matches the user's usage habits, which helps improve the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the invention, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a diagram illustrating an application environment of an ultrasound image searching method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 3 is another flow chart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 4 is another flowchart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 5 is another flowchart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 6 is another flowchart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 7 is another flowchart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 8 is another flowchart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 9 is another flowchart of a method for searching an ultrasound image according to an embodiment of the present invention;
FIG. 10 is a diagram of an ultrasound image searching apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The ultrasound image searching method provided by the embodiments of the invention can be applied to the ultrasound device shown in FIG. 1, which comprises a main controller and, connected to the main controller, an ultrasound probe, a beamforming processor, an image processor, and a display screen.
The main controller is the controller of the ultrasound device. It is connected to the other functional modules of the device, including but not limited to the ultrasound probe, the beamforming processor, the image processor, and the display screen, and controls the operation of each functional module.
The ultrasound probe transmits and receives ultrasound waves. In this example, the probe emits ultrasound waves outward; the waves propagate through media such as human tissue and produce echo analog signals, including reflected and scattered waves. The probe converts these echoes into electrical signals, amplifies them and performs analog-to-digital conversion to obtain echo digital signals, and then sends the digital signals to the beamforming processor.
The beamforming processor is connected to the ultrasound probe. It receives the echo digital signals sent by the probe, performs beamforming on the echo digital signals of one or more channels to obtain one or more beamformed echo signals, and sends them to the image processor.
The image processor is connected to the beamforming processor. It receives the beamformed echo signals, applies image processing such as image synthesis, spatial compounding, and frame correlation to form an ultrasound image, and sends the ultrasound image to the display screen for display.
In an embodiment, as shown in FIG. 2, an ultrasound image searching method is provided. It is described using the main controller in FIG. 1 as the executing body and includes the following steps:
S201: acquiring a target search request, wherein the target search request comprises a target search type and target search information corresponding to the target search type.
S202: executing a target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining target search features corresponding to the target search information.
S203: matching the target search features against keyword association graphs corresponding to all existing ultrasound images in the system database to obtain at least one target ultrasound image.
S204: sorting the at least one target ultrasound image based on a target sorting rule, and displaying a target search result.
Here, the target search request is the request that triggers the ultrasound image search. The target search type is the form in which the target search request is triggered, and may be any one of a character string search type, a voice search type, an image search type, a video search type, a tag search type, and a scene search type. The target search information is the information reflecting what the user wants to search for.
As an example, in step S201, the main controller may acquire a target search request triggered by a user; the request includes target search information input by the user in the form of any one of the character string, voice, image, video, tag, and scene search types. For example, if the target search type is the character string search type, the target search information is a character string to be searched; if it is the voice search type, the target search information is the voice to be searched; if it is the image search type, the target search information is an ultrasound image to be searched; if it is the video search type, the target search information is an ultrasound video to be searched; if it is the tag search type, the target search information is a tag to be searched; and if it is the scene search type, the target search information is a scene to be searched.
Wherein the target feature extraction program is a program for extracting a specific feature that matches the target search type.
As an example, in step S202, after acquiring the target search request, the main controller queries, according to the target search type in the request, the search feature extraction program corresponding to that type and determines it as the target feature extraction program. The main controller then uses the target feature extraction program to perform feature extraction on the target search information and determines the target search features corresponding to that information. In this example, the target search features may be features that match standard keywords in a target knowledge graph pre-stored in the system database. The target knowledge graph is a knowledge graph formed from all the standard keywords associated with ultrasound images. The standard keywords are the keywords recorded in the target knowledge graph before the current system time that describe information related to ultrasound images.
The keyword association graph corresponding to an existing ultrasound image is a knowledge graph formed by aggregating the keywords extracted or identified from that existing ultrasound image before the current system time; it includes, but is not limited to, keywords corresponding to existing standard plane features, existing tissue features, existing device features, and other features. Existing device features include information such as the ultrasound imaging mode, the ultrasound probe, ultrasound markers, and ultrasound measurements.
As an example, in step S203, the main controller may match the target search features against the keyword association graphs corresponding to all existing ultrasound images in the system database to obtain a feature matching result for each existing ultrasound image, and then determine at least one target ultrasound image from the feature matching results. A feature matching result can be understood as the result of matching the target search features against the keyword association graph of an existing ultrasound image; since both the target search features and the keyword association graph are keyword-based feature sets, a keyword matching algorithm can be used to determine whether the keyword association graph of an existing ultrasound image contains all or part of the target search features, so as to determine the feature matching result.
As an example, in step S204, the main controller may sort the feature matching results corresponding to the at least one target ultrasound image according to a target sorting rule to obtain an image sorting order, then sort the at least one target ultrasound image according to that order and display the target search result. The target sorting rule is a rule pre-configured by the system for sorting feature matching results; for example, the results may be sorted in descending order of the feature matching scores.
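For illustration, the overall flow of steps S201-S204 can be sketched in Python as below. The extractor registry, the flattening of each keyword association graph into a keyword set, and the keyword-overlap score used as the feature matching result are simplifying assumptions of this sketch, not requirements of the method.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ExistingImage:
    image_id: str
    keyword_graph: set        # keyword association graph, flattened to a set of keywords

@dataclass
class SearchRequest:
    search_type: str          # "string", "voice", "image", "video", "tag", or "scene"
    search_info: object       # the target search information for that type

def search_ultrasound_images(request: SearchRequest,
                             database: list,
                             extractors: dict) -> list:
    # S202: run the target feature extraction program matching the target search type.
    extract: Callable = extractors[request.search_type]
    target_features = set(extract(request.search_info))

    # S203: match the target search features against each image's keyword association graph.
    matches = []
    for image in database:
        score = len(target_features & image.keyword_graph)   # keyword-overlap score
        if score > 0:
            matches.append((score, image))

    # S204: sort by the target sorting rule (here, descending match score) for display.
    matches.sort(key=lambda pair: pair[0], reverse=True)
    return [image for _, image in matches]
```

A string-type extractor in this registry would, for example, return the keywords resolved in steps S301-S307 described below.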
In the ultrasound image searching method provided by this embodiment, target search features are determined by the target feature extraction program corresponding to the target search type and the target search information in the target search request; the target search features are then matched against the keyword association graphs corresponding to all existing ultrasound images in the system database, so that the target ultrasound images related to the target search information can be determined quickly and accurately, which guarantees both the diversity of search modes and the search efficiency for target ultrasound images; finally, the at least one target ultrasound image is sorted by the target sorting rule to obtain a target search result that matches the user's usage habits, which helps improve the user experience.
In an embodiment, as shown in FIG. 3, step S202, namely executing the target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining the target search features corresponding to the target search information, includes:
S301: when the target search type is the character string search type, extracting keywords from the character string to be searched to obtain search keywords.
S302: acquiring historical keywords corresponding to the character string search type.
S303: calculating a first matching degree between the search keywords and the historical keywords.
S304: if the first matching degree is greater than a target matching degree, determining the historical keywords as the target search features corresponding to the target search information.
S305: if the first matching degree is not greater than the target matching degree, matching the search keywords against each standard keyword in the target knowledge graph to obtain a second matching degree between the search keywords and each standard keyword.
S306: if the second matching degree is greater than the target matching degree, determining the standard keyword as the target search feature.
S307: if the second matching degree is not greater than the target matching degree, determining the search keywords as the target search features.
As an example, in step S301, when the target search type in the target search request is the character string search type, the target search information is a character string to be searched; for example, the user enters a string to be searched, which may include characters, numbers, and special symbols, into the search input field. The main controller may then extract keywords from the string using a keyword extraction algorithm to obtain the search keywords.
As an example, in step S302, the main controller may determine the historical search strings from the most recent K historical search requests whose target search type was the character string search type (K ≥ 1), and determine the keywords extracted from those strings as the historical keywords corresponding to the character string search type, so that historical keywords available for comparison are determined from the most recent K historical search requests. Alternatively, the main controller may collect all historical search requests of the character string search type within a target time window before the current system time, determine the historical search strings, and take the keywords extracted from them as the historical keywords, so that the historical keywords available for comparison are determined from all historical search requests within that window.
In this example, each time the ultrasound device receives a target search request and completes the corresponding search, information such as the search keywords, the search time, and the search status produced by that request is stored in the system database as log information, so that it can serve as a historical search request for subsequent target search requests.
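As a sketch of how such log information could support step S302, the snippet below models a log entry and the two retrieval strategies described above (the most recent K requests, or all requests in a target time window); the field names and data structures are assumptions for illustration.

```python
import time
from dataclasses import dataclass

@dataclass
class SearchLogEntry:
    keywords: set          # search keywords extracted from the request
    search_time: float     # timestamp at which the search completed
    status: str            # e.g. "success" or "failed"

def history_keywords_last_k(log: list, k: int) -> set:
    """Historical keywords from the most recent K string-type searches (K >= 1)."""
    recent = sorted(log, key=lambda entry: entry.search_time, reverse=True)[:k]
    return set().union(*(entry.keywords for entry in recent)) if recent else set()

def history_keywords_in_window(log: list, window_seconds: float) -> set:
    """Historical keywords from all string-type searches inside the target time window."""
    cutoff = time.time() - window_seconds
    hits = [entry for entry in log if entry.search_time >= cutoff]
    return set().union(*(entry.keywords for entry in hits)) if hits else set()
```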
As an example, in step S303, the main controller may match the search keywords against the historical keywords using a keyword matching algorithm and determine the first matching degree between them. The first matching degree reflects the similarity between the search keywords in the target search request and the historical keywords from the historical search requests.
Here, the target matching degree is a pre-configured threshold used to evaluate whether keywords meet the similarity criterion.
As an example, in step S304, when the first matching degree is greater than the target matching degree, the main controller determines that the search keywords are similar enough to the historical keywords to be treated as the same keywords, and may directly determine the historical keywords as the target search features. The target ultrasound image can then be determined directly from the historical search result corresponding to those historical keywords, which helps improve the search efficiency for the target ultrasound image.
As an example, in step S305, when the first matching degree is not greater than the target matching degree, the main controller determines that the search keywords are not similar to the historical keywords, and matches the search keywords against all standard keywords in the target knowledge graph to determine the second matching degree between each search keyword and each standard keyword. The second matching degree reflects the similarity between a search keyword and a standard keyword in the target knowledge graph.
As an example, in step S306, when the second matching degree is greater than the target matching degree, the main controller determines that the search keyword is similar enough to the standard keyword to be treated as the same keyword, and may determine the standard keyword as the target search feature, so that the subsequent ultrasound image search is performed based on the standard keyword; this helps guarantee both the search efficiency and the accuracy of the ultrasound image results.
As an example, in step S307, when the second matching degree is not greater than the target matching degree, the main controller determines that the search keyword is not similar to any standard keyword in the target knowledge graph; in this case only the search keyword itself can be determined as the target search feature, and the ultrasound image search is performed using it.
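The two-stage comparison of steps S303-S307 can be summarized in the following sketch. The Jaccard-style similarity stands in for the unspecified keyword matching algorithm, and the target matching degree of 0.8 is an assumed placeholder threshold.

```python
def matching_degree(keywords_a: set, keywords_b: set) -> float:
    """One possible keyword matching degree: Jaccard similarity of two keyword sets."""
    if not keywords_a or not keywords_b:
        return 0.0
    return len(keywords_a & keywords_b) / len(keywords_a | keywords_b)

def resolve_target_features(search_keywords: set,
                            history_keywords: set,
                            standard_keywords: list,      # standard keywords of the target knowledge graph
                            target_degree: float = 0.8) -> set:
    # S303/S304: compare against the historical keywords first.
    if matching_degree(search_keywords, history_keywords) > target_degree:
        return set(history_keywords)

    # S305/S306: otherwise compare against each standard keyword in the knowledge graph.
    matched_standard = {
        std for std in standard_keywords
        if matching_degree(search_keywords, {std}) > target_degree
    }
    if matched_standard:
        return matched_standard

    # S307: fall back to the raw search keywords.
    return set(search_keywords)
```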
In an embodiment, as shown in FIG. 4, step S202, namely executing the target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining the target search features corresponding to the target search information, includes:
S401: when the target search type is the voice search type, converting the voice to be searched into text using a speech-to-text technology to obtain a character string to be searched.
S402: extracting keywords from the character string to be searched to obtain search keywords.
S403: acquiring historical keywords corresponding to the character string search type.
S404: calculating a first matching degree between the search keywords and the historical keywords.
S405: if the first matching degree is greater than the target matching degree, determining the historical keywords as the target search features corresponding to the target search information.
S406: if the first matching degree is not greater than the target matching degree, matching the search keywords against each standard keyword in the target knowledge graph to obtain a second matching degree between the search keywords and each standard keyword.
S407: if the second matching degree is greater than the target matching degree, determining the standard keyword as the target search feature.
S408: if the second matching degree is not greater than the target matching degree, determining the search keywords as the target search features.
As an example, in step S401, when the target search type in the target search request is the voice search type, the target search information is the voice to be searched; for example, the user records the speech describing what they need to search for through a recording device. On receiving the voice to be searched, the main controller needs to convert it into a character string to be searched in text form using a speech recognition (speech-to-text) technology, so that the target search features can subsequently be determined from that string.
The processing of steps S402-S408 is the same as that of steps S301-S307 and is not repeated here.
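Assembled together, the voice branch could look like the sketch below: step S401 converts the voice to text, and steps S402-S408 reuse the string-branch logic (the resolve_target_features sketch above). The extract_keywords tokenizer and the speech_to_text callable are assumptions; the method does not prescribe a particular speech recognition engine.

```python
def extract_keywords(text: str) -> set:
    """Placeholder keyword extraction: lower-cased whitespace tokens longer than one character."""
    return {token for token in text.lower().split() if len(token) > 1}

def voice_search_features(audio_path: str,
                          history_keywords: set,
                          standard_keywords: list,
                          speech_to_text) -> set:
    # S401: convert the voice to be searched into a character string to be searched.
    # `speech_to_text` is an assumed callable wrapping any speech recognition engine.
    query_string = speech_to_text(audio_path)

    # S402: extract search keywords from the converted character string.
    search_keywords = extract_keywords(query_string)

    # S403-S408: identical to the string branch; see resolve_target_features above.
    return resolve_target_features(search_keywords, history_keywords, standard_keywords)
```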
In an embodiment, as shown in FIG. 5, step S202, namely executing the target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining the target search features corresponding to the target search information, includes:
S501: when the target search type is the image search type, acquiring at least one ultrasound image to be searched.
S502: performing standard plane recognition on the ultrasound image to be searched, and determining the target standard plane feature corresponding to the ultrasound image to be searched.
S503: performing tissue structure recognition on the ultrasound image to be searched, and determining the target tissue feature corresponding to the ultrasound image to be searched.
S504: performing character recognition on the ultrasound image to be searched, and determining the target device features corresponding to the ultrasound image to be searched.
S505: determining the target search features based on the target standard plane features, target tissue features, and target device features corresponding to the at least one ultrasound image to be searched.
The ultrasound image to be searched is the image the user wants to search with, so that its ultrasound analysis report or other related information can be located based on it.
As an example, in step S501, when the target search type is the image search type, the main controller acquires at least one ultrasound image to be searched, based on which the corresponding ultrasound analysis report or other related information is to be located. For example, the user may select at least one ultrasound image to be searched on the ultrasound device and trigger a target search request based on it, so that the target search information in the request is the selected image or images.
The target standard plane feature is the standard plane feature recognized from the ultrasound image to be searched.
As an example, in step S502, the main controller may perform standard plane recognition on the ultrasound image to be searched using a pre-trained standard plane recognition model, and determine the target standard plane feature of the image from the model's recognition result. For example, if the ultrasound image to be searched was formed by scanning the heart region of the target object, the target standard plane type corresponding to the image is the four-chamber view, and the target standard plane feature corresponding to that type is the cardiac four-chamber view.
In an embodiment, step S502, namely performing standard plane recognition on the ultrasound image to be searched and determining the target standard plane feature corresponding to it, includes:
S5021: performing standard plane recognition on the ultrasound image to be searched using a standard plane recognition model, and obtaining the standard plane similarity corresponding to each of at least one preset standard plane type in the model.
S5022: determining a target standard plane type from the at least one preset standard plane type according to the standard plane similarities corresponding to the at least one preset standard plane type.
S5023: determining the target standard plane feature corresponding to the ultrasound image to be searched according to the target standard plane type.
The standard plane recognition model is a model trained in advance with a neural network for standard plane recognition. In this example, the model can be trained using, but not limited to, neural network architectures such as ResNet, Faster R-CNN, DenseNet, YOLO, VGG16, and VGG19. The preset standard plane types are the standard plane types determined during model training. Generally, during training of the standard plane recognition model, training ultrasound images corresponding to the preset standard plane types are used to train the neural network and update its model parameters, so that the trained model can perform standard plane recognition. The standard plane similarity is the similarity between the ultrasound image to be searched and the training ultrasound images corresponding to a preset standard plane type.
As an example, in step S5021, the main controller may perform standard plane recognition on the ultrasound image to be searched using the pre-trained standard plane recognition model, compute the image similarity between the image to be searched and the training ultrasound images of each of the at least one preset standard plane type, and take these image similarities as the standard plane similarities corresponding to the preset standard plane types. For example, if the model covers preset standard plane types A, B, C, and D, the image to be searched is compared with the training ultrasound images Pa, Pb, Pc, and Pd corresponding to those types to determine the standard plane similarities Sa, Sb, Sc, and Sd, respectively.
The target standard plane type is the standard plane type recognized from the ultrasound image to be searched.
As an example, in step S5022, the main controller determines the target standard plane type from the at least one preset standard plane type according to the corresponding standard plane similarities; specifically, the preset standard plane type with the largest standard plane similarity may be determined as the target standard plane type, or a preset standard plane type whose similarity is greater than a first similarity threshold may be determined as the target standard plane type. The first similarity threshold is a preset threshold for evaluating whether a standard plane similarity reaches the preset standard.
The standard plane knowledge graph is a pre-configured graph used to record keywords related to standard plane features.
As an example, in step S5023, after determining the target standard plane type of the ultrasound image to be searched, the main controller may query the standard plane knowledge graph with that type to determine the target standard plane feature of the image. For example, when the target standard plane type is recognized as the four-chamber view, the standard plane knowledge graph is queried with it; among the standard plane keywords pre-recorded for the four-chamber view, such as the cardiac four-chamber view, the subxiphoid four-chamber view, and the parasternal four-chamber view, the keyword that best matches the target standard plane type of the image is selected and determined as the target standard plane feature of the ultrasound image to be searched, for example the cardiac four-chamber view.
In this embodiment, using a standard plane recognition model trained with a neural network to recognize the ultrasound image to be searched and determine the target standard plane type makes the recognition process intelligent, so that doctors do not need to identify the plane manually, which improves processing efficiency and accuracy. Querying the standard plane knowledge graph with the target standard plane type to determine the target standard plane feature then guarantees the efficiency and standardization of feature determination; doctors do not need to identify and judge the standard plane themselves, and the inconsistent labeling that would result from different doctors defining target standard plane features independently is avoided.
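Assuming the trained standard plane recognition model exposes one similarity score per preset standard plane type, steps S5021-S5023 could be sketched as follows; the model interface, the thresholds, and the knowledge-graph lookup structure are illustrative assumptions.

```python
from typing import Optional

def select_standard_plane(similarities: dict, first_threshold: float = 0.6) -> Optional[str]:
    """S5022: pick the preset standard plane type with the largest similarity,
    provided it exceeds the first similarity threshold."""
    if not similarities:
        return None
    best_type, best_score = max(similarities.items(), key=lambda kv: kv[1])
    return best_type if best_score > first_threshold else None

def standard_plane_feature(image, model, plane_knowledge_graph: dict) -> Optional[str]:
    # S5021: the model is assumed to return one similarity per preset standard plane type,
    # e.g. {"A": 0.12, "B": 0.05, "C": 0.81, "D": 0.02}.
    similarities = model.predict_similarities(image)

    # S5022: choose the target standard plane type.
    plane_type = select_standard_plane(similarities)
    if plane_type is None:
        return None

    # S5023: query the standard plane knowledge graph for the keywords of that plane type
    # and pick the best-matching one (simplified here to the first entry).
    keywords = plane_knowledge_graph.get(plane_type, [])
    return keywords[0] if keywords else plane_type
```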
The target tissue feature is the tissue feature recognized from the ultrasound image to be searched.
As an example, in step S503, the main controller may perform tissue structure recognition on the ultrasound image to be searched using a pre-trained tissue structure recognition model, and determine the target tissue feature of the image from the model's recognition result. In this example, the main controller may use the tissue structure recognition model corresponding to the target standard plane feature, which helps improve the recognition efficiency and accuracy of the target tissue feature. For example, when the target standard plane feature corresponding to the target standard plane type is the cardiac four-chamber view, the target tissue feature may be a specific tissue within the four-chamber view, for example the tissue structure of the upper-left chamber.
In an embodiment, step S503, namely performing tissue structure recognition on the ultrasound image to be searched and determining the target tissue feature corresponding to it, includes:
S5031: performing tissue structure recognition on the ultrasound image to be searched using a tissue structure recognition model, and obtaining the tissue similarity corresponding to each of at least one preset tissue structure in the model.
S5032: determining a target tissue structure from the at least one preset tissue structure according to the tissue similarities corresponding to the at least one preset tissue structure.
S5033: determining the target tissue feature corresponding to the ultrasound image to be searched according to the target tissue structure.
The tissue structure recognition model is a model trained in advance with a neural network for tissue structure recognition. In this example, the model can be trained using, but not limited to, neural network architectures such as U-Net and V-Net. The preset tissue structures are the tissue structures determined during model training. Generally, during training of the tissue structure recognition model, training ultrasound images corresponding to the preset tissue structures are used to train the neural network and update its model parameters, so that the trained model can perform tissue structure recognition. The tissue similarity is the similarity between the ultrasound image to be searched and the training ultrasound images corresponding to a preset tissue structure.
As an example, in step S5031, the main controller may perform tissue structure recognition on the ultrasound image to be searched using the pre-trained tissue structure recognition model, compute the image similarity between the image to be searched and the training ultrasound images of each of the at least one preset tissue structure, and take these similarities as the tissue similarities corresponding to the preset tissue structures.
In this example, to improve the accuracy and efficiency of tissue structure recognition, the tissue structure recognition model corresponding to the target standard plane feature may be used, and the image similarity between the ultrasound image to be searched and the training ultrasound images of each preset tissue structure associated with that standard plane feature is computed. For example, when the target standard plane feature is the cardiac four-chamber view, its preset tissue structures include the upper-left, upper-right, lower-left, and lower-right chambers; the ultrasound image to be searched is then compared against the training ultrasound images of these four preset tissue structures to determine the corresponding tissue similarities S1, S2, S3, and S4.
As an example, in step S5032, the main controller determines the target tissue structure from the at least one preset tissue structure according to the corresponding tissue similarities; specifically, the preset tissue structure with the largest tissue similarity may be determined as the target tissue structure, or a preset tissue structure whose tissue similarity is greater than a second similarity threshold may be determined as the target tissue structure. The second similarity threshold is a preset threshold for evaluating whether a tissue similarity reaches the preset standard.
The tissue structure knowledge graph is a pre-configured graph for recording keywords related to tissue structures.
As an example, in step S5033, after determining the target tissue structure of the ultrasound image to be searched, the main controller may query the tissue structure knowledge graph with the target tissue structure and determine the target tissue feature of the image. For example, when the target tissue structure is recognized as the upper-left chamber, the tissue structure knowledge graph is queried with it, a tissue keyword matching the ultrasound image to be searched is selected from the tissue keywords corresponding to the upper-left chamber, and that keyword is determined as the target tissue feature.
Further, after the tissue structure recognition model determines the target tissue structure of the ultrasound image to be searched, shape contour recognition may be performed on the target tissue structure to identify tissue contour shapes such as circles, triangles, ellipses, and straight lines, so that the contour shape can later be used as a keyword in the keyword association graph, which improves the efficiency of analysis, statistics, queries, and other processing of the ultrasound image to be searched.
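The contour-shape recognition mentioned above could, for example, be implemented with OpenCV primitives as sketched below, assuming the target tissue structure is available as a binary mask; the vertex-count heuristic used to name the shapes is an illustrative simplification.

```python
import cv2
import numpy as np

def tissue_contour_shape(mask: np.ndarray) -> str:
    """Classify the dominant contour of a binary tissue mask as a rough shape keyword."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "unknown"
    contour = max(contours, key=cv2.contourArea)

    # Approximate the contour with a polygon and use the vertex count as a heuristic.
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
    if len(approx) <= 2:
        return "line"
    if len(approx) == 3:
        return "triangle"
    if len(approx) > 8:
        # Distinguish circle from ellipse by the ratio of the fitted ellipse axes.
        _, axes, _ = cv2.fitEllipse(contour)
        minor, major = sorted(axes)
        return "circle" if minor / max(major, 1e-6) > 0.9 else "ellipse"
    return "polygon"
```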
In this embodiment, using a tissue structure recognition model trained with a neural network to recognize the ultrasound image to be searched and determine the target tissue structure makes the recognition process intelligent, so that doctors do not need to identify the structure manually, which improves processing efficiency and accuracy. Querying the tissue structure knowledge graph with the target tissue structure to determine the target tissue feature then guarantees the efficiency and standardization of feature determination; doctors do not need to identify and judge the tissue structure themselves, and the inconsistent labeling that would result from different doctors defining target tissue features independently is avoided.
The target device features are features related to the ultrasound device and its measurements that are recognized from the ultrasound image to be searched, including but not limited to information such as the ultrasound imaging mode, the ultrasound probe, ultrasound markers, and ultrasound measurements.
As an example, in step S504, the main controller may perform character recognition on the ultrasound image to be searched using, but not limited to, OCR or another image character recognition technology, and recognize the text to be searched from the image; keywords are then extracted from the text to be searched to determine the target device features of the ultrasound image to be searched. The text to be searched is the text content recognized from the ultrasound image to be searched. In this example, because the text characters in an ultrasound image are generally structured, that is, the text recorded in an ultrasound image is usually predefined structured information such as the ultrasound imaging mode, the ultrasound probe, ultrasound markers, and ultrasound measurements, and different pieces of structured information appear at fixed positions, OCR or another image character recognition technology can be applied to the text regions of the ultrasound image to be searched, which helps improve text recognition efficiency.
In an embodiment, step S504, namely performing character recognition on the ultrasound image to be searched and determining the target device features corresponding to it, includes:
S5041: performing character recognition on the ultrasound image to be searched, and determining the text to be searched corresponding to the ultrasound image to be searched.
S5042: querying the device feature knowledge graph with the text to be searched, and determining the target device features corresponding to the ultrasound image to be searched.
As an example, in step S5041, the main controller may perform character recognition on the ultrasound image to be searched using, but not limited to, OCR or another image character recognition technology, and recognize the text to be searched from the image. Specifically: first, a text region detection technology such as, but not limited to, CRAFT is used to quickly locate the text character regions in the ultrasound image to be searched; then a target screenshot of each text character region is cropped from the image; finally, character recognition is performed on the target screenshots using OCR or another image character recognition technology, and the text to be searched corresponding to the ultrasound image to be searched is determined. Understandably, locating and cropping the screenshots of the text regions first and then applying OCR or another image character recognition technology helps guarantee the recognition efficiency of the text to be searched.
The device feature knowledge graph is a pre-configured graph for recording keywords related to device features. Because the characters recorded in an ultrasound image are usually predefined structured information such as the ultrasound imaging mode, the ultrasound probe, ultrasound markers, and ultrasound measurements, all imaging device features corresponding to the training ultrasound images can be collected and aggregated during model training to form the device feature knowledge graph.
As an example, in step S5042, after determining the text to be searched corresponding to the ultrasound image to be searched, the main controller may perform keyword recognition on that text and determine the keywords to be searched corresponding to the image; the keywords to be searched are then matched against the device feature knowledge graph, and the successfully matched keywords are determined as the target device features of the ultrasound image to be searched. That is, the keywords related to information such as the ultrasound imaging mode, the ultrasound probe, ultrasound markers, and ultrasound measurements are determined as the target device features corresponding to the ultrasound image to be searched.
In this embodiment, the text to be searched recognized from the ultrasound image to be searched is used to query the device feature knowledge graph constructed in advance from structured information such as the ultrasound imaging mode, the ultrasound probe, ultrasound markers, and ultrasound measurements, so that the target device features of the ultrasound image to be searched can be determined quickly and accurately, which helps guarantee the efficiency and standardization of obtaining the target device features.
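One possible shape of steps S5041-S5042 in code is sketched below. The detect_text_regions and ocr callables are placeholders for a text region detector (such as CRAFT) and an OCR engine, and the device feature knowledge graph is modeled as a flat keyword set for brevity; none of these choices are prescribed by the method.

```python
def device_features_from_image(image,
                               detect_text_regions,   # assumed: image -> list of (x, y, w, h) boxes
                               ocr,                    # assumed: image crop -> recognized string
                               device_feature_graph: set) -> set:
    # S5041: locate the text character regions, crop them, and run character recognition on each crop.
    recognized_text = []
    for (x, y, w, h) in detect_text_regions(image):
        crop = image[y:y + h, x:x + w]
        recognized_text.append(ocr(crop))
    text_to_search = " ".join(recognized_text)

    # S5042: extract candidate keywords and keep those found in the device feature knowledge graph
    # (imaging mode, probe model, markers, measurements, ...).
    candidates = {token.strip(",:;") for token in text_to_search.split()}
    return {keyword for keyword in candidates if keyword in device_feature_graph}
```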
As an example, in step S505, the main controller may determine the target search features in different ways depending on the number of ultrasound images to be searched. In this example, when there is a single ultrasound image to be searched, the feature information determined from it, such as the target standard plane feature, target tissue feature, and target device features, may be determined directly as the target search features. When there are at least two ultrasound images to be searched, the features common to the target standard plane features, target tissue features, and target device features of those images are determined as the target search features. In this example, determining target search features in keyword form from the ultrasound image to be searched allows the system database to be searched based on those features, converting an image-based search into a text-based search, which helps improve the search efficiency for the ultrasound image to be searched.
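Step S505 then reduces to a simple set operation once each image's features have been collected into one keyword set per image, as the following sketch shows.

```python
def target_search_features(per_image_features: list) -> set:
    """per_image_features: one set of keywords (standard plane, tissue, and device features)
    per ultrasound image to be searched."""
    if not per_image_features:
        return set()
    if len(per_image_features) == 1:
        # A single image: use all of its features directly.
        return set(per_image_features[0])
    # Two or more images: keep only the features common to every image.
    common = set(per_image_features[0])
    for features in per_image_features[1:]:
        common &= features
    return common
```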
In an embodiment, as shown in fig. 6, in step S202, executing a target feature extraction program corresponding to a target search type, performing feature extraction on target search information, and determining a target search feature corresponding to the target search information includes:
s601: when the target search type is a video search type, acquiring an ultrasonic video to be searched, and acquiring at least two ultrasonic images to be searched from the ultrasonic video to be searched;
s602: performing standard surface identification on an ultrasonic image to be searched, and determining target standard surface characteristics corresponding to the ultrasonic image to be searched;
s603: carrying out tissue structure identification on an ultrasonic image to be searched, and determining a target tissue characteristic corresponding to the ultrasonic image to be searched;
s604: performing character recognition on an ultrasonic image to be searched, and determining the characteristics of target equipment corresponding to the ultrasonic image to be searched;
s605: and determining common characteristics in the target standard surface characteristics, the target organization characteristics and the target equipment characteristics corresponding to at least two ultrasonic images to be searched as target searching characteristics.
As an example, in step S601, when the target search type is a video search type, the main controller may acquire an ultrasound video to be searched, where the ultrasound video to be searched includes at least two ultrasound images to be searched, so as to search at least two ultrasound images to be searched in the same ultrasound video to be searched together.
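A possible way to obtain the at least two ultrasound images to be searched from the video is sketched below using OpenCV; the fixed sampling stride and frame limit are assumptions, not part of the method.

```python
# Sketch of S601: sampling ultrasound images to be searched from the ultrasound
# video to be searched. OpenCV is used here only as one possible video reader.
import cv2

def sample_frames(video_path: str, stride: int = 30, max_frames: int = 8):
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)  # each sampled frame is one image to search
        index += 1
    cap.release()
    return frames
```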
The processing procedures of steps S602-S604 are the same as the processing procedures of steps S502-S504, and are not repeated here to avoid repetition.
As an example, in step S605, the main controller may determine common features among the target standard surface features, the target tissue features and the target device features corresponding to at least two ultrasound images to be searched in the same ultrasound video to be searched as the target search features. In this example, the ultrasound video to be searched in video form is first converted into ultrasound images to be searched in image form, and then the common features among the target standard surface features, the target tissue features and the target device features corresponding to the at least two ultrasound images to be searched are determined as the target search features, so that search processing in video form is converted into search processing in text form, which helps to improve the search efficiency of the ultrasound images to be searched.
In an embodiment, in step S202, executing a target feature extraction program corresponding to a target search type, performing feature extraction on target search information, and determining a target search feature corresponding to the target search information includes:
and when the target search type is the label search type, obtaining a label to be searched, and directly determining the label to be searched as the target search characteristic.
As an example, a plurality of configuration feature tags pre-configured by the system, which are tags formed by the system before the current time based on keywords in the keyword association map of all historical ultrasound images, are displayed on the display screen of the ultrasound apparatus. In this example, the configuration feature tag includes information such as configuration standard surface features, configuration tissue features, tissue outline shapes, configuration device features, report keywords, and attribute keywords, and the configuration device features include information such as ultrasound imaging mode, ultrasound probe, ultrasound marker, and ultrasound measurement; the attribute keywords include information such as file names, folders, and file suffixes.
In this example, a user may click a configuration feature tag displayed in real time on a display screen of a selected ultrasound device, determine the selected configuration feature tag as a tag to be searched, and determine the tag as a target search feature based on the tag to be searched, so as to query a system database based on the target search feature, and quickly query and obtain a target search result matched with the tag to be searched.
In an embodiment, as shown in fig. 7, in step S202, executing a target feature extraction program corresponding to a target search type, performing feature extraction on target search information, and determining a target search feature corresponding to the target search information includes:
s701: when the target search type is a scene search type, acquiring a target scene schematic diagram and a scene to be searched;
s702: determining scene organization characteristics corresponding to a scene to be searched according to the scene position of the scene to be searched in the target scene schematic diagram;
s703: and determining target search characteristics based on scene organization characteristics corresponding to the scene to be searched.
The scene search type is a search mode for searching a scene describing the generation of an ultrasound video or ultrasound image. The scene to be searched is a keyword which is input by a user and is related to a scene generated by an ultrasonic video or an ultrasonic image, and the keyword includes, but is not limited to, scene information such as an ultrasonic imaging mode, an ultrasonic probe, an ultrasonic marker and an ultrasonic measurement. The target scene schematic diagram is a scene schematic diagram selected by a user when the user triggers a target search request. The target scene schematic diagram can be a two-dimensional human body schematic diagram, a three-dimensional human body schematic diagram, a human tissue structure schematic diagram and the like which are configured in advance.
As an example, in step S701, when the target search type is the scene search type, the main controller needs to determine the target search information, namely the target scene schematic diagram and the scene to be searched, according to the target search request. In this example, a user may select a specific region in the target scene schematic diagram currently displayed by the ultrasound device, so that the selected region is determined as the scene to be searched, and a target search request is triggered based on the target scene schematic diagram and the scene to be searched.
As an example, in step S702, the main controller may determine a scene tissue structure corresponding to the scene to be searched according to the scene position of the scene to be searched in the target scene schematic diagram, and then determine the scene organization feature corresponding to the scene to be searched according to that scene tissue structure. The scene tissue structure here refers to the tissue structure corresponding to the scene position of the scene to be searched in the target scene schematic diagram. For example, the main controller may query the tissue structure knowledge graph according to the scene tissue structure and determine the scene organization feature corresponding to the scene to be searched. For example, when the scene tissue structure is identified as the upper left ventricle, the tissue structure knowledge graph may be queried according to the upper left ventricle, so as to select a tissue keyword matched with the scene to be searched from the tissue keywords corresponding to the upper left ventricle and determine that tissue keyword as the scene organization feature corresponding to the scene to be searched.
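The lookup from scene position to scene organization feature might be sketched as below; both mapping tables are toy data used only to illustrate the query of the tissue structure knowledge graph.

```python
# Illustrative sketch of S702: clicked region -> scene tissue structure ->
# tissue keyword from the knowledge graph. Dictionaries are assumed toy data.
from typing import Dict, List, Optional

# region of the target scene schematic -> tissue structure at that position
POSITION_TO_STRUCTURE: Dict[str, str] = {"upper_left": "upper left ventricle"}

# tissue structure -> candidate tissue keywords in the knowledge graph
TISSUE_GRAPH: Dict[str, List[str]] = {
    "upper left ventricle": ["left ventricle", "ventricular wall"],
}

def scene_tissue_feature(region: str) -> Optional[str]:
    structure = POSITION_TO_STRUCTURE.get(region)
    if structure is None:
        return None
    candidates = TISSUE_GRAPH.get(structure, [])
    # A real system would apply a matching rule to pick among candidates;
    # the first entry is taken here purely for illustration.
    return candidates[0] if candidates else structure
```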
As an example, in step S703, since all existing ultrasound images in the system database are stored with the keyword associated map including the existing organization features, the main controller may directly determine the scene organization features corresponding to the scene to be searched according to the scene organization structure, as the target search features, so as to perform ultrasound image query based on the target search features, and may effectively improve the ultrasound image query efficiency.
In an embodiment, as shown in fig. 8, in step S203, matching the target search feature with the keyword associated maps corresponding to all existing ultrasound images in the system database to obtain at least one target ultrasound image includes:
s801: if the target search features are history keywords, determining at least one history ultrasonic image corresponding to the history keywords as at least one target ultrasonic image;
s802: if the target search features are not historical keywords, matching the target search features with keyword associated maps corresponding to all existing ultrasonic images in a system database to obtain entity similarity of each existing ultrasonic image corresponding to the target search features;
s803: and selecting at least one existing ultrasonic image with larger entity similarity, and determining the existing ultrasonic image as at least one target ultrasonic image.
As an example, in step S801, when the target search feature determined from the character string to be searched or the voice to be searched is a history keyword, this indicates that the user has triggered the same target search request within a short time, and the main controller may directly determine at least one history ultrasound image in the history search result corresponding to that history keyword as the at least one target ultrasound image, which is beneficial to improving the search efficiency of the at least one target ultrasound image.
As an example, in step S802, when the target search feature is not a history keyword, the main controller performs matching processing on a target associated map formed according to the target search feature and keyword associated maps corresponding to all existing ultrasound images in the system database, and determines entity similarity of the target search feature and each existing ultrasound image. For example, if the target association map and the keyword association map include information such as entities, entity relationships, entity attributes, and the like, the entity relationship similarity and the entity attribute similarity may be analyzed and calculated, and the sum of the entity relationship similarity and the entity attribute similarity is determined as the entity similarity.
In this example, the keyword association map of each existing ultrasound image stored in advance by the system is mainly a map used to record the association relationships between keywords and sentences. For example, the keyword association map is constructed from, but not limited to, information such as entities, entity relationships, and entity attributes. In this example, a triple GK = (E, R, S) is determined as the keyword association map, where E = {e1, e2, …, en} is the set of entities in the entity library, containing n different entities in total; R = {r1, r2, …, rm} is the set of entity relationships in the entity library, containing m different entity relationships; S = {s1, s2, …, sk} is the set of entity attributes in the entity library, containing k entity attribute relationships; n, m, and k are all positive integers. The entity set E, the entity relationship set R and the entity attribute set S are preset dictionary libraries, wherein the entity set E includes, but is not limited to, information such as ultrasound videos, ultrasound images, standard surface features, organization outline forms, image imaging modes, ultrasound probes, ultrasound marks, measurement modes, ultrasound analysis reports, file names, folders, file suffixes, and functional modules.
For example, as shown in the table below, for the entity "ultrasound image" (i.e., the vocabulary entry in the entity library is "ultrasound image"), the corresponding entity attribute is "four-chamber heart", and the entity relationship between the two is "standard surface"; that is, the relationship between the entity and its attribute constitutes the standard surface, from which the specific attribute of the ultrasound image can be known.
Entity | Entity relationship | Entity attribute
Ultrasound image | Standard surface | Four-chamber heart
Wherein the entity similarity of an entity M in a keyword associated map of an existing ultrasonic image and an entity N in a target search feature is calculated according to the following formula:
Sum(M,N)=Dis(SM,SN)+Dis(RM,RN)
wherein Sum(M, N) is the entity similarity of the entity M and the entity N, Dis(SM, SN) is the attribute distance between the entity attribute SM of the entity M and the entity attribute SN of the entity N, and Dis(RM, RN) is the relationship distance between the entity relationship RM of the entity M and the entity relationship RN of the entity N. When the entity similarity Sum(M, N) of the entity M and the entity N is greater than a preset threshold, the entity M and the entity N are considered to reach the similarity standard.
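The formula can be sketched in code as follows. The method does not fix how the attribute distance and relationship distance are computed, so a simple set-overlap score and a default threshold are used here purely as stand-ins.

```python
# Minimal sketch of the entity-similarity formula
#   Sum(M, N) = Dis(S_M, S_N) + Dis(R_M, R_N)
# Set overlap (Jaccard) stands in for Dis; the threshold value is an assumption.
from dataclasses import dataclass, field
from typing import Set

@dataclass
class Entity:
    name: str
    attributes: Set[str] = field(default_factory=set)  # S_M
    relations: Set[str] = field(default_factory=set)   # R_M

def overlap(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def entity_similarity(m: Entity, n: Entity) -> float:
    return overlap(m.attributes, n.attributes) + overlap(m.relations, n.relations)

def reaches_standard(m: Entity, n: Entity, threshold: float = 1.0) -> bool:
    # entities are considered similar when Sum(M, N) exceeds the preset threshold
    return entity_similarity(m, n) > threshold
```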
As an example, in step S803, the main controller may obtain the entity similarity between each existing ultrasound image in the system database and the target search feature, sort the existing ultrasound images by entity similarity, select at least one existing ultrasound image with larger entity similarity, and determine the at least one existing ultrasound image as the at least one target ultrasound image, so as to achieve the purpose of performing a fast search by using the target search features and the keyword associated maps of the existing ultrasound images, which is helpful for improving the processing efficiency of ultrasound image search.
In one embodiment, as shown in fig. 9, before step S201, that is, before obtaining a target search request, the target search request including a target search type and target search information corresponding to the target search type, the ultrasound image search method further includes:
s901: acquiring existing image data, wherein the existing image data comprises an existing ultrasonic image, an ultrasonic analysis report and file attribute information;
s902: performing standard surface identification on the existing ultrasonic image, and determining the existing standard surface characteristics corresponding to the existing ultrasonic image;
s903: identifying the tissue structure of the existing ultrasonic image, and determining the existing tissue characteristics corresponding to the existing ultrasonic image;
s904: performing character recognition on the existing ultrasonic image, and determining the existing equipment characteristics corresponding to the existing ultrasonic image;
s905: extracting keywords from the ultrasonic analysis report and the file attribute information to obtain report keywords and attribute keywords;
s906: performing labeling processing on the existing ultrasonic image based on the existing standard surface characteristics, the existing organization characteristics, the existing equipment characteristics, the report keywords and the attribute keywords to obtain a keyword association map corresponding to the existing ultrasonic image;
s907: and storing the existing ultrasonic images and the keyword association map in a system database in an associated manner.
Wherein the existing ultrasonic image is an ultrasonic image which is subjected to labeling processing before the current time of the system. The ultrasonic analysis report refers to a report file formed by analyzing the existing ultrasonic image by a doctor. The file attribute information is information related to ultrasound analysis report storage, including but not limited to file naming, folder and file suffix, and the like.
As an example, in step S901, the main controller may acquire existing image data, which includes data such as existing ultrasound images, ultrasound analysis reports, and file attribute information. The ultrasonic analysis report is an analysis report formed by analyzing an existing ultrasonic image by a doctor, and the ultrasonic analysis report is stored in a system database, so that the file attribute information corresponding to the ultrasonic analysis report can be acquired according to the storage address and storage mode of the ultrasonic analysis report in the system database.
Wherein, the existing standard surface features refer to the standard surface features identified by the existing ultrasonic images.
As an example, in step S902, the main controller may perform standard face recognition on the existing ultrasound image that needs to be labeled by using a pre-trained standard face recognition model, and determine an existing standard face feature corresponding to the existing ultrasound image according to a recognition result of the standard face recognition model. For example, if the existing ultrasound image is an ultrasound image formed by scanning a heart region of the target object, the target standard surface type corresponding to the existing ultrasound image is a four-chamber heart standard surface, and the existing standard surface characteristic corresponding to the target standard surface type is a four-chamber heart of the heart.
Wherein the existing tissue features refer to tissue features identified by the existing ultrasound images.
As an example, in step S903, the main controller may use a pre-trained tissue structure recognition model to perform tissue structure recognition on an existing ultrasound image that needs to be labeled, and determine an existing tissue feature corresponding to the existing ultrasound image according to a recognition result of the tissue structure recognition model. In this example, the main controller may use an organization structure recognition model corresponding to the existing standard surface features to perform organization structure recognition on the existing ultrasound image, and determine the corresponding existing organization features, which is helpful to improve the recognition efficiency and accuracy of the existing organization features. For example, when the existing standard surface feature corresponding to the target standard surface type is the heart four-chamber heart, the existing tissue feature may be a specific tissue in the heart four-chamber heart, for example, the tissue structure of the upper left ventricle.
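Selecting a tissue structure recognition model that matches the existing standard surface feature might look like the following registry-based sketch; the plane names and the fixed return value are illustrative only.

```python
# Sketch of S903: dispatch to a tissue-structure recognizer that corresponds to
# the existing standard surface feature of the image.
from typing import Callable, Dict

# standard surface feature -> tissue-structure recognizer for that plane
TISSUE_MODEL_REGISTRY: Dict[str, Callable[[object], str]] = {}

def register(plane: str):
    def deco(fn):
        TISSUE_MODEL_REGISTRY[plane] = fn
        return fn
    return deco

@register("four-chamber heart")
def recognize_cardiac_tissue(image) -> str:
    # a trained recognition model would run here; a fixed answer is returned
    # purely for illustration
    return "upper left ventricle"

def existing_tissue_feature(image, standard_surface_feature: str) -> str:
    model = TISSUE_MODEL_REGISTRY.get(standard_surface_feature)
    return model(image) if model else "unknown"
```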
The existing device characteristics refer to characteristics related to ultrasonic device measurement identified from an existing ultrasonic image, and include but are not limited to information such as an ultrasonic imaging mode, an ultrasonic probe, an ultrasonic marker, and ultrasonic measurement.
As an example, in step S904, the main controller may perform character recognition on the existing ultrasound image by using, but not limited to, OCR or other image character recognition technologies, and recognize an existing processed text from the existing ultrasound image; and extracting keywords from the existing processing text to determine the existing equipment characteristics corresponding to the existing ultrasonic image. Wherein the existing processed text refers to the text content identified from the existing ultrasound image. In this example, since the text characters of the ultrasound image generally have a structural feature, that is, the text characters recorded in the ultrasound image generally are structured information of predetermined structuring, such as an ultrasound imaging mode, an ultrasound probe, an ultrasound marker, and ultrasound measurement, and the positions of different structured information are specific, OCR or other image recognition techniques may be adopted to perform text recognition on the text character region in the existing ultrasound image, which is helpful for improving the efficiency of text recognition.
As an example, in step S905, the main controller may adopt a keyword extraction algorithm to perform keyword extraction on the ultrasound analysis report, and obtain a report keyword corresponding to the ultrasound analysis report; and a keyword extraction algorithm is adopted to extract keywords from the file attribute information, and attribute keywords corresponding to the file attribute information are obtained, so that the existing ultrasonic images are subjected to labeling processing based on the report keywords and the attribute keywords, the follow-up statistical analysis, query analysis and other processing of the existing ultrasonic images are facilitated to be guaranteed, and the processing efficiency is improved.
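A minimal sketch of this keyword extraction step is shown below; a plain word-frequency count stands in for the keyword extraction algorithm, and the stopword list is illustrative.

```python
# Sketch of S905: report keywords from the ultrasound analysis report and
# attribute keywords from the file attribute information.
import os
import re
from collections import Counter
from typing import Dict, List

STOPWORDS = {"the", "a", "of", "and", "is", "in"}  # illustrative only

def report_keywords(report_text: str, top_k: int = 10) -> List[str]:
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", report_text)]
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_k)]

def attribute_keywords(file_path: str) -> Dict[str, str]:
    folder, name = os.path.split(file_path)
    stem, suffix = os.path.splitext(name)
    # file name, folder and file suffix become the attribute keywords
    return {"file_name": stem, "folder": folder, "file_suffix": suffix}
```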
The keyword association graph is a knowledge graph formed by summarizing the keywords extracted or identified from the existing ultrasonic images, and specifically, the keyword association graph is formed by summarizing the keywords corresponding to the characteristics including but not limited to the existing standard surface characteristics, the existing organization characteristics, the existing equipment characteristics, the report keywords, the attribute keywords and the like.
As an example, in step S906, the main controller may perform an integration process on the existing standard surface features, the existing organization features, the existing equipment features, the report keywords, the attribute keywords, and other features corresponding to the existing ultrasound images by using a pre-configured feature integration rule, so as to obtain the keyword association map corresponding to the integrated existing ultrasound images. In this example, the main controller may perform a splicing process on existing standard surface features, existing organization features, existing equipment features, report keywords, attribute keywords, and other features corresponding to existing ultrasound images based on a preset feature splicing rule to form a keyword association map in a character string form. Or, the main controller can record the characteristics of the existing standard surface characteristics, the existing organization characteristics, the existing equipment characteristics, the report keywords, the attribute keywords and the like corresponding to the existing ultrasonic images in a knowledge graph data table to form a keyword association graph expressed in a data table form.
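The splicing of these features into a keyword association map record might be sketched as follows; the field names and separators are assumptions, and both the character-string form and the data-table form mentioned above are shown.

```python
# Sketch of S906: integrate the features of one existing ultrasound image into
# a keyword association map record.
from typing import Dict, List

def build_keyword_map_record(image_id: str,
                             standard_surface: str,
                             tissue_features: List[str],
                             device_features: List[str],
                             report_keywords: List[str],
                             attribute_keywords: List[str]) -> Dict[str, str]:
    record = {
        "image_id": image_id,
        "standard_surface": standard_surface,
        "tissue_features": ";".join(tissue_features),
        "device_features": ";".join(device_features),
        "report_keywords": ";".join(report_keywords),
        "attribute_keywords": ";".join(attribute_keywords),
    }
    # character-string form of the keyword association map
    record["as_string"] = "|".join(f"{k}={v}" for k, v in record.items()
                                   if k != "image_id")
    return record
```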
As an example, in step S907, after the main controller obtains the keyword associated map corresponding to each existing ultrasound image, the existing ultrasound image and the keyword associated map need to be stored in the system database in an associated manner, so that all existing ultrasound images can be subjected to statistical analysis based on the keyword associated map in the following process, and the statistical analysis efficiency is improved; or multi-dimensional data query is carried out on the existing ultrasonic image, and the data query efficiency is improved. In this example, the system database may be a local database or a cloud database.
In this example, the keyword association map corresponding to each existing ultrasound image includes existing standard surface features, existing organization features, existing equipment features, report keywords, and attribute keywords; the existing equipment characteristics comprise information such as an ultrasonic imaging mode, an ultrasonic probe, an ultrasonic mark, ultrasonic measurement and the like; the attribute keywords comprise information such as file names, folders, file suffixes and the like; the existing ultrasonic images and the keyword association maps are stored in the system database in an associated mode, and the method can be understood as that the existing ultrasonic images are subjected to labeling processing by the keyword association maps, so that statistics, query and other processing are performed on all the existing ultrasonic images in the system database according to the keyword association maps, and the processing efficiency is improved.
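Associated storage of the existing ultrasound image and its keyword association map could be sketched as below, with SQLite standing in for the local or cloud system database; the table schema is an assumption.

```python
# Sketch of S907: store the existing ultrasound image and its keyword
# association map in the system database in an associated manner.
import json
import sqlite3

def store_association(db_path: str, image_path: str, keyword_map: dict) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS ultrasound_images ("
        "image_path TEXT PRIMARY KEY, keyword_map TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO ultrasound_images VALUES (?, ?)",
        (image_path, json.dumps(keyword_map)),
    )
    conn.commit()
    conn.close()
```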
In the ultrasound image processing method provided by this embodiment, the existing ultrasound image is analyzed to determine the existing standard surface characteristics, the existing organization characteristics, the existing equipment characteristics, the report keywords, the attribute keywords and other characteristics corresponding to the existing ultrasound image, so as to determine different characteristics that can be used for labeling the existing ultrasound image; and integrating the existing standard surface characteristics, the existing organization characteristics, the existing equipment characteristics, the report keywords, the attribute keywords and other characteristics corresponding to the existing ultrasonic images to form a keyword associated map, and storing the keyword associated map and the existing ultrasonic images in a system database in an associated manner to realize the labeling processing of the existing ultrasonic images by using the keyword associated map, so that the statistics, the query and other processing of the existing ultrasonic images can be realized by using the keyword associated map subsequently, and the processing efficiency is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, an ultrasound image searching apparatus is provided, and the ultrasound image searching apparatus corresponds to the ultrasound image searching methods in the above embodiments one to one. As shown in fig. 10, the ultrasound image search apparatus includes a target search request acquisition module 1001, a target search feature acquisition module 1002, a target ultrasound image acquisition module 1003, and a target search result display module 1004. The functional modules are explained in detail as follows:
a target search request obtaining module 1001 configured to obtain a target search request, where the target search request includes a target search type and target search information corresponding to the target search type;
a target search feature obtaining module 1002, configured to execute a target feature extraction program corresponding to a target search type, perform feature extraction on target search information, and determine a target search feature corresponding to the target search information;
a target ultrasound image obtaining module 1003, configured to perform matching processing on the target search feature and the keyword associated maps corresponding to all existing ultrasound images in the system database, so as to obtain at least one target ultrasound image;
and a target search result display module 1004 for sorting the at least one target ultrasound image based on the target sorting rule and displaying the target search result.
Preferably, the target search feature obtaining module 1002 includes:
the search keyword acquisition unit is used for extracting keywords from the character string to be searched when the target search type is the character string search type to acquire search keywords;
a history keyword obtaining unit, configured to obtain a history keyword corresponding to a character string search type;
the first matching degree acquisition unit is used for calculating a first matching degree corresponding to the search keyword and the history keyword;
the first characteristic determining unit is used for determining the historical keywords as target searching characteristics corresponding to the target searching information if the first matching degree is greater than the target matching degree;
the second matching degree obtaining unit is used for matching the search keywords with each standard keyword in the target knowledge graph if the first matching degree is not greater than the target matching degree, and obtaining second matching degrees corresponding to the search keywords and each standard keyword;
the second characteristic determining unit is used for determining the standard keyword as the target search characteristic if the second matching degree is greater than the target matching degree;
and the third characteristic determining unit is used for determining the search keyword as the target search characteristic if the second matching degree is not greater than the target matching degree.
Preferably, the target search feature obtaining module 1002 includes:
the device comprises a to-be-searched character string acquisition unit, a search unit and a search unit, wherein the to-be-searched character string acquisition unit is used for performing voice text conversion on voice to be searched by adopting a voice text conversion technology when a target search type is a voice search type to acquire a to-be-searched character string;
the search keyword acquisition unit is used for extracting keywords from the character string to be searched and acquiring search keywords;
a history keyword obtaining unit, configured to obtain a history keyword corresponding to a character string search type;
the first matching degree acquisition unit is used for calculating a first matching degree corresponding to the search keyword and the history keyword;
the first characteristic determining unit is used for determining the historical keywords as target searching characteristics corresponding to the target searching information if the first matching degree is greater than the target matching degree;
the second matching degree obtaining unit is used for matching the search keywords with each standard keyword in the target knowledge graph if the first matching degree is not greater than the target matching degree, and obtaining second matching degrees corresponding to the search keywords and each standard keyword;
the second characteristic determining unit is used for determining the standard keyword as the target search characteristic if the second matching degree is greater than the target matching degree;
and the third characteristic determining unit is used for determining the search keyword as the target search characteristic if the second matching degree is not greater than the target matching degree.
Preferably, the target search feature obtaining module 1002 includes:
the device comprises a to-be-searched ultrasonic image acquisition unit, a searching unit and a searching unit, wherein the to-be-searched ultrasonic image acquisition unit is used for acquiring at least one to-be-searched ultrasonic image when a target searching type is an image searching type;
the target standard surface feature determination unit is used for performing standard surface identification on the ultrasonic image to be searched and determining a target standard surface feature corresponding to the ultrasonic image to be searched;
the target tissue characteristic determining unit is used for identifying the tissue structure of the ultrasonic image to be searched and determining the target tissue characteristic corresponding to the ultrasonic image to be searched;
the imaging equipment characteristic determining unit is used for performing character recognition on the ultrasonic image to be searched and determining the target equipment characteristic corresponding to the ultrasonic image to be searched;
and the fourth characteristic determining unit is used for determining the target searching characteristic based on the target standard surface characteristic, the target tissue characteristic and the target equipment characteristic corresponding to the at least one ultrasonic image to be searched.
Preferably, the target search feature obtaining module 1002 includes:
the device comprises a to-be-searched ultrasonic video acquisition unit, a searching unit and a searching unit, wherein the to-be-searched ultrasonic video acquisition unit is used for acquiring an ultrasonic video to be searched and acquiring at least two ultrasonic images to be searched from the ultrasonic video to be searched when a target searching type is a video searching type;
the target standard surface feature determination unit is used for performing standard surface identification on the ultrasonic image to be searched and determining a target standard surface feature corresponding to the ultrasonic image to be searched;
the target tissue characteristic determining unit is used for identifying the tissue structure of the ultrasonic image to be searched and determining the target tissue characteristic corresponding to the ultrasonic image to be searched;
the imaging equipment characteristic determining unit is used for performing character recognition on the ultrasonic image to be searched and determining the target equipment characteristic corresponding to the ultrasonic image to be searched;
and the fifth characteristic determining unit is used for determining common characteristics in the target standard surface characteristics, the target tissue characteristics and the target equipment characteristics corresponding to the at least two ultrasonic images to be searched as target searching characteristics.
Preferably, the target search feature obtaining module 1002 includes:
the scene search information acquisition unit is used for acquiring a target scene schematic diagram and a scene to be searched when the target search type is a scene search type;
the scene organization characteristic acquisition unit is used for determining scene organization characteristics corresponding to a scene to be searched according to the scene position of the scene to be searched in the target scene schematic diagram;
and the sixth characteristic determining unit is used for determining the target searching characteristic based on the scene organization characteristic corresponding to the scene to be searched.
Preferably, the target ultrasound image acquisition module 1003 includes:
the first target ultrasonic image determining unit is used for determining at least one historical ultrasonic image corresponding to the historical keyword as at least one target ultrasonic image if the target searching feature is the historical keyword;
the entity similarity determining unit is used for matching the target search features with keyword associated maps corresponding to all existing ultrasonic images in a system database if the target search features are not historical keywords, and acquiring entity similarity of each existing ultrasonic image corresponding to the target search features;
and the second target ultrasonic image determining unit is used for selecting at least one existing ultrasonic image with larger entity similarity and determining the existing ultrasonic image as at least one target ultrasonic image.
Preferably, the ultrasound image searching apparatus further comprises:
the existing image data acquisition unit is used for acquiring existing image data, and the existing image data comprises an existing ultrasonic image, an ultrasonic analysis report and file attribute information;
the existing standard surface feature acquisition unit is used for carrying out standard surface identification on the existing ultrasonic image and determining the existing standard surface features corresponding to the existing ultrasonic image;
the existing tissue characteristic acquisition unit is used for identifying the tissue structure of the existing ultrasonic image and determining the existing tissue characteristic corresponding to the existing ultrasonic image;
the existing equipment characteristic acquisition unit is used for performing character recognition on the existing ultrasonic image and determining the existing equipment characteristics corresponding to the existing ultrasonic image;
the report attribute acquisition unit is used for extracting keywords from the ultrasonic analysis report and the file attribute information to acquire report keywords and attribute keywords;
the keyword associated map acquiring unit is used for performing labeling processing on the existing ultrasonic image based on the existing standard surface characteristics, the existing organization characteristics, the existing equipment characteristics, the report keywords and the attribute keywords to acquire a keyword associated map corresponding to the existing ultrasonic image;
and the association storage unit is used for storing the existing ultrasonic images and the keyword association map in a system database in an associated manner.
For the specific definition of the ultrasound image searching apparatus, reference may be made to the above definition of the ultrasound image searching method, and details thereof are not repeated here. The modules in the ultrasound image searching device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the ultrasound device, and can also be stored in a memory in the ultrasound device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, an ultrasound apparatus is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the ultrasound image searching method in the foregoing embodiments is implemented, for example, as shown in S201-S204 in fig. 2, or as shown in fig. 3 to 8, which is not repeated here to avoid repetition. Alternatively, the processor executes the computer program to implement the functions of the modules/units in the embodiment of the ultrasound image searching apparatus, such as the functions of the target search request obtaining module 1001, the target search feature obtaining module 1002, the target ultrasound image obtaining module 1003 and the target search result displaying module 1004 shown in fig. 10, which are not described herein again to avoid redundancy.
In an embodiment, a computer-readable storage medium is provided, and a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the ultrasound image searching method in the foregoing embodiments, for example, S201 to S204 shown in fig. 2, or fig. 3 to 8, which are not repeated herein to avoid repetition. Alternatively, the computer program, when executed by the processor, implements the functions of the modules/units in the embodiment of the ultrasound image searching apparatus, such as the functions of the target search request obtaining module 1001, the target search feature obtaining module 1002, the target ultrasound image obtaining module 1003 and the target search result displaying module 1004 shown in fig. 10, and is not repeated here for avoiding repetition.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (16)

1. An ultrasound image searching method, comprising:
acquiring a target search request, wherein the target search request comprises a target search type and target search information corresponding to the target search type;
executing a target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining target search features corresponding to the target search information;
matching the target search characteristics with keyword associated maps corresponding to all existing ultrasonic images in a system database to obtain at least one target ultrasonic image;
based on a target sorting rule, sorting at least one target ultrasonic image, and displaying a target search result, wherein the executing a target feature extraction program corresponding to the target search type, performing feature extraction on the target search information, and determining a target search feature corresponding to the target search information includes:
when the target search type is a character string search type, extracting keywords of a character string to be searched to obtain search keywords;
acquiring historical keywords corresponding to the character string search type;
calculating a first matching degree corresponding to the search keyword and the historical keyword;
if the first matching degree is greater than the target matching degree, determining the historical keywords as target search features corresponding to the target search information;
if the first matching degree is not greater than the target matching degree, matching the search keyword with each standard keyword in a target knowledge graph to obtain a second matching degree corresponding to the search keyword and each standard keyword;
if the second matching degree is greater than the target matching degree, determining the standard keyword as a target search feature;
and if the second matching degree is not greater than the target matching degree, determining the search keyword as a target search feature.
2. The method of claim 1, wherein the executing a target feature extraction program corresponding to the target search type to perform feature extraction on the target search information and determine the target search feature corresponding to the target search information includes:
when the target search type is a voice search type, performing voice text conversion on voice to be searched by adopting a voice text conversion technology to obtain a character string to be searched;
extracting keywords from the character string to be searched to obtain search keywords;
acquiring historical keywords corresponding to the character string search type;
calculating a first matching degree corresponding to the search keyword and the historical keyword;
if the first matching degree is greater than the target matching degree, determining the historical keywords as target search features corresponding to the target search information;
if the first matching degree is not greater than the target matching degree, matching the search keyword with each standard keyword in a target knowledge graph to obtain a second matching degree corresponding to the search keyword and each standard keyword;
if the second matching degree is greater than the target matching degree, determining the standard keyword as a target search feature;
and if the second matching degree is not greater than the target matching degree, determining the search keyword as a target search feature.
3. The method of claim 1, wherein the executing a target feature extraction program corresponding to the target search type to perform feature extraction on the target search information and determine the target search feature corresponding to the target search information includes:
when the target search type is an image search type, acquiring at least one ultrasonic image to be searched;
performing standard surface identification on the ultrasonic image to be searched, and determining target standard surface characteristics corresponding to the ultrasonic image to be searched;
identifying the tissue structure of the ultrasonic image to be searched, and determining the target tissue characteristics corresponding to the ultrasonic image to be searched;
performing character recognition on the ultrasonic image to be searched, and determining the target equipment characteristics corresponding to the ultrasonic image to be searched;
and determining target searching characteristics based on the target standard surface characteristics, the target tissue characteristics and the target equipment characteristics corresponding to at least one ultrasonic image to be searched.
4. The method of claim 1, wherein the executing a target feature extraction program corresponding to the target search type to perform feature extraction on the target search information and determine the target search feature corresponding to the target search information includes:
when the target search type is a video search type, acquiring an ultrasonic video to be searched, and acquiring at least two ultrasonic images to be searched from the ultrasonic video to be searched;
performing standard surface identification on the ultrasonic image to be searched, and determining target standard surface characteristics corresponding to the ultrasonic image to be searched;
identifying the tissue structure of the ultrasonic image to be searched, and determining the target tissue characteristics corresponding to the ultrasonic image to be searched;
performing character recognition on the ultrasonic image to be searched, and determining the target equipment characteristics corresponding to the ultrasonic image to be searched;
and determining common characteristics in the target standard surface characteristics, the target organization characteristics and the target equipment characteristics corresponding to at least two ultrasonic images to be searched as target searching characteristics.
5. The method of claim 1, wherein the executing a target feature extraction program corresponding to the target search type to perform feature extraction on the target search information and determine the target search feature corresponding to the target search information includes:
when the target search type is a scene search type, acquiring a target scene schematic diagram and a scene to be searched;
determining scene organization characteristics corresponding to the scene to be searched according to the scene position of the scene to be searched in the target scene schematic diagram;
and determining target search characteristics based on the scene organization characteristics corresponding to the scene to be searched.
6. The method of claim 1, wherein the matching of the target search feature with the keyword association map corresponding to all existing ultrasound images in the system database to obtain at least one target ultrasound image comprises:
if the target search features are history keywords, determining at least one history ultrasonic image corresponding to the history keywords as at least one target ultrasonic image;
if the target search features are not historical keywords, matching the target search features with keyword associated maps corresponding to all existing ultrasonic images in a system database to obtain entity similarity of each existing ultrasonic image corresponding to the target search features;
and selecting at least one existing ultrasonic image with larger entity similarity, and determining the existing ultrasonic image as at least one target ultrasonic image.
7. The ultrasound image searching method of claim 1, wherein before the obtaining of the target search request including the target search type and the target search information corresponding to the target search type, the ultrasound image searching method further comprises:
acquiring existing image data, wherein the existing image data comprises an existing ultrasonic image, an ultrasonic analysis report and file attribute information;
performing standard surface identification on the existing ultrasonic image, and determining the existing standard surface characteristics corresponding to the existing ultrasonic image;
identifying the tissue structure of the existing ultrasonic image, and determining the existing tissue characteristics corresponding to the existing ultrasonic image;
performing character recognition on the existing ultrasonic image, and determining existing equipment characteristics corresponding to the existing ultrasonic image;
extracting keywords from the ultrasonic analysis report and the file attribute information to obtain report keywords and attribute keywords;
based on the existing standard surface features, the existing organization features, the existing equipment features, the report keywords and the attribute keywords, performing labeling processing on the existing ultrasonic images to obtain keyword association maps corresponding to the existing ultrasonic images;
and storing the existing ultrasonic images and the keyword association map in a system database in an associated manner.
8. An ultrasound image search apparatus, comprising:
the target search request acquisition module is used for acquiring a target search request, wherein the target search request comprises a target search type and target search information corresponding to the target search type;
the target search feature acquisition module is used for executing a target feature extraction program corresponding to the target search type, extracting features of the target search information and determining target search features corresponding to the target search information;
the target ultrasonic image acquisition module is used for matching the target search characteristics with keyword associated maps corresponding to all existing ultrasonic images in a system database to acquire at least one target ultrasonic image;
a target search result display module, configured to rank at least one target ultrasound image based on a target ranking rule, and display a target search result, where the target search feature acquisition module includes:
a search keyword acquisition unit, configured to, when the target search type is a character string search type, perform keyword extraction on a character string to be searched to acquire a search keyword;
a history keyword obtaining unit, configured to obtain a history keyword corresponding to the character string search type;
a first matching degree obtaining unit, configured to calculate a first matching degree corresponding to the search keyword and the history keyword;
a first feature determining unit, configured to determine the history keyword as a target search feature corresponding to the target search information if the first matching degree is greater than a target matching degree;
the second matching degree obtaining unit is used for matching the search keyword with each standard keyword in the target knowledge graph if the first matching degree is not greater than the target matching degree, and obtaining a second matching degree corresponding to the search keyword and each standard keyword;
the second characteristic determining unit is used for determining the standard keyword as a target search characteristic if the second matching degree is greater than the target matching degree;
and the third characteristic determining unit is used for determining the search keyword as a target search characteristic if the second matching degree is not greater than the target matching degree.
9. The ultrasound image searching apparatus of claim 8, wherein the target searching feature obtaining module comprises:
a character string to be searched acquiring unit, configured to perform voice-to-text conversion on the voice to be searched by using a voice-to-text conversion technology when the target search type is a voice search type, and acquire a character string to be searched;
the search keyword acquisition unit is used for extracting keywords from the character string to be searched and acquiring search keywords;
a history keyword obtaining unit, configured to obtain a history keyword corresponding to the character string search type;
a first matching degree obtaining unit, configured to calculate a first matching degree corresponding to the search keyword and the history keyword;
a first feature determining unit, configured to determine the history keyword as a target search feature corresponding to the target search information if the first matching degree is greater than a target matching degree;
the second matching degree obtaining unit is used for matching the search keyword with each standard keyword in the target knowledge graph if the first matching degree is not greater than the target matching degree, and obtaining a second matching degree corresponding to the search keyword and each standard keyword;
the second characteristic determining unit is used for determining the standard keyword as a target search characteristic if the second matching degree is greater than the target matching degree;
and the third characteristic determining unit is used for determining the search keyword as a target search characteristic if the second matching degree is not greater than the target matching degree.
10. The ultrasound image searching apparatus of claim 8, wherein the target searching feature obtaining module comprises:
the ultrasonic image acquisition unit to be searched is used for acquiring at least one ultrasonic image to be searched when the target search type is an image search type;
the target standard surface feature determination unit is used for performing standard surface identification on the ultrasonic image to be searched and determining a target standard surface feature corresponding to the ultrasonic image to be searched;
the target tissue characteristic determining unit is used for identifying the tissue structure of the ultrasonic image to be searched and determining the target tissue characteristic corresponding to the ultrasonic image to be searched;
the imaging equipment characteristic determining unit is used for performing character recognition on the ultrasonic image to be searched and determining the target equipment characteristic corresponding to the ultrasonic image to be searched;
and the fourth feature determination unit is used for determining a target search feature based on the target standard surface feature, the target tissue feature and the target equipment feature corresponding to at least one ultrasonic image to be searched.
11. The ultrasound image searching apparatus of claim 8, wherein the target searching feature obtaining module comprises:
the to-be-searched ultrasonic video acquisition unit is used for acquiring an ultrasonic video to be searched and acquiring at least two ultrasonic images to be searched from the ultrasonic video to be searched when the target search type is a video search type;
the target standard surface feature determination unit is used for performing standard surface identification on the ultrasonic image to be searched and determining a target standard surface feature corresponding to the ultrasonic image to be searched;
the target tissue feature determining unit is used for identifying the tissue structure of the ultrasonic image to be searched and determining the target tissue feature corresponding to the ultrasonic image to be searched;
the imaging equipment feature determining unit is used for performing character recognition on the ultrasonic image to be searched and determining the target equipment feature corresponding to the ultrasonic image to be searched;
and the fifth feature determining unit is used for determining the common features among the target standard surface features, the target tissue features and the target equipment features corresponding to the at least two ultrasonic images to be searched as the target search features.
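Building on the per-image features above, the video case of claim 11 keeps only what every extracted frame has in common. The sketch below assumes each frame's features can be flattened into a set of text labels; that representation, and the example labels, are illustrative choices rather than anything fixed by the claim.

    def common_features(per_frame_features: list[set[str]]) -> set[str]:
        # Intersect the feature labels of at least two frames.
        if len(per_frame_features) < 2:
            raise ValueError("a video search needs at least two frames")
        result = set(per_frame_features[0])
        for features in per_frame_features[1:]:
            result &= features
        return result

    print(common_features([
        {"four-chamber view", "left ventricle", "DeviceX"},
        {"four-chamber view", "mitral valve", "DeviceX"},
    ]))  # -> {'four-chamber view', 'DeviceX'}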
12. The ultrasound image searching apparatus of claim 8, wherein the target searching feature obtaining module comprises:
the scene search information acquisition unit is used for acquiring a target scene schematic diagram and a scene to be searched when the target search type is a scene search type;
the scene tissue feature acquisition unit is used for determining scene tissue features corresponding to the scene to be searched according to the scene position of the scene to be searched in the target scene schematic diagram;
and the sixth feature determining unit is used for determining target search features based on the scene tissue features corresponding to the scene to be searched.
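Claim 12 maps the scene position of the scene to be searched within the target scene schematic diagram to a tissue feature. The sketch below assumes the schematic diagram can be described by a simple rectangular region table; the coordinates and labels are invented purely for illustration.

    from typing import Optional

    # Hypothetical region table for a target scene schematic diagram:
    # label -> (x_min, y_min, x_max, y_max)
    REGIONS = {
        "heart":  (100, 80, 180, 160),
        "liver":  (60, 170, 200, 260),
        "kidney": (40, 270, 120, 340),
    }

    def scene_tissue_feature(x: int, y: int) -> Optional[str]:
        # Return the tissue label whose region contains the selected position.
        for label, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return label
        return None

    print(scene_tissue_feature(130, 120))  # -> 'heart'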
13. The ultrasound image searching apparatus of claim 8, wherein the target ultrasound image obtaining module comprises:
a first target ultrasound image determining unit, configured to determine, if the target search feature is a history keyword, at least one history ultrasound image corresponding to the history keyword as at least one target ultrasound image;
an entity similarity determining unit, configured to, if the target search feature is not a history keyword, perform matching processing on the target search feature and a keyword associated map corresponding to all existing ultrasound images in a system database, and obtain entity similarity between each existing ultrasound image and the target search feature;
and the second target ultrasound image determining unit is used for selecting at least one existing ultrasound image with a relatively high entity similarity and determining the selected existing ultrasound image as at least one target ultrasound image.
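Claim 13 leaves the entity-similarity measure open. As a stand-in, the sketch below scores each existing ultrasound image by the Jaccard overlap between the target search features and that image's keyword associated map, then returns the best-scoring image ids; the dictionary acting as the system database and all identifiers are illustrative assumptions.

    def jaccard(a: set[str], b: set[str]) -> float:
        # Stand-in entity similarity between two label sets.
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def rank_existing_images(target_features: set[str],
                             keyword_maps: dict[str, set[str]],
                             top_k: int = 3) -> list[str]:
        # Sort existing images by similarity and keep the top candidates.
        scored = sorted(keyword_maps.items(),
                        key=lambda item: jaccard(target_features, item[1]),
                        reverse=True)
        return [image_id for image_id, _ in scored[:top_k]]

    print(rank_existing_images(
        {"four-chamber view", "left ventricle"},
        {"img_001": {"four-chamber view", "left ventricle", "DeviceX"},
         "img_002": {"liver", "portal vein"}},
        top_k=1))  # -> ['img_001']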
14. The ultrasound image searching apparatus of claim 8, further comprising:
the existing image data acquisition unit is used for acquiring existing image data, wherein the existing image data comprises an existing ultrasonic image, an ultrasonic analysis report and file attribute information;
the existing standard surface feature acquisition unit is used for carrying out standard surface identification on the existing ultrasonic image and determining the existing standard surface feature corresponding to the existing ultrasonic image;
the existing tissue feature acquisition unit is used for identifying the tissue structure of the existing ultrasonic image and determining the existing tissue feature corresponding to the existing ultrasonic image;
the existing equipment feature acquisition unit is used for performing character recognition on the existing ultrasonic image and determining the existing equipment feature corresponding to the existing ultrasonic image;
the report attribute acquisition unit is used for extracting keywords from the ultrasonic analysis report and the file attribute information to acquire report keywords and attribute keywords;
a keyword associated map obtaining unit, configured to perform tagging processing on the existing ultrasonic image based on the existing standard surface feature, the existing tissue feature, the existing equipment feature, the report keyword and the attribute keyword, and obtain a keyword associated map corresponding to the existing ultrasonic image;
and the association storage unit is used for storing the existing ultrasonic images and the keyword association map in a system database in an associated manner.
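Claim 14 amounts to merging every label source into one keyword associated map and storing it against the existing ultrasonic image. The sketch below does exactly that, with an in-memory dictionary standing in for the system database; the field names and example values are assumptions, not taken from the patent.

    from typing import Iterable

    SYSTEM_DATABASE: dict[str, set[str]] = {}  # stand-in for the system database

    def build_keyword_map(standard_surface: str,
                          tissue_features: Iterable[str],
                          device_feature: str,
                          report_keywords: Iterable[str],
                          attribute_keywords: Iterable[str]) -> set[str]:
        # Merge every label source into one keyword associated map.
        return {standard_surface, device_feature, *tissue_features,
                *report_keywords, *attribute_keywords}

    def store_existing_image(image_id: str, keyword_map: set[str]) -> None:
        # Store the existing image and its keyword associated map together.
        SYSTEM_DATABASE[image_id] = keyword_map

    store_existing_image(
        "img_001",
        build_keyword_map("four-chamber view", ["left ventricle"], "DeviceX",
                          ["normal cardiac function"], ["2021-08-17"]))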
15. An ultrasound apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the ultrasound image search method of any of claims 1 to 7 when executing the computer program.
16. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the ultrasound image searching method according to any one of claims 1 to 7.
CN202110945450.8A 2021-08-17 2021-08-17 Ultrasonic image searching method and device, ultrasonic equipment and storage medium Active CN113505262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110945450.8A CN113505262B (en) 2021-08-17 2021-08-17 Ultrasonic image searching method and device, ultrasonic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110945450.8A CN113505262B (en) 2021-08-17 2021-08-17 Ultrasonic image searching method and device, ultrasonic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113505262A CN113505262A (en) 2021-10-15
CN113505262B true CN113505262B (en) 2022-03-29

Family

ID=78016009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110945450.8A Active CN113505262B (en) 2021-08-17 2021-08-17 Ultrasonic image searching method and device, ultrasonic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113505262B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113786213B (en) * 2021-11-16 2022-03-01 深圳迈瑞软件技术有限公司 Ultrasonic imaging device and readable storage medium
CN116257871B (en) * 2023-03-13 2023-11-17 杭州易签宝网络科技有限公司 Method, device and storage medium for data storage, certification and verification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920716A (en) * 2018-07-27 2018-11-30 中国电子科技集团公司第二十八研究所 The data retrieval and visualization system and method for knowledge based map
CN110704743A (en) * 2019-09-30 2020-01-17 北京科技大学 Semantic search method and device based on knowledge graph

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101437039B (en) * 2007-11-15 2012-11-07 华为技术有限公司 Mobile searching method, system and equipment
JP6543207B2 (en) * 2016-03-17 2019-07-10 株式会社東芝 DATA MANAGEMENT DEVICE, DATA MANAGEMENT SYSTEM, AND DATA MANAGEMENT METHOD
KR101916798B1 (en) * 2016-10-21 2018-11-09 네이버 주식회사 Method and system for providing recommendation query using search context
CN107169010A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of determination method and device of recommendation search keyword
CN108595494B (en) * 2018-03-15 2022-05-20 腾讯科技(深圳)有限公司 Method and device for acquiring reply information
CN111400607B (en) * 2020-06-04 2020-11-10 浙江口碑网络技术有限公司 Search content output method and device, computer equipment and readable storage medium
CN111737499B (en) * 2020-07-27 2020-11-27 平安国际智慧城市科技股份有限公司 Data searching method based on natural language processing and related equipment
CN112836519A (en) * 2021-02-08 2021-05-25 网易(杭州)网络有限公司 Training method of text generation model, and text generation method and device
CN113254708A (en) * 2021-06-28 2021-08-13 北京乐学帮网络技术有限公司 Video searching method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113505262A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
EP3879485B1 (en) Tissue nodule detection and model training method and apparatus thereof, device and system
CN113505262B (en) Ultrasonic image searching method and device, ultrasonic equipment and storage medium
JP6980040B2 (en) Medical report generation method and equipment
EP3937183A1 (en) Image analysis method, microscope video stream processing method, and related apparatus
CN112509119B (en) Spatial data processing and positioning method and device for temporal bone and electronic equipment
CN110472049B (en) Disease screening text classification method, computer device and readable storage medium
CN110472136B (en) Query result pushing method and device, storage medium and computer equipment
US20230135046A1 (en) Classification display method of ultrasound data and ultrasound imaging system
CN112164451A (en) Intelligent diagnosis guiding and registering method, device, equipment and storage medium
CN112580613A (en) Ultrasonic video image processing method, system, equipment and storage medium
CN112530550A (en) Image report generation method and device, computer equipment and storage medium
CN113241138B (en) Medical event information extraction method and device, computer equipment and storage medium
CN112801940B (en) Model evaluation method, device, equipment and medium
CN113658690A (en) Intelligent medical guide method and device, storage medium and electronic equipment
CN114819135A (en) Training method of detection model, target detection method, device and storage medium
CN112101030B (en) Method, device and equipment for establishing term mapping model and realizing standard word mapping
CN109223034A (en) Ultrasonic imaging method and supersonic imaging apparatus
US11386991B2 (en) Methods and apparatus for artificial intelligence informed radiological reporting and model refinement
CN113693625B (en) Ultrasonic imaging method and ultrasonic imaging apparatus
CN113486195A (en) Ultrasonic image processing method and device, ultrasonic equipment and storage medium
CN115700826A (en) Receipt processing method, receipt display method, receipt processing device, receipt display device, computer equipment and storage medium
CN111968111A (en) Method and device for identifying visceral organs or artifacts of CT (computed tomography) image
US20240119750A1 (en) Method of generating language feature extraction model, information processing apparatus, information processing method, and program
CN116823829B (en) Medical image analysis method, medical image analysis device, computer equipment and storage medium
US20230121619A1 (en) System and methods for exam suggestions using a clustered database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant