CN112215843B - Ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and storage medium - Google Patents

Info

Publication number
CN112215843B
CN112215843B (application CN202011326525.6A)
Authority
CN
China
Prior art keywords
ultrasonic
image
ultrasonic probe
information
position information
Prior art date
Legal status
Active
Application number
CN202011326525.6A
Other languages
Chinese (zh)
Other versions
CN112215843A
Inventor
赵明昌
龚栋梁
莫若理
Current Assignee
Wuxi Chison Medical Technologies Co Ltd
Original Assignee
Wuxi Chison Medical Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Chison Medical Technologies Co Ltd
Publication of CN112215843A
Application granted
Publication of CN112215843B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/42 Details of probe positioning or probe attachment to the patient
    • A61B 8/4245 Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
    • A61B 8/4263 Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient, using sensors not mounted on the probe, e.g. mounted on an external reference frame
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/54 Control of the diagnostic device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image


Abstract

The invention discloses an ultrasonic intelligent imaging navigation method and apparatus, an ultrasonic device, and a storage medium. The method comprises: acquiring an environment image containing at least a detection object and an ultrasonic probe, and identifying, from the environment image with a trained recognition network model, the position information of the target part of the detection object to be scanned and the initial position information of the ultrasonic probe; determining, from the position information of the target part and the initial position information of the probe, a scanning navigation path that guides the ultrasonic probe to the target part; and, during scanning, identifying the real-time position of the ultrasonic probe with a tracking neural network model and updating the scanning navigation path when the probe deviates from it. By implementing the method, the scanning path is determined on the basis of doctors' operating habits, quickly and accurately, which greatly improves the efficiency of ultrasonic scanning; moreover, whether the guide path needs to be updated can be judged from the real-time position of the probe, so that the shortest guide path is found.

Description

Ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and storage medium
Technical Field
The invention relates to the technical field of ultrasonic image processing, in particular to an ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and a storage medium.
Background
Ultrasonic diagnostic apparatus is widely used in clinical medicine, and the quality of the ultrasonic images obtained by scanning determines the quality of the subsequent diagnosis. However, organ positions and boundaries differ among people of different heights, weights and sexes, so the position where the ultrasonic probe is placed is not necessarily the position that needs to be scanned.
At present, in actual practice, the doctor moves the ultrasonic probe to the target position for scanning, but doctors differ in accumulated experience and operating proficiency, and some doctors with little experience cannot operate the probe quickly and accurately enough to obtain a standard ultrasonic image.
Disclosure of Invention
In view of this, embodiments of the present invention provide an ultrasound intelligent imaging navigation method, an ultrasound intelligent imaging navigation apparatus, an ultrasound device, and a storage medium, so as to solve the problem that an ultrasound image cannot be accurately obtained in the prior art.
The technical scheme provided by the invention is as follows:
the first aspect of the embodiments of the present invention provides an ultrasonic intelligent imaging navigation method, including: acquiring an environment image at least comprising a detection object and an ultrasonic probe through a visual sensor, and identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environment image by utilizing a trained identification network model based on the target part to be scanned; determining a scanning navigation path of the ultrasonic probe corresponding to the target part to be scanned based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe; displaying the scanning navigation path; and in the moving process of the probe, identifying the real-time position of the ultrasonic probe through a trained tracking neural network model, and updating the scanning navigation path according to the real-time position of the ultrasonic probe when the ultrasonic probe deviates from the scanning navigation path.
Further, the ultrasonic intelligent imaging navigation method further comprises the following steps: when the ultrasonic probe is guided to the target part to be scanned, acquiring a current ultrasonic image of the target part to be scanned by the ultrasonic probe; loading a three-dimensional ultrasonic model corresponding to the target part to be scanned based on the obtained target part to be scanned, wherein the three-dimensional ultrasonic model at least comprises a standard scanning tangent plane marked with probe position information and probe angle information; determining the position information and the angle information of the current ultrasonic probe according to the current ultrasonic image and/or the current environment image acquired by the vision sensor; and guiding the ultrasonic probe to move to the standard scanning section according to the position information and the angle information of the current ultrasonic probe, the position information of the probe corresponding to the standard scanning section mark and the angle information of the probe corresponding to the standard scanning section mark.
Further, the determining the position information and the angle information of the current ultrasound probe according to the current ultrasound image and/or the current environment image acquired by the visual sensor includes: inputting the current ultrasonic image and/or the current environment image acquired by the visual sensor and the three-dimensional ultrasonic model into a trained index neural network model or a CNN deep convolution neural network model for processing, and determining the position information and the angle information of the current ultrasonic probe; or inputting the current ultrasonic image and/or the current environment image acquired by the visual sensor into a trained full convolution neural network model for processing, and determining the position information and the angle information of the current ultrasonic probe.
Further, inputting the current ultrasound image and/or the current environment image acquired by the visual sensor, together with the three-dimensional ultrasound model, into the trained index neural network model for processing to determine the position information and the angle information of the current ultrasound probe includes: extracting a first feature vector from the current ultrasound image and/or the current environment image acquired by the visual sensor through a two-dimensional convolutional neural network; extracting a second feature vector from the three-dimensional ultrasound model through a three-dimensional convolutional neural network; concatenating the first feature vector and the second feature vector along a dimension to obtain a first concatenated feature vector; and inputting the first concatenated feature vector into a fully connected layer, which outputs the position information and the angle information of the current ultrasound probe.
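The fusion step just described can be illustrated with the minimal PyTorch sketch below; the module names, layer sizes and the six-value output (three position components and three angles) are assumptions for illustration, not the patented network definition.

```python
import torch
import torch.nn as nn

class IndexFusionNet(nn.Module):
    """Illustrative sketch: 2D branch for the image, 3D branch for the volume."""
    def __init__(self, out_dim: int = 6):   # assumed: 3 position values + 3 angles
        super().__init__()
        # 2D CNN extracts the first feature vector from the current ultrasound
        # image and/or the environment image.
        self.cnn2d = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 3D CNN extracts the second feature vector from the loaded
        # three-dimensional ultrasound model (a volume).
        self.cnn3d = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Fully connected layer maps the concatenated vector to the probe's
        # position and angle information.
        self.fc = nn.Linear(64 + 32, out_dim)

    def forward(self, image_2d, volume_3d):
        f1 = self.cnn2d(image_2d)             # first feature vector
        f2 = self.cnn3d(volume_3d)            # second feature vector
        fused = torch.cat([f1, f2], dim=1)    # first concatenated feature vector
        return self.fc(fused)                 # position + angle information

pose = IndexFusionNet()(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 64, 64, 64))
```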
Further, inputting the current ultrasound image and/or the current environment image acquired by the visual sensor into the trained fully convolutional neural network model for processing to determine the position information and the angle information of the current ultrasound probe includes: inputting the current ultrasound image into the fully convolutional neural network for processing to obtain a feature map of the current ultrasound image; performing global max pooling on the feature map to obtain a third feature vector of the current ultrasound image; performing global average pooling on the feature map to obtain a fourth feature vector of the current ultrasound image; concatenating the third feature vector and the fourth feature vector to obtain a second concatenated feature vector; and inputting the second concatenated feature vector into a fully connected layer, which outputs the position information and the angle information of the current ultrasound probe.
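A sketch of this fully convolutional variant, again with assumed layer sizes, might look as follows: the feature map is reduced by global max pooling and global average pooling, the two vectors are concatenated, and a fully connected layer outputs the position and angle information.

```python
import torch
import torch.nn as nn

class FCNPoseNet(nn.Module):
    """Illustrative sketch of the fully convolutional pose-regression variant."""
    def __init__(self, out_dim: int = 6):   # assumed output size
        super().__init__()
        self.backbone = nn.Sequential(       # fully convolutional backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 2, out_dim)

    def forward(self, ultrasound_image):
        fmap = self.backbone(ultrasound_image)   # feature map of the image
        f3 = torch.amax(fmap, dim=(2, 3))        # global max pooling -> third vector
        f4 = torch.mean(fmap, dim=(2, 3))        # global average pooling -> fourth vector
        fused = torch.cat([f3, f4], dim=1)       # second concatenated feature vector
        return self.fc(fused)                    # position + angle information

pose = FCNPoseNet()(torch.randn(1, 3, 224, 224))
```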
Further, inputting the current ultrasound image and/or the current environment image acquired by the visual sensor, together with the three-dimensional ultrasound model, into the trained CNN deep convolutional neural network model for processing to determine the position information and the angle information of the current ultrasound probe includes: obtaining IMU information collected by an inertial measurement unit arranged in the ultrasound probe; extracting a fifth feature vector from the current ultrasound image and/or the current environment image acquired by the visual sensor through the CNN deep convolutional neural network; extracting a sixth feature vector from the three-dimensional ultrasound model through the CNN deep convolutional neural network; extracting a seventh feature vector from the IMU information through the CNN deep convolutional neural network; concatenating the fifth, sixth and seventh feature vectors to obtain a first concatenated feature vector; and inputting the first concatenated feature vector into a fully connected layer for feature fusion to obtain the position information and the angle information of the current ultrasound probe.
Further, the obtaining of the IMU information collected by the inertial measurement unit arranged in the ultrasound probe includes: acquiring first IMU information of the ultrasound probe at the current moment through the inertial measurement unit; obtaining a plurality of previously measured and stored IMU readings from a preset time period before the current moment; and inputting the first IMU information at the current moment and the plurality of IMU readings from the preset time period before the current moment into a recurrent neural network model for processing to obtain second IMU information of the ultrasound probe, wherein the accuracy of the second IMU information is greater than that of the first IMU information, and the second IMU information is determined as the IMU information collected by the inertial measurement unit in the ultrasound probe.
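This refinement step can be illustrated as below: a short window of stored IMU readings ending with the current reading is fed to a small recurrent network whose output is taken as the second (refined) IMU information. The window length and the six-dimensional reading (3-axis acceleration plus 3-axis angular rate) are assumptions.

```python
import torch
import torch.nn as nn

class IMURefiner(nn.Module):
    """Illustrative sketch of refining IMU readings with a recurrent network."""
    def __init__(self, imu_dim: int = 6, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(imu_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, imu_dim)

    def forward(self, imu_window):
        # imu_window: (batch, T, imu_dim), the stored readings from the preset
        # time period followed by the first IMU information at the current moment
        _, h = self.rnn(imu_window)
        return self.out(h[-1])      # second (refined) IMU information

refined = IMURefiner()(torch.randn(1, 10, 6))   # assumed window of 10 readings
```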
Further, the ultrasonic intelligent imaging navigation method further comprises the following steps: when the ultrasonic probe is guided to the target part to be scanned, acquiring an ultrasonic image of the target part to be scanned by the ultrasonic probe; determining a standard image according to a matching degree value of the ultrasonic image and a matching image in a preset image database, wherein the matching image comprises a plurality of marking information, and the matching degree value of the ultrasonic image and the standard image is greater than a first preset matching degree value; guiding the ultrasonic probe to move according to mark information contained in the standard image, and determining an ultrasonic image with a matching value exceeding a second preset matching value with the standard image as a target ultrasonic image, wherein the second preset matching value is greater than or equal to the first preset matching value; and determining the diagnosis information of the target ultrasonic image according to the marking information contained in the standard image, wherein the diagnosis information at least comprises one or more of target part information and focus information.
Further, the recognition network model is either a segmentation model that segments the contours of different organs and of the ultrasound probe, or a detection model that recognizes the distribution regions of organs and of the ultrasound probe. The segmentation model comprises an input layer, a plurality of convolution layers, a plurality of pooling layers, a plurality of bilinear interpolation layers and an output layer, where the number of channels of the bilinear interpolation layers equals the number of target part categories to be scanned and of probe categories. The detection model comprises an input layer, a plurality of convolution layers, a plurality of pooling layers, a plurality of bilinear interpolation layers and an output layer, where the sum of the bilinear interpolation output and the corresponding convolution output passes through two further convolution layers into the output layer.
Further, the step of identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environmental image by using the trained identification network model includes: segmenting distribution areas of different parts of the detection object and the distribution area of the ultrasonic probe from the environment image by using the trained recognition network model; identifying part information corresponding to different distribution areas by using a trained identification network model, wherein the part information at least comprises part names or part categories; determining a distribution area of the target part to be scanned based on the target part to be scanned and the part information by using the trained recognition network model; and determining the position information of the target part to be scanned and the initial position information of the ultrasonic probe according to the distribution area of the target part to be scanned and the distribution area of the ultrasonic probe.
Further, determining a scanning navigation path of the ultrasonic probe corresponding to the target part to be scanned based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe, including: acquiring a plurality of historical navigation paths of the ultrasonic probe based on the position information of a target part to be scanned; and determining a scanning navigation path of the ultrasonic probe corresponding to the target part to be scanned from the plurality of historical navigation paths according to the initial position information of the ultrasonic probe.
Further, the determining a scanning navigation path from the plurality of historical navigation paths according to the initial position information of the ultrasonic probe, wherein the scanning navigation path is guided to a target part to be scanned by the ultrasonic probe, and the determining includes: judging whether the ultrasonic probe is positioned on any one of the plurality of historical navigation paths according to the initial position information of the ultrasonic probe; if the ultrasonic probe is positioned on any one of the plurality of historical navigation paths, determining the corresponding historical navigation path as a scanning navigation path; and if the ultrasonic probe is not on any historical navigation path, determining one historical navigation path with the shortest vertical distance to the ultrasonic probe in the plurality of historical navigation paths as a scanning navigation path.
Further, if the ultrasound probe is not on any historical navigation path, determining the historical navigation path with the shortest perpendicular distance to the ultrasound probe as the scanning navigation path includes: if the ultrasound probe is not on any historical navigation path, determining, among the plurality of historical navigation paths, the one with the shortest perpendicular distance to the ultrasound probe; and taking as the scanning navigation path the perpendicular segment from the ultrasound probe to that historical navigation path together with the segment from the foot of the perpendicular to the end point of that historical navigation path.
Further, determining a scanning navigation path guided to the target part to be scanned by the ultrasonic probe based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe, including: and generating a scanning navigation path based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe.
Further, the tracking neural network model adopts a convolutional neural network, and the step of identifying the real-time position of the ultrasonic probe through the trained tracking neural network model includes: acquiring a model image of an ultrasonic probe; inputting the model image and the environment image into a convolutional neural network, wherein the convolutional neural network outputs a first feature corresponding to the model image and a second feature corresponding to the environment image; convolving the first characteristic serving as a convolution kernel with the second characteristic to obtain a spatial response diagram; and outputting the spatial response map to a linear interpolation layer to acquire the real-time position of the ultrasonic probe in the environment image.
Further, when the ultrasound probe deviates from the scanning navigation path, updating the scanning navigation path according to the real-time position of the ultrasound probe includes: when the distance of the ultrasonic probe deviating from the scanning navigation path is within a preset distance range, a deviation prompt is sent out, wherein the deviation prompt comprises one or more of a visual prompt, a voice prompt and a touch prompt; and when the distance of the ultrasonic probe deviating from the scanning navigation path exceeds a preset distance range, determining one historical navigation path with the shortest vertical distance with the ultrasonic probe in the plurality of historical navigation paths as the scanning navigation path.
Further, the matching degree value of the ultrasound image and the matching image or the matching degree value of the ultrasound image and the standard image is calculated by the following method, including: calculating the matching degree value of the ultrasonic image and the matching image or the ultrasonic image and the standard image by a cosine similarity algorithm; and/or calculating the matching degree value of the ultrasonic image and the matching image or the ultrasonic image and the standard image through the trained matching neural network model.
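For the cosine-similarity option, a minimal sketch is given below; flattening raw pixel values is an illustrative simplification, and the same matching degree value could instead be produced by the trained matching neural network.

```python
import numpy as np

def matching_degree(ultrasound_img: np.ndarray, reference_img: np.ndarray) -> float:
    """Cosine similarity between an ultrasound image and a matching/standard image."""
    a = ultrasound_img.astype(np.float64).ravel()
    b = reference_img.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

score = matching_degree(np.random.rand(128, 128), np.random.rand(128, 128))
```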
Further, the matching images comprise at least one or more of matching ultrasound images, matching CT images, matching nuclear magnetic images, and matching X-ray images.
Further, the guiding the ultrasound probe to move comprises: guiding the ultrasonic probe to move according to mechanical arm navigation or instruction navigation, wherein the mechanical arm navigation comprises guiding the ultrasonic probe to move by adopting at least one mechanical arm, and the instruction navigation comprises one or more of a visual guide mode, a voice guide mode or a force feedback guide mode.
Further, the visual guidance mode is configured to be one or more of image guidance, video guidance, identification guidance, text guidance and projection guidance; the force feedback guidance means is configured as one or more of a tactile guidance, a vibration guidance, a traction guidance.
A second aspect of the embodiments of the present invention provides an ultrasound intelligent imaging navigation apparatus, including: the identification module is used for acquiring an environment image at least comprising a detection object and an ultrasonic probe through a visual sensor, and identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environment image by utilizing a trained identification network model based on the target part to be scanned; the path determining module is used for determining a scanning navigation path guided to the target part to be scanned by the ultrasonic probe based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe; the display module is used for displaying the path; and the real-time tracking module is used for identifying the real-time position of the ultrasonic probe through a trained tracking neural network model in the probe moving process, and updating the scanning navigation path according to the real-time position of the ultrasonic probe when the ultrasonic probe deviates from the scanning navigation path.
A third aspect of the embodiments of the present invention provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are configured to cause a computer to execute the ultrasound intelligent imaging navigation method according to any one of the first aspect and the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides an ultrasound apparatus, including: the ultrasonic intelligent imaging navigation method comprises a memory and a processor, wherein the memory and the processor are mutually connected in a communication mode, the memory stores computer instructions, and the processor executes the computer instructions so as to execute the ultrasonic intelligent imaging navigation method according to the first aspect and any one of the first aspect.
The technical scheme provided by the invention has the following effects:
according to the ultrasonic intelligent imaging navigation method, the ultrasonic intelligent imaging navigation device, the ultrasonic equipment and the storage medium, the target part and the ultrasonic probe are identified from the environmental image by using the trained identification network model, the corresponding plurality of historical scanning paths are obtained based on the identified target part, and the scanning navigation path of the ultrasonic probe is confirmed from the plurality of historical scanning paths, so that the scanning path can be determined based on the operation habit of a doctor, the speed is high, the accuracy is high, and the ultrasonic scanning efficiency of the doctor is greatly improved. Meanwhile, the real-time position of the ultrasonic probe is identified through the trained tracking neural network model, and whether the guide path needs to be updated or not can be judged according to the real-time position of the ultrasonic probe so as to find the shortest guide path. The recognition network model provided by the embodiment of the invention can accurately, simply and conveniently acquire the position of the target part, and meanwhile, the real-time position information of the ultrasonic probe is tracked by adopting the tracking neural network model, so that the degree of automation is high, and the accuracy is high.
According to the ultrasonic intelligent imaging navigation method, the ultrasonic intelligent imaging navigation device, the ultrasonic equipment and the storage medium, the position information and the angle information of the current ultrasonic image and the position information and the angle information of the standard scanning section which are obtained by the ultrasonic probe can be rapidly and accurately determined through the indexing neural network model, the full convolution neural network model or the CNN deep convolution neural network model and the loaded three-dimensional ultrasonic model, and the ultrasonic probe is guided to move to the standard scanning section according to the position relation between the current ultrasonic image and the standard scanning section. The ultrasonic intelligent imaging navigation method provided by the embodiment of the invention improves the speed and accuracy of searching a standard scanning tangent plane by an ultrasonic probe. Furthermore, the ultrasonic intelligent imaging navigation method provided by the embodiment of the invention can generate a visual guide path, display the guide path, the standard scanning section and the ultrasonic probe in real time and improve the scanning accuracy.
According to the ultrasonic intelligent imaging navigation method, the ultrasonic intelligent imaging navigation device, the ultrasonic equipment and the storage medium, after the ultrasonic image acquired by the ultrasonic probe is matched with a matching image in the preset image database to obtain a matching degree value, the ultrasonic probe is guided to move for accurate positioning by the mark information contained in the standard image, and the target ultrasonic image required for auxiliary diagnosis is then acquired. Therefore, the embodiment of the invention can guide the doctor and increase the speed at which the target ultrasonic image is obtained, improving the doctor's working efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of an ultrasound intelligent imaging navigation method according to an embodiment of the invention;
FIG. 2 is a schematic structural diagram of a tracking neural network model according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a segmentation model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a structure of a detection model according to an embodiment of the invention;
FIG. 5 is a flow chart of an ultrasound intelligent imaging navigation method according to another embodiment of the invention;
FIG. 6 is a schematic diagram of the structure of an indexing neural network model according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a recurrent neural network model in accordance with an embodiment of the present invention;
FIG. 8 is a schematic view of an imaging guide on a display according to an embodiment of the present invention;
FIG. 9 is a flow chart of an ultrasound intelligent imaging navigation method according to another embodiment of the present invention;
FIG. 10 is a block diagram of an ultrasound intelligent imaging navigation device according to an embodiment of the invention;
FIG. 11 is a schematic structural diagram of a computer-readable storage medium provided in accordance with an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an ultrasound apparatus provided according to an embodiment of the present invention.
Detailed Description
As mentioned in the background, when medical staff scan a patient for ultrasonic imaging, they need to hold the probe and place it at the position to be scanned for imaging. However, the positions and boundaries of organs or tissues differ among people of different heights, weights, sexes and ages, and operators with insufficient experience cannot quickly and accurately find the target part to be scanned or the standard ultrasonic section of that part. The invention therefore provides an ultrasonic intelligent imaging navigation method that can guide the ultrasonic probe to the target part to be scanned and to the standard ultrasonic section of the target part.
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Wherein like elements in different embodiments are numbered with like associated elements. In the following description, numerous details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in detail in order to avoid obscuring the core of the present application from excessive description, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art. Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Also, the various steps or actions in the method descriptions may be transposed or transposed in order, as will be apparent to one of ordinary skill in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing certain embodiments only and are not intended to imply a required sequence unless otherwise indicated where such sequence must be followed.
The embodiment of the invention provides an ultrasonic intelligent imaging navigation method, as shown in fig. 1, the method comprises the following steps:
step S100: the method comprises the steps of obtaining an environment image at least comprising a detection object and an ultrasonic probe through a visual sensor, and identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environment image by utilizing a trained identification network model based on the target part to be scanned. Specifically, the visual sensor may be a camera, such as a depth camera, for example, and the camera captures an environmental image including at least the detection object and the ultrasound probe, and the environmental image may be an RGB image or an RGB video. The camera may be a depth camera. Depth cameras have depth information for each pixel point of a picture or video taken by the camera as compared to conventional cameras.
For a target part to be scanned, the information of the target part to be scanned can be input through an input unit on an ultrasonic device or electronic equipment such as a mobile phone, an iPad and a computer, so that the ultrasonic device can acquire the information of the target part to be scanned of a detection object; the input unit can be a keyboard, a trackball, a mouse, a touch pad or the like or a combination thereof; the input unit may also be a voice recognition input unit, a gesture recognition input unit, or the like. It should be understood that the target portion information to be scanned may be a name of the target portion or a target portion icon displayed on the display selected through the input unit.
A "target site" in embodiments of the present invention may include a human, an animal, a portion of a human, or a portion of an animal. For example, the subject may include an organ or a blood vessel such as a liver, a heart, a uterus, a brain, a chest, an abdomen, and the like. In addition, the term "target site" may also include an artificial model. The artificial model represents a material having a volume very close to the density and effective atomic number of an organism, and may include a spherical artificial model having an emotion similar to a human body.
In one embodiment, the method for identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environmental image by using the trained identification network model comprises the following steps:
step S101: dividing distribution areas of different parts of a detection object and a distribution area of an ultrasonic probe from an environment image; specifically, if the environment image is an RGB video, selecting any one frame of RGB image in the RGB video, and segmenting a distribution area of different parts of the detection object and a distribution area of the ultrasonic probe from the RGB image through the identification network model, where the different parts and the probe are displayed differently, for example, the different parts or the distribution area of the ultrasonic probe are displayed differently by using different colors or shades.
Step S102: identifying part information corresponding to different distribution areas, wherein the part information at least comprises part names or part types; the position information corresponding to different distribution areas is identified through the identification network model, and then the names or the types of different positions can be identified. The identification network model can be a segmentation model based on segmenting different part outlines and ultrasonic probe outlines or a detection model for identifying organs and ultrasonic probe distribution areas.
Step S103: determining a distribution area of the target part to be scanned based on the target part to be scanned and the part information; it can be understood that the distribution area of the target part can be located based on the acquired target part information to be scanned by the detection object.
Step S104: and determining the position information of the target part to be scanned and the initial position information of the ultrasonic probe according to the distribution area of the target part to be scanned and the distribution area of the ultrasonic probe. Specifically, since there may be one or more ultrasound probes, after determining the distribution area of the target site to be scanned, the initial position information of the ultrasound probe may be determined according to the distribution area of the ultrasound probe.
Optionally, the position information of the ultrasonic probe can also be acquired through a magnetic sensor, the magnetic sensor comprises a magnetic emitter and a magnetic inductor, the magnetic emitter emits a strong magnetic field, which is equivalent to establishing a world coordinate system, and the position information of the ultrasonic probe can be acquired through the magnetic inductor arranged in the ultrasonic probe. Furthermore, initial position information of the ultrasound probe may also be acquired by the camera. It should be understood that, at this time, the video shot by the camera includes the target portion and the ultrasonic probe, the camera is equivalent to establishing a world coordinate system, and the positional relationship between the ultrasonic probe and the target portion can be obtained from the video.
Step S200: and determining a scanning navigation path guided to the target part to be scanned by the ultrasonic probe based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe.
At present, most methods for planning an ultrasonic guide path acquire the initial position information of the ultrasonic probe and the position information of the organ to be scanned, and then plan the scanning path as the shortest distance between the two. The correctness of a path planned in this way cannot be guaranteed: using the shortest distance between two points as the planning basis ignores the fact that the body surface over the target part is not planar (for example over the breast), it does not take the doctor's routine operating habits into account, and it applies the same planning principle to parts that call for different ones. Therefore, when the scanning navigation path is determined in this embodiment, a plurality of historical navigation paths of the ultrasonic probe are obtained based on the position information of the target part to be scanned, and the scanning navigation path that guides the ultrasonic probe to the target part is determined from these historical navigation paths according to the initial position information of the ultrasonic probe.
In this embodiment, the acquisition of the scanning navigation path of the ultrasound probe is determined based on a historical navigation path set of the physician scanning different organs with the ultrasound probe.
Specifically, it is judged, according to the initial position information of the ultrasonic probe, whether the probe lies on any one of the plurality of historical navigation paths. If it does, the corresponding historical navigation path is determined as the scanning navigation path; that is, when the probe already happens to be on a historical navigation path, that path can be used directly, no new path needs to be planned from the probe's initial position and the target position of the target part, and the speed of obtaining the scanning navigation path is greatly improved.
If the ultrasonic probe is not on any historical navigation path, the historical navigation path with the shortest perpendicular distance to the probe is used: the probe is first moved onto the line of this optimal historical navigation path and then moved along it, i.e. the scanning navigation path consists of the perpendicular segment from the probe to the optimal historical navigation path plus the segment from the foot of the perpendicular to the end point of that path. It will be understood that, although this embodiment uses the shortest perpendicular distance as the selection basis, the scanning navigation path is still determined from historical navigation paths along which doctors have operated the ultrasonic probe to scan the target site.
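A plain-Python sketch of this selection rule, with the paths simplified to 2D line segments and an assumed tolerance for "lying on" a path, might look as follows.

```python
import math

def point_segment_distance(p, a, b):
    """Perpendicular distance from point p to segment a-b and the foot of the perpendicular."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    foot = (ax + t * dx, ay + t * dy)
    return math.hypot(px - foot[0], py - foot[1]), foot

def choose_scan_path(probe_pos, historical_paths, tol=1e-3):
    best = None
    for start, end in historical_paths:
        d, foot = point_segment_distance(probe_pos, start, end)
        if d <= tol:                      # probe already lies on this historical path
            return [probe_pos, end]
        if best is None or d < best[0]:
            best = (d, foot, end)
    _, foot, end = best
    # perpendicular segment onto the nearest path, then along it to its end point
    return [probe_pos, foot, end]

path = choose_scan_path((1.0, 2.0), [((0, 0), (5, 0)), ((0, 3), (4, 3))])
```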
In an embodiment, when a plurality of historical navigation paths in the plurality of historical navigation paths are partially overlapped or intersected, whether only one qualified historical navigation path exists needs to be judged according to the initial position information of the ultrasonic probe and the plurality of historical navigation paths. If the ultrasonic probe is only positioned on one historical navigation path in the plurality of historical navigation paths, determining the corresponding historical navigation path as a scanning navigation path; and if the ultrasonic probe is positioned on a plurality of historical navigation paths in the plurality of historical navigation paths at the same time, selecting the historical navigation path with the most navigation times as a scanning navigation path.
When several of the historical navigation paths partially overlap or intersect and the ultrasonic probe lies on more than one of them at the same time, the historical navigation path with the largest number of navigation times is selected as the scanning navigation path. Specifically, if at least two of the historical navigation paths on which the probe lies share the largest number of navigation times, the scanning navigation path is determined according to the remaining scanning distance along each of them. For example, if 2 qualifying historical navigation paths have the same largest number of navigation times, and historical navigation path A requires the ultrasonic probe to move 2 cm while historical navigation path B requires it to move 3 cm, historical navigation path A is selected as the scanning navigation path for the current scan.
Step S300: displaying a scanning navigation path; specifically, after the scanning navigation path is determined, the scanning navigation path may be displayed, for example, on a display, or may be displayed in a projection or sensor manner.
Step S400: and in the probe moving process, the real-time position of the ultrasonic probe is identified through the trained tracking neural network model, and when the ultrasonic probe deviates from the scanning navigation path, the scanning navigation path is updated according to the real-time position of the ultrasonic probe. In particular, when the ultrasonic probe is guided to move according to the scanning navigation path, the ultrasonic probe may deviate from the scanning navigation path due to an operation problem, and thus a real-time position of the ultrasonic probe may be tracked in real time.
In one embodiment, the tracking neural network model uses a convolutional neural network, and the step of identifying the real-time position of the ultrasound probe by the trained tracking neural network model includes:
step S401: acquiring a model image of an ultrasonic probe; specifically, a model image of the ultrasound probe is preset in the ultrasound device and can be called through the input unit; the input unit can be a keyboard, a trackball, a mouse, a touch pad or the like or a combination thereof; the input unit may also be a voice recognition input unit, a gesture recognition input unit, or the like.
Step S402: inputting the model image and the environment image into a convolutional neural network, and outputting a first characteristic corresponding to the model image and a second characteristic corresponding to the environment image by the convolutional neural network; wherein, the convolutional neural network selects the shared fully convolutional neural network, and the fully convolutional neural network at least comprises: an input layer, a convolution layer, a batch normalization layer, a linear rectification function layer, a maximum pooling layer, and an output layer.
Step S403: convolving the first characteristic serving as a convolution kernel with the second characteristic to obtain a spatial response diagram; the spatial response map comprises the response intensity of the first feature on the second feature and the acquaintance values of all positions in the model image and the environment image, wherein the response intensity value is 0-1.
Step S404: and outputting the spatial response map to a linear interpolation layer to acquire the real-time position of the ultrasonic probe in the environment image.
As shown in fig. 2, s represents the object to be tracked, in this embodiment the ultrasonic probe, and d represents the environment image captured by the camera, which contains the ultrasonic probe. The embodiment tracks the ultrasonic probe in the environment image in real time through the tracking neural network model. The model image and the environment image are input into the same fully convolutional neural network, which maps both into a common feature space and outputs a first feature corresponding to the model image and a second feature corresponding to the environment image. The first feature, obtained from the model image, is then used as a convolution kernel and convolved with the second feature to obtain a spatial response map, giving the response intensity of the first feature on the second feature, i.e. the similarity between the model image and each position in the environment image, with response intensity values between 0 and 1. Because the network is fully convolutional, images of arbitrary size are acceptable.
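A compact PyTorch sketch of this correlation-based tracking step is given below; the shared backbone layers are illustrative assumptions, and the sigmoid used to squeeze the response into the 0-1 range is one possible choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                  # shared fully convolutional network
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
)

def locate_probe(model_img, env_img):
    f1 = backbone(model_img)               # first feature (probe template)
    f2 = backbone(env_img)                 # second feature (environment image)
    response = F.conv2d(f2, f1)            # template used as convolution kernel
    response = torch.sigmoid(response)     # response intensity in [0, 1]
    response = F.interpolate(response, size=env_img.shape[-2:],
                             mode='bilinear', align_corners=False)
    idx = torch.argmax(response[0, 0])     # location of the strongest response
    y, x = divmod(idx.item(), response.shape[-1])
    return (x, y), response                # probe position in the environment image

pos, _ = locate_probe(torch.randn(1, 3, 32, 32), torch.randn(1, 3, 256, 256))
```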
It should be understood that the environment image may be an RGB image, or may be an RGB video, and if the environment image is an RGB video, each frame of the RGB image in the RGB video is processed.
When training samples for the tracking neural network model are prepared, points within a certain range around the ultrasonic probe in each environment image are taken as positive samples and labelled 1, and the remaining points are taken as negative samples and labelled 0; these labels are mapped to the output of the tracking network, i.e. each environment image has a corresponding label map whose size equals the size of the tracking network's output. The loss function is a logistic regression loss or a cross-entropy loss, and the choice of optimizer is not restricted.
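A small sketch of such a label map, with an assumed radius and output resolution, could be:

```python
import numpy as np

def make_label_map(out_h, out_w, probe_yx, radius=3):
    """Pixels within `radius` of the probe centre are positive (1), the rest negative (0)."""
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    dist2 = (ys - probe_yx[0]) ** 2 + (xs - probe_yx[1]) ** 2
    return (dist2 <= radius ** 2).astype(np.float32)

label = make_label_map(17, 17, probe_yx=(8, 5))   # assumed 17x17 network output
```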
The structure of the shared full convolution neural network comprises: convolution, BN (batch normalization), ReLU (linear rectification function), max pooling. During actual tracking, the output of the tracking neural network is linearly interpolated to the size same as that of the environmental image, so that the response intensity output by the tracking network is mapped to the environmental image, and the region with the maximum response value is selected as the position of the ultrasonic probe. When the camera is a depth camera, an RGBD environment image is acquired, and only the input needs to be changed and the depth D information is added to the output of the network based on the RGBD environment image.
It should be understood that the training data of the tracking neural network model are environment images taken by the camera that contain at least the detection object and the ultrasonic probe; here the object to be detected is the ultrasonic probe itself, so there are only 2 categories: the ultrasonic probe and the background. Because of the real-time requirement, the organ-detection network structure can be simplified, for example by reducing the number of channels or removing the bilinear interpolation module and feeding the pooled output directly to the subsequent prediction. In practice a separate detection model can be customized for each type of ultrasonic probe, or one detection model can detect multiple ultrasonic probes, in which case the number of detection categories is the number of probe types plus 1 (for the background).
In an embodiment, when the ultrasound probe is found to deviate from the scanning navigation path according to the tracked real-time position of the ultrasound probe, the scanning navigation path may be updated according to the real-time position of the ultrasound probe.
Specifically, when the distance by which the ultrasonic probe deviates from the scanning navigation path is within a preset distance range, a deviation prompt is issued, which may be one or more of a visual prompt, a voice prompt and a tactile prompt. The visual prompt may indicate on the display the direction and angle in which the probe should move, or project a virtual indication icon onto the corresponding body surface of the detection object. The tactile prompt may be haptic feedback in the ultrasonic probe itself: the doctor can tell from the probe's vibration whether it has left the scanning navigation path, and the frequency and amplitude of the vibration increase with the deviation distance. Because the deviation from the scanning navigation path is small in this case, the path does not need to be re-planned; the prompt only reminds the operator to steer the probe back onto the original scanning navigation path and continue moving.
And when the distance of the ultrasonic probe deviating from the scanning navigation path exceeds a preset distance range, determining one historical navigation path with the shortest distance with the ultrasonic probe in the plurality of historical navigation paths as the scanning navigation path. The ultrasonic probe is moved to the line of the optimal historical navigation path, and the corresponding historical navigation path is determined as the scanning navigation path. Optionally, the preset distance is 0.5 cm.
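The two-level deviation handling can be sketched as follows, using the optional 0.5 cm value as the preset range and a simple print in place of the visual, voice or tactile prompt; the perpendicular-distance helper restates the earlier rule in simplified 2D form.

```python
import math

def perp_distance(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def handle_deviation(deviation_cm, probe_pos, historical_paths,
                     preset_range_cm=0.5, prompt=print):
    if deviation_cm <= preset_range_cm:
        # within the preset range: only prompt (stronger for larger deviations),
        # the original scanning navigation path is kept
        prompt(f"deviation {deviation_cm:.2f} cm: return to the planned path")
        return None
    # beyond the preset range: switch to the historical navigation path with the
    # shortest perpendicular distance to the probe
    return min(historical_paths, key=lambda seg: perp_distance(probe_pos, *seg))

new_path = handle_deviation(1.2, (1.0, 2.0), [((0, 0), (5, 0)), ((0, 3), (4, 3))])
```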
According to the ultrasonic intelligent imaging navigation method provided by the embodiment of the invention, the target part and the ultrasonic probe are identified from the environmental image by using the trained identification network model, the corresponding plurality of historical scanning paths are obtained based on the identified target part, and the scanning navigation path of the ultrasonic probe is confirmed from the plurality of historical scanning paths, so that the scanning path can be determined based on the operation habit of a doctor, the speed is high, the accuracy is high, and the ultrasonic scanning efficiency of the doctor is greatly improved. And the real-time position of the ultrasonic probe is identified through the trained tracking neural network model, and whether the guide path needs to be updated or not can be judged according to the real-time position of the ultrasonic probe so as to find the shortest guide path. The recognition network model provided by the embodiment of the invention can accurately, simply and conveniently acquire the position of the target part, and meanwhile, the real-time position information of the ultrasonic probe is tracked by adopting the tracking neural network model, so that the degree of automation is high, and the accuracy is high.
In one implementation, the network model for identification in step S100 is a segmentation model based on segmenting different site contours and ultrasound probe contours or a detection model for identifying organs and ultrasound probe distribution regions, wherein the segmentation model includes: the device comprises an input layer, a plurality of convolution layers, a plurality of pooling layers, a plurality of bilinear interpolation layers and an output layer, wherein the number of channels of the bilinear interpolation layers is the same as the number of target parts to be scanned and probes; the detection model comprises: the device comprises an input layer, a plurality of convolution layers, a plurality of pooling layers, a plurality of bilinear interpolation layers and an output layer, wherein the output of the addition of the bilinear interpolation layers and the convolution layers enters the output layer for output through two-layer convolution.
As shown in fig. 3, the input of the segmentation model is a three-channel RGB image, followed by convolution + pooling modules: the convolution kernels are 3 × 3 with stride 1 and their number grows in multiples of 32, while the pooling kernels are 2 × 2 with stride 2. The number of these modules matches the number of subsequent bilinear interpolation + convolution modules and can be increased or decreased according to the training and test results. Two convolution layers (3 × 3 kernels, stride 1) connect consecutive modules to strengthen feature extraction. The number of channels output by each bilinear interpolation + convolution layer equals the number of part categories plus ultrasonic probe categories, and a ReLU activation follows the convolution to mitigate vanishing gradients. A further convolution layer with 1 × 1 kernels is applied after the preceding pooling layer so that its channel count matches that of the bilinear interpolation branch (i.e. the number of organ and ultrasonic probe categories); this also adds non-linearity and fitting capacity, and its output is added to the bilinear interpolation branch as the input of the next up-sampling stage, improving the network's classification ability. At the final bilinear interpolation + convolution layer, a softmax is taken over the output channels and the index of the maximum value is selected, so that each pixel corresponds to one category; the output is then a single channel, the final part segmentation image, in which different part regions carry different category values.
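A much-reduced PyTorch sketch with the same overall shape (convolution + pooling stages, bilinear up-sampling with as many channels as categories, a 1 x 1 convolution on the skip branch, addition, and a per-pixel argmax) is shown below; the depths and widths are assumptions and do not reproduce the full architecture of fig. 3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegModelSketch(nn.Module):
    """Reduced sketch of the segmentation model's overall structure."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.up2 = nn.Conv2d(64, num_classes, 3, padding=1)   # after bilinear up-sampling
        self.skip1 = nn.Conv2d(32, num_classes, 1)            # 1x1 conv on the skip branch
        self.up1 = nn.Conv2d(num_classes, num_classes, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # 1/2 resolution
        e2 = self.enc2(e1)                                    # 1/4 resolution
        u2 = F.relu(self.up2(F.interpolate(e2, scale_factor=2, mode='bilinear',
                                           align_corners=False)))
        u2 = u2 + self.skip1(e1)                              # add the skip connection
        u1 = self.up1(F.interpolate(u2, scale_factor=2, mode='bilinear',
                                    align_corners=False))
        return u1.argmax(dim=1)                               # per-pixel category map

seg_map = SegModelSketch(num_classes=10)(torch.randn(1, 3, 128, 128))
```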
As shown in fig. 4, the structure of the detection model is similar to that of the segmentation model: in the convolution and pooling modules the feature map size is repeatedly halved, the image resolution decreases while the useful semantic information is strengthened, and fusing these features with the bilinear interpolation + convolution modules effectively enhances the network's ability to detect targets; predicting at different resolution stages further improves the detection of small targets. Unlike the segmentation model, the sum of each bilinear interpolation + convolution output and the corresponding post-pooling convolution output is taken and passed through two additional convolution layers: one layer regresses the target rectangular box (x, y, w, h), where (x, y) is the top-left corner of the box and (w, h) are its width and height; the other layer outputs the category scores for the box, and the category with the highest score indicates the part to which the box corresponds.
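A minimal sketch of one prediction stage of the detection model, under the same illustrative assumptions as the segmentation sketch above (layer widths and input size are invented for the example), may look as follows; it shows only the two added convolution layers — box regression and category scoring — applied to a fused feature map.

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Sketch of one prediction stage of the detection model: one branch
    regresses (x, y, w, h) of the target box, the other scores the categories."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.box_branch = nn.Conv2d(in_channels, 4, 3, padding=1)            # (x, y, w, h)
        self.cls_branch = nn.Conv2d(in_channels, num_classes, 3, padding=1)  # category scores

    def forward(self, fused):                   # fused = upsampled + post-pooling conv features
        return self.box_branch(fused), self.cls_branch(fused)

# assumed usage on an illustrative 64-channel fused feature map
head = DetectionHead(in_channels=64, num_classes=10)
boxes, scores = head(torch.randn(1, 64, 32, 32))
best_class = scores.mean(dim=[2, 3]).argmax(dim=1)   # strongest category over the map
```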
In one embodiment, the segmentation model (or detection model) is obtained by training as follows: acquiring a plurality of detection-object images containing the ultrasonic probe, and labeling the parts of the detection object and the ultrasonic probe in these images; inputting the labeled images into a segmentation neural network (or detection neural network) for training, and adjusting the parameters of the network according to the training results to obtain the trained segmentation model (or detection model). For example, when the detection object is a human body, whole-body photographs of different people are collected and the organs are labeled with a labeling tool in two ways: for organ segmentation, labeling is based on the organ contours in the whole image, all pixels within one contour share a category value, and regions outside any organ contour are assigned category 0; for organ detection, labeling is based on rectangular boxes, each rectangular box contains one organ, and the category corresponding to that organ is marked.
It should be understood that the recognition network model can recognize the different parts and the ultrasonic probe simultaneously through a shared fully convolutional neural network, or can recognize the parts of the detection object and the ultrasonic probe through separate convolutional neural networks. The position information of the ultrasonic probe is recognized from the environment image by using the trained recognition network model.
In an embodiment, as shown in fig. 5, the ultrasound intelligent imaging navigation method further includes:
step S500: when the ultrasonic probe is guided to the target part to be scanned, acquiring a current ultrasonic image of the target part through the ultrasonic probe; the ultrasonic image is one or more of a single-frame ultrasonic image, a multi-frame ultrasonic image or an ultrasonic video, and the ultrasonic probe transmits ultrasonic waves to, and receives them from, the target part when the ultrasonic image is acquired. Specifically, the ultrasonic probe is excited by the transmission pulse to transmit ultrasonic waves to the target part, receives, after a certain delay, the ultrasonic echo carrying information of the target part reflected from the target region, and converts the ultrasonic echo back into an electric signal to obtain the ultrasonic image. The ultrasonic probe may be connected to the ultrasonic host in a wired mode, or may be a wirelessly connected ultrasonic probe such as a palm ultrasound probe.
It should be understood that the ultrasonic probe may retrieve a preset parameter set for scanning the target part according to the target part to be scanned, where the preset parameter set includes the transmit frequency, depth parameter, dynamic range parameter, and so on. Specifically, the preset parameter set may be adjusted through an input unit, which may be a keyboard, a trackball, a mouse, a touch pad, or the like, or a combination thereof; the input unit may also be a voice recognition input unit, a gesture recognition input unit, or the like. Alternatively, an indication icon of the target part may be selected on the ultrasonic equipment, and the ultrasonic equipment automatically loads the preset parameter set corresponding to the selected target part.
Step S600: loading a three-dimensional ultrasonic model corresponding to the target part to be scanned based on the obtained target part, wherein the three-dimensional ultrasonic model contains at least one standard scanning section marked with probe position information and probe angle information; specifically, when loading the three-dimensional ultrasonic model corresponding to the target part to be scanned of the detection object, the target part information of the detection object needs to be acquired, and it may be taken from the target part information acquired in step S100. The corresponding three-dimensional ultrasonic model is a trained model selected according to information such as age, sex, height and weight. The marks on the section include the position information and angle information of the probe, the probe type and model, and so on. When loading the three-dimensional ultrasonic model corresponding to the target part to be scanned, an external input mode such as a keyboard, a touch screen, voice input or card swiping may be used, or an automatic photographing-and-acquisition mode or an image-reading mode may be used.
The three-dimensional ultrasonic model is stored in a storage medium in advance, and the three-dimensional ultrasonic model of the corresponding organ is loaded according to the target part to be scanned. It should be understood that the three-dimensional ultrasound model is obtained by scanning and reconstructing the target region in advance. Specifically, carrying out ultrasonic scanning on a target part along a preset direction through an ultrasonic probe to obtain an ultrasonic image of each section of the target part; acquiring six-degree-of-freedom parameters corresponding to ultrasonic images of different sections scanned by a probe; and inputting the ultrasonic image of each section and the corresponding six-degree-of-freedom parameter into the trained deep neural network model to obtain the three-dimensional ultrasonic model of the target part.
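As a simple illustration of how the annotated sections can be organised before reconstruction, the following sketch pairs each section image with its six-degree-of-freedom parameters; `reconstruct_net` stands for the trained deep neural network mentioned above and is a hypothetical placeholder, not an API defined by this embodiment.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SlicePose:
    """One annotated section used to build the three-dimensional ultrasound model."""
    image: np.ndarray          # B-mode section image
    pose: tuple                # six-degree-of-freedom parameters (x, y, z, ax, ay, az)

def build_volume(slices, reconstruct_net):
    """Hypothetical wrapper: feed every section and its 6-DoF pose to the
    trained reconstruction network (reconstruct_net is an assumed callable)."""
    images = np.stack([s.image for s in slices])
    poses = np.stack([np.asarray(s.pose, dtype=np.float32) for s in slices])
    return reconstruct_net(images, poses)      # returns the three-dimensional ultrasound model
```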
The ultrasound image of each slice in the three-dimensional ultrasound model is provided with position information and angle information. Generating a world coordinate system comprising a probe and a target part by a magnetic field generator in the scanning process of the ultrasonic probe; and acquiring six-degree-of-freedom parameters of the probe through a magnetic positioner arranged on the probe, wherein the six-degree-of-freedom parameters comprise position parameters and direction parameters of the probe. In the actual ultrasonic diagnosis process, different sections of the organ are often observed to assist a doctor in diagnosis, so that the three-dimensional ultrasonic model of the embodiment of the invention at least comprises one standard scanning section with position information and angle information.
Step S700: and determining the position information and the angle information of the current ultrasonic probe according to the current ultrasonic image and/or the current environment image acquired by the vision sensor.
Step S800: guiding the ultrasonic probe to move to the standard scanning section according to the position information and the angle information of the current ultrasonic probe and the position information and the angle information of the probe corresponding to the standard scanning section mark. Specifically, the position information and the angle information are six-degree-of-freedom coordinates (x, y, z, ax, ay, az), where ax, ay and az are the angle information about the x, y and z axes. The position information and the angle information of the ultrasonic probe are stored in correspondence with the ultrasonic image, which is equivalent to marking the ultrasonic image, so the position information and angle information of the ultrasonic probe are also the position information and angle information of the ultrasonic image.
In an embodiment, when the position information and the angle information of the current ultrasonic probe are determined from the current ultrasonic image, a frame of the current ultrasonic image and the three-dimensional ultrasonic model may be input into the trained index neural network model, the trained full convolution neural network model or the trained CNN deep convolutional neural network model for processing, so as to determine the position information and the angle information of the current ultrasonic probe.
Wherein, the index neural network model at least comprises: two-dimensional convolutional neural networks and three-dimensional convolutional neural networks. The two-dimensional convolutional neural network is used for processing the input current ultrasonic image and at least comprises a two-dimensional convolutional layer, a maximum pooling layer, an average pooling layer and an activation function layer. The three-dimensional convolution neural network is used for processing the input three-dimensional ultrasonic model. The three-dimensional convolutional neural network at least comprises a three-dimensional convolutional layer, a maximum pooling layer, an average pooling layer and an activation function layer. When the index neural network model is adopted for processing, the method specifically comprises the following steps:
step S801: extracting a first characteristic vector in a current ultrasonic image and/or a current environment image acquired by a visual sensor through a two-dimensional convolutional neural network; the index neural network model at least comprises a two-dimensional convolution neural network and a three-dimensional convolution neural network, the current ultrasonic image is input into the corresponding two-dimensional convolution neural network, and a first feature vector in the current ultrasonic image is extracted through the two-dimensional convolution neural network, wherein the first feature vector is a one-dimensional feature vector. As shown in fig. 6, a represents the input current ultrasound image.
Step S802: extracting a second feature vector in the three-dimensional ultrasonic model through a three-dimensional convolution neural network; and inputting the loaded three-dimensional ultrasonic model into a corresponding three-dimensional convolution neural network for processing, and extracting a second feature vector in the three-dimensional ultrasonic model through the three-dimensional convolution neural network. The three-dimensional convolutional neural network at least comprises a three-dimensional convolutional layer, a maximum pooling layer, an average pooling layer and an activation function layer, and the output is averaged or added on a channel, so that a one-dimensional feature vector is obtained, namely the second feature vector is also a one-dimensional feature vector. Where the convolution kernel of the three-dimensional convolution layer may be 3 x 3, as shown in fig. 6, b represents the three-dimensional ultrasound model.
Step S803: and splicing the first feature vector and the second feature vector in the dimension to obtain a first spliced feature vector.
Step S804: inputting the first spliced feature vector into a fully connected layer and outputting the position information and the angle information of the current ultrasonic image. The number of neurons of the fully connected layer is the same as the number of position and angle components; preferably, this number is 6.
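Steps S801 to S804 can be summarised by the following PyTorch sketch of the index neural network model; the backbone widths and input shapes are assumptions, while the two-branch structure, the one-dimensional feature vectors, their splicing and the 6-output fully connected layer follow the description.

```python
import torch
import torch.nn as nn

class IndexNet(nn.Module):
    """Sketch of the index neural network model: a 2-D CNN for the current
    ultrasound frame, a 3-D CNN for the loaded volume, feature splicing, and
    a fully connected layer with 6 outputs (x, y, z, ax, ay, az)."""
    def __init__(self):
        super().__init__()
        self.net2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))            # averaged to one value per channel
        self.net3d = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))            # averaged over the volume
        self.fc = nn.Linear(32 + 32, 6)         # 6 neurons: position + angle information

    def forward(self, frame, volume):
        v1 = self.net2d(frame).flatten(1)       # first feature vector (one-dimensional)
        v2 = self.net3d(volume).flatten(1)      # second feature vector (one-dimensional)
        return self.fc(torch.cat([v1, v2], dim=1))   # first spliced feature vector -> pose

# assumed shapes: a single-channel frame and a single-channel volume
pose = IndexNet()(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 64, 64, 64))
```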
The full convolution neural network model is trained on the basis of a two-dimensional convolutional neural network, which comprises at least a two-dimensional convolution layer, a maximum pooling layer, an average pooling layer and an activation function layer. It should be appreciated that although the full convolution neural network model lacks the three-dimensional convolutional network of the index neural network model, it has a greater data-processing capability than the two-dimensional convolutional neural network within the index neural network model. The three-dimensional ultrasonic model is a set of section images scanned along certain angles, each section image carrying its corresponding (x, y, z, ax, ay, az), and b in fig. 6 can be regarded as the three-dimensional model of the target part. When the full convolution neural network model is used for processing, the method specifically comprises the following steps:
step S810: inputting the current ultrasonic image and/or the current environment image acquired by the visual sensor into the full convolution neural network for processing to obtain a feature map of the current ultrasonic image.
Step S811: and carrying out global maximum pooling on the feature map to obtain a third feature vector of the current ultrasonic image.
Step S812: and carrying out global average pooling on the feature map to obtain a fourth feature vector of the current ultrasonic image.
Step S813: and splicing the third feature vector and the fourth feature vector to obtain a second spliced feature vector.
Step S814: and inputting the second splicing characteristic vector into the full-connection layer, and outputting the position information and the angle information of the current ultrasonic probe.
It should be understood that the full convolution neural network model is built by scanning the target part from multiple angles to obtain many multi-angle section images, each with its corresponding (x, y, z, ax, ay, az); the purpose of the network is to establish a relational model between a section image of the target part and the corresponding pose, which is then used in the prediction stage. For example, if the same organ of many different persons (e.g. 5000 persons) is sampled, each organ is scanned at different angles (e.g. 360 angles) and 200 frames of ultrasound images are obtained in each angular direction, then the number of training samples of the full convolution neural network model is 5000 × 360 × 200 = 360,000,000. Training on this huge set of sample ultrasonic images and updating the parameters of the full convolution neural network yields the full convolution neural network model. When a current ultrasonic image is input into the full convolution neural network model, the position information and the angle information (x, y, z, ax, ay, az) of the current ultrasonic probe can be obtained. The training adopts a regression method, and the loss function is the mean squared error.
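A compact sketch of the fully convolutional variant of steps S810 to S814, including the mean-squared-error regression mentioned above, is given below; the backbone is an assumed two-stage example and the shapes are illustrative only.

```python
import torch
import torch.nn as nn

class FullConvPoseNet(nn.Module):
    """Sketch of the fully convolutional pose model: feature map, global max
    and global average pooling, splicing, and a 6-output linear layer."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Linear(64 * 2, 6)          # third + fourth vectors -> (x, y, z, ax, ay, az)

    def forward(self, frame):
        fmap = self.backbone(frame)                          # feature map of the current image
        v_max = fmap.amax(dim=(2, 3))                        # global max pooling
        v_avg = fmap.mean(dim=(2, 3))                        # global average pooling
        return self.fc(torch.cat([v_max, v_avg], dim=1))     # second spliced feature vector

# regression against the recorded 6-DoF labels with a mean-squared-error loss
model, loss_fn = FullConvPoseNet(), nn.MSELoss()
pred = model(torch.randn(4, 1, 128, 128))
loss = loss_fn(pred, torch.randn(4, 6))
```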
When a CNN deep convolutional neural network model is used for processing, the method specifically includes:
step S821: obtaining IMU information collected by an inertial measurement unit arranged in the ultrasonic probe; specifically, when the CNN deep convolutional neural network is used to obtain the position information and the angle information, the IMU information needs to be obtained first, for example through an inertial measurement unit (IMU) provided in the ultrasonic probe. The inertial measurement unit comprises at least an accelerometer and a gyroscope; combining precise gyroscopes and accelerometers in a multi-axis configuration and fusing their outputs provides reliable position and motion recognition for stabilization and navigation applications. Precision MEMS IMUs provide the required level of precision even under complex operating environments and dynamic or extreme motion conditions. Acquiring the IMU information improves the accuracy of calculating the current position information and angle information of the ultrasonic image. An inertial measurement unit contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration signals of the object along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system; together they measure the angular velocity and acceleration of the ultrasonic probe in three-dimensional space, from which the attitude of the ultrasonic probe is calculated.
In order to improve the accuracy of the IMU information acquired by the inertial measurement unit, in an embodiment, the acquiring the IMU information acquired by the inertial measurement unit disposed in the ultrasound probe specifically includes: acquiring first IMU information of the ultrasonic probe at the current moment through an inertia measurement unit; obtaining a plurality of IMU information which is measured in advance and stored in a preset time period before the current moment of the ultrasonic probe; inputting first IMU information of the current moment of the ultrasonic probe and a plurality of IMU information in a preset time period before the current moment into a recurrent neural network model for processing to obtain second IMU information of the ultrasonic probe, wherein the accuracy of the second IMU information is greater than that of the first IMU information, and determining the second IMU information as the IMU information acquired by an inertial measurement unit in the ultrasonic probe.
The recurrent neural network model of the embodiment of the invention is a cascaded recurrent neural network model, and the IMU information comprises at least multi-axis angular velocity data and acceleration data. As shown in fig. 7, X1(t0) represents the data collected by the gyroscope in the inertial measurement unit at time t0; X1(t1) represents the data collected by the gyroscope at time t1; and X1(tn) represents the data collected by the gyroscope at the current time. It should be understood that the IMU information obtained within the preset time period before the current time of the ultrasonic probe consists of IMU information at the different times within that period. Likewise, X2(t0) represents the data collected by the accelerometer in the inertial measurement unit at time t0; X2(t1) represents the data collected by the accelerometer at time t1; and X2(tn) represents the data collected by the accelerometer at the current time. The number of cascaded stages of the recurrent neural network (RNN) is set according to the types of sensors in the inertial measurement unit.
As shown in fig. 7, the embodiment of the present invention provides a two-stage RNN network used to extract, respectively, feature information from the data collected by the gyroscope and from the data collected by the accelerometer. The feature information output by the recurrent neural network structures is spliced and input into a fully connected network for feature fusion, which finally outputs the second IMU information of the ultrasonic probe. It should be understood that the second IMU information is high-accuracy IMU information for the current time of the ultrasonic probe. The raw data collected by the gyroscope and the accelerometer can be used directly as input, or can first be processed by an embedding (vector) layer before being input to the RNN.
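The cascaded recurrent structure of fig. 7 can be sketched as follows; the hidden size, window length and the use of the last hidden state are assumptions, while the per-sensor RNN branches, feature splicing and fully connected fusion follow the description.

```python
import torch
import torch.nn as nn

class ImuRefineNet(nn.Module):
    """Sketch of the cascaded recurrent network: one RNN branch per sensor
    (gyroscope, accelerometer), feature splicing, and a fully connected
    fusion layer that outputs the refined second IMU information."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gyro_rnn = nn.RNN(input_size=3, hidden_size=hidden, batch_first=True)
        self.accel_rnn = nn.RNN(input_size=3, hidden_size=hidden, batch_first=True)
        self.fuse = nn.Linear(hidden * 2, 6)    # refined angular velocity + acceleration

    def forward(self, gyro_seq, accel_seq):
        # sequences X(t0) ... X(tn): past samples in the preset window plus the current sample
        _, h_g = self.gyro_rnn(gyro_seq)        # last hidden state of the gyroscope branch
        _, h_a = self.accel_rnn(accel_seq)
        feats = torch.cat([h_g[-1], h_a[-1]], dim=1)
        return self.fuse(feats)

# assumed usage: a window of three-axis samples per sensor
net = ImuRefineNet()
second_imu = net(torch.randn(1, 20, 3), torch.randn(1, 20, 3))
```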
Step S822: and extracting a fifth feature vector in the current ultrasonic image and/or the current environmental image acquired by the vision sensor through the CNN deep convolutional neural network.
Step S823: and extracting a sixth feature vector in the three-dimensional ultrasonic model through the CNN deep convolution neural network.
Step S824: and extracting a seventh feature vector in the IMU information through the CNN deep convolutional neural network.
Step S825: and splicing the fifth feature vector, the sixth feature vector and the seventh feature vector to obtain a first spliced feature vector.
Step S826: inputting the first spliced feature vector into the fully connected layer for feature vector fusion to obtain the position information and the angle information of the current ultrasonic probe. The number of neurons of the fully connected layer is the same as the number of position and angle components; preferably, this number is 6.
The CNN deep convolutional neural network comprises a two-dimensional convolutional neural network and a three-dimensional convolutional neural network. In the embodiment of the invention, the feature vector of the current ultrasonic image is extracted through the two-dimensional convolutional neural network, which comprises at least two-dimensional convolution, maximum pooling, average pooling and an activation function, and this feature vector is a one-dimensional feature vector. The feature vector of the three-dimensional ultrasonic model is extracted through the three-dimensional convolutional neural network, which comprises at least three-dimensional convolution (the convolution kernel may be 3 × 3), maximum pooling, average pooling and an activation function; the output is averaged or summed over the channels, so that a one-dimensional feature vector is obtained.
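Building on the index-network sketch above, the CNN deep convolutional variant of steps S822 to S826 adds an IMU branch before splicing; the following sketch is illustrative only, and the 6-channel IMU window layout is an assumption.

```python
import torch
import torch.nn as nn

class DeepFusionPoseNet(nn.Module):
    """Sketch of the CNN deep convolutional variant: image, volume and IMU
    branches produce the fifth, sixth and seventh feature vectors, which are
    spliced and fused by a 6-neuron fully connected layer."""
    def __init__(self):
        super().__init__()
        self.img2d = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1))
        self.vol3d = nn.Sequential(nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool3d(1))
        self.imu1d = nn.Sequential(nn.Conv1d(6, 32, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool1d(1))
        self.fc = nn.Linear(32 * 3, 6)

    def forward(self, frame, volume, imu_seq):
        v5 = self.img2d(frame).flatten(1)       # fifth feature vector
        v6 = self.vol3d(volume).flatten(1)      # sixth feature vector
        v7 = self.imu1d(imu_seq).flatten(1)     # seventh feature vector
        return self.fc(torch.cat([v5, v6, v7], dim=1))

pose = DeepFusionPoseNet()(torch.randn(1, 1, 256, 256),
                           torch.randn(1, 1, 64, 64, 64),
                           torch.randn(1, 6, 20))   # 6-channel IMU window (assumed layout)
```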
In one embodiment, after the position information and the angle information (X, Y, Z, AX, AY, AZ) of the current ultrasound image and the position information and the angle information (X, Y, Z, AX, AY, AZ) of the standard scanning section preset in the three-dimensional ultrasound image are determined, a guide path for the ultrasound probe to move to the standard scanning section is planned according to the position information and the angle information of the two, and the position information and the angle information are six-degree-of-freedom coordinates.
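For illustration, this guidance step can be reduced to the difference between the two six-degree-of-freedom poses; the sketch below assumes millimetre and degree units, which are not specified by the embodiment.

```python
import numpy as np

def plan_guidance(current_pose, target_pose):
    """Sketch: the difference between the probe's current six-degree-of-freedom
    pose and the pose marked on the standard scanning section yields the
    translation and rotation prompts shown to the operator."""
    cur = np.asarray(current_pose, dtype=float)   # (x, y, z, ax, ay, az)
    tgt = np.asarray(target_pose, dtype=float)
    delta = tgt - cur
    return {"translate_mm": delta[:3], "rotate_deg": delta[3:]}

prompt = plan_guidance((10, 20, 5, 0, 15, 90), (12, 25, 5, 0, 0, 90))
```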
As shown in fig. 8, the scanning guidance area 1000 displayed on the display comprises at least a first guidance area 1600 and a second guidance area 1700, where the first guidance area 1600 displays at least the position information and the angle information of the current ultrasonic probe, the position information and the angle information of the probe corresponding to the standard scanning section, and the operation prompt information. The operation prompt information of the embodiment of the invention comprises at least the translation distance and the rotation angle, and may also include the pressing pressure of the ultrasonic probe. The second guidance area contains the object to be detected 1100, the target organ 1500 highlighted on the object 1100, the current ultrasonic probe 1200, the guide path 1400 and the target virtual probe 1300; it should be understood that the highlighting may cover the entire target organ 1500 or only its outline. The current ultrasonic probe 1200 moves according to its real-time position, while the target virtual probe 1300 marks the position to which the ultrasonic probe needs to move in order to obtain the standard scanning section.
In an embodiment, a physician may need to scan a plurality of standard scanning sections when performing an ultrasound scan on a certain target organ, and the embodiment of the present invention plans the guidance path 1400 according to the distance between the position information of different standard scanning sections and the current ultrasound probe 1200. It should be understood that the guide path 1400 is also highlighted, and may be highlighted by a distinctive color, flashing, or the like.
In one embodiment, the guiding the ultrasound probe to move to the standard scanning section according to the position information and the angle information of the current ultrasound image and the standard scanning section comprises:
step S831: planning a guide path for the ultrasonic probe to move to a standard scanning tangent plane according to the position information and the angle information; specifically, a guide path for the ultrasonic probe to move to the standard scanning section is planned according to the position information and the angle information of the current ultrasonic image and the position information and the angle information of the standard scanning section.
Step S832: a real-time position of the ultrasound probe is acquired.
In an embodiment, the method may acquire an environment image, captured by a camera, containing at least the detection object and the ultrasonic probe, and identify the real-time position of the ultrasonic probe through the trained tracking neural network model, which specifically comprises: acquiring a model image of the ultrasonic probe; inputting the model image and the environment image into a shared fully convolutional neural network, which outputs a first feature corresponding to the model image and a second feature corresponding to the environment image; using the first feature as a convolution kernel and convolving it with the second feature to obtain a spatial response map; and passing the spatial response map through a linear interpolation layer to obtain the real-time position of the ultrasonic probe in the environment image.
It should be understood that the model image of the ultrasonic probe is preset in the ultrasonic device and can be called up through the input unit; the input unit can be a keyboard, a trackball, a mouse, a touch pad, or the like, or a combination thereof, and can also be a voice recognition input unit, a gesture recognition input unit, or the like. It is also to be understood that the target part information may be the name of the target part or a target part icon displayed on the display and selected through the input unit. The spatial response map contains the response intensity of the first feature on the second feature, i.e. a recognition value for each position between the model image and the environment image, and the response intensity value ranges from 0 to 1.
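The tracking network described in the two preceding paragraphs resembles a Siamese (template-matching) tracker; the following sketch is a non-authoritative reconstruction in which the backbone, the sigmoid used to keep the response intensity within 0–1, and the peak-picking step are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbeTracker(nn.Module):
    """Sketch of the tracking network: a shared fully convolutional backbone
    embeds the probe model image and the environment image; the model feature
    acts as a convolution kernel over the environment feature, and the
    upsampled response map localises the probe."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, model_img, env_img):
        f_model = self.backbone(model_img)                # first feature (template)
        f_env = self.backbone(env_img)                    # second feature (search region)
        response = F.conv2d(f_env, f_model)               # cross-correlation -> spatial response map
        response = torch.sigmoid(response)                # response intensity in [0, 1]
        response = F.interpolate(response, size=env_img.shape[-2:],
                                 mode='bilinear', align_corners=False)
        idx = response.flatten(1).argmax(dim=1)           # peak gives the probe position
        y, x = idx // env_img.shape[-1], idx % env_img.shape[-1]
        return response, (x, y)

tracker = ProbeTracker()
resp, (px, py) = tracker(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 256, 256))
```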
Step S833: judging whether the ultrasonic probe deviates from the guide path according to the real-time position of the ultrasonic probe, and if so, updating the guide path according to the real-time position. When the ultrasonic probe deviates from the guide path within a preset distance range, a deviation prompt is sent; the deviation prompt comprises one or more of an indicator light, a voice prompt and a vibration prompt. A deviation correction prompt is also sent, which includes prompting the moving direction and distance of the ultrasonic probe on the display. It should be understood that in this case the distance by which the ultrasonic probe deviates from the guide path is relatively small, so the path does not need to be re-planned; the operator is only prompted to bring the ultrasonic probe back to the original guide path and continue moving.
In an embodiment, the direction and distance of the movement of the ultrasonic probe may be displayed on the surface of the detection object; specifically, the guide path and the operation prompt steps of the ultrasonic probe may be displayed on the surface of the detection object through a projection device or a laser guidance device. After the ultrasonic probe deviates from the guide path beyond a preset range, the guide path is re-planned according to the real-time position of the ultrasonic probe; specifically, the shortest guide path is newly selected according to the real-time position of the ultrasonic probe and the position of the target part at that moment. The display includes VR, AR and other display devices.
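The deviation logic of step S833 and the re-planning rule above can be illustrated with a short sketch; the two distance thresholds are assumed values used only to show the branching between a correction prompt and re-planning.

```python
import numpy as np

def check_deviation(probe_xy, path_points, warn_dist=10.0, replan_dist=30.0):
    """Sketch of the deviation logic: the distance from the probe's real-time
    position to the nearest point of the guide path decides whether to issue
    a correction prompt or to re-plan the path (thresholds are assumptions)."""
    path = np.asarray(path_points, dtype=float)
    dists = np.linalg.norm(path - np.asarray(probe_xy, dtype=float), axis=1)
    nearest, d = int(dists.argmin()), float(dists.min())
    if d <= warn_dist:
        return "on_path", nearest
    if d <= replan_dist:
        return "prompt_correction", nearest   # light / voice / vibration prompt back to the path
    return "replan", nearest                  # re-plan the shortest path from the real-time position

state, idx = check_deviation((120, 80), [(100, 80), (110, 82), (125, 85)])
```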
Step S834: and displaying the guide path, the standard scanning section and the ultrasonic probe in real time.
Specifically, a guide path, a standard scanning section and the ultrasonic probe are highlighted on the environment image and/or the surface of the detected object. The guide path, the standard scanning section and the ultrasonic probe can be displayed in a distinguishing way in different colors or shades and the like.
In order to further prompt the position of the standard scanning section, a target virtual probe is displayed at the body surface position of the detection object corresponding to the standard scanning section so as to guide the ultrasonic probe. It should be understood that the corresponding position of the detected object may be displayed on the display, or a three-dimensional virtual ultrasound probe may be projected at the corresponding position of the actual detected object.
In order to further improve the speed and the accuracy of scanning, the method further comprises: providing operation prompt information in the process of guiding the ultrasonic probe to move to the standard scanning section, wherein the operation prompt information comprises one or more of a voice operation prompt, a visual operation prompt and a tactile operation prompt. The visual operation prompt may indicate the direction and angle of probe movement on the display or generate a virtual indication icon on the body surface of the detection object. The tactile operation prompt is a vibration of the ultrasonic probe when it deviates from the guide path. When the ultrasonic probe reaches the standard scanning section, the ultrasonic probe vibrates to indicate that the target position has been reached; alternatively, if a lesion is found before the standard scanning section is reached during scanning, a voice prompt or a vibration prompt may be sent.
According to the ultrasonic intelligent imaging navigation method provided by the embodiment of the invention, the position information and the angle information of the current ultrasonic image and the position information and the angle information of the standard scanning section, which are acquired by the ultrasonic probe, can be quickly and accurately determined by the indexing neural network model, the full convolution neural network model or the CNN deep convolution neural network model and the loaded three-dimensional ultrasonic model, and the ultrasonic probe is guided to move to the standard scanning section according to the position relation between the current ultrasonic image and the standard scanning section. The ultrasonic intelligent imaging navigation method provided by the embodiment of the invention improves the speed and accuracy of searching a standard scanning tangent plane by an ultrasonic probe. Furthermore, the ultrasonic intelligent imaging navigation method provided by the embodiment of the invention can generate a visual guide path, and display the guide path, the standard scanning section and the ultrasonic probe in real time, so that the scanning accuracy is improved.
In an embodiment, as shown in fig. 9, the ultrasound intelligent imaging navigation method further includes:
step S5000: when the ultrasonic probe is guided to the target part to be scanned, acquiring an ultrasonic image of the target part through the ultrasonic probe; specifically, when the ultrasonic probe is guided to the target part to be scanned, ultrasonic waves are transmitted to and received from the target part by the ultrasonic probe. The ultrasonic probe is excited by the transmitted pulse, transmits ultrasonic waves to the target part, receives, after a certain delay, the ultrasonic echoes carrying target part information reflected from the target region, and converts the ultrasonic echoes back into electric signals to obtain ultrasonic images or videos. It is to be understood that the ultrasonic image of the embodiment of the present invention is one or more of a single-frame ultrasonic image, a multi-frame ultrasonic image or an ultrasonic video. The ultrasonic probe may be connected to the ultrasonic host in a wired mode, or may be a wirelessly connected probe such as a palm ultrasound probe.
It should be understood that the ultrasonic probe may retrieve a preset parameter set for scanning the target part according to the target part to be scanned, where the preset parameter set includes the transmit frequency, depth parameter, dynamic range parameter, and so on. Specifically, the preset parameter set may be adjusted through an input unit, which may be a keyboard, a trackball, a mouse, a touch pad, or the like, or a combination thereof; the input unit may also be a voice recognition input unit, a gesture recognition input unit, or the like. Alternatively, an indication icon of the target part may be selected on the ultrasonic equipment, and the ultrasonic equipment automatically loads the preset parameter set corresponding to the selected target part.
Step S6000: determining a standard image according to the matching degree value of the ultrasonic image and the matching image in the preset image database, wherein the matching image comprises a plurality of marking information, and the matching degree value of the ultrasonic image and the standard image is larger than a first preset matching degree value. Specifically, in order to provide a reference basis for diagnosis for a doctor quickly, the embodiment of the invention calculates the matching degree values of the ultrasonic image to be diagnosed and a plurality of matching images containing the label information in a preset image database in a retrieval and query manner. The marking information at least comprises one or more of navigation information, target part information and focus information corresponding to the matching image. It should be understood that the navigation information is corresponding position information and angle information when the matching image is collected, and is used as a basis for guidance. In addition, when a plurality of standard images with the matching values larger than the first preset matching value with the ultrasonic image exist in the database, determining the image with the highest calculated matching value as the standard image; when the calculated matching degree values are all lower than the first preset matching degree value, the size of the first preset matching degree value can be properly reduced according to the actual situation, and therefore the required standard image can be obtained in the database.
The position information and the angle information can be obtained by one or more of magnetic sensing positioning, visual sensing positioning and inertial-measurement-unit positioning. Magnetic sensing positioning establishes a magnetic field, i.e. a world coordinate system, through a magnetic transmitter and then locates the probe with a magnetic receiver arranged on the ultrasonic probe. Visual sensing positioning establishes a world coordinate system through at least one camera and obtains the position information and angle information of the ultrasonic probe through image recognition. The inertial measurement unit comprises at least an accelerometer and a gyroscope; combining precise gyroscopes and accelerometers in a multi-axis configuration and fusing their outputs provides reliable position and motion recognition for stabilization and navigation applications. Precision MEMS IMUs provide the required level of precision even under complex operating environments and dynamic or extreme motion conditions. Acquiring the IMU information improves the accuracy of calculating the current position information and angle information of the ultrasonic image. An inertial measurement unit contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration signals of the object along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system; together they measure the angular velocity and acceleration of the object in three-dimensional space, from which the attitude of the object is calculated.
The default image database may be: one or more of a local image database, a hospital alliance image database, and a cloud image database. The matching image types in the preset image database of the embodiment of the invention at least comprise one or more of matching ultrasonic images, matching CT images and matching nuclear magnetic images.
In one embodiment, the matching degree value between the ultrasonic image and a matching image may be calculated by a cosine similarity algorithm. The cosine similarity algorithm measures the similarity between the ultrasonic image and a matching image by calculating the cosine of the angle, in the inner product space, between the feature vector representing the ultrasonic image and the feature vectors of the matching images in the image database.
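A minimal sketch of the cosine-similarity matching follows; how the feature vectors themselves are extracted from the images is assumed to be handled elsewhere.

```python
import numpy as np

def cosine_match(query_vec, database_vecs):
    """Sketch of cosine-similarity retrieval: the angle between the query
    image's feature vector and each matching image's feature vector gives
    the matching degree value."""
    q = np.asarray(query_vec, dtype=float)
    db = np.asarray(database_vecs, dtype=float)
    scores = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    best = int(scores.argmax())
    return best, float(scores[best])           # index and matching degree of the best candidate

idx, score = cosine_match([0.2, 0.9, 0.1], [[0.1, 0.8, 0.2], [0.9, 0.1, 0.3]])
```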
In an embodiment, the matching degree between the ultrasound image and a plurality of matching images containing the label information in the preset image database may be calculated by using a trained matching neural network model. Wherein, the matching neural network model comprises: a first neural network, a second neural network, a screening neural network, and a matching neural network. The first neural network is used for identifying the scanned part of the ultrasonic image and is obtained by training a plurality of ultrasonic images marked with the category of the scanned part; the first neural network of the embodiment of the invention is a convolutional neural network, and the first neural network comprises an input layer, a hidden layer and an output layer; the hidden layer comprises a plurality of convolution layers, a down-sampling layer and an up-sampling layer; the input ultrasonic image to be diagnosed is subjected to convolution operation and down-sampling operation respectively through a plurality of convolution layers and down-sampling layers, and is subjected to convolution operation and up-sampling operation respectively through a plurality of convolution layers and up-sampling layers; the input layer and the hidden layer, the hidden layers and the output layer of the first neural network are connected through weight parameters; the convolution layer in the first neural network is used for automatically extracting the feature vector in the ultrasonic image. After a plurality of ultrasonic images marked with scanning part categories are trained, the ultrasonic images are input into the first neural network, and then the scanning parts corresponding to the ultrasonic images to be diagnosed can be quickly identified. It is understood that when the matching images containing the label information are stored in the local image database, the hospital union image database or the cloud image database, the hospital classifies the matching images, for example, all the matching images related to the "heart" are stored in a sub-image set. The image database establishes corresponding sub-image sets according to different scanned positions, such as uterus, brain, chest, abdomen and the like. According to the embodiment of the invention, the first neural network can be used for quickly identifying the scanned part of the ultrasonic image, so that the searching matching amount can be reduced, and the searching matching speed can be improved.
In order to further improve the image matching speed, the second neural network is used for identifying focus information of the ultrasonic image, and the second neural network is obtained through training of a plurality of ultrasonic images marked with the focus information; the second neural network is also a convolutional neural network. The second neural network training method specifically comprises the following steps: inputting a plurality of ultrasonic image samples marked with focus information into a second neural network to predict focus areas in the ultrasonic image samples; determining a target lesion area corresponding to the predicted lesion area by using the predicted lesion area; and determining the sampling weight of the ultrasonic image sample according to the focus information of the ultrasonic image sample and the predicted focus area, and further obtaining a trained second neural network. The lesion area of the ultrasonic image can be rapidly identified through the second neural network. It can be understood that the image database can establish corresponding sub-image sets according to different lesion areas of the same scanned part. According to the embodiment of the invention, the second neural network can be used for quickly identifying the focus information of the ultrasonic image to be diagnosed, so that the searching and matching amount can be reduced, and the searching and matching speed is improved.
In one embodiment, a matching image with a matching value with the ultrasound image exceeding a first predetermined matching degree is determined as a standard image. An operator can set a first preset matching value through an input unit, and the input unit is used for inputting a control instruction of the operator. The input unit may be at least one of a keyboard, a trackball, a mouse, a touch panel, a handle, a dial, a joystick, and a foot switch. The input unit may also input a non-contact type signal such as a sound, a gesture, a line of sight, or a brain wave signal. The operator may set a specific first predetermined matching value, for example, 95%, and more than 95% of the matching images in the image database may be screened out. It should be understood that the standard image is equivalent to a standard interface to be obtained for scanning the target region, so that the diagnosis accuracy of the doctor can be improved, and misdiagnosis can be avoided.
Step S7000: and guiding the ultrasonic probe to move according to the mark information contained in the standard image, and determining the ultrasonic image with the matching value of the standard image exceeding a second preset matching value as a target ultrasonic image, wherein the second preset matching value is greater than or equal to the first preset matching value.
The standard image of the embodiment of the invention is obtained by calculating the matching degree value between the ultrasonic image and the matching images in the preset image database, so the ultrasonic image acquired by the ultrasonic probe already corresponds to an approximate position, and the target ultrasonic image can be obtained with only one further step of guiding and adjusting the position and angle of the ultrasonic probe. After the standard image is determined, the ultrasonic probe is guided to move according to the mark information contained in the standard image, where the mark information comprises at least one or more of navigation information, target part information and focus information corresponding to the matching image. A guide path is planned according to the position information and angle information corresponding to the acquired standard image and the position information and angle information of the current ultrasonic probe, and the ultrasonic probe is guided to obtain the target ultrasonic image. That is, operation prompt information is provided to guide the movement of the ultrasonic probe so as to obtain an accurate ultrasonic image; for example, a visual operation prompt may indicate the direction and angle of probe movement on the display or generate a virtual indication icon on the body surface of the detection object. The tactile operation prompt is a vibration of the ultrasonic probe when it deviates from the guide path. When the ultrasonic probe reaches the standard scanning section, the ultrasonic probe vibrates to indicate that the target position has been reached; alternatively, if a lesion is found before the standard scanning section is reached during scanning, a voice prompt or a vibration prompt may be sent.
Step S8000: and determining the diagnosis information of the target ultrasonic image according to the marking information contained in the standard image, wherein the diagnosis information at least comprises one or more of target part information and focus information. Specifically, after the target ultrasonic image of the target portion is obtained, the diagnostic information of the target ultrasonic image can be obtained.
The diagnostic information includes at least one or more of target part information and focus information. In one embodiment, the diagnostic information of the target ultrasonic image is inferred from the mark information contained in the standard image; it can be understood that the target ultrasonic image is the image that matches the standard image most closely, so the mark information of the standard image can assist the inference of the diagnostic information of the target ultrasonic image.
In one embodiment, the method further comprises: obtaining scanned-object information corresponding to the target ultrasonic image; querying historical diagnostic ultrasonic images of the scanned object according to the scanned-object information corresponding to the target ultrasonic image, the historical diagnostic ultrasonic images being stored in the image database; and, when historical diagnostic ultrasonic images exist, arranging them by diagnosis time and using them as a reference basis for determining the diagnostic information of the target ultrasonic image. In this way a trend judgment or a differential judgment can be obtained.
In one embodiment, the disease diagnosis conclusions, medication records, diagnosis and treatment effects and the like of scanned objects with similar ultrasonic images can also be obtained: focus information is determined according to the diagnostic information of the target ultrasonic image; matching images corresponding to the same kind of focus are queried in the preset image database according to the focus information; and the mark information of the matching images corresponding to the same kind of focus is used as a reference basis for determining the diagnostic information of the target ultrasonic image.
The embodiment of the invention also displays the standard image and the target ultrasonic image in real time through the display, together with the real-time matching degree value between the standard image and the target ultrasonic image. When the standard image is displayed, the corresponding mark information is also displayed. The number of displays is not limited: the ultrasonic image, the target ultrasonic image and the standard image may be displayed on one display or simultaneously on several displays, which is not limited in this embodiment. In addition to displaying, the display also provides a graphical interface for human-computer interaction; one or more controlled objects are arranged on the graphical interface, and the operator uses a human-computer interaction device to input operation instructions that control these objects, so that the corresponding control operations are executed. The display may be, for example, a projection device or VR glasses, and may also include an input device, for example a touch display screen or VR glasses that sense motion. The icons displayed on the display can be operated through the human-computer interaction device to execute specific functions.
According to the ultrasonic intelligent imaging navigation method provided by the embodiment of the invention, after the ultrasonic image acquired by the ultrasonic probe is matched against the matching images in the preset image database to obtain the matching degree value, the ultrasonic probe is guided to move for accurate positioning through the mark information contained in the standard image, and the target ultrasonic image required for auxiliary diagnosis is then obtained. The embodiment of the invention therefore increases the speed at which a doctor obtains the target ultrasonic image and improves the doctor's working efficiency.
In one embodiment, when the ultrasonic probe is guided to move in steps S400, S700 and S7000, the mechanical arm navigation may be used or the guidance navigation may be used to guide the ultrasonic probe to move, the mechanical arm navigation includes guiding the ultrasonic probe to move by using at least one mechanical arm, the mechanical arm may be integrated on the ultrasonic probe to drive the ultrasonic probe to move on the body surface of the detected object, and the mechanical arm includes a motor and a roller with an adsorption force.
The guidance navigation includes one or more of a visual guidance mode, a voice guidance mode, or a force feedback guidance mode. The visual guidance mode is configured as one or more of image guidance, video guidance, identification guidance, character guidance and projection guidance; the force feedback guidance mode is configured as one or more of tactile guidance, vibration guidance and traction guidance.
An embodiment of the present invention further provides an ultrasound intelligent imaging navigation apparatus, as shown in fig. 10, the apparatus includes:
the identification module 10 is configured to acquire an environmental image at least including a detection object and an ultrasonic probe through a visual sensor, and identify, based on a target portion to be scanned, position information of the target portion to be scanned of the detection object and initial position information of the ultrasonic probe from the environmental image by using a trained identification network model; for details, refer to the related description of step S100 in the above method embodiment.
The path determining module 20 is configured to determine a scanning navigation path leading the ultrasonic probe to the target portion to be scanned based on the position information of the target portion to be scanned and the initial position information of the ultrasonic probe; for details, refer to the related description of step S200 in the above method embodiment.
The display module 30 is used for displaying the scanning navigation path; for details, refer to the related description of step S300 in the above method embodiment.
And the real-time tracking module 40 is used for identifying the real-time position of the ultrasonic probe through the trained tracking neural network model during probe movement, and updating the scanning navigation path according to the real-time position of the ultrasonic probe when the ultrasonic probe deviates from the scanning navigation path. For details, refer to the related description of step S400 in the above method embodiment.
According to the ultrasonic intelligent imaging navigation device provided by the embodiment of the invention, the target part and the ultrasonic probe are identified from the environment image by using the trained identification network model, a plurality of corresponding historical scanning paths are obtained based on the identified target part, and the scanning navigation path of the ultrasonic probe is confirmed from these historical scanning paths, so that the scanning path can be determined based on the operating habits of the doctor; the device is fast and accurate, and greatly improves the doctor's ultrasonic scanning efficiency. The real-time position of the ultrasonic probe is identified through the trained tracking neural network model, and whether the guide path needs to be updated can be judged according to this real-time position so as to find the shortest guide path. The identification network model of the embodiment of the invention acquires the position of the target part accurately and simply, while the tracking neural network model tracks the real-time position information of the ultrasonic probe, providing a high degree of automation and high accuracy.
For the functional description of the ultrasonic intelligent imaging navigation device provided by the embodiment of the invention, refer in detail to the description of the ultrasonic intelligent imaging navigation method in the above embodiments.
An embodiment of the present invention further provides a storage medium, as shown in fig. 11, on which a computer program 601 is stored; the program instructions, when executed by a processor, implement the steps of the ultrasonic intelligent imaging navigation method in the foregoing embodiments. The storage medium also stores audio and video stream data, characteristic frame data, interaction request signaling, encrypted data, preset data sizes and the like. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of the above kinds of memories.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
An embodiment of the present invention further provides an electronic device, as shown in fig. 12. The electronic device may include a processor 51 and a memory 52, where the processor 51 and the memory 52 may be connected by a bus or in another manner; fig. 12 takes connection by a bus as an example.
The processor 51 may be a Central Processing Unit (CPU). The Processor 51 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 52, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the corresponding program instructions/modules in the embodiments of the present invention. The processor 51 executes various functional applications and data processing of the processor by running non-transitory software programs, instructions and modules stored in the memory 52, namely, implements the ultrasound intelligent imaging navigation method in the above method embodiment.
The memory 52 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 51, and the like. Further, the memory 52 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 52 may optionally include memory located remotely from the processor 51, and these remote memories may be connected to the processor 51 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 52 and, when executed by the processor 51, perform the ultrasound intelligent imaging navigation method in the embodiment shown in fig. 1-9.
The details of the electronic device may be understood by referring to the corresponding descriptions and effects in the embodiments shown in fig. 1 to fig. 9, and are not described herein again.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (22)

1. An ultrasonic intelligent imaging navigation method is characterized by comprising the following steps:
acquiring an environment image at least comprising a detection object and an ultrasonic probe through a visual sensor, and identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environment image by utilizing a trained identification network model based on the target part to be scanned;
determining a scanning navigation path guided to the target part to be scanned by the ultrasonic probe based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe;
displaying the scanning navigation path;
in the moving process of the probe, identifying the real-time position of the ultrasonic probe through a trained tracking neural network model, and updating the scanning navigation path according to the real-time position of the ultrasonic probe when the ultrasonic probe deviates from the scanning navigation path;
when the ultrasonic probe is guided to the target part to be scanned for scanning, acquiring an ultrasonic image of the target part to be scanned by the ultrasonic probe;
determining a standard image according to a matching degree value of the ultrasonic image and a matching image in a preset image database, wherein the matching image comprises a plurality of marking information, and the matching degree value of the ultrasonic image and the standard image is greater than a first preset matching degree value;
guiding the ultrasonic probe to move according to the mark information contained in the standard image, and determining an ultrasonic image whose matching degree value with the standard image exceeds a second preset matching degree value as a target ultrasonic image, wherein the second preset matching degree value is greater than or equal to the first preset matching degree value;
and determining the diagnosis information of the target ultrasonic image according to the marking information contained in the standard image, wherein the diagnosis information at least comprises one or more of target part information and lesion information.
2. The ultrasonic intelligent imaging navigation method according to claim 1, further comprising:
when the ultrasonic probe is guided to the target part to be scanned, acquiring a current ultrasonic image of the target part to be scanned by the ultrasonic probe;
loading a three-dimensional ultrasonic model corresponding to the target part to be scanned based on the obtained target part to be scanned, wherein the three-dimensional ultrasonic model at least comprises a standard scanning tangent plane marked with probe position information and probe angle information;
determining the position information and the angle information of the current ultrasonic probe according to the current ultrasonic image and/or the current environment image acquired by the visual sensor;
and guiding the ultrasonic probe to move to the standard scanning section according to the position information and the angle information of the current ultrasonic probe, the position information of the probe corresponding to the standard scanning section mark and the angle information of the probe corresponding to the standard scanning section mark.
3. The ultrasonic intelligent imaging navigation method according to claim 2, wherein the determining the position information and the angle information of the current ultrasonic probe according to the current ultrasonic image and/or the current environment image acquired by the vision sensor comprises:
inputting the current ultrasonic image and/or the current environment image acquired by the visual sensor and the three-dimensional ultrasonic model into a trained index neural network model or a CNN deep convolution neural network model for processing, and determining the position information and the angle information of the current ultrasonic probe;
or inputting the current ultrasonic image and/or the current environment image acquired by the visual sensor into a trained full convolution neural network model for processing, and determining the position information and the angle information of the current ultrasonic probe.
4. The ultrasonic intelligent imaging navigation method according to claim 3, wherein the inputting the current ultrasonic image and/or the current environmental image collected by the visual sensor and the three-dimensional ultrasonic model into a trained index neural network model for processing, and determining the position information and the angle information of the current ultrasonic probe comprises:
extracting a first characteristic vector in the current ultrasonic image and/or the current environment image acquired by the visual sensor through a two-dimensional convolutional neural network;
extracting a second feature vector in the three-dimensional ultrasonic model through a three-dimensional convolution neural network;
splicing the first feature vector and the second feature vector in a dimension to obtain a first spliced feature vector;
and inputting the first splicing characteristic vector into a full-connection layer, and outputting the position information and the angle information of the current ultrasonic probe.
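The two-branch fusion recited in claim 4 can be read as a standard "extract, concatenate, regress" pattern. Below is a minimal sketch in PyTorch, not the patented implementation: the layer sizes, the module name ProbePoseFromModelMatch, and the six-dimensional pose output (three position coordinates plus three angles) are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class ProbePoseFromModelMatch(nn.Module):
    """Sketch of claim 4: fuse a 2D image feature with a 3D model feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # 2D CNN branch for the current ultrasound / environment image
        self.cnn2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # 3D CNN branch for the loaded three-dimensional ultrasound model
        self.cnn3d = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Fully connected head: 3 position coordinates + 3 angles (assumed layout)
        self.head = nn.Linear(2 * feat_dim, 6)

    def forward(self, image2d, volume3d):
        f1 = self.cnn2d(image2d)            # first feature vector
        f2 = self.cnn3d(volume3d)           # second feature vector
        fused = torch.cat([f1, f2], dim=1)  # first concatenated feature vector
        return self.head(fused)             # position and angle of the probe

# Example usage with dummy tensors
pose = ProbePoseFromModelMatch()(torch.randn(1, 1, 128, 128),
                                 torch.randn(1, 1, 32, 64, 64))
```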
5. The intelligent ultrasonic imaging navigation method according to claim 3, wherein the inputting the current ultrasonic image and/or the current environmental image collected by the visual sensor into a trained full convolution neural network model for processing, and determining the position information and the angle information of the current ultrasonic probe comprises:
inputting the current ultrasonic image into a full convolution neural network for processing to obtain a characteristic diagram of the current ultrasonic image;
performing global maximum pooling on the feature map to obtain a third feature vector of the current ultrasonic image;
carrying out global average pooling on the feature map to obtain a fourth feature vector of the current ultrasonic image;
splicing the third feature vector and the fourth feature vector to obtain a second spliced feature vector;
and inputting the second splicing characteristic vector into a full-connection layer, and outputting the position information and the angle information of the current ultrasonic probe.
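Claim 5 combines a global max-pooled vector and a global average-pooled vector taken from the same feature map. A minimal sketch of that pooling-and-fusion step follows; the backbone depth, channel counts, and the six-dimensional output are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class FullyConvPoseNet(nn.Module):
    """Sketch of claim 5: pool one feature map two ways and fuse the results."""
    def __init__(self, feat_ch=64):
        super().__init__()
        # Fully convolutional backbone producing a feature map
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fully connected layer mapping the concatenated vector to pose
        self.head = nn.Linear(2 * feat_ch, 6)

    def forward(self, image):
        fmap = self.backbone(image)               # feature map of the ultrasound image
        v_max = torch.amax(fmap, dim=(2, 3))      # global max pooling  -> third feature vector
        v_avg = torch.mean(fmap, dim=(2, 3))      # global average pooling -> fourth feature vector
        fused = torch.cat([v_max, v_avg], dim=1)  # second concatenated feature vector
        return self.head(fused)                   # probe position and angle

pose = FullyConvPoseNet()(torch.randn(1, 1, 128, 128))
```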
6. The intelligent ultrasonic imaging navigation method according to claim 3, wherein the inputting the current ultrasonic image and/or the current environmental image acquired by the visual sensor and the three-dimensional ultrasonic model into a trained CNN deep convolution neural network model for processing to determine the position information and the angle information of the current ultrasonic probe comprises:
obtaining IMU information collected by an inertia measurement unit arranged in the ultrasonic probe;
extracting a fifth feature vector in the current ultrasonic image and/or the current environmental image acquired by the visual sensor through the CNN deep convolutional neural network;
extracting a sixth feature vector in the three-dimensional ultrasonic model through the CNN deep convolution neural network;
extracting a seventh feature vector in the IMU information through the CNN deep convolutional neural network;
splicing the fifth feature vector, the sixth feature vector and the seventh feature vector to obtain a first spliced feature vector;
and inputting the first splicing characteristic vector into a full-connection layer for characteristic vector fusion to obtain the position information and the angle information of the current ultrasonic probe.
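Claim 6 fuses three feature vectors, one from the image, one from the three-dimensional ultrasound model, and one from IMU data, in a single fully connected head. The sketch below illustrates that three-branch concatenation; the branch architectures and the six-element IMU vector are assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

class ImageModelImuFusion(nn.Module):
    """Sketch of claim 6: fuse image, 3D-model, and IMU features in one FC head."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.vol_branch = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        # IMU readings (e.g. accelerations and angular rates) as a short vector
        self.imu_branch = nn.Sequential(nn.Linear(6, feat_dim), nn.ReLU())
        self.head = nn.Linear(3 * feat_dim, 6)    # fused pose: position + angles

    def forward(self, image, volume, imu):
        f5 = self.img_branch(image)   # fifth feature vector (image)
        f6 = self.vol_branch(volume)  # sixth feature vector (3D model)
        f7 = self.imu_branch(imu)     # seventh feature vector (IMU)
        return self.head(torch.cat([f5, f6, f7], dim=1))

pose = ImageModelImuFusion()(torch.randn(1, 1, 128, 128),
                             torch.randn(1, 1, 32, 64, 64),
                             torch.randn(1, 6))
```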
7. The ultrasonic intelligent imaging navigation method according to claim 6, wherein the acquiring IMU information acquired by an inertial measurement unit disposed in the ultrasonic probe comprises:
acquiring first IMU information of the ultrasonic probe at the current moment through an inertia measurement unit;
obtaining a plurality of IMU information which is measured in advance and stored in a preset time period before the current moment of the ultrasonic probe;
inputting first IMU information of the current moment of the ultrasonic probe and a plurality of IMU information in a preset time period before the current moment into a recurrent neural network model for processing to obtain second IMU information of the ultrasonic probe, wherein the accuracy of the second IMU information is greater than that of the first IMU information, and determining the second IMU information as the IMU information acquired by an inertial measurement unit in the ultrasonic probe.
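Claim 7 refines a noisy current IMU reading using a window of earlier readings and a recurrent neural network. A minimal sketch, assuming a GRU and a six-dimensional IMU vector (neither of which is specified by the claim), could look as follows.

```python
import torch
import torch.nn as nn

class ImuSmoother(nn.Module):
    """Sketch of claim 7: refine the current IMU reading from a short history."""
    def __init__(self, imu_dim=6, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(imu_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, imu_dim)

    def forward(self, imu_history, imu_now):
        # imu_history: (batch, T, imu_dim) readings in a preset window before now
        # imu_now:     (batch, imu_dim)    first IMU information at the current time
        seq = torch.cat([imu_history, imu_now.unsqueeze(1)], dim=1)
        _, h = self.rnn(seq)
        return self.out(h[-1])        # second (refined) IMU information

refined = ImuSmoother()(torch.randn(1, 10, 6), torch.randn(1, 6))
```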
8. The intelligent ultrasonic imaging navigation method according to claim 1, wherein the recognition network model is a segmentation model for segmenting different organ contours and ultrasonic probe contours, or the recognition network model is a detection model for recognizing organs and ultrasonic probe distribution regions,
the segmentation model includes: the device comprises an input layer, a plurality of convolution layers, a plurality of pooling layers, a plurality of bilinear interpolation layers and an output layer, wherein the number of channels of the bilinear interpolation layers is the same as that of target positions to be scanned and the number of probes;
the detection model comprises: the device comprises an input layer, a plurality of convolution layers, a plurality of pooling layers, a plurality of bilinear interpolation layers and an output layer, wherein the output of the bilinear interpolation layers added with the convolution layers enters the output layer through two-layer convolution and is output.
9. The ultrasonic intelligent imaging navigation method according to claim 1, wherein the step of identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environmental image by using the trained identification network model comprises:
segmenting distribution areas of different parts of the detection object and the distribution area of the ultrasonic probe from the environment image by using the trained recognition network model;
identifying part information corresponding to different distribution areas by using a trained identification network model, wherein the part information at least comprises part names or part categories;
determining a distribution area of the target part to be scanned based on the target part to be scanned and the part information of different distribution areas obtained by recognition by using the trained recognition network model;
and determining the position information of the target part to be scanned and the initial position information of the ultrasonic probe according to the distribution area of the target part to be scanned and the distribution area of the ultrasonic probe.
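Claim 9 derives the position of the target part and the initial position of the probe from their segmented distribution areas. One simple, hypothetical way to do this is to take the centroid of each labelled region, as sketched below; the label values and the centroid choice are assumptions, since the claim does not fix how a position is extracted from a region.

```python
import numpy as np

def region_positions(seg_mask, target_label, probe_label):
    """Sketch of claim 9: derive the target-part position and the probe's initial
    position from a labelled segmentation mask (one integer label per pixel)."""
    def centroid(label):
        ys, xs = np.nonzero(seg_mask == label)
        if len(xs) == 0:
            return None                   # region not found in this frame
        return float(xs.mean()), float(ys.mean())
    return centroid(target_label), centroid(probe_label)

mask = np.zeros((240, 320), dtype=int)
mask[50:90, 60:120] = 3        # hypothetical target-part label
mask[150:170, 200:230] = 9     # hypothetical probe label
target_pos, probe_pos = region_positions(mask, target_label=3, probe_label=9)
```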
10. The ultrasonic intelligent imaging navigation method according to claim 1, wherein determining a scanning navigation path guided by an ultrasonic probe to a target part to be scanned based on position information of the target part to be scanned and initial position information of the ultrasonic probe comprises:
acquiring a plurality of historical navigation paths of the ultrasonic probe based on the position information of the target part to be scanned;
and determining a scanning navigation path of the ultrasonic probe corresponding to the target part to be scanned from the plurality of historical navigation paths according to the initial position information of the ultrasonic probe.
11. The intelligent ultrasonic imaging navigation method according to claim 10, wherein the determining a scanning navigation path of the ultrasonic probe corresponding to a target part to be scanned from the plurality of historical navigation paths according to the initial position information of the ultrasonic probe comprises:
judging whether the ultrasonic probe is positioned on any one of the plurality of historical navigation paths according to the initial position information of the ultrasonic probe;
if the ultrasonic probe is positioned on any one of the plurality of historical navigation paths, determining the corresponding historical navigation path as a scanning navigation path;
and if the ultrasonic probe is not on any historical navigation path, determining one historical navigation path with the shortest vertical distance to the ultrasonic probe in the plurality of historical navigation paths as a scanning navigation path.
12. The ultrasonic intelligent imaging navigation method according to claim 11, wherein if the ultrasonic probe is not on any historical navigation path, determining one historical navigation path with the shortest vertical distance to the ultrasonic probe in the plurality of historical navigation paths as a scanning navigation path, comprises:
if the ultrasonic probe is not on any historical navigation path, determining a historical navigation path with the shortest vertical distance between the ultrasonic probe and the historical navigation paths;
and determining, as the scanning navigation path, the perpendicular segment from the ultrasonic probe to the determined historical navigation path together with the segment from the foot of the perpendicular to the end point of the determined historical navigation path.
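Claims 11 and 12 select the historical navigation path with the shortest perpendicular distance to the probe and route the probe through the foot of the perpendicular to that path's end point. The sketch below illustrates the geometry with each historical path simplified to a single straight segment; real navigation paths would be polylines, so this simplification is an assumption for illustration.

```python
import numpy as np

def nearest_path_and_route(probe, paths):
    """Sketch of claims 11-12: pick the historical path with the shortest
    perpendicular distance and route the probe via the foot of the perpendicular."""
    probe = np.asarray(probe, float)
    best = None
    for start, end in paths:
        start, end = np.asarray(start, float), np.asarray(end, float)
        seg = end - start
        # Projection of the probe onto the segment, clamped to the segment
        t = np.clip(np.dot(probe - start, seg) / np.dot(seg, seg), 0.0, 1.0)
        foot = start + t * seg                        # foot of the perpendicular
        dist = np.linalg.norm(probe - foot)
        if best is None or dist < best[0]:
            best = (dist, foot, end)
    _, foot, end = best
    # Scanning navigation path: probe -> foot of perpendicular -> path end point
    return [probe, foot, end]

route = nearest_path_and_route((2.0, 1.0), [((0, 0), (5, 0)), ((0, 3), (5, 3))])
```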
13. The ultrasonic intelligent imaging navigation method according to claim 1, wherein determining a scanning navigation path guided by the ultrasonic probe to the target part to be scanned based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe comprises: generating a scanning navigation path based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe.
14. The ultrasonic intelligent imaging navigation method according to claim 1, wherein the tracking neural network model adopts a convolutional neural network, and the step of identifying the real-time position of the ultrasonic probe through the trained tracking neural network model comprises:
acquiring a model image of an ultrasonic probe;
inputting the model image and the environment image into a convolutional neural network, wherein the convolutional neural network outputs a first feature corresponding to the model image and a second feature corresponding to the environment image;
convolving the first characteristic serving as a convolution kernel with the second characteristic to obtain a spatial response diagram;
and outputting the spatial response map to a linear interpolation layer to acquire the real-time position of the ultrasonic probe in the environment image.
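Claim 14 describes a Siamese-style tracker: the same convolutional network embeds a probe template image and the environment image, the template feature is used as a convolution kernel over the scene feature, and the resulting response map is interpolated back to image resolution. A minimal sketch follows; the backbone, the batch-size-one cross-correlation, and the argmax peak picking are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbeTracker(nn.Module):
    """Sketch of claim 14: locate the probe via template/scene cross-correlation."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

    def forward(self, template, scene):
        # Assumes batch size 1; a batched version would use grouped convolution.
        f_t = self.cnn(template)                 # first feature (probe model image)
        f_s = self.cnn(scene)                    # second feature (environment image)
        # Cross-correlate: the template feature acts as the convolution kernel
        response = F.conv2d(f_s, f_t)
        # Upsample the response map (interpolation layer) to scene resolution
        response = F.interpolate(response, size=scene.shape[2:],
                                 mode="bilinear", align_corners=False)
        # Peak of the response map = real-time probe position in the image
        idx = torch.argmax(response.flatten(1), dim=1)
        y, x = idx // response.shape[3], idx % response.shape[3]
        return x, y

x, y = ProbeTracker()(torch.randn(1, 3, 32, 32), torch.randn(1, 3, 256, 256))
```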
15. The ultrasonic intelligent imaging navigation method according to claim 1, wherein the updating the scanning navigation path according to the real-time position of the ultrasonic probe when the ultrasonic probe deviates from the scanning navigation path comprises:
when the distance of the ultrasonic probe deviating from the scanning navigation path is within a preset distance range, a deviation prompt is sent out, wherein the deviation prompt comprises one or more of a visual prompt, a voice prompt and a touch prompt;
and when the distance of the ultrasonic probe deviating from the scanning navigation path exceeds a preset distance range, determining one historical navigation path with the shortest vertical distance with the ultrasonic probe in the plurality of historical navigation paths as the scanning navigation path.
16. The ultrasound intelligent imaging navigation method according to claim 1, wherein the matching degree value of the ultrasound image and the matching image, or of the ultrasound image and the standard image, is calculated by at least one of the following:
calculating the matching degree value of the ultrasonic image and the matching image, or of the ultrasonic image and the standard image, through a cosine similarity algorithm; and/or
calculating the matching degree value of the ultrasonic image and the matching image, or of the ultrasonic image and the standard image, through the trained matching neural network model.
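Claim 16 offers two ways to compute the matching degree value; the cosine-similarity option can be sketched directly. The function below compares raw pixel vectors for simplicity, which is an assumption: in practice the similarity would more likely be computed between feature vectors, for example those produced by the trained matching neural network also recited in the claim.

```python
import numpy as np

def cosine_match_score(ultrasound_img, candidate_img):
    """Sketch of claim 16: cosine similarity as the matching degree value."""
    a = np.asarray(ultrasound_img, dtype=float).ravel()
    b = np.asarray(candidate_img, dtype=float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A candidate whose score exceeds the first preset threshold becomes the standard image
score = cosine_match_score(np.random.rand(64, 64), np.random.rand(64, 64))
```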
17. The intelligent ultrasound imaging navigation method according to claim 1, wherein the matching images comprise at least one or more of matching ultrasound images, matching CT images, matching nuclear magnetic resonance images, and matching X-ray images.
18. The ultrasound intelligent imaging navigation method according to any one of claims 1-17, wherein the guiding the ultrasound probe to move comprises:
guiding the ultrasonic probe to move according to mechanical arm navigation or instruction navigation, wherein the mechanical arm navigation comprises guiding the ultrasonic probe to move by adopting at least one mechanical arm, and the instruction navigation comprises one or more of a visual guide mode, a voice guide mode or a force feedback guide mode.
19. The ultrasonic intelligent imaging navigation method according to claim 18,
the visual guidance mode is configured to be one or more of image guidance, video guidance, identification guidance, character guidance and projection guidance;
the force feedback guidance mode is configured as one or more of tactile guidance, vibration guidance, and traction guidance.
20. An ultrasonic intelligent imaging navigation device, comprising:
the identification module is used for acquiring an environment image at least comprising a detection object and an ultrasonic probe through a visual sensor, and identifying the position information of the target part to be scanned of the detection object and the initial position information of the ultrasonic probe from the environment image by utilizing a trained identification network model based on the target part to be scanned;
the path determining module is used for determining a scanning navigation path guided to the target part to be scanned by the ultrasonic probe based on the position information of the target part to be scanned and the initial position information of the ultrasonic probe;
the display module is used for displaying the scanning navigation path;
the real-time tracking module is used for identifying the real-time position of the ultrasonic probe through a trained tracking neural network model in the probe moving process, and updating the scanning navigation path according to the real-time position of the ultrasonic probe when the ultrasonic probe deviates from the scanning navigation path;
the image acquisition module is used for acquiring an ultrasonic image of the target part to be scanned by the ultrasonic probe when the ultrasonic probe is guided to the target part to be scanned for scanning;
the matching module is used for determining a standard image according to the matching degree value of the ultrasonic image and the matching image in a preset image database, the matching image comprises a plurality of marking information, and the matching degree value of the ultrasonic image and the standard image is greater than a first preset matching degree value;
the guiding module is used for guiding the ultrasonic probe to move according to the mark information contained in the standard image, and determining an ultrasonic image whose matching degree value with the standard image exceeds a second preset matching degree value as a target ultrasonic image, wherein the second preset matching degree value is greater than or equal to the first preset matching degree value;
and the diagnosis module is used for determining the diagnosis information of the target ultrasonic image according to the marking information contained in the standard image, wherein the diagnosis information at least comprises one or more of target part information and lesion information.
21. A computer-readable storage medium storing computer instructions for causing a computer to perform the ultrasound smart imaging navigation method of any one of claims 1-19.
22. An ultrasound device, comprising: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory storing computer instructions, and the processor executing the computer instructions to perform the ultrasound intelligent imaging navigation method according to any one of claims 1-19.
CN202011326525.6A 2019-12-31 2020-11-23 Ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and storage medium Active CN112215843B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911413630 2019-12-31
CN2019114136300 2019-12-31

Publications (2)

Publication Number Publication Date
CN112215843A CN112215843A (en) 2021-01-12
CN112215843B true CN112215843B (en) 2021-06-11

Family

ID=74068186

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011321102.5A Active CN112288742B (en) 2019-12-31 2020-11-23 Navigation method and device for ultrasonic probe, storage medium and electronic equipment
CN202011326525.6A Active CN112215843B (en) 2019-12-31 2020-11-23 Ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011321102.5A Active CN112288742B (en) 2019-12-31 2020-11-23 Navigation method and device for ultrasonic probe, storage medium and electronic equipment

Country Status (1)

Country Link
CN (2) CN112288742B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112807025A (en) * 2021-02-08 2021-05-18 威朋(苏州)医疗器械有限公司 Ultrasonic scanning guiding method, device, system, computer equipment and storage medium
CN112885450A (en) * 2021-02-09 2021-06-01 青岛大学附属医院 Ultrasonic body mark intelligent recognition system
CN113180731B (en) * 2021-03-31 2023-07-11 上海深至信息科技有限公司 Ultrasonic scanning guiding system and method
CN113171118B (en) * 2021-04-06 2023-07-14 上海深至信息科技有限公司 Ultrasonic inspection operation guiding method based on generation type countermeasure network
CN113274051B (en) * 2021-04-30 2023-02-21 中国医学科学院北京协和医院 Ultrasonic auxiliary scanning method and device, electronic equipment and storage medium
CN113317816A (en) * 2021-05-07 2021-08-31 武汉凯进医疗技术有限公司 Wireless portable handheld ultrasonic processing equipment and method supporting real-time state display
CN113842165B (en) * 2021-10-14 2022-12-30 合肥合滨智能机器人有限公司 Portable remote ultrasonic scanning system and safe ultrasonic scanning compliance control method
CN114098807A (en) * 2021-11-26 2022-03-01 中国人民解放军海军军医大学 Auxiliary device, method, medium and electronic equipment for chest and abdomen ultrasonic scanning
CN113951932A (en) * 2021-11-30 2022-01-21 上海深至信息科技有限公司 Scanning method and device for ultrasonic equipment
WO2023165157A1 (en) * 2022-03-04 2023-09-07 武汉迈瑞科技有限公司 Medical navigation apparatus, navigation processing apparatus and method, and medical navigation system
CN114578348B (en) * 2022-05-05 2022-07-29 深圳安德空间技术有限公司 Autonomous intelligent scanning and navigation method for ground penetrating radar based on deep learning
CN117522765A (en) * 2022-07-28 2024-02-06 杭州堃博生物科技有限公司 Endoscope pose estimation method, device and storage medium
CN116158851B (en) * 2023-03-01 2024-03-01 哈尔滨工业大学 Scanning target positioning system and method of medical remote ultrasonic automatic scanning robot
CN115990032B (en) * 2023-03-22 2023-06-02 中国科学院自动化研究所 Priori knowledge-based ultrasonic scanning visual navigation method, apparatus and device
CN117058146B (en) * 2023-10-12 2024-03-29 广州索诺星信息科技有限公司 Ultrasonic data safety supervision system and method based on artificial intelligence
CN117132587B (en) * 2023-10-20 2024-03-01 深圳微创心算子医疗科技有限公司 Ultrasonic scanning navigation method, device, computer equipment and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103371870A (en) * 2013-07-16 2013-10-30 深圳先进技术研究院 Multimode image based surgical operation navigation system
CN107451997A (en) * 2017-07-31 2017-12-08 南昌航空大学 A kind of automatic identifying method of the welding line ultrasonic TOFD D scanning defect types based on deep learning
WO2018127498A1 (en) * 2017-01-05 2018-07-12 Koninklijke Philips N.V. Ultrasound imaging system with a neural network for image formation and tissue characterization
CN108664844A (en) * 2017-03-28 2018-10-16 爱唯秀股份有限公司 The image object semantics of convolution deep neural network identify and tracking
CN109480906A (en) * 2018-12-28 2019-03-19 无锡祥生医疗科技股份有限公司 Ultrasonic transducer navigation system and supersonic imaging apparatus
CN109549667A (en) * 2018-12-29 2019-04-02 无锡祥生医疗科技股份有限公司 Ultrasonic transducer scanning system, method and supersonic imaging apparatus
CN109567865A (en) * 2019-01-23 2019-04-05 上海浅葱网络技术有限公司 A kind of intelligent ultrasonic diagnostic equipment towards Non-medical-staff
CN109589141A (en) * 2018-12-28 2019-04-09 深圳开立生物医疗科技股份有限公司 A kind of ultrasound diagnosis assisting system, system and ultrasonic diagnostic equipment
CN109805963A (en) * 2019-03-22 2019-05-28 深圳开立生物医疗科技股份有限公司 The determination method and system of one Endometrium parting
CN110090069A (en) * 2019-06-18 2019-08-06 无锡祥生医疗科技股份有限公司 Ultrasonic puncture bootstrap technique, guide device and storage medium
CN110123450A (en) * 2018-02-08 2019-08-16 柯惠有限合伙公司 System and method for the conduit detection in fluoroscopic image and the display position for updating conduit
CN110363746A (en) * 2019-06-13 2019-10-22 西安交通大学 A kind of Ultrasonic nondestructive test signal classification method based on convolutional neural networks
CN110584714A (en) * 2019-10-23 2019-12-20 无锡祥生医疗科技股份有限公司 Ultrasonic fusion imaging method, ultrasonic device, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101414411B (en) * 2007-10-17 2010-08-25 财团法人工业技术研究院 Image type vacancy detection system and method
CN102662190B (en) * 2012-05-04 2014-06-25 同济大学 Ultrasonic quick scanning exploration method and system for same
TWI765895B (en) * 2016-06-20 2022-06-01 美商蝴蝶網路公司 Systems and methods of automated image acquisition for assisting a user to operate an ultrasound device
US11832969B2 (en) * 2016-12-22 2023-12-05 The Johns Hopkins University Machine learning approach to beamforming
CN109044398B (en) * 2018-06-07 2021-10-19 深圳华声医疗技术股份有限公司 Ultrasound system imaging method, device and computer readable storage medium
CN109044400A (en) * 2018-08-31 2018-12-21 上海联影医疗科技有限公司 Ultrasound image mask method, device, processor and readable storage medium storing program for executing
CN109480908A (en) * 2018-12-29 2019-03-19 无锡祥生医疗科技股份有限公司 Energy converter air navigation aid and imaging device
CN110070576A (en) * 2019-04-29 2019-07-30 成都思多科医疗科技有限公司 A kind of ultrasound based on deep learning network adopts figure intelligent locating method and system
CN110279467A (en) * 2019-06-19 2019-09-27 天津大学 Ultrasound image under optical alignment and information fusion method in the art of puncture biopsy needle
CN110477956A (en) * 2019-09-27 2019-11-22 哈尔滨工业大学 A kind of intelligent checking method of the robotic diagnostic system based on ultrasound image guidance

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103371870A (en) * 2013-07-16 2013-10-30 深圳先进技术研究院 Multimode image based surgical operation navigation system
WO2018127498A1 (en) * 2017-01-05 2018-07-12 Koninklijke Philips N.V. Ultrasound imaging system with a neural network for image formation and tissue characterization
CN108664844A (en) * 2017-03-28 2018-10-16 爱唯秀股份有限公司 The image object semantics of convolution deep neural network identify and tracking
CN107451997A (en) * 2017-07-31 2017-12-08 南昌航空大学 A kind of automatic identifying method of the welding line ultrasonic TOFD D scanning defect types based on deep learning
CN110123450A (en) * 2018-02-08 2019-08-16 柯惠有限合伙公司 System and method for the conduit detection in fluoroscopic image and the display position for updating conduit
CN109480906A (en) * 2018-12-28 2019-03-19 无锡祥生医疗科技股份有限公司 Ultrasonic transducer navigation system and supersonic imaging apparatus
CN109589141A (en) * 2018-12-28 2019-04-09 深圳开立生物医疗科技股份有限公司 A kind of ultrasound diagnosis assisting system, system and ultrasonic diagnostic equipment
CN109549667A (en) * 2018-12-29 2019-04-02 无锡祥生医疗科技股份有限公司 Ultrasonic transducer scanning system, method and supersonic imaging apparatus
CN109567865A (en) * 2019-01-23 2019-04-05 上海浅葱网络技术有限公司 A kind of intelligent ultrasonic diagnostic equipment towards Non-medical-staff
CN109805963A (en) * 2019-03-22 2019-05-28 深圳开立生物医疗科技股份有限公司 The determination method and system of one Endometrium parting
CN110363746A (en) * 2019-06-13 2019-10-22 西安交通大学 A kind of Ultrasonic nondestructive test signal classification method based on convolutional neural networks
CN110090069A (en) * 2019-06-18 2019-08-06 无锡祥生医疗科技股份有限公司 Ultrasonic puncture bootstrap technique, guide device and storage medium
CN110584714A (en) * 2019-10-23 2019-12-20 无锡祥生医疗科技股份有限公司 Ultrasonic fusion imaging method, ultrasonic device, and storage medium

Also Published As

Publication number Publication date
CN112288742B (en) 2021-11-19
CN112215843A (en) 2021-01-12
CN112288742A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN112215843B (en) Ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and storage medium
US20200187906A1 (en) System and methods for at-home ultrasound imaging
Droste et al. Automatic probe movement guidance for freehand obstetric ultrasound
CN110870792B (en) System and method for ultrasound navigation
US10881353B2 (en) Machine-guided imaging techniques
US20190117190A1 (en) Ultrasound imaging probe positioning
US11751848B2 (en) Methods and apparatuses for ultrasound data collection
US20200113542A1 (en) Methods and system for detecting medical imaging scan planes using probe position feedback
JP2019521745A (en) Automatic image acquisition to assist the user in operating the ultrasound system
US20230042756A1 (en) Autonomous mobile grabbing method for mechanical arm based on visual-haptic fusion under complex illumination condition
CN113116386B (en) Ultrasound imaging guidance method, ultrasound apparatus, and storage medium
CN101861526A (en) System and method for automatic calibration of tracked ultrasound
CN111134727B (en) Puncture guiding system for vein and artery identification based on neural network
CN111657997A (en) Ultrasonic auxiliary guiding method, device and storage medium
KR20200068880A (en) Untrasonic Imaging Apparatus having acupuncture guiding function
JP2021029675A (en) Information processor, inspection system, and information processing method
KR102182134B1 (en) Untrasonic Imaging Apparatus having needle guiding function using marker
CN110418610A (en) Determine guidance signal and for providing the system of guidance for ultrasonic hand-held energy converter
US20190388057A1 (en) System and method to guide the positioning of a physiological sensor
CN117257346A (en) Ultrasonic probe guiding method and device based on image recognition
KR102537328B1 (en) Lidar sensor-based breast modeling and breast measurement method and device
WO2020087732A1 (en) Neural network-based method and system for vein and artery identification
EP3916507B1 (en) Methods and systems for enabling human robot interaction by sharing cognition
CN111144163A (en) Vein and artery identification system based on neural network
CN111292248B (en) Ultrasonic fusion imaging method and ultrasonic fusion navigation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant