CN110322399B - Ultrasonic image adjustment method, system, equipment and computer storage medium - Google Patents

Info

Publication number
CN110322399B
CN110322399B (application CN201910604343.1A)
Authority
CN
China
Prior art keywords
image
initial
ultrasonic image
neural network
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910604343.1A
Other languages
Chinese (zh)
Other versions
CN110322399A (en)
Inventor
徐顶
姜文
周国义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonoscape Medical Corp
Original Assignee
Sonoscape Medical Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonoscape Medical Corp filed Critical Sonoscape Medical Corp
Priority to CN201910604343.1A
Publication of CN110322399A
Application granted
Publication of CN110322399B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/20 Linear translation of whole images or parts thereof, e.g. panning
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/10136 3D ultrasound image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an ultrasonic image adjustment method, an ultrasonic image adjustment system, ultrasonic image adjustment equipment and a computer storage medium. The method acquires an initial ultrasonic image shot by ultrasonic equipment; transmits the initial ultrasonic image to a pre-trained image recognition neural network model; receives a characteristic point image obtained by the image recognition neural network model from the initial ultrasonic image; and performs three-dimensional visualization adjustment on the initial ultrasonic image based on the characteristic point image. Because the received initial ultrasonic image is automatically recognized by the pre-trained image recognition neural network model to obtain the characteristic point image, and the three-dimensional visualization adjustment is then carried out based on that characteristic point image, the adjustment requires no manual work, so the efficiency of three-dimensional visualization adjustment of ultrasonic images can be improved. The ultrasonic image adjustment system, the ultrasonic image adjustment device and the computer readable storage medium also solve the corresponding technical problems.

Description

Ultrasonic image adjustment method, system, equipment and computer storage medium
Technical Field
The present disclosure relates to the field of ultrasound image processing, and more particularly, to an ultrasound image adjustment method, system, apparatus, and computer storage medium.
Background
When an ultrasonic device is used to observe ultrasonic images such as those of the pelvic floor, three-dimensional visualization of the images shot by the device is needed. In this process, the ultrasonic image must be adjusted to a proper position, such as a horizontal position, so that it can be observed conveniently.
The existing ultrasonic image adjustment method comprises the following steps: manually identifying a characteristic region in the ultrasonic image, and adjusting the ultrasonic image according to that characteristic region.
However, because the characteristic region in the ultrasound image must be identified manually, the identification efficiency is low, and manual adjustment is still needed after identification, so the overall efficiency of three-dimensional visualization of the ultrasound image is low.
In summary, how to improve the efficiency of three-dimensional visualization of ultrasound images is a problem to be solved by those skilled in the art.
Disclosure of Invention
The purpose of the application is to provide an ultrasonic image adjustment method, which can, to a certain extent, solve the technical problem of how to improve the efficiency of three-dimensional visualization of ultrasonic images. The application also provides an ultrasonic image adjustment system, an ultrasonic image adjustment device and a computer readable storage medium.
In order to achieve the above object, the present application provides the following technical solutions:
an ultrasound image adjustment method, comprising:
acquiring an initial ultrasonic image shot by ultrasonic equipment;
transmitting the initial ultrasonic image to a pre-trained image recognition neural network model;
receiving a characteristic point image obtained by the image recognition neural network model in the initial ultrasonic image;
and carrying out three-dimensional visual adjustment on the initial ultrasonic image based on the characteristic point image.
Preferably, the three-dimensional visualization adjustment of the initial ultrasound image based on the characteristic point image includes:
acquiring a viewfinder parameter of the ultrasonic equipment;
calculating a translation rotation matrix of the initial ultrasonic image based on the characteristic point image and the viewfinder parameter;
multiplying the translation rotation matrix with the initial ultrasonic image to obtain a three-dimensional visual adjusted target ultrasonic image.
Preferably, before the transmitting the initial ultrasound image to the pre-trained image recognition neural network model, the method further comprises:
acquiring an initial ultrasonic image training sample;
acquiring a sample label corresponding to the initial ultrasonic image training sample;
and training the built initial image recognition neural network model based on the initial ultrasonic image training sample and the sample label to obtain the pre-trained image recognition neural network model.
Preferably, the acquiring the sample label corresponding to the initial ultrasound image training sample includes:
acquiring an initial ultrasonic image labeling sample after labeling the characteristic object in the initial ultrasonic image training sample;
and performing a Gaussian operation on the labeled initial ultrasonic image labeling sample to obtain the sample label.
Preferably, the obtaining an initial ultrasound image labeling sample after labeling the feature object in the initial ultrasound image training sample includes:
acquiring a characteristic region labeling sample of the initial ultrasonic image obtained after labeling the characteristic region in the initial ultrasonic image training sample;
obtaining a feature point labeling sample of the initial ultrasonic image obtained after the feature points in the initial ultrasonic image training sample are labeled; wherein the feature points lie on the feature regions, and there are at least two feature regions and at least two feature points.
Preferably, the image recognition neural network model comprises a contour recognition network and a characteristic point recognition network; the contour recognition network is used for extracting features of the initial ultrasonic image to obtain a feature area image; the characteristic point identification network is used for carrying out characteristic fusion on the characteristic region labeling sample and the initial ultrasonic image to obtain the characteristic point image;
training the built initial image recognition neural network model based on the initial ultrasonic image training sample and the sample label to obtain a pre-trained image recognition neural network model, wherein the training comprises the following steps:
training the contour recognition network based on the initial ultrasonic image training sample and the characteristic region labeling sample to obtain a trained contour recognition network;
and training the characteristic point identification network based on the initial ultrasonic image training sample, the characteristic region labeling sample and the characteristic point labeling sample to obtain the trained characteristic point identification network.
Preferably, the image recognition neural network model further comprises an image enhancement network, which is used for enhancing the target feature points according to the position relation between other feature points and the target feature points to obtain feature point enhancement images;
the image enhancement network is used for carrying out convolution operation on a feature point image to obtain a convolution operation result; cascading the convolution operation result with another feature point image to obtain a cascading result; and carrying out residual operation on the cascade result to obtain a characteristic point enhanced image of the other characteristic point image.
Preferably, the training the built initial image recognition neural network model based on the initial ultrasonic image training sample and the sample label to obtain the pre-trained image recognition neural network model includes:
transmitting the initial ultrasound image training sample and the sample tag to the image recognition neural network model;
acquiring a label result output by the image recognition neural network model;
transmitting the sample label and the label result to a pre-built discriminator, wherein the discriminator is used for judging whether the label result and the sample label meet a preset approaching condition or not;
receiving a judgment result output by the discriminator, and transmitting the judgment result to the image recognition neural network model so that the image recognition neural network model adjusts its own parameters based on the judgment result;
repeating the steps from acquiring the label result output by the image recognition neural network model onward until the pre-trained image recognition neural network model is obtained, wherein the image recognition neural network model comprises the discriminator.
An ultrasound image adjustment system, comprising:
the first acquisition module is used for acquiring an initial ultrasonic image shot by the ultrasonic equipment;
the first transmission module is used for transmitting the initial ultrasonic image to a pre-trained image recognition neural network model;
the first receiving module is used for receiving the characteristic point image obtained by the image recognition neural network model in the initial ultrasonic image;
and the first adjustment module is used for carrying out three-dimensional visual adjustment on the initial ultrasonic image based on the characteristic point image.
An ultrasound image adjustment apparatus comprising:
a memory for storing a computer program;
a processor for implementing the steps of the ultrasound image adjustment method as described in any one of the above when executing the computer program.
A computer readable storage medium having stored therein a computer program which when executed by a processor performs the steps of the ultrasound image adjustment method as described in any of the above.
According to the ultrasonic image adjusting method, an initial ultrasonic image shot by ultrasonic equipment is obtained; transmitting an initial ultrasonic image to a pre-trained image recognition neural network model; receiving a characteristic point image obtained by the image recognition neural network model in the initial ultrasonic image; and performing three-dimensional visualization adjustment on the initial ultrasonic image based on the feature point image. According to the ultrasonic image adjusting method, the received initial ultrasonic image is automatically identified by means of the pre-trained image identification neural network model, the characteristic point image is obtained, and then the three-dimensional visual adjustment is carried out on the initial ultrasonic image based on the characteristic point image, so that the three-dimensional visual adjustment can be carried out on the ultrasonic image without manual work, and compared with the prior art, the three-dimensional visual adjustment efficiency of the ultrasonic image can be improved. The application provides an ultrasonic image adjusting system, ultrasonic image adjusting equipment and a computer readable storage medium, which also solve the corresponding technical problems.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings may be obtained according to the provided drawings without inventive effort to a person skilled in the art.
Fig. 1 is a first flowchart of an ultrasound image adjustment method according to an embodiment of the present application;
FIG. 2 is a second flowchart of an ultrasound image adjustment method according to an embodiment of the present application;
FIG. 3 is a flow chart of an image recognition neural network model for recognizing an initial ultrasound image;
FIG. 4 is a schematic diagram of the acquisition of feature point candidate images in a pelvic floor ultrasound image;
FIG. 5 is a schematic diagram of feature point candidate graphs to obtain feature point graphs;
FIG. 6 is a schematic diagram illustrating the effect of the ultrasound image adjustment method provided by the present application;
fig. 7 is a schematic structural diagram of an ultrasound image adjustment system according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an ultrasound image adjustment device according to an embodiment of the present application;
fig. 9 is another schematic structural diagram of an ultrasound image adjustment apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, fig. 1 is a first flowchart of an ultrasound image adjustment method according to an embodiment of the present application.
The method for adjusting the ultrasonic image provided by the embodiment of the application can comprise the following steps:
step S101: an initial ultrasound image taken by an ultrasound device is acquired.
In practical application, an initial ultrasonic image photographed by the ultrasonic device may be obtained first, and the type of the ultrasonic image photographed by the ultrasonic device may be determined according to actual needs, which is not specifically limited herein.
Step S102: transmitting the initial ultrasound image to a pre-trained image recognition neural network model.
In practical application, after the initial ultrasonic image is acquired, the initial ultrasonic image can be transmitted to a pre-trained image recognition neural network model, and the initial ultrasonic image is recognized by means of the image recognition neural network model to obtain a characteristic image in the initial ultrasonic image. In a specific application scenario, the structure of the image recognition neural network model can be determined according to the image recognition precision, the type of the neural network model and the like.
Step S103: and receiving characteristic point images obtained by the identification of the image identification neural network model in the initial ultrasonic image.
Step S104: and performing three-dimensional visualization adjustment on the initial ultrasonic image based on the feature point image.
In practical application, after the image recognition neural network model obtains the characteristic point image from the initial ultrasonic image, that characteristic point image can be received, and the initial ultrasonic image can then be adjusted for three-dimensional visualization based on it. For example, the initial ultrasonic image can be translated, rotated, and so on according to the position information of the characteristic point image, so that as much information about the target object as possible is presented to the observer, making the target object convenient to observe.
According to the ultrasonic image adjusting method, an initial ultrasonic image shot by ultrasonic equipment is obtained; transmitting an initial ultrasonic image to a pre-trained image recognition neural network model; receiving a characteristic point image obtained by the image recognition neural network model in the initial ultrasonic image; and performing three-dimensional visualization adjustment on the initial ultrasonic image based on the feature point image. According to the ultrasonic image adjusting method, the received initial ultrasonic image is automatically identified by means of the pre-trained image identification neural network model, the characteristic point image is obtained, and then the three-dimensional visual adjustment is carried out on the initial ultrasonic image based on the characteristic point image, so that the three-dimensional visual adjustment can be carried out on the ultrasonic image without manual work, and compared with the prior art, the three-dimensional visual adjustment efficiency of the ultrasonic image can be improved.
Referring to fig. 2, fig. 2 is a second flowchart of an ultrasound image adjustment method according to an embodiment of the present application.
In practical application, the ultrasound image adjustment method provided by the embodiment of the application can comprise the following steps:
step S201: an initial ultrasound image taken by an ultrasound device is acquired.
Step S202: transmitting the initial ultrasound image to a pre-trained image recognition neural network model.
Step S203: and receiving characteristic point images obtained by the identification of the image identification neural network model in the initial ultrasonic image.
Step S204: the viewfinder parameters of the ultrasound device are acquired.
Step S205: a translational rotation matrix of the initial ultrasound image is calculated based on the feature point image and the viewfinder parameters.
Step S206: multiplying the translation rotation matrix with the initial ultrasonic image to obtain a three-dimensional visual adjusted target ultrasonic image.
In practical application, the information presented by the ultrasonic image is affected by the viewfinder: when the viewfinder views the target object from different angles, different ultrasonic images are obtained, and the observer observes the initial ultrasonic image through the viewfinder. Therefore, to facilitate observation through the viewfinder, the three-dimensional visualization adjustment can combine the initial ultrasonic image obtained by shooting with the corresponding viewfinder parameters. For example, to adjust the position of the initial ultrasonic image within the viewfinder, the translation-rotation matrix of the initial ultrasonic image can be calculated based on the characteristic point image and the viewfinder parameters, and the translation-rotation matrix can be multiplied with the initial ultrasonic image to obtain the target ultrasonic image after three-dimensional visualization adjustment. The viewfinder parameters may be, for example, the vertex position parameters of the viewfinder or the position parameters of its lines of symmetry. It should be noted that, in a specific application scenario, the position of the viewfinder when shooting the target object may also be adjusted according to the initial ultrasound image and the viewfinder parameters, so that the next ultrasound image shot meets the observation requirements of the observer, and so on.
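For illustration, the translation-rotation adjustment described above can be sketched in homogeneous coordinates. This is a minimal example, not the patent's implementation: the choice of moving the midpoint of two feature points to the viewfinder center and leveling the line between them is an assumption made for the sketch.

```python
import numpy as np

def translation_rotation_matrix(p1, p2, frame_center):
    """Build a 3x3 homogeneous matrix that rotates the line through two
    feature points to horizontal and moves its midpoint to the
    viewfinder center (illustrative; the patent fixes no formula)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    mid = (p1 + p2) / 2.0
    angle = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])  # tilt of the feature line
    c, s = np.cos(-angle), np.sin(-angle)
    # rotate about the midpoint, then translate the midpoint to the frame center
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    to_origin = np.array([[1.0, 0.0, -mid[0]], [0.0, 1.0, -mid[1]], [0.0, 0.0, 1.0]])
    to_center = np.array([[1.0, 0.0, frame_center[0]], [0.0, 1.0, frame_center[1]], [0.0, 0.0, 1.0]])
    return to_center @ rot @ to_origin

# Two hypothetical feature points and a viewfinder center
M = translation_rotation_matrix((10.0, 20.0), (30.0, 40.0), (50.0, 50.0))
q1 = M @ np.array([10.0, 20.0, 1.0])  # first feature point after adjustment
q2 = M @ np.array([30.0, 40.0, 1.0])  # second feature point after adjustment
```

In practice the same matrix would be applied to every voxel or pixel coordinate of the initial ultrasound image; the multiplication of the matrix with the image mentioned in step S206 corresponds to this coordinate transform.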
In the embodiments shown in fig. 1 and fig. 2 of the present application, the training process of the image recognition neural network model determines its recognition mode, recognition accuracy, and so on. For this reason, before transmitting the initial ultrasonic image to the pre-trained image recognition neural network model, an initial ultrasonic image training sample may be obtained; a sample label corresponding to the initial ultrasonic image training sample may be obtained; and the built initial image recognition neural network model may be trained based on the training sample and the sample label to obtain the pre-trained image recognition neural network model. It should be noted that the initial image recognition neural network model may be a newly built neural network model, a neural network model generated in a historical process, or the like. During training, the user or another outside party can set preset training parameters of the model, such as the learning rate. After receiving the initial ultrasonic image training sample and the sample label, the model first generates a process label based on the training sample, determines real-time training parameters based on the process label and the sample label, and judges whether the real-time training parameters satisfy the preset training parameters; if not, it adjusts the parameters of its own network model and returns to generating the process label and the subsequent steps; if so, the trained image recognition neural network model is obtained.
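The loop just described (generate a process label, compare it with the sample label, adjust parameters until a preset condition is met) can be sketched with a toy stand-in model. Everything below is hypothetical: the patent does not specify the architecture, the loss, or the stopping condition, so a linear map trained by gradient descent on mean squared error stands in for all three.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the recognition model: a single linear map from
# flattened image features to a label map (hypothetical architecture).
w = rng.normal(scale=0.01, size=(4, 4))

def model(x):
    return x @ w

def train(samples, labels, lr=0.1, tol=1e-6, max_iter=5000):
    """Repeatedly generate a process label, compare it with the sample
    label, and adjust the model's parameters until the preset training
    condition (here: loss below tol) is met."""
    global w
    loss = float("inf")
    for _ in range(max_iter):
        pred = model(samples)             # process label
        err = pred - labels
        loss = float(np.mean(err ** 2))   # real-time training parameter
        if loss < tol:                    # preset training condition met
            break
        w -= lr * samples.T @ err / len(samples)  # adjust own parameters
    return loss

x = rng.normal(size=(8, 4))               # stand-in training samples
y = x @ rng.normal(size=(4, 4))           # stand-in sample labels
final_loss = train(x, y)
```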
In practical application, to make training samples easier to obtain during training of the image recognition neural network model, they can be produced with the help of simple mathematical operations. When the sample label corresponding to the initial ultrasonic image training sample is acquired, an initial ultrasonic image labeling sample can first be obtained by labeling the feature object in the initial ultrasonic image training sample; a Gaussian operation is then applied to the labeled sample to obtain the sample label. The Gaussian operation over the initial ultrasonic image training sample thus yields a sample label that contains only the feature object.
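One common way to realise such a Gaussian operation is to place a 2D Gaussian at each annotated feature point, turning a hard point annotation into a soft heatmap label. The sketch below assumes this interpretation; the exact kernel size and sigma are not given by the patent.

```python
import numpy as np

def gaussian_label(shape, point, sigma=2.0):
    """Turn a single annotated feature point (x, y) into a soft heatmap
    label by placing a 2D Gaussian at its position. Sigma is an
    assumed hyperparameter, not a value from the patent."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - point[0]) ** 2 + (ys - point[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical 64x64 label with one feature point at x=20, y=30
label = gaussian_label((64, 64), point=(20, 30), sigma=2.0)
```

The resulting label peaks at the annotated point and decays to zero elsewhere, so it indeed "contains only the feature object" in the sense described above.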
In practical application, because of the diversity of information contained in an ultrasonic image, directly identifying the feature objects in it is slow and possibly inaccurate. To improve this, only the feature points in the ultrasonic image need to be extracted for the three-dimensional visualization adjustment. Accordingly, when the initial ultrasonic image labeling sample is obtained by labeling the feature objects in the initial ultrasonic image training sample, a feature region labeling sample of the initial ultrasonic image can be obtained by labeling the feature regions in the training sample, and a feature point labeling sample can be obtained by labeling the feature points in the training sample. The feature points lie on the feature regions; there are at least two feature regions and at least two feature points, and the feature points may correspond one-to-one with the feature regions.
That is, the feature regions in the initial ultrasonic image training sample are labeled first to obtain the feature region labeling sample, then the feature points are labeled to obtain the feature point labeling sample, and the image recognition neural network model is trained with both. Because each feature point is a point within a feature region, the logic by which the model recognizes an ultrasonic image is: the model first recognizes the feature regions in the ultrasonic image, and then recognizes within those regions to obtain the feature points that meet the requirements.
In a specific application scenario, the image recognition neural network model may include a contour recognition network and a feature point recognition network; the contour recognition network is used for extracting features of the initial ultrasonic image to obtain a feature area image; the characteristic point identification network is used for carrying out characteristic fusion on the characteristic region labeling sample and the initial ultrasonic image to obtain a characteristic point image; correspondingly, when training the built initial image recognition neural network model based on the initial ultrasonic image training sample and the sample label to obtain a pre-trained image recognition neural network model, training the contour recognition network based on the initial ultrasonic image training sample and the characteristic region labeling sample to obtain a trained contour recognition network; and training the characteristic point recognition network based on the initial ultrasonic image training sample, the characteristic region labeling sample and the characteristic point labeling sample to obtain a trained characteristic point recognition network.
In practical application, to further ensure the accuracy of the feature point image output by the model, the image recognition neural network model may further comprise an image enhancement network, which enhances a target feature point according to the positional relationship between the other feature points and the target feature point to obtain a feature point enhancement image; that is, it corrects the position of the target feature point using that positional relationship. The image enhancement network performs a convolution operation on one feature point image to obtain a convolution result; cascades the convolution result with another feature point image to obtain a cascade result; and performs a residual operation on the cascade result to obtain the feature point enhancement image of that other feature point image.
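The conv-cascade-residual sequence above can be sketched with plain numpy. This is a structural illustration only: the fixed averaging kernel and the mean over channels stand in for learned convolution and fusion weights, which the patent does not specify.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2D convolution (illustrative stand-in for a learned conv layer)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def enhance(other_map, target_map, kernel):
    """Convolve the other feature point map, cascade (stack) the result
    with the target map, and add a residual connection back to the
    target map, mirroring the conv -> cascade -> residual sequence."""
    conv_out = conv2d(np.pad(other_map, 1), kernel)   # 'same' size via padding
    cascade = np.stack([conv_out, target_map])        # channel-wise concatenation
    fused = cascade.mean(axis=0)                      # stand-in for learned fusion
    return target_map + fused                         # residual operation

k = np.ones((3, 3)) / 9.0              # stand-in convolution kernel
a = np.zeros((8, 8)); a[2, 2] = 1.0    # first feature point candidate map
b = np.zeros((8, 8)); b[5, 5] = 1.0    # second feature point candidate map
A = enhance(b, a, k)                   # enhanced first feature point map
```

The enhanced map keeps its peak at the original feature point while the blurred response of the other point contributes positional context, which is the correction effect described above.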
In a specific application scenario, when there are two feature regions and two feature points in one-to-one correspondence, receiving the feature point images obtained by the image recognition neural network model from the initial ultrasonic image means receiving a first feature point image and a second feature point image; the process by which the image recognition neural network model recognizes the initial ultrasound image may refer to fig. 3.
FIG. 3 is a flow chart of an image recognition neural network model for recognizing an initial ultrasound image.
The process of the image recognition neural network model for recognizing the initial ultrasound image can be as follows:
step S301: and extracting features of the initial ultrasonic image to obtain a first feature area image and a second feature area image.
Step S302: and carrying out feature fusion on the first feature region image and the initial ultrasonic image to obtain a first feature point candidate image.
Step S303: and carrying out feature fusion on the second feature region image and the initial ultrasonic image to obtain a second feature point candidate image.
For ease of understanding, taking the initial ultrasound image as a pelvic floor ultrasound image as an example, the feature regions in the pelvic floor ultrasound image may be the posterior-inferior edge of the pubic symphysis and the anterior edge of the anorectal angle; referring to fig. 4, fig. 4 is a schematic diagram of obtaining the feature point candidate images in the pelvic floor ultrasound image.
Step S304: performing a convolution operation on the second feature point candidate image, cascading the result with the first feature point candidate image, and performing a residual operation on the first cascading result to obtain the first feature point image.
Step S305: and carrying out convolution operation on the first characteristic point candidate image, cascading the first characteristic point candidate image with the second characteristic point candidate image, and carrying out residual error operation on a second cascading result to obtain a second characteristic point image.
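Steps S304 and S305 can be sketched in code. The following is a minimal, illustrative NumPy sketch rather than the patent's actual network: the convolution is a naive single-channel "same" convolution, the cascading is a channel stack, and the channel-mixing `mean` and the residual kernel are hypothetical stand-ins for learned layers.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2D convolution with "same" zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def enhance(a, b, k_conv, k_res):
    """Step S304 in miniature: convolve candidate b, cascade with
    candidate a, then apply a residual operation to the cascaded result."""
    conv_b = conv2d_same(b, k_conv)           # convolution on the other candidate
    cascaded = np.stack([a, conv_b])          # "cascade" = channel concatenation
    fused = cascaded.mean(axis=0)             # stand-in for a learned 1x1 conv mix
    return fused + conv2d_same(fused, k_res)  # residual: identity + conv branch
```

With a 1x1 identity kernel and a zero residual kernel, `enhance` reduces to averaging the two candidate maps; in the patent's network all kernels would be learned.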
In practical applications, the feature points in an ultrasound image may be correlated, so the feature point candidate maps can be processed using these correlations to obtain more accurate feature point maps; please refer to FIG. 5 and FIG. 6. FIG. 5 is a schematic diagram of obtaining a feature point map from the feature point candidate maps, where conv denotes a convolution operation, ⊕ denotes the cascading (concatenation) operation, residual block denotes a residual operation, a denotes the first feature point candidate map, b denotes the second feature point candidate map, and A denotes the first feature point enhancement map. FIG. 6 is a schematic diagram of the effect of the ultrasound image adjustment method provided by the present application; as can be seen from FIG. 6, the method has a good adjustment effect.
In practical applications, to further improve the accuracy of the image recognition neural network model, the model may be trained on the principle of a generative adversarial network; in this case, the image recognition neural network model includes a discriminator. When training the built initial image recognition neural network model based on the initial ultrasound image training samples and the sample labels to obtain the pre-trained model, the initial image recognition neural network model may serve as the generator in the generative adversarial network, the discriminator in the generative adversarial network may be built, and the generator may be trained based on the initial ultrasound image training samples, the sample labels, and the discriminator until the pre-trained image recognition neural network model is obtained. The discriminator may include a first operation layer, a second operation layer, a third operation layer, a first fully connected layer, a ReLU (Rectified Linear Unit) layer, a second fully connected layer, and an activation layer connected in sequence; the first, second, and third operation layers each consist of a convolutional layer, a BN (Batch Normalization) layer, and an LReLU (Leaky ReLU) layer.
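The discriminator structure described above (three operation layers, a first fully connected layer, a ReLU layer, a second fully connected layer, and an activation layer) can be sketched as a forward pass. This NumPy sketch is a toy under stated assumptions: the convolution is reduced to a per-layer scaling, the weights are fixed, and the batch normalization carries no learned scale or shift; a real implementation would use strided convolutions with learned parameters.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Inference-style batch normalization (no learned scale/shift)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def op_layer(x, w):
    """One "operation layer": conv (reduced to a scaling) + BN + LReLU."""
    return leaky_relu(batch_norm(w * x))

def discriminator(x, fc1_w, fc2_w):
    """Three operation layers, then FC -> ReLU -> FC -> sigmoid activation."""
    h = x
    for w in (0.5, 1.0, 2.0):               # toy fixed per-layer weights
        h = op_layer(h, w)
    v = np.maximum(h.ravel() @ fc1_w, 0.0)  # first fully connected + ReLU layer
    score = v @ fc2_w                       # second fully connected layer
    return 1.0 / (1.0 + np.exp(-score))     # sigmoid activation layer
```

The sigmoid output plays the role of the discriminator's judgment of how real (label-like) the input map is.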
In a specific application scenario, when the image recognition neural network model is trained with the discriminator, the model outputs a label result for an initial ultrasound image training sample; the type of the label result follows the type of the sample label: for example, when the sample label is a feature point image, the label result is a feature point image, and when the sample label is a feature point enhancement image, the label result is a feature point enhancement image. The label result is transmitted to the discriminator, which compares whether the label result and the sample label meet a preset approaching condition, for example, whether the similarity between the label result and the sample label meets a similarity condition. The discriminator feeds the similarity back to the image recognition neural network model, which adjusts its own parameters accordingly; this process repeats until the image recognition neural network model no longer adjusts its parameters, yielding the trained image recognition neural network model. During this process, the discriminator may output a Boolean value based on the similarity between the sample label and the label result, using the Boolean value to represent how close they are.
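The feedback loop described above (output a label result, score it against the sample label, feed the score back, stop once the model no longer adjusts its parameters) can be sketched generically. The names `model_predict`, `model_update`, and `discriminate` are hypothetical placeholders, and the similarity measure and update rule used below are toys, not the patent's training procedure.

```python
def train_with_discriminator(model_predict, model_update, discriminate,
                             samples, labels, max_iters=100, tol=1e-6):
    """Generic sketch of the adversarial feedback loop: the model's label
    result is scored against the sample label, the score is fed back, and
    training stops once the model no longer adjusts its parameters."""
    for _ in range(max_iters):
        adjusted = False
        for x, y in zip(samples, labels):
            result = model_predict(x)
            similarity = discriminate(result, y)  # closeness score in (0, 1]
            if similarity < 1.0 - tol:            # preset approaching condition
                model_update(similarity)          # feed the score back
                adjusted = True
        if not adjusted:                          # no more parameter updates
            break
    return model_predict
```

For instance, with a one-parameter toy model whose update nudges a weight by a fixed step, the loop runs until the discriminator's similarity score stops triggering updates.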
The application also provides an ultrasonic image adjusting system which has the corresponding effect of the ultrasonic image adjusting method. Referring to fig. 7, fig. 7 is a schematic structural diagram of an ultrasound image adjustment system according to an embodiment of the present application.
An ultrasound image adjustment system provided in an embodiment of the present application may include:
a first obtaining module 101, configured to obtain an initial ultrasound image captured by an ultrasound device;
a first transmission module 102, configured to transmit an initial ultrasound image to a pre-trained image recognition neural network model;
a first receiving module 103, configured to receive a feature point image obtained by the image recognition neural network model through recognition in an initial ultrasound image;
a first adjustment module 104 is configured to perform three-dimensional visualization adjustment on the initial ultrasound image based on the feature point map.
In the ultrasound image adjustment system provided by the embodiment of the present application, the first adjustment module may include:
the first acquisition submodule is used to acquire the viewfinder parameters of the ultrasound device;
the first calculation sub-module is used for calculating a translation rotation matrix of the initial ultrasonic image based on the characteristic point image and the viewfinder parameters;
and the first operation sub-module is used for multiplying the translation rotation matrix with the initial ultrasonic image to obtain the target ultrasonic image after three-dimensional visualization adjustment.
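The first calculation and first operation sub-modules can be illustrated with a small sketch: build a homogeneous 2D translation-rotation matrix from two feature points and apply it to pixel coordinates. The patent does not specify the exact matrix construction; rotating the line between the feature points to the horizontal about a viewfinder center is an assumption made here for illustration.

```python
import numpy as np

def translation_rotation_matrix(p1, p2, center):
    """Homogeneous 2D matrix rotating the line p1->p2 to the horizontal
    about `center` (hypothetical stand-in for the viewfinder center)."""
    angle = -np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
    c, s = np.cos(angle), np.sin(angle)
    cx, cy = center
    # rotate about the center: T(center) @ R(angle) @ T(-center)
    return np.array([[c, -s, cx - c * cx + s * cy],
                     [s,  c, cy - s * cx - c * cy],
                     [0.0, 0.0, 1.0]])

def transform_points(mat, pts):
    """Apply the translation-rotation matrix to (x, y) pixel coordinates."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ mat.T)[:, :2]
```

Multiplying this matrix with every pixel coordinate of the initial ultrasound image yields the adjusted target image coordinates.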
The ultrasound image adjustment system provided in the embodiment of the present application may further include:
the second acquisition module is used for acquiring an initial ultrasonic image training sample before the first transmission module transmits the initial ultrasonic image to the pre-trained image recognition neural network model;
the third acquisition module is used for acquiring a sample label corresponding to the initial ultrasonic image training sample;
the first training module is used for training the built initial image recognition neural network model based on the initial ultrasonic image training sample and the sample label to obtain a pre-trained image recognition neural network model.
In the ultrasound image adjustment system provided by the embodiment of the present application, the third obtaining module may include:
the second acquisition sub-module is used for acquiring an initial ultrasonic image labeling sample after labeling the characteristic objects in the initial ultrasonic image training sample;
and the second operation sub-module is used for carrying out Gaussian operation on the marked initial ultrasonic image marking sample to obtain a sample label.
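The Gaussian operation on an annotated sample can be illustrated as generating a Gaussian heatmap centered on the annotated feature point, a common way of turning point annotations into dense labels; the `sigma` value here is an arbitrary illustrative choice.

```python
import numpy as np

def gaussian_label(shape, point, sigma=1.5):
    """Gaussian operation on an annotated feature point: a heatmap label
    with peak 1.0 at the annotated (x, y) position."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    px, py = point
    return np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
```

The resulting heatmap serves as the sample label against which the network's feature point map can be compared during training.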
The second obtaining sub-module may include:
the first acquisition unit is used for acquiring a characteristic region labeling sample of the initial ultrasonic image, which is obtained after the characteristic region in the initial ultrasonic image training sample is labeled;
the second acquisition unit is used to acquire a feature point labeling sample of the initial ultrasound image obtained after the feature points in the initial ultrasound image training sample are labeled; wherein the feature points lie on the feature regions, and there are at least two feature regions and at least two feature points.
In the ultrasound image adjustment system provided by the embodiment of the present application, the image recognition neural network model may include a contour recognition network and a feature point recognition network; the contour recognition network is used to extract features from the initial ultrasound image to obtain a feature region image; the feature point recognition network is used to perform feature fusion on the feature region labeling sample and the initial ultrasound image to obtain a feature point image;
accordingly, the first training module may include:
the first training submodule is used for training the contour recognition network based on the initial ultrasonic image training sample and the characteristic region labeling sample to obtain a trained contour recognition network;
the second training sub-module is used for training the characteristic point recognition network based on the initial ultrasonic image training sample, the characteristic region labeling sample and the characteristic point labeling sample to obtain a trained characteristic point recognition network.
In the ultrasound image adjustment system provided by the embodiment of the present application, the image recognition neural network model may further include an image enhancement network;
the image enhancement network is used for carrying out convolution operation on a feature point image to obtain a convolution operation result; cascading the convolution operation result with another feature point image to obtain a cascading result; and carrying out residual operation on the cascade result to obtain a characteristic point enhanced image of the other characteristic point image.
In the ultrasound image adjustment system provided by the embodiment of the present application, the first training module may include:
the first transmission unit is used for transmitting the initial ultrasonic image training sample and the sample label to the image recognition neural network model;
the third acquisition unit is used for acquiring a label result output by the image recognition neural network model;
the second transmission unit is used for transmitting the sample label and the label result to a pre-built discriminator, and the discriminator is used for judging whether the label result and the sample label meet preset conditions or not;
the first receiving unit is used to receive the discrimination result output by the discriminator and transmit it to the image recognition neural network model, so that the image recognition neural network model adjusts its own parameters based on the discrimination result;
the first execution unit is used to repeat the step of acquiring the label result output by the image recognition neural network model and the subsequent steps until the pre-trained image recognition neural network model is obtained.
The application also provides an ultrasonic image adjusting device and a computer readable storage medium, which have the corresponding effects of the ultrasonic image adjusting method. Referring to fig. 8, fig. 8 is a schematic structural diagram of an ultrasound image adjustment apparatus according to an embodiment of the present application.
An ultrasound image adjustment apparatus provided in an embodiment of the present application includes:
a memory 201 for storing a computer program;
a processor 202 for implementing the steps of the ultrasound image adjustment method as described in any of the embodiments above when executing a computer program.
Referring to FIG. 9, another ultrasound image adjustment device provided in an embodiment of the present application may further include: an input port 203 connected to the processor 202 for transmitting externally input commands to the processor 202; a display unit 204 connected to the processor 202 for displaying the processing results of the processor 202 to the outside; and a communication module 205 connected to the processor 202 for implementing communication between the ultrasound image adjustment device and the outside. The display unit 204 may be a display panel, a laser scanning display, or the like; communication methods employed by the communication module 205 include, but are not limited to, Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), and wireless connections: wireless fidelity (WiFi), Bluetooth communication, Bluetooth Low Energy communication, and IEEE 802.11s-based communication.
The embodiment of the application provides a computer readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the steps of the ultrasound image adjustment method described in any embodiment above are implemented.
The computer-readable storage medium referred to in this application includes random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
For descriptions of the relevant parts of the ultrasound image adjustment system, device, and computer-readable storage medium provided in the embodiments of the present application, refer to the detailed description of the corresponding parts in the ultrasound image adjustment method provided in the embodiments of the present application; they are not repeated here. In addition, the parts of the above technical solutions that are consistent with the implementation principles of the corresponding technical solutions in the prior art are not described in detail, so as to avoid redundancy.
It is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An ultrasound image adjustment method, comprising:
acquiring an initial ultrasonic image shot by ultrasonic equipment;
transmitting the initial ultrasonic image to a pre-trained image recognition neural network model;
receiving a characteristic point image obtained by the image recognition neural network model in the initial ultrasonic image;
performing three-dimensional visual adjustment on the initial ultrasonic image based on the characteristic point image;
wherein the performing three-dimensional visualization adjustment on the initial ultrasound image based on the feature point image includes:
acquiring a viewfinder parameter of the ultrasonic equipment;
calculating a translation rotation matrix of the initial ultrasonic image based on the characteristic point image and the viewfinder parameter;
multiplying the translation rotation matrix with the initial ultrasonic image to obtain a three-dimensional visual adjusted target ultrasonic image.
2. The method of claim 1, wherein the transmitting the initial ultrasound image to a pre-trained image recognition neural network model is preceded by:
acquiring an initial ultrasonic image training sample;
acquiring a sample label corresponding to the initial ultrasonic image training sample;
and training the built initial image recognition neural network model based on the initial ultrasonic image training sample and the sample label to obtain the pre-trained image recognition neural network model.
3. The method of claim 2, wherein the obtaining a sample tag corresponding to the initial ultrasound image training sample comprises:
acquiring an initial ultrasonic image labeling sample after labeling the characteristic object in the initial ultrasonic image training sample;
and carrying out Gaussian operation on the marked initial ultrasonic image marking sample to obtain the sample label.
4. The method of claim 3, wherein the obtaining an initial ultrasound image annotation sample that is annotated to the feature object in the initial ultrasound image training sample comprises:
acquiring a characteristic region labeling sample of the initial ultrasonic image obtained after labeling the characteristic region in the initial ultrasonic image training sample;
obtaining a feature point labeling sample of the initial ultrasonic image obtained after the feature points in the initial ultrasonic image training sample are labeled; wherein the characteristic points are on the characteristic region, and the characteristic region and the characteristic points respectively comprise at least 2.
5. The method of claim 4, wherein the image recognition neural network model includes a contour recognition network and a feature point recognition network; the contour recognition network is used for extracting features of the initial ultrasonic image to obtain a feature area image; the characteristic point identification network is used for carrying out characteristic fusion on the characteristic region labeling sample and the initial ultrasonic image to obtain the characteristic point image;
training the built initial image recognition neural network model based on the initial ultrasonic image training sample and the sample label to obtain a pre-trained image recognition neural network model, wherein the training comprises the following steps:
training the contour recognition network based on the initial ultrasonic image training sample and the characteristic region labeling sample to obtain a trained contour recognition network;
and training the characteristic point identification network based on the initial ultrasonic image training sample, the characteristic region labeling sample and the characteristic point labeling sample to obtain the trained characteristic point identification network.
6. The method according to claim 5, wherein the image recognition neural network model further comprises an image enhancement network, which is used for enhancing the target feature points according to the position relation between other feature points and the target feature points to obtain feature point enhanced images;
the image enhancement network is used for carrying out convolution operation on a feature point image to obtain a convolution operation result; cascading the convolution operation result with another feature point image to obtain a cascading result; and carrying out residual operation on the cascade result to obtain a characteristic point enhanced image of the other characteristic point image.
7. The method of claim 6, wherein the training the built initial image recognition neural network model based on the initial ultrasound image training sample and the sample tag to obtain the pre-trained image recognition neural network model comprises:
transmitting the initial ultrasound image training sample and the sample tag to the image recognition neural network model;
acquiring a label result output by the image recognition neural network model;
transmitting the sample label and the label result to a pre-built discriminator, wherein the discriminator is used for judging whether the label result and the sample label meet a preset approaching condition or not;
receiving a discrimination result output by the discriminator, and transmitting the discrimination result to the image recognition neural network model so that the image recognition neural network model adjusts its own parameters based on the discrimination result;
repeating the step of acquiring the label result output by the image recognition neural network model and the subsequent steps until the pre-trained image recognition neural network model is obtained, wherein the image recognition neural network model comprises the discriminator.
8. An ultrasound image adjustment system, comprising:
the first acquisition module is used for acquiring an initial ultrasonic image shot by the ultrasonic equipment;
the first transmission module is used for transmitting the initial ultrasonic image to a pre-trained image recognition neural network model;
the first receiving module is used for receiving the characteristic point image obtained by the image recognition neural network model in the initial ultrasonic image;
the first adjusting module is used for carrying out three-dimensional visual adjustment on the initial ultrasonic image based on the characteristic point image;
wherein, the first adjustment module includes:
a first obtaining sub-module, configured to obtain a viewfinder parameter of the ultrasound device;
a first calculation sub-module for calculating a translational rotation matrix of the initial ultrasound image based on the feature point image and the viewfinder parameter;
and the first operation sub-module is used for multiplying the translation rotation matrix with the initial ultrasonic image to obtain a three-dimensional visual adjusted target ultrasonic image.
9. An ultrasound device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the ultrasound image adjustment method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the steps of the ultrasound image adjustment method according to any one of claims 1 to 7.
CN201910604343.1A 2019-07-05 2019-07-05 Ultrasonic image adjustment method, system, equipment and computer storage medium Active CN110322399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910604343.1A CN110322399B (en) 2019-07-05 2019-07-05 Ultrasonic image adjustment method, system, equipment and computer storage medium


Publications (2)

Publication Number Publication Date
CN110322399A CN110322399A (en) 2019-10-11
CN110322399B true CN110322399B (en) 2023-05-05

Family

ID=68122801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910604343.1A Active CN110322399B (en) 2019-07-05 2019-07-05 Ultrasonic image adjustment method, system, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN110322399B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI719687B (en) * 2019-10-25 2021-02-21 佳世達科技股份有限公司 Ultrasound imaging system and ultrasound imaging method
CN110840482B (en) * 2019-10-28 2022-12-30 苏州佳世达电通有限公司 Ultrasonic imaging system and method thereof
CN111222560B (en) * 2019-12-30 2022-05-20 深圳大学 Image processing model generation method, intelligent terminal and storage medium
CN111444830B (en) * 2020-03-25 2023-10-31 腾讯科技(深圳)有限公司 Method and device for imaging based on ultrasonic echo signals, storage medium and electronic device
CN112184683A (en) * 2020-10-09 2021-01-05 深圳度影医疗科技有限公司 Ultrasonic image identification method, terminal equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107174342A (en) * 2017-03-21 2017-09-19 哈尔滨工程大学 A kind of area of computer aided fracture reduction degree measure
CN109117773A (en) * 2018-08-01 2019-01-01 Oppo广东移动通信有限公司 A kind of characteristics of image point detecting method, terminal device and storage medium


Non-Patent Citations (1)

Title
Research on an ultrasound image recognition method based on BP neural networks; Liu Xiao et al.; Chinese Journal of Medical Instrumentation; No. 06, 30 November 2004; sections 3.2-3.3 *


Similar Documents

Publication Publication Date Title
CN110322399B (en) Ultrasonic image adjustment method, system, equipment and computer storage medium
CN111160375B (en) Three-dimensional key point prediction and deep learning model training method, device and equipment
EP3852003A1 (en) Feature point locating method, storage medium and computer device
US10466797B2 (en) Pointing interaction method, apparatus, and system
US20210062653A1 (en) Method and device for acquiring three-dimensional coordinates of ore based on mining process
CN111428731B (en) Multi-category identification positioning method, device and equipment based on machine vision
CN112508975A (en) Image identification method, device, equipment and storage medium
US10945888B2 (en) Intelligent blind guide method and apparatus
CN113674421B (en) 3D target detection method, model training method, related device and electronic equipment
CN103714321A (en) Driver face locating system based on distance image and strength image
CN112200884B (en) Lane line generation method and device
CN113657409A (en) Vehicle loss detection method, device, electronic device and storage medium
CN113724128B (en) Training sample expansion method
US20220172376A1 (en) Target Tracking Method and Device, and Electronic Apparatus
CN115810133B (en) Welding control method based on image processing and point cloud processing and related equipment
US20180032793A1 (en) Apparatus and method for recognizing objects
CN116188893A (en) Image detection model training and target detection method and device based on BEV
CN113158833A (en) Unmanned vehicle control command method based on human body posture
CN112541394A (en) Black eye and rhinitis identification method, system and computer medium
CN112529917A (en) Three-dimensional target segmentation method, device, equipment and storage medium
US20220351495A1 (en) Method for matching image feature point, electronic device and storage medium
KR101391667B1 (en) A model learning and recognition method for object category recognition robust to scale changes
CN115205806A (en) Method and device for generating target detection model and automatic driving vehicle
CN115471863A (en) Three-dimensional posture acquisition method, model training method and related equipment
KR102382883B1 (en) 3d hand posture recognition apparatus and method using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant