CN110223279A - Image processing method and apparatus, and electronic device - Google Patents
Image processing method and apparatus, and electronic device
- Publication number
- CN110223279A CN110223279A CN201910473265.6A CN201910473265A CN110223279A CN 110223279 A CN110223279 A CN 110223279A CN 201910473265 A CN201910473265 A CN 201910473265A CN 110223279 A CN110223279 A CN 110223279A
- Authority
- CN
- China
- Prior art keywords
- image data
- sub-object
- convolutional neural network
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
- G06T2207/30012—Spine; Backbone
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/033—Recognition of patterns in medical or anatomical images of skeletal patterns
Abstract
Embodiments of the invention disclose an image processing method and apparatus, and an electronic device. The method includes: obtaining image data containing a target object, the target object including at least one sub-object; and processing the image data based on a fully convolutional neural network to obtain target image data, the target image data including at least the center point of each sub-object in the target object.
Description
Technical field
The present invention relates to image processing techniques, and in particular to an image processing method and apparatus, and an electronic device.
Background technique
Ordinarily, the human spine contains 26 vertebrae, arranged in sequence from top to bottom. The spine is an important reference structure of the human body. Detecting, locating, and identifying the centers of the 26 vertebrae can provide relative position information for locating other organs and tissues, which benefits subsequent activities such as surgical planning, pathological examination, and postoperative assessment. Moreover, detecting and locating vertebra centers allows the spine to be modeled mathematically, providing prior information about its shape and facilitating segmentation of the vertebrae from other tissues. Localization of vertebra centers therefore has significant application value.

At present, vertebra centers are located mainly in two ways. The first is manual localization; however, identifying vertebra types and locating vertebra centers in three-dimensional computed tomography (CT) images is very time-consuming and labor-intensive, prone to human oversight, and, for certain difficult or complex images, subject to observer subjectivity that may lead to errors. The second is automatic detection systems; however, the algorithms used by current automatic detection systems rely on manually selected features with poor generalization, resulting in poor system performance and low accuracy of vertebra center localization.
Summary of the invention
To solve the existing technical problems, embodiments of the present invention provide an image processing method and apparatus, and an electronic device.

To this end, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides an image processing method, the method comprising:

obtaining image data containing a target object, the target object comprising at least one sub-object; and

processing the image data based on a fully convolutional neural network to obtain target image data, the target image data including at least the center point of each sub-object in the target object.
In the above solution, processing the image data based on a fully convolutional neural network to obtain target image data comprises: processing the image data based on a first fully convolutional neural network to obtain target image data, the target image data including the center point of each sub-object in the target object.
In the above solution, processing the image data based on a fully convolutional neural network to obtain target image data comprises:

processing the image data based on a first fully convolutional neural network to obtain first image data, the first image data including the center point of each sub-object in the target object; and

processing the image data and the first image data based on a second fully convolutional neural network to obtain second image data, the second image data indicating the class of each sub-object in the target object.
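As a rough illustration of the two-stage design described above — a first network that localizes center points, and a second network that classifies sub-objects given both the raw image and the first network's output — the following toy sketch uses stand-in functions in place of the real networks (all names, shapes, and outputs here are assumptions for illustration, not from the patent):

```python
import numpy as np

def first_fcn(image):
    """Stand-in for the first fully convolutional network: returns a map
    marking center points (here, a dummy map with one center set)."""
    centers = np.zeros_like(image)
    centers[tuple(d // 2 for d in image.shape)] = 1.0  # pretend a center at the middle
    return centers

def second_fcn(image, centers):
    """Stand-in for the second network: per-pixel sub-object class labels,
    computed from the image together with the stage-1 center map."""
    return (centers > 0).astype(np.int64)  # trivial placeholder 'classification'

image = np.random.rand(8, 8, 8)             # a toy 3-D volume
first_out = first_fcn(image)                # first image data: center-point map
second_out = second_fcn(image, first_out)   # second image data: per-pixel classes
```

The real networks are of course learned models; the sketch only shows how the stage-1 output is fed into stage 2 alongside the original image data.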
In the above solution, processing the image data based on the first fully convolutional neural network comprises:

processing the image data based on the first fully convolutional neural network to obtain first displacement data corresponding to a pixel in the image data, the first displacement data characterizing the displacement between the pixel and the center of the sub-object nearest to the pixel;

determining, based on the first displacement data and the position data of the pixel itself, an initial position of the center point of a first sub-object nearest to the pixel, the first sub-object being any sub-object of the at least one sub-object; and

obtaining the initial positions of the center point of the first sub-object corresponding to at least some of the pixels in the image data, counting how many initial positions are identical, and determining the center point of the first sub-object from the initial position with the highest count.
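The voting step described above can be sketched as follows: each pixel's position plus its predicted displacement yields a candidate center position, and the most frequent candidate wins (toy 2-D data; the variable names and values are illustrative assumptions):

```python
import numpy as np
from collections import Counter

positions = np.array([[1, 1], [2, 2], [3, 3], [7, 7]])       # pixel positions
displacements = np.array([[2, 2], [1, 1], [0, 0], [1, 1]])   # predicted by the first FCN

candidates = positions + displacements                # initial center positions
rounded = [tuple(int(v) for v in c) for c in candidates]  # identical positions can be counted
center, votes = Counter(rounded).most_common(1)[0]

# three pixels agree on (3, 3); the outlier voting for (8, 8) is outvoted
```

In the real method the displacements would come from the first fully convolutional network's output rather than a hand-written array.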
In the above solution, before determining the initial position of the center point of the first sub-object nearest to the pixel based on the first displacement data and the position data of the pixel itself, the method further comprises:

screening at least one pixel in the image data based on its corresponding first displacement data, to obtain at least one first pixel whose distance to the center of the nearest first sub-object meets a specified condition;

and determining the initial position of the center point of the first sub-object nearest to the pixel based on the first displacement data and the position data of the pixel itself comprises:

determining the initial position of the center point of the first sub-object based on the first displacement data of the first pixel and the position data of the first pixel itself.
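The screening step can be sketched as keeping only pixels whose predicted distance to the nearest center satisfies a condition before voting; here the condition is a distance threshold, whose value is an assumption for illustration:

```python
import numpy as np

# predicted displacements to the nearest sub-object center, one row per pixel
displacements = np.array([[1.0, 0.0], [0.5, 0.5], [9.0, 9.0], [0.0, 1.0]])
distances = np.linalg.norm(displacements, axis=1)  # distance to nearest center

threshold = 3.0                      # assumed "specified condition"
keep = distances <= threshold        # pixels meeting the condition
first_pixels = np.flatnonzero(keep)  # indices of the "first pixels"
```

Pixels far from any center (the third row above) are excluded, so only reliable votes reach the center-point determination.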
In the above solution, processing the image data and the first image data based on the second fully convolutional neural network to obtain second image data comprises:

merging the image data and the first image data to obtain target image data;

processing the target image data based on the second fully convolutional neural network to obtain, for each pixel in the target image data, a probability value for each sub-object class the pixel may belong to, and determining the class with the maximum probability value as the class of the sub-object the pixel belongs to; and

obtaining the second image data based on the sub-object classes the pixels in the target image data belong to.
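The per-pixel classification step can be sketched as an argmax over class probabilities: the second network outputs, for every pixel, one probability per sub-object class, and the class with the highest probability is assigned to the pixel (toy probabilities; the shapes are assumptions):

```python
import numpy as np

# probabilities for 4 pixels over 3 sub-object classes (each row sums to 1)
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.2, 0.5, 0.3],
])
labels = probs.argmax(axis=1)  # class of maximum probability, per pixel
```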
In the above solution, obtaining the probability values of the sub-object classes the pixels in the target image data belong to, and determining the class with the maximum probability value as the class of the sub-object a pixel belongs to, comprises: obtaining the probability values of the sub-object classes that the pixel corresponding to the center point of a second sub-object in the target image data belongs to, the second sub-object being any sub-object of the at least one sub-object; and determining the class with the maximum probability value as the class of the second sub-object.
In the above solution, processing the image data and the first image data based on the second fully convolutional neural network to obtain second image data comprises:

performing down-sampling on the image data to obtain third image data; and

processing the third image data and the first image data based on the second fully convolutional neural network to obtain the second image data.
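The down-sampling step can be sketched as follows; 2×2×2 average pooling is one possible choice, since the patent text does not specify the factor or method:

```python
import numpy as np

def downsample(volume, factor=2):
    """Average-pool a 3-D volume by `factor` along each axis."""
    d, h, w = (s // factor for s in volume.shape)
    v = volume[:d * factor, :h * factor, :w * factor]  # crop to a multiple of factor
    return v.reshape(d, factor, h, factor, w, factor).mean(axis=(1, 3, 5))

image = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
third = downsample(image)  # the "third image data" fed to the second FCN
```

Reducing the resolution of the input in this way lowers the memory and compute cost of the second network while the first image data still pinpoints the center locations.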
In the above solution, the training process of the first fully convolutional neural network comprises:

obtaining first sample image data containing a target object and first labeled data corresponding to the first sample image data, the first labeled data indicating the center point of each sub-object in the target object in the first sample image data; and

training the first fully convolutional neural network according to the first sample image data and the corresponding first labeled data.
In the above solution, training the first fully convolutional neural network according to the first sample image data and the corresponding first labeled data comprises:

processing the first sample image data with the first fully convolutional neural network to obtain initial image data, the initial image data including the initial center point of each sub-object in the target object in the first sample image data; and

determining a loss function based on the initial image data and the first labeled data, and adjusting the parameters of the first fully convolutional neural network based on the loss function, so as to train the first fully convolutional neural network.
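The loop described above — predict, compare the prediction against the first labeled data via a loss function, adjust parameters — can be illustrated with a deliberately tiny stand-in model: a single parameter in place of the network, with an MSE loss and a fixed learning rate (both assumptions; the patent does not specify the loss or optimizer):

```python
import numpy as np

w = 0.0                          # stand-in network parameter
x = np.array([1.0, 2.0, 3.0])    # toy "first sample image data"
y = 2.0 * x                      # toy "first labeled data": ground-truth targets

for _ in range(200):
    pred = w * x                          # stand-in "initial image data"
    grad = 2 * np.mean((pred - y) * x)    # gradient of the MSE loss w.r.t. w
    w -= 0.05 * grad                      # parameter adjustment

loss = float(np.mean((w * x - y) ** 2))   # final loss after training
```

A real implementation would use a deep-learning framework's optimizer over the full network's parameters; only the predict/loss/update structure carries over.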
In the above solution, the training process of the second fully convolutional neural network comprises:

obtaining first sample image data containing a target object, second sample image data related to the first sample image data, and second labeled data corresponding to the first sample image data, the second sample image data including the center point of each sub-object in the target object in the first sample image data, and the second labeled data indicating the class of each sub-object in the target object in the first sample image data; and

training the second fully convolutional neural network based on the first sample image data, the second sample image data, and the second labeled data.
In the above solution, training the second fully convolutional neural network based on the first sample image data, the second sample image data, and the second labeled data comprises:

performing down-sampling on the first sample image data to obtain third sample image data; and

training the second fully convolutional neural network based on the third sample image data, the second sample image data, and the second labeled data.
In the above solution, the target object comprises a spine; the spine includes at least one vertebra.
An embodiment of the present invention further provides an image processing apparatus, the apparatus comprising an acquiring unit and an image processing unit, wherein:

the acquiring unit is configured to obtain image data containing a target object, the target object comprising at least one sub-object; and

the image processing unit is configured to process the image data based on a fully convolutional neural network to obtain target image data, the target image data including at least the center point of each sub-object in the target object.
In the above solution, the image processing unit is configured to process the image data based on a first fully convolutional neural network to obtain target image data, the target image data including the center point of each sub-object in the target object.
In the above solution, the image processing unit is configured to: process the image data based on a first fully convolutional neural network to obtain first image data, the first image data including the center point of each sub-object in the target object; and process the image data and the first image data based on a second fully convolutional neural network to obtain second image data, the second image data indicating the class of each sub-object in the target object.
In the above solution, the image processing unit comprises a first processing module configured to: process the image data based on the first fully convolutional neural network to obtain first displacement data corresponding to a pixel in the image data, the first displacement data characterizing the displacement between the pixel and the center of the sub-object nearest to the pixel; determine, based on the first displacement data and the position data of the pixel itself, the initial position of the center point of a first sub-object nearest to the pixel, the first sub-object being any sub-object of the at least one sub-object; and obtain the initial positions of the center point of the first sub-object corresponding to at least some of the pixels in the image data, count how many initial positions are identical, and determine the center point of the first sub-object from the initial position with the highest count.
In the above solution, the first processing module is configured to: screen at least one pixel in the image data based on its corresponding first displacement data, to obtain at least one first pixel whose distance to the center of the nearest first sub-object meets a specified condition; and determine the initial position of the center point of the first sub-object based on the first displacement data of the first pixel and the position data of the first pixel itself.
In the above solution, the image processing unit comprises a second processing module configured to: merge the image data and the first image data to obtain target image data; process the target image data based on the second fully convolutional neural network to obtain, for each pixel in the target image data, the probability values of the sub-object classes the pixel may belong to, and determine the class with the maximum probability value as the class of the sub-object the pixel belongs to; and obtain the second image data based on the sub-object classes the pixels in the target image data belong to.
In the above solution, the second processing module is configured to: obtain the probability values of the sub-object classes that the pixel corresponding to the center point of a second sub-object in the target image data belongs to, the second sub-object being any sub-object of the at least one sub-object; and determine the class with the maximum probability value as the class of the second sub-object.
In the above solution, the image processing unit is configured to perform down-sampling on the image data to obtain third image data, and to process the third image data and the first image data based on the second fully convolutional neural network to obtain the second image data.
In the above solution, the apparatus further comprises a first training unit configured to: obtain first sample image data containing a target object and first labeled data corresponding to the first sample image data, the first labeled data indicating the center point of each sub-object in the target object in the first sample image data; and train the first fully convolutional neural network according to the first sample image data and the corresponding first labeled data.
In the above solution, the first training unit is configured to: process the first sample image data with the first fully convolutional neural network to obtain initial image data, the initial image data including the initial center point of each sub-object in the target object in the first sample image data; and determine a loss function based on the initial image data and the first labeled data, and adjust the parameters of the first fully convolutional neural network based on the loss function, so as to train the first fully convolutional neural network.
In the above solution, the apparatus further comprises a second training unit configured to: obtain first sample image data containing a target object, second sample image data related to the first sample image data, and second labeled data corresponding to the first sample image data, the second sample image data including the center point of each sub-object in the target object in the first sample image data, and the second labeled data indicating the class of each sub-object in the target object in the first sample image data; and train the second fully convolutional neural network based on the first sample image data, the second sample image data, and the second labeled data.
In the above solution, the second training unit is configured to perform down-sampling on the first sample image data to obtain third sample image data, and to train the second fully convolutional neural network based on the third sample image data, the second sample image data, and the second labeled data.
In the above solution, the target object comprises a spine; the spine includes at least one vertebra.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method of the embodiment of the present invention.

An embodiment of the present invention further provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the embodiment of the present invention when executing the program.
In the image processing method and apparatus and the electronic device provided by the embodiments of the present invention, the method comprises: obtaining image data containing a target object, the target object comprising at least one sub-object; and processing the image data based on a fully convolutional neural network to obtain target image data, the target image data including at least the center point of each sub-object in the target object. With the technical solutions of the embodiments of the present invention, the image data is processed by a fully convolutional neural network to obtain target image data that includes at least the center point of each sub-object in the target object, for example target image data that includes at least the center point of each vertebra in a spine. On the one hand, the fully convolutional neural network performs feature identification, feature selection, and feature classification of the image data automatically; compared with manually selected features, this improves system performance and the accuracy of vertebra center localization. On the other hand, a fully convolutional neural network can classify every pixel, which means it can better exploit the spatial relationships between vertebrae, improving training efficiency and network performance.
Brief description of the drawings
Fig. 1 is a first schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a second schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is a third schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an application of an image processing method according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of a network training method in an image processing method according to an embodiment of the present invention;
Fig. 6 is another schematic flowchart of a network training method in an image processing method according to an embodiment of the present invention;
Fig. 7 is a first schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 8 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 9 is a third schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 10 is a fourth schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 11 is a fifth schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides a kind of image processing methods.Fig. 1 is the image processing method of the embodiment of the present invention
Flow diagram one;As shown in Figure 1, which comprises
Step 101: obtaining the image data comprising target object;The target object includes at least one subobject;
Step 102: described image data being handled based on full convolutional neural networks, obtain destination image data, institute
Destination image data is stated including at least the central point of each subobject in the target object.
In step 101 of this embodiment, the image data is image data containing a target object; in this embodiment it is three-dimensional image data containing the target object. In this embodiment, the target object comprises a spine, and the spine includes at least one vertebra. The following embodiments are described taking the target object to be a spine (so that the target object includes at least one vertebra) as an example; in other embodiments the target object is not limited to spinal anatomy, and this embodiment places no limitation on it.
As an example, the image data may be three-dimensional image data containing a spine obtained by an imaging technique; for instance, the image data may be computed tomography (CT) image data or magnetic resonance imaging (MRI) image data containing a spine. Of course, the image data in this embodiment is not limited to image data acquired in the above ways; any three-dimensional image data of a spine acquired by an imaging technique is image data within the meaning of this embodiment.
The spine in this embodiment includes, but is not limited to, the human spine; it may also be the spine of another vertebrate animal. Ordinarily, taking a human as an example, the spine includes 26 vertebrae: 24 vertebrae proper (7 cervical, 12 thoracic, and 5 lumbar), 1 sacrum, and 1 coccyx. The image data in this embodiment includes at least some of these 26 vertebrae. It will be appreciated that the image data may contain the complete set of vertebrae or only a subset of them. When only some vertebrae are present in the image data, classification is harder; that is, it is more difficult to determine which vertebra a given center belongs to.
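For reference, one plausible label set for these 26 classes uses the common C/T/L anatomical naming convention; the convention is an assumption here, since the patent does not name the classes:

```python
# 7 cervical + 12 thoracic + 5 lumbar vertebrae, plus sacrum and coccyx = 26 classes
labels = (
    [f"C{i}" for i in range(1, 8)]     # 7 cervical
    + [f"T{i}" for i in range(1, 13)]  # 12 thoracic
    + [f"L{i}" for i in range(1, 6)]   # 5 lumbar
    + ["sacrum", "coccyx"]
)
```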
In step 102 of this embodiment, processing the image data based on a fully convolutional neural network to obtain target image data includes: inputting the image data into a trained fully convolutional neural network as input data, and obtaining target image data that includes at least the central point of each subobject in the target object.
Taking the target object being vertebral bones as an example, this embodiment processes the image data with a fully convolutional neural network and obtains target image data that includes at least the central point of each vertebra in the vertebral bones. On the one hand, the fully convolutional neural network performs feature identification, feature selection and feature classification of the image data automatically; compared with manually selected features, this improves system performance and the accuracy of vertebra center localization. On the other hand, a fully convolutional neural network can classify every pixel, which means it can better exploit the spatial relationship between vertebral bodies, improving training efficiency and network performance.
Based on steps 101 to 102 of the preceding embodiment, an embodiment of the present invention further provides an image processing method in which step 102 is elaborated. Specifically, in step 102, processing the image data based on a fully convolutional neural network to obtain target image data includes: processing the image data based on a first fully convolutional neural network to obtain target image data, the target image data including the central point of each subobject in the target object.
In this embodiment, taking the target object being vertebral bones as an example, the first fully convolutional neural network localizes the central point of each vertebra in the vertebral bones. It can be understood that the first fully convolutional neural network is trained in advance; by inputting the image data into it, target image data including the central point of each vertebra in the vertebral bones is obtained, so that the position of each central point is determined from the target image data. A user (such as a professional doctor) who receives the target image data can then determine, based on experience, the vertebra class each central point belongs to; that is, the vertebra class corresponding to each central point is determined manually.
In an optional embodiment of the present invention, processing the image data based on the first fully convolutional neural network to obtain the target image data includes: processing the image data with the first fully convolutional neural network to obtain first displacement data corresponding to the pixels in the image data, the first displacement data of a pixel characterizing the displacement from that pixel to the center of the subobject nearest to it; determining, based on the first displacement data and the position of the pixel itself, an initial position of the central point of the first subobject nearest to the pixel, the first subobject being any subobject among the at least one subobject; obtaining the initial positions of the central point of the first subobject corresponding to at least some of the pixels in the image data, counting how many initial positions coincide, and determining the central point of the first subobject from the initial position with the largest count; and obtaining the target image data based on the determined central point of the first subobject.
In this embodiment, the trained first fully convolutional neural network processes the image data containing the vertebral bones and outputs, for each pixel, first displacement data giving the displacement to the center of the nearest vertebra; the first displacement data contains three components, along the x-axis, the y-axis and the z-axis. Based on a pixel's own position and its corresponding first displacement data, the initial position of the central point of the vertebra nearest to that pixel is then determined. It can be understood that every pixel yields an initial position for the central point of its nearest vertebra, so for one vertebra multiple initial positions are produced by the surrounding pixels, and these are likely to be partly identical and partly different. This embodiment therefore uses voting: the number of identical initial positions is counted. For example, suppose there are 100 initial positions, of which 50 are a, 20 are b, 15 are c, 10 are d and 5 are e; the initial position a, having the largest count, is then determined to be the position of the central point of that vertebra.
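The displacement-then-vote procedure described above can be sketched as follows. This is a minimal illustration in NumPy, not the patent's implementation; the function and variable names are invented for the example:

```python
import numpy as np
from collections import Counter

def vote_center(positions, displacements):
    """Estimate a vertebra center by voting.

    positions:     (N, 3) voxel coordinates of pixels.
    displacements: (N, 3) predicted offsets from each pixel to the center
                   of its nearest vertebra (the first displacement data).
    Each pixel casts one vote: its own position plus its predicted
    displacement, rounded to a voxel. The most frequent vote wins.
    """
    votes = np.round(positions + displacements).astype(int)
    tally = Counter(map(tuple, votes))
    center, _ = tally.most_common(1)[0]
    return center

# 5 pixels around a true center at (10, 20, 30); 4 votes agree, 1 is off
pos = np.array([[8, 20, 30], [12, 20, 30], [10, 18, 30], [10, 22, 30], [0, 0, 0]])
disp = np.array([[2, 0, 0], [-2, 0, 0], [0, 2, 0], [0, -2, 0], [1, 1, 1]])
print(vote_center(pos, disp))  # (10, 20, 30)
```

In a real volume the votes would come from the network's per-pixel displacement field rather than hand-written arrays, but the tallying step is the same.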
As an implementation, before determining the initial position of the central point of the first subobject nearest to the pixel based on the first displacement data and the position of the pixel itself, the method further includes: screening the at least one pixel in the image data based on the first displacement magnitude corresponding to each of them, to obtain at least one first pixel whose distance to the center of its nearest first subobject meets a specified condition. Determining the initial position of the central point of the first subobject nearest to the pixel, based on the first displacement data and the pixel's own position, then includes: determining the initial position of the central point of the first subobject based on the first displacement data of the first pixel and the position of the first pixel itself.
In this embodiment, before the initial position of a vertebra's central point is determined, the pixels that will take part in the determination can first be screened; that is, not every pixel in the image data needs to participate in determining the initial position of the central point. Specifically, since the first displacement magnitude of each pixel characterizes the displacement from that pixel to the center of the vertebra nearest to it, only pixels within a certain range of a vertebra's central point need to be used when determining its initial position.
As an implementation, obtaining at least one first pixel whose distance to the center of its nearest first subobject meets the specified condition includes: obtaining at least one first pixel whose distance to the center of its nearest first subobject is less than a preset threshold. In practice, since the first displacement data contains three components, along the x-axis, the y-axis and the z-axis, it can be judged whether each of the three components is less than the preset threshold; when all three components of a pixel's first displacement data are less than the preset threshold, that pixel is a first pixel meeting the specified condition. Determining the initial position of the central point of the first subobject from the first displacement data and the positions of only the screened first pixels greatly reduces the amount of data to be processed.
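The per-axis screening can be sketched as below; the threshold value used is an assumed hyper-parameter, since the patent only speaks of "a preset threshold":

```python
import numpy as np

def screen_pixels(displacements, threshold):
    """Keep only pixels whose predicted |dx|, |dy|, |dz| are each below
    `threshold`, i.e. pixels lying close to some vertebra center.
    Returns a boolean mask over the pixels."""
    return np.all(np.abs(displacements) < threshold, axis=-1)

disp = np.array([[1.0, 2.0, 0.5],    # close to a center -> kept
                 [40.0, 1.0, 1.0],   # far away in x     -> dropped
                 [3.0, 3.0, 3.0]])   # close             -> kept
mask = screen_pixels(disp, threshold=10.0)
print(mask)  # [ True False  True]
```

Only the pixels where the mask is true then cast votes for an initial center position, which is what cuts the processing volume.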
In this embodiment, the first fully convolutional neural network processes the image data and obtains target image data that includes at least the central point of at least one subobject in the target object, for example target image data including the central point of each vertebra in the vertebral bones. On the one hand, the first fully convolutional neural network performs feature identification, feature selection and feature classification of the image data automatically; compared with manually selected features, this improves system performance and the accuracy of vertebra center localization. On the other hand, a fully convolutional neural network can classify every pixel, which means the first fully convolutional neural network can better exploit the spatial relationship between vertebral bodies, improving training efficiency and network performance.
An embodiment of the present invention further provides an image processing method. Fig. 2 is a second schematic flowchart of the image processing method of the embodiment of the present invention; as shown in Fig. 2, the method includes:
Step 201: obtaining image data containing a target object, the target object including at least one subobject;
Step 202: processing the image data based on a first fully convolutional neural network to obtain first image data, the first image data including the central point of each subobject in the target object;
Step 203: processing the image data and the first image data based on a second fully convolutional neural network to obtain second image data, the second image data indicating the class of each subobject in the target object.
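The three steps above form a two-stage pipeline. The sketch below shows only the data flow; `fcn_local` and `fcn_global` are hypothetical stand-ins for the trained first and second networks, which the patent does not specify as code:

```python
import numpy as np

def locate_and_classify(volume, fcn_local, fcn_global):
    """Steps 201-203 as data flow: the first network yields the center
    points (first image data); the second network takes the volume plus
    those centers and yields a class per subobject (second image data)."""
    centers = fcn_local(volume)            # step 202
    classes = fcn_global(volume, centers)  # step 203
    return centers, classes

# dummy stand-ins that only demonstrate the flow of data
vol = np.zeros((4, 4, 4))
centers, classes = locate_and_classify(
    vol,
    fcn_local=lambda v: [(1, 1, 1), (2, 2, 2)],
    fcn_global=lambda v, c: {pt: i + 1 for i, pt in enumerate(c)})
print(classes)  # {(1, 1, 1): 1, (2, 2, 2): 2}
```

The point of the split is that the two networks can be trained and run on different views of the same volume, as the later embodiments elaborate.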
For details of step 201 of this embodiment, refer to the detailed description of step 101 in the preceding embodiment, which is not repeated here for brevity.
In step 202 of this embodiment, the first fully convolutional neural network localizes the central point of each vertebra in the vertebral bones. It can be understood that the first fully convolutional neural network is trained in advance; inputting the image data into it yields first image data including the central point of each vertebra in the vertebral bones, from which the position of each central point is determined.
In an optional embodiment of the present invention, processing the image data based on the first fully convolutional neural network to obtain the first image data includes: processing the image data with the first fully convolutional neural network to obtain first displacement data corresponding to the pixels in the image data, the first displacement data of a pixel characterizing the displacement from that pixel to the center of the subobject nearest to it; determining, based on the first displacement data and the position of the pixel itself, an initial position of the central point of the first subobject nearest to the pixel, the first subobject being any subobject among the at least one subobject; obtaining the initial positions of the central point of the first subobject corresponding to at least some of the pixels in the image data, counting how many initial positions coincide, and determining the central point of the first subobject from the initial position with the largest count; and obtaining the first image data based on the determined central point of the first subobject.
In this embodiment, the trained first fully convolutional neural network processes the image data containing the vertebral bones and outputs, for each pixel, first displacement data giving the displacement to the center of the nearest vertebra; the first displacement data contains three components, along the x-axis, the y-axis and the z-axis. Based on a pixel's own position and its corresponding first displacement data, the initial position of the central point of the vertebra nearest to that pixel is then determined. It can be understood that every pixel yields an initial position for the central point of its nearest vertebra, so for one vertebra multiple initial positions are produced by the surrounding pixels, and these are likely to be partly identical and partly different. This embodiment therefore uses voting: the number of identical initial positions is counted. For example, suppose there are 100 initial positions, of which 50 are a, 20 are b, 15 are c, 10 are d and 5 are e; the initial position a, having the largest count, is then determined to be the position of the central point of that vertebra.
As an implementation, before determining the initial position of the central point of the first subobject nearest to the pixel based on the first displacement data and the position of the pixel itself, the method further includes: screening the at least one pixel in the image data based on the first displacement magnitude corresponding to each of them, to obtain at least one first pixel whose distance to the center of its nearest first subobject meets a specified condition. Determining the initial position of the central point of the first subobject nearest to the pixel, based on the first displacement data and the pixel's own position, then includes: determining the initial position of the central point of the first subobject based on the first displacement data of the first pixel and the position of the first pixel itself.
In this embodiment, before the initial position of a vertebra's central point is determined, the pixels that will take part in the determination can first be screened; that is, not every pixel in the image data needs to participate in determining the initial position of the central point. Specifically, since the first displacement magnitude of each pixel characterizes the displacement from that pixel to the center of the vertebra nearest to it, only pixels within a certain range of a vertebra's central point need to be used when determining its initial position.
As an implementation, obtaining at least one first pixel whose distance to the center of its nearest first subobject meets the specified condition includes: obtaining at least one first pixel whose distance to the center of its nearest first subobject is less than a preset threshold. In practice, since the first displacement data contains three components, along the x-axis, the y-axis and the z-axis, it can be judged whether each of the three components is less than the preset threshold; when all three components of a pixel's first displacement data are less than the preset threshold, that pixel is a first pixel meeting the specified condition. Determining the initial position of the central point of the first subobject from the first displacement data and the positions of only the screened first pixels greatly reduces the amount of data to be processed.
In order to further determine which vertebra each central point in the first image data belongs to, in step 203 of this embodiment a second fully convolutional neural network classifies each vertebra in the vertebral bones, thereby determining the class of each vertebra in the image data; the classes are then matched to the central points in the first image data, thereby determining the vertebra class each central point belongs to. It can be understood that the second fully convolutional neural network is trained in advance; inputting the image data together with the first image data into it yields second image data indicating the class of each vertebra in the vertebral bones.
In an optional embodiment of the present invention, processing the image data and the first image data based on the second fully convolutional neural network to obtain the second image data includes: merging the image data and the first image data to obtain target image data; processing the target image data based on the second fully convolutional neural network to obtain, for the pixels in the target image data, probability values of the subobject classes they belong to, the class with the largest probability value being determined as the class of the subobject a pixel belongs to; and obtaining the second image data based on the subobject classes that the pixels in the target image data belong to.
In this embodiment, the trained second fully convolutional neural network processes the image data containing the vertebral bones together with the first image data containing the central point of each vertebra. The image data and the first image data are first merged; in practice, the channel data corresponding to each pixel in the two can be merged to obtain the target image data. The second fully convolutional neural network then processes the target image data and outputs, for every pixel (or some of the pixels), probability values over the vertebra classes; the vertebra class with the largest probability value is determined as the class that pixel belongs to. For example, suppose a pixel belongs to the first vertebra with probability 0.01, to the second with probability 0.02, to the third with probability 0.2, to the fourth with probability 0.72, to the fifth with probability 0.15, to the sixth with probability 0.03, and so on; the largest probability value is 0.72, so the pixel is determined to belong to the fourth vertebra.
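The channel-wise merge followed by a per-pixel arg-max can be sketched as follows; `second_fcn` is a hypothetical stand-in for the trained second network, and the probabilities reuse the illustrative numbers from the example above:

```python
import numpy as np

def classify_pixels(volume, center_map, second_fcn):
    """Merge the volume with the center-point map channel-wise, then take
    the most probable class per pixel. `second_fcn` must return an array
    of per-pixel class probabilities of shape (..., n_classes)."""
    merged = np.stack([volume, center_map], axis=-1)  # per-pixel channels
    probs = second_fcn(merged)
    return np.argmax(probs, axis=-1)                  # index of the largest probability

# one pixel with the six probabilities from the example above
p = np.array([[0.01, 0.02, 0.20, 0.72, 0.15, 0.03]])
label = classify_pixels(np.zeros(1), np.zeros(1), lambda merged: p)
print(label + 1)  # [4] -- the fourth vertebra
```

A real second network would of course compute the probabilities from the merged channels instead of ignoring them; the sketch only fixes the merge-then-argmax shape of the computation.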
In other embodiments, the vertebra class of every pixel in the target image data can be determined, and the vertebral bones can then be segmented into the at least one vertebra they contain based on the class each pixel belongs to, thereby determining the at least one vertebra contained in the target image data.
As an implementation, obtaining the probability values of the subobject classes that the pixels in the target image data belong to and determining the class with the largest probability value as the class of the subobject a pixel belongs to includes: obtaining the probability values of the subobject classes that the pixel corresponding to the central point of a second subobject in the target image data belongs to, the second subobject being any subobject among the at least one subobject; and determining the class with the largest probability value as the class of the second subobject. In this embodiment, by the above implementation, the vertebra class at a vertebra's central point is determined directly, and thereby the class of the vertebra containing that central point is determined.
As another implementation, obtaining the probability values of the subobject classes that the pixels in the target image data belong to and determining the class with the largest probability value as the class of the subobject a pixel belongs to includes: obtaining first probability values of the subobject classes that the pixel corresponding to the central point of a second subobject in the target image data belongs to, and obtaining second probability values of the subobject classes that other pixels within a specified threshold distance of that central point belong to; counting, among the classes indicated by the first probability values and the second probability values, how many pixels agree on each class; and determining the class with the largest count as the class of the second subobject.
In this embodiment, the vertebra class is determined jointly by the vertebra's central point and other pixels near it. In practice, each pixel yields its own vertebra class, and the class determined at the central point may differ from the classes determined at the nearby pixels; this embodiment therefore uses voting, counting how many of the central point and the nearby pixels agree on each class. If, for example, the fourth vertebra receives the most votes, the class of the vertebra is determined to be the fourth vertebra.
It can be understood that the first image data and the second image data in this embodiment correspond to the target image data of the preceding embodiments; that is, there are two pieces of target image data: the first image data, used to determine the vertebra central points, and the second image data, used to indicate the vertebra classes.
In this embodiment, the first fully convolutional neural network localizes the central point of each vertebra in the vertebral bones contained in the image data, and the second fully convolutional neural network classifies each of those vertebrae. This is equivalent to processing the local information of the image data with the first fully convolutional neural network to determine the central point of each vertebra, and processing the global information of the image data with the second fully convolutional neural network to determine the class of each vertebra. On the one hand, the fully convolutional neural networks (the first and the second) perform feature identification, feature selection and feature classification of the image data automatically; compared with manually selected features, this improves system performance and the accuracy of vertebra center localization. On the other hand, a fully convolutional neural network can classify every pixel, which means it can better exploit the spatial relationship between vertebral bodies to improve training efficiency; specifically, the second fully convolutional neural network processes the global information of the image data and is trained according to the spatial relationships among the vertebrae in the vertebral bones, improving network performance.
Based on the preceding embodiments, an embodiment of the present invention further provides an image processing method. Fig. 3 is a third schematic flowchart of the image processing method of the embodiment of the present invention; as shown in Fig. 3, the method includes:
Step 301: obtaining image data containing a target object, the target object including at least one subobject;
Step 302: processing the image data based on a first fully convolutional neural network to obtain first image data, the first image data including the central point of each subobject in the target object;
Step 303: down-sampling the image data to obtain third image data;
Step 304: processing the third image data and the first image data based on a second fully convolutional neural network to obtain second image data, the second image data indicating the class of each subobject in the target object.
For details of steps 301 and 302 of this embodiment, refer to the detailed descriptions of steps 201 and 202 above, which are not repeated here for brevity.
The difference from the preceding embodiment is that, before the second image data is obtained with the second fully convolutional neural network, the image data is down-sampled, i.e. reduced in size, to obtain third image data; the third image data and the first image data are then input into the second fully convolutional neural network to obtain the second image data. Reducing the image data lowers the data volume, which addresses the problem of limited video memory; from another perspective, integrating the global information of the whole image (the associations among the vertebrae, i.e. the vertebrae's "contextual information") also greatly improves system performance.
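A minimal sketch of the down-sampling step follows; block averaging with an integer factor is assumed here, since the patent fixes neither a specific method nor a factor:

```python
import numpy as np

def downsample(volume, factor=2):
    """Shrink a 3D volume by an integer factor via block averaging,
    a simple stand-in for the down-sampling of the original image data."""
    d, h, w = (s // factor * factor for s in volume.shape)
    v = volume[:d, :h, :w]  # crop so every axis divides evenly
    v = v.reshape(d // factor, factor, h // factor, factor, w // factor, factor)
    return v.mean(axis=(1, 3, 5))

vol = np.arange(64, dtype=float).reshape(4, 4, 4)
print(downsample(vol).shape)  # (2, 2, 2) -- 1/8 of the original voxels
```

Halving each axis cuts the voxel count by a factor of eight, which is what makes the global, whole-spine input fit in limited video memory.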
The image processing scheme of the embodiment of the present invention is illustrated below with a specific application scenario.
Fig. 4 is a schematic diagram of an application of the image processing method of the embodiment of the present invention. With reference to Fig. 4, in a scenario where a patient with a spinal injury is admitted to hospital, a CT image of the spine is taken (the CT image being, specifically, a three-dimensional image); a doctor can localize the central point of each vertebra in the CT image through the image processing scheme of the embodiment of the present invention.
Specifically, with reference to Fig. 4, denote the acquired CT image as the original CT image. On the one hand, the first fully convolutional neural network processes the original CT image and obtains the first image data, which includes the central point of each vertebra in the vertebral bones. Since the central point of each vertebra exists independently and is not affected by the other vertebrae, the first fully convolutional neural network can determine a vertebra's central point given just that vertebra and its surrounding image; however, determining a central point requires detailed information such as the vertebra's boundary. Therefore, in this embodiment the first fully convolutional neural network localizes the central point of each vertebra in the original CT image, which retains this finer detail. It can be understood that the first fully convolutional neural network handles local information.
On the other hand, in order to reduce the data volume and address the problem of limited video memory, this embodiment down-samples the original CT image to obtain a reduced CT image; the second fully convolutional network then processes the reduced CT image and the first image data and obtains the second image data, which indicates the class of each vertebra in the vertebral bones.
In one implementation, the vertebra class of each central point determined in the first image data could be decided by experience or similar means. However, if a vertebra is missing from the original CT image, or if the first image data obtained by the first fully convolutional neural network localizes the central points poorly and misses the central points of some vertebrae, the vertebra class assigned to a central point may be wrong. For this reason, this embodiment determines the vertebra classes with the second fully convolutional neural network. Determining a vertebra's class requires considering its position relative to the other vertebrae; it can therefore be understood that the second fully convolutional neural network handles global information. In practice, the receptive field of the convolution kernels in a fully convolutional neural network is limited: if the input image is too large, the kernels cannot perceive the whole image and its global information cannot be integrated. Moreover, since classifying a vertebra depends on its relative relationship to the other vertebrae while the fine detail around each vertebra is unimportant, this embodiment reduces the original CT image by down-sampling and uses the result as the input for determining the vertebra classes.
Regarding the training of the aforementioned first fully convolutional neural network, Fig. 5 is a schematic flowchart of a network training method in the image processing method of the embodiment of the present invention; as shown in Fig. 5, the method includes:
Step 401: obtaining first sample image data containing a target object and first labeled data corresponding to the first sample image data, the first labeled data indicating the central point of each subobject in the target object in the first sample image data;
Step 402: training the first fully convolutional neural network according to the first sample image data and the corresponding first labeled data.
In this embodiment, the target object includes vertebral bones, and the vertebral bones include at least one vertebra.
In step 401 of this embodiment, the first sample image data and the corresponding first labeled data are the data used to train the first fully convolutional neural network. The first sample image data contains a target object, such as vertebral bones. In practice, multiple pieces of first sample image data can be obtained in advance for training the first fully convolutional neural network, with the vertebral bones in all of them belonging to the same category, for example human, or some animal that has a spine; it can be understood that the obtained pieces of first sample image data may all contain human vertebral bones, or may all contain the vertebral bones of a certain breed of dog, and so on.
The first labeled data annotates the central point of each vertebra in the vertebral bones in the first sample image data. As one example, the first labeled data may be the coordinate data of the central point of each vertebra; as another example, the first labeled data may be image data, corresponding to the first sample image data, that includes the central point of each vertebra.
In step 402 of the present embodiment, training the first full convolutional neural network according to the first sample image data and the corresponding first labeled data comprises: processing the first sample image data with the first full convolutional neural network to obtain initial result data, the initial result data including the initial central point of each subobject in the target object in the first sample image data; determining a loss function based on the initial result data and the first labeled data; and adjusting the parameters of the first full convolutional neural network based on the loss function, so as to train the first full convolutional neural network.
During training, the present embodiment inputs the first sample image data into the first full convolutional neural network; the first full convolutional neural network processes the first sample image data according to its initial parameters to obtain initial result data, which includes the initial central point of each vertebra in the vertebral bones in the first sample image data. In general there is a difference between the obtained initial central point of a vertebra and the central point of the corresponding vertebra in the first labeled data; in the present embodiment a loss function is determined based on this difference, and the parameters of the first full convolutional neural network are adjusted based on the determined loss function, thereby training the first full convolutional neural network. It can be understood that, for the trained first full convolutional neural network, the difference between the central point of a vertebra it determines and the central point of the corresponding vertebra in the first labeled data satisfies a preset condition; the preset condition can be a preset threshold, for example, the displacement between the central point of a vertebra determined by the trained first full convolutional neural network and the central point of the corresponding vertebra in the first labeled data is smaller than the preset threshold.
As an implementation, determining the loss function based on the initial result data and the first labeled data comprises: determining a first group of displacements based on first location information of the initial central point of a vertebra in the initial result data and second location information of the central point of the corresponding vertebra in the first labeled data, the first group of displacements including displacements in three dimensions; determining, based on the first group of displacements, whether the initial central point of the vertebra is within a preset distance range of the central point of the corresponding vertebra in the first labeled data, to obtain a first result; and determining the loss function based on the first group of displacements and/or the first result.
In the present embodiment, since the parameters of a first full convolutional neural network that has not finished training are not yet optimal, there is a difference between the initial central point of a vertebra in the initial result data and the accurate central point. The first full convolutional neural network in the present embodiment processes three-dimensional image data, so the first location information of the initial central point obtained includes data in three dimensions. Suppose the x and y axes are established in the horizontal plane and the z axis in the direction perpendicular to the horizontal plane, generating an xyz three-dimensional coordinate system; then the first location information can be three-dimensional coordinate data (x, y, z) in the xyz coordinate system, and correspondingly the central point of the corresponding vertebra in the first labeled data can be represented as three-dimensional coordinate data (x', y', z'). The first group of displacements can then be represented as ((x'-x), (y'-y), (z'-z)). It can further be determined from the first group of displacements whether the initial central point is within the preset distance range of the central point of the corresponding vertebra in the first labeled data. The loss function determined in the present embodiment can be related to the first group of displacements and/or the first result; supposing the loss function is related to both the first group of displacements and the first result, the loss function may include four relevant parameters: (x'-x), (y'-y), (z'-z), and the first result indicating whether the initial central point of the vertebra is within the preset distance range of the central point of the corresponding vertebra in the first labeled data. In the present embodiment, the parameters of the first full convolutional neural network are adjusted according to the loss function (for example, according to the aforementioned four relevant parameters of the loss function). In practice, the first full convolutional neural network needs to be trained through multiple rounds of parameter adjustment, so that the first full convolutional neural network finally obtained ensures that the difference between the vertebra central points obtained after processing the first sample image data and the central points of the corresponding vertebrae in the first labeled data falls within the preset threshold range.
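As a rough illustration of the displacement-based loss described above — a minimal Python sketch, not the patent's actual formulation; the function name, the L1 combination of the three displacement components and the distance threshold are all assumptions:

```python
import math

def center_loss(pred, target, dist_threshold=5.0):
    """Hypothetical loss combining the three per-axis displacements
    (x'-x, y'-y, z'-z) with an out-of-range penalty (the "first result").

    pred, target: (x, y, z) center coordinates for one vertebra.
    """
    dx, dy, dz = (target[0] - pred[0]), (target[1] - pred[1]), (target[2] - pred[2])
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    in_range = dist <= dist_threshold  # the "first result"
    penalty = 0.0 if in_range else 1.0
    # L1 on the three displacement components plus the range penalty
    return abs(dx) + abs(dy) + abs(dz) + penalty
```

A real training loop would minimize such a quantity, summed over all vertebrae, by gradient descent on the network parameters.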
In the present embodiment, the first full convolutional neural network can be a V-Net full convolutional neural network with an encoder-decoder (Encoder-Decoder) architecture.
The present embodiment locates the central point of each vertebra in the vertebral bones included in the image data through the first full convolutional neural network. On the one hand, the first full convolutional neural network automatically performs feature identification, feature selection and feature classification on the image data; compared with manual feature selection, this improves system performance and the accuracy of vertebra center localization. On the other hand, by training the first full convolutional neural network end to end, the present embodiment can accurately obtain the position of the central point of each vertebra.
Regarding the training method of the aforementioned second full convolutional neural network, Fig. 6 is another flow diagram of a network training method in the image processing method of the embodiment of the present invention. As shown in Fig. 6, the method comprises:
Step 501: obtaining first sample image data comprising a target object, second sample image data related to the first sample image data, and second labeled data corresponding to the first sample image data; the second sample image data includes the central point of each subobject in the target object in the first sample image data; the second labeled data is used to indicate the classification of each subobject in the target object in the first sample image data;
Step 502: training the second full convolutional neural network based on the first sample image data, the second sample image data and the second labeled data.
In step 501 of the present embodiment, the first sample image data, the second sample image data and the second labeled data are the data used to train the second full convolutional neural network. The first sample image data contains a target object, for example vertebral bones. In practice, multiple first sample image data can be obtained in advance for training the second full convolutional neural network; the vertebral bones included in the multiple first sample image data belong to the same category, for example humans or a particular kind of animal with vertebral bones. It can be understood that the multiple first sample image data obtained may all be sample image data containing human spine bones, or may all be sample image data containing the vertebral bones of a certain canine breed, and so on.
The second sample image data contains the central point of each subobject (for example each vertebra) corresponding to the target object (for example the vertebral bones) in the first sample image data. As an implementation, the second sample image data can be the image data, including vertebra central points, obtained by the first full convolutional neural network trained as described above. The second labeled data is data corresponding to the classification of each vertebra in the first sample image data. As an example, the second labeled data can be second image data such as that shown in Fig. 4, i.e. image data generated by identifying the profile of each classification of vertebra through manual labeling.
In step 502 of the present embodiment, training the second full convolutional neural network based on the first sample image data, the second sample image data and the second labeled data comprises: performing down-sampling processing on the first sample image data to obtain third sample image data; and training the second full convolutional neural network based on the third sample image data, the second sample image data and the second labeled data.
In the present embodiment, in order to reduce the amount of data in the network training process and alleviate the problem of limited video memory, down-sampling processing is first performed on the first sample image data before the second full convolutional neural network is trained, obtaining third sample image data; the second full convolutional neural network is then trained based on the third sample image data, the second sample image data and the second labeled data. Similar to the training method of the aforementioned first full convolutional neural network, the second full convolutional neural network processes the third sample image data and the second sample image data according to its initial parameters to obtain initial result data that includes the initial classification of each vertebra; a loss function is determined based on the difference between the initial result data and the second labeled data, and the parameters of the second full convolutional neural network are adjusted based on the loss function, thereby training the second full convolutional neural network.
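The down-sampling step can be illustrated with a naive stride-based sketch in Python; the factor and the nested-list volume representation are assumptions for illustration only — a real pipeline would more likely use average pooling or interpolation:

```python
def downsample(volume, factor=2):
    """Naive stride-based down-sampling of a 3D volume (nested lists).

    Keeps every `factor`-th plane, row and column, shrinking each
    dimension by roughly `factor` and so reducing memory use.
    """
    return [
        [row[::factor] for row in plane[::factor]]
        for plane in volume[::factor]
    ]
```

For a 4x4x4 volume and factor 2, the result is a 2x2x2 volume whose voxels are samples of the original.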
In the present embodiment, the second full convolutional neural network can be a V-Net full convolutional neural network.
The present embodiment locates the central point of each vertebra in the vertebral bones included in the image data through the first full convolutional neural network, and classifies each vertebra in the vertebral bones included in the image data through the second full convolutional neural network. This is equivalent to processing the local information of the image data through the first full convolutional neural network to determine the central point of each vertebra, and processing the global information of the image data through the second full convolutional neural network to determine the classification of each vertebra. On the one hand, the full convolutional neural networks (including the first full convolutional neural network and the second full convolutional neural network) automatically perform feature identification, feature selection and feature classification on the image data; compared with manual feature selection, this improves system performance and the accuracy of vertebra center localization. On the other hand, a full convolutional neural network can classify each pixel; that is, a full convolutional neural network can better exploit the spatial relationship between vertebral bodies and improve training efficiency. Specifically, the global information of the image data is processed through the second full convolutional neural network, and the second full convolutional neural network is trained according to the spatial relationship between the vertebrae in the vertebral bones, improving network performance.
The embodiment of the invention also provides an image processing apparatus. Fig. 7 is a schematic diagram of a composition structure of the image processing apparatus of the embodiment of the present invention. As shown in Fig. 7, the apparatus includes an acquiring unit 61 and an image processing unit 62; wherein,
the acquiring unit 61 is configured to obtain image data comprising a target object; the target object includes at least one subobject;
the image processing unit 62 is configured to process the image data based on a full convolutional neural network to obtain destination image data, the destination image data including at least the central point of each subobject in the target object.
As an implementation, the image processing unit 62 is configured to process the image data based on a first full convolutional neural network to obtain destination image data, the destination image data including the central point of each subobject in the target object.
As another implementation, the image processing unit 62 is configured to process the image data based on the first full convolutional neural network to obtain first image data, the first image data including the central point of each subobject in the target object; and to process the image data and the first image data based on a second full convolutional neural network to obtain second image data, the second image data being used to indicate the classification of each subobject in the target object.
In an alternative embodiment of the invention, as shown in Fig. 8, the image processing unit 62 includes a first processing module 621, configured to process the image data based on the first full convolutional neural network to obtain first displacement data corresponding to the pixels in the image data, the first displacement data characterizing the displacement between a pixel and the center of the subobject nearest to the pixel; to determine, based on the first displacement data and the position data of the pixel itself, the initial position of the central point of the first subobject nearest to the pixel, the first subobject being any subobject of the at least one subobject; and to obtain the initial positions of the central point of the first subobject corresponding to at least some of the pixels in the image data, determine the number of identical initial positions, and determine the central point of the first subobject based on the initial position with the largest count.
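The voting scheme described above — each pixel's position plus its predicted displacement proposes a candidate center, and the most frequent candidate wins — can be sketched as follows; the function name and the rounding of candidates to integer voxel positions are assumptions for illustration:

```python
from collections import Counter

def vote_center(pixels, displacements):
    """Majority-vote center localization.

    pixels: (x, y, z) positions; displacements: predicted (dx, dy, dz)
    from each pixel to the nearest subobject center. Each pair yields a
    candidate center; the most frequent (rounded) candidate is returned.
    """
    candidates = []
    for (px, py, pz), (dx, dy, dz) in zip(pixels, displacements):
        candidates.append((round(px + dx), round(py + dy), round(pz + dz)))
    (center, _count), = Counter(candidates).most_common(1)
    return center
```

Here three of four pixels would agree on the same candidate, so that position is taken as the central point.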
In an alternative embodiment, the first processing module 621 is configured to screen at least one pixel in the image data based on the first displacement length corresponding to the at least one pixel, to obtain at least one first pixel whose distance from the center of the nearest first subobject meets a specified condition; and to determine the initial position of the central point of the first subobject based on the first displacement data of the first pixel and the position data of the first pixel itself.
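The screening step can be sketched as filtering pixels by the magnitude of their predicted displacement; the particular "specified condition" (here: displacement length below a maximum) and the threshold value are assumptions:

```python
import math

def screen_pixels(pixels, displacements, max_len=10.0):
    """Keep only pixels whose predicted distance to the nearest
    subobject center satisfies the condition (length <= max_len).

    Distant pixels carry noisier displacement predictions, so
    discarding them before voting is one plausible screening rule.
    """
    kept = []
    for pix, (dx, dy, dz) in zip(pixels, displacements):
        if math.sqrt(dx * dx + dy * dy + dz * dz) <= max_len:
            kept.append(pix)
    return kept
```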
In an alternative embodiment of the invention, as shown in Fig. 9, the image processing unit 62 includes a second processing module 622, configured to merge the image data and the first image data to obtain destination image data; to process the destination image data based on the second full convolutional neural network to obtain the probability values of the classifications of the subobject that a pixel in the destination image data belongs to, and determine the classification of the subobject corresponding to the largest probability value as the classification of the subobject the pixel belongs to; and to obtain the second image data based on the subobject classifications that the pixels in the destination image data belong to.
In an alternative embodiment, the second processing module 622 is configured to obtain the probability values of the classifications of the subobject that the pixel corresponding to the central point of a second subobject in the destination image data belongs to, the second subobject being any subobject of the at least one subobject; and to determine the classification of the second subobject corresponding to the largest probability value as the classification of the second subobject.
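The "largest probability value wins" rule above is a per-pixel argmax over class probabilities; a minimal sketch (the mapping representation and the class labels are illustrative assumptions):

```python
def classify_subobject(class_probs):
    """Pick the subobject classification with the highest probability.

    class_probs: mapping from classification label to the probability
    value predicted at the center-point pixel.
    """
    return max(class_probs, key=class_probs.get)
```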
In an alternative embodiment of the invention, the image processing unit 62 is configured to perform down-sampling processing on the image data to obtain third image data, and to process the third image data and the first image data based on the second full convolutional neural network to obtain the second image data.
In an alternative embodiment of the invention, as shown in Fig. 10, the apparatus further includes a first training unit 63, configured to obtain first sample image data comprising a target object and first labeled data corresponding to the first sample image data, the first labeled data being used to indicate the central point of each subobject in the target object in the first sample image data; and to train the first full convolutional neural network according to the first sample image data and the corresponding first labeled data.
In the present embodiment, the first training unit 63 is configured to process the first sample image data according to the first full convolutional neural network to obtain initial result data, the initial result data including the initial central point of each subobject in the target object in the first sample image data; to determine a loss function based on the initial result data and the first labeled data; and to adjust the parameters of the first full convolutional neural network based on the loss function, so as to train the first full convolutional neural network.
In an alternative embodiment of the invention, as shown in Fig. 11, the apparatus further includes a second training unit 64, configured to obtain first sample image data comprising a target object, second sample image data related to the first sample image data, and second labeled data corresponding to the first sample image data; the second sample image data includes the central point of each subobject in the target object in the first sample image data; the second labeled data is used to indicate the classification of each subobject in the target object in the first sample image data; and the second training unit 64 is configured to train the second full convolutional neural network based on the first sample image data, the second sample image data and the second labeled data.
Optionally, the second training unit 64 is configured to perform down-sampling processing on the first sample image data to obtain third sample image data, and to train the second full convolutional neural network based on the third sample image data, the second sample image data and the second labeled data.
In the present embodiment, the target object includes vertebral bones;The vertebral bones include at least one vertebra.
In the embodiment of the present invention, the acquiring unit 61, the image processing unit 62 (including the first processing module 621 and the second processing module 622), the first training unit 63 and the second training unit 64 in the apparatus can, in practical applications, be implemented by a central processing unit (CPU, Central Processing Unit), a digital signal processor (DSP, Digital Signal Processor), a micro-control unit (MCU, Microcontroller Unit) or a field-programmable gate array (FPGA, Field-Programmable Gate Array) in the terminal.
It should be understood that when the image processing apparatus provided by the above embodiment performs image processing, the division into the above program modules is only used as an example; in practical applications, the above processing can be assigned to different program modules as needed, i.e. the internal structure of the apparatus can be divided into different program modules to complete all or part of the processing described above. In addition, the image processing apparatus provided by the above embodiment and the image processing method embodiment belong to the same concept; for the specific implementation process, refer to the method embodiment, which is not described here again.
The embodiment of the invention also provides an electronic device. Fig. 12 is a schematic diagram of a composition structure of the electronic device of the embodiment of the present invention. As shown in Fig. 12, the electronic device includes a memory 72, a processor 71 and a computer program stored on the memory 72 and runnable on the processor 71; the processor 71 realizes the steps of the method of the embodiment of the present invention when executing the program.
In the present embodiment, the various components in the electronic device can be coupled by a bus system 73. It can be understood that the bus system 73 is used to realize connection and communication between these components. In addition to a data bus, the bus system 73 further includes a power bus, a control bus and a status signal bus. However, for the sake of clarity, the various buses are all designated as the bus system 73 in Fig. 12.
It can be understood that the memory 72 can be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories. The non-volatile memory can be a read-only memory (ROM, Read Only Memory), a programmable read-only memory (PROM, Programmable Read-Only Memory), an erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory), an electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read-Only Memory), a magnetic random access memory (FRAM, ferromagnetic random access memory), a flash memory (Flash Memory), a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM, Compact Disc Read-Only Memory); the magnetic surface memory can be a magnetic disk memory or a magnetic tape memory. The volatile memory can be a random access memory (RAM, Random Access Memory), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as a static random access memory (SRAM, Static Random Access Memory), a synchronous static random access memory (SSRAM, Synchronous Static Random Access Memory), a dynamic random access memory (DRAM, Dynamic Random Access Memory), a synchronous dynamic random access memory (SDRAM, Synchronous Dynamic Random Access Memory), a double data rate synchronous dynamic random access memory (DDRSDRAM, Double Data Rate Synchronous Dynamic Random Access Memory), an enhanced synchronous dynamic random access memory (ESDRAM, Enhanced Synchronous Dynamic Random Access Memory), a synclink dynamic random access memory (SLDRAM, SyncLink Dynamic Random Access Memory) and a direct rambus random access memory (DRRAM, Direct Rambus Random Access Memory). The memory 72 described in the embodiment of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
The method disclosed by the above embodiments of the present invention can be applied in the processor 71, or realized by the processor 71. The processor 71 may be an integrated circuit chip with signal processing capability. During realization, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 71 or by instructions in the form of software. The above processor 71 can be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor 71 can realize or execute each method, step and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor or any conventional processor, etc. The steps of the method disclosed in combination with the embodiments of the present invention can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium; the storage medium is located in the memory 72, and the processor 71 reads the information in the memory 72 and completes the steps of the aforementioned method in combination with its hardware.
In an exemplary embodiment, the electronic device can be realized by one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), general-purpose processors, controllers, micro-control units (MCU, Micro Controller Unit), microprocessors (Microprocessor) or other electronic components, for executing the aforementioned method.
In the several embodiments provided by the present application, it should be understood that the disclosed device, apparatus and method can be realized in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation, such as: multiple units or components can be combined, or integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection between the components shown or discussed can be indirect coupling or communication connection between devices or units through some interfaces, and can be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they can be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the various embodiments of the present invention can be fully integrated in one processing unit, each unit can also serve individually as one unit, or two or more units can be integrated in one unit; the above integrated unit can be realized either in the form of hardware or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps for realizing the above method embodiment can be completed by hardware related to program instructions; the aforementioned program can be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiment are executed; the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present invention is realized in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiment of the present invention, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for making a computer device (which can be a personal computer, a server, a network device, etc.) execute all or part of the method of each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that those familiar with the art can easily think of within the technical scope disclosed by the present invention shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be based on the protection scope of the claims.
Claims (10)
1. An image processing method, characterized in that the method comprises:
obtaining image data comprising a target object; the target object includes at least one subobject;
processing the image data based on a full convolutional neural network to obtain destination image data, the destination image data including at least the central point of each subobject in the target object.
2. The method according to claim 1, characterized in that the processing the image data based on a full convolutional neural network to obtain destination image data comprises:
processing the image data based on a first full convolutional neural network to obtain destination image data, the destination image data including the central point of each subobject in the target object.
3. The method according to claim 1, characterized in that the processing the image data based on a full convolutional neural network to obtain destination image data comprises:
processing the image data based on a first full convolutional neural network to obtain first image data, the first image data including the central point of each subobject in the target object;
processing the image data and the first image data based on a second full convolutional neural network to obtain second image data, the second image data being used to indicate the classification of each subobject in the target object.
4. The method according to claim 2 or 3, characterized in that the processing the image data based on a first full convolutional neural network comprises:
processing the image data based on the first full convolutional neural network to obtain first displacement data corresponding to the pixels in the image data; the first displacement data characterizes the displacement between a pixel and the center of the subobject nearest to the pixel;
determining, based on the first displacement data and the position data of the pixel itself, the initial position of the central point of the first subobject nearest to the pixel; the first subobject is any subobject of the at least one subobject;
obtaining the initial positions of the central point of the first subobject corresponding to at least some of the pixels in the image data, determining the number of identical initial positions, and determining the central point of the first subobject based on the initial position with the largest count.
5. The method according to claim 3, characterized in that the processing the image data and the first image data based on a second full convolutional neural network to obtain second image data comprises:
merging the image data and the first image data to obtain destination image data;
processing the destination image data based on the second full convolutional neural network to obtain the probability values of the classifications of the subobject that a pixel in the destination image data belongs to, and determining the classification of the subobject corresponding to the largest probability value as the classification of the subobject the pixel belongs to;
obtaining the second image data based on the subobject classifications that the pixels in the destination image data belong to.
6. The method according to claim 5, wherein obtaining the probability values of the sub-object categories to which pixels in the target image data belong, and determining the category of the sub-object with the largest probability value as the category of the sub-object to which the pixel belongs, comprises:
obtaining the probability values of the sub-object categories to which the pixel corresponding to the center point of the second sub-object in the target image data belongs, the second sub-object being any sub-object among the at least one sub-object; and
determining the category with the largest probability value as the category of the second sub-object.
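Claim 6 reduces to reading the per-pixel probabilities at a sub-object's detected center point and taking the argmax; a hedged one-function sketch (shapes assumed as above):

```python
import numpy as np

def label_subobject(probs, center):
    """Assign a category to a sub-object from the category
    probabilities at its detected center point.

    probs: (H, W, num_categories) per-pixel probabilities from the
    second fully convolutional network; center: (y, x) center point.
    """
    y, x = center
    # The category with the largest probability at the center pixel
    # is taken as the category of the whole sub-object.
    return int(probs[y, x].argmax())
```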
7. The method according to any one of claims 1 to 6, wherein the target object comprises a spine, and the spine comprises at least one vertebra.
8. An image processing apparatus, wherein the apparatus comprises an acquiring unit and an image processing unit, wherein:
the acquiring unit is configured to obtain image data containing a target object, the target object comprising at least one sub-object; and
the image processing unit is configured to process the image data based on a convolutional neural network to obtain target image data, the target image data including at least the center point of each sub-object in the target object.
9. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910473265.6A CN110223279B (en) | 2019-05-31 | 2019-05-31 | Image processing method and device and electronic equipment |
PCT/CN2019/114498 WO2020238007A1 (en) | 2019-05-31 | 2019-10-30 | Image processing method and apparatus, and electronic device |
KR1020217025980A KR20210115010A (en) | 2019-05-31 | 2019-10-30 | Image processing method and apparatus, electronic device |
SG11202108960QA SG11202108960QA (en) | 2019-05-31 | 2019-10-30 | Image processing method and apparatus, and electronic device |
JP2021539924A JP2022516970A (en) | 2019-05-31 | 2019-10-30 | Image processing methods and devices, electronic devices |
TW109113374A TWI758714B (en) | 2019-05-31 | 2020-04-21 | Method and device for image processing and electronic device thereof |
US17/399,121 US20210374452A1 (en) | 2019-05-31 | 2021-08-11 | Method and device for image processing, and elecrtonic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910473265.6A CN110223279B (en) | 2019-05-31 | 2019-05-31 | Image processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110223279A true CN110223279A (en) | 2019-09-10 |
CN110223279B CN110223279B (en) | 2021-10-08 |
Family
ID=67819324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910473265.6A Active CN110223279B (en) | 2019-05-31 | 2019-05-31 | Image processing method and device and electronic equipment |
Country Status (7)
Country | Link |
---|---|
US (1) | US20210374452A1 (en) |
JP (1) | JP2022516970A (en) |
KR (1) | KR20210115010A (en) |
CN (1) | CN110223279B (en) |
SG (1) | SG11202108960QA (en) |
TW (1) | TWI758714B (en) |
WO (1) | WO2020238007A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179247A (en) * | 2019-12-27 | 2020-05-19 | 上海商汤智能科技有限公司 | Three-dimensional target detection method, training method of model thereof, and related device and equipment |
WO2020238007A1 (en) * | 2019-05-31 | 2020-12-03 | 上海商汤智能科技有限公司 | Image processing method and apparatus, and electronic device |
CN112219224A (en) * | 2019-12-30 | 2021-01-12 | 商汤国际私人有限公司 | Image processing method and device, electronic equipment and storage medium |
WO2022116923A1 (en) * | 2020-12-02 | 2022-06-09 | Ping An Technology (Shenzhen) Co., Ltd. | Method and device for vertebra localization and identification |
CN115204383A (en) * | 2021-04-13 | 2022-10-18 | 北京三快在线科技有限公司 | Training method and device for central point prediction model |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102564737B1 (en) * | 2022-05-25 | 2023-08-10 | 주식회사 래디센 | Method for training a device for denoising an X-ray image and computing device for the same |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106682697A (en) * | 2016-12-29 | 2017-05-17 | 华中科技大学 | End-to-end object detection method based on convolutional neural network |
US20180071087A1 (en) * | 2015-02-27 | 2018-03-15 | University Of Pittsburgh - Of The Commonwealth System Of Higher Education | Double Component Mandrel for Electrospun Stentless, Multi-Leaflet Valve Fabrication |
CN108596904A (en) * | 2018-05-07 | 2018-09-28 | 北京长木谷医疗科技有限公司 | Method for generating a localization model and method for processing spinal sagittal images |
CN108614999A (en) * | 2018-04-16 | 2018-10-02 | 贵州大学 | Deep-learning-based eye open/closed state detection method |
CN108634934A (en) * | 2018-05-07 | 2018-10-12 | 北京长木谷医疗科技有限公司 | Method and apparatus for processing spinal sagittal images |
CN109166104A (en) * | 2018-08-01 | 2019-01-08 | 沈阳东软医疗***有限公司 | Lesion detection method, device and equipment |
CN109190444A (en) * | 2018-07-02 | 2019-01-11 | 南京大学 | Implementation method of a video-based vehicle feature recognition system for toll lanes |
CN109214403A (en) * | 2017-07-06 | 2019-01-15 | 阿里巴巴集团控股有限公司 | Image recognition method, apparatus and device, and readable medium |
CN109544537A (en) * | 2018-11-26 | 2019-03-29 | 中国科学技术大学 | Fast automatic analysis method for hip joint X-ray images |
CN109785303A (en) * | 2018-12-28 | 2019-05-21 | 上海联影智能医疗科技有限公司 | Rib labeling method, device and equipment, and image segmentation model training method |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3171297A1 (en) * | 2015-11-18 | 2017-05-24 | CentraleSupélec | Joint boundary detection image segmentation and object recognition using deep learning |
CN107305635A (en) * | 2016-04-15 | 2017-10-31 | 株式会社理光 | Object identifying method, object recognition equipment and classifier training method |
JP6778625B2 (en) * | 2017-01-31 | 2020-11-04 | 株式会社デンソーアイティーラボラトリ | Image search system, image search method and image search program |
EP3610410A1 (en) * | 2017-04-14 | 2020-02-19 | Koninklijke Philips N.V. | Person identification systems and methods |
EP3698320B1 (en) * | 2017-10-20 | 2021-07-21 | Nuvasive, Inc. | Intervertebral disc modeling |
EP3701260A4 (en) * | 2017-10-26 | 2021-10-27 | Essenlix Corporation | System and methods of image-based assay using crof and machine learning |
CN107977971A (en) * | 2017-11-09 | 2018-05-01 | 哈尔滨理工大学 | Vertebra localization method based on convolutional neural networks |
CN108038860A (en) * | 2017-11-30 | 2018-05-15 | 杭州电子科技大学 | Spine segmentation method based on 3D fully convolutional neural networks |
CN110223279B (en) * | 2019-05-31 | 2021-10-08 | 上海商汤智能科技有限公司 | Image processing method and device and electronic equipment |
2019
- 2019-05-31 CN CN201910473265.6A patent/CN110223279B/en active Active
- 2019-10-30 WO PCT/CN2019/114498 patent/WO2020238007A1/en active Application Filing
- 2019-10-30 JP JP2021539924A patent/JP2022516970A/en active Pending
- 2019-10-30 KR KR1020217025980A patent/KR20210115010A/en active Search and Examination
- 2019-10-30 SG SG11202108960QA patent/SG11202108960QA/en unknown

2020
- 2020-04-21 TW TW109113374A patent/TWI758714B/en active

2021
- 2021-08-11 US US17/399,121 patent/US20210374452A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
何汉武 et al.: 《增强现实交互方法与实现》 (Augmented Reality Interaction: Methods and Implementation), Huazhong University of Science and Technology Press, 31 December 2018 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020238007A1 (en) * | 2019-05-31 | 2020-12-03 | 上海商汤智能科技有限公司 | Image processing method and apparatus, and electronic device |
CN111179247A (en) * | 2019-12-27 | 2020-05-19 | 上海商汤智能科技有限公司 | Three-dimensional target detection method, training method of model thereof, and related device and equipment |
CN112219224A (en) * | 2019-12-30 | 2021-01-12 | 商汤国际私人有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112219224B (en) * | 2019-12-30 | 2024-04-26 | 商汤国际私人有限公司 | Image processing method and device, electronic equipment and storage medium |
WO2022116923A1 (en) * | 2020-12-02 | 2022-06-09 | Ping An Technology (Shenzhen) Co., Ltd. | Method and device for vertebra localization and identification |
CN115204383A (en) * | 2021-04-13 | 2022-10-18 | 北京三快在线科技有限公司 | Training method and device for central point prediction model |
Also Published As
Publication number | Publication date |
---|---|
KR20210115010A (en) | 2021-09-24 |
US20210374452A1 (en) | 2021-12-02 |
TWI758714B (en) | 2022-03-21 |
TW202046241A (en) | 2020-12-16 |
SG11202108960QA (en) | 2021-09-29 |
CN110223279B (en) | 2021-10-08 |
JP2022516970A (en) | 2022-03-03 |
WO2020238007A1 (en) | 2020-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110223279A (en) | Image processing method and device, and electronic equipment | |
CN110232383B (en) | Focus image recognition method and focus image recognition system based on deep learning model | |
Shareef et al. | Segmentation of medical images using LEGION | |
US20220198230A1 (en) | Auxiliary detection method and image recognition method for rib fractures based on deep learning | |
CN104637024B (en) | Medical image-processing apparatus and medical image processing method | |
US8355553B2 (en) | Systems, apparatus and processes for automated medical image segmentation using a statistical model | |
CN105760874B (en) | CT image processing system and its CT image processing method towards pneumoconiosis | |
CN111507381A (en) | Image recognition method and related device and equipment | |
JP6913697B2 (en) | Image Atlas System and Method | |
CN110246580B (en) | Cranial image analysis method and system based on neural network and random forest | |
WO2024001140A1 (en) | Vertebral body sub-region segmentation method and apparatus, and storage medium | |
JP3234668U (en) | Image recognition system for scoliosis by X-ray | |
CN109983502A (en) | The device and method of quality evaluation for medical images data sets | |
CN116012394A (en) | Method, equipment and storage medium for removing handwriting of pathological section image | |
CN111178428B (en) | Cartilage damage classification method, cartilage damage classification device, computer equipment and storage medium | |
Sha et al. | The improved faster-RCNN for spinal fracture lesions detection | |
CN116188443A (en) | Lumbar intervertebral disc protrusion parting system based on axial medical image | |
Zhang et al. | A novel tool to provide predictable alignment data irrespective of source and image quality acquired on mobile phones: what engineers can offer clinicians | |
Kurochka et al. | An algorithm of segmentation of a human spine X-ray image with the help of Mask R-CNN neural network for the purpose of vertebrae localization | |
CN114972026A (en) | Image processing method and storage medium | |
CN113781496A (en) | Vertebral pedicle screw channel automatic planning system and method based on CBCT vertebral image | |
RU2813938C1 (en) | Device and method for determining boundaries of pathology on medical image | |
RU2806982C1 (en) | Device and method for analysis of medical images | |
US11983870B2 (en) | Structure separating apparatus, structure separating method, and structure separating program, learning device, learning method, and learning program, and learned model | |
van Sonsbeek et al. | End-to-end vertebra localization and level detection in weakly labelled 3d spinal mr using cascaded neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40006563; Country of ref document: HK | |
GR01 | Patent grant | ||