CN113616945B - Detection method based on focused ultrasound image recognition, and beauty and body-care device
- Publication number: CN113616945B
- Application number: CN202110933029.5A
- Authority: CN (China)
- Legal status: Active
Classifications
- A61N 7/00 — Ultrasound therapy
- A61B 8/0858 — Detecting organic movements or changes, e.g. tumours, cysts, swellings, involving measuring tissue layers, e.g. skin, interfaces
- A61B 8/52 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
Abstract
The invention provides a detection method based on focused ultrasound image recognition, and a beauty and body-care device. The detection method comprises the following steps: scanning human skin with an ultrasonic probe to acquire an image group in real time, the image group comprising multiple frames of images; calculating the cross-correlation between every two adjacent frames of images; identifying a target ROI (Region of Interest) from the image group based on the cross-correlation between each pair of adjacent frames, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin; and acquiring the scanning moment corresponding to the target ROI, and controlling the ultrasonic probe to stop energy output to the human skin at that moment. The invention provides an effective safety-control mechanism against the scalding and blistering of superficial skin that existing focused-ultrasound anti-aging products readily cause when applied to the skin.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a detection method based on focused ultrasound image recognition and a beauty and body-care device.
Background
With the continuous rise in living standards, the public's aesthetic demands have become broader and more concrete; demand for skin wrinkle removal and anti-aging treatments keeps growing and shows a strong upward market trend.
Many anti-aging products based on different technical principles act on human skin. Among them, products based on the focused-ultrasound principle have gradually entered the public eye and won market favor, owing to the distinctiveness and sophistication of their underlying technology.
At present, more and more clinical cases show that when existing focused-ultrasound products are applied to the skin for anti-aging, scalding and blistering of the superficial skin are easily caused, and an effective safety-control mechanism is lacking. This not only weakens the wrinkle-removal and anti-aging effect and defeats the original aim of non-invasive medical cosmetology, namely avoiding any post-operative recovery period, but also severely harms the customer experience, with unnecessary skin damage and subsequent repair costs.
Disclosure of Invention
To address the above technical problems in the prior art, the invention provides a detection method based on focused ultrasound image recognition and a beauty and body-care device.
According to a first aspect of the present invention, there is provided a detection method based on focused ultrasound image recognition, comprising: scanning human skin with an ultrasonic probe to obtain an image group comprising multiple frames of images; calculating the cross-correlation between every two adjacent frames of images; identifying a target ROI from the image group based on the cross-correlation between each pair of adjacent frames, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin; and acquiring the scanning moment corresponding to the target ROI, and controlling the ultrasonic probe to stop energy output to the human skin at that moment.
On the basis of the above technical scheme, the invention may be further improved as follows.
Optionally, scanning with the ultrasonic probe to obtain an image group of human skin, the image group comprising multiple frames of images, comprises: repeatedly scanning the human skin along a straight line between two fixed points with a focused single-element ultrasonic probe to obtain a scanned image group; and ordering the frames of the image group according to the probe's travel path, then recording and storing them.
Optionally, before calculating the cross-correlation between every two adjacent frames of images, the method comprises: preprocessing each frame in the image group, the preprocessing comprising noise reduction and mask-based signal-to-noise-ratio filtering.
Optionally, calculating the cross-correlation between every two adjacent frames of images comprises: for each preprocessed frame in the image group, performing a spectral transformation via the Hilbert transform to obtain the envelope of the frame; and computing, from the envelopes of each pair of adjacent frames, the cross-correlation value between the two frames by line-by-line convolution.
Optionally, identifying a target ROI from the image group based on the cross-correlation between every two adjacent frames of images, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin, comprises: comparing the cross-correlation value of each pair of adjacent frames with a set threshold; and, when the value for a pair of adjacent frames is smaller than the threshold, marking the target ROI corresponding to that pair and recording the associated scanning moment.
Optionally, calculating the cross-correlation between every two adjacent frames of images comprises: extracting a first set of frames from all images of the image group at a set interval to form an image subsequence, and calculating the cross-correlation value of every two adjacent frames in the subsequence. Correspondingly, identifying the target ROI from the image group, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin, comprises: when the cross-correlation value between two adjacent subsequence frames is smaller than the set threshold, recording the serial numbers of those two frames within the image group, and retrieving the second set of frames lying between them; calculating the cross-correlation value between every two adjacent frames in that second set; and, when such a value is smaller than the set threshold, acquiring the target ROI corresponding to that pair of frames and recording their scanning moments.
According to a second aspect of the present invention, there is provided a beauty and body-care device based on focused ultrasound image recognition, comprising an ultrasonic probe, a main control module and a power module. The ultrasonic probe scans and acquires an image group of human skin, the image group comprising multiple frames of images. The main control module calculates the cross-correlation between every two adjacent frames of images, and identifies a target ROI from the image group based on that cross-correlation, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin; the main control module also acquires the scanning moment corresponding to the target ROI and controls the power module to stop outputting energy to the human skin through the ultrasonic probe at that moment.
Optionally, the main control module calculates the cross-correlation between every two adjacent frames of images by: performing, for each frame in the image group, a spectral transformation via the Hilbert transform to obtain the envelope of the frame; and computing, from the envelopes of each pair of adjacent frames, the cross-correlation value between the two frames by line-by-line convolution.
Optionally, the main control module identifies the target ROI from the image group based on the cross-correlation between every two adjacent frames of images, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin, by: comparing the cross-correlation value of each pair of adjacent frames with a set threshold; and, when the value for a pair of adjacent frames is smaller than the threshold, acquiring the target ROI corresponding to that pair and acquiring the scanning moments of the two frames.
According to the detection method and device based on focused ultrasound provided by the invention, the images scanned by the ultrasonic probe are analyzed to determine where and when scalding may occur; the probe is then controlled to stop scanning the human skin at those moments, so that the skin is not damaged during the cosmetic procedure, and a sound safety-control mechanism is provided.
Drawings
FIG. 1 is a flowchart of the detection method based on focused ultrasound provided by the invention;
FIG. 2 is a flowchart of the cross-correlation calculation between two frames of images;
FIG. 3 is a schematic structural diagram of the ultrasonic beauty detection system provided by the invention.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
FIG. 1 is a flowchart of the detection method based on focused ultrasound. As shown in FIG. 1, the method comprises: 101, scanning human skin with an ultrasonic probe to obtain an image group comprising multiple frames of images; 102, calculating the cross-correlation between every two adjacent frames of images; 103, identifying a target ROI from the image group based on the cross-correlation between each pair of adjacent frames, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin; 104, acquiring the scanning moment corresponding to the target ROI, and controlling the ultrasonic probe to stop energy output to the human skin at that moment.
It can be appreciated that, given the defects described in the Background, the embodiment of the invention provides a method for detecting where and when the skin may be scalded during the anti-aging cosmetic process, so that scalding can be avoided.
During the anti-aging cosmetic process, the ultrasonic probe scans the skin of each part of the human body and the resulting image group is acquired. For the multiple frames in the image group, the cross-correlation between every two adjacent frames is calculated, and a target ROI in the image group can be identified from these cross-correlations. The target ROI is a region where the ultrasonic probe is not in full contact with the skin, and the corresponding body parts may be scalded.
It should be noted that when the ultrasonic probe is in normal contact, the scanned images are clear; what must be identified in the scanned images is a region without ultrasound echo information. That region is the target ROI, and the human skin corresponding to it may be scalded.
After the target ROI is identified, the scanning moment corresponding to it is acquired, and the ultrasonic probe is controlled to stop scanning the human skin from that moment on.
In the embodiment of the invention, the images scanned by the ultrasonic probe are analyzed to determine where and when scalding may occur, and the probe is controlled to stop scanning the human skin at those moments, so that the skin is not damaged during the cosmetic procedure and a sound safety-control mechanism is provided.
In one possible embodiment, scanning with the ultrasonic probe to obtain an image group of human skin, the image group comprising multiple frames of images, comprises: repeatedly scanning the human skin along a straight line between two fixed points with a focused single-element ultrasonic probe to obtain a scanned image group; and ordering the frames of the image group according to the probe's travel path, then recording and storing them.
It is understood that when the ultrasonic probe scans the human skin, its path is a straight line and the scan is repeated. For example, to scan from point A to point B on the skin, the probe scans back and forth between A and B, yielding a group of images called the image group. The frames in the image group are ordered along the probe's travel path, and the scanning moment of each frame is recorded.
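The ordering-and-recording step above can be sketched as follows. This is a minimal illustration only, not the patented implementation: the `ScanRecord`/`ImageGroup` names, the use of a monotonic clock for the scanning moment, and the list-based storage are all assumptions.

```python
import time
from dataclasses import dataclass, field


@dataclass
class ScanRecord:
    """One frame of the image group, ordered along the probe's travel path."""
    index: int        # position in the travel-path sequence
    scan_time: float  # moment at which the frame was acquired
    frame: object     # the ultrasound image data for this frame


@dataclass
class ImageGroup:
    records: list = field(default_factory=list)

    def add_frame(self, frame):
        # Frames arrive in travel-path order; the scan time is stored so the
        # moment matching a later target-ROI detection can be retrieved.
        self.records.append(ScanRecord(index=len(self.records),
                                       scan_time=time.monotonic(),
                                       frame=frame))
```

Keeping the scanning moment alongside each frame is what later allows step 104 to translate a flagged frame pair back into the time at which energy output must stop.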
Here, images with low resolution or poor clarity are removed from all images scanned by the ultrasonic probe, and only clear, high-resolution images are retained.
In one possible embodiment, before calculating the cross-correlation between every two adjacent frames of images, the method comprises: preprocessing each frame in the image group, the preprocessing comprising noise reduction and mask-based signal-to-noise-ratio filtering.
Specifically, a masking method is applied to each frame in the image group for noise-reduction preprocessing.
In one possible embodiment, calculating the cross-correlation between every two adjacent frames of images comprises: for each preprocessed frame in the image group, performing a spectral transformation via the Hilbert transform to obtain the envelope of the frame; and computing, from the envelopes of each pair of adjacent frames, the cross-correlation value between the two frames by line-by-line convolution.
It can be understood that, for the preprocessed frames of the image group, the cross-correlation between every two adjacent frames is calculated. Referring to FIG. 2, for a pair of adjacent frames, for example the i-th frame and the (i+1)-th frame, the envelope of each frame is obtained by spectral transformation via the Hilbert transform, and the cross-correlation between the two frames is computed by line-by-line convolution, yielding the correlation value between the adjacent frames.
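The envelope-plus-correlation step can be sketched as below. This is a simplified illustration under stated assumptions, not the patented algorithm: the frame layout (one scan line per row), the FFT-based discrete Hilbert transform, the per-line normalized correlation, and the final averaging across lines are all choices made for the sketch.

```python
import numpy as np


def envelope(frame):
    """Per-scan-line envelope via the analytic signal (discrete Hilbert transform)."""
    n = frame.shape[-1]
    spectrum = np.fft.fft(frame, axis=-1)
    h = np.zeros(n)            # one-sided spectral weights for the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spectrum * h, axis=-1))


def frame_cross_correlation(frame_a, frame_b):
    """Line-by-line normalized correlation of the two frames' envelopes,
    averaged over all scan lines."""
    env_a, env_b = envelope(frame_a), envelope(frame_b)
    values = []
    for line_a, line_b in zip(env_a, env_b):
        a = line_a - line_a.mean()
        b = line_b - line_b.mean()
        denom = np.sqrt((a @ a) * (b @ b))
        values.append(float(a @ b / denom) if denom > 0 else 0.0)
    return float(np.mean(values))
```

Two frames scanned with the probe in full contact produce near-identical envelopes and a value close to 1; a region without echo information decorrelates the envelopes and pulls the value down.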
In one possible embodiment, identifying a target ROI from the image group based on the cross-correlation between every two adjacent frames of images, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin, comprises: comparing the cross-correlation value of each pair of adjacent frames with a set threshold; and, when the value for a pair of adjacent frames is smaller than the threshold, marking the target ROI corresponding to that pair and recording the associated scanning moment.
Specifically, this step compares the cross-correlation value of each pair of adjacent frames with the set threshold, and when the value is smaller than the threshold, treats that pair of adjacent frames as the target ROI. For example, if the correlation values between frames 1 and 2, frames 2 and 3, and frames 3 and 4 are all larger than the threshold, but the cross-correlation value between frame m and frame (m+1) is smaller than the threshold, then frames m and (m+1) constitute the target ROI; those two frames are acquired, together with the scanning moments at which the probe scanned them.
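The thresholding rule above can be sketched as follows. It is a minimal illustration: the correlation measure (a simple flattened normalized correlation rather than the envelope-based one), the default threshold of 0.8, and the returned record layout are assumptions, not values given in the patent.

```python
import numpy as np


def normalized_correlation(frame_a, frame_b):
    # Simple normalized correlation between two flattened frames
    a = frame_a.ravel() - frame_a.mean()
    b = frame_b.ravel() - frame_b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 0.0


def find_target_roi(frames, scan_times, threshold=0.8):
    """Flag every adjacent frame pair whose correlation drops below the
    threshold, returning the frame indices and scanning moments to report."""
    flagged = []
    for i in range(len(frames) - 1):
        if normalized_correlation(frames[i], frames[i + 1]) < threshold:
            flagged.append({"frames": (i, i + 1),
                            "times": (scan_times[i], scan_times[i + 1])})
    return flagged
```

The recorded times are what the control loop would use to stop energy output at exactly the flagged scanning moments.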
In one possible embodiment, calculating the cross-correlation between every two adjacent frames of images comprises: extracting a first set of frames from all images of the image group at a set interval to form an image subsequence, and calculating the cross-correlation value of every two adjacent frames in the subsequence. Correspondingly, identifying the target ROI from the image group, the target ROI being caused by incomplete contact between the ultrasonic probe and the skin, comprises: when the cross-correlation value between two adjacent subsequence frames is smaller than the set threshold, recording the serial numbers of those two frames within the image group, and retrieving the second set of frames lying between them; calculating the cross-correlation value between every two adjacent frames in that second set; and, when such a value is smaller than the set threshold, acquiring the target ROI corresponding to that pair of frames and recording their scanning moments.
It will be appreciated that, to improve the efficiency of identifying the target ROI, multiple frames may be extracted at a fixed interval from all images acquired by the probe, forming an image subsequence. For example, one frame may be sampled every two frames, so that frames 1, 4, 7, and so on are extracted from the image group to constitute the subsequence.
For the images in the subsequence, the correlation value between every two adjacent frames is calculated and compared against the set threshold. A pair of adjacent subsequence frames whose correlation value is smaller than the threshold is acquired, and the serial numbers of the two frames within the original image group are recorded; for example, if those serial numbers are n and (n+p), the target ROI lies between frame n and frame (n+p).
The (p+1) frames from frame n to frame (n+p) are then taken from the original image group, and the correlation value between every two adjacent frames among them is calculated; likewise, any pair of adjacent frames whose correlation value is smaller than the threshold is taken as the ROI. Screening first over a coarse range and then over a fine one improves the efficiency of identifying the target ROI.
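The coarse-then-fine screening can be sketched as below. Again this is only an illustration: the stride of 3, the 0.8 threshold, and the flattened correlation measure are assumptions, and the sketch inherits the method's own limitation that a transient mismatch whose two coarse endpoints happen to correlate well would be missed at the coarse stage.

```python
import numpy as np


def coarse_to_fine_search(frames, stride=3, threshold=0.8):
    """Search subsampled frame pairs first; refine only inside coarse
    intervals whose correlation falls below the threshold."""
    def corr(frame_a, frame_b):
        a = frame_a.ravel() - frame_a.mean()
        b = frame_b.ravel() - frame_b.mean()
        denom = np.sqrt((a @ a) * (b @ b))
        return float(a @ b / denom) if denom > 0 else 0.0

    coarse = list(range(0, len(frames), stride))
    if coarse[-1] != len(frames) - 1:
        coarse.append(len(frames) - 1)  # cover the tail of the sequence

    hits = []
    for n, n_p in zip(coarse, coarse[1:]):
        if corr(frames[n], frames[n_p]) < threshold:
            # Coarse mismatch between frame n and frame (n+p): re-check every
            # adjacent pair inside the interval to localize the target ROI.
            for k in range(n, n_p):
                if corr(frames[k], frames[k + 1]) < threshold:
                    hits.append((k, k + 1))
    return hits
```

With stride s, the coarse pass costs roughly 1/s of the full adjacent-pair sweep, and the fine pass runs only inside intervals that actually contain a mismatch.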
Once the two frames corresponding to the target ROI are identified, the skin areas corresponding to those frames may be scalded; the scanning moments of the two frames are acquired, and from those moments on, the ultrasonic probe is controlled to stop outputting energy to the human skin so as to avoid scalding.
FIG. 3 is a structural diagram of a beauty detection device based on focused ultrasound according to an embodiment of the present invention. It mainly comprises an ultrasonic probe 31, a main control module 32 and a power module 33.
The ultrasonic probe 31 scans and acquires an image group of human skin, the image group comprising multiple frames of images. The main control module 32 calculates the cross-correlation between every two adjacent frames of images; identifies a target ROI from the image group based on that cross-correlation, the target ROI being caused by incomplete contact between the ultrasonic probe 31 and the skin; and further acquires the scanning moment corresponding to the target ROI and controls the power module 33 to stop outputting energy to the human skin through the ultrasonic probe 31 at that moment.
The main control module 32 calculates the cross-correlation between every two adjacent frames of images by: performing, for each frame in the image group, a spectral transformation via the Hilbert transform to obtain the envelope of the frame; and computing, from the envelopes of each pair of adjacent frames, the cross-correlation value between the two frames by line-by-line convolution.
The main control module 32 identifies the target ROI from the image group, the target ROI being caused by incomplete contact between the ultrasonic probe 31 and the skin, by: comparing the cross-correlation value of each pair of adjacent frames with a set threshold; and, when the value for a pair of adjacent frames is smaller than the threshold, acquiring the target ROI corresponding to that pair and acquiring the scanning moments of the two frames.
It can be understood that the way this focused-ultrasound beauty detection device realizes beauty detection may refer to the relevant technical features of the ultrasonic beauty method provided in the embodiments above, which are not repeated here.
According to the detection method based on focused ultrasound image recognition and the beauty and body-care device provided by the embodiment of the invention, the images scanned by the ultrasonic probe are analyzed to determine where and when scalding may occur; the probe is controlled to stop scanning the human skin at those moments, so that the skin is not damaged during the cosmetic procedure, and a sound safety-control mechanism is provided.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (8)
1. A detection method based on focused ultrasound image recognition, characterized by comprising the following steps:
scanning human skin with an ultrasound probe to obtain an image group, wherein the image group comprises a plurality of frames of images;
calculating the cross-correlation between every two adjacent frames of images;
identifying a target ROI region in the image group based on the cross-correlation between every two adjacent frames of images, wherein the target ROI region is caused by incomplete contact between the ultrasound probe and the skin;
obtaining the scan moment corresponding to the target ROI region, and controlling the ultrasound probe to stop energy output to the human skin at that scan moment;
wherein identifying a target ROI region in the image group based on the cross-correlation between every two adjacent frames of images, the target ROI region being caused by incomplete contact between the ultrasound probe and the skin, comprises:
comparing the cross-correlation value between every two adjacent frames of images with a set threshold, and, when the cross-correlation value between two adjacent frames is smaller than the set threshold, marking the target ROI region corresponding to those two frames and recording the associated scan moment.
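The thresholding step of claim 1 can be illustrated with a minimal Python sketch. This is not code from the patent: the function name, the normalized-correlation formula, and the threshold value are all illustrative assumptions; the claims do not specify how the cross-correlation value is normalized.

```python
import numpy as np

def flag_loose_contact(frames, threshold, frame_times):
    """Flag scan moments where adjacent frames decorrelate.

    frames: list of 2D arrays, one per scan frame
    threshold: cross-correlation value below which probe-skin contact
               is assumed incomplete (hypothetical tuning value)
    frame_times: scan timestamp of each frame
    Returns the timestamps at which energy output should be stopped.
    """
    stop_times = []
    for i in range(len(frames) - 1):
        a = frames[i].ravel().astype(float)
        b = frames[i + 1].ravel().astype(float)
        # Pearson-style normalized cross-correlation of the two frames.
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        ncc = float(np.dot(a, b) / a.size)
        if ncc < threshold:
            stop_times.append(frame_times[i + 1])
    return stop_times
```

A pair of nearly identical frames stays above any reasonable threshold, while a frame captured with the probe partly lifted decorrelates and gets its timestamp flagged.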
2. The detection method based on focused ultrasound image recognition according to claim 1, wherein scanning with the ultrasound probe to obtain an image group of human skin, the image group comprising a plurality of frames of images, comprises:
repeatedly scanning the human skin along a straight line between fixed points with a focused single-element ultrasound probe to obtain a scanned image group;
sorting the frames in the image group according to the travel path of the ultrasound probe, and recording and storing them.
3. The detection method based on focused ultrasound image recognition according to claim 1 or 2, wherein before calculating the cross-correlation between every two adjacent frames of images, the method comprises: preprocessing each frame of image in the image group, the preprocessing comprising noise reduction and mask signal-to-noise-ratio filtering.
4. The detection method based on focused ultrasound image recognition according to claim 3, wherein calculating the cross-correlation between every two adjacent frames of images comprises:
for each frame of image in the preprocessed image group, performing a spectral transformation via the Hilbert transform to obtain the envelope of the image;
calculating the cross-correlation value between every two adjacent frames of images from their envelopes using a line-by-line convolution method.
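The envelope step of claim 4 can be sketched as follows. This is an illustrative reading, not the patent's implementation: the Hilbert transform is realized here with an FFT-based analytic signal, the "line-by-line convolution" is rendered as a per-line normalized correlation averaged over lines, and the axis conventions (lines as rows, depth as columns) are assumptions.

```python
import numpy as np

def analytic_envelope(frame):
    """Envelope of each scan line via an FFT-based Hilbert transform
    (analytic signal) along the depth axis (columns)."""
    n = frame.shape[1]
    spec = np.fft.fft(frame, axis=1)
    # Standard one-sided spectrum weighting for the analytic signal.
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h, axis=1))

def envelope_cross_correlation(frame_a, frame_b):
    """Mean per-line normalized correlation of two envelope images,
    as one possible reading of 'line-by-line' comparison."""
    env_a = analytic_envelope(frame_a)
    env_b = analytic_envelope(frame_b)
    coeffs = []
    for la, lb in zip(env_a, env_b):
        la = la - la.mean()
        lb = lb - lb.mean()
        denom = np.sqrt((la @ la) * (lb @ lb)) + 1e-12
        coeffs.append(float(la @ lb / denom))
    return float(np.mean(coeffs))
```

With this sketch, an identical frame pair yields a value near 1, while a frame whose echo pattern has shifted (e.g. because the probe lost contact) yields a clearly lower value.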
5. The detection method based on focused ultrasound image recognition according to claim 1, wherein calculating the cross-correlation between every two adjacent frames of images comprises:
extracting a first plurality of frames from all images of the image group at a set interval as an image subsequence, and calculating the cross-correlation value of every two adjacent frames in the image subsequence;
correspondingly, identifying a target ROI region in the image group based on the cross-correlation between every two adjacent frames of images, the target ROI region being caused by incomplete contact between the ultrasound probe and the skin, comprises:
when the cross-correlation value between two adjacent frames of the subsequence is smaller than a set threshold, recording the serial numbers of those two frames in the image group, and obtaining a second plurality of frames lying between them in the image group;
calculating the cross-correlation value between every two adjacent frames of the second plurality of frames;
when the cross-correlation value between two adjacent frames is smaller than the set threshold, obtaining the target ROI region corresponding to those two frames, and recording the scan moments of those frames.
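Claim 5 describes a coarse-to-fine search: only every N-th frame is checked first, and the full adjacent-pair check runs only inside intervals where the coarse pass detects decorrelation. A minimal sketch under stated assumptions (the function and parameter names are hypothetical, and `ncc` stands for any cross-correlation function over two frames, assumed supplied):

```python
def coarse_to_fine_roi(frames, ncc, threshold, step):
    """Two-stage search for poorly coupled frames.

    frames: sequence of frames in scan order
    ncc: callable (frame, frame) -> cross-correlation value
    threshold: decorrelation threshold (hypothetical tuning value)
    step: coarse sampling interval (the 'set interval' of claim 5)
    Returns indices of frames flagged as target ROI.
    """
    flagged = []
    coarse = list(range(0, len(frames), step))
    for i, j in zip(coarse, coarse[1:]):
        if ncc(frames[i], frames[j]) < threshold:
            # Fine pass: check every adjacent pair between the
            # two coarse frames that decorrelated.
            for k in range(i, j):
                if ncc(frames[k], frames[k + 1]) < threshold:
                    flagged.append(k + 1)
    return flagged
```

The payoff is that well-coupled stretches of the scan cost only one comparison per `step` frames; the full per-pair cost is paid only around suspected contact loss.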
6. A beauty and body-building device based on focused ultrasound image recognition, characterized by comprising an ultrasound probe, a main control module and a power module;
the ultrasound probe is configured to scan and acquire an image group of human skin, wherein the image group comprises a plurality of frames of images;
the main control module is configured to calculate the cross-correlation between every two adjacent frames of images; to identify a target ROI region in the image group based on the cross-correlation between every two adjacent frames of images, wherein the target ROI region is caused by incomplete contact between the ultrasound probe and the skin; and further to obtain the scan moment corresponding to the target ROI region and to control the power module, at that scan moment, to stop outputting energy to the human skin through the ultrasound probe;
wherein identifying a target ROI region in the image group based on the cross-correlation between every two adjacent frames of images, the target ROI region being caused by incomplete contact between the ultrasound probe and the skin, comprises:
comparing the cross-correlation value between every two adjacent frames of images with a set threshold, and, when the cross-correlation value between two adjacent frames is smaller than the set threshold, marking the target ROI region corresponding to those two frames and recording the associated scan moment.
7. The beauty and body-building device based on focused ultrasound image recognition according to claim 6, wherein the main control module being configured to calculate the cross-correlation between every two adjacent frames of images comprises:
for each frame of image in the image group, performing a spectral transformation via the Hilbert transform to obtain the envelope of the image;
calculating the cross-correlation value between every two adjacent frames of images from their envelopes using a line-by-line convolution method.
8. The beauty and body-building device based on focused ultrasound image recognition according to claim 7, wherein the main control module identifying a target ROI region in the image group based on the cross-correlation between every two adjacent frames of images, the target ROI region being caused by incomplete contact between the ultrasound probe and the skin, comprises:
comparing the cross-correlation value between every two adjacent frames of images with a set threshold, and, when the cross-correlation value between two adjacent frames is smaller than the set threshold, obtaining the target ROI region corresponding to those two frames and obtaining the scan moments of those frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110933029.5A CN113616945B (en) | 2021-08-13 | 2021-08-13 | Detection method based on focused ultrasonic image recognition and beauty and body-building device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113616945A CN113616945A (en) | 2021-11-09 |
CN113616945B true CN113616945B (en) | 2024-03-08 |
Family
ID=78385409
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110933029.5A Active CN113616945B (en) | 2021-08-13 | 2021-08-13 | Detection method based on focused ultrasonic image recognition and beauty and body-building device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113616945B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116502019B (en) * | 2023-04-27 | 2024-06-25 | 广东花至美容科技有限公司 | Skin collagen protein lifting rate calculation method, storage medium and electronic equipment |
CN117442895B (en) * | 2023-12-26 | 2024-03-05 | 广州中科医疗美容仪器有限公司 | Ultrasonic automatic control method and system based on machine learning |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1555760A (en) * | 2004-01-02 | 2004-12-22 | 清华大学 | Ultrasonic temperature measuring bivalue image fuzzy tracing method |
CN102085411A (en) * | 2009-12-03 | 2011-06-08 | 林心一 | Intelligent use-safety detection method of ultrasonic treatment device
CN103156636A (en) * | 2011-12-15 | 2013-06-19 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic imaging device and method |
CN109674494A (en) * | 2019-01-29 | 2019-04-26 | 深圳瀚维智能医疗科技有限公司 | Ultrasonic scan real-time control method, device, storage medium and computer equipment |
CN110432928A (en) * | 2019-08-22 | 2019-11-12 | 深圳瀚维智能医疗科技有限公司 | Ultrasound image checking method, device and equipment |
CN110974294A (en) * | 2019-12-19 | 2020-04-10 | 上海尽星生物科技有限责任公司 | Ultrasonic scanning method and device |
CN111227864A (en) * | 2020-01-12 | 2020-06-05 | 刘涛 | Method and apparatus for lesion detection using ultrasound image using computer vision |
CN111449680A (en) * | 2020-01-14 | 2020-07-28 | 深圳大学 | Optimization method of ultrasonic scanning path and ultrasonic equipment |
CN112472133A (en) * | 2020-12-22 | 2021-03-12 | 深圳市德力凯医疗设备股份有限公司 | Posture monitoring method and device for ultrasonic probe |
CN112971842A (en) * | 2019-12-16 | 2021-06-18 | 深圳市理邦精密仪器股份有限公司 | Method and device for controlling low power consumption of ultrasonic diagnostic equipment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5649083B2 (en) * | 2010-07-14 | 2015-01-07 | 株式会社日立メディコ | Image restoration method and apparatus for ultrasonic image, and ultrasonic diagnostic apparatus |
US20190159762A1 (en) * | 2012-11-28 | 2019-05-30 | Shenzhen Mindray Bio-Medical Electronics Co., Ltd. | System and method for ultrasound elastography and method for dynamically processing frames in real time |
CN105877780B (en) * | 2015-08-25 | 2019-05-31 | 上海深博医疗器械有限公司 | Fully-automatic ultrasonic scanner and scanning detection method |
WO2019061148A1 (en) * | 2017-09-28 | 2019-04-04 | 北京匡图医疗科技有限公司 | Ultrasonic dynamic image processing method and apparatus, and ultrasonic camera device |
2021-08-13: application CN202110933029.5A filed in China; granted as patent CN113616945B, status Active.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113616945B (en) | Detection method based on focused ultrasonic image recognition and beauty and body-building device | |
CN108510502B (en) | Melanoma image tissue segmentation method and system based on deep neural network | |
Wang et al. | Sentence recognition from articulatory movements for silent speech interfaces | |
CN106951753B (en) | Electrocardiosignal authentication method and device | |
CN101502425B (en) | System and method for detecting characteristic of vocal cord vibration mechanics | |
JP2017093760A (en) | Device and method for measuring periodic variation interlocking with heart beat | |
CN111339806A (en) | Training method of lip language recognition model, living body recognition method and device | |
CN110428364B (en) | Method and device for expanding Parkinson voiceprint spectrogram sample and computer storage medium | |
CN112070685B (en) | Method for predicting dynamic soft tissue movement of HIFU treatment system | |
Hagedorn et al. | Automatic Analysis of Singleton and Geminate Consonant Articulation Using Real-Time Magnetic Resonance Imaging. | |
CN108648745B (en) | Method for converting lip image sequence into voice coding parameter | |
JP2010197998A (en) | Audio signal processing system and autonomous robot having such system | |
Douros et al. | Towards a method of dynamic vocal tract shapes generation by combining static 3D and dynamic 2D MRI speech data | |
CN110602978A (en) | System and method for extracting physiological information from video sequence | |
CN115831352B (en) | Detection method based on dynamic texture features and time slicing weight network | |
KR101413853B1 (en) | Method and apparatus for measuring physiological signal usuing infrared image | |
Mannem et al. | Acoustic and Articulatory Feature Based Speech Rate Estimation Using a Convolutional Dense Neural Network. | |
CN111829956B (en) | Photoacoustic endoscopic quantitative tomography method and system based on layered guidance of ultrasonic structure | |
Palo | Can we detect initiation of tongue internal changes before overt movement onset in ultrasound? | |
Wang et al. | Vowel recognition from continuous articulatory movements for speaker-dependent applications | |
Talea et al. | Automatic combined lip segmentation in color images | |
Kalinli | Syllable Segmentation of Continuous Speech Using Auditory Attention Cues. | |
Hsieh et al. | Pharyngeal constriction in English diphthong production | |
Naeini et al. | Automated temporal segmentation of orofacial assessment videos | |
JP6686553B2 (en) | Response quality evaluation program, response quality evaluation method and response quality evaluation device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||