CN111640114B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111640114B
Authority
CN
China
Prior art keywords
contour
position information
coordinate system
processing
under
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010551099.XA
Other languages
Chinese (zh)
Other versions
CN111640114A (en)
Inventor
吴振洲
许文勇
李彦康
张培芳
Current Assignee
Beijing Ande Yizhi Technology Co ltd
Original Assignee
Beijing Ande Yizhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Ande Yizhi Technology Co ltd
Priority to CN202010551099.XA
Publication of CN111640114A
Application granted
Publication of CN111640114B
Legal status: Active


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T5/00 Image enhancement or restoration
                    • G06T5/70 Denoising; Smoothing
                • G06T7/00 Image analysis
                    • G06T7/0002 Inspection of images, e.g. flaw detection
                        • G06T7/0012 Biomedical image inspection
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/12 Edge-based segmentation
                        • G06T7/13 Edge detection
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10072 Tomographic images
                            • G06T2207/10081 Computed x-ray tomography [CT]
                            • G06T2207/10088 Magnetic resonance imaging [MRI]
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20048 Transform domain processing
                        • G06T2207/20172 Image enhancement details
                            • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
                            • G06T2207/20192 Edge enhancement; Edge preservation
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30004 Biomedical image processing
                            • G06T2207/30016 Brain
                            • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, the method comprising: performing target detection processing on an image to be processed to obtain a first contour of a target in the image to be processed under a first coordinate system; when the first contour is incomplete, performing coordinate transformation processing on first position information of the first contour to obtain a second contour under a second coordinate system; determining third position information of a complete third contour according to second position information of the second contour; and performing coordinate transformation processing on the third position information to obtain a fourth contour of the target under the first coordinate system. According to the image processing method of the embodiments of the present disclosure, when the detected contour of the target is incomplete, the contour can be completed based on the coordinate transformation processing and the second position information of the contour pixel points after transformation, so that a complete target area is obtained, detection efficiency and accuracy are improved, and the visual effect of the detection result is optimized.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of computers, and in particular, to an image processing method and apparatus.
Background
With the development of computer vision technology, deep learning neural network methods can be applied to medical images, such as magnetic resonance imaging (Magnetic Resonance Imaging, MRI) or computed tomography (Computed Tomography, CT) images, to provide a basis for diagnosis and treatment. In the related art, a rectangular box may be used to mark the location of a target area (e.g., a lesion area or an organ tissue), but complex details (e.g., curvature) cannot be expressed by a rectangular box; to determine such details, the boundary contour of the target area must also be determined so that the target area can be segmented.
Tumor margins are usually classified by medical professionals based on visual cues and clinical experience, which makes diagnosis difficult and accuracy low. Deep learning neural networks are often unsuitable for detecting certain lesions in medicine; for example, a tumor contour may lack visually apparent edges, resulting in an incomplete detected contour that cannot form a complete area, so no segmentation mask can be obtained to segment the target area.
Disclosure of Invention
In view of this, the present disclosure proposes an image processing method and apparatus.
According to an aspect of the present disclosure, there is provided an image processing method including: performing target detection processing on an image to be processed to obtain a first contour of a target in the image to be processed under a first coordinate system; when the first contour is incomplete, carrying out coordinate transformation processing on first position information of the pixel points of the first contour under the first coordinate system to obtain a second contour under a second coordinate system; determining third position information of the complete third contour pixel point under the second coordinate system according to the second position information of the second contour pixel point under the second coordinate system; and carrying out coordinate transformation processing on the third position information to obtain a fourth contour of the target under the first coordinate system, wherein the fourth contour is a complete contour.
In one possible implementation manner, determining third position information of a pixel point of the complete third contour in the second coordinate system according to second position information of the pixel point of the second contour in the second coordinate system includes: performing interpolation processing according to the second position information to obtain a position representation of the pixel points of the second contour under the second coordinate system; obtaining fourth position information of a plurality of pixel points of the complete fifth contour according to the position representation; performing frequency domain transformation processing on the fourth position information to obtain a first frequency domain representation of the fourth position information; performing attenuation processing on a predetermined frequency response of the first frequency domain representation to obtain a second frequency domain representation; and performing frequency domain inverse transformation on the second frequency domain representation to obtain third position information of the pixel points of the complete third contour under the second coordinate system.
In a possible implementation, the second coordinate system is a polar coordinate system, the second position information includes a polar coordinate angle and a radial distance of each pixel point of the second contour, and the position representation includes the relationship between the polar coordinate angle and the radial distance of the pixel points of the second contour.
In one possible implementation manner, obtaining fourth position information of a plurality of pixel points of the complete fifth contour according to the position representation includes: uniformly setting a plurality of polar coordinate angles; and determining radial distances corresponding to the polar coordinate angles according to the position representation.
In one possible implementation manner, the second coordinate system is a polar coordinate system, and the performing coordinate transformation processing on the first position information of the pixel point of the first contour in the first coordinate system to obtain a second contour in the second coordinate system includes: performing weighted average processing on first position information of a plurality of pixel points of the first contour to obtain a reference point; and taking the reference point as a central point of the polar coordinate system, performing polar coordinate transformation processing on the first position information to obtain second position information of the pixel point of the second contour under a second coordinate system, wherein the second position information comprises the polar coordinate angle and the radial distance of the pixel point.
In one possible implementation manner, performing object detection processing on an image to be processed to obtain a first contour of an object in the image to be processed under a first coordinate system, where the method includes: inputting the image to be processed into a detection network for detection processing to obtain a target contour of the target area; and carrying out binarization processing on pixel values of the pixel points of the target contour to obtain the first contour.
In one possible implementation, the image to be processed comprises a medical image.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: the detection module is used for carrying out target detection processing on the image to be processed to obtain a first contour of a target in the image to be processed under a first coordinate system; the first transformation module is used for carrying out coordinate transformation processing on first position information of the pixel points of the first contour under the first coordinate system when the first contour is incomplete, so as to obtain a second contour under a second coordinate system; the supplementing module is used for determining third position information of the pixel points of the complete third contour under the second coordinate system according to the second position information of the pixel points of the second contour under the second coordinate system; and the second transformation module is used for carrying out coordinate transformation processing on the third position information to obtain a fourth contour of the target under the first coordinate system, wherein the fourth contour is a complete contour.
In one possible implementation, the supplemental module is further configured to: perform interpolation processing according to the second position information to obtain a position representation of the pixel points of the second contour under the second coordinate system; obtain fourth position information of a plurality of pixel points of the complete fifth contour according to the position representation; perform frequency domain transformation processing on the fourth position information to obtain a first frequency domain representation of the fourth position information; perform attenuation processing on a predetermined frequency response of the first frequency domain representation to obtain a second frequency domain representation; and perform frequency domain inverse transformation on the second frequency domain representation to obtain third position information of the pixel points of the complete third contour under the second coordinate system.
In a possible implementation, the second coordinate system is a polar coordinate system, the second position information includes a polar coordinate angle and a radial distance of each pixel point of the second contour, and the position representation includes the relationship between the polar coordinate angle and the radial distance of the pixel points of the second contour.
In one possible implementation, the supplemental module is further configured to: uniformly setting a plurality of polar coordinate angles; and determining radial distances corresponding to the polar coordinate angles according to the position representation.
In one possible implementation, the second coordinate system is a polar coordinate system, and the first transformation module is further configured to: performing weighted average processing on first position information of a plurality of pixel points of the first contour to obtain a reference point; and taking the reference point as a central point of the polar coordinate system, performing polar coordinate transformation processing on the first position information to obtain second position information of the pixel point of the second contour under a second coordinate system, wherein the second position information comprises the polar coordinate angle and the radial distance of the pixel point.
In one possible implementation, the detection module is further configured to: inputting the image to be processed into a detection network for detection processing to obtain a target contour of the target area; and carrying out binarization processing on pixel values of the pixel points of the target contour to obtain the first contour.
In one possible implementation, the image to be processed comprises a medical image.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described method.
According to the image processing method of the embodiments of the present disclosure, when the detected contour of the target is incomplete, the contour can be completed based on the coordinate transformation processing and the second position information of the contour pixel points after transformation, so that a complete target area is obtained, detection efficiency and accuracy are improved, and the visual effect of the detection result is optimized.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow chart of an image processing method according to an embodiment of the present disclosure;
fig. 2A and 2B illustrate application diagrams of an image processing method according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of detecting network performance evaluations according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of detecting network performance evaluations according to an embodiment of the present disclosure;
fig. 5 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 6 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 7 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure, as shown in fig. 1, the method including:
in step S11, performing object detection processing on an image to be processed, to obtain a first contour of an object in the image to be processed under a first coordinate system;
in step S12, when the first contour is incomplete, performing coordinate transformation processing on first position information of the pixel point of the first contour in the first coordinate system to obtain a second contour in a second coordinate system;
in step S13, third position information of the pixel point of the complete third contour in the second coordinate system is determined according to the second position information of the pixel point of the second contour in the second coordinate system;
in step S14, coordinate transformation processing is performed on the third position information, so as to obtain a fourth contour of the target under the first coordinate system, where the fourth contour is a complete contour.
According to the image processing method of the embodiments of the present disclosure, when the detected contour of the target is incomplete, the contour can be completed based on the coordinate transformation processing and the second position information of the contour pixel points after transformation, so that a complete target area is obtained, detection efficiency and accuracy are improved, and the visual effect of the detection result is optimized.
In one possible implementation, the image to be processed may include a medical image, such as a magnetic resonance imaging (Magnetic Resonance Imaging, MRI) or computed tomography (Computed Tomography, CT) image; it may also include other types of images, such as street-view images or portrait images. The present disclosure does not limit the type of the image to be processed.
In a possible implementation manner, in step S11, the image to be processed may be subjected to target detection processing to obtain the target area where a target in the image is located; the target may include a predetermined organ, tissue, or lesion (for example, a tumor), and the first contour of the target area in the first coordinate system may be obtained through the target detection processing. The first coordinate system may be an image coordinate system: for example, if the image to be processed is a three-dimensional image, the first coordinate system may be a three-dimensional Cartesian coordinate system; if it is a two-dimensional image, the first coordinate system may be a two-dimensional Cartesian coordinate system. The dimensionality of the first coordinate system is not limited in this disclosure.
In one possible implementation, step S11 may include: inputting the image to be processed into a detection network for detection processing to obtain a target contour of the target area; and carrying out binarization processing on pixel values of the pixel points of the target contour to obtain the first contour.
In one possible implementation manner, the detection network may be a deep learning neural network, for example, a convolutional neural network, or the like, and may be used to segment a target area where a target in the image to be processed is located, so as to obtain a first contour of the target area.
In an example, the detection network may detect a complete first contour of the target area where the target is located; for example, when applied to a portrait image, the detection network may segment out the complete contour of the portrait. However, the detection network may not be suited to certain types of images; for example, it may not be suited to medical images, in which case it may fail to accurately obtain the target area in the medical image or the complete first contour, and may therefore output an incomplete first contour.
In one possible implementation, the first contour may include a plurality of pixel points, each of which has position information, i.e., first position information in the first coordinate system; for example, the first position information may be represented as coordinates in the first coordinate system. The pixel points of the first contour also have pixel values, which may be binarized so that the first contour becomes clearer and its contrast is enhanced. In an example, a pixel whose value is greater than or equal to a preset threshold may be set to 255 and otherwise to 0; the present disclosure does not limit the chosen pixel values.
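The binarization described above can be sketched as follows. This is a minimal illustration using NumPy (the disclosure does not name a library, and the threshold of 128, the 0/255 output values, and the function name `binarize_contour` are assumptions of this example):

```python
import numpy as np

def binarize_contour(mask, threshold=128):
    """Set pixels at or above `threshold` to 255 and the rest to 0,
    sharpening the detected target contour. The threshold and the
    0/255 values are illustrative, not fixed by the disclosure."""
    mask = np.asarray(mask)
    return np.where(mask >= threshold, 255, 0).astype(np.uint8)

# A faint contour map becomes a crisp binary contour.
faint = np.array([[10, 200], [130, 90]])
binary = binarize_contour(faint)
```

In practice the threshold would be tuned to the output range of the detection network.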
In one possible implementation, in step S12, if the first contour is incomplete, the incomplete contour may be completed into a full contour to obtain a complete target area, i.e., to determine the location of the target. The first position information of the pixel points of the first contour in the first coordinate system may be subjected to coordinate transformation processing to obtain a second contour in the second coordinate system. In an example, the second coordinate system may be a polar coordinate system: a planar polar coordinate system when the first coordinate system is a two-dimensional Cartesian coordinate system, and a spatial polar coordinate system when it is a three-dimensional Cartesian coordinate system. The present disclosure does not limit the type of the second coordinate system.
In one possible implementation, step S12 may include: performing weighted average processing on first position information of a plurality of pixel points of the first contour to obtain a reference point; and taking the reference point as a central point of the polar coordinate system, performing polar coordinate transformation processing on the first position information to obtain second position information of the pixel point of the second contour under a second coordinate system, wherein the second position information comprises the polar coordinate angle and the radial distance of the pixel point.
In one possible implementation, in the case where the second coordinate system is a polar coordinate system, a center point of the polar coordinate system may be determined, first position information of a plurality of pixel points of the first contour (i.e., coordinates under the first coordinate system) may be subjected to weighted average processing, the resulting average coordinates may be used as a reference point, and the reference point may be used as the center point of the polar coordinate system.
In one possible implementation, after determining the center point of the polar coordinate system, the first position information (i.e., coordinates in the cartesian coordinate system) may be subjected to a polar coordinate transformation process, and converted into second position information in the polar coordinate system, i.e., an angle and a radial distance (a linear distance from the center point) in the polar coordinate system, to obtain a second contour in the polar coordinate system.
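The polar coordinate transformation about the averaged reference point can be sketched as follows. A plain (unweighted) mean stands in for the weighted average, since the text does not specify the weights; NumPy and the function name `to_polar` are assumptions of this example:

```python
import numpy as np

def to_polar(points):
    """Transform 2-D contour pixel coordinates (x, y) into polar
    coordinates (angle, radial distance) about the contour's mean
    point, which serves as the center of the polar coordinate system."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                  # reference point
    dx, dy = (pts - center).T
    angles = np.arctan2(dy, dx) % (2 * np.pi)  # polar angle in [0, 2*pi)
    radii = np.hypot(dx, dy)                   # radial (straight-line) distance
    return center, angles, radii

# Four pixels symmetric about the origin: center is (0, 0),
# all radii are 1, angles are 0, pi/2, pi, 3*pi/2.
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
center, angles, radii = to_polar(square)
```

A genuinely weighted average (e.g., weighting by pixel intensity) would only change the `center` computation.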
In an example, since the first contour is incomplete, the second contour is also incomplete. For example, when every angle from 0° to 360° has a corresponding radial distance, the contour is complete; in the second contour, however, some angles have no corresponding radial distance, and the second contour is therefore incomplete.
In one possible implementation, the second coordinate system may be a complex space, such as a quaternion space, etc., and the disclosure is not limited to the type of second coordinate system.
In one possible implementation, in step S13, the incomplete second contour may be completed into a complete third contour in the polar coordinate system. For example, a complete contour may be obtained by traversing a plurality of angles from 0° to 360° and determining the radial distance corresponding to each angle.
In one possible implementation, step S13 may include: performing interpolation processing according to the second position information to obtain a position representation of the pixel points of the second contour under the second coordinate system; obtaining fourth position information of a plurality of pixel points of the complete fifth contour according to the position representation; performing frequency domain transformation processing on the fourth position information to obtain a first frequency domain representation of the fourth position information; performing attenuation processing on a predetermined frequency response of the first frequency domain representation to obtain a second frequency domain representation; and performing frequency domain inverse transformation on the second frequency domain representation to obtain third position information of the pixel points of the complete third contour under the second coordinate system.
In a possible implementation, the second coordinate system is a polar coordinate system, the second position information includes a polar coordinate angle and a radial distance of each pixel point of the second contour, and the position representation includes the relationship between the polar coordinate angle and the radial distance of the pixel points of the second contour.
In an example, the second contour may include a plurality of pixel points, each having second position information, i.e., a polar coordinate angle and a radial distance. The position representation of the pixel points of the second contour in the second coordinate system, i.e., the relationship between polar coordinate angle and radial distance, may be determined through interpolation processing; for example, piecewise interpolation or spline interpolation may be used to determine this relationship and thereby obtain the position representation.
In one possible implementation, the incomplete second contour may be completed into a full contour based on the position representation. Obtaining the fourth position information of a plurality of pixel points of the complete fifth contour according to the position representation includes: uniformly setting a plurality of polar coordinate angles; and determining the radial distance corresponding to each polar coordinate angle according to the position representation.
In one possible implementation, the second contour is incomplete, i.e., only some angles have a corresponding radial distance while others do not. The position representation may be used to determine the radial distance at a plurality of angles from 0° to 360°.
In an example, a plurality of polar coordinate angles may be uniformly set in the range of 0° to 360°; more angles may be set for finer sampling, e.g., one polar coordinate angle every 1°, every 0.1°, or every 0.01°. The present disclosure does not limit the angular interval or the number of angles.
In an example, the position representation, i.e., the relationship between polar coordinate angle and radial distance, may be used to determine the radial distance corresponding to each set polar coordinate angle, yielding radial distances at a plurality of angles in the range of 0° to 360°. In this way, the fourth position information of pixel points at a plurality of angles in the range of 0° to 360°, i.e., the fourth position information of the fifth contour, is obtained; the more angles are set and the smaller the angular interval, the closer the result is to a complete contour.
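The uniform angle sampling and interpolation described above can be sketched as follows. Linear interpolation with a 2π period stands in for the piecewise or spline interpolation the text allows; NumPy and the function name `resample_contour` are assumptions of this example:

```python
import numpy as np

def resample_contour(angles, radii, num=360):
    """Interpolate the (angle, radius) samples of an incomplete polar
    contour and read the radius off at `num` uniformly spaced angles,
    yielding a gap-free contour."""
    angles = np.asarray(angles, dtype=float)
    radii = np.asarray(radii, dtype=float)
    order = np.argsort(angles)
    grid = np.linspace(0, 2 * np.pi, num, endpoint=False)
    # period=2*pi lets the interpolation wrap around the 0/360-degree seam.
    full = np.interp(grid, angles[order], radii[order], period=2 * np.pi)
    return grid, full

# A contour sampled at only three angles (a large arc is missing)
# is filled in at 8 uniform angles; all radii here are 2.0.
angles = np.array([0.0, np.pi / 2, 3 * np.pi / 2])
radii = np.array([2.0, 2.0, 2.0])
grid, full = resample_contour(angles, radii, num=8)
```

Increasing `num` (e.g., to 3600 for 0.1° steps) gives the finer sampling the text describes.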
In one possible implementation, the fourth location information of the plurality of pixels of the fifth contour may be subjected to a frequency domain transformation, for example, a radial distance corresponding to each angle may be subjected to a frequency domain transformation, and a first frequency domain representation of the fourth location information may be obtained.
In an example, the radial distance may be subjected to a frequency domain transform process by means of a fast fourier transform or a wavelet transform, etc., to obtain the first frequency domain representation of the fourth location information. The present disclosure does not limit the manner in which the frequency domain is transformed.
In one possible implementation, the first frequency domain representation may include responses of multiple frequencies, and to smooth the profile, noise disturbances of the high frequency response may be removed, reducing glitches in the profile. In an example, a predetermined frequency band (e.g., a high frequency band) may be set and a predetermined frequency response within the predetermined frequency band is attenuated to remove high frequency noise interference.
In an example, the parameter of the predetermined frequency (e.g., high frequency) response may be set to zero, i.e., the predetermined frequency response is completely removed. Alternatively, the parameters of the predetermined frequency response may be multiplied by an attenuation coefficient (e.g., an exponential attenuation coefficient, etc.) such that the response of the predetermined frequency is reduced. The present disclosure does not limit the manner in which high frequency noise is removed.
In a possible implementation, after attenuating the predetermined frequency response, a second frequency domain representation may be obtained, and the second frequency domain representation may be subjected to a frequency domain inverse transformation, i.e. third position information of the pixels of the third contour in the second coordinate system may be obtained. The third profile is a complete and smooth profile.
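The frequency-domain smoothing pipeline above (transform, attenuate a predetermined high-frequency band, inverse transform) could be sketched with a fast Fourier transform as follows. The `keep` cutoff index is an illustrative parameter not taken from the disclosure, and zeroing the coefficients is just one of the attenuation options mentioned; multiplying them by an exponential attenuation coefficient would also fit the described scheme.

```python
import numpy as np

def smooth_radii_fft(radii, keep=8):
    """Smooth a contour by attenuating high-frequency responses of its
    radial-distance sequence: FFT -> zero the predetermined (high)
    frequency band -> inverse FFT."""
    spectrum = np.fft.rfft(radii)    # first frequency domain representation
    spectrum[keep:] = 0.0            # attenuate (here: fully remove) the high band
    # The inverse transform yields the smoothed radial distances, i.e.
    # third position information in the polar coordinate system.
    return np.fft.irfft(spectrum, n=len(radii))
```

For example, a high-frequency ripple superimposed on a circular contour falls entirely in the removed band, so the smoothed radii return to the circle.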
In this way, by converting the first position information into second position information in the polar coordinate system and taking the polar coordinate angle as the argument, the contour can be complemented as the angles are complemented, and high-frequency noise can be reduced through frequency domain transformation and attenuation of the predetermined frequency response, thereby smoothing the contour.
In one possible implementation, in step S14, the third position information of each pixel point of the third contour may be subjected to coordinate transformation, so as to convert the third contour under the second coordinate system into the fourth contour under the first coordinate system. That is, the third position information of each pixel point of the third contour in the polar coordinate system is converted into coordinates in the Cartesian coordinate system, obtaining the complete and smooth fourth contour.
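The conversion back from polar to Cartesian coordinates is a standard transformation and might be sketched as below; the function name and the (x, y) center convention are illustrative assumptions.

```python
import numpy as np

def polar_to_cartesian(angles_deg, radii, center):
    """Convert polar positions (angle in degrees, radial distance)
    about `center` back to Cartesian image coordinates."""
    theta = np.deg2rad(np.asarray(angles_deg, dtype=float))
    radii = np.asarray(radii, dtype=float)
    x = center[0] + radii * np.cos(theta)
    y = center[1] + radii * np.sin(theta)
    return np.stack([x, y], axis=-1)
```

Applied to the smoothed third position information, this yields the pixel coordinates of the complete and smooth fourth contour in the image coordinate system.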
According to the image processing method of the embodiment of the present disclosure, when the contour of the detected object is incomplete, the first position information can be converted into second position information in the polar coordinate system; with the polar coordinate angle as the argument, the contour can be complemented as the angles are complemented, and high-frequency noise can be reduced through frequency domain transformation and attenuation of the predetermined frequency response, thereby smoothing the contour. A complete target area can thus be obtained, which improves detection efficiency and precision and optimizes the visual effect of the detection result.
Fig. 2A and 2B illustrate application diagrams of an image processing method according to an embodiment of the present disclosure, where the image to be processed may be a human brain CT image, and the lesion site in the brain CT image may be identified and segmented by a detection network.
In one possible implementation, as shown in fig. 2A, when the detection network is not well suited to the detection of medical images, the target area where the lesion site in the brain CT image is located cannot be accurately obtained, and only an incomplete first contour is obtained.
In one possible implementation, the incomplete first profile may be complemented to obtain a complete target area. The first position information of the pixel point of the first contour in the first coordinate system (image coordinate system, i.e., cartesian coordinate system) may be subjected to coordinate transformation processing to obtain the second contour in the second coordinate system (polar coordinate system).
In an example, the first position information (coordinates) of the pixel point of the first contour in the first coordinate system may be weighted-averaged to obtain a reference point, i.e., a center point of the polar coordinate system. The first position information may then be converted into polar coordinates with the reference point as the center point of the polar coordinate system, obtaining second position information (polar coordinate angle and radial distance) of the second profile.
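The reference-point computation and polar conversion described above could be sketched as follows. Equal weights are assumed for the weighted average (i.e., the centroid of the contour pixels); the disclosure does not fix the weighting, so this is one illustrative choice.

```python
import numpy as np

def cartesian_to_polar(points):
    """Average the contour pixels to obtain the reference point (equal
    weights assumed), then express each pixel as a polar angle in
    degrees and a radial distance about that point."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)   # weighted average with equal weights
    dx, dy = (points - center).T
    angles = np.degrees(np.arctan2(dy, dx)) % 360.0
    radii = np.hypot(dx, dy)
    return center, angles, radii
```

The returned angles and radii are the second position information of the second contour, ready for interpolation over the polar coordinate angle.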
In one possible implementation, the second profile is also an incomplete profile, which can be complemented by polar coordinate angles. In an example, since the second contour is incomplete, only a portion of the angles in the second contour have corresponding radial distances (i.e., second position information), the second position information may be interpolated to obtain a positional representation, i.e., a relationship between the polar coordinate angle and the radial distances.
In one possible implementation, a plurality of polar coordinate angles may be uniformly set within the range of 0°-360°, for example, one polar angle every 0.01°; the more angles are set and the smaller the angular spacing, the closer the result is to a complete contour. The radial distance corresponding to each polar angle can then be determined from the position representation (that is, the relationship between the polar coordinate angle and the radial distance), obtaining the fifth contour.
In one possible implementation, the fourth location information of the plurality of pixels of the fifth contour may be subjected to a frequency domain transform process, for example, a radial distance corresponding to each angle may be subjected to a fast fourier transform process, to obtain the first frequency domain representation of the fourth location information. Further, high frequency noise in the first frequency domain representation may be removed, e.g. the frequency response of the high frequency band may be set to 0, and a second frequency domain representation obtained to remove high frequency noise interference such that the contour is smoothed.
In one possible implementation, the second frequency domain representation may be subjected to a frequency domain inverse transformation, e.g. an inverse fourier transformation, to obtain third position information of the complete third contour, which is position information in a polar coordinate system, which may be converted into fourth position information in a cartesian coordinate system to obtain the complete fourth contour in the cartesian coordinate system.
In a possible implementation, the image processing method may be used in the processing of images such as medical images. A complete contour of the target area may be obtained, and a segmentation mask may be generated according to the contour so as to segment the target area. For example, in the processing of a medical image, a complete contour of the lesion area may be obtained and the lesion area segmented.
Further, the image processing method may be used to evaluate the performance of a detection network. In an example, a sample image may be processed through a detection network to obtain the contour of a predicted region, and the performance of the detection network may be determined based on the contour of the predicted region and the contour of the labeled region in the sample image. For example, the performance of the detection network may be evaluated by the Jaccard coefficient (intersection over union) or the Dice similarity coefficient between the contours of the predicted region and the labeled region. However, these two evaluation methods do not evaluate well how accurately the detection network detects the contour of a region.
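The two evaluation metrics mentioned can be sketched on boolean region masks; the function names are illustrative, and the definitions below are the standard ones (intersection over union for Jaccard, twice the intersection over the size sum for Dice).

```python
import numpy as np

def jaccard(a, b):
    """Jaccard coefficient (intersection over union) of two masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a, b):
    """Dice similarity coefficient of two masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

The same formulas apply whether the masks represent regions or thin contours, which is precisely why thin-contour comparison is brittle: a one-pixel offset can drive the overlap of two otherwise matching contours toward zero.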
Fig. 3 shows a schematic diagram of detection network performance evaluation according to an embodiment of the present disclosure. As shown in fig. 3, diagram A is a schematic diagram of a labeled region and its contour, diagram B shows the predicted region and its boundary obtained by one detection network, and diagram C shows the predicted region and its boundary obtained by another detection network. The Jaccard coefficient between the predicted region of diagram B and the labeled region is 0.91, while that of diagram C is 0.41; in terms of region detection, the detection network of diagram B therefore performs better than that of diagram C. However, because the detected region has a slight deviation, the contour of the predicted region of diagram B does not overlap the contour of the labeled region, and the Jaccard coefficient between the two contours is only 0.33, whereas for diagram C it is 0.38. Although the predicted region of diagram C contains a large number of falsely detected areas, its contour Jaccard coefficient is still higher than that of diagram B. The region detection performance of a detection network thus deviates from its contour detection performance, and evaluating the contour detection performance of a detection network with the above evaluation methods is biased.
In one possible implementation, if the contour obtained by the detection network is complete, even though it deviates from the labeled contour, the region enclosed by the contour (that is, the predicted region) may be filled, so that the performance of the detection network may be evaluated by the Jaccard coefficient or Dice similarity coefficient between the filled region and the labeled region, in the same manner as between the predicted region and the labeled region. However, if the contour is incomplete, filling is difficult, and directly evaluating the Jaccard coefficient or Dice similarity coefficient of the contour itself is biased.
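Once a contour has been complemented into the polar form used above, filling the enclosed region can be sketched directly from that representation: a pixel is inside exactly when its radial distance from the reference point is at most the contour radius at its angle. This sketch assumes the region is star-shaped about the reference point (every ray from the center crosses the contour once), which is the same assumption underlying the polar parameterization; the function name and argument conventions are illustrative.

```python
import numpy as np

def fill_polar_contour(shape, center, angles_deg, radii):
    """Rasterize the region enclosed by a complete contour given in
    polar form about `center` = (x, y): a pixel is inside when its
    radial distance does not exceed the contour radius at its angle."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xx - center[0], yy - center[1]
    pix_angle = np.degrees(np.arctan2(dy, dx)) % 360.0
    pix_radius = np.hypot(dx, dy)
    order = np.argsort(angles_deg)
    boundary = np.interp(pix_angle,
                         np.asarray(angles_deg, dtype=float)[order],
                         np.asarray(radii, dtype=float)[order],
                         period=360.0)
    return pix_radius <= boundary
```

The resulting boolean mask can then be compared against the labeled region with the Jaccard coefficient or Dice similarity coefficient.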
Fig. 4 shows a schematic diagram of detection network performance evaluation according to an embodiment of the present disclosure. In fig. 4, diagram A is a schematic diagram of a labeled region and its contour. The Jaccard coefficients between the contours of the predicted regions obtained by the detection networks corresponding to diagrams B, C, D, E and F and the contour of the labeled region are low, but the contours are complete; by filling the region enclosed by each contour, the performance of the detection network can be evaluated according to the Jaccard coefficient between the filled region and the labeled region.
However, the contours of diagrams G, H, I and J are incomplete and difficult to fill, and evaluating the performance of the detection network by the low Jaccard coefficient between the contour of the predicted region and the contour of the labeled region is biased. Therefore, the contour can be complemented through the image processing method to obtain a closed region, and the detection network can then be evaluated by the Jaccard coefficient between the filled region and the labeled region, so as to select the detection network with optimal performance. This enhances the objectivity of evaluating detection network performance and improves the accuracy of detection network selection.
Fig. 5 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure, as shown in fig. 5, the apparatus including:
the detection module 11 is used for carrying out target detection processing on an image to be processed to obtain a first contour of a target in the image to be processed under a first coordinate system;
the first transformation module 12 is configured to perform coordinate transformation processing on first position information of a pixel point of the first contour in the first coordinate system when the first contour is incomplete, so as to obtain a second contour in a second coordinate system;
the supplementing module 13 is configured to determine third position information of the pixel points of the complete third contour under the second coordinate system according to second position information of the pixel points of the second contour under the second coordinate system;
And the second transformation module 14 is configured to perform coordinate transformation processing on the third position information to obtain a fourth contour of the target in the first coordinate system, where the fourth contour is a complete contour.
In one possible implementation, the supplemental module is further configured to: performing interpolation processing according to the second position information to obtain a position representation of the pixel point of the second contour under the second coordinate system; obtaining fourth position information of a plurality of pixel points of the complete fifth outline according to the position representation; performing frequency domain transformation processing on the fourth position information to obtain a first frequency domain representation of the fourth position information; performing attenuation processing on the preset frequency response of the first frequency domain representation to obtain a second frequency domain representation; and carrying out frequency domain inverse transformation on the second frequency domain representation to obtain third position information of the pixel points of the complete third contour under the second coordinate system.
In a possible implementation, the second coordinate system is a polar coordinate system, the second position information includes a polar coordinate angle and a radial distance of the pixel point of the second contour, and the position represents a relationship between the polar coordinate angle and the radial distance of the pixel point including the second contour.
In one possible implementation, the supplemental module is further configured to: uniformly setting a plurality of polar coordinate angles; and determining radial distances corresponding to the polar coordinate angles according to the position representation.
In one possible implementation, the second coordinate system is a polar coordinate system, and the first transformation module is further configured to: performing weighted average processing on first position information of a plurality of pixel points of the first contour to obtain a reference point; and taking the reference point as a central point of the polar coordinate system, performing polar coordinate transformation processing on the first position information to obtain second position information of the pixel point of the second contour under a second coordinate system, wherein the second position information comprises the polar coordinate angle and the radial distance of the pixel point.
In one possible implementation, the detection module is further configured to: inputting the image to be processed into a detection network for detection processing to obtain a target contour of the target area; and carrying out binarization processing on pixel values of the pixel points of the target contour to obtain the first contour.
In one possible implementation, the image to be processed comprises a medical image.
In one possible implementation, the present disclosure further provides an image processing apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to execute the image processing method described above.
Fig. 6 is a block diagram of an image processing apparatus 800 according to an exemplary embodiment. For example, apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect an on/off state of the device 800 and a relative positioning of the components, such as the display and keypad of the device 800. The sensor assembly 814 may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, an orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus 800 and other devices, either in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi,2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of apparatus 800 to perform the above-described methods.
Fig. 7 is a block diagram of an image processing apparatus 1900 according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to fig. 7, the apparatus 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by the processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The apparatus 1900 may further include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of apparatus 1900 to perform the above-described methods.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, and mechanical encoding devices such as punch cards or raised structures in a groove having instructions stored thereon, as well as any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (7)

1. An image processing method, comprising:
performing target detection processing on an image to be processed to obtain a first contour of a target in the image to be processed in a first coordinate system;
when the first contour is incomplete, performing coordinate transformation processing on first position information of pixel points of the first contour in the first coordinate system to obtain a second contour in a second coordinate system;
determining third position information of pixel points of a complete third contour in the second coordinate system according to second position information of pixel points of the second contour in the second coordinate system; and
performing coordinate transformation processing on the third position information to obtain a fourth contour of the target in the first coordinate system, wherein the fourth contour is a complete contour;
wherein determining the third position information of the pixel points of the complete third contour in the second coordinate system according to the second position information of the pixel points of the second contour in the second coordinate system comprises:
performing interpolation processing according to the second position information to obtain a position representation of the pixel points of the second contour in the second coordinate system;
obtaining fourth position information of a plurality of pixel points of a complete fifth contour according to the position representation;
performing frequency domain transformation processing on the fourth position information to obtain a first frequency domain representation of the fourth position information;
performing attenuation processing on a preset frequency response of the first frequency domain representation to obtain a second frequency domain representation; and
performing inverse frequency domain transformation on the second frequency domain representation to obtain the third position information of the pixel points of the complete third contour in the second coordinate system;
wherein the second coordinate system is a polar coordinate system, the second position information comprises a polar angle and a radial distance of each pixel point of the second contour, and the position representation is a relationship between the polar angle and the radial distance of the pixel points of the second contour;
and wherein performing the coordinate transformation processing on the first position information of the pixel points of the first contour in the first coordinate system to obtain the second contour in the second coordinate system comprises:
performing weighted average processing on the first position information of a plurality of pixel points of the first contour to obtain a reference point; and
taking the reference point as the center point of the polar coordinate system, and performing polar coordinate transformation processing on the first position information to obtain the second position information.
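The coordinate transformation recited at the end of claim 1 can be illustrated with a short sketch. The use of NumPy, the array names, and the choice of a plain (unweighted) average as the "weighted average processing" are assumptions for illustration, not part of the claimed method:

```python
import numpy as np

def to_polar(contour_xy):
    """Transform contour pixel coordinates (N, 2) from the first (Cartesian)
    coordinate system to polar coordinates about a reference point obtained
    by averaging, as in the last two steps of claim 1.
    A plain mean is assumed in place of the unspecified weighted average."""
    ref = contour_xy.mean(axis=0)          # reference point (centroid)
    d = contour_xy - ref                   # offsets from the pole
    theta = np.arctan2(d[:, 1], d[:, 0])   # polar angle of each pixel point
    r = np.hypot(d[:, 0], d[:, 1])         # radial distance of each pixel point
    return ref, theta, r
```

The returned `(theta, r)` pairs are the "second position information" of the second contour in the polar coordinate system.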
2. The method of claim 1, wherein obtaining the fourth position information of the plurality of pixel points of the complete fifth contour according to the position representation comprises:
uniformly setting a plurality of polar angles; and
determining, according to the position representation, the radial distance corresponding to each of the polar angles.
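The uniform angle sampling of claim 2 can be sketched as follows; linear interpolation (`np.interp`) with periodic wrap-around is an assumed realization of the "position representation" — the patent does not fix the interpolation scheme:

```python
import numpy as np

def resample_uniform(theta, r, n=360):
    """Evaluate the radial distance at n uniformly spaced polar angles by
    interpolating the (angle, distance) position representation, as in claim 2.
    Linear interpolation with periodic padding is assumed."""
    order = np.argsort(theta)
    t, rad = theta[order], r[order]
    # Pad one period on each side so angles near ±pi interpolate correctly.
    t_ext = np.concatenate([t - 2 * np.pi, t, t + 2 * np.pi])
    r_ext = np.concatenate([rad, rad, rad])
    angles = np.linspace(-np.pi, np.pi, n, endpoint=False)  # uniform polar angles
    return angles, np.interp(angles, t_ext, r_ext)
```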
3. The method of claim 1, wherein performing the target detection processing on the image to be processed to obtain the first contour of the target in the image to be processed in the first coordinate system comprises:
inputting the image to be processed into a detection network for detection processing to obtain a target contour of a target area, wherein the target area is an area where the target is located; and
performing binarization processing on pixel values of pixel points of the target contour to obtain the first contour.
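The binarization step of claim 3 is a simple thresholding of the detection network's output map; the 0.5 threshold below is an assumption for illustration, since the claim does not specify one:

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Binarize pixel values of a detection network's output to obtain the
    first contour, as in claim 3. The threshold value is assumed."""
    return (prob_map >= threshold).astype(np.uint8)
```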
4. The method of claim 1, wherein the image to be processed comprises a medical image.
5. An image processing apparatus, comprising:
a detection module configured to perform target detection processing on an image to be processed to obtain a first contour of a target in the image to be processed in a first coordinate system;
a first transformation module configured to, when the first contour is incomplete, perform coordinate transformation processing on first position information of pixel points of the first contour in the first coordinate system to obtain a second contour in a second coordinate system;
a supplementing module configured to determine third position information of pixel points of a complete third contour in the second coordinate system according to second position information of pixel points of the second contour in the second coordinate system; and
a second transformation module configured to perform coordinate transformation processing on the third position information to obtain a fourth contour of the target in the first coordinate system, wherein the fourth contour is a complete contour;
wherein the supplementing module is further configured to:
perform interpolation processing according to the second position information to obtain a position representation of the pixel points of the second contour in the second coordinate system;
obtain fourth position information of a plurality of pixel points of a complete fifth contour according to the position representation;
perform frequency domain transformation processing on the fourth position information to obtain a first frequency domain representation of the fourth position information;
perform attenuation processing on a preset frequency response of the first frequency domain representation to obtain a second frequency domain representation; and
perform inverse frequency domain transformation on the second frequency domain representation to obtain the third position information of the pixel points of the complete third contour in the second coordinate system;
wherein the second coordinate system is a polar coordinate system, the second position information comprises a polar angle and a radial distance of each pixel point of the second contour, and the position representation is a relationship between the polar angle and the radial distance of the pixel points of the second contour;
and wherein the first transformation module is further configured to:
perform weighted average processing on the first position information of a plurality of pixel points of the first contour to obtain a reference point; and
take the reference point as the center point of the polar coordinate system, and perform polar coordinate transformation processing on the first position information to obtain the second position information.
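The frequency-domain steps carried out by the supplementing module (claims 1 and 5) amount to smoothing a uniformly sampled radial profile: transform, attenuate a preset band, and transform back. The real-input FFT and the hard cutoff below are assumptions — the patent leaves the "preset frequency response" and the attenuation function unspecified:

```python
import numpy as np

def smooth_radial(r_uniform, keep=8):
    """Frequency domain transformation, attenuation of the (assumed) preset
    high-frequency response, and inverse transformation of a uniformly
    sampled radial profile, as in the supplementing module's last three steps.
    A hard cutoff keeping the first `keep` harmonics is assumed."""
    spec = np.fft.rfft(r_uniform)    # first frequency domain representation
    spec[keep:] = 0                  # attenuation -> second frequency domain representation
    # Inverse transform yields the third position information (smoothed radii).
    return np.fft.irfft(spec, n=len(r_uniform))
```

Zeroing high harmonics suppresses the jagged gap boundary so the reconstructed radii trace a closed, smooth contour once transformed back to the first coordinate system.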
6. An image processing apparatus, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 4.
7. A non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method of any one of claims 1 to 4.
CN202010551099.XA 2020-06-16 2020-06-16 Image processing method and device Active CN111640114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010551099.XA CN111640114B (en) 2020-06-16 2020-06-16 Image processing method and device


Publications (2)

Publication Number Publication Date
CN111640114A CN111640114A (en) 2020-09-08
CN111640114B true CN111640114B (en) 2024-03-15

Family

ID=72328799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010551099.XA Active CN111640114B (en) 2020-06-16 2020-06-16 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111640114B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI761948B (en) * 2020-09-14 2022-04-21 倍利科技股份有限公司 A positioning method for obtaining contours from detected images
CN113837067B (en) * 2021-09-18 2023-06-02 成都数字天空科技有限公司 Organ contour detection method, organ contour detection device, electronic device, and readable storage medium
CN114004764B (en) * 2021-11-03 2024-03-15 昆明理工大学 Improved sensitivity coding reconstruction method based on sparse transform learning

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2005010820A2 (en) * 2003-07-29 2005-02-03 Holding Bev Sa Automated method and device for perception associated with determination and characterisation of borders and boundaries of an object of a space, contouring and applications
CN109472786A (en) * 2018-11-05 2019-03-15 平安科技(深圳)有限公司 Cerebral hemorrhage image processing method, device, computer equipment and storage medium
CN111210423A (en) * 2020-01-13 2020-05-29 浙江杜比医疗科技有限公司 Breast contour extraction method, system and device of NIR image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP4999163B2 (en) * 2006-04-17 2012-08-15 富士フイルム株式会社 Image processing method, apparatus, and program




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant