CN111091570B - Image segmentation labeling method, device, equipment and storage medium - Google Patents

Image segmentation labeling method, device, equipment and storage medium

Info

Publication number
CN111091570B
CN111091570B (application CN201911148079.1A)
Authority
CN
China
Prior art keywords
image
surrounding
segmented
segmentation
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911148079.1A
Other languages
Chinese (zh)
Other versions
CN111091570A (en)
Inventor
李金龙
陈曦
李雄
董家林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Bank Co Ltd
Original Assignee
China Merchants Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Bank Co Ltd filed Critical China Merchants Bank Co Ltd
Priority to CN201911148079.1A priority Critical patent/CN111091570B/en
Publication of CN111091570A publication Critical patent/CN111091570A/en
Application granted granted Critical
Publication of CN111091570B publication Critical patent/CN111091570B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image segmentation labeling method, device, equipment and storage medium, wherein the method comprises the following steps: loading an image to be processed to a current canvas, and drawing one or more areas to be segmented based on a click event received by the current canvas; acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule; extracting one or more images to be annotated from the images to be processed according to the segmentation position index set; and carrying out local processing on one or more images to be marked, and outputting marking results. Therefore, based on a non-zero surrounding rule, the image to be processed is rapidly segmented according to the segmentation position index set, and then the image is marked by using a local processing method of the image, so that the image segmentation marking flow is simplified, and the image segmentation marking efficiency is improved.

Description

Image segmentation labeling method, device, equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a storage medium for image segmentation labeling.
Background
In recent years, with the continued rise of artificial intelligence, data annotation has become particularly important, and annotation tools of all kinds have emerged one after another. Image segmentation labeling plays an extremely important role in the field of image segmentation artificial intelligence. The main flow of image segmentation labeling includes image segmentation, graying, binarization, color value inversion, morphological dilation and erosion, and the like, so the image segmentation labeling flow is complex and its efficiency is low.
Disclosure of Invention
The invention provides an image segmentation labeling method, an image segmentation labeling device, image segmentation labeling equipment and a storage medium, which aim to simplify the image segmentation labeling process and improve the image segmentation labeling efficiency.
In order to achieve the above object, the present invention provides an image segmentation labeling method, which includes:
loading an image to be processed to a current canvas, and drawing one or more areas to be segmented based on a click event received by the current canvas;
acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule;
extracting one or more images to be annotated from the images to be processed according to the segmentation position index set;
and carrying out local processing on one or more images to be marked, and outputting marking results.
Preferably, the step of acquiring the segmentation position index set of the region to be segmented based on a non-zero surrounding rule includes:
converting the edges of the region to be segmented into vectors, and initializing the surrounding number of each pixel point in the image to be processed to zero;
taking the pixel point to be judged as a starting point, and casting a ray from it along the positive x-axis direction;
calculating the surrounding number of the pixel points to be judged based on a surrounding number formula according to the rays;
if the surrounding number of the pixel points to be judged is not equal to 0, judging that the pixel points to be judged are positioned in the area to be segmented;
and sequentially judging whether each pixel point in the image to be processed is positioned in the area to be segmented, and storing the pixel points positioned in the area to be segmented into the segmentation position index set.
Preferably, the regions to be segmented are all approximated as polygons, and the step of calculating the surrounding number of the pixel point to be judged based on the surrounding number calculation formula includes:
the pixel point to be judged is expressed as (x, y) and its surrounding number as f_m(x, y); the surrounding number calculation formula is
f_m(x, y) = Σ_{i=0}^{n} f_{(x_i, y_i)(x_{(i+1) % (n+1)}, y_{(i+1) % (n+1)})}(x, y)
wherein n+1 represents the number of pixel points (polygon vertices) of the region to be segmented, m represents the set of these pixel points, i is an integer from 0 to n, and % represents the modulo operation.
Preferably, the step of calculating the surrounding number of the pixel point to be judged based on the surrounding number calculation formula further includes:
the direction of an edge of the polygon is defined as pointing from the origin pixel point (x_0, y_0) to the first pixel point (x_1, y_1), and the edges of the polygon are not within the region to be segmented;
when y_0 < y_1, with the origin as the center, the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) is calculated in the counterclockwise direction according to a first surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = 1 if y_0 < y < y_1 and the edge crosses the ray; 0.5 if y = y_0 or y = y_1 and the edge meets the ray at that endpoint; 0 otherwise;
when y_0 > y_1, with the origin as the center, the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) is calculated in the clockwise direction according to a second surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = -1 if y_1 < y < y_0 and the edge crosses the ray; -0.5 if y = y_0 or y = y_1 and the edge meets the ray at that endpoint; 0 otherwise;
when y_0 = y_1, the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) is calculated according to a third surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = 0.
Preferably, the step of locally processing one or more images to be annotated and outputting an annotation result includes:
representing the image to be marked as I_c and the image pixel at pixel point (x, y) as I_c(x, y), obtaining the labeling result I_p of the image to be marked according to a local processing formula, and outputting the labeling result I_p, wherein
I_p(x, y) = φ(I_c(x, y)) if (x, y) ∈ E, and I_p(x, y) = I_c(x, y) otherwise,
wherein E is the index set of the image to be marked and φ(·) is an image processing algorithm.
Preferably, the step of extracting one or more images to be annotated from the images to be processed according to the segmentation position index set includes:
obtaining corresponding image segmentation interfaces according to the segmentation position index sets, wherein the number of the segmentation position index sets is the same as the number of the areas to be segmented, and the number of the image segmentation interfaces is the same as the number of the segmentation position index sets;
and calling an image segmentation operation, wherein the image segmentation operation extracts one or more images to be annotated according to the image segmentation interface.
In order to achieve the above object, an embodiment of the present invention further provides an image segmentation and labeling device, including:
the drawing module is used for loading the image to be processed to the current canvas and drawing one or more areas to be segmented based on the clicking event received by the current canvas;
the acquisition module is used for acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule;
the extraction module is used for extracting one or more images to be marked from the images to be processed according to the segmentation position index set;
and the local processing module is used for carrying out local processing on one or more images to be marked and outputting marking results.
Preferably, the acquiring module includes:
the conversion unit is used for converting the edges of the region to be segmented into vectors and initializing the surrounding number of each pixel point in the image to be processed to zero;
the ray unit is used for taking a pixel point to be judged as a starting point and casting a ray from it along the positive x-axis direction;
the calculation unit is used for calculating the surrounding number of the pixel points to be judged based on a surrounding number formula according to the rays;
the judging unit is used for judging that the pixel points to be judged are positioned in the area to be segmented if the surrounding number of the pixel points to be judged is not equal to 0;
and the storage unit is used for sequentially judging whether each pixel point in the image to be processed is positioned in the area to be segmented, and storing the pixel points positioned in the area to be segmented into the segmentation position index set.
In order to achieve the above object, an embodiment of the present invention further provides an image segmentation labeling apparatus, where the image segmentation labeling apparatus includes a processor, a memory, and an image segmentation labeling program stored in the memory, and when the image segmentation labeling program is executed by the processor, the steps of the image segmentation labeling method described above are implemented.
To achieve the above object, an embodiment of the present invention further provides a computer storage medium having stored thereon an image segmentation labeling program which, when executed by a processor, implements the steps of the image segmentation labeling method described above.
Compared with the prior art, the invention provides an image segmentation labeling method, device, equipment and storage medium, wherein an image to be processed is loaded to a current canvas, and one or more areas to be segmented are drawn based on click events received by the current canvas; acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule; extracting one or more images to be annotated from the images to be processed according to the segmentation position index set; and carrying out local processing on one or more images to be marked, and outputting marking results. Therefore, based on a non-zero surrounding rule, the image to be processed is rapidly segmented according to the segmentation position index set, and then the image is marked by using a local processing method of the image, so that the image segmentation marking flow is simplified, and the image segmentation marking efficiency is improved.
Drawings
FIG. 1 is a schematic hardware structure of an image segmentation labeling device according to various embodiments of the present invention;
FIG. 2 is a flowchart of a first embodiment of an image segmentation labeling method according to the present invention;
fig. 3 is a schematic functional block diagram of a first embodiment of the image segmentation and labeling device according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The image segmentation labeling device mainly related to the embodiment of the invention refers to network connection equipment capable of realizing network connection, and the image segmentation labeling device can be a server, a cloud platform and the like. In addition, the mobile terminal related to the embodiment of the invention can be mobile network equipment such as a mobile phone, a tablet personal computer and the like.
Referring to fig. 1, fig. 1 is a schematic hardware configuration diagram of an image segmentation labeling apparatus according to various embodiments of the present invention. In an embodiment of the present invention, the image segmentation labeling device may include a processor 1001 (e.g., a central processing unit Central Processing Unit, a CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005. Wherein the communication bus 1002 is used to enable connected communications between these components; the input port 1003 is used for data input; the output port 1004 is used for data output, and the memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory, and the memory 1005 may be an optional storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration shown in fig. 1 is not limiting of the invention and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
With continued reference to FIG. 1, the memory 1005 of FIG. 1, which is a readable storage medium, may include an operating system, a network communication module, an application module, and an image segmentation annotation program. In fig. 1, the network communication module is mainly used for connecting with a server and performing data communication with the server; and the processor 1001 may call the image segmentation labeling program stored in the memory 1005 and execute the image segmentation labeling method provided by the embodiment of the present invention.
The embodiment of the invention provides an image segmentation labeling method.
Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of the image segmentation labeling method according to the present invention.
In this embodiment, the image segmentation labeling method is applied to an image segmentation labeling device, and the method includes:
step S101, loading an image to be processed to a current canvas, and drawing one or more areas to be segmented based on a click event received by the current canvas;
in this embodiment, the image to be processed may be an image selected or set by a user. Typically, the user imports the image to be processed through a preset interface.
The image to be processed imported by the user is loaded onto the current canvas, and the image to be processed is displayed on the screen. Click events received by the current canvas are then handled, where the click events include selection events, stretching events, drawing events and the like. The user can trigger a click event through a mouse or touch operation and, through the click event, select and/or draw the region that needs to be segmented and labeled. It will be appreciated that the user may select and/or draw one or more regions to be segmented at a time.
After the click events are received, the one or more regions to be segmented corresponding to the click events are drawn. The outline generated by the click events may be represented by a black line, a white line or a colored line, i.e. the one or more regions to be segmented are identified by a line outline. Whatever shape the drawn contour takes, the figure it encloses can be regarded as being approximated arbitrarily closely by a polygon; therefore, the region to be segmented is treated as a polygon in this embodiment.
Step S102, obtaining a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule;
in graphics, it may be determined whether a point is inside a polygon according to a non-zero surrounding rule (Nonzero Winding Number Rule). First the edges of the polygon are made into vectors. Initializing the number of loops to zero, and making a ray in any direction from the point p to be judged. When moving from the p-point in the ray direction, the edges that pass through the ray in each direction are counted, the number of circles is increased by 1 each time the edge of the polygon passes through the ray counterclockwise, the number of circles is decreased by 1 each time the ray passes clockwise, and all relevant edges of the polygon are calculated in turn. After all relevant edges of the polygon are processed, if the number of circles is non-zero, p is an internal point, otherwise, p is an external point.
Specifically, in this embodiment, the step of obtaining the segmentation position index set of the region to be segmented based on the non-zero surrounding rule includes:
Step S102a, converting the edges of the region to be segmented into vectors, and initializing the surrounding number of each pixel point in the image to be processed to zero;
The region to be segmented is set as a polygon, and its edges are converted into vectors; the direction of the vectors may be either clockwise or counterclockwise and can be chosen as needed.
In this embodiment, it is necessary to determine whether every pixel point in the image to be processed is located in the region to be segmented, so the surrounding number of each pixel point in the image to be processed is initialized to zero in advance.
Step S102b, taking the pixel point to be judged as a starting point, and casting a ray from it along the positive x-axis direction;
In general, the ray may point in any direction. For ease of calculation, this embodiment fixes the ray direction to be the positive x-axis direction.
Step S102c, calculating the surrounding number of the pixel point to be judged based on a surrounding number formula according to the rays;
Specifically, the pixel point to be judged is expressed as (x, y) and its surrounding number as f_m(x, y); the surrounding number calculation formula is
f_m(x, y) = Σ_{i=0}^{n} f_{(x_i, y_i)(x_{(i+1) % (n+1)}, y_{(i+1) % (n+1)})}(x, y)
wherein n+1 represents the number of pixel points (polygon vertices) of the region to be segmented, m represents the set of these pixel points, i is an integer from 0 to n, and % represents the modulo operation. When i = 0, f_{(x_0, y_0)(x_1, y_1)}(x, y) is the surrounding count formula, for the pixel point (x, y) to be judged, of the polygon edge whose direction points from the origin pixel point (x_0, y_0) to the first pixel point (x_1, y_1).
Further, the step of calculating the surrounding count of the pixel to be judged based on the surrounding count formula according to the ray further includes:
the direction of one side of the polygon is defined by the original pixel point (x 0 ,y 0 ) Pointing to the first pixel point (x 1 ,y 1 ) And sides of the polygon are not within the partitioned area;
when y is 0 <y 1 When the origin is taken as the center, the surrounding count f is calculated in the anticlockwise direction according to the first surrounding count calculation formula (x0,y0)(x1,y1) (x, y), wherein the first wrap-around count calculation formula is
As can be seen from the first round count calculation formula, whenWhen the wrap-around count is 1; when y=y 0 Or->When the wrap-around count is 0.5; in other cases the wrap around count is 0.
When y is 0 >y 1 In the clockwise direction with the origin as the centerCalculating the wrap-around count f according to a second wrap-around count calculation formula (x0,y0)(x1,y1) (x, y), wherein the second wrap-around count calculation formula is
As can be seen from the second round count calculation formula, whenWhen the surrounding count is-1; when y=y 0 Or->When the surrounding count is-0.5; in other cases the wrap around count is 0.
When y is 0 =y 1 When the surrounding count f is calculated according to a third surrounding number calculation formula (x0,y0)(x1,y1) (x, y), wherein the third wrap-around count formula is
When y is 0 =y 1 The edge of the polygonal line is selected to be a line segment parallel to the x-axis, which is never intersected by the ray, or which is partially coincident with the ray, at which point the wrap-around count is 0.
When y=y 0 Or y=y 1 When the intersection point of the rays is at the end point of the line segment represented by the edge, the circle count is calculated twice, and the circle count is divided by 2, so that the circle count is calculated as 0.5 or-0.5, and when y=y 0 Or y=y 1 When the number of loops is calculated using the first or second loop count calculation formula.
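The three cases above can be sketched as a per-edge count plus the summation over all edges. The following TypeScript is an illustrative reading of the formulas: the 1 / 0.5 / -1 / -0.5 / 0 values and the case structure come from the text, while the concrete crossing test via the intersection x-coordinate and the function names are assumptions:

```typescript
type Vertex = { x: number; y: number };

// Per-edge surrounding count for a horizontal ray cast from (x, y) in the +x direction:
// 1 / -1 for a counterclockwise / clockwise crossing, 0.5 / -0.5 when the ray passes
// through an edge endpoint (so an endpoint shared by two edges is counted once in total),
// and 0 otherwise.
function edgeWindingCount(x0: number, y0: number, x1: number, y1: number,
                          x: number, y: number): number {
  if (y0 === y1) return 0;                               // third case: horizontal edge
  const sign = y0 < y1 ? 1 : -1;                         // first case (CCW) vs second case (CW)
  const yMin = Math.min(y0, y1), yMax = Math.max(y0, y1);
  if (y < yMin || y > yMax) return 0;                    // ray misses the edge's y-range
  // x-coordinate where the edge meets the horizontal line through (x, y)
  const xCross = x0 + ((y - y0) / (y1 - y0)) * (x1 - x0);
  if (xCross <= x) return 0;                             // intersection is not on the +x ray
  return (y === y0 || y === y1) ? 0.5 * sign : sign;     // half count at an endpoint
}

// Surrounding number of pixel (x, y) for a polygon with vertices v[0..n];
// the index (i + 1) % (n + 1) wraps the last edge back to v[0].
function windingNumber(v: Vertex[], x: number, y: number): number {
  const n = v.length - 1;
  let f = 0;
  for (let i = 0; i <= n; i++) {
    const a = v[i];
    const b = v[(i + 1) % (n + 1)];
    f += edgeWindingCount(a.x, a.y, b.x, b.y, x, y);
  }
  return f;
}
```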
Step S102d, if the surrounding number of the pixel points to be judged is not equal to 0, judging that the pixel points to be judged are positioned in the area to be segmented;
and according to the surrounding number calculation formula, the first surrounding count calculation formula, the second surrounding count calculation formula and the third surrounding count calculation formula, whether the pixel point to be judged is positioned in the region to be segmented or not can be rapidly judged, namely whether the pixel point to be judged is positioned in a polygon or not is judged.
When f_m(x, y) ≠ 0, the pixel point to be judged is determined to be inside the region to be segmented; conversely, when f_m(x, y) = 0, the pixel point to be judged is determined not to be inside the region to be segmented.
When there are multiple regions to be segmented, a pixel point located inside one of them satisfies f_m(x, y) ≠ 0 for that region. In this embodiment, the set of the multiple regions to be segmented is denoted as M, with m ∈ M.
Step S102e, determining whether each pixel point in the image to be processed is located in the region to be segmented, and storing the pixel points located in the region to be segmented in the segmentation position index set.
Whether all the pixel points in the image to be processed are located in the area to be segmented is required to be judged, so that step S102b to step S102d are repeated, whether all the pixel points in the image to be processed are located in the area to be segmented is sequentially judged, the pixel points located in the area to be segmented are stored in the segmentation position index set, and the segmentation position index set is represented by E. It can be appreciated that, if the to-be-segmented area is plural, the number of the segmentation position index sets is also plural correspondingly.
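A sketch of this scan over the image, assuming a windingNumber helper such as the one above (the array-of-coordinates representation of the index set E is an assumption for the sketch):

```typescript
// Build the segmentation position index set E for one region to be segmented:
// every pixel whose surrounding (winding) number w.r.t. the drawn polygon is non-zero.
// Assumes the windingNumber(polygon, x, y) helper sketched earlier.
function buildIndexSet(
  width: number, height: number,
  polygon: { x: number; y: number }[],
): [number, number][] {
  const E: [number, number][] = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (windingNumber(polygon, x, y) !== 0) E.push([x, y]);
    }
  }
  return E;
}
```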
Step S103, extracting one or more images to be marked from the images to be processed according to the segmentation position index set;
the step of extracting one or more images to be annotated from the images to be processed according to the segmentation position index set comprises the following steps:
obtaining corresponding image segmentation interfaces according to the segmentation position index sets, wherein the number of the segmentation position index sets is the same as the number of the areas to be segmented, and the number of the image segmentation interfaces is the same as the number of the segmentation position index sets; and taking the boundary of the segmentation position index set as the image segmentation interface.
An image segmentation operation is then called, and the image segmentation operation extracts one or more images to be annotated according to the image segmentation interface; that is, by calling the preset segmentation operation of the application program, the application program extracts the one or more images to be annotated according to the image segmentation interface.
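As a hedged illustration of such a segmentation operation (the bounding-box ROI and the use of the browser ImageData API are assumptions; the patent does not prescribe a concrete interface):

```typescript
// Illustrative "image segmentation operation": copy only the pixels listed in the
// index set E from the source ImageData into a new ImageData covering E's bounding box.
function extractRegion(
  src: ImageData, E: [number, number][],
): { roi: ImageData; originX: number; originY: number } {
  if (E.length === 0) throw new Error('empty segmentation position index set');
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const [x, y] of E) {                       // bounding box of the index set
    if (x < minX) minX = x;
    if (x > maxX) maxX = x;
    if (y < minY) minY = y;
    if (y > maxY) maxY = y;
  }
  const w = maxX - minX + 1, h = maxY - minY + 1;
  const roi = new ImageData(w, h);                // pixels outside E stay transparent
  for (const [x, y] of E) {
    const s = (y * src.width + x) * 4;            // RGBA offset in the source image
    const d = ((y - minY) * w + (x - minX)) * 4;  // RGBA offset in the extracted image
    roi.data.set(src.data.subarray(s, s + 4), d);
  }
  return { roi, originX: minX, originY: minY };
}
```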
And step S104, carrying out local processing on one or more images to be marked, and outputting marking results.
Specifically, the step of locally processing one or more images to be annotated and outputting an annotation result includes:
The image to be marked is expressed as I_c and the image pixel at pixel point (x, y) as I_c(x, y); the labeling result I_p of the image to be marked is then obtained according to a local processing formula and the labeling result I_p is output, wherein
I_p(x, y) = φ(I_c(x, y)) if (x, y) ∈ E, and I_p(x, y) = I_c(x, y) otherwise,
wherein E is the index set of the image to be marked and φ(·) is the image processing algorithm. The image processing algorithm includes binarization, color value inversion, dilation, erosion and the like.
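A minimal sketch of this local processing, using color value inversion as the image processing algorithm φ (any of the other listed algorithms could be substituted; the function name is illustrative):

```typescript
// Local processing: I_p(x, y) = phi(I_c(x, y)) for (x, y) in E, I_c(x, y) elsewhere.
// Here phi is color value inversion applied in place on a copy of the image data.
function localProcess(img: ImageData, E: [number, number][]): ImageData {
  const out = new ImageData(new Uint8ClampedArray(img.data), img.width, img.height);
  for (const [x, y] of E) {
    const i = (y * img.width + x) * 4;
    out.data[i]     = 255 - out.data[i];      // R
    out.data[i + 1] = 255 - out.data[i + 1];  // G
    out.data[i + 2] = 255 - out.data[i + 2];  // B (alpha channel left unchanged)
  }
  return out;
}
```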
Furthermore, the image segmentation labeling method is well suited to the web front end of a web page, where processing performance is limited.
When processing at the web front end, the native HTML Canvas tag can be used to load and read the image to be processed; click events of the mouse on the canvas are monitored through open source tools such as fabric.js, and the drawing of polygons is completed simply and quickly so as to obtain the regions to be segmented; the pixel points inside each polygon are obtained according to the surrounding number calculation formula, the page's image segmentation operation is clicked, and an ROI instruction is called to complete multi-region image segmentation; the required image interface is realized according to the segmentation position index set obtained from the image segmentation, the required processing method is selected on the page, and the local processing is completed; finally, the labeling result is output and saved.
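A minimal front-end sketch of this flow, using the native Canvas API rather than fabric.js and reusing the buildIndexSet and localProcess sketches above (element ids, the file name and the double-click-to-close convention are assumptions):

```typescript
// Load the image to be processed onto a canvas, collect polygon vertices from click
// events, and run segmentation plus local processing once the region is closed.
const canvas = document.getElementById('annotator') as HTMLCanvasElement;
const ctx = canvas.getContext('2d')!;
const vertices: { x: number; y: number }[] = [];

const img = new Image();
img.src = 'image-to-be-processed.png';           // illustrative path
img.onload = () => ctx.drawImage(img, 0, 0);

canvas.addEventListener('click', (e) => {        // each click adds one polygon vertex
  const rect = canvas.getBoundingClientRect();
  vertices.push({ x: e.clientX - rect.left, y: e.clientY - rect.top });
});

canvas.addEventListener('dblclick', () => {      // double-click closes the region
  const src = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const E = buildIndexSet(canvas.width, canvas.height, vertices);
  ctx.putImageData(localProcess(src, E), 0, 0);  // show the labeling result in place
  vertices.length = 0;                           // ready for the next region to be segmented
});
```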
According to the embodiment, through the scheme, the image to be processed is loaded to the current canvas, and one or more areas to be segmented are drawn based on the click event received by the current canvas; acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule; extracting one or more images to be annotated from the images to be processed according to the segmentation position index set; and carrying out local processing on one or more images to be marked, and outputting marking results. Therefore, based on a non-zero surrounding rule, the image to be processed is rapidly segmented according to the segmentation position index set, and then the image is marked by using a local processing method of the image, so that the image segmentation marking flow is simplified, and the image segmentation marking efficiency is improved.
In addition, the embodiment also provides an image segmentation labeling device. Referring to fig. 3, fig. 3 is a schematic functional block diagram of a first embodiment of an image segmentation and labeling device according to the present invention.
In this embodiment, the image segmentation labeling device is a virtual device, and is stored in the memory 1005 of the image segmentation labeling device shown in fig. 1, so as to implement all functions of the image segmentation labeling program: the method comprises the steps of loading an image to be processed to a current canvas, and drawing one or more areas to be segmented based on click events received by the current canvas; the method comprises the steps of acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule; the method comprises the steps of extracting one or more images to be marked from the images to be processed according to the segmentation position index set; and the method is used for carrying out local processing on one or more images to be marked and outputting marking results.
Specifically, the image segmentation labeling device comprises:
the drawing module is used for loading the image to be processed to the current canvas and drawing one or more areas to be segmented based on the clicking event received by the current canvas;
the acquisition module is used for acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule;
the extraction module is used for extracting one or more images to be marked from the images to be processed according to the segmentation position index set;
and the local processing module is used for carrying out local processing on one or more images to be marked and outputting marking results.
Preferably, the acquiring module includes:
the conversion unit is used for converting the edges of the region to be segmented into vectors and initializing the surrounding number of each pixel point in the image to be processed to zero;
the ray unit is used for taking a pixel point to be judged as a starting point and casting a ray from it along the positive x-axis direction;
the calculation unit is used for calculating the surrounding number of the pixel points to be judged based on a surrounding number formula according to the rays;
the judging unit is used for judging that the pixel points to be judged are positioned in the area to be segmented if the surrounding number of the pixel points to be judged is not equal to 0;
and the storage unit is used for sequentially judging whether each pixel point in the image to be processed is positioned in the area to be segmented, and storing the pixel points positioned in the area to be segmented into the segmentation position index set.
Preferably, the calculation unit calculates the surrounding number, the surrounding number calculation formula being
f_m(x, y) = Σ_{i=0}^{n} f_{(x_i, y_i)(x_{(i+1) % (n+1)}, y_{(i+1) % (n+1)})}(x, y)
wherein n+1 represents the number of pixel points (polygon vertices) of the region to be segmented, m represents the set of these pixel points, i is an integer from 0 to n, and % represents the modulo operation.
Preferably, the calculation unit includes:
a setting subunit, which defines the direction of an edge of the polygon as pointing from the origin pixel point (x_0, y_0) to the first pixel point (x_1, y_1), the edges of the polygon not being within the region to be segmented;
a first calculation subunit, which, when y_0 < y_1, with the origin as the center, calculates the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) in the counterclockwise direction according to a first surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = 1 if y_0 < y < y_1 and the edge crosses the ray; 0.5 if y = y_0 or y = y_1 and the edge meets the ray at that endpoint; 0 otherwise;
a second calculation subunit, which, when y_0 > y_1, with the origin as the center, calculates the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) in the clockwise direction according to a second surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = -1 if y_1 < y < y_0 and the edge crosses the ray; -0.5 if y = y_0 or y = y_1 and the edge meets the ray at that endpoint; 0 otherwise;
a third calculation subunit, which, when y_0 = y_1, calculates the surrounding count according to a third surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = 0.
Preferably, the labeling module includes:
a marking unit, configured to express the image to be marked as I_c and the image pixel at pixel point (x, y) as I_c(x, y), obtain the labeling result I_p of the image to be marked according to a local processing formula, and output the labeling result I_p, wherein
I_p(x, y) = φ(I_c(x, y)) if (x, y) ∈ E, and I_p(x, y) = I_c(x, y) otherwise,
wherein E is the index set of the image to be marked and φ(·) is an image processing algorithm.
Preferably, the extraction module comprises:
the obtaining unit is used for obtaining corresponding image segmentation interfaces according to the segmentation position index sets, wherein the number of the segmentation position index sets is the same as the number of the areas to be segmented, and the number of the image segmentation interfaces is the same as the number of the segmentation position index sets;
and the calling unit is used for calling image segmentation operation, and the image segmentation operation extracts one or more images to be annotated according to the image segmentation interface.
In addition, the embodiment of the present invention further provides a computer storage medium, where an image segmentation labeling program is stored, and the steps of the image segmentation labeling method described above are implemented when the image segmentation labeling program is run by a processor, which is not described herein again.
Compared with the prior art, the image segmentation labeling method, device, equipment and storage medium provided by the invention have the advantages that the image to be processed is loaded to the current canvas, and one or more areas to be segmented are drawn based on the click event received by the current canvas; acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule; extracting one or more images to be annotated from the images to be processed according to the segmentation position index set; and carrying out local processing on one or more images to be marked, and outputting marking results. Therefore, based on a non-zero surrounding rule, the image to be processed is rapidly segmented according to the segmentation position index set, and then the image is marked by using a local processing method of the image, so that the image segmentation marking flow is simplified, and the image segmentation marking efficiency is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising several instructions for causing a terminal device to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or modifications in the structures or processes described in the specification and drawings, or the direct or indirect application of the present invention to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. An image segmentation labeling method, which is characterized by comprising the following steps:
loading an image to be processed to a current canvas, and drawing one or more areas to be segmented based on a click event received by the current canvas;
acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule;
extracting one or more images to be annotated from the images to be processed according to the segmentation position index set;
carrying out local processing on one or more images to be marked and outputting marking results;
the step of obtaining the segmentation position index set of the region to be segmented based on a non-zero surrounding rule comprises the following steps:
converting the edges of the region to be segmented into vectors, and initializing the surrounding number of each pixel point in the image to be processed to zero;
taking the pixel point to be judged as a starting point, and casting a ray from it along the positive x-axis direction;
calculating the surrounding number of the pixel points to be judged based on a surrounding number formula according to the rays;
if the surrounding number of the pixel points to be judged is not equal to 0, judging that the pixel points to be judged are positioned in the area to be segmented;
sequentially judging whether each pixel point in the image to be processed is positioned in the area to be segmented, and storing the pixel points positioned in the area to be segmented into the segmentation position index set;
the regions to be segmented are all approximated as polygons, and the step of calculating the surrounding number of the pixel point to be judged based on the surrounding number calculation formula includes the following steps:
the pixel point to be judged is expressed as (x, y) and its surrounding number as f_m(x, y); the surrounding number calculation formula is
f_m(x, y) = Σ_{i=0}^{n} f_{(x_i, y_i)(x_{(i+1) % (n+1)}, y_{(i+1) % (n+1)})}(x, y)
wherein n+1 represents the number of pixel points (polygon vertices) of the region to be segmented, m represents the set of these pixel points, i is an integer from 0 to n, and % represents the modulo operation;
the step of calculating the surrounding number of the pixel point to be judged based on the surrounding number calculation formula further includes the following steps:
the direction of an edge of the polygon is defined as pointing from the origin pixel point (x_0, y_0) to the first pixel point (x_1, y_1), and the edges of the polygon are not within the region to be segmented;
when y_0 < y_1, with the origin as the center, the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) is calculated in the counterclockwise direction according to a first surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = 1 if y_0 < y < y_1 and the edge crosses the ray; 0.5 if y = y_0 or y = y_1 and the edge meets the ray at that endpoint; 0 otherwise;
when y_0 > y_1, with the origin as the center, the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) is calculated in the clockwise direction according to a second surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = -1 if y_1 < y < y_0 and the edge crosses the ray; -0.5 if y = y_0 or y = y_1 and the edge meets the ray at that endpoint; 0 otherwise;
when y_0 = y_1, the surrounding count f_{(x_0, y_0)(x_1, y_1)}(x, y) is calculated according to a third surrounding count calculation formula: f_{(x_0, y_0)(x_1, y_1)}(x, y) = 0.
2. The method according to claim 1, wherein the step of locally processing one or more of the images to be annotated and outputting an annotation result includes:
the image to be marked is expressed as I_c and the image pixel at pixel point (x, y) as I_c(x, y); the labeling result I_p of the image to be marked is then obtained according to a local processing formula and the labeling result I_p is output, wherein
I_p(x, y) = φ(I_c(x, y)) if (x, y) ∈ E, and I_p(x, y) = I_c(x, y) otherwise,
wherein E is the index set of the image to be marked and φ(·) is an image processing algorithm.
3. The method of claim 1, wherein the step of extracting one or more images to be annotated from the images to be processed according to the set of segmentation position indices comprises:
obtaining corresponding image segmentation interfaces according to the segmentation position index sets, wherein the number of the segmentation position index sets is the same as the number of the areas to be segmented, and the number of the image segmentation interfaces is the same as the number of the segmentation position index sets;
and calling an image segmentation operation, wherein the image segmentation operation extracts one or more images to be annotated according to the image segmentation interface.
4. An image segmentation and labeling device, which is characterized by comprising:
the drawing module is used for loading the image to be processed to the current canvas and drawing one or more areas to be segmented based on the clicking event received by the current canvas;
the acquisition module is used for acquiring a segmentation position index set corresponding to one or more regions to be segmented based on a non-zero surrounding rule;
the extraction module is used for extracting one or more images to be marked from the images to be processed according to the segmentation position index set;
the local processing module is used for carrying out local processing on one or more images to be marked and outputting marking results;
the acquisition module comprises:
the conversion unit is used for converting the edges of the region to be segmented into vectors and initializing the surrounding number of each pixel point in the image to be processed to zero;
the ray unit is used for taking a pixel point to be judged as a starting point and casting a ray from it along the positive x-axis direction;
the calculation unit is used for calculating the surrounding number of the pixel points to be judged based on a surrounding number formula according to the rays;
the judging unit is used for judging that the pixel points to be judged are positioned in the area to be segmented if the surrounding number of the pixel points to be judged is not equal to 0;
the storage unit is used for sequentially judging whether each pixel point in the image to be processed is positioned in the area to be segmented, and storing the pixel points positioned in the area to be segmented into the segmentation position index set;
the region to be segmented is polygonal approximation, and the computing unit is further configured to represent the pixel to be determined as (x, y) and the surrounding number as f m (x, y), the surrounding number calculation formula is
Wherein n+1 represents the number of pixel points in the region to be segmented, m represents a collection of pixel points in the region to be segmented, i is an integer from 0 to n, and% represents a film calculation;
and setting the direction of one side of the polygon to be defined by the original pixel point (x 0 ,y 0 ) Pointing to the first pixel point (x 1 ,y 1 ) And sides of the polygon are not within the partitioned area;
when y is 0 <y 1 When the origin is taken as the center, the surrounding count is calculated in the anticlockwise direction according to the first surrounding count calculation formulaWherein the first round counting calculation formula is as follows
When y is 0 >y 1 When the origin is taken as the center, the surrounding count is calculated clockwise according to the second surrounding count calculation formulaWherein the second round counting calculation formula is that
When y is 0 =y 1 When the surrounding count is calculated according to a third surrounding count calculation formulaWherein the third round counting calculation formula is as follows
5. An image segmentation annotation device comprising a processor, a memory and an image segmentation annotation program stored in the memory, which when executed by the processor, implements the steps of the image segmentation annotation method according to any of claims 1-3.
6. A computer storage medium, wherein an image segmentation labeling program is stored on the computer storage medium, and the image segmentation labeling program realizes the steps of the image segmentation labeling method according to any one of claims 1-3 when the image segmentation labeling program is executed by a processor.
CN201911148079.1A 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium Active CN111091570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911148079.1A CN111091570B (en) 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911148079.1A CN111091570B (en) 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111091570A CN111091570A (en) 2020-05-01
CN111091570B true CN111091570B (en) 2023-07-25

Family

ID=70393526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911148079.1A Active CN111091570B (en) 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111091570B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952324A (en) * 2017-04-07 2017-07-14 山东理工大学 The parallel overlap-add procedure device and method of vector polygon rasterizing
CN107918549B (en) * 2017-11-27 2021-01-19 广州视睿电子科技有限公司 Marking method and device for three-dimensional expansion drawing, computer equipment and storage medium
CN109978894A (en) * 2019-03-26 2019-07-05 成都迭迦科技有限公司 A kind of lesion region mask method and system based on three-dimensional mammary gland color ultrasound

Also Published As

Publication number Publication date
CN111091570A (en) 2020-05-01

Similar Documents

Publication Publication Date Title
CN107729935B (en) The recognition methods of similar pictures and device, server, storage medium
CN112785674A (en) Texture map generation method, rendering method, device, equipment and storage medium
CN109583509B (en) Data generation method and device and electronic equipment
CN111967297B (en) Image semantic segmentation method and device, electronic equipment and medium
CN112967381A (en) Three-dimensional reconstruction method, apparatus, and medium
CN112541902A (en) Similar area searching method, similar area searching device, electronic equipment and medium
CN111709879B (en) Image processing method, image processing device and terminal equipment
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN113837194B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN110717405A (en) Face feature point positioning method, device, medium and electronic equipment
CN113902856A (en) Semantic annotation method and device, electronic equipment and storage medium
KR102239588B1 (en) Image processing method and apparatus
CN114066814A (en) Gesture 3D key point detection method of AR device and electronic device
CN113506305A (en) Image enhancement method, semantic segmentation method and device for three-dimensional point cloud data
CN113240578A (en) Image special effect generation method and device, electronic equipment and storage medium
CN111091570B (en) Image segmentation labeling method, device, equipment and storage medium
CN112966592A (en) Hand key point detection method, device, equipment and medium
CN112256254A (en) Method and device for generating layout code
CN114511862B (en) Form identification method and device and electronic equipment
CN115187834A (en) Bill identification method and device
CN113610856A (en) Method and device for training image segmentation model and image segmentation
CN114037630A (en) Model training and image defogging method, device, equipment and storage medium
CN111292342A (en) Method, device and equipment for cutting text in image and readable storage medium
CN113343965A (en) Image tilt correction method, apparatus and storage medium
CN114359903B (en) Text recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant