CN113962306A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113962306A
CN113962306A (application CN202111235044.9A)
Authority
CN
China
Prior art keywords
image
pixel point
pixel
recognized
angle difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111235044.9A
Other languages
Chinese (zh)
Inventor
徐青松
李青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ruisheng Software Co Ltd
Original Assignee
Hangzhou Ruisheng Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ruisheng Software Co Ltd filed Critical Hangzhou Ruisheng Software Co Ltd
Priority to CN202111235044.9A
Publication of CN113962306A
Priority to PCT/CN2022/112384 (published as WO2023065792A1)
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

An image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises the following steps: acquiring an input image, wherein the input image comprises an object to be identified and a region to be processed, the region to be processed comprises a plurality of first pixel points, and at least part of the object to be identified is located in the region to be processed; performing first image processing on an input image to determine feature information of an object to be recognized, the feature information including a reference point of the object to be recognized; acquiring angle difference information of a region to be processed, wherein the angle difference information indicates angle differences of a plurality of first pixel points, and the angle difference of each first pixel point is an angle difference value between a pixel direction between each first pixel point and a reference point and a gradient direction of each first pixel point; and identifying the angle difference information and the input image to obtain an identification result corresponding to the object to be identified. The method can improve the accuracy of identifying the object to be identified.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
Embodiments of the present disclosure relate to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of science, technology, and the economy, image processing technology has permeated many aspects of our lives and plays an indispensable role in modern life. For example, image processing techniques can be used for video playback, for extracting data and information from images, and so on. Image recognition is one of the important applications of image processing; image recognition technology enables face recognition, target feature recognition, and the like.
Disclosure of Invention
At least one embodiment of the present disclosure provides an image processing method, including: the method comprises the steps of obtaining an input image, wherein the input image comprises an object to be identified and a region to be processed, the region to be processed comprises a plurality of first pixel points, and at least part of the object to be identified is located in the region to be processed; performing first image processing on an input image to determine feature information of an object to be recognized, wherein the feature information includes a reference point of the object to be recognized; acquiring angle difference information of a region to be processed, wherein the angle difference information indicates angle differences of a plurality of first pixel points, and the angle difference of each first pixel point is an angle difference value between a pixel direction between each first pixel point and a reference point and a gradient direction of each first pixel point; and identifying the angle difference information and the input image to obtain an identification result corresponding to the object to be identified.
For example, in an image processing method provided in an embodiment of the present disclosure, the angle difference information includes an angle difference image, the angle difference image includes a plurality of second pixel points, the plurality of second pixel points and the plurality of first pixel points correspond to one another one to one, and the obtaining of the angle difference information of the to-be-processed region includes: obtaining angle differences of a plurality of first pixel points; determining the gray values of a plurality of second pixel points according to the angle differences of the plurality of first pixel points, wherein the gray value of each second pixel point is determined according to the angle difference of the first pixel point corresponding to each second pixel point; and generating an angle difference image according to the gray values of the plurality of second pixel points.
For example, in the image processing method provided in an embodiment of the present disclosure, the gray value of each second pixel point is inversely correlated with the angle difference of the first pixel point corresponding to each second pixel point.
For example, in an image processing method provided in an embodiment of the present disclosure, obtaining angle differences of a plurality of first pixel points includes: determining the gradient direction of each first pixel point; taking the direction of a vector formed by each first pixel point and the reference point as the pixel direction of each first pixel point, wherein the starting point of the vector is each first pixel point; and calculating an angle difference value between the gradient direction of each first pixel point and the pixel direction of each first pixel point, and taking the angle difference value as the angle difference of each first pixel point.
For example, in an image processing method provided in an embodiment of the present disclosure, determining a gradient direction of each first pixel point includes: respectively acquiring a gradient value of each first pixel point in a first direction and a gradient value of each first pixel point in a second direction; and determining the gradient direction of each first pixel point according to the gradient value of each first pixel point in the first direction and the gradient value of each first pixel point in the second direction.
For example, in an image processing method provided by an embodiment of the present disclosure, identifying angle difference information and an input image to obtain an identification result corresponding to an object to be identified includes: performing edge detection on the input image to obtain an edge line graph corresponding to the input image; carrying out convolution calculation on the angle difference information and the edge line graph to obtain a projection graph; and identifying the projection graph to obtain an identification result corresponding to the object to be identified.
For example, in an image processing method provided in an embodiment of the present disclosure, the feature information further includes an outer contour of the object to be recognized, and the recognizing the projection diagram to obtain a recognition result corresponding to the object to be recognized includes: performing second image processing on the projection drawing according to the outer contour to obtain an image area, wherein the image area is an area where an object to be identified is located, and the area to be processed comprises the image area; and analyzing the image area to obtain a recognition result corresponding to the object to be recognized.
For example, in an image processing method provided by an embodiment of the present disclosure, performing second image processing on a projection drawing according to an outer contour to obtain an image area includes: and performing cutting processing on the projection drawing according to the outer contour so as to cut out the image area from the projection drawing.
For example, in an image processing method provided by an embodiment of the present disclosure, performing second image processing on a projection drawing according to an outer contour to obtain an image area includes: and setting the gray value of each pixel point in other areas outside the outer contour in the projection drawing as a fixed value to obtain an image area, wherein the size of the image area is the same as that of the projection drawing.
For example, in an image processing method provided in an embodiment of the present disclosure, analyzing an image region to obtain a recognition result corresponding to an object to be recognized includes: normalizing the gray values of all pixel points in the image area to obtain a normalized image, wherein the gray value of each pixel point in the normalized image is in the range of [0,255 ]; and analyzing the normalized image to obtain an identification result corresponding to the object to be identified.
For example, in an image processing method provided by an embodiment of the present disclosure, an object to be recognized includes at least one stripe to be recognized, and analyzing a normalized image to obtain a recognition result of the object to be recognized includes: and analyzing the normalized image to obtain the number of at least one stripe to be recognized, wherein the recognition result comprises the number of the at least one stripe to be recognized.
For example, in an image processing method provided by an embodiment of the present disclosure, a normalized image includes at least one curved stripe having a first gray value and at least one curved stripe having a second gray value, the at least one curved stripe having the first gray value and the at least one curved stripe having the second gray value are alternately arranged, and the second gray value is smaller than the first gray value, and analyzing the normalized image to obtain a number of at least one to-be-identified stripe includes: at least one ray is drawn from the reference point by taking the reference point as a vertex, wherein each ray intersects at least one curvilinear stripe with a first gray value and/or at least one curvilinear stripe with a second gray value; counting the change times of the gray value of a pixel point on the ray in the direction of the ray from a first gray value to a second gray value aiming at each ray; and determining the number of at least one stripe to be identified based on the change times corresponding to each ray.
For example, in an image processing method provided by an embodiment of the present disclosure, the determining, by using at least one ray as a plurality of rays, the number of at least one to-be-identified stripe based on the number of changes corresponding to each ray includes: and calculating the average value of a plurality of change times corresponding to the plurality of rays, and taking the average value as the number of at least one to-be-identified stripe.
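As an illustration of the ray-based counting described above, the following Python sketch casts rays from the reference point, counts the transitions from the first gray value to the second gray value along each ray, and averages the counts. The function and parameter names are hypothetical, and it is assumed that the normalized image contains only the two gray values.

import numpy as np

def count_rings(norm_img, center, num_rays=36, high=255, low=0):
    # Count, along each ray cast from the reference point, the transitions from
    # the first gray value (high) to the second gray value (low), then average.
    h, w = norm_img.shape
    cx, cy = center
    counts = []
    for k in range(num_rays):
        theta = 2.0 * np.pi * k / num_rays
        transitions, prev = 0, None
        for r in range(1, max(h, w)):                      # step outward pixel by pixel
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if not (0 <= x < w and 0 <= y < h):            # left the image
                break
            val = norm_img[y, x]
            if prev == high and val == low:                # first -> second gray value
                transitions += 1
            prev = val
        counts.append(transitions)
    return int(round(float(np.mean(counts))))              # averaged count taken as the stripe number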
For example, in an image processing method provided by an embodiment of the present disclosure, performing first image processing on an input image to determine feature information of an object to be recognized includes: carrying out fuzzy processing on an input image to obtain a fuzzy image; carrying out edge detection on the blurred image to obtain a detection contour of the object to be identified; and fitting the detection contour to obtain the outer contour of the object to be recognized and the center of the outer contour, wherein the center of the outer contour is a datum point.
For example, in an image processing method provided by an embodiment of the present disclosure, performing first image processing on an input image to determine feature information of an object to be recognized includes: the input image is input into the recognition model to recognize the feature information of the object to be recognized by the recognition model.
For example, in the image processing method provided by an embodiment of the present disclosure, the object to be identified includes a cross section of a tree, the cross section of the tree includes an annual ring of the tree, and the identification result includes the annual ring of the tree or the number of annual rings of the tree.
At least one embodiment of the present disclosure provides an image processing apparatus including: the image acquisition unit is configured to acquire an input image, wherein the input image comprises an object to be identified and a region to be processed, the region to be processed comprises a plurality of first pixel points, and at least part of the object to be identified is located in the region to be processed; a processing unit configured to perform first image processing on an input image to determine feature information of an object to be recognized, the feature information including a reference point of the object to be recognized; the angle difference acquisition unit is configured to acquire angle difference information of the area to be processed, the angle difference information indicates angle differences of a plurality of first pixel points, and the angle difference of each first pixel point is an angle difference value between a pixel direction between each first pixel point and the reference point and a gradient direction of each first pixel point; and the identification unit is configured to identify the angle difference information and the input image so as to obtain an identification result corresponding to the object to be identified.
At least one embodiment of the present disclosure provides an electronic device comprising a processor; a memory including one or more computer program modules; wherein one or more computer program modules are stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for implementing the image processing method provided by any of the embodiments of the present disclosure.
At least one embodiment of the present disclosure provides a computer-readable storage medium for non-transitory storage of computer-readable instructions that, when executed by a computer, may implement an image processing method provided by any embodiment of the present disclosure.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
Fig. 1A illustrates a flowchart of an image processing method provided by at least one embodiment of the present disclosure;
FIG. 1B illustrates a schematic diagram of an input image provided by at least one embodiment of the present disclosure;
fig. 1C illustrates a schematic diagram of a pixel direction and a gradient direction of an exemplary illustrated pixel point P provided by at least one embodiment of the present disclosure;
FIG. 1D schematically shows an angle difference image of the input image shown in FIG. 1B;
fig. 2A illustrates a flowchart of a method of step S20 in fig. 1A according to at least one embodiment of the present disclosure;
fig. 2B is a schematic diagram illustrating a blurred image obtained by performing step S21 according to at least one embodiment of the disclosure;
fig. 2C is a schematic diagram illustrating the detection profile obtained by performing step S22 according to at least one embodiment of the present disclosure;
fig. 2D illustrates the outer contour of the object to be recognized and the center of the outer contour obtained by performing step S23 according to at least one embodiment of the present disclosure;
fig. 2E illustrates a schematic diagram of an area where an object to be recognized is located and an outer contour of the object to be recognized, according to at least one embodiment of the present disclosure;
fig. 3A illustrates a flowchart of a method of step S30 in fig. 1A according to at least one embodiment of the present disclosure;
fig. 3B illustrates an effect graph generated according to a gradient of a pixel along a first direction, provided by at least one embodiment of the present disclosure;
fig. 3C illustrates an effect graph generated according to a gradient of a pixel along a second direction provided by at least one embodiment of the present disclosure;
fig. 3D illustrates an effect graph generated according to a gradient direction of each first pixel point according to at least one embodiment of the present disclosure;
fig. 3E illustrates a corresponding relationship diagram of a pixel direction and a gray scale of a pixel point and an effect diagram generated according to the pixel direction of the pixel according to at least one embodiment of the disclosure;
fig. 3F illustrates a schematic diagram of an angle difference between two exemplary pixel points P and Q provided by at least one embodiment of the present disclosure;
fig. 4A illustrates a flowchart of a method of step S40 in fig. 1A according to at least one embodiment of the present disclosure;
FIG. 4B schematically shows an edge line graph obtained by edge detection of the input image shown in FIG. 1B;
fig. 4C illustrates a projection diagram obtained in step S42 according to at least one embodiment of the present disclosure;
fig. 4D illustrates a flowchart of a method of step S43 in fig. 4A according to at least one embodiment of the present disclosure;
fig. 4E illustrates an image region obtained by performing a cutting process on the projection diagram according to at least one embodiment of the disclosure;
FIG. 4F illustrates a schematic diagram of a normalization process provided by at least one embodiment of the present disclosure;
fig. 4G illustrates a normalized image obtained by normalizing the gray values of all the pixel points in the image region according to at least one embodiment of the present disclosure;
fig. 5A is a flowchart illustrating a method for analyzing a normalized image to obtain a number of at least one to-be-identified stripe according to at least one embodiment of the present disclosure;
FIG. 5B illustrates a schematic diagram of the method illustrated in FIG. 5A provided by at least one embodiment of the present disclosure;
fig. 6 shows a schematic block diagram of an image processing apparatus provided in at least one embodiment of the present disclosure;
fig. 7 illustrates a schematic block diagram of an electronic device provided by at least one embodiment of the present disclosure;
fig. 8 illustrates a schematic block diagram of another electronic device provided by at least one embodiment of the present disclosure; and
fig. 9 illustrates a schematic diagram of a computer-readable storage medium provided by at least one embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
Image recognition technology has been widely used and provides great help for our lives and for technological progress. For example, annual rings are an important basis for studying tree growth patterns, calculating forest productivity, and analyzing climate change, and are of great significance for monitoring the growth of trees. Traditional annual-ring identification relies mainly on manual measurement by professional staff and suffers from heavy workload, low efficiency, high cost, and a high likelihood of error. With the development of image recognition technology, it has been applied to the recognition of annual rings. However, due to certain environmental factors, existing image recognition techniques cannot accurately recognize an object to be recognized (for example, annual rings). For example, because of scars produced as a tree grows, burrs produced during felling, and noise introduced during image acquisition, annual rings cannot be accurately identified by conventional image recognition methods.
At least one embodiment of the present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises the following steps: acquiring an input image, wherein the input image comprises an object to be identified and a region to be processed, the region to be processed comprises a plurality of first pixel points, and at least part of the object to be identified is located in the region to be processed; performing first image processing on an input image to determine feature information of an object to be recognized, the feature information including a reference point of the object to be recognized; acquiring angle difference information of a region to be processed, wherein the angle difference information indicates angle differences of a plurality of first pixel points, and the angle difference of each first pixel point is an angle difference value between a pixel direction between each first pixel point and a reference point and a gradient direction of each first pixel point; and identifying the angle difference information and the input image to obtain an identification result corresponding to the object to be identified.
The image processing method identifies the object to be recognized in the input image according to the angle difference between the pixel direction from each pixel point in the region to be processed to the reference point of the object to be recognized and the gradient direction of that pixel point, so that noise in the input image can be suppressed and the accuracy of recognizing the object to be recognized is improved.
Fig. 1A illustrates a flowchart of an image processing method according to at least one embodiment of the present disclosure.
As shown in fig. 1A, the image processing method may include steps S10 to S40.
Step S10: the method comprises the steps of obtaining an input image, wherein the input image comprises an object to be identified and a region to be processed, the region to be processed comprises a plurality of first pixel points, and at least part of the object to be identified is located in the region to be processed.
Step S20: first image processing is performed on an input image to determine feature information of an object to be recognized, the feature information including a reference point of the object to be recognized.
Step S30: and obtaining angle difference information of the area to be processed, wherein the angle difference information indicates the angle difference of the first pixel points.
Step S40: and identifying the angle difference information and the input image to obtain an identification result corresponding to the object to be identified.
For step S10, the input image may be a two-dimensional image or a stereoscopic image. For example, the input image may be a two-dimensional image or a three-dimensional stereoscopic image obtained by image-capturing the object to be recognized with an image-capturing device (e.g., a digital camera, a mobile phone, a scanner, or the like). As another example, the input image may be a two-dimensional image or a stereoscopic image of the object to be recognized constructed by design software or modeling software. For example, the input image may be a grayscale image or a color image. For example, the shape of the input image may be a regular shape such as a rectangle or a square, or may be an irregular shape, and the shape, size, and the like of the input image may be set by the user according to the actual situation. For example, the input image may be an original image directly acquired by the image acquisition device; in addition, in order to avoid the influence of data quality, data imbalance, and the like of the original image on the identification, the image processing method provided by the embodiments of the present disclosure may further include an operation of preprocessing the original image, that is, the input image may also be an image obtained by preprocessing the original image. For example, the preprocessing may eliminate extraneous or noisy information in the original image to facilitate better processing of the original image. The preprocessing may include, for example, performing warping correction, scaling, cropping, Gamma correction, image enhancement, or noise reduction filtering on the original image, so as to improve the accuracy and reliability of each operation in the subsequent steps.
In some embodiments of the present disclosure, the object to be identified comprises a cross section of a tree, the cross section of the tree comprising an annual ring of the tree. For example, an input image in which a cross section of a stump is an object to be recognized is obtained by image-capturing a cross section of the stump by an image capturing device.
In some embodiments of the present disclosure, the region to be processed of the input image may be a region containing an object to be recognized, or a region containing at least a part of an object to be recognized. For example, all of the input image is a region to be processed, or a partial region of the input image is a region to be processed, for example, a region in which an object to be recognized in the input image is located is a region to be processed. Or, a part of the region where the object to be recognized is located and a part of the region in the input image other than the region where the object to be recognized is located are used as the regions to be processed.
Fig. 1B illustrates a schematic diagram of an input image provided by at least one embodiment of the present disclosure.
As shown in fig. 1B, the input image 100 may be acquired by image capturing a stump. The object to be identified may be, for example, a cross-section 101 of the stump.
The area to be processed may be, for example, the entire input image 100 or may also be the area where the cross-section 101 of the stump is located.
For step S20, for example, the feature information of the object to be recognized may be information describing a feature of the object to be recognized, the feature information including a reference point of the object to be recognized. For example, the feature information may also include an outer contour of the object to be recognized.
In some embodiments of the present disclosure, the reference point may be, for example, the center of the outer contour, the center of gravity, or the like.
For example, if the object to be recognized is a circular pattern or an elliptical pattern, the reference point of the object to be recognized may include the center of the circular pattern or the center of the elliptical pattern.
In some embodiments of the present disclosure, the object to be recognized may be an irregular pattern. For example, a stump is an irregular pattern resembling a circle or an ellipse. For an irregular pattern, for example, the input image may be subjected to image processing so that the irregular pattern is processed into the regular pattern closest to it, and the feature information of that regular pattern may then be used as the feature information of the object to be recognized. For example, for the cross section of a stump, edge detection is performed on the stump to obtain the annual-ring region where the annual rings of the stump are located, and curve fitting is then performed on the annual ring enclosing the largest area to obtain the regular pattern closest to that largest annual ring; this regular pattern is used as the outer contour of the annual-ring region. For example, if the outer contour is a circle, the center of the circle and the radius of the circle may also be used as the feature information of the cross section of the stump.
In some embodiments of the present disclosure, for example, an input image is input into a recognition model to recognize feature information of an object to be recognized by the recognition model. The recognition model is obtained by, for example, training a neural network on a plurality of samples and sample feature information of each of the plurality of samples. For example, the recognition model may be implemented using machine learning techniques and run, for example, on a general purpose computing device or a special purpose computing device. The recognition models are all neural network models obtained by pre-training. For example, the recognition model may be implemented using a neural network such as a DEEP convolutional neural network (DEEP-CNN).
In other embodiments of the present disclosure, for example, the input image is first subjected to preprocessing such as graying, filtering and noise reduction (for example, implemented with a bilateral filter), and image segmentation (for example, implemented with an adaptive threshold segmentation algorithm), and the preprocessed input image is then subjected to edge extraction using an edge detection operator (for example, the Canny operator in OpenCV, the Sobel operator, and the like) to obtain the outer contour of the object to be recognized. A plurality of suspected centers of the outer contour are then obtained by using a gradient-based Hough transform, and finally the suspected centers are clustered by a K-medoids (K-center point) clustering algorithm to obtain the reference point.
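A rough sketch of this pipeline with OpenCV is given below; the parameters, the input path, and the final 1-medoid step (which stands in for K-medoids clustering of the suspected centers) are illustrative assumptions rather than the patent's own implementation.

import cv2
import numpy as np

img = cv2.imread("input.jpg")                              # assumed input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)               # graying
gray = cv2.bilateralFilter(gray, 9, 75, 75)                # filtering and noise reduction
edges = cv2.Canny(gray, 50, 150)                           # edge extraction (outer contour)

# Gradient-based Hough transform: each returned circle center is a suspected center.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                           param1=150, param2=40)
if circles is not None:
    centers = circles[0, :, :2]                            # suspected centers (x, y)
    # Medoid of the suspected centers, i.e. the candidate minimizing the summed
    # distance to all other candidates, taken as the reference point.
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    reference_point = centers[dists.sum(axis=1).argmin()]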
Fig. 2A illustrates a flowchart of a method of step S20 in fig. 1A according to at least one embodiment of the present disclosure.
In other embodiments of the present disclosure, as shown in fig. 2A, step S20 may include steps S21 to S23.
Step S21: the input image is blurred to obtain a blurred image.
Step S22: and carrying out edge detection on the blurred image to obtain a detection contour of the object to be identified.
Step S23: and fitting the detection contour to obtain the outer contour of the object to be identified and the center of the outer contour, wherein the center of the outer contour is a datum point.
For example, in the example shown in fig. 2A, the first image processing includes a blurring process, an edge detection process, and a fitting process.
The embodiment of fig. 2A is described below in conjunction with fig. 2B-2D.
Fig. 2B is a schematic diagram illustrating a blurred image obtained by performing step S21 according to at least one embodiment of the disclosure.
In step S21, the image shown in fig. 2B is a blurred image obtained by blurring the input image 100 shown in fig. 1B. For example, the input image 100 is subjected to blurring processing such as Gaussian blurring (Gaussian blur) and mean filtering to obtain a blurred image.
Fig. 2C is a schematic diagram illustrating the detection profile obtained by performing step S22 according to at least one embodiment of the present disclosure.
For step S22, for example, the image shown in fig. 2C is a detected contour of the object to be recognized obtained by edge detection of the blurred image shown in fig. 2B using an edge detection operator. For example, in fig. 2C, a white line is the detection contour. The edge detection operator may be any one of Canny operator, Sobel (Sobel) operator, Laplace (Laplace) operator, and the like in opencv, for example.
Fig. 2D illustrates the outer contour of the object to be recognized and the center of the outer contour obtained by performing step S23 according to at least one embodiment of the present disclosure.
For step S23, for example, in some embodiments, the detected contour shown in fig. 2C is curve-fitted to obtain an outer contour of the object to be recognized and a center of the outer contour, as shown in fig. 2D, the outer contour of the object to be recognized is the outer contour 201, and the outer contour 201 is a circular contour. The center of the circular outer contour is the center of a circle O. The center O is used as a reference point. For example, the least square method may be adopted to perform curve fitting to obtain the outer contour of the object to be identified, and then the center of the outer contour is determined according to the outer contour.
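For example, steps S21 to S23 can be sketched as follows, assuming an illustrative input path and an algebraic least-squares circle fit; the variable names and parameters are assumptions, not the patent's.

import cv2
import numpy as np

img = cv2.imread("stump.jpg", cv2.IMREAD_GRAYSCALE)        # assumed input path
blurred = cv2.GaussianBlur(img, (9, 9), 0)                  # step S21: blurring
edges = cv2.Canny(blurred, 50, 150)                         # step S22: detection contour

ys, xs = np.nonzero(edges)                                  # coordinates of contour pixels
# Step S23: least-squares fit of the circle x^2 + y^2 + a*x + b*y + c = 0
A = np.column_stack([xs, ys, np.ones_like(xs)])
b = -(xs.astype(np.float64) ** 2 + ys.astype(np.float64) ** 2)
(a_, b_, c_), *_ = np.linalg.lstsq(A, b, rcond=None)
cx, cy = -a_ / 2.0, -b_ / 2.0                               # center O, used as the reference point
radius = np.sqrt(cx ** 2 + cy ** 2 - c_)                    # radius of the fitted outer contour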
It should be noted that the outer contour of the object to be recognized may also have another shape, such as a substantially elliptical shape; the shape of the outer contour of the object to be recognized is not particularly limited in the present disclosure. The region enclosed by the outer contour of the object to be recognized and the region in which the object to be recognized is located do not necessarily overlap completely; the outer contour only needs to substantially enclose the object to be recognized.
Fig. 2E illustrates a schematic diagram of an area where an object to be recognized is located and an outer contour of the object to be recognized, according to at least one embodiment of the present disclosure.
Fig. 2E is a schematic diagram obtained by projecting the fitted outer contour of the object to be recognized onto the blurred image, for example.
As shown in fig. 2E, the area 202 in which the object to be recognized (i.e., the cross-section of the stump) is located and the area enclosed by the outer contour 201 of the object to be recognized do not completely overlap. The area of the region 202 in which the object to be identified (i.e. the cross-section of the stump) is located is slightly larger than the area of the region enclosed by the outer contour 201 of the object to be identified.
For step S30, the angle difference of each first pixel point is the angle difference between the pixel direction between each first pixel point and the reference point and the gradient direction of each first pixel point.
In some embodiments of the present disclosure, a direction of a vector formed by each first pixel point and the reference point is taken as a pixel direction of each first pixel point, and a starting point of the vector is taken as each first pixel point.
In some embodiments of the present disclosure, the gradient direction of each first pixel point is the direction of the gradient of each first pixel point.
In some embodiments of the present disclosure, the angle difference information comprises an angle difference image. Fig. 1D exemplarily shows an angle difference image of the region to be processed in fig. 1B. The pixel direction and gradient direction of a pixel point are described below with reference to fig. 1C.
Fig. 1C illustrates a schematic diagram of a pixel direction and a gradient direction of an exemplary illustrated pixel point P provided in at least one embodiment of the present disclosure.
As shown in fig. 1C, the pixel point P is an example of a first pixel point of the present disclosure. The pixel direction of the pixel point P is the direction of the vector PO formed by the pixel point P and the reference point O, the starting point of the vector being the pixel point P.
As shown in fig. 1C, the gradient direction of the pixel point P is the direction of the gradient g of the pixel point P.
The angle difference of the pixel point P is the difference between the direction of the vector PO and the direction of the gradient g of the pixel point P.
For example, the direction of the vector PO makes an angle α with a reference direction (the direction shown by the dotted arrow in fig. 1C), and the direction of the gradient g makes an angle β with the reference direction; the angle difference of the pixel point P is then the difference β − α between the angle β and the angle α. That is, the angle difference of the pixel point P is β − α. The reference direction may be, for example, the width direction of the input image, i.e., the X direction in fig. 1C.
In some embodiments of the present disclosure, for example, the angle difference of a pixel point is in the range of [−90°, 90°].
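For illustration, the following sketch computes the angle difference of a single pixel point P relative to the reference point O, assuming that the stated [−90°, 90°] range is obtained by folding the raw difference β − α; the function name and arguments are illustrative.

import numpy as np

def angle_difference(px, py, ox, oy, gx, gy):
    # px, py: pixel point P; ox, oy: reference point O; gx, gy: gradients of P.
    pixel_dir = np.arctan2(oy - py, ox - px)   # direction of the vector PO (angle alpha)
    grad_dir = np.arctan2(gy, gx)              # gradient direction of P (angle beta)
    diff = np.degrees(grad_dir - pixel_dir)    # beta - alpha, in degrees
    diff = (diff + 180.0) % 360.0 - 180.0      # wrap into (-180, 180]
    if diff > 90.0:                            # fold so that the result lies in [-90, 90]
        diff -= 180.0
    elif diff < -90.0:
        diff += 180.0
    return diff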
In some embodiments of the present disclosure, the angle difference information may be, for example, an angle difference image, where the angle difference image includes a plurality of second pixel points, and the plurality of second pixel points correspond to the plurality of first pixel points one to one.
In step S30, for example, an angle difference image of the region to be processed is obtained, and the gray value of each second pixel point in the angle difference image is inversely correlated with the angle difference of the first pixel point corresponding to that second pixel point. That is, the larger the angle difference of a first pixel point in the region to be processed, the smaller the gray value of the corresponding second pixel point in the angle difference image.
For example, a gray value of 255 (i.e., a white pixel) indicates that the angle difference is small or zero, and a gray value of 0 (i.e., a black pixel) indicates that the angle difference is large. For example, in a scene where the object to be identified is the cross section of a tree, the smaller the angle difference of a pixel point in the region to be processed, the higher the probability that the pixel point belongs to an annual ring; that is, in the cross section of the tree, the pixel points corresponding to annual rings have small angle differences, so that a rough line profile of the annual rings can be displayed by the angle difference image.
Fig. 1D schematically shows an angle difference image of the input image shown in fig. 1B.
As shown in fig. 1D, the angle difference image shows a substantially linear profile of the growth rings.
In fig. 1D, the line profile formed by the pixel points with higher gray values is an approximate line profile of the growth ring.
As shown in fig. 1D, the gray values of the pixel points 101, 102, 103, etc. are higher, and the pixel points 101, 102, 103, etc. form an approximate line contour of the growth ring.
It should be understood that the pixel 101, the pixel 102, and the pixel 103 are only examples for convenience of illustrating the rough line contour of the annual ring, and the rough line contour of the annual ring is actually composed of more pixels, rather than only three pixels.
Fig. 3A illustrates a flowchart of a method of step S30 in fig. 1A according to at least one embodiment of the present disclosure.
As shown in fig. 3A, step S30 may include steps S31 to S33.
Step S31: and obtaining the angle difference of the first pixel points.
Step S32: determining the gray values of a plurality of second pixel points according to the angle differences of the plurality of first pixel points, wherein the gray value of each second pixel point is determined according to the angle difference of the first pixel point corresponding to each second pixel point.
Step S33: and generating an angle difference image according to the gray values of the plurality of second pixel points.
In this embodiment, the angle difference of the plurality of first pixel points is represented by the angle difference image, which is beneficial to the execution of step S40, and the angle difference of the plurality of first pixel points can be visually shown when the angle difference image is output.
For step S31, for example, the gradient direction of each first pixel point is determined, the direction of a vector formed by each first pixel point and the reference point is taken as the pixel direction of each first pixel point, the starting point of the vector is taken as each first pixel point, and an angle difference between the gradient direction of each first pixel point and the pixel direction of each first pixel point is calculated, where the angle difference is the angle difference of the first pixel point.
For example, the pixel direction and the gradient direction of each first pixel point are determined according to the method schematically described in fig. 1C, and then the angle difference between the gradient direction of each first pixel and the pixel direction of each first pixel point is calculated.
In some embodiments of the present disclosure, determining the gradient direction of each first pixel point comprises: respectively obtaining the gradient value of each first pixel point in the first direction and the gradient value of each first pixel point in the second direction, and determining the gradient direction of each first pixel point according to the gradient value of each first pixel point in the first direction and the gradient value of each first pixel point in the second direction.
For example, an input image is subjected to gray processing to obtain a gray image, and then gradient values in a first direction and gradient values in a second direction of pixel points in a region corresponding to a region to be processed in the gray image are calculated respectively based on the gray image. The gradient value of each pixel point in the region corresponding to the region to be processed in the gray image in the first direction and the gradient value of each pixel point in the second direction are the gradient value of each corresponding first pixel point in the region to be processed in the first direction and the gradient value of each corresponding first pixel point in the second direction.
In some embodiments of the present disclosure, the first direction may be, for example, a width direction of a gray image. The width direction of the grayscale image coincides with the width direction of the input image. For example, the first direction coincides with the X direction in fig. 1C. The second direction may be a direction perpendicular to the first direction. The second direction is, for example, a height direction of the grayscale image, and coincides with, for example, the Y direction in fig. 1C. For example, in some embodiments, the X direction may be a horizontal direction and the Y direction may be a vertical direction.
For example, the gradient value of the pixel point (x, y) in the grayscale image in the first direction is calculated according to the formula gx = f(x+1, y) − f(x, y), where gx represents the gradient value in the first direction, f(x, y) is the gray value of the pixel point (x, y) in the grayscale image, and f(x+1, y) is the gray value of the pixel point (x+1, y) adjacent to the pixel point (x, y) in the first direction.
In other embodiments of the present disclosure, the gradient of a pixel point in the first direction may be calculated, for example, using the Sobel operator. Fig. 3B illustrates a first gradient image generated by calculating the gradient, in the first direction, of the pixel points in the grayscale image of the input image using the Sobel operator, according to at least one embodiment of the present disclosure. As shown in fig. 3B, for example, the first gradient image is obtained by performing a convolution calculation on the grayscale image of the input image with the Sobel operator, and the gray value of each pixel point in the first gradient image indicates the gradient value, in the first direction, of the corresponding pixel point in the grayscale image.
Similarly, the gradient value of the pixel point (x, y) in the grayscale image in the second direction may be calculated according to the formula gy = f(x, y+1) − f(x, y), where gy represents the gradient value in the second direction, f(x, y) is the gray value of the pixel point (x, y) in the grayscale image, and f(x, y+1) is the gray value of the pixel point (x, y+1) adjacent to the pixel point (x, y) in the second direction. Fig. 3C illustrates a second gradient image generated by calculating the gradient, in the second direction, of the pixel points in the grayscale image of the input image using the Sobel operator, according to at least one embodiment of the present disclosure. As shown in fig. 3C, for example, the second gradient image is obtained by performing a convolution calculation on the grayscale image of the input image with the Sobel operator, and the gray value of each pixel point in the second gradient image indicates the gradient value, in the second direction, of the corresponding pixel point in the grayscale image.
In some embodiments of the present disclosure, for example, if a gradient value of a pixel point in a first direction is gx and a gradient value in a second direction is gy, the gradient direction of the pixel point can be represented by an arctangent of gy/gx, i.e., an angle between the gradient direction and the reference direction is equal to arctan2(gy, gx).
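For example, with OpenCV the per-pixel gradients gx and gy and the gradient direction map can be computed as in the following sketch; the input path and kernel size are illustrative.

import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2GRAY)
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # gradient along the first (X) direction
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # gradient along the second (Y) direction
grad_dir = np.arctan2(gy, gx)                      # per-pixel gradient direction, in radians,
                                                   # measured from the reference (X) direction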
Fig. 3D illustrates an effect graph generated according to a gradient direction of each pixel point according to at least one embodiment of the present disclosure.
For example, in fig. 3D, the gray-level value of each pixel point represents the gradient direction of the pixel point.
Fig. 3E illustrates a corresponding relationship diagram between a pixel direction and a gray level of a pixel point and an effect diagram generated according to the pixel direction of the pixel point according to at least one embodiment of the disclosure.
The left diagram of fig. 3E shows the correspondence between the pixel direction of a pixel point and the effect diagram generated by the pixel point according to the pixel direction of the pixel point.
The right diagram of fig. 3E shows an exemplary effect diagram generated according to the pixel directions of a plurality of pixel points.
The grayscale image in the right part of fig. 3E is an exemplary effect image obtained by normalizing the pixel directions of a plurality of pixel points. For example, if the maximum angle between the pixel directions of the pixel points in the input image and the reference direction is max, and the minimum such angle is min, the pixel direction may be normalized according to (a(x, y) − min)/(max − min), where a(x, y) is the angle between the pixel direction of the pixel point (x, y) and the reference direction.
In some embodiments of the present disclosure, the larger the angular difference between the direction of the vector formed by the pixel point and the reference point (i.e., the pixel direction) and the reference direction, the larger the grayscale value of the pixel point.
As shown in the left diagram of fig. 3E, the angle range between the pixel direction of the pixel point and the reference direction is [ -180 °, 180 ° ]. The smaller the angle between the pixel direction of the pixel point and the reference direction is, the smaller the gray value of the pixel point in the effect graph is. That is, the smaller the angle between the pixel direction of the pixel point and the reference direction is, the darker the color of the pixel point in the effect map is.
Fig. 3F illustrates an exemplary schematic diagram of an angle difference between two pixel points P and Q according to at least one embodiment of the present disclosure.
The pixel point P and the pixel point Q are two examples of first pixel points of the present disclosure. As shown in fig. 3F, the angle difference of the pixel point P is the difference between the direction of the gradient of the pixel point P and the direction of the vector PO, i.e., the difference β − α between the angle β and the angle α.
The angle difference of the pixel point Q is the difference between the direction μ of the gradient of the pixel point Q and the direction of the vector QO, i.e., the difference between the angle μ and the angle that the vector QO makes with the reference direction.
As shown in fig. 3F, the angle difference between the gradient direction and the pixel direction of the pixel point P (i.e., between the direction of the gradient of P and the direction of the vector PO) is small, while the angle difference between the gradient direction and the pixel direction of the pixel point Q (i.e., between the direction μ of the gradient of Q and the direction of the vector QO) is large. In a scene where the object to be identified is the cross section of a tree, the smaller the angle difference of a pixel point, the higher the probability that the pixel point belongs to an annual ring; therefore, the probability that the pixel point P is a pixel point corresponding to an annual ring is higher than the probability that the pixel point Q is a pixel point corresponding to an annual ring.
For step S32, the gray values of the plurality of second pixel points are determined according to the angle difference of the first pixel point corresponding to each second pixel point. For example, the gray value of each second pixel point is determined according to a mapping relationship between angle differences and gray values. The mapping relationship can be set by the user according to the actual situation. For example, when the angle difference of a first pixel point is within a first angle-difference range, the gray value of the corresponding second pixel point may be a gray value A; when the angle difference of the first pixel point is within a second angle-difference range, the gray value of the corresponding second pixel point may be a gray value B; and so on. The first angle-difference range, the second angle-difference range, the gray value A, and the gray value B may be preset by the user and stored in the form of a table.
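As an illustration of step S32, the following sketch uses one simple mapping that satisfies the inverse correlation described above (the table-based mapping in the preceding paragraph would serve equally well); the function name is hypothetical.

import numpy as np

def angle_diff_to_gray(angle_diff_deg):
    # angle_diff_deg: array of per-pixel angle differences in [-90, 90] degrees.
    # The larger the absolute angle difference, the smaller the resulting gray value.
    gray = 255.0 * (1.0 - np.abs(angle_diff_deg) / 90.0)
    return np.clip(gray, 0, 255).astype(np.uint8)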
In other embodiments of the present disclosure, the angular difference information may also be an angular difference matrix formed by the angular difference of each pixel point in each region to be processed.
For step S40, for example, the object to be recognized includes at least one stripe to be recognized, and the angle difference information of the pixel points in the image region where the at least one stripe to be recognized is located and the input image are recognized to obtain the at least one stripe to be recognized or the number of the at least one stripe to be recognized. At this time, the recognition result corresponding to the object to be recognized is the shape, number, position (position of the at least one stripe to be recognized in the input image) and the like of the at least one stripe to be recognized.
For example, in the input image shown in fig. 1B, the object to be recognized includes the cross section of a tree, the cross section of the tree includes the annual rings of the tree, and the recognition result includes the annual rings of the tree or the number, shape, and position of the annual rings (the position of the annual rings in the input image), and the like. The angle difference information of each pixel point in the cross section and the input image are identified to obtain the annual rings of the tree or the number of annual rings of the tree.
It should be understood that, although the embodiments of the present disclosure are described herein by taking the cross section of a tree or stripes as the object to be identified, the image processing method provided by the present disclosure is not limited to identifying the cross section of a tree or such stripes. The stripes may include, or be, concentric stripes, such as the interference fringes of Newton's rings.
Fig. 4A illustrates a flowchart of a method of step S40 in fig. 1A according to at least one embodiment of the present disclosure.
As shown in fig. 4A, step S40 includes steps S41 to S43.
Step S41: and carrying out edge detection on the input image to obtain an edge line graph corresponding to the input image.
Step S42: and carrying out convolution calculation on the angle difference information and the edge line graph to obtain a projection graph.
Step S43: and identifying the projection graph to obtain an identification result corresponding to the object to be identified.
In this embodiment, the angle difference information and the edge line graph can be fused by performing convolution calculation on the angle difference information and the edge line graph, so that the noise influence is reduced, and the recognition accuracy can be improved.
In step S41, an edge of the input image is detected by using any one of a Canny operator, a Sobel operator, a Laplace operator, and the like, for example, to obtain an edge line graph.
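A minimal sketch of step S41 using the Canny operator as implemented in OpenCV is shown below; the two threshold values (50 and 150) and the variable names are assumptions chosen for illustration, not values specified by the present disclosure.

import cv2
# input_image is assumed to be a color (BGR) image loaded with cv2.imread.
gray = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
edge_line_graph = cv2.Canny(gray, 50, 150)  # binary edge map: edge pixels are 255, all others 0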
Fig. 4B schematically shows an edge line graph obtained by performing edge detection on the input image shown in fig. 1B.
For step S42, the angular difference information may be, for example, an angular difference image, and the angular difference image and the edge line graph are subjected to convolution calculation (for example, convolution multiplication calculation) to obtain a projection graph.
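One plausible reading of the convolution multiplication calculation, sketched below purely as an assumption, is an element-wise product of the angle difference image and the edge line graph; the variable names are hypothetical.

import numpy as np
# angle_diff_image and edge_line_graph are assumed to be two arrays of the same shape.
projection = angle_diff_image.astype(np.float32) * edge_line_graph.astype(np.float32)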
Fig. 4C illustrates a projection view obtained in step S42 provided by at least one embodiment of the present disclosure.
In step S43, for example, a line in the projection drawing is recognized, and a recognition result is obtained.
Fig. 4D illustrates a flowchart of a method of step S43 in fig. 4A according to at least one embodiment of the present disclosure.
As shown in fig. 4D, step S43 may include step S431 and step S432.
In this embodiment, the characteristic information further comprises an outer contour of the object to be recognized.
Step S431: and carrying out second image processing on the projection drawing according to the outer contour to obtain an image area, wherein the image area is the area where the object to be identified is located, and the area to be processed comprises the image area.
In some embodiments of the present disclosure, the image region is cut out of the projection graph by performing a cutting process on the projection graph according to the outer contour.
For example, the region of the projection view outside the region surrounded by the outer contour is removed, and the image region is cut out of the projection view.
Fig. 4E illustrates an image region obtained by performing a cutting process on the projection diagram according to at least one embodiment of the present disclosure.
As shown in fig. 4E, the area of the image region is close to or slightly larger than the area enclosed by the outer contour.
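A minimal sketch of such a cutting process is given below, assuming that the outer contour is available as an OpenCV contour (an array of points) and that projection is the projection graph; cropping to the bounding rectangle of the contour yields an image region slightly larger than the area enclosed by the outer contour, consistent with Fig. 4E.

import cv2
# outer_contour: OpenCV contour of the object to be recognized; projection: projection graph array.
x, y, w, h = cv2.boundingRect(outer_contour)  # smallest upright rectangle containing the contour
image_region = projection[y:y + h, x:x + w]   # cut the image region out of the projection graph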
In other embodiments of the present disclosure, the gray value of each pixel point in the region outside the region enclosed by the outer contour in the projection graph is set to a fixed value to obtain the image region, and the size of the image region is the same as the size of the projection graph. This approach is simple and easy to implement.
For example, in some embodiments, the gray value of each pixel point in the area outside the area enclosed by the outer contour in the projected graph is set to 127.
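A minimal sketch of this fixed-value variant is given below, again assuming an OpenCV contour; the variable names are hypothetical.

import cv2
import numpy as np
mask = np.zeros(projection.shape[:2], dtype=np.uint8)
cv2.drawContours(mask, [outer_contour], -1, 255, -1)  # fill the region enclosed by the outer contour
image_region = projection.copy()
image_region[mask == 0] = 127  # pixels outside the outer contour are set to the fixed value 127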
Step S432: and analyzing the image area to obtain an identification result corresponding to the object to be identified.
In some embodiments of the present disclosure, step S432 includes: carrying out normalization processing on the gray values of all pixel points in the image area to obtain a normalized image; and analyzing the normalized image to obtain an identification result corresponding to the object to be identified. For example, the gray value of each pixel in the normalized image is in the range of [0, 255].
In some embodiments of the present disclosure, the normalization process may be performed, for example, according to the following formula.
import numpy as np
maxval, minval = np.max(projection), np.min(projection)
P = ((projection - minval) / (maxval - minval) * 255).astype(np.uint8)
Here, np.max(projection) and np.min(projection) compute the maximum and minimum gray values in the projection graph, respectively; maxval denotes the maximum gray value and minval denotes the minimum gray value. projection denotes the gray values of the pixel points in the projection graph, and P denotes the normalized gray values of the pixel points in the projection graph. astype is the type-conversion function of NumPy in the Python language.
Fig. 4F illustrates a schematic diagram of a normalization process provided by at least one embodiment of the present disclosure.
As shown in Fig. 4F, before normalization, the distribution of the gray values of the plurality of pixel points in the image region of the projection graph is shown as a curve 401; at this time, the gray values range over [-256, 256]. After normalization, the distribution of the gray values of the pixel points in the normalized image is shown as a curve 402. In the coordinate system shown in Fig. 4F, the abscissa represents the gray value, and the ordinate represents the number of pixel points.
In the embodiments of the present disclosure, since the projection graph is obtained by performing the convolution calculation on the angle difference information and the edge line graph, the matrix corresponding to the image region of the projection graph contains elements with negative values, as shown by the curve 401. The elements with negative values appear black in the projection graph. However, the gray value of a pixel in an image is an integer in the range [0, 255]. Therefore, in order to better recognize the image region, the gray values in the image region of the projection graph need to be normalized. Through the normalization, the value range of the elements in the projection graph becomes [0, 255]; the normalization is similar to compressing the curve 401 into the curve 402, and the overall distribution shape of the gray values of the plurality of pixel points is unchanged. As shown in Fig. 4F, the gray value of a black pixel point in the image region of the projection graph, i.e., a pixel point with a gray value of 0, is changed to 127 in the normalized image.
Fig. 4G shows a normalized image obtained by normalizing the gray values of all the pixel points in the image region according to at least one embodiment of the present disclosure.
In the embodiment shown in Fig. 4G, the image region is obtained by setting the gray value of each pixel point in the region outside the region enclosed by the outer contour in the projection graph to 127. In this case, the size of the image region is the same as that of the projection graph, and the image region includes both the region enclosed by the outer contour and the region outside it.
The gray value of each pixel point in the region outside the region surrounded by the outer contour in the projection graph and the gray value of the black pixel points in the region surrounded by the outer contour are uniformly set to 127, which facilitates counting the number of annual rings/stripes in the subsequent steps.
In some embodiments of the present disclosure, the object to be recognized includes at least one stripe to be recognized, and analyzing the normalized image to obtain a recognition result of the object to be recognized includes: and analyzing the normalized image to obtain the number of at least one to-be-recognized stripe, wherein the recognition result comprises the number of the at least one to-be-recognized stripe.
For example, the normalized image is recognized according to the features of the stripes to be recognized, so that the number of stripes having the features of the stripes to be recognized in the normalized image is determined. The features of the stripes to be recognized may include, for example, the gray values of the stripes, the shapes of the stripes, and the like.
In some embodiments of the present disclosure, the normalized image includes at least one curvilinear stripe having a first gray value and at least one curvilinear stripe having a second gray value, the at least one curvilinear stripe having the first gray value and the at least one curvilinear stripe having the second gray value being alternately arranged, the second gray value being less than the first gray value.
Fig. 5A is a flowchart illustrating a method for analyzing a normalized image to obtain a number of at least one to-be-identified stripe according to at least one embodiment of the present disclosure.
As shown in fig. 5A, analyzing the normalized image to obtain the number of at least one to-be-recognized stripe may include steps S510 to S530.
Step S510: at least one ray is drawn from the reference point, using the reference point as a vertex, wherein each ray intersects at least one curvilinear stripe having a first gray value and/or at least one curvilinear stripe having a second gray value.
Step S520: and counting the change times of the gray value of the pixel point on the ray in the direction of the ray from the first gray value to the second gray value aiming at each ray.
Step S530: and determining the number of at least one stripe to be identified based on the change times corresponding to each ray.
The embodiment determines the number of the stripes to be recognized based on the number of times of the gray value change of the pixel points on the ray, so that the accuracy of determining the number of the stripes to be recognized can be improved.
Fig. 5B illustrates a schematic diagram of the method illustrated in Fig. 5A provided by at least one embodiment of the present disclosure. The method of Fig. 5A is described below in conjunction with Fig. 5B, taking the cross section of a tree as the object to be recognized as an example.
As shown in fig. 5B, the normalized image includes at least one curved stripe 1 having a first gray scale value and at least one curved stripe 2 having a second gray scale value, and the at least one curved stripe 1 having the first gray scale value and the at least one curved stripe 2 having the second gray scale value are alternately arranged. As shown in fig. 5B, the gray scale value of the curved stripe 1 may be 127. The gray scale value of the curved stripe 2 is 0.
For step S510, as shown in FIG. 5B, a plurality of rays R1-R5 are drawn from the reference point O with the reference point O as a vertex, each of the plurality of rays R1-R5 intersecting at least one curvilinear stripe 1 having a first gray scale value and at least one curvilinear stripe 2 having a second gray scale value.
For step S520, for example, when the gray value of a pixel point on the ray becomes 0, i.e., the pixel point is black, an annual ring is considered to be present, and the number of times the gray value of the pixel points on the ray changes from the first gray value to 0 is the number of annual rings.
As shown in fig. 5B, for each ray, the direction of the ray is a direction from the reference point O to the outer contour, and the number of times of change of the gray value of the pixel point on the ray from the first gray value to the second gray value is counted in the direction of the ray.
As shown in Fig. 5B, for the ray R1, the number of changes in the gray values of the pixel points on the ray R1 is 11 in the direction of the ray R1. For the ray R4, the number of changes in the gray values of the pixel points on the ray R4 is 12 in the direction of the ray R4 from the reference point O.
In some embodiments of the present disclosure, the at least one ray is a plurality of rays, an average value of a plurality of variation times corresponding to the plurality of rays is calculated, and the average value is taken as the number of the at least one to-be-identified streak.
By calculating the average value of the change times of the plurality of rays, the influence of noise can be reduced, and the accuracy of the identification result is further improved.
For example, the plurality of rays form different angles with respect to a reference direction. For example, an angle range of 0 to 90 degrees is selected and 90 rays are drawn whose angles fall within this range, and the numbers of changes corresponding to the 90 rays are averaged; of course, 45 rays or the like may also be selected. For another example, the angle range may be 0 to 270 degrees. It should be noted that, in the embodiments of the present disclosure, the angle of each ray with respect to the reference direction is the angle swept from the reference direction, in the counter-clockwise direction (or the clockwise direction), to the ray.
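A minimal sketch combining steps S510 to S530 is given below; the reference point coordinates, the gray values (127 and 0), the number of rays, the angle range, and the sampling step along each ray are assumptions chosen for illustration, not values fixed by the present disclosure.

import numpy as np

def count_stripes(normalized_image, ref_x, ref_y, first_gray=127, second_gray=0,
                  num_rays=90, max_angle_deg=90.0, step=1.0):
    # Count, along each ray, the changes from the first gray value to the second
    # gray value, then average the counts over all rays (steps S510-S530).
    h, w = normalized_image.shape[:2]
    counts = []
    for angle in np.deg2rad(np.linspace(0.0, max_angle_deg, num_rays, endpoint=False)):
        dx, dy = np.cos(angle), np.sin(angle)
        changes, prev, r = 0, None, 0.0
        while True:
            x, y = int(round(ref_x + r * dx)), int(round(ref_y + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break  # the ray has left the image
            val = int(normalized_image[y, x])
            if prev == first_gray and val == second_gray:
                changes += 1  # one change from the first gray value to the second gray value
            prev = val
            r += step
        counts.append(changes)
    return int(round(float(np.mean(counts))))  # average over the rays as the stripe count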
Fig. 6 illustrates a schematic block diagram of an image processing apparatus 600 according to at least one embodiment of the present disclosure.
For example, as shown in fig. 6, the image processing apparatus 600 includes an image acquisition unit 610, a processing unit 620, an angle difference acquisition unit 630, and a recognition unit 640.
The image acquisition unit 610 is configured to acquire an input image, where the input image includes an object to be recognized and a region to be processed, the region to be processed includes a plurality of first pixel points, and at least a part of the object to be recognized is located in the region to be processed.
The image acquisition unit 610 may perform step S10 described in fig. 1A, for example.
The processing unit 620 is configured to perform a first image processing on the input image to determine feature information of the object to be recognized, the feature information including a reference point of the object to be recognized.
The processing unit 620 may, for example, perform step S20 described in fig. 1A.
The angle difference obtaining unit 630 is configured to obtain angle difference information of the to-be-processed area, where the angle difference information indicates angle differences of a plurality of first pixel points, and the angle difference of each first pixel point is an angle difference between a pixel direction between each first pixel point and the reference point and a gradient direction of each first pixel point.
The angle difference acquisition unit 630 may perform step S30 described in fig. 1A, for example.
The recognition unit 640 is configured to recognize the angle difference information and the input image to obtain a recognition result corresponding to the object to be recognized.
The identifying unit 640 may, for example, perform step S40 described in fig. 1A.
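As an illustrative sketch of the computation performed by the angle difference acquisition unit 630 (the gradient operator, the direction conventions, and the handling of the angular wrap-around below are assumptions chosen for illustration, not choices fixed by the present disclosure):

import numpy as np
# region: 2D grayscale array of the region to be processed; (ref_x, ref_y): reference point.
gy, gx = np.gradient(region.astype(np.float32))            # gradients along rows (y) and columns (x)
gradient_dir = np.arctan2(gy, gx)                           # gradient direction of each first pixel point
ys, xs = np.mgrid[0:region.shape[0], 0:region.shape[1]]
pixel_dir = np.arctan2(ref_y - ys, ref_x - xs)              # direction from each pixel point to the reference point
angle_diff = np.abs(gradient_dir - pixel_dir)
angle_diff = np.minimum(angle_diff, 2.0 * np.pi - angle_diff)  # fold the difference into [0, pi]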
For example, the image acquisition unit 610, the processing unit 620, the angle difference acquisition unit 630, and the identification unit 640 may be hardware, software, firmware, or any feasible combination thereof. For example, they may be dedicated or general-purpose circuits, chips, or devices, or a combination of a processor and a memory. The embodiments of the present disclosure are not limited in regard to the specific implementation forms of the above units.
It should be noted that, in the embodiment of the present disclosure, each unit of the image processing apparatus 600 corresponds to each step of the foregoing image processing method, and for specific functions of the image processing apparatus 600, reference may be made to the related description about the image processing method, which is not described herein again. The components and configuration of the image processing apparatus 600 shown in fig. 6 are exemplary only, and not limiting, and the image processing apparatus 600 may further include other components and configurations as needed.
At least one embodiment of the present disclosure also provides an electronic device comprising a processor and a memory, the memory including one or more computer program modules. One or more computer program modules are stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for implementing the image processing method described above. The electronic equipment can improve the accuracy of identifying the object to be identified.
Fig. 7 is a schematic block diagram of an electronic device provided in some embodiments of the present disclosure. As shown in fig. 7, the electronic device 700 includes a processor 710 and a memory 720. Memory 720 is used for non-transitory storage of computer-readable instructions (e.g., one or more computer program modules). The processor 710 is configured to execute computer-readable instructions, which when executed by the processor 710 may perform one or more of the steps of the image processing method described above. The memory 720 and the processor 710 may be interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the processor 710 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or other form of processing unit having data processing capabilities and/or program execution capabilities. For example, the Central Processing Unit (CPU) may be an X86 or ARM architecture or the like. The processor 710 may be a general-purpose processor or a special-purpose processor that may control other components in the electronic device 700 to perform desired functions.
For example, memory 720 may include any combination of one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM), cache memory (or the like). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer program modules may be stored on the computer-readable storage medium and executed by the processor 710 to implement various functions of the electronic device 700. Various applications and various data, as well as various data used and/or generated by the applications, and the like, may also be stored in the computer-readable storage medium.
It should be noted that, in the embodiment of the present disclosure, reference may be made to the above description on the image processing method for specific functions and technical effects of the electronic device 700, and details are not described here.
Fig. 8 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. The electronic device 900 is, for example, suitable for implementing the image processing method provided by the embodiments of the present disclosure. The electronic device 900 may be a terminal device or the like. It should be noted that the electronic device 900 shown in fig. 8 is only one example and does not bring any limitations to the function and scope of the embodiments of the present disclosure.
As shown in fig. 8, electronic device 900 may include a processing means (e.g., central processing unit, graphics processor, etc.) 910 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 920 or a program loaded from a storage means 980 into a Random Access Memory (RAM) 930. In the RAM 930, various programs and data necessary for the operation of the electronic apparatus 900 are also stored. The processing device 910, the ROM 920, and the RAM 930 are connected to each other by a bus 940. An input/output (I/O) interface 950 is also connected to bus 940.
Generally, the following devices may be connected to the I/O interface 950: input devices 960 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 970 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, or the like; storage 980 including, for example, magnetic tape, hard disk, etc.; and a communication device 990. The communication means 990 may allow the electronic device 900 to communicate wirelessly or by wire with other electronic devices to exchange data. While fig. 8 illustrates an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided, and that the electronic device 900 may alternatively be implemented or provided with more or fewer means.
For example, according to an embodiment of the present disclosure, the above-described image processing method may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program comprising program code for performing the image processing method described above. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 990, or installed from the storage device 980, or installed from the ROM 920. When executed by the processing device 910, the computer program may implement the functions defined in the image processing method provided by the embodiments of the present disclosure.
At least one embodiment of the present disclosure also provides a computer-readable storage medium for non-transitory storage of computer-readable instructions that, when executed by a computer, may implement the image processing method described above. With the computer-readable storage medium, the accuracy of recognizing an object to be recognized can be improved.
Fig. 9 is a schematic diagram of a storage medium according to some embodiments of the present disclosure. As shown in fig. 9, the storage medium 1000 is used to non-transitory store computer readable instructions 1010. For example, the storage medium 1000 may be a non-transitory storage medium. For example, the computer readable instructions 1010, when executed by a computer, may perform one or more steps in accordance with the image processing method described above.
For example, the storage medium 1000 may be applied to the electronic device 700 described above. For example, the storage medium 1000 may be the memory 720 in the electronic device 700 shown in fig. 7. For example, the related description about the storage medium 1000 may refer to the corresponding description of the memory 720 in the electronic device 700 shown in fig. 7, and is not repeated here.
The following points need to be explained:
(1) the drawings of the embodiments of the disclosure only relate to the structures related to the embodiments of the disclosure, and other structures can refer to common designs.
(2) Without conflict, embodiments of the present disclosure and features of the embodiments may be combined with each other to arrive at new embodiments.
The above description is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and the scope of the present disclosure should be subject to the scope of the claims.

Claims (19)

1. An image processing method comprising:
acquiring an input image, wherein the input image comprises an object to be identified and a region to be processed, the region to be processed comprises a plurality of first pixel points, and at least part of the object to be identified is located in the region to be processed;
performing first image processing on the input image to determine feature information of the object to be recognized, wherein the feature information includes a reference point of the object to be recognized;
acquiring angle difference information of the region to be processed, wherein the angle difference information indicates angle differences of the plurality of first pixel points, and the angle difference of each first pixel point is an angle difference value between a pixel direction between each first pixel point and the reference point and a gradient direction of each first pixel point; and
and identifying the angle difference information and the input image to obtain an identification result corresponding to the object to be identified.
2. The method of claim 1, wherein the angular difference information comprises an angular difference image comprising a plurality of second pixel points in one-to-one correspondence with the plurality of first pixel points,
acquiring the angular difference information of the area to be processed, including:
obtaining angle differences of the first pixel points;
determining the gray values of the plurality of second pixel points according to the angle differences of the plurality of first pixel points, wherein the gray value of each second pixel point is determined according to the angle difference of the first pixel point corresponding to each second pixel point; and
and generating the angle difference image according to the gray values of the plurality of second pixel points.
3. The method of claim 2, wherein the gray value of each second pixel point is inversely related to the angular difference of the first pixel point corresponding to said each second pixel point.
4. The method of claim 2, wherein obtaining the angle differences of the plurality of first pixel points comprises: determining the gradient direction of each first pixel point;
taking the direction of a vector formed by each first pixel point and the reference point as the pixel direction of each first pixel point, wherein the starting point of the vector is each first pixel point; and
and calculating an angle difference value between the gradient direction of each first pixel point and the pixel direction of each first pixel point, and taking the angle difference value as the angle difference of each first pixel point.
5. The method of claim 4, wherein determining the gradient direction of each of the first pixel points comprises:
respectively acquiring the gradient value of each first pixel point in a first direction and the gradient value of each first pixel point in a second direction; and
and determining the gradient direction of each first pixel point according to the gradient value of each first pixel point in the first direction and the gradient value of each first pixel point in the second direction.
6. The method according to claim 1, wherein identifying the angular difference information and the input image to obtain an identification result corresponding to the object to be identified comprises:
performing edge detection on the input image to obtain an edge line graph corresponding to the input image;
performing convolution calculation on the angle difference information and the edge line graph to obtain a projection graph; and
and identifying the projection graph to obtain an identification result corresponding to the object to be identified.
7. The method according to claim 6, wherein the feature information further comprises an outer contour of the object to be recognized,
identifying the projection graph to obtain an identification result corresponding to the object to be identified, including:
according to the outer contour, second image processing is carried out on the projection drawing to obtain an image area, wherein the image area is an area where the object to be identified is located, and the area to be processed comprises the image area; and
and analyzing and processing the image area to obtain an identification result corresponding to the object to be identified.
8. The method of claim 7, wherein performing a second image processing on the projection view to obtain the image area according to the outer contour comprises:
and carrying out cutting processing on the projection drawing according to the outer contour so as to cut the image area from the projection drawing.
9. The method of claim 7, wherein performing a second image processing on the projection view to obtain the image area according to the outer contour comprises:
setting the gray value of each pixel point in other areas outside the outer contour in the projection graph as a fixed value to obtain the image area,
wherein the size of the image area is the same as the size of the projected pattern.
10. The method according to claim 7, wherein analyzing the image area to obtain a recognition result corresponding to the object to be recognized comprises:
normalizing the gray values of all pixel points in the image area to obtain a normalized image, wherein the gray value of each pixel point in the normalized image is in the range of [0, 255]; and
and analyzing the normalized image to obtain an identification result corresponding to the object to be identified.
11. The method of claim 10, wherein the object to be recognized comprises at least one stripe to be recognized, and analyzing the normalized image to obtain a recognition result of the object to be recognized comprises:
and analyzing the normalized image to obtain the number of the at least one stripe to be recognized, wherein the recognition result comprises the number of the at least one stripe to be recognized.
12. The method of claim 11, wherein the normalized image includes at least one curvilinear stripe having a first gray value and at least one curvilinear stripe having a second gray value, the at least one curvilinear stripe having the first gray value and the at least one curvilinear stripe having the second gray value being alternately arranged, the second gray value being less than the first gray value,
analyzing and processing the normalized image to obtain the number of the at least one stripe to be recognized, wherein the number of the at least one stripe to be recognized comprises the following steps:
at least one ray is drawn from the reference point by taking the reference point as a vertex, wherein each ray intersects with at least one curvilinear stripe with the first gray scale value and/or at least one curvilinear stripe with the second gray scale value;
counting the change times of the gray value of a pixel point on the ray in the direction of the ray from the first gray value to the second gray value aiming at each ray; and
and determining the number of the at least one stripe to be identified based on the change times corresponding to each ray.
13. The method of claim 12, wherein the at least one ray is a plurality of rays,
determining the number of the at least one to-be-identified stripe based on the number of changes corresponding to each ray, including:
and calculating the average value of a plurality of change times corresponding to the plurality of rays, and taking the average value as the number of the at least one to-be-identified stripe.
14. The method of claim 1, wherein performing a first image processing on the input image to determine feature information of the object to be recognized comprises:
blurring the input image to obtain a blurred image;
carrying out edge detection on the blurred image to obtain a detection contour of the object to be identified; and
and fitting the detection contour to obtain an outer contour of the object to be recognized and a center of the outer contour, wherein the center of the outer contour is the reference point.
15. The method of claim 1, wherein performing a first image processing on the input image to determine feature information of the object to be recognized comprises:
inputting the input image into a recognition model to recognize feature information of the object to be recognized through the recognition model.
16. The method of any one of claims 1-15, wherein the object to be identified comprises a cross section of a tree, the cross section of the tree comprises annual rings of the tree, and the recognition result comprises the annual rings of the tree or a number of the annual rings of the tree.
17. An image processing apparatus comprising:
the image acquisition unit is configured to acquire an input image, wherein the input image comprises an object to be identified and a region to be processed, the region to be processed comprises a plurality of first pixel points, and at least part of the object to be identified is located in the region to be processed;
a processing unit configured to perform first image processing on the input image to determine feature information of the object to be recognized, wherein the feature information includes a reference point of the object to be recognized;
an angle difference obtaining unit configured to obtain angle difference information of the to-be-processed region, where the angle difference information indicates angle differences of the plurality of first pixel points, and an angle difference of each first pixel point is an angle difference between a pixel direction between each first pixel point and the reference point and a gradient direction of each first pixel point; and
and the identification unit is configured to identify the angle difference information and the input image so as to obtain an identification result corresponding to the object to be identified.
18. An electronic device, comprising:
a processor;
a memory comprising one or more computer program instructions;
wherein the one or more computer program instructions are stored in the memory and when executed by the processor implement the image processing method of any of claims 1-16.
19. A computer readable storage medium, non-transitory storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the image processing method of any one of claims 1-16.
CN202111235044.9A 2021-10-22 2021-10-22 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN113962306A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111235044.9A CN113962306A (en) 2021-10-22 2021-10-22 Image processing method, image processing device, electronic equipment and computer readable storage medium
PCT/CN2022/112384 WO2023065792A1 (en) 2021-10-22 2022-08-15 Image processing method and apparatus, electronic device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111235044.9A CN113962306A (en) 2021-10-22 2021-10-22 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113962306A (en) 2022-01-21

Family

ID=79466392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111235044.9A Pending CN113962306A (en) 2021-10-22 2021-10-22 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113962306A (en)
WO (1) WO2023065792A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023065792A1 (en) * 2021-10-22 2023-04-27 杭州睿胜软件有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN116188392A (en) * 2022-12-30 2023-05-30 阿里巴巴(中国)有限公司 Image processing method, computer-readable storage medium, and computer terminal

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503481B (en) * 2023-06-25 2023-08-29 天津中德应用技术大学 Automatic parking position and orientation detecting system based on image visual guidance
CN116664559B (en) * 2023-07-28 2023-11-03 深圳市金胜电子科技有限公司 Machine vision-based memory bank damage rapid detection method
CN116834023B (en) * 2023-08-28 2023-11-14 山东嘉达装配式建筑科技有限责任公司 Nailing robot control system
CN117455916B (en) * 2023-12-25 2024-03-15 山东太阳耐磨件有限公司 Visual detection method for surface defects of steel plate
CN117557785B (en) * 2024-01-11 2024-04-02 宁波海上鲜信息技术股份有限公司 Image processing-based long-distance fishing boat plate recognition method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014186710A (en) * 2013-02-25 2014-10-02 Medekku:Kk Shiitake mushroom grading apparatus using image processing technology
CN105224941B (en) * 2014-06-18 2018-11-20 台达电子工业股份有限公司 Process identification and localization method
WO2016208001A1 (en) * 2015-06-24 2016-12-29 オリンパス株式会社 Image processing device, endoscope device, program, and image processing method
CN111540021B (en) * 2020-04-29 2023-06-13 网易(杭州)网络有限公司 Hair data processing method and device and electronic equipment
CN113221696A (en) * 2021-04-29 2021-08-06 四川大学华西医院 Image recognition method, system, equipment and storage medium
CN113962306A (en) * 2021-10-22 2022-01-21 杭州睿胜软件有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
WO2023065792A1 (en) 2023-04-27

Similar Documents

Publication Publication Date Title
CN113962306A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
RU2680765C1 (en) Automated determination and cutting of non-singular contour of a picture on an image
KR100682889B1 (en) Method and Apparatus for image-based photorealistic 3D face modeling
JP4528309B2 (en) Object detection method, object detection apparatus, and object detection program
CN114240845B (en) Light cutting method surface roughness measurement method applied to cutting workpiece
CN109190617B (en) Image rectangle detection method and device and storage medium
CN107862235B (en) Two-dimensional code position positioning method and device and terminal equipment
CN115018846B (en) AI intelligent camera-based multi-target crack defect detection method and device
CN110910445B (en) Object size detection method, device, detection equipment and storage medium
CN111259908A (en) Machine vision-based steel coil number identification method, system, equipment and storage medium
CN116958125B (en) Electronic contest host power supply element defect visual detection method based on image processing
CN112396050B (en) Image processing method, device and storage medium
CN113302619A (en) System and method for target area evaluation and feature point evaluation
CN112017231A (en) Human body weight identification method and device based on monocular camera and storage medium
CN114037992A (en) Instrument reading identification method and device, electronic equipment and storage medium
CN111222507A (en) Automatic identification method of digital meter reading and computer readable storage medium
CN113688846A (en) Object size recognition method, readable storage medium, and object size recognition system
CN117557565B (en) Detection method and device for lithium battery pole piece
CN116342519A (en) Image processing method based on machine learning
CN111898610A (en) Card unfilled corner detection method and device, computer equipment and storage medium
CN117115358B (en) Automatic digital person modeling method and device
CN111553927B (en) Checkerboard corner detection method, detection system, computer device and storage medium
CN112329845A (en) Method and device for replacing paper money, terminal equipment and computer readable storage medium
CN116894849A (en) Image segmentation method and device
CN113343987B (en) Text detection processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination