CN110163212B - Text cutting method in inscription image - Google Patents

Text cutting method in inscription image

Info

Publication number
CN110163212B
CN110163212B (application number CN201910276531.6A)
Authority
CN
China
Prior art keywords
image
text
generate
black
white
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910276531.6A
Other languages
Chinese (zh)
Other versions
CN110163212A (en)
Inventor
李幼萌
孙进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201910276531.6A priority Critical patent/CN110163212B/en
Publication of CN110163212A publication Critical patent/CN110163212A/en
Application granted granted Critical
Publication of CN110163212B publication Critical patent/CN110163212B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/80 Geometric correction
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/14 Image acquisition
    • G06V 30/148 Segmentation of character regions
    • G06V 30/153 Segmentation of character regions using recognition of characters or words
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30176 Document

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for cutting inscription characters based on image morphology, which comprises the following steps: step 1, correcting a selected inscription character image to generate a preprocessed image; step 2, performing text cutting on the preprocessed image to generate text images; step 3, performing noise reduction on the text images. Because the characters cut by the method are denoised, the generated characters are clearer.

Description

Text cutting method in inscription image
Technical Field
The method relates to an image character cutting method, in particular to an inscription character cutting method.
Background
Mathematical morphology is an image-analysis discipline built on lattice theory and topology, and it is the theoretical foundation of morphological image processing. Its basic operations include: erosion and dilation, opening and closing, skeleton extraction, ultimate erosion, the hit-or-miss transform, morphological gradients, the top-hat transform, granulometry, the watershed transform, and so on.
Dilation is defined by asking, for each point, "does the probe (structuring element) placed at this point touch the object?". The result of dilating an image A by a structuring element B can be written as

A ⊕ B = { x | B_x ∩ A ≠ ∅ },

where B_x = { x + b | b ∈ B } is the set of points obtained by translating the structuring element by x, and b runs over the coordinates of the elements of B. Equivalently, it can also be written as

A ⊕ B = ∪_{b ∈ B} A_{-b},

where A_{-b} denotes the point set obtained by translating the binary image A by -b.
Erosion is defined by asking, for each point, "is the probe (structuring element) placed at this point entirely contained in the object?". The result of eroding an image A by the structuring element B can be written as

A ⊖ B = { x | B_x ⊆ A }.
The opening and closing operations are combinations of erosion and dilation with the same structuring element:
the opening is an erosion followed by a dilation,

A ∘ B = (A ⊖ B) ⊕ B;

the closing is a dilation followed by an erosion,

A • B = (A ⊕ B) ⊖ B.
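As a concrete illustration of these four operators, the following minimal Python/OpenCV sketch (OpenCV is the library referred to later in the description) applies erosion, dilation, opening and closing to a binary image; the file name and the 3x3 kernel size are illustrative assumptions, not values prescribed by the method.

    import cv2

    # Read an inscription image and binarize it (white foreground on black background).
    gray = cv2.imread("stele.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # A 3x3 rectangular structuring element B (size chosen only for illustration).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

    eroded  = cv2.erode(binary, kernel)                          # A ⊖ B
    dilated = cv2.dilate(binary, kernel)                         # A ⊕ B
    opened  = cv2.morphologyEx(binary, cv2.MORPH_OPEN,  kernel)  # (A ⊖ B) ⊕ B
    closed  = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # (A ⊕ B) ⊖ B

Opening removes small bright specks while keeping the character strokes, and closing fills small holes inside the strokes, which is why the later steps of the method pair the two operations with different kernel radii.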
An affine transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates that preserves the "straightness" of a two-dimensional figure (straight lines remain straight after the transformation) and its "parallelism" (the relative positional relationships within the figure are preserved: parallel lines remain parallel, and the order of points along a straight line is unchanged). Any affine transformation can be expressed as a matrix multiplication (the linear part) plus a vector (the translation):

(x′, y′)ᵀ = M · (x, y, 1)ᵀ, with M = [m11 m12 m13; m21 m22 m23],

i.e.

x′ = m11·x + m12·y + m13
y′ = m21·x + m22·y + m23

These formulas map the point (x, y) to (x′, y′). In OpenCV this is done by specifying a 2×3 matrix: m11, m12, m21, m22 are the linear part and m13, m23 the translation; the last row of the full 3×3 matrix is fixed at 0, 0, 1, so the 3×3 matrix is reduced to 2×3.
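A minimal sketch of how such a 2×3 matrix is obtained and applied in OpenCV, assuming three arbitrary example point correspondences (the coordinates below are purely illustrative):

    import cv2
    import numpy as np

    img = cv2.imread("stele.png")
    h, w = img.shape[:2]

    # Three source points and the three points they should map to (example values only).
    pts_src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
    pts_dst = np.float32([[10, 20], [w - 30, 5], [5, h - 40]])

    M = cv2.getAffineTransform(pts_src, pts_dst)  # 2x3 matrix [m11 m12 m13; m21 m22 m23]
    out = cv2.warpAffine(img, M, (w, h))          # x' = m11*x + m12*y + m13, y' = m21*x + m22*y + m23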
A perspective transformation projects a picture onto a new viewing plane and is also called a projective mapping; it maps the two dimensions (x, y) through three dimensions (X, Y, Z) into another two-dimensional space (x′, y′).

Compared with an affine transformation, it offers more flexibility: it maps one quadrilateral region to another quadrilateral region, which need not be a parallelogram. It is still a matrix multiplication, but with a 3×3 matrix whose first two rows play the same role as in the affine matrix (m11, m12, m13; m21, m22, m23), i.e. a linear transformation plus a translation, while the third row implements the perspective effect:

(X, Y, Z)ᵀ = M · (x, y, z)ᵀ, with M = [m11 m12 m13; m21 m22 m23; m31 m32 m33],

that is,

X = m11·x + m12·y + m13·z
Y = m21·x + m22·y + m23·z
Z = m31·x + m32·y + m33·z

and then

x′ = X / Z
y′ = Y / Z

Here the point before the transformation is taken to have a z value of 1: its three-dimensional coordinates are (x, y, 1), and its projection on the two-dimensional plane is (x, y). The matrix maps it to the three-dimensional point (X, Y, Z), which is then mapped back to the two-dimensional point (x′, y′) by dividing by the Z coordinate.

From these formulas it can be seen that the affine transformation is a special case of the perspective transformation: the point is lifted from two dimensions into three, and after the transformation it is mapped back onto the original two-dimensional plane.
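The corresponding OpenCV calls take four point pairs to determine the 3×3 matrix; the quadrilateral corners below are arbitrary example values used only to show the call sequence:

    import cv2
    import numpy as np

    img = cv2.imread("stele.png")
    h, w = img.shape[:2]

    # Four corners of a skewed quadrilateral and the rectangle they should become (example values).
    quad = np.float32([[40, 30], [w - 20, 55], [w - 60, h - 25], [15, h - 70]])
    rect = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])

    M = cv2.getPerspectiveTransform(quad, rect)  # 3x3 matrix; the third row produces the division by Z
    out = cv2.warpPerspective(img, M, (w, h))    # x' = X/Z, y' = Y/Z after the matrix multiplication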
Image segmentation is an important part of digital image processing and has long received close attention from researchers. Hundreds of different segmentation algorithms now exist, but each was developed for a specific problem, and there is no universally applicable algorithm that solves them all.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a text cutting method for inscription images that is generally applicable; at the same time, the method performs noise reduction on the cut characters, so that the generated characters are clearer.
In order to solve the problems in the prior art, the technical scheme adopted is as follows:
A text cutting method in an inscription image comprises the following steps:
step 1, correcting a selected inscription character image to generate a preprocessed image;
step 2, performing text cutting on the preprocessed image to generate text images;
step 3, performing noise reduction on the text images;
the step of performing text cutting on the preprocessed image to generate a text image in the step 2 comprises the following steps:
2.1, carrying out enhancement and graying treatment on the preprocessed image to generate a gray image;
2.2, converting the gray level image into a black-and-white image after binarization treatment;
2.3, performing first column projection on the black-and-white image, estimating the number of columns of the image characters, and changing the width of the image to be ' column fixed width ' & ltestimated column number ' >
2.3, performing first column projection processing on the black-and-white image to obtain the width of the image column;
2.4, performing secondary column projection processing on the adjusted black-and-white image to obtain column projection cutting coordinates;
and 2.5, cutting the text image according to the column projection cutting coordinates to generate the text image.
The step of correcting the selected inscription character image to generate the preprocessed image in step 1 comprises the following steps:
1.1, performing enhancement and graying on the selected inscription character image to generate a gray image;
1.2, converting the gray image into a black-and-white image by binarization;
1.3, applying an affine correction algorithm to the black-and-white image to generate the preprocessed image.
The step of performing noise reduction on the text image in step 3 comprises the following steps:
3.1, performing enhancement and graying on the preprocessed image to generate a gray image;
3.2, converting the gray image into a black-and-white image by binarization;
3.3, removing small noise points and text-frame noise from the black-and-white image to generate a first denoised image;
3.4, removing frame and peripheral noise points from the first denoised image to generate a second denoised image;
the first denoised image is obtained by the following steps:
3.31, converting the image to a w×h image with a black background and white characters, so that the unified size simplifies erosion and dilation;
3.32, applying an opening operation to remove small noise points;
3.33, applying mean filtering to the opened image;
the second denoised image is obtained by the following steps:
3.41, applying a mask operation to the frame noise points;
3.42, clearing the peripheral noise points by transverse and longitudinal projection.
Compared with the prior art, the invention has the following beneficial effects:
1. The method improves the cutting accuracy for inscription images of different forms.
2. Compared with the prior art, the method correctly cuts image characters of different sizes, numbers and forms.
Drawings
FIG. 1 is a flow chart of the correction step in the text cutting method in an inscription image according to the present invention.
FIG. 2 is a flow chart of the steps of the text cutting method in an inscription image according to the present invention.
FIG. 3 is a flow chart of the noise reduction step in the text cutting method in an inscription image according to the present invention.
FIG. 4 is a flow chart of the discriminating algorithm in the text cutting method in an inscription image according to the present invention.
FIG. 5 shows affine images in the text cutting method in an inscription image according to the present invention.
FIG. 6 is a flow chart of the affine algorithm in the text cutting method in an inscription image according to the present invention.
FIG. 7 illustrates obtaining vertices by projection in the text cutting method in an inscription image according to the present invention.
FIG. 8 shows the improved binarization algorithm in the text cutting method in an inscription image according to the present invention.
FIG. 9 is a schematic diagram of the selection of cutting points in the text cutting method in an inscription image according to the present invention.
FIG. 10 shows the verification algorithm in the text cutting method in an inscription image according to the present invention.
Detailed Description
The technical scheme of the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
1. Image correction
The original image is first enhanced (brightness, chroma, contrast and sharpness) to strengthen the outline of the character bodies; the image is then grayed and binarized into a binary image. A discriminating algorithm judges whether the binary image is white text on a black background or black text on a white background, and the binary image is then converted as a whole into a black-text-on-white-background image so that the overall outline can be recognized. The algorithm is described in fig. 4.
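A minimal sketch of the polarity normalisation just described, assuming (since the patent only gives its discriminating algorithm as fig. 4) that the majority pixel value of the binarized image is taken to be the background colour:

    import cv2
    import numpy as np

    def to_white_bg_black_text(gray):
        # Binarize the already enhanced and grayed image (Otsu thresholding is used
        # here as a stand-in for the patent's own binarization).
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Assumption: the majority pixel value is the background.  If black pixels
        # dominate, the image is white-text-on-black, so invert it to obtain the
        # black-text-on-white image used by the following steps.
        if np.count_nonzero(binary == 255) < binary.size / 2:
            binary = cv2.bitwise_not(binary)
        return binary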
The black-and-white character image of size (org_w, org_h) is resized to suitable dimensions (w, h), and erosion and dilation operations are applied so that the characters essentially merge into a solid block and the main body of the whole character region of the image is displayed. Main-body region identification is then performed on the processed image, and the four vertices of the main-body region are taken as reference coordinate points, denoted (x1, y1), (x2, y2), (x3, y3), (x4, y4). The four reference coordinate points are expanded, and the expanded area is a rectangular area, namely the position area of the corrected image. The expansion method is:

x1′ = min(x1, x2, x3, x4)    y1′ = min(y1, y2, y3, y4)
x2′ = max(x1, x2, x3, x4)    y2′ = min(y1, y2, y3, y4)
x3′ = min(x1, x2, x3, x4)    y3′ = max(y1, y2, y3, y4)
x4′ = max(x1, x2, x3, x4)    y4′ = max(y1, y2, y3, y4)

This yields the four new expanded coordinates. Both (x1, y1), (x2, y2), (x3, y3), (x4, y4) and (x1′, y1′), (x2′, y2′), (x3′, y3′), (x4′, y4′) are then converted to the corresponding coordinates of the original image, (org_x1, org_y1), (org_x2, org_y2), (org_x3, org_y3), (org_x4, org_y4) and (org_x1′, org_y1′), (org_x2′, org_y2′), (org_x3′, org_y3′), (org_x4′, org_y4′). Taking three points from each set on the original image gives the two point sets pts = [(org_x1, org_y1), (org_x2, org_y2), (org_x3, org_y3)] and pts′ = [(org_x1′, org_y1′), (org_x2′, org_y2′), (org_x3′, org_y3′)]. An affine transformation matrix M is calculated from pts and pts′, and the affine transformation then maps the region (org_x1, org_y1), (org_x2, org_y2), (org_x3, org_y3), (org_x4, org_y4) of the original image onto the region (org_x1′, org_y1′), (org_x2′, org_y2′), (org_x3′, org_y3′), (org_x4′, org_y4′) of the original image, which corrects the skewed image. Here,
org_x1 = x1 · ratio_w, where ratio_w = org_w / w, org_w is the original image width and w is the scaled width;
org_y1 = y1 · ratio_h, where ratio_h = org_h / h, org_h is the original image height and h is the scaled height; the other coordinates are converted in the same way.
The affine matrix has the form

M = [m11 m12 m13; m21 m22 m23],

and

(org_x′, org_y′)ᵀ = M · (org_x, org_y, 1)ᵀ,

i.e. (org_x′, org_y′) is the image of the original point (org_x, org_y) under the affine transformation M.
The corrected image may still contain a frame of a certain width; the original image is therefore cut along the vertex coordinates of the four new expanded regions, (org_x1′, org_y1′), (org_x2′, org_y2′), (org_x3′, org_y3′), (org_x4′, org_y4′), finally yielding a corrected image without the frame, as shown in figs. 8 and 9.
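The vertex expansion and the affine correction described above can be sketched as follows; the helper names, the working-image scaling and the choice of the first three vertices for cv2.getAffineTransform are illustrative assumptions rather than the patent's exact procedure:

    import cv2
    import numpy as np

    def expand_to_rect(pts):
        # Expand the four body vertices to the axis-aligned rectangle given by the
        # min/max formulas above: (x1',y1')=(min,min), (x2',y2')=(max,min),
        # (x3',y3')=(min,max), (x4',y4')=(max,max).
        xs, ys = pts[:, 0], pts[:, 1]
        return np.float32([[xs.min(), ys.min()], [xs.max(), ys.min()],
                           [xs.min(), ys.max()], [xs.max(), ys.max()]])

    def correct_by_affine(original, body_pts, w, h):
        # body_pts: the four vertices of the text body found on the (w, h) working image.
        org_h, org_w = original.shape[:2]
        ratio_w, ratio_h = org_w / w, org_h / h

        # Map the vertices and their rectangular expansion back to original coordinates.
        pts = np.float32(body_pts) * np.float32([ratio_w, ratio_h])
        pts_expanded = expand_to_rect(pts)

        # Three point pairs are enough to define the affine matrix M.
        M = cv2.getAffineTransform(pts[:3], pts_expanded[:3])
        return cv2.warpAffine(original, M, (org_w, org_h))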
The above algorithm only handles inscription images whose shooting angle is not tilted under normal conditions, i.e. images that are not tilted in three dimensions and only undergo scaling, translation, rotation, cutting and the like in the two-dimensional plane. For other inscription text images taken with a digital camera or the like, it is difficult to guarantee that the shooting direction is perpendicular to the inscription; the main region of such an image is neither a rectangle nor a parallelogram, and the image obtained by the above method still has a tilted angle. For such images, the binarized image is first projected to obtain the four vertex coordinates (x1, y1), (x2, y2), (x3, y3), (x4, y4), as shown in fig. 10. The vertex-expansion method above then gives the new expanded region (x1′, y1′), (x2′, y2′), (x3′, y3′), (x4′, y4′). Both regions are restored to the original image, and a perspective transformation is applied to the original vertex coordinates within the new expanded region of the original image, the perspective target region being the new expanded region; this corrects the angle of the image. The angle-corrected image is then processed with the correction method above to obtain the final corrected image.
2. Word cutting
The correction is performed on the original image, so the corrected image is still the corrected original image rather than a corrected binarized image. When the image is cut into columns, it therefore still needs to be enhanced, grayed and binarized; graying is carried out directly. In the binarization step, because of problems such as the large number of noise points, the binary image obtained by threshold binarization or adaptive binarization contains a great deal of noise. The method adopted here is an improved binarization algorithm based on the average pixel value, whose description is given in fig. 5.
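A simplified reading of that mean-based binarization is sketched below, under the assumption that the threshold is simply the average grey level of the whole image (the patent's actual improved algorithm is only given as a figure):

    import cv2

    def mean_binarize(gray):
        # Threshold at the mean grey value; pixels above the mean become white (255).
        thresh = float(gray.mean())
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        return binary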
by performing the binarization algorithm operation, the obtained binarized image contains fewer noise points. Then converting the binarized image into a black matrix and white character image, carrying out corrosion expansion treatment, wherein the expansion radius is larger than the corrosion radius, carrying out column projection on the expanded image, recording a first pixel point (0) of a first row, a last pixel point (w-1) and a pixel point with a pixel change in the first row (if the previous pixel point is black and the current pixel point is white) in the projection image, wherein the pixel points represent the maximum projection range of each column of characters, directly cutting according to the pixel points, so that partial character strokes are cut, and simultaneously cutting a black background, causing the cut characters to be incomplete and doped with the black background, so that the pixel points are subjected to traversal screening to obtain the middle coordinates of the two columns of character projections, and the screening algorithm is as follows:
if the first bit is white, starting from the second bit, calculating the tie value of the current pixel point and the next pixel point every two times, and storing the tie value into a new array; if the first is black, from the third, the average of the current pixel and the next pixel is saved to a new array, as shown in fig. 9. The new array stores the intermediate coordinates of the two-column text projection, and then calculates the column number col_num of the entire signature image to be equal to the array length L minus 1, expressed as col_num=l-1. After the number of columns of the signature image is obtained, the width of each column can be set to be a fixed value every_col, then the total width w=every_col_num of the signature image to be scaled is calculated, the binarized image is deformed according to the width, and finally secondary column projection is performed.
In the second column projection (which is the real column projection), the erosion kernel is set to (col_num, col_num) and the dilation kernel to (2·col_num, 2·col_num); the dilation kernel is larger than the erosion kernel because the gaps in the character strokes need to be filled so that the projection regions contain fewer gaps. The second column projection is then performed, the corresponding coordinates are recorded with the same method and converted into the corresponding coordinates of the original image, and the original image is then cut, splitting the inscription image into several columns.
Row cutting is similar to column cutting, and the method for estimating the number of rows is the same. During column cutting, because every column of characters is cut, characters with, for example, a top-bottom structure easily make the row estimate inaccurate, which directly affects the subsequent second cutting; a verification algorithm therefore has to be added to judge characters with such structures. The algorithm is described in fig. 10. The size of each picture and the numbers of rows and columns of characters differ, so when dilation and erosion are applied to the images, the kernel sizes cannot be tuned separately for every image. The width of each row is therefore fixed, and the sizes of the dilation and erosion kernels are adjusted according to this width; then, no matter how the image size and the number of characters change, the kernel size follows, over-dilation and over-erosion are avoided, and the goal of self-adaptation is achieved.
Through the verification algorithm, the influence of characters with a top-bottom structure on the row estimate can be reduced. Verification is still needed during the second row cutting to ensure that characters with a top-bottom structure are cut correctly. The final cutting operation has to be performed on the column-cut images.
3. Image noise reduction
Both the column cutting and the row cutting are performed on the original image, so the images that need noise reduction are the individual text images cut from the original image. The image is first enhanced, then filtered and grayed, which smooths its noise; binarization according to the average pixel value is then applied, and most of the noise in the resulting image is removed, because after enhancement the noise points with clearer outlines in the gray image have been smoothed and can therefore be cleaned up by the average-pixel binarization. The binarized image obtained in this way still contains small noise points and character-frame noise points. The small noise points are removed first, as follows:
the image is converted to a w×h black-background, white-character image, so that the unified size simplifies the erosion and dilation operations; an opening operation removes the small noise points;
the opened image is then mean-filtered.
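A minimal sketch of these three sub-steps; the 64×64 working size and the 3×3 kernels are illustrative assumptions:

    import cv2

    def remove_small_noise(char_img, w=64, h=64):
        # Normalise the crop to a fixed black-background / white-text size.
        resized = cv2.resize(char_img, (w, h), interpolation=cv2.INTER_NEAREST)
        # Opening (erosion followed by dilation) removes small speckles.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        opened = cv2.morphologyEx(resized, cv2.MORPH_OPEN, kernel)
        # Mean filtering smooths what is left.
        return cv2.blur(opened, (3, 3))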
the image thus obtained has mainly frame noise and peripheral noise. The process of clearing the noise points of the frame is divided into two steps, wherein the first step adopts mask operation, and the second step adopts transverse and longitudinal projection clearing.
Clearing the frame noise points: first an erosion is applied to clean the white pixels in the border area; then two dilations are performed, the first to restore the main body of the character strokes and the second to fill the white stroke area over a larger range so that the complete contour can be identified. An ROI identification algorithm identifies the whole region, a pure black background is created, the ROI region is mapped back onto the image before erosion and dilation, and that partial region is transplanted onto the pure black background, which removes the frame noise points.
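One possible sketch of this masking step, assuming the "ROI identification algorithm" is realised as the bounding box of the largest contour of the dilated blob (kernel sizes and iteration counts are illustrative):

    import cv2
    import numpy as np

    def remove_frame_noise(bw):
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        eroded = cv2.erode(bw, kernel)                      # clear white pixels near the border
        strokes = cv2.dilate(eroded, kernel)                # first dilation: restore the stroke body
        blob = cv2.dilate(strokes, kernel, iterations=5)    # second dilation: merge strokes into one region

        contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return bw
        x, y, cw, ch = cv2.boundingRect(max(contours, key=cv2.contourArea))

        clean = np.zeros_like(bw)                           # pure black background
        clean[y:y + ch, x:x + cw] = bw[y:y + ch, x:x + cw]  # transplant the ROI of the pre-morphology image
        return clean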
Clearing the peripheral noise points: after the frame noise points have been cleared, peripheral noise points mainly remain at the edges of the characters. They are also cleaned by projection. To adjust the convolution kernel to a suitable size, image scaling is used: the image is first scaled to (w, h) and an erosion is applied, so that the peripheral noise points around the characters are cleared completely. Since the erosion also erodes the character strokes, projecting directly would give a projection area that is too small, so a dilation is needed to fill the stroke gaps and restore the outline area of the character body. The dilated image is projected transversely and longitudinally to obtain the minimum and maximum coordinates of the white-pixel distribution, four coordinates in total. Combining the transverse and longitudinal projection values in pairs gives the four corner points to be cut; these are returned to the scaled image for cutting, which completes the peripheral noise clean-up.
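A sketch of this peripheral clean-up; the working size, kernel and iteration counts are assumptions made for illustration:

    import cv2
    import numpy as np

    def trim_peripheral_noise(bw, w=64, h=64):
        img = cv2.resize(bw, (w, h), interpolation=cv2.INTER_NEAREST)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        eroded = cv2.erode(img, kernel)                 # wipe out the small specks at the character edges
        body = cv2.dilate(eroded, kernel, iterations=2) # refill the stroke gaps eroded away

        rows = np.where(body.sum(axis=1) > 0)[0]        # longitudinal projection
        cols = np.where(body.sum(axis=0) > 0)[0]        # transverse projection
        if rows.size == 0 or cols.size == 0:
            return img
        # Crop to the min/max extent of the white pixels: the four corner points to be cut.
        return img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]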

Claims (3)

1. A text cutting method in an inscription image, characterized by comprising the following steps:
step 1, correcting a selected inscription character image to generate a preprocessed image;
step 2, performing text cutting on the preprocessed image to generate text images;
step 3, performing noise reduction on the text images;
wherein the step of performing text cutting on the preprocessed image to generate text images in step 2 comprises the following steps:
2.1, performing enhancement and graying on the preprocessed image to generate a gray image;
2.2, converting the gray image into a black-and-white image by binarization;
2.3, performing a first column projection on the black-and-white image, estimating the number of text columns, and changing the image width to the fixed column width multiplied by the estimated number of columns, wherein the fixed column width is a fixed value, namely the width of the image generated for each character after the final cut, set by the user according to actual requirements;
2.4, taking the black-and-white image whose width was changed after the first column projection as the input image of the second column projection;
2.5, performing the second column projection on the adjusted black-and-white image to obtain the column-projection cutting coordinates;
2.6, cutting the image according to the column-projection cutting coordinates to generate the text images.
2. The text cutting method in an inscription image according to claim 1, wherein the step of correcting the selected inscription character image in step 1 to generate the preprocessed image comprises:
1.1, performing enhancement and graying on the selected inscription character image to generate a gray image;
1.2, converting the gray image into a black-and-white image by binarization;
1.3, applying an affine algorithm to the black-and-white image to generate the preprocessed image.
3. The text cutting method in an inscription image according to claim 1, wherein the step of performing noise reduction on the text image in step 3 comprises:
3.1, performing enhancement and graying on the preprocessed image to generate a gray image;
3.2, converting the gray image into a black-and-white image by binarization;
3.3, removing small noise points and text-frame noise from the black-and-white image to generate a first denoised image;
3.4, removing frame and peripheral noise points from the first denoised image to generate a second denoised image;
wherein the first denoised image is obtained by the following steps:
3.31, converting the image to a w×h image with a black background and white characters, so that the unified size simplifies erosion and dilation;
3.32, applying an opening operation to remove small noise points;
3.33, applying mean filtering to the opened image;
and the second denoised image is obtained by the following steps:
3.41, applying a mask operation to the frame noise points;
3.42, clearing the peripheral noise points by transverse and longitudinal projection.
CN201910276531.6A 2019-04-08 2019-04-08 Text cutting method in inscription image Active CN110163212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910276531.6A CN110163212B (en) 2019-04-08 2019-04-08 Text cutting method in inscription image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910276531.6A CN110163212B (en) 2019-04-08 2019-04-08 Text cutting method in inscription image

Publications (2)

Publication Number Publication Date
CN110163212A CN110163212A (en) 2019-08-23
CN110163212B true CN110163212B (en) 2023-05-23

Family

ID=67639337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910276531.6A Active CN110163212B (en) 2019-04-08 2019-04-08 Text cutting method in inscription image

Country Status (1)

Country Link
CN (1) CN110163212B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699890A (en) * 2021-01-07 2021-04-23 北京美斯齐文化科技有限公司 Picture character cutting system


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446896B (en) * 2015-08-04 2020-02-18 阿里巴巴集团控股有限公司 Character segmentation method and device and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550682A (en) * 2015-11-26 2016-05-04 北京市计算中心 Tripod inscription rubbing method
CN106980857A (en) * 2017-02-24 2017-07-25 浙江工业大学 A kind of Brush calligraphy segmentation recognition method based on rubbings
CN108830857A (en) * 2018-05-29 2018-11-16 南昌工程学院 A kind of adaptive Chinese character rubbings image binaryzation partitioning algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bahrampour S. et al., "Performance comparison of feature extraction algorithms for target detection and classification", Pattern Recognition Letters, vol. 34, no. 16, Dec. 2013, pp. 2126-2134. *

Also Published As

Publication number Publication date
CN110163212A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN110866924B (en) Line structured light center line extraction method and storage medium
CN109785291B (en) Lane line self-adaptive detection method
US8401333B2 (en) Image processing method and apparatus for multi-resolution feature based image registration
CN102790841B (en) Method of detecting and correcting digital images of books in the book spine area
US7301564B2 (en) Systems and methods for processing a digital captured image
CN110298282B (en) Document image processing method, storage medium and computing device
CN110647795B (en) Form identification method
Zhang et al. A unified framework for document restoration using inpainting and shape-from-shading
US7613355B2 (en) Image processing device and registration data generation method in image processing
US20110013232A1 (en) Image processing device, image processing system, image processing method and computer readable medium
CN109035170B (en) Self-adaptive wide-angle image correction method and device based on single grid image segmentation mapping
WO2011089813A1 (en) Image processing device
CN108335266B (en) Method for correcting document image distortion
CN114494306B (en) Edge gradient covariance guided method for repairing character outline of first bone and Doppler dictionary
CN111126418A (en) Oblique image matching method based on planar perspective projection
CN110163212B (en) Text cutting method in inscription image
CN114648458A (en) Fisheye image correction method and device, electronic equipment and storage medium
CN109359652A (en) A method of the fast automatic extraction rectangular scanning part from digital photograph
US20120038785A1 (en) Method for producing high resolution image
CN116739926A (en) Method for performing perspective transformation correction on shooting result of Demura camera
CN110827209A (en) Self-adaptive depth image restoration method combining color and depth information
JP6006675B2 (en) Marker detection apparatus, marker detection method, and program
CN110390339B (en) Image correction method, device and storage medium
CN114331814A (en) Distorted picture correction method and display equipment
CN107194389B (en) Binary image correction method based on morphology and grid structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant