CN109977959B - Train ticket character area segmentation method and device - Google Patents


Info

Publication number
CN109977959B
CN109977959B (application CN201910249443.7A)
Authority
CN
China
Prior art keywords
image
train ticket
character object
target
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910249443.7A
Other languages
Chinese (zh)
Other versions
CN109977959A (en)
Inventor
王栋
徐彧
李宏伟
龚政
郭宝贤
刘琳琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guowang Xiongan Finance Technology Group Co ltd
State Grid Digital Technology Holdings Co ltd
State Grid Corp of China SGCC
Original Assignee
Guowang Xiongan Finance Technology Group Co ltd
State Grid Corp of China SGCC
State Grid E Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guowang Xiongan Finance Technology Group Co ltd, State Grid Corp of China SGCC, and State Grid E Commerce Co Ltd
Priority to CN201910249443.7A
Publication of CN109977959A
Application granted
Publication of CN109977959B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Character Input (AREA)

Abstract

The application provides a method and a device for segmenting the character areas of a train ticket. The method comprises the following steps: acquiring an initial image, wherein the initial image comprises a train ticket image; detecting the train ticket two-dimensional code area in the initial image; segmenting an image containing a target character object from the initial image as a first image, based on the relative position relationship between the train ticket two-dimensional code area and the target character object; and detecting a bounding box of the character area in the first image, and segmenting the image contained in the bounding box from the first image as a target image. The present application can improve the accuracy of character area segmentation.

Description

Train ticket character area segmentation method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for segmenting a train ticket character region.
Background
In some fields (e.g., travel reimbursement), the entry of train ticket information may be involved. At present, entry is mostly performed manually, which is time-consuming and error-prone.
To solve the problems of manual entry, automatic train ticket information recognition technology has been developed. A key step in automatic train ticket information recognition is segmenting the character areas in the train ticket. However, how to segment these character areas effectively remains a problem.
Disclosure of Invention
To solve the above technical problems, embodiments of the present application provide a method and an apparatus for segmenting the character areas of a train ticket, so as to improve the accuracy of character area segmentation. The technical scheme is as follows:
a train ticket character area segmentation method comprises the following steps:
acquiring an initial image, wherein the initial image comprises a train ticket image;
detecting a two-dimensional code area of the train ticket in the initial image;
dividing an image containing a target character object from the initial image as a first image based on the relative position relation between the two-dimensional code area of the train ticket and the target character object;
and detecting a boundary frame of a character area in the first image, and segmenting an image contained in the boundary frame in the first image to be used as a target image.
Preferably, the segmenting the image containing the target character object from the initial image based on the relative position relationship between the train ticket two-dimensional code area and the target character object includes:
rotating the initial image, taking the rotated image as a normalized image, and horizontally placing the two-dimensional code area of the train ticket in the normalized image;
calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the normalized image based on the vertex coordinate of the two-dimensional code area of the train ticket and the relative position relationship between the two-dimensional code area of the train ticket and the target character object;
and segmenting the image containing the target character object from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the target character object.
Preferably, the detecting a bounding box of the character region in the first image includes:
carrying out reverse color processing on the first image;
extracting a white pixel region from the first image after the reverse color processing, and marking a white pixel point in the white pixel region as 1 and a non-white pixel point as 0 to obtain a binary image;
projecting the binary image in the horizontal direction to obtain a horizontal histogram;
starting from the center of the horizontal histogram, searching toward both ends for the first points at which the number of pixels with value 1 is 0, and taking the distance between the two searched points as the target height;
performing vertical direction projection on the binary image to obtain a vertical histogram;
starting from a first end point of the vertical histogram, searching toward the center for a point at which the number of pixels with value 1 is not 0, and taking the first such point as a first target point; starting from a second end point of the vertical histogram, searching toward the center for a point at which the number of pixels with value 1 is not 0, and taking the first such point as a second target point; and taking the distance between the first target point and the second target point as the target width;
and taking a rectangular box with the width of the target width and the height of the target height as a boundary box of the character area.
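The projection-based bounding-box search described above can be sketched as follows. This is a minimal illustration under the assumption that the binarized image marks character pixels as 1, not the patent's own implementation:

```python
import numpy as np

def character_bounding_box(binary):
    rows = binary.sum(axis=1)   # horizontal projection: count of value-1 pixels per row
    cols = binary.sum(axis=0)   # vertical projection: count of value-1 pixels per column
    center = len(rows) // 2
    # Height: from the center, search toward both ends for the first empty row.
    top = center
    while top > 0 and rows[top - 1] > 0:
        top -= 1
    bottom = center
    while bottom < len(rows) - 1 and rows[bottom + 1] > 0:
        bottom += 1
    # Width: from each end, search toward the center for the first nonempty column.
    nonzero = np.nonzero(cols)[0]
    left, right = int(nonzero[0]), int(nonzero[-1])
    # Rectangle with the target width and target height described above.
    return left, top, right, bottom
```

The target height is `bottom - top` and the target width is `right - left`; the rectangle they define is the character area's bounding box.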
Preferably, the extracting a white pixel region from the first image after the reverse color processing includes:
transforming the first image after the reverse color processing into an image of an HSV domain as a first image to be processed;
calculating the maximum value of a V channel in the first image to be processed;
judging whether the maximum value of the V channel is within an adjustment threshold range;
if so, adjusting the white threshold range, and selecting pixel points with three channel values in the adjusted white threshold range from the first image to be processed as white pixel points;
if not, selecting pixel points with three channel values within the white threshold range from the first image to be processed as white pixel points;
and taking the area containing the white pixel points in the first image to be processed as a white pixel area.
Preferably, the calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the normalized image based on the vertex coordinate of the train ticket two-dimensional code area and the relative position relationship between the train ticket two-dimensional code area and the target character object includes:
calculating the coordinates of the upper left corner and the lower right corner of the train ticket image in the normalized image based on the vertex coordinates of the two-dimensional code area of the train ticket;
dividing the train ticket image from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the train ticket image;
and calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the train ticket image based on the vertex coordinate of the train ticket two-dimensional code area and the relative position relation between the train ticket two-dimensional code area and the target character object.
A train ticket character area segmentation apparatus, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an initial image, and the initial image comprises a train ticket image;
the first detection module is used for detecting a two-dimensional code area of the train ticket in the initial image;
the first segmentation module is used for segmenting an image containing a target character object from the initial image as a first image based on the relative position relation between the two-dimensional code area of the train ticket and the target character object;
and the second segmentation module is used for detecting a boundary frame of the character area in the first image and segmenting an image contained in the boundary frame in the first image to be used as a target image.
Preferably, the first segmentation module includes:
the rotation submodule is used for rotating the initial image and taking the rotated image as a normalized image, wherein the train ticket two-dimensional code area in the normalized image is horizontally placed;
the first calculation sub-module is used for calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the normalized image based on the vertex coordinate of the two-dimensional code area of the train ticket and the relative position relation between the two-dimensional code area of the train ticket and the target character object;
and the first segmentation sub-module is used for segmenting the image containing the target character object from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the target character object.
Preferably, the second segmentation module includes:
the reverse color processing submodule is used for carrying out reverse color processing on the first image;
the binarization submodule is used for extracting a white pixel area from the first image after the reverse color processing, and marking white pixel points in the white pixel area as 1 and non-white pixel points as 0 to obtain a binarization image;
the first determining submodule is used for carrying out horizontal direction projection on the binary image to obtain a horizontal histogram;
a second determining submodule, configured to search, starting from the center of the horizontal histogram, toward both ends for the first points at which the number of pixels with value 1 is 0, and to take the distance between the two searched points as the target height;
the third determining submodule is used for carrying out vertical direction projection on the binary image to obtain a vertical histogram;
a fourth determining submodule, configured to search, starting from a first end point of the vertical histogram, toward the center for a point at which the number of pixels with value 1 is not 0, and to take the first such point as a first target point; to search, starting from a second end point of the vertical histogram, toward the center for a point at which the number of pixels with value 1 is not 0, and to take the first such point as a second target point; and to take the distance between the first target point and the second target point as the target width;
and the fifth determining submodule is used for taking a rectangular frame with the width of the target width and the height of the target height as a boundary frame of the character area.
Preferably, the binarization submodule is specifically configured to:
transforming the first image after the reverse color processing into an image of an HSV domain as a first image to be processed;
calculating the maximum value of a V channel in the first image to be processed;
judging whether the maximum value of the V channel is within an adjustment threshold range;
if so, adjusting the white threshold range, and selecting pixel points with three channel values in the adjusted white threshold range from the first image to be processed as white pixel points;
if not, selecting pixel points with three channel values within the white threshold range from the first image to be processed as white pixel points;
and taking the area containing the white pixel points in the first image to be processed as a white pixel area.
Preferably, the first calculation submodule is specifically configured to:
calculating the coordinates of the upper left corner and the lower right corner of the train ticket image in the normalized image based on the vertex coordinates of the two-dimensional code area of the train ticket;
dividing the train ticket image from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the train ticket image;
and calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the train ticket image based on the vertex coordinate of the train ticket two-dimensional code area and the relative position relation between the train ticket two-dimensional code area and the target character object.
Compared with the prior art, the beneficial effects of this application are as follows:
In the method, based on the principle that the two-dimensional code in a train ticket image is easy to detect, the train ticket two-dimensional code area in the initial image is detected after the initial image is obtained. Because the spatial position of each character area in the train ticket image is relatively fixed, an image containing the target character object can be segmented from the initial image as a first image based on the relative position relationship between the train ticket two-dimensional code area and the target character object, thereby realizing a preliminary segmentation of the character area.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of a method for segmenting character regions of a train ticket according to the present application;
FIG. 2 is a schematic illustration of a first image provided herein;
FIG. 3 is a flow chart of another method for segmenting character regions of a train ticket provided by the present application;
FIG. 4 is a schematic diagram of an image containing a two-dimensional code area of a train ticket provided by the application;
FIG. 5 is a flow chart of another method for segmenting character regions of a train ticket according to the present application;
FIG. 6 is a schematic diagram of a binarized image provided by the present application;
FIG. 7 is a schematic diagram of a horizontal histogram provided herein;
FIG. 8(a) is a schematic diagram of another binarized image provided herein, and FIG. 8(b) is a schematic diagram of another horizontal histogram provided herein;
FIG. 9 is a schematic diagram of a vertical histogram provided herein;
FIG. 10(a) is a schematic diagram of another binarized image provided herein, and FIG. 10(b) is a schematic diagram of another vertical histogram provided herein;
FIG. 11 is a schematic illustration of a target image provided herein;
FIG. 12 is a diagram illustrating the segmentation results of a train ticket image according to the present application;
FIG. 13 is a flow chart of another method for segmenting character regions of a train ticket according to the present application;
FIG. 14 is a flow chart of another method for segmenting character regions of a train ticket according to the present application;
fig. 15 is a schematic logical structure diagram of a train ticket character region segmentation apparatus provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses a train ticket character area segmentation method, which comprises the following steps: acquiring an initial image, wherein the initial image comprises a train ticket image; detecting a two-dimensional code area of the train ticket in the initial image; dividing an image containing a target character object from the initial image as a first image based on the relative position relation between the two-dimensional code area of the train ticket and the target character object; and detecting a boundary frame of a character area in the first image, and segmenting an image contained in the boundary frame in the first image to be used as a target image. In the present application, the accuracy of character region segmentation can be improved.
Referring to fig. 1, a method for segmenting a train ticket character area disclosed in an embodiment of the present application is described below, which may include the following steps:
and step S11, acquiring an initial image, wherein the initial image comprises a train ticket image.
In this embodiment, the initial image may be obtained directly when the user inputs the image, or it may be obtained from a target storage area.
And step S12, detecting a two-dimensional code area of the train ticket in the initial image.
In this embodiment, the process of detecting the two-dimensional code area of the train ticket in the initial image may include:
a11, converting the initial image into a gray image;
in this step, the initial image may be preprocessed, and the preprocessed image may be converted into a grayscale image.
The preprocessing process may comprise the following steps: reading the pixel values of the initial image in RGB (red, green, blue) three-channel mode, and normalizing the pixel values to the range 0 to 255 per channel.
A12, converting the gray level image into a binary image;
and A13, detecting the two-dimensional code area of the train ticket in the binary image by using a pyzbar software package.
The result of detecting the train ticket two-dimensional code area in the binarized image using the pyzbar software package may include the ticket number information and the position information of the two-dimensional code (i.e., the four vertex coordinates of the two-dimensional code area). It is worth noting that the four vertex coordinates output by pyzbar are the original coordinates of the train ticket two-dimensional code in the image.
More preferably, the process of detecting the train ticket two-dimensional code area in the initial image may include:
and B11, converting the initial image into a gray-scale image.
And B12, calculating a binarization threshold using the Yen algorithm, and denoting the threshold as T.
B13, binarizing the gray level image by using T to obtain a binarized image;
B14, judging whether the train ticket two-dimensional code area can be detected from the binarized image;
if yes, go to step B15; if not, go to step B16.
And B15, detecting the two-dimensional code area of the train ticket in the binary image.
And B16, adjusting T with a step size of n to obtain T1, updating T to T1, and returning to step B13 until the train ticket two-dimensional code area can be detected.
n is a number greater than 0. Preferably, n may be set to 2.
It should be noted that, due to factors such as lighting, the train ticket two-dimensional code area may not be successfully detected from the image obtained by binarizing the grayscale image with T. Adjusting T therefore improves the reliability of train ticket two-dimensional code area detection.
And step S13, dividing an image containing the target character object from the initial image as a first image based on the relative position relation between the two-dimensional code area of the train ticket and the target character object.
It can be understood that the position of the character area in the space in the train ticket image is relatively fixed, and a relatively fixed relative position relationship exists between the train ticket two-dimensional code area and the target character object, so that the image containing the target character object can be segmented from the initial image as the first image based on the relative position relationship between the train ticket two-dimensional code area and the target character object.
The characters in the target character object may include, but are not limited to: Chinese characters, numbers, or letters.
In this embodiment, a plurality of train ticket sample images can be obtained in advance, the target character object area and the train ticket two-dimensional code area in each sample image are labeled, and the relative position relationship between the train ticket two-dimensional code area and the target character object is calibrated by calculating position information such as the distance between the two labeled areas. It can be understood that the larger the number of train ticket sample images, the more accurately this relative position relationship can be calibrated.
Target character objects may include, but are not limited to: a departure station character object, a train number character object, a departure time character object, a seat number character object, a money amount character object, a seat level character object, or an identification number and name character object.
As shown in fig. 2, the image whose character content is "Beijing South Station", the image whose character content is "G165", and the like are all first images segmented from the initial image.
Step S14, detecting a bounding box of the character region in the first image, and dividing an image included in the bounding box in the first image as a target image.
Preferably, the bounding box of the character region in the first image may be the minimum bounding box of the character region. The minimum bounding box can be understood as the smallest rectangular box that can completely surround the character region.
In the method, based on the principle that the two-dimensional code in a train ticket image is easy to detect, the train ticket two-dimensional code area in the initial image is detected after the initial image is obtained. Because the spatial position of each character area in the train ticket image is relatively fixed, an image containing the target character object can be segmented from the initial image as a first image based on the relative position relationship between the train ticket two-dimensional code area and the target character object, thereby realizing a preliminary segmentation of the character area.
As another alternative embodiment of the present application, referring to fig. 3, a flowchart of embodiment 2 of the train ticket character region segmentation method provided by the present application is shown. This embodiment mainly describes a refinement of the train ticket character region segmentation method described in the foregoing embodiment. As shown in fig. 3, the method may include, but is not limited to, the following steps:
and step S31, acquiring an initial image, wherein the initial image comprises a train ticket image.
And step S32, detecting a two-dimensional code area of the train ticket in the initial image.
The detailed processes of steps S31-S32 can be referred to the related descriptions of steps S11-S12 in the previous embodiment, and are not described herein again.
And S33, rotating the initial image, taking the rotated image as a normalized image, and horizontally placing the two-dimensional code area of the train ticket in the normalized image.
In this embodiment, the initial image may be rotated according to the vertex coordinates of the two-dimensional code area of the train ticket, for example, the initial image may be rotated according to the coordinates of the point 1 and the point 4 shown in fig. 4, so that the point 1 and the point 4 are located on a horizontal line, and it is ensured that the two-dimensional code area of the train ticket is horizontally placed, that is, the image of the train ticket is horizontally placed.
Step S34, calculating the coordinates of the upper left corner and the lower right corner of the target character object in the normalized image based on the vertex coordinates of the two-dimensional code area of the train ticket and the relative position relationship between the two-dimensional code area of the train ticket and the target character object.
Preferably, the calculating the coordinates of the upper left corner and the lower right corner of the target character object in the normalized image based on the vertex coordinates of the two-dimensional code area of the train ticket and the relative position relationship between the two-dimensional code area of the train ticket and the target character object may include:
C11, based on the vertex coordinates of the train ticket two-dimensional code area, calculating the width W of the train ticket two-dimensional code area using a first relational expression [formula image in original document], and calculating the height H of the train ticket two-dimensional code area using a second relational expression [formula image in original document].
pi.x denotes the x-coordinate of point i and pi.y denotes the y-coordinate of point i, for i = 1, 2, 3, 4.
C12, calculating the segmentation reference length S using a relational expression [formula image in original document].
And C13, calculating the coordinates of the upper left corner and the lower right corner of the target character object in the normalized image by using the vertex coordinates of the two-dimensional code area of the train ticket, the segmentation reference length S and the relative position relationship between the two-dimensional code area of the train ticket and the target character object.
If the target character object is the departure station character object, the upper left corner coordinates of the departure station character object can be calculated using the relational expressions s1.x = p1.x - 4.6S and s1.y = p1.y - 1.98S, where (s1.x, s1.y) denotes the upper left corner coordinates; the lower right corner coordinates of the departure station character object are calculated using the relational expressions s3.x = p1.x - 2.5S and s3.y = p3.y - 1.45S, where (s3.x, s3.y) denotes the lower right corner coordinates.
If the target character object is the train number character object, the upper left corner coordinates of the train number character object can be calculated using the relational expressions c1.x = p1.x - 2.3S and c1.y = p1.y - 1.98S, where (c1.x, c1.y) denotes the upper left corner coordinates of the train number character object; the lower right corner coordinates of the train number character object are calculated using the relational expressions c3.x = p1.x - 1.15S and c3.y = p3.y - 1.45S, where (c3.x, c3.y) denotes the lower right corner coordinates of the train number character object.
If the target character object is the arrival station character object, the upper left corner coordinates of the arrival station character object can be calculated using the relational expressions e1.x = p1.x - 1S and e1.y = p1.y - 1.98S, where (e1.x, e1.y) denotes the upper left corner coordinates of the arrival station character object; the lower right corner coordinates of the arrival station character object are calculated using the relational expressions e3.x = p1.x + 1S and e3.y = p3.y - 1.45S, where (e3.x, e3.y) denotes the lower right corner coordinates of the arrival station character object.
If the target character object is the departure time character object, the upper left corner coordinates of the departure time character object can be calculated using the relational expressions st1.x = p1.x - 4.6S and st1.y = p1.y - 1.25S, where (st1.x, st1.y) denotes the upper left corner coordinates of the departure time character object; the lower right corner coordinates of the departure time character object are calculated using the relational expressions st3.x = p1.x - 1.4S and st3.y = p3.y - 0.8S, where (st3.x, st3.y) denotes the lower right corner coordinates of the departure time character object.
If the target character object is the seat number character object, the upper left corner coordinates of the seat number character object can be calculated using the relational expressions sn1.x = p1.x - 1S and sn1.y = p1.y - 1.25S, where (sn1.x, sn1.y) denotes the upper left corner coordinates of the seat number character object; the lower right corner coordinates of the seat number character object are calculated using the relational expressions sn3.x = p1.x + 0.4S and sn3.y = p3.y - 0.8S, where (sn3.x, sn3.y) denotes the lower right corner coordinates of the seat number character object.
If the target character object is the amount character object, the upper left corner coordinates of the amount character object can be calculated using the relational expressions m1.x = p1.x - 4.6S and m1.y = p1.y - 0.9S, where (m1.x, m1.y) denotes the upper left corner coordinates of the amount character object; the lower right corner coordinates of the amount character object are calculated using the relational expressions m3.x = p1.x - 2.9S and m3.y = p3.y - 0.5S, where (m3.x, m3.y) denotes the lower right corner coordinates of the amount character object.
If the target character object is a seat-level character object, the upper-left-corner coordinates of the seat-level character object can be calculated by using the relational expression sg1.x ═ p1.x-0.7S, sg1.y ═ p1.y-0.9S, (sg1.x, sg1.y) represents the upper-left-corner coordinates of the seat-level character object; the lower right corner coordinates of the seat level character object are calculated using the relations sg3.x ═ p1.x +0.4S, sg3.y ═ p3.y-0.5S, (sg3.x, sg3.y) indicate the lower right corner coordinates of the seat level character object.
If the target character object is the identity card number and name character object, the upper left corner coordinates of the identity card number and name character object can be calculated by using the relational expressions ID1.x = P1.x - 4.6S and ID1.y = P1.y + 0.05S, where (ID1.x, ID1.y) represents the upper left corner coordinates of the identity card number and name character object; the lower right corner coordinates of the identity card number and name character object are calculated by using the relational expressions ID3.x = P1.x - 0.75S and ID3.y = P3.y + 0.5S, where (ID3.x, ID3.y) represents the lower right corner coordinates of the identity card number and name character object.
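The relational expressions above all follow one pattern: the top-left corner is offset from vertex P1 of the two-dimensional code area, and the bottom-right corner mixes P1.x with P3.y, with every offset expressed as a multiple of the segmentation reference length S. A minimal sketch that collects those coefficients in a lookup table follows; the dictionary, the function name and the tuple layout are illustrative conventions of this sketch, not from the patent — only the numeric coefficients are transcribed from the expressions above.

```python
# Offset coefficients (in multiples of the segmentation reference length S)
# transcribed from the relational expressions above. P1/P3 are the top-left
# and bottom-right vertices of the detected two-dimensional code area.
CHAR_OBJECT_OFFSETS = {
    # name: (dx1, dy1, dx3, dy3) so that
    #   top-left     = (P1.x + dx1*S, P1.y + dy1*S)
    #   bottom-right = (P1.x + dx3*S, P3.y + dy3*S)
    "arrival_station": (-1.0, -1.98, +1.0, -1.45),
    "departure_time":  (-4.6, -1.25, -1.4, -0.80),
    "seat_number":     (-1.0, -1.25, +0.4, -0.80),
    "amount":          (-4.6, -0.90, -2.9, -0.50),
    "seat_level":      (-0.7, -0.90, +0.4, -0.50),
}

def char_object_box(name, p1, p3, s):
    """Return ((x1, y1), (x3, y3)) for the named target character object."""
    dx1, dy1, dx3, dy3 = CHAR_OBJECT_OFFSETS[name]
    top_left = (p1[0] + dx1 * s, p1[1] + dy1 * s)
    bottom_right = (p1[0] + dx3 * s, p3[1] + dy3 * s)
    return top_left, bottom_right
```

A table-driven form like this makes it straightforward to add or re-measure an object's offsets without touching the per-object branches.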
Step S35, based on the upper left corner coordinate and the lower right corner coordinate of the target character object, segmenting an image including the target character object from the normalized image as a first image.
Based on the upper left corner coordinate and the lower right corner coordinate of the target character object, the position of the target character object in the normalized image can be determined, and then the image containing the target character object can be segmented from the normalized image to be used as the first image.
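The segmentation itself is an axis-aligned crop between the two computed corners. A hedged sketch follows, with the image held as a plain row-major list of pixel rows (an illustrative representation, not the patent's); note that the (x, y) coordinates of the relational expressions index columns and rows respectively.

```python
def crop(image, top_left, bottom_right):
    """Cut the axis-aligned box [top_left, bottom_right) out of a
    row-major image (a list of pixel rows); corners are (x, y) pairs
    as in the relational expressions above."""
    (x1, y1), (x3, y3) = top_left, bottom_right
    return [row[x1:x3] for row in image[y1:y3]]
```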
Step S36, detecting a bounding box of the character region in the first image, and dividing an image included in the bounding box in the first image as a target image.
The detailed process of step S36 can be referred to the related description of step S14 in the previous embodiment, and is not repeated here.
As another alternative embodiment of the present application, referring to fig. 5, a schematic flow chart of an embodiment 3 of a train ticket character region segmentation method provided by the present application is provided, and this embodiment mainly relates to a refinement scheme of the train ticket character region segmentation method described in embodiment 2, as shown in fig. 5, the method may include, but is not limited to, the following steps:
and step S51, acquiring an initial image, wherein the initial image comprises a train ticket image.
And step S52, detecting a two-dimensional code area of the train ticket in the initial image.
And S53, rotating the initial image, taking the rotated image as a normalized image, and horizontally placing the two-dimensional code area of the train ticket in the normalized image.
Step S54, calculating the coordinates of the upper left corner and the lower right corner of the target character object in the normalized image based on the vertex coordinates of the two-dimensional code area of the train ticket and the relative position relationship between the two-dimensional code area of the train ticket and the target character object.
Step S55, based on the upper left corner coordinate and the lower right corner coordinate of the target character object, segmenting an image including the target character object from the normalized image as a first image.
The detailed processes of steps S51-S55 can be referred to the related descriptions of steps S31-S35 in the previous embodiment, and are not described herein again.
Step S56, performing a reverse color process on the first image.
In this embodiment, the first image may be subjected to reverse color processing by using the relational expression Isi = 255 - Is,
where Is denotes the value of a pixel in the first image, and Isi denotes the value of the corresponding pixel in the image after the reverse color processing.
The first image is subjected to reverse color processing, so that black pixel points in the first image can be changed into white pixel points, and the white pixel points are changed into black pixel points.
The effective character information in the train ticket image is generally black, such as departure station character information, arrival station character information and the like, and after the first image is subjected to reverse color processing, the effective character information in the train ticket image is changed into white pixels.
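The reverse color step is a per-pixel application of Isi = 255 - Is. A minimal sketch on a plain-list grayscale image (the list representation and function name are this sketch's conventions):

```python
def invert(gray):
    """Reverse-color an 8-bit grayscale image: Isi = 255 - Is per pixel,
    so the black effective character pixels become white."""
    return [[255 - px for px in row] for row in gray]
```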
And step S57, extracting a white pixel region from the first image after the reverse color processing, and marking a white pixel point in the white pixel region as 1 and a non-white pixel point as 0 to obtain a binary image.
It can be understood that, by extracting the white pixel region from the first image after the reverse color processing, the region containing the valid character information can be more accurately determined.
In this embodiment, the white pixel region is binarized by marking the white pixels in the white pixel region as 1 and the non-white pixels as 0, so as to obtain a binarized image.
The binarized image can be shown in fig. 6, where the character information is composed of white pixel points, and the non-character information is composed of black pixel points.
In this embodiment, the process of extracting a white pixel region from the first image after the reverse color processing may include:
d11, converting the first image after the reverse color processing into an image of an HSV domain as a first image to be processed;
d12, calculating a threshold value of a V channel in the first image to be processed by adopting a Yen algorithm, and recording the threshold value as Tv;
d13, adjusting the white threshold range according to the threshold value of the V channel, and selecting pixel points with three channel values in the white threshold range from the first image to be processed as white pixel points.
In this embodiment, the white threshold range is adjusted through the threshold value of the V channel, and the white pixel point is selected according to the adjusted white threshold range, so that the reliability of selecting the white pixel point can be improved.
For example, the initial value of the white threshold range may be set to the lower white threshold Twb = (0, 0, Tv + 38) and the upper white threshold Twt = (180, 30, 255).
And D14, taking the area containing the white pixel points in the first image to be processed as a white pixel area.
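Steps D11-D14 can be sketched as a per-pixel range test in the HSV domain. In the sketch below the Yen threshold Tv is taken as a precomputed input (in practice it could come from, e.g., `skimage.filters.threshold_yen`); the thresholds follow the (0, 0, Tv + 38) / (180, 30, 255) example above, stated in OpenCV-style HSV ranges (H in [0, 180], S and V in [0, 255]). The function name and plain-list pixel layout are illustrative assumptions.

```python
def white_mask(hsv_pixels, tv):
    """Mark pixels whose (H, S, V) triple lies inside the adjusted white
    range as 1 and all others as 0, yielding the binarized image of
    steps D13/S57.  hsv_pixels is a row-major list of (H, S, V) tuples;
    tv is the V-channel threshold from the Yen algorithm (step D12)."""
    lo = (0, 0, tv + 38)      # lower white threshold Twb
    hi = (180, 30, 255)       # upper white threshold Twt
    return [[1 if all(lo[c] <= px[c] <= hi[c] for c in range(3)) else 0
             for px in row]
            for row in hsv_pixels]
```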
And step S58, projecting the binary image in the horizontal direction to obtain a horizontal histogram.
And step S59, starting from the center of the horizontal histogram, searching toward the two ends for points at which the number of pixel points with a pixel value of 1 is 0, and taking the area between the found points as the target height.
A point in the horizontal histogram where the number of pixel points having a pixel value of 1 is 0 may represent a certain line in the non-white region.
As shown in fig. 7, starting from the center Q in the horizontal histogram, the points whose number of pixels having a pixel value of 1 is 0 are found to be M and N, respectively, and the region between M and N can be used as the target height.
In the present embodiment, the details of steps S58-S59 are described by taking the departure station area whose content is "Lishui Station" as an example. The binarized image Ib of the departure station character object shown in fig. 8(a) is represented by a [139 × 502] matrix of 0s and 1s, and the horizontal projection H(i) = sum(Ib(i, :)), i = 0, …, 138, is calculated to obtain the horizontal histogram shown in fig. 8(b). The value at the center coordinate of H, namely H(69) (the number of pixel points with a pixel value of 1 in that row), is then detected: if it is not 0, that point is taken as the initial search position Hs and the search proceeds toward the two ends; if it is 0, the initial search position is adjusted, namely a point whose value is not 0 is searched for within a range of plus or minus 10 with a step length of 1, and that point is taken as the initial search position Hs for searching toward the two ends. When positions 46 and 134 are reached, the value is 0, so the area between 46 and 134 is determined to be the target height.
And step S510, carrying out vertical direction projection on the binary image to obtain a vertical histogram.
Step S511, starting from a first end point of the vertical histogram, searching toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and taking the first point found as a first target point; starting from a second end point of the vertical histogram, searching toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and taking the first point found as a second target point; and taking the distance between the first target point and the second target point as the target width.
A point in the vertical histogram where the number of pixels having a pixel value of 1 is not 0 may represent a certain column in the white pixel region.
As shown in fig. 9, L1 represents the first target point, L2 represents the second target point, and the distance between L1 and L2 is the target width.
In the present embodiment, the details of steps S510-S511 are described by taking the departure station area whose content is "Lishui Station" as an example. Rows 0-46 and 134-138 of the binarized image Ib of the departure station character object shown in fig. 10(a) are set to 0, and then the vertical projection V(i) = sum(Ib(:, i)), i = 0, …, 501, is calculated to obtain the vertical histogram shown in fig. 10(b). Starting from the two end points of the vertical histogram, points whose values are 0 are found first, namely points 0 and 500 in fig. 10(b); taking these two points as the initial search positions and searching toward the center, the first points whose values are not 0, namely points 110 and 490, are found, and the area between 110 and 490 is the target width.
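The end-inward column search of steps S510-S511 reduces to finding the first and last non-empty columns of the binarized image. A hedged sketch (function name and 0/1-list layout are illustrative; it assumes the image contains at least one 1-pixel):

```python
def target_width(binary):
    """Find the first and last non-empty columns of a 0/1 binarized image.

    Projects the image vertically (one count of 1-pixels per column),
    then searches from both end points toward the centre for the first
    columns whose count is not 0."""
    cols = len(binary[0])
    v = [sum(row[j] for row in binary) for j in range(cols)]  # vertical histogram
    left = next(j for j in range(cols) if v[j] != 0)            # first target point
    right = next(j for j in range(cols - 1, -1, -1) if v[j] != 0)  # second target point
    return left, right
```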
Step S512, a rectangular frame with the width as the target width and the height as the target height is used as a boundary frame of the character area.
Step S513 divides the image included in the bounding box in the first image to be a target image.
The image included in the bounding box in the first image is segmented, and the obtained target image can be referred to fig. 11.
By applying the segmentation method for the train ticket character region described in the foregoing embodiments, each target character object in the train ticket image is segmented, and the segmentation result can be referred to in fig. 12.
As another alternative embodiment of the present application, referring to fig. 13, there is provided a flowchart of embodiment 3 of a method for segmenting a character area of a train ticket, where the method may include, but is not limited to, the following steps:
Step S101, obtaining an initial image, wherein the initial image comprises a train ticket image.
And S102, detecting a two-dimensional code area of the train ticket in the initial image.
And S103, rotating the initial image, wherein the rotated image is used as a normalized image, and the two-dimensional code area of the train ticket in the normalized image is horizontally placed.
And step S104, calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the normalized image based on the vertex coordinate of the two-dimensional code area of the train ticket and the relative position relation between the two-dimensional code area of the train ticket and the target character object.
And S105, dividing an image containing the target character object from the normalized image as a first image based on the upper left corner coordinate and the lower right corner coordinate of the target character object.
And step S106, performing reverse color processing on the first image.
And S107, extracting a white pixel region from the first image after the reverse color processing, and marking a white pixel point in the white pixel region as 1 and a non-white pixel point as 0 to obtain a binary image.
The detailed processes of steps S101-S107 can be referred to the related descriptions of steps S51-S57 in the foregoing embodiments, and are not described herein again.
And S108, performing expansion and corrosion operations on the binary image.
Preferably, the binarized image may be subjected to the dilation and erosion operations using a [3 × 3] convolution kernel.
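Dilation fills small gaps inside strokes and erosion removes isolated noise pixels before projection. In practice this would typically be OpenCV's `cv2.dilate`/`cv2.erode` with a 3 × 3 kernel; the pure-Python sketch below is an illustrative stand-in whose border handling (out-of-bounds neighbours are simply skipped, so erosion does not shrink at the border the way OpenCV's default padding does) is an assumption of this sketch.

```python
def dilate3(b):
    """3x3 binary dilation: a pixel becomes 1 if any in-bounds neighbour is 1."""
    h, w = len(b), len(b[0])
    return [[1 if any(b[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w)
             else 0
             for x in range(w)] for y in range(h)]

def erode3(b):
    """3x3 binary erosion: a pixel stays 1 only if all in-bounds neighbours are 1."""
    h, w = len(b), len(b[0])
    return [[1 if all(b[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w)
             else 0
             for x in range(w)] for y in range(h)]
```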
And step S109, projecting the binarized image subjected to the expansion and corrosion operations in the horizontal direction to obtain a horizontal histogram.
Step S110, starting from the center of the horizontal histogram, searching toward the two ends for points at which the number of pixel points with a pixel value of 1 is 0, and taking the distance between the found points as the target height.
And step S111, performing vertical direction projection on the binary image subjected to the expansion and corrosion operation to obtain a vertical histogram.
Step S112, starting from a first end point of the vertical histogram, searching toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and taking the first point found as a first target point; starting from a second end point of the vertical histogram, searching toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and taking the first point found as a second target point; and taking the distance between the first target point and the second target point as the target width.
After the expansion and corrosion operations are carried out on the binary image, the accuracy of the target height and the target width can be improved.
And step S113, taking a rectangular frame with the width as the target width and the height as the target height as a boundary frame of the character area.
And step S114, dividing the image contained in the boundary frame in the first image to be used as a target image.
The detailed processes of steps S109-S114 can refer to the related descriptions of steps S58-S513 in the foregoing embodiments, and are not described herein again.
As another alternative embodiment of the present application, referring to fig. 14, a schematic flowchart of an embodiment 4 of a train ticket character region segmentation method provided by the present application is provided, where this embodiment mainly relates to a refinement scheme of the train ticket character region segmentation method described in embodiment 2, and as shown in fig. 14, this method may include, but is not limited to, the following steps:
and S111, acquiring an initial image, wherein the initial image comprises a train ticket image.
And step S112, detecting a two-dimensional code area of the train ticket in the initial image.
And S113, rotating the initial image, wherein the rotated image is used as a normalized image, and the two-dimensional code area of the train ticket in the normalized image is horizontally placed.
The detailed processes of steps S111-S113 can be referred to the related descriptions of steps S31-S32 in the foregoing embodiments, and are not described herein again.
And S114, calculating the coordinates of the upper left corner and the lower right corner of the train ticket face area in the normalized image based on the vertex coordinates of the train ticket two-dimensional code area.
Specifically, the upper left corner coordinate and the lower right corner coordinate of the train ticket face area in the normalized image may be calculated based on the vertex coordinate of the train ticket two-dimensional code area and the segmentation reference length S described in embodiment 2.
The process of calculating the upper left corner coordinate and the lower right corner coordinate of the train ticket face area in the normalized image based on the vertex coordinate of the train ticket two-dimensional code area and the segmentation reference length S introduced in embodiment 2 may include:
Taking point 1 in the two-dimensional code area of the train ticket as a reference (point 2, point 3 or point 4 can also be used as a reference), the upper left corner coordinates of the train ticket face area in the normalized image are calculated by using the relational expressions T1.x = max(0, P1.x - 5.0S) and T1.y = max(0, P1.y - 2.6S), where (T1.x, T1.y) represents the upper left corner coordinates of the train ticket face area; the lower right corner coordinates of the train ticket face area in the normalized image are calculated by using the relational expressions T3.x = min(Iw - 1, P1.x + 1.6S) and T3.y = min(Ih - 1, P1.y + 1.7S), where (T3.x, T3.y) represents the lower right corner coordinates of the train ticket face area.
Where Iw represents the width of the normalized image and Ih represents the height of the normalized image.
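The max/min terms clamp the face-area corners to the bounds of the normalized image, so a QR code near the ticket edge cannot produce coordinates outside the image. A minimal sketch of that computation (function name and argument order are this sketch's conventions):

```python
def ticket_face_box(p1, s, iw, ih):
    """Compute the face-area corners, clamped to the normalized image:
    T1 = (max(0, P1.x - 5.0S), max(0, P1.y - 2.6S)),
    T3 = (min(Iw - 1, P1.x + 1.6S), min(Ih - 1, P1.y + 1.7S))."""
    t1 = (max(0, p1[0] - 5.0 * s), max(0, p1[1] - 2.6 * s))
    t3 = (min(iw - 1, p1[0] + 1.6 * s), min(ih - 1, p1[1] + 1.7 * s))
    return t1, t3
```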
And S115, dividing the train ticket face area from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the train ticket face area.
And S116, calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the train ticket face area based on the vertex coordinate of the train ticket two-dimensional code area and the relative position relation between the train ticket two-dimensional code area and the target character object.
On the basis of dividing the train ticket face area in step S115, the upper left corner coordinate and the lower right corner coordinate of the target character object are calculated within the train ticket face area, so that the operation is simpler and more convenient, and the operation efficiency is improved.
Step S117, based on the upper left corner coordinate and the lower right corner coordinate of the target character object, segmenting an image including the target character object from the normalized image as a first image.
Step S118, detecting a bounding box of the character region in the first image, and segmenting an image included in the bounding box in the first image to be used as a target image.
The detailed process of steps S117-S118 can refer to steps S35-S36 in the foregoing embodiment, and will not be described herein again.
Next, the train ticket character region dividing device provided by the present application is introduced, and the train ticket character region dividing device described below and the train ticket character region dividing method described above can be referred to correspondingly.
Referring to fig. 15, the train ticket character area dividing apparatus includes: a first acquisition module 11, a first detection module 12, a first segmentation module 13 and a second segmentation module 14.
The first acquisition module 11 is configured to acquire an initial image, where the initial image includes a train ticket image;
the first detection module 12 is configured to detect a two-dimensional code area of the train ticket in the initial image;
a first segmentation module 13, configured to segment, based on a relative position relationship between the train ticket two-dimensional code region and a target character object, an image including the target character object from the initial image, as a first image;
and a second segmentation module 14, configured to detect a bounding box of the character region in the first image, and segment an image included in the bounding box in the first image, where the image is used as a target image.
In this embodiment, the first dividing module 13 may include:
the rotation submodule is used for rotating the initial image, the rotated image is used as a normalized image, and the two-dimensional code area of the train ticket in the normalized image is horizontally placed;
the first calculation sub-module is used for calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the normalized image based on the vertex coordinate of the two-dimensional code area of the train ticket and the relative position relation between the two-dimensional code area of the train ticket and the target character object;
and the first segmentation sub-module is used for segmenting the image containing the target character object from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the target character object.
In this embodiment, the second dividing module 14 may include:
the reverse color processing submodule is used for performing reverse color processing on the first image;
the binarization submodule is used for extracting a white pixel area from the first image after the reverse color processing, and marking white pixel points in the white pixel area as 1 and non-white pixel points as 0 to obtain a binarization image;
the first determining submodule is used for carrying out horizontal direction projection on the binary image to obtain a horizontal histogram;
a second determining submodule, configured to search, starting from the center of the horizontal histogram, toward the two ends for points at which the number of pixel points with a pixel value of 1 is 0, and to take the distance between the found points as the target height;
the third determining submodule is used for carrying out vertical direction projection on the binary image to obtain a vertical histogram;
a fourth determining submodule, configured to search, starting from a first end point of the vertical histogram, toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and take the first point found as a first target point; to search, starting from a second end point of the vertical histogram, toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and take the first point found as a second target point; and to take the distance between the first target point and the second target point as the target width;
and the fifth determining submodule is used for taking a rectangular frame with the width of the target width and the height of the target height as a boundary frame of the character area.
In this embodiment, the binarization sub-module may be specifically configured to:
transforming the first image after the reverse color processing into an image of an HSV domain as a first image to be processed;
calculating a threshold value of a V channel in the first image to be processed;
adjusting a white threshold range according to the threshold value of the V channel, and selecting pixel points with three channel values in the adjusted white threshold range from the first image to be processed as white pixel points;
and taking the area containing the white pixel points in the first image to be processed as a white pixel area.
In this embodiment, the first calculating submodule may be specifically configured to:
calculating the coordinates of the upper left corner and the lower right corner of the train ticket face area in the normalized image based on the vertex coordinates of the train ticket two-dimensional code area;
dividing the train ticket face area from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the train ticket face area;
and calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the train ticket area based on the vertex coordinate of the train ticket two-dimensional code area and the relative position relation between the train ticket two-dimensional code area and the target character object.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The method and the device for segmenting the character region of the train ticket provided by the application are introduced in detail, a specific example is applied in the method to explain the principle and the implementation mode of the application, and the description of the embodiment is only used for helping to understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A train ticket character area segmentation method is characterized by comprising the following steps:
acquiring an initial image, wherein the initial image comprises a train ticket image;
detecting a two-dimensional code area of the train ticket in the initial image;
dividing an image containing a target character object from the initial image as a first image based on the relative position relation between the two-dimensional code area of the train ticket and the target character object; the target character object includes at least: a departure station character object, a train number character object, a departure time character object, a seat number character object, a money amount character object, a seat level character object or an identity card number and name character object;
and detecting a boundary frame of a character area in the first image, and segmenting an image contained in the boundary frame in the first image to be used as a target image.
2. The method according to claim 1, wherein the segmenting the image containing the target character object from the initial image based on the relative position relationship between the train ticket two-dimensional code area and the target character object comprises:
rotating the initial image, taking the rotated image as a normalized image, and horizontally placing the two-dimensional code area of the train ticket in the normalized image;
calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the normalized image based on the vertex coordinate of the two-dimensional code area of the train ticket and the relative position relationship between the two-dimensional code area of the train ticket and the target character object;
and segmenting the image containing the target character object from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the target character object.
3. The method of claim 2, wherein detecting the bounding box of the character region in the first image comprises:
carrying out reverse color processing on the first image;
extracting a white pixel region from the first image after the reverse color processing, and marking a white pixel point in the white pixel region as 1 and a non-white pixel point as 0 to obtain a binary image;
projecting the binary image in the horizontal direction to obtain a horizontal histogram;
starting from the center of the horizontal histogram, searching toward the two ends for points at which the number of pixel points with a pixel value of 1 is 0, and taking the distance between the found points as the target height;
performing vertical direction projection on the binary image to obtain a vertical histogram;
starting from a first end point of the vertical histogram, searching toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and taking the first point found as a first target point; starting from a second end point of the vertical histogram, searching toward the center for a point at which the number of pixel points with a pixel value of 1 is not 0, and taking the first point found as a second target point; and taking the distance between the first target point and the second target point as a target width;
and taking a rectangular box with the width of the target width and the height of the target height as a boundary box of the character area.
4. The method of claim 3, wherein the extracting the white pixel region from the first image after the reverse color processing comprises:
transforming the first image after the reverse color processing into an image of an HSV domain as a first image to be processed;
calculating a threshold value of a V channel in the first image to be processed;
adjusting a white threshold range according to the threshold value of the V channel, and selecting pixel points with three channel values in the adjusted white threshold range from the first image to be processed as white pixel points;
and taking the area containing the white pixel points in the first image to be processed as a white pixel area.
5. The method of claim 2, wherein the calculating the coordinates of the upper left corner and the lower right corner of the target character object in the normalized image based on the vertex coordinates of the train ticket two-dimensional code region and the relative position relationship between the train ticket two-dimensional code region and the target character object comprises:
calculating the upper left corner coordinate and the lower right corner coordinate of the train ticket face area in the normalized image based on the vertex coordinates of the train ticket two-dimensional code area;
segmenting the train ticket face area from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the train ticket face area;
and calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the train ticket face area based on the vertex coordinates of the train ticket two-dimensional code area and the relative position relation between the train ticket two-dimensional code area and the target character object.
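Because the ticket layout is fixed, the "relative position relation" in claim 5 reduces to constant offsets from the QR-code anchor. A minimal sketch, assuming the offsets are expressed as fractions of the QR-code side length (the calibration values themselves are hypothetical and would be measured once from a reference ticket):

```python
def char_coords(qr_top_left, qr_size, rel):
    """Corner coordinates of a character object from the QR-code anchor.

    rel = (dx1, dy1, dx2, dy2): top-left and bottom-right offsets of the
    character object, in units of the QR-code side length.
    """
    qx, qy = qr_top_left
    dx1, dy1, dx2, dy2 = rel
    top_left = (qx + dx1 * qr_size, qy + dy1 * qr_size)
    bottom_right = (qx + dx2 * qr_size, qy + dy2 * qr_size)
    return top_left, bottom_right
```

Scaling by the QR-code side length makes the same offsets work regardless of the scan resolution.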
6. A train ticket character area segmentation device, comprising:
the first acquisition module is used for acquiring an initial image, wherein the initial image comprises a train ticket image;
the first detection module is used for detecting a two-dimensional code area of the train ticket in the initial image;
the first segmentation module is used for segmenting an image containing a target character object from the initial image as a first image based on the relative position relation between the two-dimensional code area of the train ticket and the target character object; the target character object includes at least: a departure station character object, a train number character object, a departure time character object, a seat number character object, a money amount character object, a seat level character object or an identity card number and name character object;
and the second segmentation module is used for detecting a bounding box of the character area in the first image and segmenting the image contained in the bounding box from the first image as a target image.
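The four modules of claim 6 form a short pipeline: detect the QR code, crop the first image around the target character object, then crop again to the detected character bounding box. A skeleton with the detector and bounding-box finder injected as callables (names and the fractional-offset convention are illustrative; a real detector could be e.g. OpenCV's QRCodeDetector):

```python
import numpy as np

def segment_character(image, detect_qr, rel, bbox_fn):
    """End-to-end sketch of the claimed device.

    detect_qr -- callable returning ((x, y) top-left, side length) of
                 the QR code in `image`.
    rel       -- (dx1, dy1, dx2, dy2) offsets of the target character
                 object, in units of the QR-code side length.
    bbox_fn   -- callable returning the character bounding box
                 (x, y, w, h) inside the cropped first image.
    """
    (qx, qy), size = detect_qr(image)
    dx1, dy1, dx2, dy2 = rel
    x1, y1 = int(qx + dx1 * size), int(qy + dy1 * size)
    x2, y2 = int(qx + dx2 * size), int(qy + dy2 * size)
    first = image[y1:y2, x1:x2]        # first image (first segmentation module)
    x, y, w, h = bbox_fn(first)
    return first[y:y + h, x:x + w]     # target image (second segmentation module)
```

Keeping detection and bounding-box search behind callables mirrors the claim's module structure and lets each stage be tested in isolation.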
7. The apparatus of claim 6, wherein the first segmentation module comprises:
the rotation submodule is used for rotating the initial image and taking the rotated image as a normalized image, in which the train ticket two-dimensional code area is placed horizontally;
the first calculation sub-module is used for calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the normalized image based on the vertex coordinate of the two-dimensional code area of the train ticket and the relative position relation between the two-dimensional code area of the train ticket and the target character object;
and the first segmentation sub-module is used for segmenting the image containing the target character object from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the target character object.
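The rotation submodule needs the skew angle of the detected QR code. One common way to obtain it (an illustrative choice, not specified by the claim) is the angle of the code's top edge, computed from two adjacent vertices:

```python
import math

def rotation_angle(qr_corners):
    """Angle (degrees) of the QR code's top edge relative to horizontal.

    qr_corners: detected vertices in order top-left, top-right,
    bottom-right, bottom-left. Rotating the image by the negative of
    this angle (e.g. with scipy.ndimage.rotate) levels the code.
    """
    (x0, y0), (x1, y1) = qr_corners[0], qr_corners[1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```

Since the whole ticket is rigid, levelling the QR code levels the entire ticket face, which is what makes the later fixed-offset cropping valid.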
8. The apparatus of claim 7, wherein the second segmentation module comprises:
the reverse color processing submodule is used for performing reverse color processing on the first image;
the binarization submodule is used for extracting a white pixel area from the first image after the reverse color processing, and marking white pixel points in the white pixel area as 1 and non-white pixel points as 0 to obtain a binarization image;
the first determining submodule is used for carrying out horizontal direction projection on the binary image to obtain a horizontal histogram;
a second determining submodule, configured to search from the center of the horizontal histogram toward both ends for the first point at which the number of pixels with a pixel value of 1 is 0, and to take the distance between the two searched points as a target height;
the third determining submodule is used for carrying out vertical direction projection on the binary image to obtain a vertical histogram;
a fourth determining submodule, configured to search, starting from a first end point of the vertical histogram and moving toward the center, for the first point at which the number of pixels with a pixel value of 1 is not 0 and take it as a first target point; to search, starting from a second end point of the vertical histogram and moving toward the center, for the first point at which the number of pixels with a pixel value of 1 is not 0 and take it as a second target point; and to take the distance between the first target point and the second target point as a target width;
and a fifth determining submodule, used for taking a rectangular frame whose width is the target width and whose height is the target height as the bounding box of the character area.
9. The apparatus according to claim 8, wherein the binarization submodule is specifically configured to:
transforming the first image after the reverse color processing into an image of an HSV domain as a first image to be processed;
calculating a threshold value of a V channel in the first image to be processed;
adjusting a white threshold range according to the threshold value of the V channel, and selecting, from the first image to be processed, pixel points whose three channel values all fall within the adjusted white threshold range as white pixel points;
and taking the area containing the white pixel points in the first image to be processed as a white pixel area.
10. The apparatus of claim 7, wherein the first computation submodule is specifically configured to:
calculating the upper left corner coordinate and the lower right corner coordinate of the train ticket face area in the normalized image based on the vertex coordinates of the train ticket two-dimensional code area;
segmenting the train ticket face area from the normalized image based on the upper left corner coordinate and the lower right corner coordinate of the train ticket face area;
and calculating the upper left corner coordinate and the lower right corner coordinate of the target character object in the train ticket face area based on the vertex coordinates of the train ticket two-dimensional code area and the relative position relation between the train ticket two-dimensional code area and the target character object.
CN201910249443.7A 2019-03-29 2019-03-29 Train ticket character area segmentation method and device Active CN109977959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910249443.7A CN109977959B (en) 2019-03-29 2019-03-29 Train ticket character area segmentation method and device


Publications (2)

Publication Number Publication Date
CN109977959A CN109977959A (en) 2019-07-05
CN109977959B true CN109977959B (en) 2021-07-06

Family

ID=67081654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910249443.7A Active CN109977959B (en) 2019-03-29 2019-03-29 Train ticket character area segmentation method and device

Country Status (1)

Country Link
CN (1) CN109977959B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738522B (en) * 2019-10-15 2022-12-09 卓尔智联(武汉)研究院有限公司 User portrait construction method and device, computer equipment and storage medium
CN110728687B (en) * 2019-10-15 2022-08-02 卓尔智联(武汉)研究院有限公司 File image segmentation method and device, computer equipment and storage medium
CN111414948B (en) * 2020-03-13 2023-10-13 腾讯科技(深圳)有限公司 Target object detection method and related device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408942A (en) * 2008-04-17 2009-04-15 浙江师范大学 Method for locating license plate under a complicated background
CN101599124A (en) * 2008-06-03 2009-12-09 汉王科技股份有限公司 A kind of from video image the method and apparatus of separating character
CN102054182A (en) * 2009-11-06 2011-05-11 山东新北洋信息技术股份有限公司 Ticket manufacturing method and ticket manufacturing device
CN102194275A (en) * 2010-03-15 2011-09-21 党力 Automatic ticket checking method for train tickets
CN102254159A (en) * 2011-07-07 2011-11-23 清华大学深圳研究生院 Interpretation method for digital readout instrument
CN103198315A (en) * 2013-04-17 2013-07-10 南京理工大学 License plate character segmentation algorithm based on character outline and template matching
CN105426887A (en) * 2015-10-30 2016-03-23 北京奇艺世纪科技有限公司 Method and device for text image correction
CN105488797A (en) * 2015-11-25 2016-04-13 安徽创世科技有限公司 License plate location method for HSV space
CN109086751A (en) * 2018-09-27 2018-12-25 珠海格力电器股份有限公司 Recognition methods and device, list filling method and device, storage medium and terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868759B (en) * 2015-01-22 2019-11-05 阿里巴巴集团控股有限公司 The method and device of segmented image character
CN105426818B (en) * 2015-10-30 2019-07-02 小米科技有限责任公司 Method for extracting region and device
CN108108734B (en) * 2016-11-24 2021-09-24 杭州海康威视数字技术股份有限公司 License plate recognition method and device


Also Published As

Publication number Publication date
CN109977959A (en) 2019-07-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant after: STATE GRID CORPORATION OF CHINA

Applicant after: STATE GRID E-COMMERCE Co.,Ltd.

Applicant after: Guowang Xiongan Finance Technology Group Co.,Ltd.

Address before: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant before: STATE GRID CORPORATION OF CHINA

Applicant before: STATE GRID E-COMMERCE Co.,Ltd.

Applicant before: STATE GRID XIONG'AN FINANCIAL TECHNOLOGY Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100031 No. 86 West Chang'an Avenue, Beijing, Xicheng District

Patentee after: STATE GRID CORPORATION OF CHINA

Patentee after: State Grid Digital Technology Holdings Co.,Ltd.

Patentee after: Guowang Xiongan Finance Technology Group Co.,Ltd.

Address before: 100031 No. 86 West Chang'an Avenue, Beijing, Xicheng District

Patentee before: STATE GRID CORPORATION OF CHINA

Patentee before: STATE GRID ELECTRONIC COMMERCE Co.,Ltd.

Patentee before: Guowang Xiongan Finance Technology Group Co.,Ltd.