CN107133647A - A fast handwritten character recognition method - Google Patents

A fast handwritten character recognition method

Info

Publication number
CN107133647A
CN107133647A (application CN201710308717.6A)
Authority
CN
China
Prior art keywords
image
hog
gradient
cell
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710308717.6A
Other languages
Chinese (zh)
Inventor
欧阳建权
胡谦磊
唐欢容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangtan University
Original Assignee
Xiangtan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangtan University filed Critical Xiangtan University
Priority to CN201710308717.6A priority Critical patent/CN107133647A/en
Publication of CN107133647A publication Critical patent/CN107133647A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/469 Contour-based spatial representations, e.g. vector-coding
    • G06V 10/473 Contour-based spatial representations, e.g. vector-coding using gradient analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for fast recognition of handwritten digits, comprising the following steps: extracting HOG features from the training samples; using the HOG features of the training samples as the input of an extreme learning machine and training the extreme learning machine network model; extracting HOG features from the image to be recognized; and recognizing the image with the trained extreme learning machine. By combining HOG feature extraction with extreme learning, the method achieves fast recognition of handwritten characters; it is quick and simple, and places only modest demands on the pixel quality of the image to be recognized.

Description

A fast handwritten character recognition method
Technical field
The present invention relates to a recognition method, and in particular to a handwritten character recognition method; it belongs to the fields of computer vision, image processing, and machine learning.
Background art
Handwritten digit recognition has a wide range of applications, such as postal codes, statistical reports, and bank cheques. In recent years, neural networks, with their powerful learning ability and nonlinear mapping capability, have solved many problems that were traditionally difficult. With the arrival of the big-data era, the rise of cloud computing, improvements in computing power, and the growth of distributed computing, more and more researchers have successfully applied neural network methods to handwritten digit recognition and achieved good results. However, the training time of many neural network methods is long, which makes them unsuitable for real-time handwritten digit recognition. For example, Chinese patent application CN201610346450.5, published on 12 October 2016, discloses a BP neural network handwriting recognition system based on a dynamic sample selection strategy: it selects samples dynamically according to their distance from the decision boundary and optimizes the network weights by gradient descent. Although this method achieves good recognition results, its training time is long, which hinders adoption, and its recognition of blurred images is unreliable. To reduce training time, Professor Guang-Bin Huang of Nanyang Technological University proposed the extreme learning machine (ELM) model in 2004.
The extreme learning machine (full name Extreme Learning Machine, abbreviated ELM) assigns its input weights and hidden-layer biases at random, with no iterative adjustment, so its network training speed far exceeds that of other classifiers. Because it does not use a gradient descent algorithm, it also avoids the local-minimum problem. Compared with deep learning models such as CNNs, the training speed of the extreme learning machine is tens to hundreds of times faster. However, the recognition accuracy of ELM depends on the clarity of the training samples and of the image to be recognized: when their pixel quality is poor, its classification accuracy drops sharply. The histogram of oriented gradients feature extraction method addresses this problem well.
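The training procedure described above — random input weights and biases, a sigmoid hidden layer, and output weights solved in closed form via the Moore-Penrose pseudoinverse — can be sketched in NumPy. This is a minimal illustration, not the patent's implementation; the function names and dimensions are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=800, seed=0):
    """Train a basic extreme learning machine (ELM).

    Input weights W and biases b are drawn at random and never updated;
    only the output weights beta are solved for, via the Moore-Penrose
    pseudoinverse. This closed-form solve is what makes ELM training fast."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ Y             # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Hidden-layer activations followed by the learned linear readout."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

With one-hot label rows in Y, the predicted class of a sample is the argmax of the corresponding output row.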
The histogram of oriented gradients (Histogram of Oriented Gradient, abbreviated HOG) feature is a feature descriptor used for object detection in computer vision and image processing. It forms features by computing and accumulating histograms of gradient orientations over local regions of an image, and it remains largely invariant to geometric and photometric deformations of the image.
Summary of the invention
In view of the long training time of the prior-art BP neural network handwriting recognition system based on a dynamic sample selection strategy, and the excessive dependence of the extreme learning machine's recognition accuracy on the clarity of the training samples and of the image to be recognized, and in order to recognize handwritten images quickly, the present invention combines the advantages of the HOG feature extraction algorithm and the extreme learning machine and proposes a fast handwritten character recognition method. The proposed method strikes a balance between training time and recognition accuracy, and maintains a high recognition rate even when the captured image is unclear or blurred.
According to an embodiment of the present invention, a fast handwritten character recognition method is provided.
A fast handwritten character recognition method, comprising the following steps:
1) Perform HOG feature processing on the image samples: apply HOG processing to the image samples, collect the HOG features, and combine them into a HOG feature matrix;
2) Set the parameters of the extreme learning machine network model: use the HOG feature matrix of the image samples as the network input, set the activation function and the number of hidden nodes of the extreme learning machine network model, and train the network;
3) Read the image of handwriting to be recognized and convert it to a grayscale image;
4) Recognize the image: perform HOG feature processing on the grayscale image to be recognized to obtain its HOG features, feed the HOG features of the image to be recognized into the extreme learning machine as input, and perform image recognition.
In the present invention, step 1) is specifically:
1.1) Standardize the whole image: standardize the color space of the image samples using gamma correction to obtain standardized images;
1.2) Compute the image gradient: compute the gradients of each pixel in the normalized image samples along the abscissa and ordinate directions, and from them compute the gradient direction value of each pixel position;
1.3) Partition the original image: divide the original image into n cells, each of size m*m; group a*a adjacent cells into one block; adjacent blocks overlap; build the feature vector of each cell; where n ≥ 4, n > m ≥ 3, m > a ≥ 2;
1.4) Build a gradient orientation histogram for each cell: use a histogram of b bins to accumulate the gradient information of the image sample pixels, i.e. divide the 360-degree gradient direction range of a cell into b direction blocks; each pixel in the cell casts a weighted vote into the histogram according to its gradient direction, with the gradient magnitude as the vote weight, giving the cell's gradient orientation histogram; where b ≥ 2;
1.5) Build the gradient orientation histogram of each block: concatenate the feature vectors of all cells in a block to obtain the block's HOG features;
1.6) Normalize the block HOG features: apply contrast normalization to the block HOG features to obtain normalized block HOG features;
1.7) Collect the HOG features: collect the normalized HOG features of all blocks and combine them into the HOG feature matrix.
Preferably, the pixel range is generally 20-100; n is in the range 16-64, m in 4-28, and a in 2-8.
In the present invention, the gamma compression formula of the gamma correction in step 1.1) is: I(x, y) = I(x, y)^gamma, where gamma is 0.4-0.5, preferably 0.42-0.48; gamma is set to a fixed value.
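The gamma compression step can be sketched in one line of NumPy; the function name and the assumption that pixel values are normalized to [0, 1] are mine, not the patent's.

```python
import numpy as np

def gamma_compress(img, gamma=0.45):
    """Gamma compression I(x, y) = I(x, y)^gamma.

    With gamma < 1 this lifts dark regions and compresses bright ones,
    reducing the influence of local shading and illumination changes.
    Pixel values are assumed normalized to [0, 1]."""
    return np.power(img.astype(float), gamma)
```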
In the present invention, the color space of the image samples is standardized by converting the images to grayscale.
In the present invention, step 1.2) computes the image gradient; for a pixel (x, y) in the image the computation is:
Gx(x, y) = H(x+1, y) - H(x-1, y),
Gy(x, y) = H(x, y+1) - H(x, y-1),
where Gx(x, y), Gy(x, y), and H(x, y) denote the horizontal gradient, the vertical gradient, and the pixel value at pixel (x, y) of the input image, respectively; the gradient magnitude and gradient direction at pixel (x, y) are, respectively:
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2),
α(x, y) = tan⁻¹(Gy(x, y) / Gx(x, y)).
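The central-difference gradient formulas above can be sketched as follows; leaving border pixels at 0 is my simplification, not specified by the patent.

```python
import numpy as np

def image_gradients(H):
    """Central-difference gradients:
        Gx(x, y) = H(x+1, y) - H(x-1, y)   (horizontal)
        Gy(x, y) = H(x, y+1) - H(x, y-1)   (vertical)
    plus gradient magnitude G = sqrt(Gx^2 + Gy^2) and
    direction alpha = arctan(Gy / Gx). Border pixels are left at 0."""
    H = H.astype(float)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]   # difference along columns (x)
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]   # difference along rows (y)
    mag = np.sqrt(Gx**2 + Gy**2)
    direction = np.degrees(np.arctan2(Gy, Gx))
    return Gx, Gy, mag, direction
```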
In the present invention, in step 1.3) adjacent blocks have a half-region overlap.
In the present invention, the weighted projection in step 1.4) maps each pixel, according to its gradient direction, to a fixed angular range. Preferably, the angular range is 0-360 degrees.
In the present invention, the normalization in step 1.6) is specifically:
Let V be the un-normalized HOG feature vector of a block. The normalization factor is set to 1 / sqrt(||v||_2^2 + ε^2), where ε is a small constant that prevents the divisor from being 0; values between 1e-3 and 5e-2 work best. v is the feature vector of the image and ||v||_2 is its second-order (L2) norm.
The value of V_i is taken as follows:
If V_i > 0.2, the value is truncated to 0.2 and the normalization factor is recomputed.
After normalization, better robustness to illumination changes and shadows is obtained.
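The normalize-truncate-renormalize procedure above can be sketched as follows; the function name and default ε are illustrative (ε must merely lie in the stated 1e-3 to 5e-2 range).

```python
import numpy as np

def normalize_block(v, eps=0.02, clip=0.2):
    """L2 contrast normalization of one block's HOG vector:
        v <- v / sqrt(||v||_2^2 + eps^2)
    Components larger than `clip` (0.2) are truncated, then the
    normalization factor is recomputed, as the description specifies."""
    v = np.asarray(v, dtype=float)
    v = v / np.sqrt(np.sum(v**2) + eps**2)
    v = np.minimum(v, clip)                       # truncate dominant gradients
    return v / np.sqrt(np.sum(v**2) + eps**2)     # renormalize
```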
In the present invention, the grayscale conversion of the image (converting the image to a grayscale map) uses the rgb2gray function. Preferably, if the pixel at point I is I(R, G, B), then the 256-level grayscale conversion formula at I is: I(x) = 0.29900 × R + 0.58700 × G + 0.11400 × B.
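The weighted-sum conversion formula above is straightforward to sketch; the function name is mine, and this is a plain reimplementation of the stated formula rather than a call to MATLAB's rgb2gray.

```python
import numpy as np

def rgb_to_gray(rgb):
    """256-level grayscale conversion:
        I = 0.29900*R + 0.58700*G + 0.11400*B
    `rgb` is an (..., 3) array of R, G, B values."""
    rgb = np.asarray(rgb, dtype=float)
    return 0.29900 * rgb[..., 0] + 0.58700 * rgb[..., 1] + 0.11400 * rgb[..., 2]
```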
In the present invention, the activation function is sigmoid.
In the present invention, the number of hidden nodes is 600-1000, preferably 700-900, for example 800.
In the present invention, the image samples come from the MNIST data set (http://yann.lecun.com/exdb/mnist/) and the USPS data set (http://www-i6.informatik.rwth-aachen.de/~keysers/usps.html).
In the present invention, in order to reduce the influence of illumination, the whole image must first be standardized. In the texture intensity of an image, local surface exposure contributes a large proportion, so this compression step can effectively reduce local shadows and illumination variation. Because color information contributes little, the image is usually first converted to grayscale.
In the present invention, the blocks overlap one another, which means that the features of each cell appear multiple times, in different results, in the final feature vector. We call the normalized block descriptor (vector) the HOG descriptor. Adjacent blocks have a half-region overlap (feature extraction works best when the overlapping region is one half).
Compared with the prior art, the present invention has the following beneficial effects:
1. Recognition is fast and requires no powerful hardware as support, making the method particularly suitable for mobile devices such as touch-screen phones and iPads.
2. At comparable accuracy, the training time of the present invention is far shorter than that of other methods, which substantially reduces the time complexity of the system.
3. The method makes few demands on the clarity and shooting angle of the image to be recognized, and is suitable for image recognition under dim or otherwise difficult shooting conditions.
Brief description of the drawings
Fig. 1 is a flow chart of a specific embodiment of the invention;
Fig. 2 is a schematic diagram of the gradient-direction partitioning of the HOG features used in the invention;
Fig. 3 is a schematic diagram of the block partitioning of the HOG features used in the invention, illustrated on part of an image (the first 14 rows).
Detailed description of the embodiments
To help those skilled in the art better understand the technical solution of the application, the technical solutions in the embodiments of the application are described clearly and completely below with reference to the accompanying drawings. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
Embodiment 1
A fast handwritten character recognition method, comprising the following steps:
A. Perform HOG feature processing on the samples:
A1. Standardize the color space of the image samples using gamma correction, with the gamma compression formula:
I(x, y) = I(x, y)^gamma, where gamma = 0.45;
A2. Compute the image gradients: compute the gradients along the abscissa and ordinate directions of the image, and from them compute the gradient direction value of each pixel position. For a pixel (x, y) in the image:
Gx(x, y) = H(x+1, y) - H(x-1, y),
Gy(x, y) = H(x, y+1) - H(x, y-1),
where Gx(x, y), Gy(x, y), and H(x, y) denote the horizontal gradient, the vertical gradient, and the pixel value at pixel (x, y) of the input image, respectively. The gradient magnitude and gradient direction at pixel (x, y) are, respectively:
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2),
α(x, y) = tan⁻¹(Gy(x, y) / Gx(x, y)).
A3. Partition the original image (28*28 pixels). We divide the original image into 16 cells, each of size 7*7, and group 2*2 adjacent cells into one block. Each block has overlap 0.5 (one half), i.e. adjacent blocks share half their region, so there are 9 blocks. Concatenating the feature vectors of all cells within a block then yields the HOG features of that block.
A4. Build a gradient orientation histogram for each cell: after dividing the image into 16 cells, we use a 9-bin histogram to accumulate the gradient information of the 28*28 pixels, i.e. the 360-degree gradient direction range of a cell is divided into 9 direction blocks. Each pixel in a cell casts a weighted vote into the histogram according to its gradient direction, giving the cell's gradient orientation histogram: each cell corresponds to a 9-dimensional feature vector (one dimension per bin). The gradient magnitude serves as the vote weight. For example, if a pixel's gradient direction falls in the 20-40 degree range and its gradient magnitude is 0.8, then the count of the histogram's 2nd bin is incremented by 0.8.
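The weighted vote can be sketched as below. Note a discrepancy in the text: the worked example (direction 20-40° → 2nd bin) implies 20°-wide bins, i.e. 9 bins over 180° as in the common HOG convention, whereas the text elsewhere divides 360° into 9 bins of 40°. The sketch therefore takes the angular range as a parameter; the function name is illustrative.

```python
import numpy as np

def vote(hist, direction_deg, magnitude, bins=9, full_range=360.0):
    """Weighted projection of one pixel into an orientation histogram:
    the bin is chosen by the gradient direction, and the vote weight
    is the gradient magnitude."""
    width = full_range / bins                         # width of one direction block
    idx = int((direction_deg % full_range) // width)
    hist[idx] += magnitude
    return hist

# With 9 bins over 180 degrees (20-degree bins), a pixel with direction
# 30 degrees and magnitude 0.8 votes into the 2nd bin, matching the
# worked example in the text:
h = vote(np.zeros(9), direction_deg=30.0, magnitude=0.8, full_range=180.0)
```

With the text's 360°/9 division (40-degree bins), the same pixel would instead land in the first bin.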
A5. Build the block gradient orientation histograms: every 2*2 cells form one block, and concatenating the feature vectors of all cells in a block yields the HOG features of that block.
A6. Apply contrast normalization to these local histograms over each image block. Let V be the un-normalized HOG feature vector of a block; the normalization factor is set to 1 / sqrt(||v||_2^2 + ε^2) (ε is a small constant that prevents the divisor from being 0; values between 1e-3 and 5e-2 work best). The value of V_i is taken as follows:
If V_i > 0.2, the value is truncated to 0.2 and the normalization factor is recomputed.
After normalization, better robustness to illumination changes and shadows is obtained.
A7. Collect the HOG features: collect the HOG features of all blocks in the detection window and combine them into the final feature vector used for classification.
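Steps A1-A7 with the embodiment's parameters (28*28 image, 7*7 cells, 2*2-cell blocks with half overlap, 9 bins) can be combined into one sketch. This is a minimal NumPy illustration under my own assumptions (pixel values in [0, 1], bins spanning 360° as the text states, borders left at zero gradient), not the patent's implementation.

```python
import numpy as np

def hog_features(img, cell=7, bins=9, gamma=0.45, eps=0.02, clip=0.2):
    """HOG features of a square grayscale image, values in [0, 1]:
    gamma compression, central-difference gradients, per-cell orientation
    histograms over 0-360 degrees, 2x2-cell blocks stepped by one cell
    (half overlap), and L2 normalization with clipping at 0.2."""
    img = img.astype(float) ** gamma                      # A1: gamma compression
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]                # A2: horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]                #     vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0          #     direction in [0, 360)
    n = img.shape[0] // cell                              # cells per side (4 for 28x28)
    hist = np.zeros((n, n, bins))
    for i in range(n):                                    # A3-A4: per-cell histograms
        for j in range(n):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a * bins / 360.0).astype(int) % bins   # bin index per pixel
            np.add.at(hist[i, j], idx.ravel(), m.ravel()) # magnitude-weighted votes
    feats = []
    for i in range(n - 1):                                # A5: 2x2-cell blocks,
        for j in range(n - 1):                            #     step 1 cell = half overlap
            v = hist[i:i+2, j:j+2].ravel()
            v = v / np.sqrt(np.sum(v**2) + eps**2)        # A6: L2 normalize
            v = np.minimum(v, clip)                       #     truncate at 0.2
            v = v / np.sqrt(np.sum(v**2) + eps**2)        #     renormalize
            feats.append(v)
    return np.concatenate(feats)                          # A7: final feature vector
```

For a 28*28 image this gives 9 blocks of 4 cells x 9 bins, i.e. a 324-dimensional feature vector, consistent with the dimension count in Embodiment 2.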
B. Set the extreme learning machine network model parameters, including the activation function and the number of hidden nodes, and train the network;
C. Read the picture and convert it to grayscale. If the pixel at point I is I(R, G, B), then the 256-level grayscale conversion formula at I is:
I(x) = 0.29900 × R + 0.58700 × G + 0.11400 × B;
D. Recognize the image, including:
D1. Perform HOG feature processing on the image to be recognized according to step A;
D2. Use the HOG features of the image to be recognized as the input of the extreme learning machine and perform image recognition.
Embodiment 2
Step A: train the extreme learning machine network on the server side:
A1. Taking the MNIST training set as an example (no gamma or color-space standardization is needed), perform HOG feature extraction on the training samples and assemble their HOG feature matrix. The parameters of the HOG feature extraction are chosen as follows:
The number of bins is 9, i.e. the 360-degree gradient direction range of a cell is divided into 9 direction blocks, and a 9-bin histogram accumulates the pixel gradient information; each cell is of size 7*7; every 4 cells (2*2) form one block; adjacent blocks overlap by half a region (overlap = 0.5), so there are 9 blocks. Each block contains 4 cells, each cell has 9 bins, and the pixel gradients are accumulated per bin, so each cell corresponds to a 9-dimensional feature vector. It follows that after HOG feature extraction the data are reduced from 28*28 = 784 dimensions to 4 (cells) * 9 (bins) * 9 (blocks) = 324 dimensions;
A2. Use the HOG feature matrix as the network input of the extreme learning machine and train the extreme learning machine network, with the number of hidden nodes set to 800 and the activation function set to sigmoid;
Step B: the client acquires the image to be recognized, for example by photographing or scanning;
Step C: recognition of the image to be recognized:
C1. Convert the input image to grayscale, using MATLAB's built-in rgb2gray function;
C2. Perform HOG feature extraction on the image to be recognized;
C3. Use the HOG features of the image to be recognized as the input of the extreme learning machine; the output is the classification category of the image.

Claims (9)

1. A fast handwritten character recognition method, comprising the following steps:
1) performing HOG feature processing on image samples: applying HOG processing to the image samples, collecting the HOG features, and combining them into a HOG feature matrix;
2) setting the parameters of an extreme learning machine network model: using the HOG feature matrix of the image samples as the network input, setting the activation function and the number of hidden nodes of the extreme learning machine network model, and training the network;
3) reading an image of handwriting to be recognized and converting it to a grayscale image;
4) recognizing the image: performing HOG feature processing on the grayscale image to be recognized to obtain its HOG features, feeding the HOG features of the image to be recognized into the extreme learning machine as input, and performing image recognition.
2. The method according to claim 1, characterized in that step 1) is specifically:
1.1) standardizing the whole image: standardizing the color space of the image samples using gamma correction to obtain standardized images;
1.2) computing the image gradient: computing the gradients of each pixel in the normalized image samples along the abscissa and ordinate directions, and from them computing the gradient direction value of each pixel position;
1.3) partitioning the original image: dividing the original image into n cells, each of size m*m; grouping a*a adjacent cells into one block; adjacent blocks overlap; building the feature vector of each cell; where n ≥ 4, n > m ≥ 3, m > a ≥ 2;
1.4) building a gradient orientation histogram for each cell: using a histogram of b bins to accumulate the gradient information of the image sample pixels, i.e. dividing the 360-degree gradient direction range of a cell into b direction blocks; each pixel in the cell casts a weighted vote into the histogram according to its gradient direction, with the gradient magnitude as the vote weight, giving the cell's gradient orientation histogram; where b ≥ 2;
1.5) building the gradient orientation histogram of each block: concatenating the feature vectors of all cells in a block to obtain the block's HOG features;
1.6) normalizing the block HOG features: applying contrast normalization to the block HOG features to obtain normalized block HOG features;
1.7) collecting the HOG features: collecting the normalized HOG features of all blocks and combining them into the HOG feature matrix.
3. The method according to claim 2, characterized in that: the gamma compression formula of the gamma correction in step 1.1) is I(x, y) = I(x, y)^gamma, where gamma is 0.4-0.5, preferably 0.42-0.48; and/or
the color space of the image samples is standardized by converting the images to grayscale.
4. The method according to claim 2, characterized in that step 1.2) computes the image gradient, and for a pixel (x, y) in the image the computation is:
Gx(x, y) = H(x+1, y) - H(x-1, y),
Gy(x, y) = H(x, y+1) - H(x, y-1),
where Gx(x, y), Gy(x, y), and H(x, y) denote the horizontal gradient, the vertical gradient, and the pixel value at pixel (x, y) of the input image, respectively; the gradient magnitude and gradient direction at pixel (x, y) are, respectively:
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2),
α(x, y) = tan⁻¹(Gy(x, y) / Gx(x, y)).
5. The method according to claim 2, characterized in that: in step 1.3), adjacent connected blocks overlap by one half of their area; and/or
in step 1.4), the weighted projection maps the gradient direction of each pixel into a fixed angular range (orientation bin).
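The weighted projection of claim 5 is the usual HOG orientation voting: each pixel contributes its gradient magnitude to the bin covering its gradient direction. A minimal sketch; the unsigned 0-180° convention and 9 bins are conventional HOG choices assumed here, not taken from the claim:

```python
import numpy as np

def orientation_histogram(mag, ang, n_bins=9):
    """Vote each pixel's gradient magnitude into a fixed angular bin
    selected by its gradient direction (magnitude-weighted projection)."""
    ang_deg = np.rad2deg(ang) % 180.0                # unsigned orientation
    bin_idx = np.minimum((ang_deg / (180.0 / n_bins)).astype(int),
                         n_bins - 1)                 # guard the 180° edge
    hist = np.zeros(n_bins)
    np.add.at(hist, bin_idx.ravel(), mag.ravel())    # accumulate votes
    return hist

# Four unit-magnitude pixels, all pointing at 0°, land in bin 0.
hist = orientation_histogram(np.ones((2, 2)), np.zeros((2, 2)))
```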
6. The method according to claim 2, characterized in that: the normalization of step 1.6) is specifically:
Let v be the un-normalized HOG feature vector of a connected block; the normalization factor is set to √(‖v‖₂² + ε²), by which v is divided, where ε is a very small regularization constant that prevents division by zero, with a value ranging between 1e-3 and 5e-2; v is the feature vector of the image and ‖v‖₂ denotes its second-order (L2) norm.
The components vᵢ are then set as follows:
if vᵢ > 0.2, the value is truncated to 0.2 and the normalization factor is recomputed.
After normalization, better robustness to illumination variation and shadow is obtained.
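Claim 6 describes the L2-Hys scheme: normalize, clip at 0.2, renormalize. A sketch under the assumption that the normalization divides by √(‖v‖₂² + ε²), which is reconstructed from the claim's description (second-order norm plus a small ε to avoid division by zero):

```python
import numpy as np

def l2_hys_normalize(v, eps=1e-2, clip=0.2):
    """Contrast normalization in the style of claim 6: divide by
    sqrt(||v||_2^2 + eps^2), truncate components above 0.2, then
    recompute the normalization factor and divide again."""
    v = np.asarray(v, dtype=np.float64)
    v = v / np.sqrt(np.sum(v**2) + eps**2)   # first normalization
    v = np.minimum(v, clip)                  # truncate at 0.2
    v = v / np.sqrt(np.sum(v**2) + eps**2)   # recomputed factor
    return v

out = l2_hys_normalize(np.array([3.0, 4.0]))
```

After clipping, both components are equal, so the renormalized vector is close to (1/√2, 1/√2), damping the influence of any single dominant gradient.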
7. The method according to any one of claims 1-6, characterized in that: the image grayscale conversion uses the rgb2gray function; preferably, if the pixel at point I is I(R, G, B), then the 256-level grayscale conversion formula at I is: I(x) = 0.29900 × R + 0.58700 × G + 0.11400 × B.
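The weighted-sum formula of claim 7 (the same luminance weights used by MATLAB's rgb2gray) can be applied directly to an (..., 3) channel array:

```python
import numpy as np

def rgb2gray_256(rgb):
    """256-level grayscale conversion of claim 7:
    I = 0.29900*R + 0.58700*G + 0.11400*B.
    rgb is an array whose last axis holds the R, G, B channel values."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return 0.29900 * rgb[..., 0] + 0.58700 * rgb[..., 1] + 0.11400 * rgb[..., 2]

# The weights sum to 1, so pure white maps to full-scale gray.
white = rgb2gray_256(np.array([255.0, 255.0, 255.0]))
```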
8. The method according to claim 1, characterized in that: the activation function is sigmoid; and/or
the number of hidden-layer nodes is 600-1000, preferably 700-900.
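Claim 8's sigmoid activation over a single hidden layer can be sketched in the extreme-learning-machine style the patent's non-patent citation points to. The random input weights and biases follow the standard ELM recipe and are an assumption, not taken from the patent text; n_hidden = 800 is simply a value inside the preferred 700-900 range:

```python
import numpy as np

def elm_hidden_output(X, n_hidden=800, seed=0):
    """Hidden-layer output H of a single-hidden-layer network with
    sigmoid activation: H = sigmoid(X @ W + b), with W and b drawn
    at random and left untrained (ELM-style sketch)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid activation

H = elm_hidden_output(np.zeros((4, 10)), n_hidden=8)
```

In a full ELM, only the output weights mapping H to the class labels are solved for (by least squares), which is what makes training fast.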
9. The method according to claim 1, characterized in that: the image samples come from the MNIST data set and the USPS data set.
CN201710308717.6A 2017-05-04 2017-05-04 A kind of quick Manuscripted Characters Identification Method Pending CN107133647A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710308717.6A CN107133647A (en) 2017-05-04 2017-05-04 A kind of quick Manuscripted Characters Identification Method


Publications (1)

Publication Number Publication Date
CN107133647A true CN107133647A (en) 2017-09-05

Family

ID=59716574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710308717.6A Pending CN107133647A (en) 2017-05-04 2017-05-04 A kind of quick Manuscripted Characters Identification Method

Country Status (1)

Country Link
CN (1) CN107133647A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052929A (en) * 2017-12-29 2018-05-18 湖南乐泊科技有限公司 Parking space state detection method, system, readable storage medium storing program for executing and computer equipment
CN108182442A (en) * 2017-12-29 2018-06-19 惠州华阳通用电子有限公司 A kind of image characteristic extracting method
CN109299663A (en) * 2018-08-27 2019-02-01 刘梅英 Hand-written script recognition methods, system and terminal device
CN109446873A (en) * 2018-08-27 2019-03-08 刘梅英 Hand-written script recognition methods, system and terminal device
CN111652186A (en) * 2020-06-23 2020-09-11 勇鸿(重庆)信息科技有限公司 Video category identification method and related device
WO2021237517A1 (en) * 2020-05-27 2021-12-02 京东方科技集团股份有限公司 Handwriting recognition method and apparatus, and electronic device and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104992165A (en) * 2015-07-24 2015-10-21 天津大学 Extreme learning machine based traffic sign recognition method
CN105678278A (en) * 2016-02-01 2016-06-15 国家电网公司 Scene recognition method based on single-hidden-layer neural network


Non-Patent Citations (3)

Title
JIANQUAN OUYANG, QIANLEI HU: "Combining Extreme Learning Machine, RF and HOG for Feature Extraction", Multimedia Big Data *
XIANG Zheng et al.: "Performance study of HOG in face recognition", Computer Engineering *
XU Guangzhu, LEI Bangjun: "Principles and Applications of Practical Object Detection and Tracking Algorithms", 30 April 2015, National Defense Industry Press *


Similar Documents

Publication Publication Date Title
CN107133647A (en) A kind of quick Manuscripted Characters Identification Method
CN109165623B (en) Rice disease spot detection method and system based on deep learning
CN111461134A (en) Low-resolution license plate recognition method based on generation countermeasure network
US8750573B2 (en) Hand gesture detection
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN106650740B (en) A kind of licence plate recognition method and terminal
CN110991435A (en) Express waybill key information positioning method and device based on deep learning
CN111985621A (en) Method for building neural network model for real-time detection of mask wearing and implementation system
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN103345631B (en) Image characteristics extraction, training, detection method and module, device, system
CN111428625A (en) Traffic scene target detection method and system based on deep learning
CN103824373B (en) A kind of bill images amount of money sorting technique and system
CN103065163B (en) A kind of fast target based on static images detects recognition system and method
CN108446616A (en) Method for extracting roads based on full convolutional neural networks integrated study
CN112149533A (en) Target detection method based on improved SSD model
CN110728307A (en) Method for realizing small sample character recognition of X-ray image by self-generating data set and label
CN104143091A (en) Single-sample face recognition method based on improved mLBP
CN113688821A (en) OCR character recognition method based on deep learning
CN113095316B (en) Image rotation target detection method based on multilevel fusion and angular point offset
CN116363535A (en) Ship detection method in unmanned aerial vehicle aerial image based on convolutional neural network
CN111597875A (en) Traffic sign identification method, device, equipment and storage medium
CN106548195A (en) A kind of object detection method based on modified model HOG ULBP feature operators
CN112924037A (en) Infrared body temperature detection system and detection method based on image registration
CN116543308A (en) Landslide detection early warning model and early warning method based on multi-model fusion
CN115690934A (en) Master and student attendance card punching method and device based on batch face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170905