CN110263784A - Intelligent English exam paper score recognition and input method - Google Patents
Intelligent English exam paper score recognition and input method
- Publication number
- CN110263784A (application CN201910510865.5A)
- Authority
- CN
- China
- Prior art keywords
- exam paper
- score
- image
- sub-region
- straight line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Strategic Management (AREA)
- Tourism & Hospitality (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Human Resources & Organizations (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Primary Health Care (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Economics (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the field of artificial-intelligence teaching technology and relates to an intelligent English exam paper score recognition and input method, comprising the following steps: 1) accurately extract the candidate region containing the exam paper score from the collected paper image; 2) extract several straight lines from the candidate region; these lines intersect to form rectangular frames that constitute the score region on the paper; 3) establish the accurate mapping relationship between the acquisition device and the paper image; 4) within the candidate region, according to the image registration result, automatically segment the handwritten digits inside the score outer contour, extracting several sub-regions that each contain only a handwritten digit; 5) from the extracted sub-regions, accurately recognize the handwritten digits and, after comprehensive analysis, enter them into the computer in a specific order. The device of the present invention is simple; it can improve input efficiency, lighten teachers' burden, and reduce manual entry errors, saving time and effort.
Description
Technical field
The invention belongs to the field of artificial-intelligence teaching technology and relates to a score recognition and input method, in particular to an intelligent English exam paper score recognition and input method.
Background technique
With the development of computer and information technology, artificial intelligence has been applied to every aspect of life, and computer vision based on image processing and pattern recognition also has a wide range of practical applications, such as face recognition, pedestrian detection and autonomous driving. In artificial-intelligence teaching, automatic recognition and entry of exam paper scores is likewise an important application.
In general, after exam papers are graded, the scores need to be entered into a computer so that they can be saved, counted and analyzed. At present, score entry is done either manually or automatically. Manual entry requires teachers to complete the score-recording and score-totaling tasks, usually within a relatively short time; the workload is heavy, the task is tedious, and errors easily occur.
Existing automatic score-entry methods are mostly based on basic image-processing algorithms, such as image filtering and threshold segmentation, or use OCR technology. These techniques require a high-precision digital scanner or a dedicated imaging platform to scan the papers bearing the teachers' handwritten scores into electronic documents before handwritten-digit recognition can be performed; they therefore place high demands on the imaging environment and scanning equipment, and also consume manpower, materials and space. Moreover, exam papers often need to be bound into booklets, which makes scanning harder and the scanned images irregular. In addition, papers are now usually graded cooperatively in an assembly-line fashion, with each teacher responsible for only one or a few questions, and each teacher has a different writing style: digits may be small, large, or outside the designated grid; written with a fountain pen, ballpoint pen or gel pen; and in different ink colors such as red, blue or black. All of this poses a great challenge to image-processing methods. Designing a cheap, simple and efficient score-entry method is therefore particularly important.
Summary of the invention
In order to solve the technical problems in the background art, the present invention provides a simple, efficient, time- and labor-saving intelligent English exam paper score recognition and input method that lightens teachers' burden and reduces manual entry errors.
In order to achieve the above purpose, the technical solution adopted by the present invention is as follows:
An intelligent English exam paper score recognition and input method, characterized in that it comprises the following steps:
1) accurately extract the candidate region containing the exam paper score from the collected paper image;
2) extract several straight lines from the candidate region; these lines intersect to form rectangular frames that constitute the score region on the paper;
3) establish the accurate mapping relationship between the acquisition device and the paper image;
4) within the candidate region, according to the image registration result, automatically segment the handwritten digits inside the score outer contour, extracting several sub-regions that each contain only a handwritten digit;
5) from the extracted sub-regions, accurately recognize the handwritten digits and, after comprehensive analysis, enter them into the computer in a specific order, completing the score-entry work.
The specific implementation of step 1) above is:
1.1) filter the collected paper image;
1.2) perform a histogram-equalization operation on the filtered image;
1.3) perform a block-thresholding operation on the equalized image to obtain an accurate binary map;
1.4) perform morphology and connected-component labeling operations on the binary map, compute a ranking of the candidate regions, and select the highest-ranked one as the candidate region for score recognition.
The block-thresholding operation divides the image into sub-blocks of unequal size, independently performs automatic thresholding in each sub-block, and finally merges the per-block results into the final binary map.
The specific implementation of step 2) above is:
2.1) record the coordinate positions of all non-zero pixels in the candidate region; each coordinate position corresponds to a straight line in the parameter space of lines; in that parameter space, jointly solve the system of line-parameter equations corresponding to all coordinate positions to obtain the parameter representations of the candidate lines;
2.2) cluster the parameter representations of all candidate lines, extract from the clustering result all line parameters that make up the score region on the paper, and determine the outer-contour lines forming the outer rectangular frame and the inner-edge lines forming the inner crossing rectangular frames.
The specific implementation of step 3) above is:
3.1) obtain the outer-contour lines of the outer rectangular frame in the paper image by line detection, match them against the outer contour in the paper template, and establish the correspondence between them, from which the positional relationship and angular relationship between the acquisition device and the paper can be computed; the positional relationship is expressed as a translation vector and the angular relationship as Euler angles;
3.2) verify the computed positional and angular relationships; the verification criteria comprise algorithmic criteria and engineering criteria, and the result is confirmed as accurate if and only if it satisfies both.
The specific implementation of step 4) above is:
4.1) according to the image registration result, map the paper template onto the collected paper image to determine the initial score sub-regions, and record the outer-contour coordinates of each sub-region;
4.2) expand each initial score sub-region to the right and downward, then apply image background compensation and image filtering in sequence within the expanded sub-region to obtain the final sub-region.
The image background compensation resets, within the current sub-region, the pixel values at the coordinate positions of the initial sub-region's outer contour; the reset value equals the current value minus a specified offset.
The image filtering uses a mean filter.
The specific implementation of step 5) above is:
5.1) perform handwritten-digit recognition on the final sub-regions in parallel;
5.2) for each sub-region, compute its response with a pre-trained machine-learning model, predict the value of the handwritten digit in that region from the response, and output the prediction;
5.3) continuously acquire multiple frames of the paper image and jointly analyze the predictions for the same sub-region across frames; if the current prediction agrees with several previous predictions, output it as the final result.
The recognition and input system comprises a computer and an acquisition device connected to the computer; the acquisition device is a camera.
The invention has the following advantages:
1. The present invention provides a recognition and input system for implementing the intelligent English exam paper score recognition and input method, characterized in that it comprises a computer and a camera connected to the computer. The equipment requirements are simple: no high-precision scanner is needed, and an ordinary camera suffices.
2. The present invention obtains the outer-contour lines of the outer rectangular frame in the paper image by line detection, matches them against the outer contour in the paper template, and establishes their correspondence, from which the positional and angular relationships between camera and paper are computed; the positional relationship is expressed as a translation vector and the angular relationship as Euler angles. The computed relationships are then verified: the verification criteria comprise algorithmic criteria and engineering criteria, and the result is confirmed as accurate only when both are satisfied. The paper need not be captured head-on; score recognition can be completed at different imaging distances and angles, so operation is flexible.
3. The present invention recognizes the handwritten digits of the final sub-regions in parallel; for each sub-region it computes the response of a pre-trained machine-learning model, predicts the digit value from that response, and outputs the prediction; it then jointly analyzes the predictions for the same sub-region across consecutive frames, and outputs the current prediction as the final result only if it agrees with several previous predictions. The method is thus robust to different handwriting, writing habits and ink colors, and has a wide range of applications.
4. The present invention determines the paper's position and rotation angle through image-registration computation, so recognition works correctly even if the paper is not laid flat or is placed at an angle. No dedicated scanning platform is needed, the paper need not be flattened or aligned with the camera, and both bound and unbound papers can be processed, which is convenient for users.
Detailed description of the invention
Fig. 1 is a schematic diagram of the device connections of the intelligent English exam paper score recognition and input method provided by the invention;
Fig. 2 is a schematic diagram of the coordinate-system definitions in the invention;
Fig. 3 is a schematic diagram of the intelligent English exam paper score recognition and input method provided by the invention.
Specific embodiment
The present invention is now described in detail with reference to the accompanying drawings.
As shown in Fig. 1, the input device used by the present invention comprises a computer and a camera. In implementation, the camera is fixed in a suitable position and aimed at the paper, and the computer is connected to the camera by a data cable so that the collected paper images are transmitted to the computer for processing.
Referring to Fig. 2, the coordinate systems of camera and paper are defined as follows when implementing: the camera coordinate system is right-handed, with its origin at the optical center, the Z axis along the optical axis and the Y axis pointing straight down; the paper coordinate system is right-handed, with its origin at the upper-left corner of the outer contour, the X axis pointing horizontally to the right, the Y axis pointing straight down, and the Z axis perpendicular to the paper, pointing outward.
Referring to Fig. 3, the intelligent English exam paper score recognition and input method provided by the invention mainly comprises 5 steps: image preprocessing, line detection, image registration, target segmentation and digit recognition. The image registration step uses the camera and paper coordinate systems defined in Fig. 2.
Specifically, the intelligent English exam paper score recognition and input method provided by the invention comprises the following steps:
1) Image preprocessing
The purpose of image preprocessing is to accurately extract the candidate region containing the paper score from the collected paper image. The candidate region is determined automatically by the positions of the camera and the paper; it is usually located at the upper left of the paper, within a relatively large rectangular frame.
The input paper image is first filtered to reduce the noise caused by factors such as the lighting environment and printing quality. The filtering uses a convolution operation with a 5 × 5 kernel.
Secondly, a histogram-equalization operation is performed on the filtered image to enhance the contrast of the target region and reduce uneven brightness caused by factors such as shadows, occlusion and binding. The histogram equalization uses the mapping function
s_k = (L − 1)/N × Σ_{t=0}^{k} N_t,
where N is the total number of pixels in the image, N_t is the pixel count of gray level t, and L is the number of gray levels.
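The mapping function above can be sketched in Python as follows; this is an illustrative implementation of the standard cumulative-histogram transform using the symbols defined above (N total pixels, N_t count of gray level t, L number of gray levels), not code from the patent:

```python
def equalize_map(hist, levels):
    """Histogram-equalization mapping: gray level k is sent to
    round((L - 1) / N * sum of hist[0..k]).

    hist[t] is the pixel count N_t of gray level t, levels is L,
    and N = sum(hist) is the total pixel count."""
    n_total = sum(hist)
    mapping, cumulative = [], 0
    for count in hist:
        cumulative += count
        mapping.append(round((levels - 1) * cumulative / n_total))
    return mapping


# A 4-level toy histogram: 4 dark pixels, 4 bright pixels out of 8.
print(equalize_map([4, 0, 0, 4], 4))
```

Applying the mapping to each pixel's gray level then yields the equalized image.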
Then, a block-thresholding operation is performed on the equalized image to obtain an accurate binary map, which facilitates extraction of the candidate region and greatly reduces the amount of computation. Specifically, the whole image is divided into sub-blocks of unequal size, automatic thresholding is performed independently in each sub-block, and the per-block results are finally merged into the final binary map, in which background pixels are set to 0 and foreground pixels to 1.
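The block-thresholding step can be illustrated with a minimal Python sketch; the fixed tile size and per-tile mean threshold are simplifying assumptions, since the patent leaves the sub-block sizes unequal and the automatic thresholding method unspecified:

```python
def block_threshold(img, block):
    """Split img (a list of rows of ints) into block x block tiles,
    threshold each tile independently at its own mean, and merge the
    per-tile results into one binary map (foreground=1, background=0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r0 in range(0, h, block):
        for c0 in range(0, w, block):
            tile = [img[r][c]
                    for r in range(r0, min(r0 + block, h))
                    for c in range(c0, min(c0 + block, w))]
            t = sum(tile) / len(tile)  # per-block automatic threshold
            for r in range(r0, min(r0 + block, h)):
                for c in range(c0, min(c0 + block, w)):
                    out[r][c] = 1 if img[r][c] > t else 0
    return out


print(block_threshold([[0, 10], [0, 10]], 2))
```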
Finally, morphology and connected-component labeling operations are performed on the binary map. Combining the area, perimeter, duty ratio and Euler number of each connected domain, a ranking of the candidate regions is computed, and the highest-ranked region is selected as the candidate region for score recognition. For any connected domain, the area is defined as the number of foreground pixels; the perimeter as the number of foreground pixels lying on an edge; the duty ratio as the ratio of the area to the product of the domain's width and height; and the Euler number as the number of holes in the connected domain.
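The per-domain features used in the ranking can be computed as in this illustrative Python sketch (area, perimeter and duty ratio only; the Euler number and the ranking formula are omitted because the patent does not specify how the features are combined):

```python
def region_features(mask):
    """Area, perimeter and duty ratio of the foreground (1-pixels) in a
    binary mask: area = number of foreground pixels; perimeter =
    foreground pixels touching the background or image border
    (4-neighbourhood); duty ratio = area / bounding-box area."""
    h, w = len(mask), len(mask[0])
    pts = [(r, c) for r in range(h) for c in range(w) if mask[r][c]]
    area = len(pts)
    perim = 0
    for r, c in pts:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < h and 0 <= cc < w) or not mask[rr][cc]:
                perim += 1  # this pixel lies on an edge
                break
    rs = [r for r, _ in pts]
    cs = [c for _, c in pts]
    bbox = (max(rs) - min(rs) + 1) * (max(cs) - min(cs) + 1)
    return area, perim, area / bbox


print(region_features([[1, 1], [1, 1]]))
```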
2) Line detection
The purpose of line detection is to extract the straight lines from the candidate region; these lines intersect to form rectangular frames of various sizes that together constitute the score region on the paper.
First, all pixels in the candidate region are traversed and the coordinate positions (u_i, v_i), i ∈ [1, N], of the non-zero pixels are recorded, where N is the number of non-zero pixels in the candidate region. The line equation is parameterized as ua + vb + 1 = 0, so that each coordinate position corresponds to a straight line in the parameter space. The N coordinate positions therefore yield the system of line-parameter equations
u_i·a + v_i·b + 1 = 0, i ∈ [1, N].
Solving this system in the parameter space yields the parameter representations (a_j, b_j), j ∈ [1, K], of the candidate lines, where K is the number of candidate lines.
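Solving the over-determined system u_i·a + v_i·b + 1 = 0 for one candidate line can be sketched as a least-squares fit; this hypothetical helper uses the 2 × 2 normal equations and is only an illustration, not the solver the patent intends:

```python
def fit_line_params(points):
    """Least-squares estimate of (a, b) in the line model
    u*a + v*b + 1 = 0 from pixel coordinates (u_i, v_i).

    Minimizing sum((u*a + v*b + 1)^2) gives the normal equations
        [suu suv][a]   [-su]
        [suv svv][b] = [-sv]
    which are solved here by Cramer's rule."""
    suu = sum(u * u for u, _ in points)
    suv = sum(u * v for u, v in points)
    svv = sum(v * v for _, v in points)
    su = sum(u for u, _ in points)
    sv = sum(v for _, v in points)
    det = suu * svv - suv * suv
    a = (-su * svv + sv * suv) / det
    b = (-sv * suu + su * suv) / det
    return a, b


# Pixels on the horizontal line v = 1, i.e. 0*u + (-1)*v + 1 = 0.
print(fit_line_params([(1, 1), (2, 1), (3, 1)]))
```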
Secondly, the parameter representations of all candidate lines are clustered, using the slope and intercept of the lines as the clustering basis. From the parameter representation ua + vb + 1 = 0, the slope and intercept of a line are k = −a/b and c = −1/b respectively. From the clustering result, all line parameters constituting the score region on the paper can be extracted, and from the intercepts the outer-contour lines forming the outer rectangular frame and the inner-edge lines forming the inner crossing rectangular frames can be determined.
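A toy illustration of clustering candidate lines by slope and intercept under the parameterization ua + vb + 1 = 0; the tolerance value and the greedy merge strategy are assumptions for the sketch, not the patent's method:

```python
def slope_intercept(a, b):
    """Slope and intercept of u*a + v*b + 1 = 0, i.e. v = -(a/b)*u - 1/b."""
    return -a / b, -1.0 / b


def cluster_lines(params, tol=0.5):
    """Greedy clustering of candidate-line parameters: lines whose slope
    and intercept both agree within tol are merged into one
    representative; returns the list of (slope, intercept) representatives."""
    clusters = []
    for a, b in params:
        k, c = slope_intercept(a, b)
        for rep in clusters:
            if abs(rep[0] - k) < tol and abs(rep[1] - c) < tol:
                break  # close to an existing cluster: merge
        else:
            clusters.append((k, c))
    return clusters


# Two near-identical horizontal lines and one distinct diagonal line.
print(len(cluster_lines([(0, -1), (0.01, -1.02), (1, -1)])))
```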
3) Image registration
The purpose of image registration is to compute the positional and angular relationships between camera and paper, thereby establishing an accurate mapping between the paper template and the image target to guide the subsequent target segmentation.
First, the line parameters (a_j, b_j), j ∈ [1, K], are expressed in homogeneous coordinates as (a_j, b_j, 1), j ∈ [1, K]. Image registration uses the 4 outer-contour lines, j = 1, 2, 3, 4. Applying the perspective-projection constraint to each of these lines yields a system of equations in which l_j is a scale factor and M is the perspective-projection matrix of size 3 × 4, composed of the rotation matrix R (3 × 3) and the translation vector T (3 × 1). In this system the value to be solved is M, with 12 unknowns.
Secondly, the outer-contour lines in the paper image obtained by line detection are matched against the outer contour in the paper template and the correspondence between them is established, from which the positional relationship and angular relationship between camera and paper are computed; the positional relationship is expressed as a translation vector (T_x, T_y, T_z) and the angular relationship as Euler angles (α, β, γ).
Finally, the computed positional and angular relationships are verified. There are two classes of verification criteria: algorithmic criteria, including the number of iterations and the reprojection error; and engineering criteria, including physical plausibility and ambiguity flags. Both classes use existing algorithms: the computed relationships are checked via quantities such as the iteration count and reprojection error, and the result is confirmed as accurate if and only if it satisfies both classes of criteria simultaneously.
4) Target segmentation
The purpose of target segmentation is to automatically segment, within the candidate region and according to the image registration result, the handwritten digits inside the score outer contour, extracting several sub-regions that each contain only a handwritten digit, for the subsequent digit recognition.
First, the paper template is mapped onto the paper image according to the registration result, determining the initial score sub-regions, and the outer-contour coordinates of each sub-region are recorded.
Secondly, each initial score sub-region is expanded to the right and downward. Suppose the initial sub-region is (u_0, v_0, W, H), where (u_0, v_0) is the image coordinate of its upper-left corner, W its width and H its height; with expansion factor α, the expanded sub-region is (u_0, v_0, W(1 + α), H(1 + α)). Within the expanded sub-region, image background compensation and image filtering are applied in sequence to obtain the final sub-region. The background compensation resets, within the current sub-region, the pixel values at the coordinate positions of the initial sub-region's outer contour; the reset value equals the current value minus a specified offset. The image filtering uses a 3 × 3 mean filter to repair any strokes that may have been broken.
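The sub-region expansion and background compensation can be sketched as follows; clamping the reset value at 0 is an assumption, since the text only states that the reset value equals the current value minus a particular value:

```python
def expand_region(u0, v0, w, h, alpha):
    """Expand a score sub-region to the right and downward by factor
    alpha, keeping the top-left corner fixed:
    (u0, v0, W, H) -> (u0, v0, W*(1+alpha), H*(1+alpha))."""
    return u0, v0, w * (1 + alpha), h * (1 + alpha)


def compensate_background(tile, border, offset):
    """Background compensation: reset the pixels lying on the initial
    sub-region's outer contour (border is a set of (row, col) positions)
    to the current value minus offset, clamped at 0."""
    out = [row[:] for row in tile]
    for r, c in border:
        out[r][c] = max(0, out[r][c] - offset)
    return out


print(expand_region(10, 20, 100, 50, 0.5))
print(compensate_background([[200, 200], [200, 200]], {(0, 0)}, 50))
```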
5) Digit recognition
The purpose of digit recognition is to accurately recognize the handwritten digit in each input sub-region and, after comprehensive analysis, enter the results into the computer in a specific order, finally completing the score-entry work.
First, since the input sub-regions are mutually independent, handwritten-digit recognition is performed in parallel to improve recognition efficiency.
Secondly, for each sub-region, its response is computed with a pre-trained machine-learning model, the value of the handwritten digit in that region is predicted from the response, and the prediction is output. Suppose the sub-region image is I_sub; the recognition rule is then
n* = argmax_n f(I_sub, n), subject to f(I_sub, n*) ≥ Thresh,
where n* is the predicted handwritten digit output, f(·) is the response function of the machine-learning model, n is the digit parameter, and Thresh is the corresponding response threshold.
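The recognition rule — predict the digit with the largest model response, rejecting predictions whose response does not clear the threshold — can be sketched as follows, assuming `responses` is a precomputed map from each candidate digit n to its model response f(I_sub, n):

```python
def predict_digit(responses, thresh):
    """Pick the digit n with the largest response f(I_sub, n); report it
    only if that response is at least thresh, otherwise return None
    to signal rejection."""
    best = max(responses, key=responses.get)
    return best if responses[best] >= thresh else None


print(predict_digit({0: 0.1, 7: 0.9}, 0.5))  # confident -> accepted
print(predict_digit({0: 0.1, 7: 0.3}, 0.5))  # weak -> rejected
```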
Finally, multiple frames of the paper image are acquired continuously and the predictions for the same sub-region across consecutive frames are jointly analyzed; the current prediction is output as the final result only if it agrees with several previous predictions, which excludes interference and improves recognition accuracy. In practice a "5 take 3" criterion is used: suppose the current prediction is n*; n* is compared with each of the previous 5 predictions (n_1, n_2, n_3, n_4, n_5), and for each i with |n_i − n*| = 0, i = 1, …, 5, a confidence count is incremented by 1; if and only if the confidence count reaches at least 3 is the current prediction considered valid and output as the final result.
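The "5 take 3" multi-frame criterion can be sketched as a simple consensus check; returning None for an unconfirmed prediction is an assumed convention for the sketch:

```python
def five_take_three(current, history):
    """'5 take 3' temporal check: accept the current per-frame prediction
    only if it matches at least 3 of the previous 5 predictions."""
    matches = sum(1 for n in history[-5:] if n == current)
    return current if matches >= 3 else None


print(five_take_three(7, [7, 7, 3, 7, 1]))  # 3 of 5 agree -> confirmed
print(five_take_three(7, [1, 2, 3, 7, 7]))  # only 2 agree -> unconfirmed
```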
Claims (10)
1. An intelligent English exam paper score recognition and input method, characterized in that it comprises the following steps:
1) accurately extract the candidate region containing the exam paper score from the collected paper image;
2) extract several straight lines from the candidate region; these lines intersect to form rectangular frames that constitute the score region on the paper;
3) establish the accurate mapping relationship between the acquisition device and the paper image;
4) within the candidate region, according to the image registration result, automatically segment the handwritten digits inside the score outer contour, extracting several sub-regions that each contain only a handwritten digit;
5) from the extracted sub-regions, accurately recognize the handwritten digits and, after comprehensive analysis, enter them into the computer in a specific order, completing the score-entry work.
2. The intelligent English exam paper score recognition and input method according to claim 1, characterized in that the specific implementation of step 1) is:
1.1) filter the collected paper image;
1.2) perform a histogram-equalization operation on the filtered image;
1.3) perform a block-thresholding operation on the equalized image to obtain an accurate binary map;
1.4) perform morphology and connected-component labeling operations on the binary map, compute a ranking of the candidate regions, and select the highest-ranked one as the candidate region for score recognition.
3. The intelligent English exam paper score recognition and input method according to claim 2, characterized in that the block-thresholding operation divides the image into sub-blocks of unequal size, independently performs automatic thresholding in each sub-block, and finally merges the per-block results into the final binary map.
4. The intelligent English exam paper score recognition and input method according to claim 1, 2 or 3, characterized in that the specific implementation of step 2) is:
2.1) record the coordinate positions of all non-zero pixels in the candidate region; each coordinate position corresponds to a straight line in the parameter space of lines; in that parameter space, jointly solve the system of line-parameter equations corresponding to all coordinate positions to obtain the parameter representations of the candidate lines;
2.2) cluster the parameter representations of all candidate lines, extract from the clustering result all line parameters that make up the score region on the paper, and determine the outer-contour lines forming the outer rectangular frame and the inner-edge lines forming the inner crossing rectangular frames.
5. The intelligent English test-paper score recognition and entry method according to claim 4, characterized in that: the specific implementation process of step 3) is:
3.1) obtain the outer-contour lines of the outer rectangular frame in the test-paper image by line detection, match these lines against the outer contour in the test-paper template, and establish the correspondence between them, so that the positional relationship and angular relationship between the acquisition device and the test paper can be computed; the positional relationship is expressed as a translation vector and the angular relationship as Euler angles;
3.2) verify and confirm the computed positional and angular relationship; the verification criteria comprise an algorithmic criterion and an engineering criterion, and the computed result is confirmed as accurate only when it satisfies both criteria simultaneously.
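The registration solve itself (recovering the pose from line correspondences, e.g. via `cv2.solvePnP` or a homography decomposition) is not specified in the claim. Assuming a rotation matrix and translation vector are already available, step 3.1's Euler-angle representation and an illustrative engineering criterion for step 3.2 can be sketched as follows; all numeric limits below are invented for illustration, the patent gives none:

```python
import numpy as np

def euler_zyx_from_matrix(R):
    # Recover (roll, pitch, yaw) from R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:
        roll = np.arctan2(R[2, 1], R[2, 2])
        pitch = np.arctan2(-R[2, 0], sy)
        yaw = np.arctan2(R[1, 0], R[0, 0])
    else:  # gimbal lock: pitch near +/-90 degrees, yaw fixed to zero
        roll = np.arctan2(-R[1, 2], R[1, 1])
        pitch = np.arctan2(-R[2, 0], sy)
        yaw = 0.0
    return roll, pitch, yaw

def verify_pose(tvec, angles, max_dist=2.0, max_tilt=np.radians(45)):
    # Illustrative "engineering criterion": the camera should lie within
    # max_dist metres of the paper and not be tilted beyond max_tilt.
    return (np.linalg.norm(tvec) < max_dist
            and abs(angles[0]) < max_tilt and abs(angles[1]) < max_tilt)
```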
6. The intelligent English test-paper score recognition and entry method according to claim 5, characterized in that: the specific implementation process of step 4) is:
4.1) according to the image registration result, map the test-paper template onto the collected test-paper image to determine the initial score sub-regions on the paper, and record the coordinate position of each sub-region's outer contour;
4.2) expand each initial score sub-region to the right and downward, then sequentially perform image background compensation and image filtering within the expanded sub-region to obtain the final sub-region.
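Assuming the registration result takes the form of a 3x3 homography `H` (one plausible reading; the claim does not say), steps 4.1 and 4.2 reduce to mapping each template rectangle into the image and growing it right and down. A sketch that maps only two corners, which presumes the registered boxes remain roughly axis-aligned:

```python
import numpy as np

def map_box(box, H):
    # Step 4.1: map a template rectangle (x0, y0, x1, y1) through
    # homography H into image coordinates.
    x0, y0, x1, y1 = box
    pts = np.array([[x0, x1], [y0, y1], [1.0, 1.0]])
    q = H @ pts
    q = q[:2] / q[2]
    return (q[0, 0], q[1, 0], q[0, 1], q[1, 1])

def expand_box(box, dx, dy, shape):
    # Step 4.2: expand the sub-region to the right and downward,
    # clipped to the image bounds.
    x0, y0, x1, y1 = box
    h, w = shape
    return (x0, y0, min(x1 + dx, w), min(y1 + dy, h))
```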
7. The intelligent English test-paper score recognition and entry method according to claim 6, characterized in that: the image background compensation resets the pixel values at those coordinate positions in the current sub-region that coincide with the initial sub-region's outer contour; each reset pixel value equals the current pixel value minus a particular value.
8. The intelligent English test-paper score recognition and entry method according to claim 7, characterized in that: the image filtering uses a mean filter.
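Claims 7 and 8 together can be sketched as below. The patent leaves the "particular value" of claim 7 unspecified, so `offset` is an illustrative placeholder, and the contour pixels are assumed to be given as a boolean mask:

```python
import numpy as np

def compensate_background(sub, contour_mask, offset=30):
    # Claim 7: pixels on the initial sub-region's outer contour are
    # reset to (current value - offset); offset stands in for the
    # patent's unspecified "particular value".
    out = sub.astype(np.int32)
    out[contour_mask] -= offset
    return np.clip(out, 0, 255).astype(np.uint8)

def mean_filter(img, k=3):
    # Claim 8: k x k mean (box) filter, with edge replication at borders.
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    acc = sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k))
    return (acc / (k * k)).astype(np.uint8)
```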
9. The intelligent English test-paper score recognition and entry method according to claim 8, characterized in that: the specific implementation process of step 5) is:
5.1) perform handwritten digit recognition on the final sub-regions in parallel;
5.2) for each sub-region, compute its response with a pre-built machine learning model, predict the value of the handwritten digit in the region from the obtained response, and output the prediction result;
5.3) continuously acquire multiple frames of the test-paper image and jointly analyse the prediction results for the same sub-region across the frames; if the current prediction result agrees with the several preceding prediction results, output the current prediction as the final result.
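Step 5.3's multi-frame confirmation can be sketched with a per-region history buffer; the class name and the 3-of-5 agreement rule are illustrative choices, as the patent does not fix how many preceding frames must agree:

```python
from collections import deque

class ConsensusPredictor:
    """Step 5.3 sketch: confirm a sub-region's digit only after the same
    per-frame prediction recurs across enough recent frames."""

    def __init__(self, n_frames=5, n_agree=3):
        self.n_frames = n_frames
        self.n_agree = n_agree
        self.history = {}   # region id -> recent per-frame predictions

    def update(self, region_id, prediction):
        h = self.history.setdefault(region_id, deque(maxlen=self.n_frames))
        h.append(prediction)
        # Emit a final result only when enough recent frames agree;
        # otherwise keep accumulating frames and return None.
        if list(h).count(prediction) >= self.n_agree:
            return prediction
        return None
```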
10. A score recognition and entry system implementing the intelligent English test-paper score recognition and entry method according to any one of claims 1 to 9, characterized in that: the recognition and entry system comprises a computer and an acquisition device connected to the computer; the acquisition device is a camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910510865.5A CN110263784A (en) | 2019-06-13 | 2019-06-13 | The English paper achievement of intelligence identifies input method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110263784A true CN110263784A (en) | 2019-09-20 |
Family
ID=67918031
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910510865.5A Withdrawn CN110263784A (en) | 2019-06-13 | 2019-06-13 | The English paper achievement of intelligence identifies input method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110263784A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2021134416A1 (en) * | 2019-12-31 | 2021-07-08 | 深圳市优必选科技股份有限公司 | Text transformation method and apparatus, computer device, and computer readable storage medium
CN112163529A (en) * | 2020-09-30 | 2021-01-01 | 珠海读书郎网络教育有限公司 | System and method for uniformly dividing test paper
CN112215192A (en) * | 2020-10-22 | 2021-01-12 | 常州大学 | Test paper and method for quickly inputting test paper score based on machine vision technology
CN112215192B (en) * | 2020-10-22 | 2024-01-23 | 常州大学 | Method for quickly inputting test paper score based on machine vision technology
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240078646A1 (en) | Image processing method, image processing apparatus, and non-transitory storage medium | |
CN109086714B (en) | Form recognition method, recognition system and computer device | |
CN112949564B (en) | Pointer type instrument automatic reading method based on deep learning | |
US8472726B2 (en) | Document comparison and analysis | |
US8472727B2 (en) | Document comparison and analysis for improved OCR | |
CN109977723B (en) | Large bill picture character recognition method | |
CN110363199A (en) | Certificate image text recognition method and system based on deep learning | |
CN111325203A (en) | American license plate recognition method and system based on image correction | |
CN106960208A (en) | A kind of instrument liquid crystal digital automatic segmentation and the method and system of identification | |
CN104568986A (en) | Method for automatically detecting printing defects of remote controller panel based on SURF (Speed-Up Robust Feature) algorithm | |
CN105913093A (en) | Template matching method for character recognizing and processing | |
CN110263784A (en) | The English paper achievement of intelligence identifies input method | |
Chen et al. | Shadow-based Building Detection and Segmentation in High-resolution Remote Sensing Image. | |
CN108334955A (en) | Copy of ID Card detection method based on Faster-RCNN | |
CN110287787B (en) | Image recognition method, image recognition device and computer-readable storage medium | |
CN111353961A (en) | Document curved surface correction method and device | |
CN108460833A (en) | A kind of information platform building traditional architecture digital protection and reparation based on BIM | |
CN110689000A (en) | Vehicle license plate identification method based on vehicle license plate sample in complex environment | |
CN111145124A (en) | Image tilt correction method and device | |
CN110659637A (en) | Electric energy meter number and label automatic identification method combining deep neural network and SIFT features | |
CN115761773A (en) | Deep learning-based in-image table identification method and system | |
CN113033558A (en) | Text detection method and device for natural scene and storage medium | |
CN117557565B (en) | Detection method and device for lithium battery pole piece | |
CN112396057A (en) | Character recognition method and device and electronic equipment | |
CN112541943A (en) | Robot positioning method based on visual road signs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20190920