CN103034856B - Method and device for locating character regions in an image - Google Patents


Info

Publication number
CN103034856B
Authority
CN
China
Prior art keywords
image
pixel
gray level
gray
positional information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210552389.1A
Other languages
Chinese (zh)
Other versions
CN103034856A (en)
Inventor
李冰
陈小平
肖方明
汪利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd
Original Assignee
SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd filed Critical SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd
Priority to CN201210552389.1A priority Critical patent/CN103034856B/en
Publication of CN103034856A publication Critical patent/CN103034856A/en
Application granted granted Critical
Publication of CN103034856B publication Critical patent/CN103034856B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Character Input (AREA)

Abstract

The present invention relates to a method for locating character regions in an image, comprising: obtaining an original image; converting the original image into a gray-level image using the Roberts operator; binarizing the gray-level image to obtain an edge image; and dilating the edge image and extracting the position information of the connected regions of the edge image. A device for locating character regions in an image is also provided. The above method and device can improve the accuracy of locating.

Description

Method and device for locating character regions in an image
Technical field
The present invention relates to the field of image processing, and in particular to a method and a device for locating character regions in an image.
Background art
In business activities, users commonly use business cards as a means of identification. However, paper business cards in the conventional art are inconvenient to carry and keep, and users often need to enter the information on a paper business card into an intelligent terminal manually.
To recognize the information on a paper business card automatically, business-card management software usually needs first to locate the character regions in the business-card image obtained by photographing, and then to convert the character regions into text information with an OCR (Optical Character Recognition) system.
However, the methods of locating character regions in an image in the conventional art are inaccurate and may miss key information, so the locating accuracy is not high.
Summary of the invention
Based on this, it is necessary to provide a method for locating character regions in an image that can improve the locating accuracy.
A method for locating character regions in an image comprises:
obtaining an original image;
converting the original image into a gray-level image using the Roberts operator;
binarizing the gray-level image to obtain an edge image;
dilating the edge image, and extracting the position information of the connected regions of the edge image.
In one embodiment, the step of converting the original image into a gray-level image using the Roberts operator is:
generating the gray-level image according to the formulas:
$A_1 = \sqrt{(I(i+1,j+1,R)-I(i,j,R))^2 + (I(i+1,j+1,G)-I(i,j,G))^2 + (I(i+1,j+1,B)-I(i,j,B))^2}$;
$A_2 = \sqrt{(I(i,j+1,R)-I(i+1,j+1,R))^2 + (I(i,j+1,G)-I(i+1,j+1,G))^2 + (I(i,j+1,B)-I(i+1,j+1,B))^2}$;
$I_g(i,j) = \sqrt{A_1^2 + A_2^2}$;
where $(i,j)$ is a pixel in the original image; $I(i,j,R)$, $I(i,j,G)$ and $I(i,j,B)$ are the R, G and B color components of pixel $(i,j)$; $A_1$ is the color-space Euclidean distance between pixel $(i,j)$ and its neighboring pixel $(i+1,j+1)$; $A_2$ is the color-space Euclidean distance between pixel $(i,j+1)$ and the neighboring pixel $(i+1,j+1)$; and $I_g(i,j)$ is the gray value of the generated gray-level image at pixel $(i,j)$.
In one embodiment, the step of binarizing the gray-level image to obtain the edge image is:
binarizing the gray-level image by the maximum between-class variance (Otsu) algorithm to obtain the edge image.
In one embodiment, the step of extracting the position information of the connected regions of the edge image is:
extracting the position information of the connected regions of the edge image according to a region-labeling algorithm.
In one embodiment, after the step of extracting the position information of the connected regions of the edge image, the method further comprises:
obtaining texture features and/or histogram features of the connected regions;
obtaining a preset support vector machine classifier;
using the classifier to screen the position information according to the texture features and/or histogram features.
In addition, it is necessary to provide a device for locating character regions in an image that can improve the locating accuracy.
A device for locating character regions in an image comprises:
an image acquisition module for obtaining an original image;
a gray-level image generation module for converting the original image into a gray-level image using the Roberts operator;
an image binarization module for binarizing the gray-level image to obtain an edge image;
a region locating module for dilating the edge image and extracting the position information of the connected regions of the edge image.
In one embodiment, the gray-level image generation module is further configured to generate the gray-level image according to the formulas:
$A_1 = \sqrt{(I(i+1,j+1,R)-I(i,j,R))^2 + (I(i+1,j+1,G)-I(i,j,G))^2 + (I(i+1,j+1,B)-I(i,j,B))^2}$;
$A_2 = \sqrt{(I(i,j+1,R)-I(i+1,j+1,R))^2 + (I(i,j+1,G)-I(i+1,j+1,G))^2 + (I(i,j+1,B)-I(i+1,j+1,B))^2}$;
$I_g(i,j) = \sqrt{A_1^2 + A_2^2}$;
where $(i,j)$ is a pixel in the original image; $I(i,j,R)$, $I(i,j,G)$ and $I(i,j,B)$ are the R, G and B color components of pixel $(i,j)$; $A_1$ is the color-space Euclidean distance between pixel $(i,j)$ and its neighboring pixel $(i+1,j+1)$; $A_2$ is the color-space Euclidean distance between pixel $(i,j+1)$ and the neighboring pixel $(i+1,j+1)$; and $I_g(i,j)$ is the gray value of the generated gray-level image at pixel $(i,j)$.
In one embodiment, the image binarization module is further configured to binarize the gray-level image by the maximum between-class variance (Otsu) algorithm to obtain the edge image.
In one embodiment, the region locating module is further configured to extract the position information of the connected regions of the edge image according to a region-labeling algorithm.
In one embodiment, the device further comprises a region screening module for obtaining texture features and/or histogram features of the connected regions, obtaining a preset support vector machine classifier, and using the classifier to screen the position information according to the texture features and/or histogram features.
In the above method and device for locating character regions in an image, the original image is first converted by the Roberts operator into a gray-level image whose gray values carry edge information; the gray-level image is then binarized into an edge image, which extracts the edge information of the gray-level image; and the position information of the connected regions, i.e. the positions of the character regions in the original image, is then obtained after dilation. Locating is therefore more accurate.
Brief description of the drawings
Fig. 1 is a flowchart of a method for locating character regions in an image in one embodiment;
Fig. 2 shows an original image in one embodiment;
Fig. 3 shows an edge image in one embodiment;
Fig. 4 is a schematic diagram of the position information of the connected regions obtained in one embodiment;
Fig. 5 is a schematic diagram of the position information of the connected regions after screening in one embodiment;
Fig. 6 is a structural diagram of a device for locating character regions in an image in one embodiment;
Fig. 7 is a structural diagram of a device for locating character regions in an image in another embodiment.
Detailed description of the embodiments
In one embodiment, as shown in Fig. 1, a method for locating character regions in an image comprises:
Step S102: obtain an original image.
The original image may be obtained by photographing. For example, in one scenario, a business card is photographed with a mobile terminal, and the photographed image is obtained.
Step S104: convert the original image into a gray-level image using the Roberts operator.
The Roberts operator is an operator that finds edges by local differences. In this embodiment, the gray-level image may be generated according to the formulas:
$A_1 = \sqrt{(I(i+1,j+1,R)-I(i,j,R))^2 + (I(i+1,j+1,G)-I(i,j,G))^2 + (I(i+1,j+1,B)-I(i,j,B))^2}$;
$A_2 = \sqrt{(I(i,j+1,R)-I(i+1,j+1,R))^2 + (I(i,j+1,G)-I(i+1,j+1,G))^2 + (I(i,j+1,B)-I(i+1,j+1,B))^2}$;
$I_g(i,j) = \sqrt{A_1^2 + A_2^2}$;
where $(i,j)$ is a pixel in the original image; $I(i,j,R)$, $I(i,j,G)$ and $I(i,j,B)$ are the R, G and B color components of pixel $(i,j)$; $A_1$ is the color-space Euclidean distance between pixel $(i,j)$ and its neighboring pixel $(i+1,j+1)$; $A_2$ is the color-space Euclidean distance between pixel $(i,j+1)$ and the neighboring pixel $(i+1,j+1)$; and $I_g(i,j)$ is the gray value of the generated gray-level image at pixel $(i,j)$.
As can be seen from the above formulas, if pixel $(i+1,j+1)$ in the original image differs greatly in RGB color from its neighboring pixels $(i,j)$ and $(i,j+1)$, the gray value of pixel $(i,j)$ in the generated gray-level image $I_g$ is larger. That is, the parts of $I_g$ with higher gray values are the edge regions of the original image.
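The gradient computation just described can be sketched in NumPy as follows. This is a minimal illustration, not the patent's implementation: the function name and the zero-padding of the last row and column (border handling is left unspecified by the text) are assumptions.

```python
import numpy as np

def roberts_color_gradient(img):
    """Roberts-style cross gradient on an RGB image.

    A1 is the color-space Euclidean distance between pixel (i, j) and
    its diagonal neighbor (i+1, j+1); A2 is the distance between
    (i, j+1) and (i+1, j+1).  The output gray value at (i, j) is
    sqrt(A1**2 + A2**2).  Zero-padding the last row/column is an
    assumption, since the text leaves border handling unspecified.
    """
    img = img.astype(np.float64)
    h, w, _ = img.shape
    gray = np.zeros((h, w))
    d1 = img[1:, 1:, :] - img[:-1, :-1, :]   # I(i+1, j+1) - I(i, j)
    d2 = img[:-1, 1:, :] - img[1:, 1:, :]    # I(i, j+1) - I(i+1, j+1)
    a1 = np.sqrt((d1 ** 2).sum(axis=2))
    a2 = np.sqrt((d2 ** 2).sum(axis=2))
    gray[:-1, :-1] = np.sqrt(a1 ** 2 + a2 ** 2)
    return gray
```

On a uniform image every distance is zero, so the result is all black; gray values rise only where neighboring pixels differ in color, which is exactly the edges-as-brightness behavior the text describes.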
Step S106: binarize the gray-level image to obtain an edge image.
In this embodiment, step S106 may binarize the gray-level image generated in step S104 by the maximum between-class variance algorithm to obtain the edge image.
The maximum between-class variance algorithm is also known as Otsu's algorithm. It traverses the pixels of the image region to find the threshold T that maximizes the between-class variance of the gray-level image, and then binarizes the image region according to T: pixels of the gray-level image whose gray value is less than T get gray value 0 in the resulting edge image, and pixels whose gray value is greater than T get gray value 255.
In this embodiment, before the step of traversing the pixels of the image region to find the threshold T, the pixels may also be filtered according to a preset threshold interval. Preferably, the threshold interval is:
$(\mathrm{Min} + w_1 \times \mathrm{Len},\ \mathrm{Max} - w_2 \times \mathrm{Len})$, with $\mathrm{Len} = \mathrm{Max} - \mathrm{Min} + 1$;
where Min is the minimum gray value in the gray-level image, Max is the maximum gray value in the gray-level image, Len is an intermediate variable, and $w_1$ and $w_2$ are weight coefficients, preferably both between 0.1 and 0.4.
That is, pixels whose gray value is too low or too high can be filtered out in advance according to the above threshold interval, and the threshold T is then found by traversing only the pixels within the interval with the maximum between-class variance algorithm. Filtering out the too-low and too-high parts removes the influence of low-gray and high-gray pixels on the threshold T, which makes locating more accurate.
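A compact sketch of this pre-filtered Otsu search follows. The function names and the empty-interval fallback are my own choices, and the interval's upper bound is written as Max − w2·Len on the assumption that it is meant to drop overly bright pixels, as the surrounding text states.

```python
import numpy as np

def otsu_threshold(gray, w1=0.2, w2=0.2):
    """Maximum between-class variance (Otsu) threshold search,
    restricted to a candidate interval as described above.

    The interval is taken as (Min + w1*Len, Max - w2*Len) with
    Len = Max - Min + 1; the minus sign on the upper bound is an
    assumption matching the stated goal of dropping overly bright
    pixels.  The mean fallback for a degenerate image is also mine.
    """
    g = gray.astype(np.int64).ravel()
    lo, hi = int(g.min()), int(g.max())
    length = hi - lo + 1
    low, high = lo + w1 * length, hi - w2 * length
    cand = g[(g > low) & (g < high)]
    if cand.size == 0:
        return float(g.mean())
    best_t, best_var = int(np.ceil(low)), -1.0
    for t in range(int(np.ceil(low)), int(np.floor(high)) + 1):
        bg = cand[cand <= t]          # below-threshold class
        fg = cand[cand > t]           # above-threshold class
        if bg.size == 0 or fg.size == 0:
            continue
        wb, wf = bg.size / cand.size, fg.size / cand.size
        var = wb * wf * (bg.mean() - fg.mean()) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray, t):
    """Gray values above t become 255 (edge), the rest 0."""
    return np.where(gray > t, 255, 0).astype(np.uint8)
```

On a bimodal image the search lands between the two modes that survive the pre-filter, and extreme outliers (here 10 and 200) no longer pull the threshold toward them.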
Step S108: dilate the edge image and extract the position information of the connected regions of the edge image.
Dilating a binary image means traversing its pixels with a structuring element: if any pixel covered by the structuring element has gray value 0, all pixels in the region covered by the structuring element get gray value 0 in the dilated edge image. The structuring element is a template of M × N pixels; in this embodiment M and N are both 3, i.e. the structuring element is a 3 × 3 template.
In this embodiment, the step of extracting the position information of the connected regions of the edge image may specifically be: extracting the position information of the connected regions of the edge image according to a region-labeling algorithm.
The region-labeling algorithm marks each connected region (continuous image region) formed by contiguous pixels with the same gray value in the dilated edge image (the binarized gray-level image), and obtains its position information.
In this embodiment, further, after the step of extracting the position information of the connected regions of the edge image, texture features and/or histogram features of the connected regions may be obtained, a preset support vector machine classifier may be obtained, and the classifier may be used to screen the position information according to the texture features and/or histogram features.
Texture features and/or histogram features of training images with typical character features can be extracted in advance and input into the support vector machine to generate the kernel function of the classifier. After the position information of the connected regions of the edge image has been obtained, the texture features and/or histogram features of the image regions delimited by that position information are obtained and input into the support vector machine, and the connected regions are screened by the kernel function of the classifier, so that the image regions matching typical character features are selected.
The image regions obtained through steps S102, S104, S106 and S108 may include non-character image regions, such as character-like logos and vector graphics. Screening them with the support vector machine classifier selects the image regions with typical character features more exactly, which makes locating more accurate.
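The screening stage could be sketched with scikit-learn's SVC as below. The patent does not specify the exact texture or histogram features, so `region_features` (a gray histogram plus a mean gradient magnitude) is only a plausible stand-in, and the linear kernel, function names, and label convention (1 = text) are likewise assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def region_features(patch, bins=8):
    """Illustrative stand-in for the texture/histogram features: a
    normalized gray histogram plus the mean horizontal gradient
    magnitude (a crude texture cue)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    hist = hist / max(patch.size, 1)
    if patch.shape[1] > 1:
        grad = np.abs(np.diff(patch.astype(np.float64), axis=1)).mean()
    else:
        grad = 0.0
    return np.append(hist, grad)

def screen_boxes(gray, boxes, clf):
    """Keep only the boxes whose patch the pre-trained classifier
    labels as text (label 1); boxes are (top, left, bottom, right)."""
    kept = []
    for (t, l, b, r) in boxes:
        feats = region_features(gray[t:b + 1, l:r + 1])
        if clf.predict(feats.reshape(1, -1))[0] == 1:
            kept.append((t, l, b, r))
    return kept

# Train on a few hand-made "text-like" (striped) and "flat" patches.
text_patches = [np.tile([0, 255], (8, 8)) for _ in range(5)]
flat_patches = [np.full((8, 16), 128) for _ in range(5)]
X = np.array([region_features(p) for p in text_patches + flat_patches])
y = np.array([1] * 5 + [0] * 5)
clf = SVC(kernel="linear").fit(X, y)
```

With a striped patch pasted into an otherwise flat image, only the striped region's bounding box survives the screening, mirroring how non-character regions such as flat logos would be filtered out.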
In one embodiment, referring to Fig. 2, Fig. 3, Fig. 4 and Fig. 5: Fig. 2 is the original image of a photographed business card (of a fictitious person); Fig. 3 is the edge image obtained after binarization; Fig. 4 shows the position information of the connected regions obtained with the region-labeling algorithm, displayed as rectangular boxes; and Fig. 5 shows the position information of the connected regions after screening by the classifier, also displayed as rectangular boxes. As can be seen from these example figures, the position information of the character regions in the business-card image is extracted accurately.
In one embodiment, as shown in Fig. 6, a device for locating character regions in an image comprises an image acquisition module 102, a gray-level image generation module 104, an image binarization module 106 and a region locating module 108. Wherein:
The image acquisition module 102 is configured to obtain an original image.
The original image may be obtained by photographing. For example, in one scenario, a business card is photographed with a mobile terminal, and the photographed image is obtained.
The gray-level image generation module 104 is configured to convert the original image into a gray-level image using the Roberts operator.
The Roberts operator is an operator that finds edges by local differences. In this embodiment, the gray-level image generation module 104 may generate the gray-level image according to the formulas:
$A_1 = \sqrt{(I(i+1,j+1,R)-I(i,j,R))^2 + (I(i+1,j+1,G)-I(i,j,G))^2 + (I(i+1,j+1,B)-I(i,j,B))^2}$;
$A_2 = \sqrt{(I(i,j+1,R)-I(i+1,j+1,R))^2 + (I(i,j+1,G)-I(i+1,j+1,G))^2 + (I(i,j+1,B)-I(i+1,j+1,B))^2}$;
$I_g(i,j) = \sqrt{A_1^2 + A_2^2}$;
where $(i,j)$ is a pixel in the original image; $I(i,j,R)$, $I(i,j,G)$ and $I(i,j,B)$ are the R, G and B color components of pixel $(i,j)$; $A_1$ is the color-space Euclidean distance between pixel $(i,j)$ and its neighboring pixel $(i+1,j+1)$; $A_2$ is the color-space Euclidean distance between pixel $(i,j+1)$ and the neighboring pixel $(i+1,j+1)$; and $I_g(i,j)$ is the gray value of the generated gray-level image at pixel $(i,j)$.
As can be seen from the above formulas, if pixel $(i+1,j+1)$ in the original image differs greatly in RGB color from its neighboring pixels $(i,j)$ and $(i,j+1)$, the gray value of pixel $(i,j)$ in the generated gray-level image $I_g$ is larger. That is, the parts of $I_g$ with higher gray values are the edge regions of the original image.
The image binarization module 106 is configured to binarize the gray-level image to obtain an edge image.
In this embodiment, the image binarization module 106 may binarize the gray-level image generated by the gray-level image generation module 104 by the maximum between-class variance algorithm to obtain the edge image.
The maximum between-class variance algorithm is also known as Otsu's algorithm. It traverses the pixels of the image region to find the threshold T that maximizes the between-class variance of the gray-level image, and then binarizes the image region according to T: pixels of the gray-level image whose gray value is less than T get gray value 0 in the resulting edge image, and pixels whose gray value is greater than T get gray value 255.
In this embodiment, the image binarization module 106 may also filter the pixels according to a preset threshold interval. Preferably, the threshold interval is:
$(\mathrm{Min} + w_1 \times \mathrm{Len},\ \mathrm{Max} - w_2 \times \mathrm{Len})$, with $\mathrm{Len} = \mathrm{Max} - \mathrm{Min} + 1$;
where Min is the minimum gray value in the gray-level image, Max is the maximum gray value in the gray-level image, Len is an intermediate variable, and $w_1$ and $w_2$ are weight coefficients, preferably both between 0.1 and 0.4.
That is, pixels whose gray value is too low or too high can be filtered out in advance according to the above threshold interval, and the threshold T is then found by traversing only the pixels within the interval with the maximum between-class variance algorithm. Filtering out the too-low and too-high parts removes the influence of low-gray and high-gray pixels on the threshold T, which makes locating more accurate.
The region locating module 108 is configured to dilate the edge image and extract the position information of the connected regions of the edge image.
Dilating a binary image means traversing its pixels with a structuring element: if any pixel covered by the structuring element has gray value 0, all pixels in the region covered by the structuring element get gray value 0 in the dilated edge image. The structuring element is a template of M × N pixels; in this embodiment M and N are both 3, i.e. the structuring element is a 3 × 3 template.
In this embodiment, the region locating module 108 may extract the position information of the connected regions of the edge image according to a region-labeling algorithm.
The region-labeling algorithm marks each connected region (continuous image region) formed by contiguous pixels with the same gray value in the dilated edge image (the binarized gray-level image), and obtains its position information.
In this embodiment, further, as shown in Fig. 7, the device for locating character regions in an image also comprises a region screening module 110, configured to obtain texture features and/or histogram features of the connected regions, obtain a preset support vector machine classifier, and use the classifier to screen the position information according to the texture features and/or histogram features.
Texture features and/or histogram features of training images with typical character features can be extracted in advance and input into the support vector machine to generate the kernel function of the classifier. After the position information of the connected regions of the edge image has been obtained, the region screening module 110 may obtain the texture features and/or histogram features of the image regions delimited by that position information, input them into the support vector machine, and screen the connected regions by the kernel function of the classifier, so that the image regions matching typical character features are selected.
The image regions obtained by the above modules may include non-character image regions, such as character-like logos and vector graphics. Screening them with the support vector machine classifier selects the image regions with typical character features more exactly, which makes locating more accurate.
In one embodiment, referring to Fig. 2, Fig. 3, Fig. 4 and Fig. 5: Fig. 2 is the original image of a photographed business card (of a fictitious person); Fig. 3 is the edge image obtained after binarization; Fig. 4 shows the position information of the connected regions obtained with the region-labeling algorithm, displayed as rectangular boxes; and Fig. 5 shows the position information of the connected regions after screening by the classifier, also displayed as rectangular boxes. As can be seen from these example figures, the position information of the character regions in the business-card image is extracted accurately.
In the above method and device for locating character regions in an image, the original image is first converted by the Roberts operator into a gray-level image whose gray values carry edge information; the gray-level image is then binarized into an edge image, which extracts the edge information of the gray-level image; and the position information of the connected regions, i.e. the positions of the character regions in the original image, is then obtained after dilation. Locating is therefore more accurate.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be pointed out that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (8)

1. A method for locating character regions in an image, comprising:
obtaining an original image;
converting the original image into a gray-level image using the Roberts operator;
binarizing the gray-level image to obtain an edge image, specifically: after filtering the pixels of the gray-level image according to a preset threshold interval, binarizing the gray-level image by the maximum between-class variance algorithm to obtain the edge image, the preset threshold interval being $(\mathrm{Min} + w_1 \times \mathrm{Len},\ \mathrm{Max} - w_2 \times \mathrm{Len})$, where $\mathrm{Len} = \mathrm{Max} - \mathrm{Min} + 1$, Min is the minimum gray value in the gray-level image, Max is the maximum gray value in the gray-level image, Len is an intermediate variable, and $w_1$ and $w_2$ are weight coefficients;
dilating the edge image, and extracting the position information of the connected regions of the edge image.
2. The method for locating character regions in an image according to claim 1, characterized in that the step of converting the original image into a gray-level image using the Roberts operator is:
generating the gray-level image according to the formulas:
$A_1 = \sqrt{(I(i+1,j+1,R)-I(i,j,R))^2 + (I(i+1,j+1,G)-I(i,j,G))^2 + (I(i+1,j+1,B)-I(i,j,B))^2}$;
$A_2 = \sqrt{(I(i,j+1,R)-I(i+1,j+1,R))^2 + (I(i,j+1,G)-I(i+1,j+1,G))^2 + (I(i,j+1,B)-I(i+1,j+1,B))^2}$;
$I_g(i,j) = \sqrt{A_1^2 + A_2^2}$;
where $(i,j)$ is a pixel in the original image; $I(i,j,R)$, $I(i,j,G)$ and $I(i,j,B)$ are the R, G and B color components of pixel $(i,j)$; $A_1$ is the color-space Euclidean distance between pixel $(i,j)$ and its neighboring pixel $(i+1,j+1)$; $A_2$ is the color-space Euclidean distance between pixel $(i,j+1)$ and the neighboring pixel $(i+1,j+1)$; and $I_g(i,j)$ is the gray value of the generated gray-level image at pixel $(i,j)$.
3. The method for locating character regions in an image according to claim 1, characterized in that the step of extracting the position information of the connected regions of the edge image is:
extracting the position information of the connected regions of the edge image according to a region-labeling algorithm.
4. The method for locating character regions in an image according to claim 1, characterized in that, after the step of extracting the position information of the connected regions of the edge image, the method further comprises:
obtaining texture features and/or histogram features of the connected regions;
obtaining a preset support vector machine classifier;
using the classifier to screen the position information according to the texture features and/or histogram features.
5. A device for locating character regions in an image, characterized by comprising:
an image acquisition module for obtaining an original image;
a gray-level image generation module for converting the original image into a gray-level image using the Roberts operator;
an image binarization module for binarizing the gray-level image to obtain an edge image, specifically for, after filtering the pixels of the gray-level image according to a preset threshold interval, binarizing the gray-level image by the maximum between-class variance algorithm to obtain the edge image, the preset threshold interval being $(\mathrm{Min} + w_1 \times \mathrm{Len},\ \mathrm{Max} - w_2 \times \mathrm{Len})$, where $\mathrm{Len} = \mathrm{Max} - \mathrm{Min} + 1$, Min is the minimum gray value in the gray-level image, Max is the maximum gray value in the gray-level image, Len is an intermediate variable, and $w_1$ and $w_2$ are weight coefficients;
a region locating module for dilating the edge image and extracting the position information of the connected regions of the edge image.
6. The device for locating character regions in an image according to claim 5, characterized in that the gray-level image generation module is further configured to generate the gray-level image according to the formulas:
$A_1 = \sqrt{(I(i+1,j+1,R)-I(i,j,R))^2 + (I(i+1,j+1,G)-I(i,j,G))^2 + (I(i+1,j+1,B)-I(i,j,B))^2}$;
$A_2 = \sqrt{(I(i,j+1,R)-I(i+1,j+1,R))^2 + (I(i,j+1,G)-I(i+1,j+1,G))^2 + (I(i,j+1,B)-I(i+1,j+1,B))^2}$;
$I_g(i,j) = \sqrt{A_1^2 + A_2^2}$;
where $(i,j)$ is a pixel in the original image; $I(i,j,R)$, $I(i,j,G)$ and $I(i,j,B)$ are the R, G and B color components of pixel $(i,j)$; $A_1$ is the color-space Euclidean distance between pixel $(i,j)$ and its neighboring pixel $(i+1,j+1)$; $A_2$ is the color-space Euclidean distance between pixel $(i,j+1)$ and the neighboring pixel $(i+1,j+1)$; and $I_g(i,j)$ is the gray value of the generated gray-level image at pixel $(i,j)$.
7. The device for locating character regions in an image according to claim 5, characterized in that the region locating module is further configured to extract the position information of the connected regions of the edge image according to a region-labeling algorithm.
8. The device for locating character regions in an image according to claim 5, characterized in that the device further comprises a region screening module for obtaining texture features and/or histogram features of the connected regions, obtaining a preset support vector machine classifier, and using the classifier to screen the position information according to the texture features and/or histogram features.
CN201210552389.1A 2012-12-18 2012-12-18 Method and device for locating character regions in an image Expired - Fee Related CN103034856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210552389.1A CN103034856B (en) 2012-12-18 2012-12-18 Method and device for locating character regions in an image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210552389.1A CN103034856B (en) 2012-12-18 2012-12-18 Method and device for locating character regions in an image

Publications (2)

Publication Number Publication Date
CN103034856A CN103034856A (en) 2013-04-10
CN103034856B true CN103034856B (en) 2016-01-20

Family

ID=48021735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210552389.1A Expired - Fee Related CN103034856B (en) 2012-12-18 2012-12-18 Method and device for locating character regions in an image

Country Status (1)

Country Link
CN (1) CN103034856B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106033528A (en) * 2015-03-09 2016-10-19 富士通株式会社 Method and equipment for extracting specific area from color document image
CN107093172B (en) * 2016-02-18 2020-03-17 清华大学 Character detection method and system
CN106127751B (en) * 2016-06-20 2020-04-14 北京小米移动软件有限公司 Image detection method, device and system
CN106250831A (en) * 2016-07-22 2016-12-21 北京小米移动软件有限公司 Image detecting method, device and the device for image detection
CN108573251B (en) 2017-03-15 2021-09-07 北京京东尚科信息技术有限公司 Character area positioning method and device
CN109993749A (en) * 2017-12-29 2019-07-09 北京京东尚科信息技术有限公司 The method and apparatus for extracting target image
CN108597003A (en) * 2018-04-20 2018-09-28 腾讯科技(深圳)有限公司 A kind of article cover generation method, device, processing server and storage medium
CN108647680B (en) * 2018-04-28 2021-11-12 北京盒子鱼教育科技有限公司 Image positioning frame detection method and device
CN109389150B (en) * 2018-08-28 2022-04-05 东软集团股份有限公司 Image consistency comparison method and device, storage medium and electronic equipment
CN110058991A (en) * 2018-11-30 2019-07-26 阿里巴巴集团控股有限公司 A kind of automatic test approach and system of application software
CN109874051A (en) * 2019-02-21 2019-06-11 百度在线网络技术(北京)有限公司 Video content processing method, device and equipment
CN110211484B (en) 2019-06-13 2021-10-26 深圳云里物里科技股份有限公司 Electronic price tag display method, system, server and storage medium
CN112308057A (en) * 2020-10-13 2021-02-02 山东国赢大数据产业有限公司 OCR (optical character recognition) optimization method and system based on character position information

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430768A (en) * 2007-11-07 2009-05-13 刘涛 Two-dimension bar code system and its positioning method
CN102375985A (en) * 2010-08-10 2012-03-14 富士通株式会社 Target detection method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08186714A (en) * 1994-12-27 1996-07-16 Texas Instr Inc <Ti> Noise removal of picture data and its device
CN102496020B (en) * 2011-10-31 2013-07-31 天津大学 Image binarization method based on accumulative edge point visual gray range histogram

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430768A (en) * 2007-11-07 2009-05-13 刘涛 Two-dimension bar code system and its positioning method
CN102375985A (en) * 2010-08-10 2012-03-14 富士通株式会社 Target detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Text Region Extraction from Color Images; Liu Qian; Master's thesis, Southwest Jiaotong University; 2003-09-15; pp. 40-41, 48-50, 56-61 *

Also Published As

Publication number Publication date
CN103034856A (en) 2013-04-10

Similar Documents

Publication Publication Date Title
CN103034856B (en) The method of character area and device in positioning image
CN104200209B (en) A kind of pictograph detection method
CN103942797B (en) Scene image text detection method and system based on histogram and super-pixels
CN103020619B (en) A kind of method of handwritten entries in automatic segmentation electronization notebook
CN104616021B (en) Traffic sign image processing method and device
CN102956029B (en) Image processing apparatus and image processing method
US9785850B2 (en) Real time object measurement
CN107659799B (en) Image pickup apparatus, image processing method, and storage medium
CN105516590B (en) A kind of image processing method and device
KR20130066819A (en) Apparus and method for character recognition based on photograph image
CN104794479A (en) Method for detecting text in natural scene picture based on local width change of strokes
CN114283156B (en) Method and device for removing document image color and handwriting
CN104699663A (en) Information inputting method and device thereof
GB2517674A (en) Image capture using client device
CN109741273A (en) A kind of mobile phone photograph low-quality images automatically process and methods of marking
CN104361335A (en) Method for automatically removing black edges of scanning images
US20060164517A1 (en) 2006-07-27 Method for digital recording, storage and/or transmission of information by means of a camera provided on a communication terminal
CN111145305A (en) Document image processing method
CN104951749A (en) Image content recognition device and image content recognition method
CN102915522A (en) Smart phone name card extraction system and realization method thereof
JP2010074342A (en) Image processing apparatus, image forming apparatus, and program
Grover et al. Text extraction from document images using edge information
US20170352170A1 (en) Nearsighted camera object detection
CN111191716B (en) Method and device for classifying printed pictures
Bala et al. Image simulation for automatic license plate recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160120

Termination date: 20181218