CN102332097B - Method for segmenting complex background text images based on image segmentation - Google Patents

Method for segmenting complex background text images based on image segmentation Download PDF

Info

Publication number
CN102332097B
CN102332097B · CN201110322549A · CN102332097A
Authority
CN
China
Prior art keywords
subgraph
image
segmentation
polarity
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201110322549
Other languages
Chinese (zh)
Other versions
CN102332097A (en)
Inventor
王春恒
史存召
肖柏华
周文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infan Technology (beijing) Co Ltd
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN 201110322549 priority Critical patent/CN102332097B/en
Publication of CN102332097A publication Critical patent/CN102332097A/en
Application granted granted Critical
Publication of CN102332097B publication Critical patent/CN102332097B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Character Input (AREA)

Abstract

The invention discloses a method for segmenting complex-background text images based on graph cut. The method comprises the following steps: 1) coarsely segmenting an original text-block image into sub-images; 2) estimating the polarity of each sub-image to determine the polarity of the whole text-block image; 3) according to the polarity of the text-block image and the inherent characteristics of character strokes, automatically providing foreground and background points with high confidence as hard constraints for the graph cut; 4) applying corresponding soft constraints to the sub-images, propagating the hard constraints to the whole sub-images by graph cut, and then segmenting the sub-images; and 5) merging the segmented sub-images to obtain an integral text-segmentation image. The method adopts a split-and-merge technique and has local spatial adaptability, so it can segment complex-background text-block images with non-uniform backgrounds; at the same time, the method automatically provides hard constraints for the graph cut and expands them to the whole sub-images in combination with the soft constraints, achieving a good segmentation effect on text images with complex backgrounds.

Description

Method for segmenting complex-background text images based on graph cut
Technical field
The present invention relates to the technical field of text image segmentation in pattern recognition and machine vision, and in particular to a method for segmenting complex-background text images based on graph cut.
Background technology
With the widespread use of image acquisition devices such as digital cameras, video cameras, and high-speed scanners, the information contained in images attracts more and more attention, yet it remains very difficult for computers to understand image content. Text embedded in an image can provide important information that people want and greatly helps in understanding the image content. Enabling computers to recognize text in images as humans do, i.e., automatic text detection and recognition, has attracted increasing attention in recent years; it is extremely important for the storage, classification, understanding, and retrieval of images and video, and has wide application and commercial value. In many cases the scene text in an image even constitutes its main and most critical information, so many researchers study methods for detecting text blocks in images. However, because text blocks in images often have very complex backgrounds and vary in illumination, character size, resolution, etc., directly feeding a detected text block into a traditional OCR engine yields very poor recognition results. The segmentation of text blocks is therefore a key technology connecting text detection and recognition, and is indispensable for good performance of the whole system.
Most current text-block segmentation methods fall roughly into two classes: statistical thresholding methods and machine learning methods. Statistical thresholding methods compute a global or local threshold from the grayscale or color statistics of the image and then segment the text image; these methods work for traditional scanned documents or text blocks with relatively simple backgrounds, but they fail when the text and the background have similar brightness. Machine learning methods include unsupervised color clustering and various model learning methods. Color clustering fails when the text and the background have similar colors; model-based methods can give satisfactory results if a suitable model can be learned, yet it is difficult to learn a single model able to segment text blocks with arbitrarily complex backgrounds.
Statistical thresholding methods do not make full use of the structural characteristics of character strokes, and the large number of training samples needed to learn a suitable model is hard to obtain. Text is in fact a special kind of object, so various object segmentation methods can be applied. Among them, interactive object segmentation has become increasingly popular, and the graph cut technique is widely used for this purpose. Traditional interactive object segmentation requires the user to provide some labels; however, considering the inherent characteristics of text, some labels can be provided automatically for the graph cut, thus enabling automatic text segmentation with graph cut.
Summary of the invention
The purpose of the present invention is to provide a method for segmenting complex-background text images based on graph cut. The method adopts a split-and-merge technique and has local spatial adaptability, so it can handle complex-background text images with non-uniform backgrounds. At the same time, according to the inherent features of character strokes, it automatically provides some labels as hard constraints for the graph cut and, combined with soft constraints, diffuses these hard constraints over the whole sub-image to segment it. The segmented sub-images are merged into a complete text-segmentation image.
To achieve the above object, the technical solution of the present invention is as follows:
A method for segmenting complex-background text images based on graph cut, characterized in that it comprises the following steps:
Step 1: coarsely segment the original text-block image into several sub-images;
Step 2: determine the polarity of the whole text-block image by judging the polarity of each sub-image;
Step 3: according to the polarity of the text-block image and the inherent features of character strokes, automatically provide foreground and background points with high confidence as hard constraints for the graph cut;
Step 4: according to the obtained hard constraints, apply corresponding soft constraints to each sub-image, propagate the hard constraints to the whole sub-image with the graph cut, and thus obtain the optimal segmentation of the sub-image;
Step 5: merge the optimally segmented sub-images to obtain the whole text-segmentation image.
The present invention adopts a split-and-merge technique: the text image is first roughly divided into sub-images, and subsequent operations are performed on the sub-images, so the method has local spatial adaptability and can handle complex-background text images with non-uniform backgrounds. At the same time, according to the inherent features of character strokes, the method automatically provides some labels as hard constraints for the graph cut and, combined with soft constraints, diffuses these hard constraints over the whole sub-image to segment it. The method achieves good segmentation results on text images with complex backgrounds.
Description of drawings
Fig. 1 is the flow chart of the method for segmenting complex-background text images based on graph cut proposed by the present invention.
Fig. 2 is a schematic diagram of the result of dividing a text image into sub-images in the present invention.
Fig. 3 is a schematic diagram of the hard-constraint acquisition criterion and its result in the present invention.
Fig. 4 is a schematic diagram of a text segmentation result according to an embodiment of the present invention.
Embodiment
To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
Fig. 1 is the flow chart of an embodiment of the method of the invention. With reference to Fig. 1, the method for segmenting complex-background text images based on graph cut proposed by the present invention comprises the following steps:
Step 1: coarsely segment the original text-block image into several sub-images.
First, an original text-block image is input and its edge image is computed. Connected-component analysis is then performed on the edge image, and, according to certain features of character connected components, the connected-component sub-images whose characteristics conform to character properties are selected as "seed" sub-images. The original text-block image is coarsely divided into several sub-images according to these "seed" sub-images.
When the original text image is coarsely divided according to the "seed" sub-images, the completeness of the information must be guaranteed at this step, so all regions outside the seed sub-image regions must also be considered: forced splitting is applied to all regions beyond the seed sub-image regions, to guarantee that all text ends up inside some sub-image.
Fig. 2 shows the sub-images obtained after coarse segmentation of the input original text-block image.
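As an illustrative sketch only (not code from the patent), the seed-finding in step 1 can be approximated with connected-component analysis on a binary edge image; the size and aspect-ratio thresholds below are invented for the example:

```python
import numpy as np
from scipy import ndimage

def coarse_segment(edges, min_h=8, max_h=100, max_aspect=5.0):
    """Split a binary edge image into sub-image bounding boxes using
    connected-component analysis. Components whose height and aspect
    ratio look character-like act as "seed" sub-images; the thresholds
    here are illustrative assumptions, not values from the patent."""
    labels, n = ndimage.label(edges)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if min_h <= h <= max_h and w / max(h, 1) <= max_aspect:
            boxes.append((sl[0].start, sl[0].stop, sl[1].start, sl[1].stop))
    return boxes

# toy edge image with two blob "characters"
img = np.zeros((40, 80), dtype=bool)
img[5:25, 5:20] = True    # component 1
img[5:25, 40:55] = True   # component 2
print(coarse_segment(img))  # [(5, 25, 5, 20), (5, 25, 40, 55)]
```

In a full implementation the regions left outside the seed boxes would then be force-split, as the description requires, so no text is lost.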
Step 2: determine the polarity of the whole text-block image by judging the polarity of each segmented sub-image.
First, each sub-image is binarized with a traditional method, and the stroke width Stroke_width_origin of the characters in the sub-image is measured; the stroke widths Stroke_width_dilate and Stroke_width_erode of the characters after dilating and after eroding the binarized sub-image are also measured. The polarity of the sub-image is then judged by the following rule: if the character strokes widen after dilation and thin after erosion, the polarity of the sub-image (i.e., its foreground) is 1, otherwise 0:
Polarity = 1, if Stroke_width_dilate > Stroke_width_origin and Stroke_width_erode < Stroke_width_origin; Polarity = 0, otherwise.
Here Foreground denotes the sub-image foreground; white indicates sub-image polarity 1 (white characters on a dark background), and vice versa. Dilation and erosion use the same structuring element. Meanwhile, to ensure the objectivity of the stroke-width statistics, i.e., to adapt to sub-images of either polarity, the stroke width is preferably measured on the edge image: the edge image of the binarized sub-image is computed first, and the stroke width is then measured on that edge image.
Then the polarities of all sub-images are tallied, and the polarity of the whole text-block image is chosen by voting.
Specifically, if the number of sub-images with polarity 1 in a text-block image is greater than the number with polarity 0, the polarity of the text-block image is 1.
Once the polarity of the text-block image is determined to be 1, all sub-images it contains are considered to have polarity 1.
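The dilate/erode polarity rule can be sketched as follows; measuring stroke width as the mean horizontal run length of the binary foreground (rather than on the edge image, as the patent prefers) is a simplification for illustration:

```python
import numpy as np
from scipy import ndimage

def mean_run_length(bw):
    """Proxy for stroke width: mean horizontal run length of 1-pixels."""
    runs = []
    for row in bw.astype(int):
        padded = np.concatenate(([0], row, [0]))
        starts = np.where(np.diff(padded) == 1)[0]
        ends = np.where(np.diff(padded) == -1)[0]
        runs.extend((ends - starts).tolist())
    return float(np.mean(runs)) if runs else 0.0

def subimage_polarity(bw):
    """Rule from step 2: if dilation widens the strokes and erosion
    thins them, the white pixels are the foreground (polarity 1),
    otherwise 0. Binarization and structuring-element choices are
    simplifying assumptions."""
    w0 = mean_run_length(bw)
    wd = mean_run_length(ndimage.binary_dilation(bw))
    we = mean_run_length(ndimage.binary_erosion(bw))
    return 1 if wd > w0 and we < w0 else 0

# two thin vertical "strokes", white on black
bw = np.zeros((20, 30), dtype=bool)
bw[2:18, 5:7] = True
bw[2:18, 15:17] = True
print(subimage_polarity(bw))  # 1
```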
Step 3: according to the polarity of the whole text-block image and the inherent features of character strokes, automatically provide foreground and background points with high confidence as hard constraints for the graph cut.
First, consider the features of character strokes: 1) the strokes of the same character generally have the same stroke width; 2) the same stroke segment generally has the same color or brightness; 3) for readability, the color or brightness near a stroke generally differs from that of the stroke itself. Based on these features, each sub-image is scanned horizontally and vertically to obtain its brightness-variation waveform.
Then candidate foreground and background points are determined from the brightness-variation waveform and the polarity of the text-block image. For example, if the polarity of the text-block image is 1, i.e., the foreground is 1, the width of a candidate stroke peak should lie between 1 and 7 pixels: peaks satisfying this condition are chosen as candidate strokes (foreground), and troughs as candidate background. Conversely, if the polarity is 0, i.e., the foreground is 0, troughs satisfying the above width condition are chosen as candidate foreground and peaks as candidate background. Preferably, only the points whose brightness lies above the mean brightness of the peak are selected as candidate stroke (foreground) points, and the rest as candidate background.
Finally, these candidate foreground and background points are clustered, and the points close to the cluster centers are taken as the hard-constraint foreground and background points for the graph cut.
This is because the closer a point is to a cluster center, the more likely it belongs to the foreground or background, i.e., the higher its confidence.
Fig. 3 shows how foreground and background pixels with high confidence are obtained for a sub-image according to the polarity of the text-block image combined with the inherent features of characters. In Fig. 3, the leftmost image is the original image; the middle image gives the brightness-variation curves of four rows obtained by horizontally scanning the original image. Since the polarity of the text-block image is 1, the peaks marked with black arrows correspond to candidate foreground points in the original image, and the rest are candidate background. Candidate foreground and background points are acquired automatically by this principle, and the high-confidence foreground and background points are then selected by clustering as hard constraints, as shown in the rightmost image of Fig. 3, where white denotes high-confidence foreground points, black denotes high-confidence background points, and the remaining pixels are shown in grey.
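A minimal sketch of this scanline criterion for the polarity-1 case; only the 1-7 pixel width window comes from the description, while thresholding each row at its mean brightness is an assumption made for the example:

```python
import numpy as np

def candidates_from_scanline(row, polarity=1, min_w=1, max_w=7):
    """Scan one image row: threshold at the row mean, then treat bright
    runs of plausible stroke width (1-7 px, per the description) as
    candidate foreground columns and the other runs as candidate
    background. For polarity 0 the dark runs are used instead."""
    bright = row > row.mean() if polarity == 1 else row < row.mean()
    padded = np.concatenate(([False], bright, [False])).astype(int)
    starts = np.where(np.diff(padded) == 1)[0]
    ends = np.where(np.diff(padded) == -1)[0]
    fore, back = [], []
    for s, e in zip(starts, ends):
        (fore if min_w <= e - s <= max_w else back).extend(range(s, e))
    return fore, back

row = np.array([10, 10, 200, 210, 10, 10, 220, 10, 10], dtype=float)
f, b = candidates_from_scanline(row)
print(f)  # [2, 3, 6] -- columns on the two narrow bright peaks
```

The candidate points gathered this way over all scanlines would then be clustered, keeping only points near the cluster centers as hard constraints.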
Step 4: according to the obtained hard constraints, apply corresponding soft constraints to each sub-image obtained by the coarse segmentation in step 1, propagate the hard constraints to the whole sub-image with the graph cut, and thus obtain the optimal segmentation of the sub-image.
First, the soft constraints for the graph cut are set according to the hard constraints obtained in step 3. All pixels of the sub-image are taken as the nodes of the "graph", with each node 8-connected to its neighboring pixels. Let P denote the set of nodes and L = {L_1, L_2, ..., L_p, ...} the segmentation labels of the nodes, where L_p = 1 if node p is foreground and L_p = 0 otherwise. The soft constraint of the graph cut is expressed by the loss function E(L), which comprises a region loss R(L) and a boundary loss B(L):
E(L) = λR(L) + B(L),
where λ reflects the relative weight of R(L) and B(L);
R(L) = Σ_{p∈P} R_p(L_p), with p a node of the graph;
B(L) = Σ_{{p,q}∈N} B_{p,q} · δ(L_p, L_q), where p and q are adjacent points in the graph, N is the set of neighboring pixel pairs, and B_{p,q} is the boundary loss of the two adjacent points; δ(L_p, L_q) is an indicator function that is 1 when p and q have different labels and 0 otherwise:
δ(L_p, L_q) = 1 if L_p ≠ L_q, 0 otherwise.
The region loss is the loss incurred by assigning a pixel to the foreground or the background. The region loss R_p(L_p) of each pixel comprises two parts, R_p(L_p) = R_p(0) + R_p(1), where R_p(1) is the loss of classifying the pixel as foreground and R_p(0) the loss of classifying it as background. It can therefore be stipulated that if the color of a pixel is close to the foreground color, the foreground loss R_p(1) should be small and the background loss R_p(0) large.
The region loss is computed concretely as follows:
1) Cluster the foreground and background points obtained in step 3 separately, say into n classes and m classes; the foreground cluster centers are Center{fore}_k (k = 1, ..., n) and the background cluster centers are Center{back}_k (k = 1, ..., m).
2) For a pixel p, compute its distance Dist{fore}_k^p to each foreground cluster center and Dist{back}_k^p to each background cluster center.
3) Then R_p(1) and R_p(0) are defined as:
R_p(1) = min_k { Dist{fore}_k^p }, k = 1, 2, ..., n,
R_p(0) = min_k { Dist{back}_k^p }, k = 1, 2, ..., m.
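The three steps above amount to a nearest-cluster-center distance; here is a sketch assuming Euclidean distance in RGB (the patent does not fix the metric):

```python
import numpy as np

def area_losses(pixel, fore_centers, back_centers):
    """Region term: R_p(1) is the distance to the nearest foreground
    cluster center, R_p(0) the distance to the nearest background
    cluster center. Euclidean distance in RGB is an assumption."""
    p = np.asarray(pixel, dtype=float)
    r1 = min(np.linalg.norm(p - np.asarray(c, float)) for c in fore_centers)
    r0 = min(np.linalg.norm(p - np.asarray(c, float)) for c in back_centers)
    return r1, r0

# a pixel close to the white foreground center gets a small R_p(1)
r1, r0 = area_losses((250, 250, 250),
                     [(255, 255, 255)],
                     [(10, 10, 10), (40, 40, 40)])
print(r1 < r0)  # True
```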
The boundary loss B(L) is the loss caused by discontinuity between neighboring pixels, i.e., a penalty on neighbor discontinuity: if neighboring pixels have similar features, B_{p,q} should be large, and small otherwise.
B_{p,q} can be set as a decreasing function of the distance between neighboring pixels p and q; here the following function is adopted:
B_{p,q} = exp( -(color_p - color_q)² / (2σ²) ),
where color_p and color_q are the R, G, B color features of pixels p and q respectively, and σ is a scale factor set to 0.25.
Then the optimal segmentation satisfying the hard constraints is found with the max-flow/min-cut algorithm. That is, under the hard constraints, the max-flow/min-cut algorithm is used to obtain the sub-image segmentation that minimizes the above loss function E(L) (the soft constraint); this is the optimal segmentation of the sub-image.
In other words, the graph cut diffuses the hard constraints obtained in step 3 over the whole sub-image through the soft constraints (boundary loss and region loss) defined in step 4: solving for the minimum of the loss function gives the min-cut segmentation of the sub-image.
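To illustrate the construction (a toy 1-D example, not the patent's implementation), the energy can be minimized with a standard s-t min cut; networkx stands in here for a dedicated max-flow library such as a Boykov-Kolmogorov implementation, and all numeric values are invented:

```python
import math
import networkx as nx

# Tiny 1-D "sub-image": grey values of 6 pixels. Pixel 1 is a hard
# foreground seed, pixel 4 a hard background seed (the hard constraints).
grey = [0.9, 0.95, 0.85, 0.2, 0.1, 0.15]
seeds_fg, seeds_bg = {1}, {4}
lam, sigma, INF = 1.0, 0.25, 1e9

G = nx.DiGraph()
for p, g in enumerate(grey):
    # region term: distance to assumed fg (1.0) / bg (0.0) cluster centers
    r_fg, r_bg = abs(g - 1.0), abs(g - 0.0)
    # cutting S->p puts p on the background side, costing lam*R_p(0);
    # cutting p->T puts p on the foreground side, costing lam*R_p(1).
    # Infinite capacity on a seed's terminal link enforces the hard constraint.
    G.add_edge('S', p, capacity=INF if p in seeds_fg else lam * r_bg)
    G.add_edge(p, 'T', capacity=INF if p in seeds_bg else lam * r_fg)
for p in range(len(grey) - 1):
    # boundary term B_{p,q} = exp(-(color_p - color_q)^2 / (2 sigma^2))
    b = math.exp(-(grey[p] - grey[p + 1]) ** 2 / (2 * sigma ** 2))
    G.add_edge(p, p + 1, capacity=b)
    G.add_edge(p + 1, p, capacity=b)

cut_value, (s_side, t_side) = nx.minimum_cut(G, 'S', 'T')
labels = [1 if p in s_side else 0 for p in range(len(grey))]
print(labels)  # [1, 1, 1, 0, 0, 0]
```

The cut separates the bright pixels (foreground side of the seeds) from the dark ones; a real implementation would build the same graph over an 8-connected pixel grid per sub-image.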
Step 5: merge the optimally segmented sub-images to obtain the whole text-segmentation image.
The optimally segmented sub-images are merged, i.e., the segmented white-on-black sub-images are stitched and merged into the final binary segmentation image, namely the text-segmentation image with white characters on a black background.
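Merging reduces to pasting each sub-image's binary mask back into a full-size canvas; the (r0, r1, c0, c1) box format is an assumption made for this sketch:

```python
import numpy as np

def merge_subimages(shape, boxes, masks):
    """Paste each segmented sub-image (binary mask, white text on black)
    back into a full-size canvas at its bounding box (r0, r1, c0, c1)."""
    out = np.zeros(shape, dtype=np.uint8)
    for (r0, r1, c0, c1), mask in zip(boxes, masks):
        out[r0:r1, c0:c1] = np.maximum(out[r0:r1, c0:c1], mask)
    return out

canvas = merge_subimages(
    (4, 8),
    [(0, 2, 0, 3), (2, 4, 4, 8)],
    [np.full((2, 3), 255, np.uint8), np.full((2, 4), 255, np.uint8)],
)
print(int(canvas.sum()))  # 3570 = 255 * 14 white pixels
```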
Fig. 4 shows the result after the sub-images are segmented by graph cut according to the hard constraints combined with the soft constraints, and the optimally segmented sub-images are then merged. In Fig. 4, the upper picture is the original input text-block image, and the lower picture is the whole text-segmentation image obtained by segmenting the sub-images with graph cut and merging the optimally segmented sub-images.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any transformation or replacement that a person familiar with this technology can readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (7)

1. A method for segmenting complex-background text images based on graph cut, characterized in that it comprises the following steps:
Step 1: coarsely segment the original text-block image into several sub-images;
Step 2: determine the polarity of the whole text-block image by judging the polarity of each sub-image;
Step 3: according to the polarity of the text-block image and the inherent features of character strokes, automatically provide foreground and background points with high confidence as hard constraints for the graph cut;
Step 4: according to the obtained hard constraints, apply corresponding soft constraints to each sub-image, propagate the hard constraints to the whole sub-image with the graph cut, and thus obtain the optimal segmentation of the sub-image;
Step 5: merge the optimally segmented sub-images to obtain the whole text-segmentation image;
the soft constraint in said step 4 is the loss function of the graph cut, and said loss function E(L) comprises a region loss R(L) and a boundary loss B(L):
E(L) = λR(L) + B(L),
where λ is the relative weight of R(L) and B(L);
said region loss R(L) is the loss incurred by assigning a pixel to the foreground or the background:
R(L) = Σ_{p∈P} R_p(L_p),
where p is a node of the graph and P is the set of nodes; L_p is the segmentation label of node p; the region loss R_p(L_p) of each pixel comprises two parts:
R_p(L_p) = R_p(0) + R_p(1),
where R_p(1) is the loss of classifying the pixel as foreground and R_p(0) is the loss of classifying it as background;
said boundary loss B(L) is the loss caused by discontinuity between neighboring pixels:
B(L) = Σ_{{p,q}∈N} B_{p,q} · δ(L_p, L_q),
where p and q are adjacent points in the graph, N is the set of neighboring pixel pairs, B_{p,q} is the boundary loss of the two adjacent points, and δ(L_p, L_q) is an indicator function.
2. the method for claim 1, is characterized in that, described step 1 is specially:
Ask for the edge image of urtext piece image, the edge image carries out connected domain analysis and obtains " seed " subgraph, is several subgraphs according to described " seed " subgraph with the rough segmentation of urtext piece image.
3. The method of claim 2, characterized in that, when the original text-block image is coarsely segmented into several sub-images according to the "seed" sub-images, to guarantee the completeness of the information, forced splitting is applied to all regions beyond the "seed" sub-image regions, so that all text is assigned to some sub-image.
4. the method for claim 1, is characterized in that, in described step 2, the polarity of judgement subgraph is specially:
Each subgraph is carried out initial binaryzation, add up the stroke width of subgraph word, and subgraph expands and the stroke width of the rear word of corrosion, attenuate if the stroke of the rear word of subgraph expansion broadens, corrodes rear stroke, the polarity of this subgraph is 1, otherwise is 0.
5. the method for claim 1, is characterized in that, determines in described step 2 that the polarity of whole text block image is specially:
According to the polarity of subgraph, by choosing the polarity of whole text block image in a vote.
6. the method for claim 1, is characterized in that, described step 3 specifically comprises:
According to the feature that character stroke has, level, each subgraph of vertical scanning, obtain changing oscillogram corresponding to the brightness of each subgraph respectively;
The polarity that changes oscillogram and text block image according to brightness is determined candidate foreground point and background dot;
Cluster is carried out in candidate foreground point and background dot, get the prospect close to, the hard constraint point that background dot cuts as figure from cluster centre point.
7. the method for claim 1, it is characterized in that, cut with figure in described step 4 hard constraint is propagated into whole subgraph, and then the optimum segmentation that obtains subgraph is specially: use max-flow/minimal cut algorithm obtains making the subgraph segmentation result of soft-constraint minimum under the hard constraint of step 3, is the optimum segmentation result of subgraph.
CN 201110322549 2011-10-21 2011-10-21 Method for segmenting complex background text images based on image segmentation Active CN102332097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110322549 CN102332097B (en) 2011-10-21 2011-10-21 Method for segmenting complex background text images based on image segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110322549 CN102332097B (en) 2011-10-21 2011-10-21 Method for segmenting complex background text images based on image segmentation

Publications (2)

Publication Number Publication Date
CN102332097A CN102332097A (en) 2012-01-25
CN102332097B true CN102332097B (en) 2013-06-26

Family

ID=45483866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110322549 Active CN102332097B (en) 2011-10-21 2011-10-21 Method for segmenting complex background text images based on image segmentation

Country Status (1)

Country Link
CN (1) CN102332097B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855636B (en) * 2012-06-26 2015-01-14 北京工业大学 Optimization method for foreground segmentation problem
CN103218810B (en) * 2013-03-27 2016-04-20 华北电力大学 A kind of power tower bar image meaning of one's words dividing method
CN103310450B (en) * 2013-06-17 2016-12-28 北京工业大学 A kind of image partition method merging direct-connected commensurability bundle
CN103927533B (en) * 2014-04-11 2017-03-01 北京工业大学 The intelligent processing method of graph text information in a kind of scanned document for earlier patents
US9633444B2 (en) 2014-05-05 2017-04-25 Xiaomi Inc. Method and device for image segmentation
CN105160300B (en) * 2015-08-05 2018-08-21 山东科技大学 A kind of text abstracting method based on level-set segmentation
CN108734712B (en) * 2017-04-18 2020-12-25 北京旷视科技有限公司 Background segmentation method and device and computer storage medium
CN108171237A (en) * 2017-12-08 2018-06-15 众安信息技术服务有限公司 A kind of line of text image individual character cutting method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1588431A (en) * 2004-07-02 2005-03-02 清华大学 Character extracting method from complecate background color image based on run-length adjacent map
CN101030257A (en) * 2007-04-13 2007-09-05 中国传媒大学 File-image cutting method based on Chinese characteristics


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Combined Algorithm for Video Text Extraction;Xin Zhang, et al.;《2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2010)》;20101231;2294-2298 *
An Extraction Method of Video Text in Complex Background;Jingchao Zhou, et al.;《International Conference on Computational Intelligence and Multimedia Applications 2007》;20071231;355-359 *
A Method for Text Extraction from Complex-Background Images Based on Conditional Random Fields;Li Minhua, et al.;《Pattern Recognition and Artificial Intelligence》;20091231;Vol. 22(No. 6);827-832 *

Also Published As

Publication number Publication date
CN102332097A (en) 2012-01-25

Similar Documents

Publication Publication Date Title
CN102332097B (en) Method for segmenting complex background text images based on image segmentation
CN101453575B (en) Video subtitle information extracting method
CN102567300B (en) Picture document processing method and device
CN102915438B (en) The extracting method of a kind of video caption and device
CN111860348A (en) Deep learning-based weak supervision power drawing OCR recognition method
CN103020618B (en) The detection method of video image character and system
CN101122953B (en) Picture words segmentation method
CN102663377A (en) Character recognition method based on template matching
CN102663382B (en) Video image character recognition method based on submesh characteristic adaptive weighting
CN102968635B (en) Image visual characteristic extraction method based on sparse coding
CN105608456A (en) Multi-directional text detection method based on full convolution network
CN102968637A (en) Complicated background image and character division method
CN104809481A (en) Natural scene text detection method based on adaptive color clustering
CN101266654A (en) Image text location method and device based on connective component and support vector machine
CN105447522A (en) Complex image character identification system
CN103336961A (en) Interactive natural scene text detection method
CN104156706A (en) Chinese character recognition method based on optical character recognition technology
CN102629322A (en) Character feature extraction method based on stroke shape of boundary point and application thereof
CN111553346A (en) Scene text detection method based on character region perception
CN103049756A (en) Method for automatically extracting and removing words in color image on basis of CEMA (Cellular Message Encryption Algorithm) and texture matching repairing technology
CN103632153A (en) Region-based image saliency map extracting method
CN102136074B (en) Man-machine interface (MMI) based wood image texture analyzing and identifying method
CN112598004A (en) English composition test paper layout analysis method based on scanning
CN110751606A (en) Foam image processing method and system based on neural network algorithm
CN104834891A (en) Method and system for filtering Chinese character image type spam

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190703

Address after: 100098 Beijing Haidian District Zhichun Road 56 West District 8 Floor Central 801-803

Patentee after: INFAN TECHNOLOGY (BEIJING) CO., LTD.

Address before: 100190 Zhongguancun East Road, Haidian District, Beijing

Patentee before: Institute of Automation, Chinese Academy of Sciences