CN110544222A - Visual transmission image sharpening processing method and system - Google Patents

Visual transmission image sharpening processing method and system

Info

Publication number
CN110544222A
CN110544222A (application CN201910838496.2A; granted as CN110544222B)
Authority
CN
China
Prior art keywords
image
character
characters
sub
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910838496.2A
Other languages
Chinese (zh)
Other versions
CN110544222B (en)
Inventor
朱艳华 (Zhu Yanhua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ruixin Exhibition Co Ltd
Original Assignee
Chongqing Ruixin Exhibition Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Ruixin Exhibition Co Ltd
Priority to CN201910838496.2A
Publication of CN110544222A
Application granted
Publication of CN110544222B
Legal status: Active (granted)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5846Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

A visual transmission image sharpening method divides each character in an electronic image into equal sub-areas, forming multiple groups of sub-area data. The sub-area data of every character region in the electronic image is compared with the corresponding sub-area data of the characters stored in a database; the stored character whose sub-area data matches the largest number of the region's sub-areas is selected as the character for that region, and the selected characters, together with their character regions, are stored and shown on a display panel. The character content is thereby displayed clearly on the display panel.

Description

Visual transmission image sharpening processing method and system
Technical Field
The invention relates to the technical field of visual image processing, and in particular to a visual transmission image sharpening processing method and system.
Background
Written relics are among the most widely known and studied categories of ancient cultural heritage and are an important channel for understanding recorded history. Many written relics are exhibited after excavation, both to popularize history and culture and to make the artifacts available for study. Owing to their age, however, many written relics emerge from the ground worn or damaged, which hinders an accurate reading of history. In addition, the content of some inscriptions, such as stone engravings, is difficult to make out. In the prior art, exhibitors therefore restore written relics so that visitors can view and study them, and how to present the content of written relics clearly to the public has become a technical problem that exhibitors of cultural relics need to solve.
Disclosure of Invention
The object of the invention is to provide a visual transmission image sharpening processing method that can clearly present the content of written relics in the form of an electronic image.
The above object of the present invention is achieved by the following technical solutions:
A visual transmission image sharpening processing method comprises the following steps:
Step 1: collecting an electronic image of the article;
Step 2: processing the electronic image through a convolution operation and a deconvolution operation to generate an intermediate image;
Step 3: filtering out the damaged areas in the intermediate image to generate an image to be processed;
Step 4: adding to each character region a rectangular standard frame surrounding the character region, together with a maximum rectangular frame whose range is the largest among all standard frames, the diagonals of the standard frame and the maximum rectangular frame intersecting at the same point;
Step 5: dividing each maximum rectangular frame in a standard way into at least 12 sub-areas of equal size;
Step 6: calling up database information, comparing the sub-area data of each character region with the corresponding sub-area data of the characters in the database, and selecting the stored character whose sub-area data matches the largest number of the region's sub-areas as the character of that region;
Step 7: storing the characters selected in step 6 together with their character regions and displaying them on a display panel.
With this scheme, the convolution and deconvolution operations perform deep processing of the textual content of the image. After the damaged areas have been filtered out, only the textual content remains. The rectangular frames select each piece of textual content, and the standard division splits every character into several sub-areas. If the content of all sub-areas of a single character region matches the data of the corresponding sub-areas pre-stored in the database, the region is identified as that pre-stored character, and the character content is then displayed clearly on the display panel.

If some characters are incomplete because the article under examination is damaged, the remaining content of an incomplete character is compared with the database content of the corresponding sub-areas of the pre-stored characters, and the pre-stored character with the largest number of matching sub-areas is selected as the recognized character. In this way incomplete characters on the article are completed, and the content of the written relic is presented clearly as an electronic image. When the database information is selected, a character database matching the script style shown on the relic under examination, such as the calligraphy of Fuxi or of Su Shi, can be chosen.
As an improvement of the invention, a plurality of sets of database information are provided according to calligraphy type.
With this scheme, the kind of characters to be recognized can be specified.
As an improvement of the present invention, in step 7 the character-selection processing further comprises:
when exactly one stored character has the largest number of sub-areas matching the sub-area data of the character region, selecting that pre-stored character as the chosen character;
when more than one stored character shares the largest number of matching sub-areas, storing all of the tied characters, displaying them on the display panel, and choosing among them by manual screening.
With this scheme, the added manual screening step lets the content of the written relic be displayed more clearly and avoids errors in the electronic recognition process.
As an improvement of the present invention, in step 4 the character regions in the image to be processed are extracted by a region proposal network.
As an improvement of the present invention, step 1 comprises the following steps:
Step 1-1: controlling the inclination angle between the light source and the object under examination to meet a preset requirement;
Step 1-2: rotating the light source;
Step 1-3: adjusting the brightness of the light source;
Step 1-4: rotating the object under examination.
As an improvement of the present invention, step 2 comprises the following steps:
Step 2-1: first processing the acquired image through a convolution operation to obtain a feature map;
Step 2-2: second processing the feature map through a deconvolution operation to obtain the intermediate image.
With this scheme, the convolution and deconvolution operations make the edges and corners of the image more distinct and clean up the image, providing a technical basis for filtering out its damaged areas.
Another object of the present invention is to provide a visual transmission image sharpening processing system that can present the content of written relics clearly in the form of an electronic image.
The above object of the present invention is achieved by the following technical solutions:
A visual transmission image sharpening processing system comprises:
an image acquisition module for photographing the object under examination and generating an electronic image;
at least one processor; and
at least one memory for storing at least one program,
wherein, when the program is executed by the processor, the processor carries out steps 2 to 7 of the visual transmission image sharpening processing method described above.
With this scheme, the image acquisition module acquires the electronic image, the memory stores the program, and the processor, executing the stored program, applies the visual transmission image sharpening processing method to images of written relics, so that their content is presented clearly as an electronic image.
As an improvement of the invention, the image acquisition module comprises a shooting table on which the object under examination is placed, sidelights that illuminate the object from the side, and a binocular camera that acquires images of the object.
With this scheme, illuminating the written relic with sidelights avoids the strong reflections caused by direct light, so a clearer electronic image is obtained.
In conclusion, the beneficial technical effects of the invention are as follows:
Because the character regions are divided in a standard way and characters are recognized by comparing sub-areas, complete characters can be recognized and data support is provided for the later restoration of incomplete characters, which helps display written relics clearly.
Drawings
FIG. 1 is a simplified view of the internal structure of an image acquisition box;
FIG. 2 is a system diagram of a visual presentation image sharpening processing system;
FIG. 3 is a flow chart of a method of visual communication image sharpening processing;
FIG. 4 is an exemplary diagram of different equal divisions of pre-stored characters in the database by the equal-division model.
In the figure, 1, an image acquisition module; 11. side light lamps; 12. a shooting table; 13. a rotating electric machine; 14. a binocular camera; 2. a collection box.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings.
Referring to FIGS. 1 and 2, the visual transmission image sharpening processing system disclosed by the invention comprises an image acquisition module 1, a processor and a memory. The image acquisition module comprises a shooting table 12 on which the object under examination is placed, sidelights 11 that illuminate the object from the side, and a binocular camera 14 that acquires images of the object.
Referring to FIG. 1, the shooting table 12, the sidelights 11 and the binocular camera 14 are all disposed in an image collection box 2, and the lower part of the shooting table 12 is fixedly connected to the bottom of the collection box 2 through a rotating motor 13, so the horizontal angle of the shooting table 12 can be adjusted while the rotating motor 13 runs. There are at least four sidelights 11, four in this embodiment, arranged respectively on the four side walls of the rectangular image collection box 2.
Vertical slideways are provided on the four side walls of the image collection box 2, and the sidelight 11 on each wall is slidably connected to the corresponding slideway, so the position of a sidelight 11 can be adjusted by sliding it. Preferably, each slideway carries a vertical cylindrical rod, and the sidelight 11 is held in place on the slideway by friction against the rod.
Referring again to FIGS. 1 and 2, the memory stores the program, and the processor controls the rotating motor 13, the sidelights 11 and the binocular camera 14 according to that program. The numbers of memories and processors are not limited in this embodiment; one processor and one memory are preferred, the processor preferably being an embedded single-chip microcomputer.
The processor is also connected to a display panel, here a liquid crystal screen. Images captured by the binocular camera 14 are processed by the processor and displayed on the liquid crystal screen.
Referring again to FIG. 3, the image sharpening method disclosed in this embodiment, based on the above visual transmission image sharpening processing system, comprises the following steps.
Step 1: an electronic image of the article is collected.
Step 1-1: and controlling the inclination angle between the light source and the object to be measured to meet the preset requirement. By adjusting the height of the sidelight 11, the angle between the sidelight 11 and the shooting table 12 is adjusted, so that when the sidelight 11 irradiates an article on the shooting table 12, the article cannot be reflected by strong light to affect the collection of images.
Step 1-2: the light source is rotationally controlled.
the control of the light source here is an adjustment of the irradiation position of the light source. By passing through different positions of the object irradiated by the sidelight 11, strong reflection generated when the object is directly irradiated by a light source is further avoided.
Step 1-3: the light source is light-modulated.
The adjustment of the light source here comprises an adjustment of the brightness of the light source as well as an adjustment of the color of the light source. The illumination of different articles can be realized by adjusting the brightness of the light source; the adjustment of the color of the light source can realize the highlighting of characters in articles with different colors.
Step 1-4: the object under examination is rotated. According to the position of the article to be photographed, the processor drives the rotating motor 13 so that the article rotates horizontally. After the article has rotated, the processor controls the binocular camera 14 to acquire an image of the article and generate an electronic image.
step 2: the electronic image is processed by a convolution operation and a deconvolution operation to generate an intermediate image. Comprises the following steps of 2-1: carrying out first processing on the acquired image through convolution operation to obtain a characteristic diagram; step 2-2: and carrying out second processing on the characteristic diagram through deconvolution operation to obtain an intermediate image.
In the intermediate image produced by step 2, the displayed content has strong colour depth and is clear, and the damaged areas of the article stand out, which makes them easier to delete.
Step 3: the damaged areas in the intermediate image are filtered out to generate the image to be processed.
The method of filtering the damaged areas is not limited here; any method of filtering out irregular patterns falls within the scope of this embodiment. Preferably, the damaged areas in the intermediate image are selected manually: because electronic recognition of damage within the image has low precision, manual screening improves the reliability of the filtering to some extent.
Step 4: a rectangular standard frame surrounding the character region, and a maximum rectangular frame whose range is the largest among the standard frames, are added to each character region, with the diagonals of the standard frame and the maximum rectangular frame intersecting at the same point.
First, the character regions in the image to be processed are extracted by a region proposal network, and a minimum rectangular frame surrounding each character region is added. The processor then selects the rectangular frame with the largest range among these frames as the maximum rectangular frame and adds it around every character, with the diagonals of the maximum and minimum rectangular frames intersecting at the same point.
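A minimal numpy sketch of step 4, under the assumption that the overlapping "diagonal positions" mean the minimum box and the maximum frame share their diagonal intersection (centre): the minimum box is taken from a binary character mask, and one frame of the globally largest size is centred on each character. Both function names are hypothetical.

```python
import numpy as np

def min_bbox(mask):
    """Minimum rectangle (top, left, bottom, right) enclosing the True pixels."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(left), int(bottom) + 1, int(right) + 1

def max_frames(bboxes):
    """One frame per character, all with the size of the largest minimum box,
    centred so its diagonal intersection coincides with the character box's."""
    hmax = max(b - t for t, l, b, r in bboxes)
    wmax = max(r - l for t, l, b, r in bboxes)
    frames = []
    for t, l, b, r in bboxes:
        cy, cx = (t + b) / 2, (l + r) / 2
        frames.append((cy - hmax / 2, cx - wmax / 2,
                       cy + hmax / 2, cx + wmax / 2))
    return frames

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:8] = True          # a 3x5 blob of character pixels
print(min_bbox(mask))          # (2, 3, 5, 8)
```

Using one common frame size for all characters is what lets step 5 compare sub-areas of different characters cell by cell.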
Step 5: each maximum rectangular frame is divided in a standard way into at least 12 sub-areas of equal size.
The division value is set in advance by the operator through the processor. Here the value is preferably 12, i.e. the character region is divided into 12 standard sub-areas.
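The division of a frame into 12 equal sub-areas can be sketched in a few lines of numpy. The 4x3 grid layout is an assumption: the patent only fixes the number of sub-areas at (at least) 12, not their arrangement.

```python
import numpy as np

def divide_subareas(region, rows=4, cols=3):
    """Split a 2-D character region into rows*cols equal sub-areas (12 by
    default).  The region is cropped so its sides divide evenly; the 4x3
    layout is an assumption, the patent only fixes the count."""
    h = (region.shape[0] // rows) * rows
    w = (region.shape[1] // cols) * cols
    r = region[:h, :w]
    # Reshape into a (rows, cols) grid of equal tiles.
    tiles = r.reshape(rows, h // rows, cols, w // cols).swapaxes(1, 2)
    return tiles  # tiles[i, j] is the sub-area in grid row i, column j

region = np.arange(8 * 6).reshape(8, 6)
tiles = divide_subareas(region)
print(tiles.shape)  # (4, 3, 2, 2): 12 sub-areas of 2x2 pixels each
```

The reshape/swapaxes trick avoids any Python-level loop over cells, which matters once every database character is divided the same way.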
Step 6: and calling database information, comparing the sub-region data of each character region with the corresponding sub-region data of the characters in the database, and selecting the most characters stored in the database with the same number as the sub-region data of the character region as the characters of the corresponding character region.
A plurality of sets of database information are provided, one per calligraphy type. Before collecting images, the operator selects the calligraphy type through the processor to limit the kind of characters considered; the operator may also add other restrictions, such as the author of the characters. After the processor has selected the maximum rectangular frame, it builds an equal-division model from the maximum rectangular frame and the division value.

Referring to FIG. 4, during data comparison each character is identified by superimposing the equal-division model on the corresponding character of the database information. By shifting the position of the character, several groups of division data are formed for each character, each group corresponding to one sub-area; FIG. 4 shows only four of the division layouts for a 12-fold division. Depending on the size of the maximum rectangular frame, the character is shifted by at least one horizontal or one vertical pixel at a time, and the character never leaves the range of the equal-division model. During comparison, each group of division data of each character is compared in turn with the sub-area data formed by the division in step 5.
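The comparison in step 6 is not specified at the pixel level, so the sketch below is one hedged reading of it: each sub-area is reduced to a binary ink/no-ink signature, the database template is shifted a pixel at a time (here with a wrap-around roll, a simplification of the in-model shift described above), and the stored character with the most agreeing sub-areas wins. All names and the 0.5 ink threshold are assumptions.

```python
import numpy as np

def cell_signature(img, rows=4, cols=3):
    """Per-sub-area signature: 1 if the cell is mostly ink pixels, else 0."""
    h = (img.shape[0] // rows) * rows
    w = (img.shape[1] // cols) * cols
    t = img[:h, :w].reshape(rows, h // rows, cols, w // cols)
    return (t.mean(axis=(1, 3)) > 0.5).astype(int)

def match_score(region, template, max_shift=1):
    """Best number of agreeing sub-areas over small template shifts, mimicking
    the 'shift one pixel at a time' comparison (np.roll wraps around, which is
    a simplification of shifting within the equal-division model)."""
    sig = cell_signature(region)
    best = 0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(template, dy, axis=0), dx, axis=1)
            best = max(best, int((cell_signature(shifted) == sig).sum()))
    return best

def recognize(region, database):
    """Return the database character(s) with the most matching sub-areas."""
    scores = {name: match_score(region, tmpl) for name, tmpl in database.items()}
    top = max(scores.values())
    return [name for name, s in scores.items() if s == top], top

# Two toy 8x6 templates: ink on the left half vs. ink on the right half.
template_a = np.zeros((8, 6)); template_a[:, :3] = 1
template_b = np.zeros((8, 6)); template_b[:, 3:] = 1
database = {"A": template_a, "B": template_b}
region = template_a.copy()                  # an undamaged character
print(recognize(region, database))          # (['A'], 12)
```

An incomplete character simply scores fewer than 12 matches; the character with the largest count is still selected, which is how the completion described above falls out of the same comparison.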
Step 7: the characters selected in step 6 and the corresponding character regions are stored and displayed on the liquid crystal screen.
When exactly one stored character has the largest number of sub-areas matching the sub-area data of the character region, that pre-stored character is selected; when more than one stored character shares the largest number of matching sub-areas, all of the tied characters are stored and displayed on the display panel, and the final character is chosen by manual screening.
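The selection rule of step 7 (automatic choice for a unique best match, manual screening for ties) can be stated as a tiny helper. The (names, score) input pair and the returned dictionary are illustrative assumptions, not the patented data format.

```python
def select_characters(candidates):
    """Step 7 selection rule: candidates is a (names, score) pair, e.g. the
    output of a recognition step.  A unique best match is chosen
    automatically; ties are all kept and flagged for manual screening on the
    display panel."""
    names, score = candidates
    if len(names) == 1:
        return {"selected": names[0], "score": score, "manual_review": False}
    return {"selected": names, "score": score, "manual_review": True}

print(select_characters((["char_a"], 12)))            # unique best: automatic
print(select_characters((["char_a", "char_b"], 11)))  # tie: manual screening
```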
As described above, dividing the characters into regions enables region-by-region comparison between the characters in the image and the characters pre-stored in the database, and thus recognition of the characters in the image. The method not only recognizes complete characters but can, to a certain extent, recognize incomplete ones, reducing the workload of restoring written relics and achieving digital, intelligent, clear recognition of them.
The embodiments above are preferred embodiments of the present invention, and the scope of protection of the invention is not limited by them: all equivalent changes made according to the structure, shape and principle of the invention fall within its scope of protection.

Claims (8)

1. A visual transmission image sharpening processing method, characterized by comprising the following steps:
Step 1: collecting an electronic image of the article;
Step 2: processing the electronic image through a convolution operation and a deconvolution operation to generate an intermediate image;
Step 3: filtering out the damaged areas in the intermediate image to generate an image to be processed;
Step 4: adding a minimum rectangular frame surrounding the character region to each character region;
Step 5: dividing the rectangular frame in a standard way into at least 12 sub-areas of equal size;
Step 6: calling up database information, comparing the sub-area data of each character region with the corresponding sub-area data of the characters in the database, and selecting the stored character whose sub-area data matches the largest number of the region's sub-areas as the character of that region;
Step 7: storing the characters selected in step 6 together with their character regions and displaying them on a display panel.
2. The method according to claim 1, wherein a plurality of sets of database information are provided according to calligraphy type.
3. The method according to claim 2, wherein the character-selection processing in step 7 further comprises:
when exactly one stored character has the largest number of sub-areas matching the sub-area data of the character region, selecting that pre-stored character as the chosen character;
when more than one stored character shares the largest number of matching sub-areas, storing all of the tied characters, displaying them on the display panel, and choosing among them by manual screening.
4. The method according to claim 1, wherein in step 4 the character regions in the image to be processed are extracted by a region proposal network.
5. The method according to claim 4, wherein step 1 comprises the following steps:
Step 1-1: controlling the inclination angle between the light source and the object under examination to meet a preset requirement;
Step 1-2: rotating the light source;
Step 1-3: adjusting the brightness of the light source;
Step 1-4: rotating the object under examination.
6. The method according to claim 1, wherein step 2 comprises the following steps:
Step 2-1: first processing the acquired image through a convolution operation to obtain a feature map;
Step 2-2: second processing the feature map through a deconvolution operation to obtain the intermediate image.
7. A visual transmission image sharpening processing system, characterized by comprising: an image acquisition module (1) for photographing the object under examination and generating an electronic image; at least one processor; and at least one memory for storing at least one program, wherein, when the program is executed by the processor, the processor carries out steps 2 to 7 of the visual transmission image sharpening processing method according to any one of claims 1 to 6.
8. The visual transmission image sharpening processing system according to claim 7, wherein the image acquisition module (1) comprises a shooting table (12) on which the object under examination is placed, sidelights (11) that illuminate the object from the side, and a binocular camera (14) that acquires images of the object.
CN201910838496.2A 2019-09-05 2019-09-05 Visual transmission image sharpening processing method and system Active CN110544222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838496.2A CN110544222B (en) 2019-09-05 2019-09-05 Visual transmission image sharpening processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910838496.2A CN110544222B (en) 2019-09-05 2019-09-05 Visual transmission image sharpening processing method and system

Publications (2)

Publication Number Publication Date
CN110544222A true CN110544222A (en) 2019-12-06
CN110544222B CN110544222B (en) 2023-01-03

Family

ID=68712588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838496.2A Active CN110544222B (en) 2019-09-05 2019-09-05 Visual transmission image sharpening processing method and system

Country Status (1)

Country Link
CN (1) CN110544222B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102054389A (en) * 2009-11-04 2011-05-11 郑阿奇 Video identification method for structure-damaged character codes
JP2014203249A (en) * 2013-04-04 2014-10-27 株式会社東芝 Electronic apparatus and data processing method
CN106663212A (en) * 2014-10-31 2017-05-10 欧姆龙株式会社 Character recognition device, character recognition method, and program
CN107515920A (en) * 2017-08-22 2017-12-26 湖北大学 A kind of image big data analysis method based on dynamic aerial survey
CN107622104A (en) * 2017-09-11 2018-01-23 中央民族大学 A kind of character image identification mask method and system
CN108154136A (en) * 2018-01-15 2018-06-12 众安信息技术服务有限公司 For identifying the method, apparatus of writing and computer-readable medium
CN108334487A (en) * 2017-07-14 2018-07-27 腾讯科技(深圳)有限公司 Lack semantics information complementing method, device, computer equipment and storage medium
CN109726389A (en) * 2018-11-13 2019-05-07 北京邮电大学 A kind of Chinese missing pronoun complementing method based on common sense and reasoning
CN109801287A (en) * 2019-01-30 2019-05-24 温州大学 A kind of labeling damage testing method based on template matching and image quality measure
CN109919157A (en) * 2019-03-28 2019-06-21 北京易达图灵科技有限公司 A kind of vision positioning method and device
CN110175603A (en) * 2019-04-01 2019-08-27 佛山缔乐视觉科技有限公司 A kind of engraving character recognition methods, system and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SS KOTA ET AL.: "Digital Enhancement of Indian Manuscript, Yashodar Charitra", The Sixth International Conference on Wireless & Mobile Networks (WiMoNe 2014) *
VALENTINA GIUFFRA ET AL.: "A New Case of Ancient Restoration on an Egyptian Mummy", The Journal of Egyptian Archaeology *
XU YUANYUAN: "A Supplementary Study of the Fragments of a Shijing Manuscript Unearthed in Xinjiang", Wenxian (文献) *
XIE SHANSHAN ET AL.: "A Large-Scale Image Set Retrieval Algorithm Fusing Semantics and Images", Journal of Chongqing University of Technology (Natural Science) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291761A (en) * 2020-02-17 2020-06-16 北京百度网讯科技有限公司 Method and device for recognizing characters
CN111291761B (en) * 2020-02-17 2023-08-04 北京百度网讯科技有限公司 Method and device for recognizing text

Also Published As

Publication number Publication date
CN110544222B (en) 2023-01-03

Similar Documents

Publication Publication Date Title
US11562473B2 (en) Automated system and method for clarity measurements and clarity grading
CN104825127B (en) A kind of dynamic visual acuity detection method
CN103196917B (en) Based on online roll bending material surface blemish detection system and the detection method thereof of CCD line-scan digital camera
CN102590218B (en) Device and method for detecting micro defects on bright and clean surface of metal part based on machine vision
CN108416765B (en) Method and system for automatically detecting character defects
CN110514406B (en) Detection method of LED lamp panel, electronic equipment and storage medium
CN1110001C (en) Seal imprint verifying apparatus
CN111474177A (en) Liquid crystal screen backlight foreign matter defect detection method based on computer vision
CN109345528A (en) A kind of display screen defect inspection method and device based on human-eye visual characteristic
CN111160261A (en) Sample image labeling method and device for automatic sales counter and storage medium
US11170536B2 (en) Systems and methods for home improvement visualization
CN114486903B (en) Gray-scale self-adaptive coiled material detection system, device and algorithm
CN114264661B (en) Definition self-adaptive coiled material detection method, device and system
CN107966836A (en) TFT-L CD defect optical automatic detection system
CN110544222B (en) Visual transmission image sharpening processing method and system
CN110648301A (en) Device and method for eliminating imaging reflection
CN114689591A (en) Coiled material detection device, system and detection method based on line scanning camera
CN108184286A (en) The control method and control system and electronic equipment of lamps and lanterns
CN206282334U (en) A kind of real-time automatic counter system of electronic component based on image procossing
CN105699387A (en) Electronic product appearance defect detection system
CN109410197A (en) A kind of method and device positioning liquid crystal display detection zone
CN108235831A (en) The control method and control system and electronic equipment of lamps and lanterns
CN110210401B (en) Intelligent target detection method under weak light
CN112488986A (en) Cloth surface flaw identification method, device and system based on Yolo convolutional neural network
CN1271569C (en) Apparatus and method to locate an object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant