CN113780281A - Water meter reading identification method based on character template - Google Patents
Water meter reading identification method based on character template
- Publication number
- CN113780281A (application number CN202111079798.XA)
- Authority
- CN
- China
- Prior art keywords
- character
- water meter
- template
- value
- line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A20/00—Water conservation; Efficient water supply; Efficient water use
Landscapes
- Character Discrimination (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a water meter reading identification method based on a character template. A camera device at the meter end performs horizontal projection on the captured image; reading identification is then carried out against the horizontal projection curve of the water meter's character template, and the identification result is finally sent to a back-end server. During template matching, the method does not traverse the template line by line; instead, it matches directly against the recorded top and bottom positions of the characters in the strip-shaped template, which reduces the amount of computation.
Description
Technical Field
The invention relates to the technical field of water meter image recognition, and in particular to a water meter reading recognition method based on a character template.
Background
The camera direct-reading water meter adds an image acquisition device to an ordinary water meter: a small digital camera photographs the character wheels of the meter, digital image processing converts the photographs into digital signals, and the data is then transmitted over a bus to complete the meter reading, truly achieving the goal of reading the meter as a human eye would. The camera direct-reading water meter has many advantages, such as remote direct reading and convenience.
Although the camera direct-reading water meter has broad application prospects, it also has considerable drawbacks. Most manufacturers capture and compress images at the meter end, transmit them to an upper computer, and complete identification and processing in the background, so the volume of data to be transmitted is large. Even the better existing compression techniques can only compress an image to about 1-3 kB, which is still a heavy load for the original data bus, so the original centralized meter-reading communication line cannot be reused. A few manufacturers do perform identification at the meter end, but the technology is still immature and cannot meet market demand. Moreover, once a camera is added to an ordinary water meter, its power consumption is high and supplying power is very difficult; the prior art does not solve the camera power-supply problem well.
Disclosure of Invention
Aiming at the problems in the prior art, and starting from how to perform the reading at the meter end while reducing meter-end battery consumption, the invention provides a water meter reading identification method based on a character template, which reads the water meter accurately while reducing meter-end battery consumption.
The invention protects a water meter reading identification method based on a character template. A camera device at the meter end performs horizontal projection on the captured image; reading identification is carried out according to the horizontal projection curve of the water meter's character template, and the identification result is finally sent to a back-end server.
Further, the character template horizontal projection curve of the water meter is produced by the back-end server and sent to the meter end.
Furthermore, the character template horizontal projection curve of the water meter is produced through the following steps:
Step A1, the camera device at the meter end transmits the captured image to the back-end server.
Step A2, the back-end server identifies the model of the water meter and locates the four corners of the character wheel area and the four corners of each character wheel; the included angle of the whole character wheel area relative to the horizontal is calculated from the positions of the four corners of the character wheel area.
Step A3, the back-end server sends the four corner positions of each character wheel to the meter end.
Step A4, according to the recognized water meter model and the inclination angle of the water meter character wheel area, the character template corresponding to that water meter is inclined by the same angle;
further, the manufacturing steps of the character template of the water meter are as follows:
step A4.1: installing a water meter character extraction device on a water meter of a character template to be extracted;
step A4.2: selecting a character at a certain position in the water meter, and adjusting the selected character to an initial position;
further, the start position is that the center line position of the numerals 0 and 9 is located at the center of the character frame.
Step A4.3: the water meter character extraction device photographs the rotating water meter in real time to obtain N frames of images;
furthermore, a blower is aimed at the water inlet of the water meter and blows air at a constant speed, so that the water meter rotates.
Further, the frame rate of the water meter character extraction device is 25 fps.
Step A4.4: constructing a background picture;
further, the specific process of step a4.4 is as follows:
step A4.4.1: uniformly extracting n frames of images obtained in the step A4.3;
step A4.4.2: for each pixel position, extracting the n pixel values at that position across the n frames, selecting the smallest 10% of those values, and taking their average as the value of that pixel in the background image;
step A4.4.3: step A4.4.2 is repeated until all pixel positions have been traversed, resulting in the background image.
Step A4.5: each of the N frames is differenced against the background image pixel by pixel; pixels whose difference exceeds the threshold m are set to a brightness of 255 and pixels whose difference does not exceed m are set to 0, yielding N foreground images, from which N current images are then obtained by positioning and cropping;
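The background construction and foreground extraction of steps A4.4-A4.5 can be sketched as follows. This is a minimal illustration assuming grayscale NumPy frames; the function names and the default threshold m=30 are illustrative, while the "smallest 10%" averaging rule comes from step A4.4.2.

```python
import numpy as np

def build_background(frames, keep_frac=0.10):
    """Step A4.4: for every pixel position, average the smallest ~10%
    of the values observed across the sampled frames and use that as
    the background pixel (the moving wheel is assumed brighter)."""
    stack = np.stack(frames).astype(np.float64)   # shape (n, H, W)
    stack.sort(axis=0)                            # per-pixel ascending order
    k = max(1, int(round(len(frames) * keep_frac)))
    return stack[:k].mean(axis=0)                 # mean of the darkest values

def foreground_mask(frame, background, m=30):
    """Step A4.5: pixels differing from the background by more than
    threshold m become 255, the rest 0 (m here is an assumed value)."""
    diff = np.abs(frame.astype(np.float64) - background)
    return np.where(diff > m, 255, 0).astype(np.uint8)
```

The current images would then be obtained by locating and cropping the white regions of each mask, which the text leaves to the positioning step.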
step A4.6: selecting 10 current graphs with the best character positions and 10 character spacing values from the N frames of current graphs through a connected domain;
wherein the 10 character spacing values are the spacings between 0 and 1, between 1 and 2, and so on up to the spacing between 9 and 0, 10 values in total.
Further, the specific process of step A4.6 is as follows: first, the connected domains in the N current images are extracted; then, in shooting order, the number of connected domains in each current image is judged. If there is only one connected domain, the current images of 10 different characters whose connected-domain center points are closest to the horizontal center line of the character wheel frame are selected; if there are two connected domains, 10 different current images are selected in which the horizontal center line between the lower edge of the upper connected domain and the upper edge of the lower connected domain is closest to the horizontal center line of the character wheel frame, each such current image containing two different half-wheel characters; the 10 character spacing values are then obtained by measurement.
Step A4.7: the 10 character current images are arranged in the order 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 (the image of character 0 is reused at the end) and spliced according to the character spacing values to form the water meter character template.
Step A5, the brightness values of the character template are calibrated so that the brightness of the character template is consistent with the meter end.
Further, the specific process of step A5 is as follows:
step A5.1, according to the character template, extracting from the back-end server the 10 original captured images corresponding to the best-positioned current images used to make the character template, and calculating the average brightness value of each of the 10 images;
step A5.2, calculating the mean of the 10 average brightness values and outputting the template average brightness mean_temp;
step A5.3, calculating the average brightness value of the image to be identified in step 1, and outputting the current average brightness mean_curr;
step A5.4, calculating the difference between the current average brightness and the template average brightness, and outputting the current difference mean_diff = mean_curr - mean_temp;
step A5.5, performing brightness calibration on each pixel of the character template according to the current difference, and outputting the new brightness value v_new = v + mean_diff;
where v represents the brightness value of the character template before calibration.
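Steps A5.1-A5.5 amount to a single additive brightness shift, which can be sketched as follows (a minimal illustration assuming grayscale NumPy images; the `template_mean` parameter stands in for the mean brightness of the 10 original shots when those are not at hand, which is an assumption not made in the text):

```python
import numpy as np

def calibrate_template(template, current_image, template_mean=None):
    """Steps A5.1-A5.5: shift every template pixel by the difference
    between the current image's mean brightness (mean_curr) and the
    template's mean brightness (mean_temp): v_new = v + mean_diff."""
    mean_temp = float(template.mean()) if template_mean is None else template_mean
    mean_curr = float(current_image.mean())
    mean_diff = mean_curr - mean_temp
    return template.astype(np.float64) + mean_diff
```

Because the shift is the same for every pixel, the calibration could equivalently be applied once to the template's projection curve rather than to each pixel.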
Step A6, the horizontal projection curve of the character template is made;
further, step A6 specifically comprises: adding up the pixel values of each row of the character template to obtain that row's pixel value, all the row pixel values together forming the horizontal projection curve of the character template; and setting a threshold to detect the upper and lower boundaries of each character.
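The projection and boundary detection of step A6 might look like the sketch below (assuming a grayscale NumPy image; the boundary threshold is a parameter the text does not fix):

```python
import numpy as np

def horizontal_projection(img):
    """Step A6: sum the pixel values of each row; the vector of row
    sums is the horizontal projection curve."""
    return img.astype(np.int64).sum(axis=1)

def character_bounds(projection, thresh):
    """Detect the upper and lower boundary of each character as the
    (top, bottom) row indices of runs where the curve exceeds thresh."""
    above = projection > thresh
    bounds, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                       # run of character rows begins
        elif not flag and start is not None:
            bounds.append((start, i - 1))   # run ends on the previous row
            start = None
    if start is not None:
        bounds.append((start, len(above) - 1))
    return bounds
```

Recording these (top, bottom) pairs once per template is what later lets the meter end skip the line-by-line traversal during matching.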
Further, the specific reading identification process is as follows:
step B1, setting a projection threshold and a gap width threshold;
wherein, if the horizontal projection curve of the image contains a run of consecutive gap lines longer than the gap width threshold, the image contains a character gap and two half character wheels; otherwise the image is a complete character.
Step B2, judging whether the line projection value of each line of the horizontal projection of the shot image is lower than the projection threshold value, if so, taking the line as a gap line; if the number of the continuous gap lines is larger than the gap width threshold value, determining that a character gap exists in the shot image, otherwise, determining that a character gap area does not exist in the shot image;
step B3, if a character gap exists, first intercepting the projection curve of the upper half character and aligning the bottom of the intercepted curve with the horizontal-projection bottom position of each character in the character template; then calculating the difference between each row's projection value of the intercepted curve and the projection value at the aligned position of the character template, and adding up the row differences to obtain the total differences of the upper half character, sum_up0, …, sum_up9, 10 values of sum_up in total; next, intercepting the projection curve of the lower half character, aligning the top of the intercepted curve with the horizontal-projection top position of each character in the character template, calculating the row differences in the same way, and adding them up to obtain the total differences of the lower half character, sum_bottom0, …, sum_bottom9, 10 values of sum_bottom in total; then calculating the error sum of the captured image at each gap position, sum_total0 = sum_up0 + sum_bottom1, …, sum_total9 = sum_up9 + sum_bottom0; finally, comparing the 10 error sums, selecting the position with the smallest error sum as the position of the captured image, and converting the distance between the vertical center line of the gap region between the upper and lower half characters and the vertical center line of the character wheel frame to obtain the recognition result of the image;
step B4, if there is no character gap, the image contains only one character, and the projection curve of that character is first intercepted; then the intercepted projection curve is compared with the projection curve of each character in the character template, the difference of each row's projection value is calculated, and the row differences are added up to obtain 10 total differences, sum_total0, …, sum_total9; finally, the character with the smallest total difference is selected, and the distance between that character's vertical center line and the vertical center line of the character frame is converted to obtain the recognition result of the image.
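The matching of steps B3-B4 can be sketched as follows, assuming the template's per-character (top, bottom) boundaries are already known. Absolute row differences are used here; the text does not specify whether the differences are signed, so that choice is an assumption.

```python
import numpy as np

def match_whole_char(char_proj, template_proj, char_bounds):
    """Step B4: compare a full character's projection curve against the
    10 curves cut from the template and return the digit whose summed
    per-row difference (sum_total) is smallest."""
    totals = []
    for top, bottom in char_bounds:          # boundaries of digits 0..9
        tpl = template_proj[top:bottom + 1]
        n = min(len(tpl), len(char_proj))    # top-aligned comparison
        totals.append(np.abs(char_proj[:n] - tpl[:n]).sum())
    return int(np.argmin(totals))

def match_half_chars(up_proj, bottom_proj, template_proj, char_bounds):
    """Step B3: the upper half is bottom-aligned and the lower half is
    top-aligned with each template character; gap position d pairs
    sum_up[d] with sum_bottom[(d+1) % 10] (so sum_total9 = sum_up9 +
    sum_bottom0), and the smallest error sum wins."""
    sum_up, sum_bottom = [], []
    for top, bottom in char_bounds:
        tpl = template_proj[top:bottom + 1]
        nu = min(len(tpl), len(up_proj))
        sum_up.append(np.abs(up_proj[-nu:] - tpl[-nu:]).sum())
        nb = min(len(tpl), len(bottom_proj))
        sum_bottom.append(np.abs(bottom_proj[:nb] - tpl[:nb]).sum())
    totals = [sum_up[d] + sum_bottom[(d + 1) % 10] for d in range(10)]
    return int(np.argmin(totals))
```

Because the boundaries are precomputed, each candidate digit costs one curve comparison instead of a sliding line-by-line search over the whole strip template.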
The beneficial effects of the invention are: 1. during template matching, the method does not traverse the template line by line but matches directly against the recorded top and bottom positions of the characters in the strip-shaped template, reducing the amount of computation; 2. the invention matches with projection curves, turning the traditional four-fold computation into a single computation and greatly reducing the amount of computation; 3. the invention performs rotation, brightness correction and horizontal projection at the server end, saving meter-end computation and battery consumption.
Drawings
FIG. 1 is a schematic flow chart of a water meter reading identification method based on a character template;
FIG. 2 is a schematic view of an image to be recognized in embodiment 1;
FIG. 3 is a schematic view of the horizontal projection curve of the water meter character template in embodiment 1;
FIG. 4 is a schematic image of the water meter at an arbitrary time in embodiment 1.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. The embodiments of the present invention have been presented for purposes of illustration and description, and are not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Example 1
In this embodiment, an LXS-series liquid-sealed water meter is taken as an example to describe the technical solution of the invention in detail (the main flow is shown in FIG. 1).
A water meter reading identification method based on a character template comprises the following steps:
step 1, the camera at the meter end performs horizontal projection on the captured image, as shown in FIG. 2;
step 2, reading identification is carried out according to the character template horizontal projection curve of this water meter model, as shown in FIG. 3, and the result 0019 is output;
step 3, the identification result is sent to the back-end server.
Specifically, the character template horizontal projection curve of the water meter is manufactured by a back-end server and is sent to a meter end.
More specifically, the manufacturing process of the character template horizontal projection curve of the water meter comprises the following steps:
step A1, the camera at the meter end transmits an image captured at an arbitrary time to the back-end server, see FIG. 4.
Step A2, the back-end server identifies the model of the water meter and locates the four corners of the character wheel area and the four corners of each character wheel; the included angle of the whole character wheel area relative to the horizontal is calculated from the positions of the four corners of the character wheel area. Because this part of the calculation is completed by the back-end server, it does not drain the meter-end battery, saving meter-end battery power.
Step A3, the back-end server sends the four corner positions of each character wheel to the meter end.
Step A4, according to the recognized water meter model and the inclination angle of the water meter character wheel area, the character template corresponding to that water meter is inclined by the same angle. Because the rotation of the character template is completed by the back-end server, it does not consume meter-end battery power.
Step A5, the brightness values of the character template are calibrated so that the brightness of the character template is consistent with the meter end. Because the water meters differ, there are inevitably illumination differences between captured images, which change the image brightness. This affects the subsequent recognition step, so the brightness values of the current image and the template image must be adjusted to the same level.
Specifically, the process of step A5 is as follows:
step A5.1, according to the character template, extracting from the back-end server the 10 original captured images corresponding to the best-positioned current images used to make the character template, and calculating the average brightness value of each of the 10 images;
step A5.2, calculating the mean of the 10 average brightness values and outputting the template average brightness mean_temp;
step A5.3, calculating the average brightness value of the image to be identified in step 1, and outputting the current average brightness mean_curr;
step A5.4, calculating the difference between the current average brightness and the template average brightness, and outputting the current difference mean_diff = mean_curr - mean_temp;
step A5.5, performing brightness calibration on each pixel of the character template according to the current difference, and outputting the new brightness value v_new = v + mean_diff;
where v represents the brightness value of the character template before calibration.
Step A6, the horizontal projection curve of the character template is made;
further, step A6 specifically comprises: adding up the pixel values of each row of the character template to obtain that row's pixel value, all the row pixel values together forming the horizontal projection curve of the character template; and setting a threshold to detect the upper and lower boundaries of each character. Once the upper and lower positions of each character in the template image are known, the projection curve of each character can be cut out separately. When a single character in the current image is to be matched, its curve can be compared directly with the cut-out curves. If the upper and lower positions of each character were not known, the cut-out curves could not be compared directly, and trial matching row by row would consume a large amount of time.
Specifically, the reading identification process is as follows:
step B1, setting the projection threshold to 20 and the gap width threshold to 10;
wherein, if the horizontal projection curve of the image contains a run of consecutive gap lines longer than the gap width threshold, the image contains a character gap and two half character wheels; otherwise the image is a complete character.
Step B2, determining whether the line projection value of each line of the horizontal projection of the photographed image is lower than the projection threshold, wherein in the horizontal projection curve of the four characters in fig. 2, a character gap exists between the ones character and the tens character, and a character gap does not exist between the hundreds character and the thousands character;
step B3, for the ones character and the tens character, first intercepting the projection curve of the upper half character and aligning the bottom of the intercepted curve with the horizontal-projection bottom position of each character in the character template; then calculating the difference between each row's projection value of the intercepted curve and the projection value at the aligned position of the character template, and adding up the row differences to obtain the total differences of the upper half character, sum_up0, …, sum_up9, 10 values of sum_up in total; next, intercepting the projection curve of the lower half character, aligning the top of the intercepted curve with the horizontal-projection top position of each character in the character template, calculating the row differences in the same way, and adding them up to obtain the total differences of the lower half character, sum_bottom0, …, sum_bottom9, 10 values of sum_bottom in total; then calculating the error sum of the captured image at each gap position, sum_total0 = sum_up0 + sum_bottom1, …, sum_total9 = sum_up9 + sum_bottom0; finally, comparing the 10 error sums, selecting the position with the smallest error sum as the position of the captured image, and converting the distance between the vertical center line of the gap region between the upper and lower half characters and the vertical center line of the character wheel frame to obtain the recognition result: the ones character is 9 and the tens character is 1.
Step B4, for hundreds digit character and thousands digit character, firstly cutting out the projection curve of the character; then, the intercepted projection curves are respectively compared with the projection curves of the character template, the difference value of the projection value of each line is calculated, then the pixel difference values of all the lines are added to obtain 10 total difference values, sum _ total0,…,sum_total9(ii) a And finally, selecting the character with the minimum total difference value as the number of the shot image, and obtaining the hundred-digit character of 0 and the thousand-digit character of 0 through calculation. The meter reads 0019.
It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by one of ordinary skill in the art and related arts based on the embodiments of the present invention without any creative effort, shall fall within the protection scope of the present invention.
Claims (10)
1. A water meter reading identification method based on a character template, characterized in that: a camera device at the meter end performs horizontal projection on the captured image; reading identification is carried out according to the horizontal projection curve of the water meter's character template, and the identification result is finally sent to a back-end server.
2. The water meter reading identification method of claim 1, wherein: the reading identification comprises the following steps:
step B1, setting a projection threshold and a gap width threshold;
wherein, if the horizontal projection curve of the image contains a run of consecutive gap lines longer than the gap width threshold, the image contains a character gap and two half character wheels; otherwise, the image is a complete character;
step B2, judging whether each row projection value of the horizontal projection of the captured image is lower than the projection threshold and, if so, taking that row as a gap line; if the number of consecutive gap lines is larger than the gap width threshold, it is determined that a character gap exists in the captured image, and otherwise that no character gap region exists;
step B3, if a character gap exists, first intercepting the projection curve of the upper half character and aligning the bottom of the intercepted curve with the horizontal-projection bottom position of each character in the character template; then calculating the difference between each row's projection value of the intercepted curve and the projection value at the aligned position of the character template, and adding up the row differences to obtain the total differences of the upper half character, sum_up0, …, sum_up9, 10 values of sum_up in total; next, intercepting the projection curve of the lower half character, aligning the top of the intercepted curve with the horizontal-projection top position of each character in the character template, calculating the row differences in the same way, and adding them up to obtain the total differences of the lower half character, sum_bottom0, …, sum_bottom9, 10 values of sum_bottom in total; then calculating the error sum of the captured image at each gap position, sum_total0 = sum_up0 + sum_bottom1, …, sum_total9 = sum_up9 + sum_bottom0; finally, comparing the 10 error sums, selecting the position with the smallest error sum as the position of the captured image, and converting the distance between the vertical center line of the gap region between the upper and lower half characters and the vertical center line of the character wheel frame to obtain the recognition result of the image;
step B4, if there is no character gap, there is only one character in the image, and the projection curve of that character is first intercepted; then the intercepted projection curve is compared with the projection curve of each character in the character template, the difference of each row's projection value is calculated, and the row differences are added up to obtain 10 total differences, sum_total0, …, sum_total9; finally, the character with the smallest total difference is selected, and the distance between that character's vertical center line and the vertical center line of the character frame is converted to obtain the recognition result of the image.
3. The water meter reading identification method of claim 1, wherein the character template horizontal projection curve of the water meter is made by a back-end server and sent to a meter end.
4. The water meter reading identification method of claim 3, wherein the process of making the character template horizontal projection curve of the water meter is as follows:
step A1, the camera device at the meter end transmits the captured image to the back-end server;
step A2, the back-end server identifies the model of the water meter and locates the four corners of the character wheel area and the four corners of each character wheel; the included angle of the whole character wheel area relative to the horizontal is calculated from the positions of the four corners of the character wheel area;
step A3, the back-end server sends the four corner positions of each character wheel to the meter end;
step A4, according to the recognized water meter model and the inclination angle of the water meter character wheel area, inclining the character template corresponding to the water meter by the same angle;
step A5, calibrating the brightness values of the character template to make the brightness of the character template consistent with the meter-end image;
step A6, a horizontal projection curve of the character template is made.
5. The water meter reading identification method according to claim 4, wherein the specific process of the step A5 is as follows:
step A5.1, according to the character template, extracting from the back-end server the 10 original captured images corresponding to the best-positioned current images used to make the character template, and calculating the average brightness value of each of the 10 images;
step A5.2, calculating the mean of the 10 average brightness values and outputting the template average brightness mean_temp;
step A5.3, calculating the average brightness value of the image to be identified in step 1, and outputting the current average brightness mean_curr;
step A5.4, calculating the difference between the current average brightness and the template average brightness, and outputting the current difference mean_diff = mean_curr - mean_temp;
step A5.5, performing brightness calibration on each pixel of the character template according to the current difference, and outputting the new brightness value v_new = v + mean_diff;
where v represents the brightness value of the character template before calibration.
6. The water meter reading identification method of claim 4, wherein the step A6 is specifically that: and adding the pixel values of each line of pixels of the character template to obtain the line pixel value of the line, wherein all the line pixel values form a horizontal projection curve of the character template.
7. The water meter reading identification method of claim 4, wherein in step A4, the character template of the water meter is made by the following steps:
step A4.1: installing a water meter character extraction device on a water meter of a character template to be extracted;
step A4.2: selecting a character at a certain position in the water meter, and adjusting the selected character to an initial position;
step A4.3: the water meter character extraction device shoots a rotating water meter in real time to obtain N frames of images;
step A4.4: constructing a background picture;
step A4.5: the positions of pixel points corresponding to the N frames of images and the background image are differentiated, the pixel brightness value of the pixel point with the difference value larger than the threshold m is set to be 255, the brightness value of the pixel point with the difference value not larger than the threshold m is set to be 0, the N frames of foreground images are obtained, and then the N frames of current images are obtained through positioning and cutting;
step A4.6: selecting 10 current graphs with the best character positions and 10 character spacing values from the N frames of current graphs through a connected domain;
wherein the 10 character spacing values are 0 and 1, 1 and 2, up to a spacing between 9 and 0, for a total of 10 character spacing values;
step A4.7: arranging 10 character current graphs according to the sequence of 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and 0, and splicing the character current graphs according to the character space value to form a water meter character template.
8. The water meter reading identification method according to claim 7, wherein the initial position is the position at which the center line between the digits 0 and 9 lies at the center of the character box.
9. The water meter reading identification method according to claim 7, wherein step A4.4 specifically comprises:
step A4.4.1: uniformly sampling n frames from the images obtained in step A4.3;
step A4.4.2: extracting the pixels at the same position in the n frames, comparing the n pixel values at that position, selecting the smallest 10% of them, and taking their average as the pixel value at that position in the background image;
step A4.4.3: repeating step A4.4.2 until all pixel positions have been traversed, yielding the background image.
10. The water meter reading identification method according to claim 7, wherein step A4.6 specifically comprises: first, extracting the connected domains in the N current images; then, in the order in which the pictures were taken, counting the connected domains in each current image; if there is only 1 connected domain, selecting the 10 current images, covering 10 different characters, whose connected-domain center points are closest to the horizontal center line of the character wheel frame; if there are two connected domains, selecting the 10 different current images in which the horizontal center line between the lower edge of the upper connected domain and the upper edge of the lower connected domain is closest to the horizontal center line of the character wheel frame, each of these current images containing two different half-wheel characters; and then measuring the 10 character spacing values.
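Claim 10 branches on how many connected domains a foreground mask contains (one full character vs. two half-wheel characters). Counting 4-connected components can be sketched with a small BFS; this is an illustrative sketch, not the patent's implementation:

```python
from collections import deque

def count_components(mask):
    """Count 4-connected foreground components in a binary mask
    (nested sequences of 0/255), as needed in step A4.6 to decide
    whether a frame shows one full character or two half characters."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1                       # new component found
                seen[y][x] = True
                q = deque([(y, x)])
                while q:                         # flood-fill this component
                    cy, cx = q.popleft()
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count
```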
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111079798.XA CN113780281B (en) | 2021-09-15 | 2021-09-15 | Water meter reading identification method based on character template |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113780281A true CN113780281A (en) | 2021-12-10 |
CN113780281B CN113780281B (en) | 2024-07-02 |
Family
ID=78844158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111079798.XA Active CN113780281B (en) | 2021-09-15 | 2021-09-15 | Water meter reading identification method based on character template |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113780281B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016127545A1 (en) * | 2015-02-13 | 2016-08-18 | 广州广电运通金融电子股份有限公司 | Character segmentation and recognition method |
US20180182088A1 (en) * | 2016-12-27 | 2018-06-28 | Fordaq SA | Automatic Detection, Counting, and Measurement of Lumber Boards Using a Handheld Device |
CN112818993A (en) * | 2020-03-30 | 2021-05-18 | 深圳友讯达科技股份有限公司 | Character wheel reading meter end identification method and equipment for camera direct-reading meter reader |
CN113205001A (en) * | 2021-04-08 | 2021-08-03 | 南京邮电大学 | Multi-pointer water meter reading identification method |
Non-Patent Citations (2)
Title |
---|
Xu Ping; Xu Bin; Chang Yingjie: "Application of the double half-character recognition algorithm in water meter character recognition ***", Journal of Hangzhou Dianzi University (Natural Science Edition), no. 01, 15 January 2016 (2016-01-15) * |
Gao Ju; Ye Hua: "An effective secondary recognition algorithm for water meter digit images", Journal of Southeast University (Natural Science Edition), no. 1, 20 July 2013 (2013-07-20) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105474263B (en) | System and method for generating three-dimensional face model | |
CN109218524B (en) | Mobile phone APP and method for generating house type graph through house measurement based on photographing and video recording | |
CN108230397A (en) | Multi-lens camera is demarcated and bearing calibration and device, equipment, program and medium | |
WO2022135588A1 (en) | Image correction method, apparatus and system, and electronic device | |
CN104596929A (en) | Method and equipment for determining air quality | |
WO2022166316A1 (en) | Light supplementing method and apparatus for facial recognition, and facial recognition device and system therefor | |
CN111586384A (en) | Projection image geometric correction method based on Bessel curved surface | |
CN109977882B (en) | Pedestrian re-identification method and system based on semi-coupled dictionary pair learning | |
CN112753047B (en) | Method and system for in-loop calibration and target point setting of hardware of camera and related equipment | |
CN114267267B (en) | Bright and dark seam repairing method, device and system for virtual pixel LED display screen | |
CN116030453A (en) | Digital ammeter identification method, device and equipment | |
WO2023040176A1 (en) | Power supply port positioning method and system for insulation test of electrical product | |
WO2019134317A1 (en) | Full-screen correction method and correction system for led display screen, and storage medium | |
CN113780281A (en) | Water meter reading identification method based on character template | |
CN110991434A (en) | Self-service terminal certificate identification method and device | |
KR100854281B1 (en) | Apparatus for aligning a curved screen, and system and method for controlling the same | |
CN101729739A (en) | Method for rectifying deviation of image | |
CN113487514A (en) | Image processing method, device, terminal and readable storage medium | |
CN117095417A (en) | Screen shot form image text recognition method, device, equipment and storage medium | |
Tu et al. | 2D in situ method for measuring plant leaf area with camera correction and background color calibration | |
CN103747247B (en) | A color correction card | |
US20130208976A1 (en) | System, method, and computer program product for calculating adjustments for images | |
CN117455818B (en) | Correction method and device for intelligent glasses screen, electronic equipment and storage medium | |
US11232289B2 (en) | Face identification method and terminal device using the same | |
CN117853882B (en) | Data acquisition method for intelligent water meter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant |