WO2022126978A1 - Invoice information extraction method and apparatus, computer device and storage medium


Info

Publication number
WO2022126978A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
printed
machine
image
model
Prior art date
Application number
PCT/CN2021/090807
Other languages
French (fr)
Chinese (zh)
Inventor
何小臻
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2022126978A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 - Document-oriented image-based pattern recognition
    • G06V 30/41 - Analysis of document content
    • G06V 30/412 - Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/3331 - Query processing
    • G06F 16/334 - Query execution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/10 - Text processing
    • G06F 40/103 - Formatting, i.e. changing of presentation of documents
    • G06F 40/109 - Font handling; Temporal or kinetic typography
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/10 - Text processing
    • G06F 40/166 - Editing, e.g. inserting or deleting
    • G06F 40/174 - Form filling; Merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 - Document-oriented image-based pattern recognition
    • G06V 30/41 - Analysis of document content
    • G06V 30/413 - Classification of content, e.g. text, photographs or tables

Definitions

  • the present application relates to the field of artificial intelligence technology, and in particular, to a method, device, computer equipment and storage medium for extracting invoice information.
  • OCR (Optical Character Recognition)
  • OCR is an important research direction in the field of pattern recognition.
  • OCR has a wide range of application scenarios, extending from the earlier character recognition of scanned documents to the recognition of text in natural-scene images.
  • OCR technology is used to automatically identify and extract the field information on a bill and produce structured output, but recognition errors still happen. Therefore, how to improve the recognition accuracy of bills has become an urgent problem to be solved.
  • the present application provides a method, device, computer equipment and storage medium for extracting invoice information, so as to solve the prior-art problem of low recognition accuracy of an invoice image when the machine-printed text and the pre-printed text of the invoice overlap or are misaligned.
  • the embodiment of the present application provides a method for extracting invoice information, including:
  • acquiring a bill image;
  • performing layer separation on the bill image by using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained based on a generative adversarial network model;
  • recognizing the machine-printed image and the printed image respectively with corresponding pre-trained recognition models, and converting the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained based on a convolutional recurrent neural network model;
  • matching the machine-printed text and the printed text correspondingly to form the bill text.
  • the embodiment of the present application also provides an invoice information extraction device, and the device includes:
  • the acquisition module is used to acquire the bill image
  • a separation module is used to perform layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained based on a generative adversarial network model;
  • the recognition module is used for recognizing the machine-printed image and the printed image with corresponding pre-trained recognition models respectively, and converting the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained based on a convolutional recurrent neural network model;
  • the matching module is used for correspondingly matching the machine-typed text and the printed text to form the bill text.
  • an embodiment of the present application further provides a computer device, including at least one processor; and,
  • a memory storing computer-readable instructions, wherein the processor implements the following steps when executing the computer-readable instructions:
  • acquiring a bill image;
  • performing layer separation on the bill image by using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained based on a generative adversarial network model;
  • recognizing the machine-printed image and the printed image respectively with corresponding pre-trained recognition models, and converting the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained based on a convolutional recurrent neural network model;
  • matching the machine-printed text and the printed text correspondingly to form the bill text.
  • an embodiment of the present application further provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the following steps:
  • acquiring a bill image;
  • performing layer separation on the bill image by using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained based on a generative adversarial network model;
  • recognizing the machine-printed image and the printed image respectively with corresponding pre-trained recognition models, and converting the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained based on a convolutional recurrent neural network model;
  • matching the machine-printed text and the printed text correspondingly to form the bill text.
  • the invoice information extraction method, device, computer equipment and storage medium provided by the embodiments of the present application have at least the following beneficial effects:
  • a bill image is acquired and separated into layers with a pre-trained separation model to obtain a machine-printed image and a printed image; separating the two facilitates the processing of subsequent steps. The machine-printed image and the printed image are then recognized by corresponding pre-trained recognition models and converted into machine-printed text and printed text, which yields a high text recognition rate. Finally, the machine-printed text and the printed text are matched correspondingly to form the bill text.
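  • The four-step flow above (acquire, separate, recognize, match) can be sketched as a minimal pipeline. Every function body below is a hypothetical stand-in, not the patent's implementation: a real system would back the separation step with a pix2pix-style GAN and the recognition step with CRNN models.

```python
# Illustrative sketch of the extraction pipeline; all logic is a stand-in.

def separate_layers(bill_image):
    """Split a bill image into its machine-printed and pre-printed layers."""
    # Stand-in: pretend the 'image' is a dict already carrying both layers.
    return bill_image["machine_layer"], bill_image["printed_layer"]

def recognize(layer_image, model_name):
    """OCR one layer with the recognition model trained for that layer's font.

    model_name is a placeholder selector; here the layer is already a list
    of (region_id, text) pairs, so 'recognition' is just a dict conversion."""
    return dict(layer_image)

def match(machine_text, printed_text):
    """Fill each machine-printed field into its corresponding printed field."""
    return {printed_text[rid]: machine_text.get(rid, "") for rid in printed_text}

def extract_invoice(bill_image):
    machine_img, printed_img = separate_layers(bill_image)
    machine_text = recognize(machine_img, "machine_font_model")
    printed_text = recognize(printed_img, "printed_font_model")
    return match(machine_text, printed_text)

bill = {
    "machine_layer": [("r1", "Cefixime dry suspension"), ("r2", "12.50")],
    "printed_layer": [("r1", "Item"), ("r2", "Price")],
}
result = extract_invoice(bill)
```

The point of the structure is that each stage can be swapped for a trained model without changing the orchestration.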
  • FIG. 1 is a schematic flowchart of a method for extracting invoice information provided by an embodiment of the present application
  • FIG. 2 is a schematic block diagram of an invoice information extraction device provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • the present application provides a method for extracting invoice information.
  • FIG. 1 it is a schematic flowchart of a method for extracting invoice information according to an embodiment of the present application.
  • the method for extracting invoice information includes:
  • the bill images include images of VAT invoices, medical invoices, and the like.
  • the bill image here refers to an image file of an issued paper bill (obtained by scanning, photographing, etc.), rather than an electronic invoice file.
  • the method further includes:
  • the signature verification method is an RSA asymmetric encryption method.
  • a large number of actual bill images are stored in the database. Since the information displayed on a bill is private, it is stored encrypted, and a signature verification step must be performed when a bill image is retrieved; alternatively, the bill image may be obtained in real time from the business system.
  • the security of the bill image data is ensured through signature verification.
  • processing is performed directly by receiving the bill image sent by the business system, and after the processing is completed, it is directly fed back to the business system, or transferred to the next processing system for further processing.
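  • The RSA signature verification mentioned above can be illustrated with textbook RSA on toy parameters. This is a sketch only: the tiny key and the bare (unpadded) signing below are assumptions for demonstration, and a production system would use a vetted cryptography library with proper padding (e.g. PSS) and 2048-bit or larger keys.

```python
# Toy RSA sign/verify flow for a database call request (illustration only).
import hashlib

# Tiny demo key pair: n = p*q, e is the public exponent, d the private one.
p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse of e

def sign(message: bytes) -> int:
    """Sign the hash of a message with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recover the hash with the public key and compare."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

token = b"call-request:bill-image-42"   # hypothetical call-request token
sig = sign(token)
```

The calling side would attach `sig` to the call request; the database verifies it with the public key before returning the bill image.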
  • the generative adversarial network model used is pix2pix.
  • pix2pix consists of two networks: a generator network and a discriminator network.
  • pix2pix treats the generator as a mapping that transforms an input picture into another, required picture; the discriminator judges the generated image against the original image.
  • the method further includes:
  • the training data is input into a generative adversarial network model for training to obtain the separation model.
  • the real bill can be an unprinted bill or an issued bill. If it is an issued bill, it is preprocessed with image-editing tools and image enhancement to remove the machine-printed text content, keeping only the pre-printed plate information of the bill, i.e. an unprinted template; if it is an unprinted bill, its image is enhanced to make it clearer. The processed issued or unprinted bill is used as the bill template;
  • this application can collect bill data for specific scenarios for training. For example, if medical bills are collected, the corresponding scene-specific corpus is a medical corpus, and medical language and related corpora are collected on the Internet as an expanded corpus. The expanded corpus is classified according to its attributes and filled into the corresponding areas of the bill template according to those attributes, thereby obtaining training data;
  • according to its attributes, the above medical corpus can be divided into items/specifications (drugs such as cefixime dry suspension, inspection items), the price corresponding to each item/specification (drug prices, the cost of various inspection items), the quantity corresponding to each item/specification (the number of drugs, the number of inspection items), and the total amount (the total amount in Chinese capital numerals). These are filled into the corresponding areas of the above bill template to obtain massive training data.
  • when filling the scene-specific corpus into the corresponding areas of the bill template according to attributes, the corresponding areas include normal areas and abnormal areas. In a normal area, the scene-specific corpus aligns exactly with the text region of the bill template it belongs to.
  • an abnormal area is one where the scene-specific corpus overlaps with the text on the corresponding bill template or is otherwise irregular.
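  • The template-filling step that produces training data can be sketched as follows. The template layout, the field names, and the simple box-offset rule used to simulate "abnormal" (overlapping/irregular) areas are illustrative assumptions, not the patent's actual synthesis procedure:

```python
import random

# Hypothetical bill template: each field maps to a target box (x, y, w, h).
TEMPLATE = {
    "item":     (10, 10, 120, 20),
    "price":    (140, 10, 60, 20),
    "quantity": (210, 10, 40, 20),
    "total":    (10, 40, 120, 20),
}

# Scene-specific corpus classified by attribute, as described above.
CORPUS = {
    "item":     ["Cefixime dry suspension", "Blood test"],
    "price":    ["12.50", "30.00"],
    "quantity": ["2", "1"],
    "total":    ["FIFTY-FIVE YUAN"],  # stand-in for Chinese capital numerals
}

def synthesize_sample(rng, abnormal_ratio=0.3):
    """Fill one corpus entry per field; offset some boxes to simulate
    'abnormal' areas where machine printing strays out of its region."""
    sample = {}
    for field, (x, y, w, h) in TEMPLATE.items():
        text = rng.choice(CORPUS[field])
        if rng.random() < abnormal_ratio:
            # abnormal area: shift so the text overlaps neighbouring regions
            x, y = x + rng.randint(5, 15), y + rng.randint(5, 15)
        sample[field] = {"text": text, "box": (x, y, w, h)}
    return sample

rng = random.Random(0)          # fixed seed for reproducibility
dataset = [synthesize_sample(rng) for _ in range(100)]
```

Rendering each sample onto the template image would then give paired (composite, separated-layer) images for GAN training.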
  • the training data is generated from real data, and training the model on this data yields a separation model with a better separation effect.
  • after obtaining the training data, the method also includes:
  • the randomly selected part of the training data is subjected to angle change processing to obtain the training data after morphological change processing.
  • random digital image processing is performed on the training data to simulate situations that may occur in reality. Digital image processing includes one or more of illumination change processing, blur change processing, and morphological change processing. Illumination change processing adjusts the brightness or shadows of the picture; blur change processing simulates a photo that is not very clear and is obtained with algorithms such as Gaussian blur or box blur; morphological change processing simulates a camera that is not necessarily parallel to the bill, resulting in an inconsistent bill shape, and is obtained through rotation, angle changes, etc.
  • through digital image processing of the training data, the real situation is further simulated, so that the model trained based on the generative adversarial network model is closer to the real situation and its processing effect is better.
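  • A minimal sketch of the three augmentation categories named above, operating on a grayscale image stored as a list of pixel rows. The specific transforms (pixel shift, a 3-wide horizontal box blur, and a 90-degree rotation standing in for general angle changes) are simplified assumptions; a real pipeline would use an image library:

```python
import random

def change_illumination(img, delta):
    """Brightness/shadow change: shift every pixel, clamped to 0..255."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def box_blur(img):
    """3-wide horizontal box blur as a stand-in for Gaussian/box blurring."""
    out = []
    for row in img:
        blurred = []
        for i in range(len(row)):
            window = row[max(0, i - 1): i + 2]
            blurred.append(sum(window) // len(window))
        out.append(blurred)
    return out

def rotate90(img):
    """Morphological change: rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img, rng):
    """Apply one randomly chosen transform, as the text describes."""
    ops = [lambda im: change_illumination(im, rng.randint(-40, 40)),
           box_blur, rotate90]
    return rng.choice(ops)(img)

img = [[0, 128, 255], [64, 64, 64]]   # tiny made-up 2x3 grayscale image
```

Applying `augment` to a random subset of synthesized bills yields the illumination-, blur-, and shape-varied training data described above.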
  • S3: respectively use the corresponding pre-trained recognition models to recognize the machine-printed image and the printed image, and convert the machine-printed image and the printed image into machine-printed text and printed text; the recognition models are trained based on a convolutional recurrent neural network model;
  • different recognition models are used to recognize machine-printed images and printed images; both are trained based on the convolutional recurrent neural network model, but on different training data.
  • the recognition model corresponding to machine-printed images is trained with images of the corresponding font, and similarly, the recognition model corresponding to printed images is trained with images of the corresponding font.
  • the convolutional recurrent neural network model includes a convolutional layer (CNN), a recurrent layer (RNN) and a transcription layer (CTC loss);
  • the convolutional layer uses a deep CNN to extract features from the input image to obtain a feature map;
  • the recurrent layer uses a bidirectional RNN (BLSTM) to predict the feature sequence, learning each feature vector in the sequence and outputting the predicted label (ground truth) distribution;
  • the transcription layer uses the CTC loss to convert the series of label distributions obtained from the recurrent layer into the final label sequence.
  • Convolutional recurrent neural network models are used to solve image-based sequence recognition problems, especially scene text recognition problems.
  • the method further includes:
  • the machine-printed image and the printed image are divided into multiple area images, and the area coordinates corresponding to each area image are obtained; the positioning and cutting model is trained based on the DBNet model.
  • the entire machine-printed image and the printed image are respectively divided into multiple area images by using the positioning and cutting model.
  • the area images are segmented as rectangles, and for each area image the coordinate data of the rectangle's four corner points, i.e. the area coordinates, are obtained. The coordinate data takes two adjacent sides of the entire bill as the coordinate axes, with the entire bill lying in the first quadrant, so as to obtain the corresponding coordinate data; the machine-printed image and the printed image share the same axes.
  • the division of the area of the machine-printed image is distinguished by judging whether there is a gap between adjacent fields.
  • the area division of the printed image is performed based on text boxes.
  • the DBNet model is a text detection model with high accuracy and speed.
  • Both the machine-printed image and the printed image are divided into multiple regional images by the positioning and cutting model, so that the machine-printed text and the printed text can be matched after the subsequent text recognition, and it is convenient for the machine-printed text to be filled in the corresponding printed text.
  • the bill text is thus formed; the machine-printed text in each area of the formed bill text corresponds neatly to the printed text, i.e. the bill is re-typeset, which avoids the problem that, in the original machine-printed bill, machine-printed text spans lines or covers the printed text, and realizes the structuring of the bill text.
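  • The gap-based area division described above for the machine-printed layer can be sketched with a simple projection-profile split. Representing a text row as an "ink profile" (dark-pixel count per column) and the gap threshold of three empty columns are illustrative assumptions:

```python
# Split a text row into fields wherever a run of empty columns exceeds
# a minimum gap width. Returns end-exclusive (start, end) column ranges.

def split_by_gaps(profile, min_gap=3):
    regions, start, empty_run = [], None, 0
    for i, ink in enumerate(profile):
        if ink > 0:
            if start is None:
                start = i          # a new field begins here
            empty_run = 0
        elif start is not None:
            empty_run += 1
            if empty_run >= min_gap:
                # gap wide enough: close the current field
                regions.append((start, i - empty_run + 1))
                start, empty_run = None, 0
    if start is not None:          # close a field that runs to the edge
        regions.append((start, len(profile) - empty_run))
    return regions

# two fields separated by a 4-column gap
profile = [0, 3, 5, 4, 0, 0, 0, 0, 2, 6, 1]
```

A detector like DBNet learns region boxes directly, but this shows why adjacent-field gaps are a usable division signal for the machine-printed layer.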
  • the machine-printed text and the printed text are matched correspondingly to form the bill text, including:
  • matching, based on the area coordinates, each first area text in the machine-printed text with each second area text in the printed text;
  • filling the first area text into the corresponding second area text to form the bill text.
  • each area text corresponding to the machine-printed text is matched with each area text corresponding to the printed text according to the area coordinates.
  • each area image is a rectangle; the center coordinate refers to the center of the area coordinates, i.e. the intersection of the diagonals of the rectangle corresponding to the area image.
  • the machine-printed text is filled into the corresponding printed text based on the distance between area coordinates, so as to realize the corresponding re-typesetting of the bill.
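  • The coordinate-based matching can be sketched as a nearest-center pairing: each printed region is paired with the machine-printed region whose rectangle center is closest. The rectangles and texts below are made-up example values:

```python
import math

def center(rect):
    """Center of a (x, y, width, height) rectangle: the intersection
    of the rectangle's diagonals."""
    x, y, w, h = rect
    return (x + w / 2, y + h / 2)

def match_by_center(machine_regions, printed_regions):
    """machine_regions / printed_regions: {text: rect}. For each printed
    region, pick the machine-printed region with the nearest center."""
    result = {}
    for p_text, p_rect in printed_regions.items():
        px, py = center(p_rect)
        nearest = min(
            machine_regions,
            key=lambda m: math.dist((px, py), center(machine_regions[m])),
        )
        result[p_text] = nearest
    return result

printed = {"Item": (10, 10, 100, 20), "Price": (140, 10, 60, 20)}
machine = {"Cefixime dry suspension": (12, 13, 96, 16),
           "12.50": (145, 12, 40, 14)}
```

Because both layers share the same coordinate axes (as stated above), center distances are directly comparable between them.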
  • the machine-typed text and the printed text are matched correspondingly to form the bill text, including:
  • using a pre-trained matching model, each area text in the machine-printed text is matched with each area text in the printed text to obtain a matching value; the matching model is trained based on the BIMPM model;
  • when the matching value is greater than or equal to a preset value, the area texts of the machine-printed text are filled, based on the area coordinates, into the area texts corresponding to the printed text to form the bill text.
  • the area coordinates corresponding to each area text in the machine-printed text must lie completely within the area coordinates of the corresponding area text in the printed text, so that each area text of the machine-printed text can be accurately filled into the corresponding area text of the printed text.
  • the BIMPM model is a text matching model.
  • the machine-printed text and the printed text are text-matched through the matching model; after matching, if the preset requirement is met, the machine-printed text is filled into the corresponding printed text, and the bill is thereby re-typeset.
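  • The threshold-based matching flow can be sketched as follows. A real system would score pairs with a trained BIMPM model; here a simple token-overlap (Jaccard) score stands in for the model so the flow is runnable, and the field names and threshold are made up:

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap score as a stand-in for a trained BIMPM matcher."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def fill_by_matching(machine_areas, printed_areas, threshold=0.3):
    """machine_areas: {label_guess: value}; printed_areas: list of printed
    labels. A value is filled under a printed label only when the best
    matching value reaches the preset threshold."""
    bill = {}
    for label in printed_areas:
        best = max(machine_areas, key=lambda g: jaccard(g, label))
        if jaccard(best, label) >= threshold:
            bill[label] = machine_areas[best]
    return bill

machine = {"item name": "Cefixime dry suspension",
           "unit price": "12.50"}
printed = ["Item Name", "Unit Price", "Remarks"]
```

The threshold plays the role of the "preset value" above: printed areas with no sufficiently similar machine-printed text (here, "Remarks") are left unfilled rather than mis-filled.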
  • all the data of the bill image can also be stored in a node of a blockchain.
  • the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms.
  • blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • a bill image is acquired and separated into layers with a pre-trained separation model to obtain a machine-printed image and a printed image; separating the two facilitates the processing of subsequent steps. The machine-printed image and the printed image are then recognized by corresponding pre-trained recognition models and converted into machine-printed text and printed text, which yields a high text recognition rate. Finally, the machine-printed text and the printed text are matched correspondingly to form the bill text.
  • FIG. 2 is a functional block diagram of the apparatus for extracting invoice information of the present application.
  • the apparatus 100 for extracting invoice information described in this application may be installed in an electronic device.
  • the invoice information extraction apparatus 100 may include an acquisition module 101 , a separation module 102 , an identification module 103 and a matching module 104 .
  • the modules described in this application may also be referred to as units, which refer to a series of computer-readable instruction segments that can be executed by the electronic device processor and can perform fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • an acquisition module 101 configured to acquire a ticket image
  • invoice information extraction device 100 further includes a sending request module and a calling module;
  • the sending request module is configured to send a calling request to the database, where the calling request carries a signature verification token;
  • the calling module receives the signature verification result returned by the database, and when the signature verification result is passed, invokes the bill image in the database;
  • the signature verification method is an RSA asymmetric encryption method.
  • the separation module 102 is configured to perform layer separation on the bill image by using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained based on a generative adversarial network model;
  • the invoice information extraction device 100 further includes a collection module, a preprocessing module, a filling module and a training module;
  • the collection module is used to collect bill data and scene-specific corpus
  • the preprocessing module is used to preprocess the bill data to obtain a bill template
  • the filling module is used to fill the scene-specific corpus into the corresponding area of the ticket template according to the attributes, to obtain training data;
  • the training module is used for inputting the training data into the generative adversarial network model for training to obtain the separation model.
  • the collection module collects different real bills in each region (province or municipality).
  • the real bill can be an unprinted bill or an issued bill. If it is an issued bill, the preprocessing module preprocesses it with image-editing tools and image enhancement, removing the machine-printed text content and retaining only the pre-printed plate information of the bill, i.e. an unprinted template; if it is an unprinted bill, its image is enhanced to make it clearer. The processed issued or unprinted bill is used as the bill template;
  • a separation model with better separation effect can be obtained through the cooperative use of the collection module, the preprocessing module, the filling module and the training module.
  • the filling module includes an illumination change sub-module, a blur degree change sub-module and a shape change sub-module;
  • the illumination change sub-module performs brightness or shadow change processing on the randomly selected part of the training data to obtain the training data after illumination change processing;
  • the blur-degree change sub-module uses Gaussian blur or box blur to blur a randomly selected part of the training data, obtaining training data after blur change processing; and/or
  • the morphological change sub-module performs angle change processing on part of the randomly selected training data, and obtains the training data after morphological change processing.
  • the bills in the real situation are further simulated, so that the trained model is closer to the real situation, and its processing effect is also better.
  • the identification module 103 is used to identify the machine-printed image and the printed image with corresponding pre-trained recognition models, and to convert the machine-printed image and the printed image into machine-printed text and printed text; the recognition models are trained based on a convolutional recurrent neural network model;
  • the recognition module 103 uses different recognition models to recognize the machine-printed image and the printed image, but both are trained based on the convolutional recurrent neural network model, and are trained based on different training data.
  • the recognition model corresponding to the machine-printed image is trained with images of the corresponding font, and the recognition model corresponding to the printed image is likewise trained with images of the corresponding font.
  • invoice information extraction device 100 further includes a positioning and cutting module
  • the positioning and cutting module divides the machine-printed image and the printed image into a plurality of area images based on a pre-trained positioning and cutting model, and obtains the area coordinates corresponding to each area image; the positioning and cutting model is trained based on the DBNet model.
  • specifically, the positioning and cutting module divides the entire machine-printed image and the printed image into multiple area images through the positioning and cutting model.
  • the area images are segmented as rectangles, and for each area image the coordinate data of the rectangle's four corner points, i.e. the area coordinates, are obtained; the coordinate data takes two adjacent sides of the entire bill as the coordinate axes, with the entire bill lying in the first quadrant, so as to obtain the corresponding coordinate data, and the machine-printed image and the printed image share the same coordinate axes.
  • the positioning and cutting module divides both the machine-printed image and the printed image into multiple area images, so that the machine-printed text and the printed text can be matched after subsequent text recognition, making it convenient to fill the machine-printed text into the corresponding printed text.
  • the matching module 104 is configured to match the machine-typed text and the printed text correspondingly to form the bill text.
  • the matching module 104 matches the machine-printed text and the printed text to form the bill text; the machine-printed text in each area of the formed bill text corresponds neatly to the printed text, i.e. the bill is re-typeset, avoiding the problem that, in a bill obtained directly by machine printing, the machine-printed text spans lines or covers the printed text, so that the structuring of the bill text is realized.
  • the matching module 104 includes a coordinate matching sub-module and a first corresponding filling sub-module
  • the coordinate matching sub-module matches each first area text in the machine-printed text with each second area text in the printed text based on the area coordinates;
  • the first corresponding filling sub-module fills in the text of the first area into the corresponding text of the second area based on the coordinate of the area to form the text of the bill.
  • the matching submodule matches each area text corresponding to the machine-printed text with each area text corresponding to the printed text according to the area coordinates.
  • the area image is in the shape of a rectangle; the center coordinate refers to the center of the area coordinate, that is, the intersection of the diagonal lines of the rectangle corresponding to the area image.
  • the machine-printed text is filled into the corresponding printed text based on the distance between area coordinates, so as to realize the corresponding re-typesetting of the bill.
  • the matching module 104 includes a text matching submodule and a second corresponding filling submodule;
  • the text matching submodule utilizes a pre-trained matching model to match each regional text in the machine-printed text with each regional text in the printed text to obtain a matching value, and the matching model is obtained based on BIMPM model training;
  • the second corresponding filling sub-module fills in the regional texts in the machine-printed texts into the regional texts corresponding to the printed texts based on the regional coordinates to form bill texts.
  • the text matching sub-module matches each area text of the machine-printed text with each area text of the printed text to obtain a matching value; when the matching value is greater than or equal to the preset value, the second corresponding filling sub-module fills each area text of the machine-printed text into the area text corresponding to the printed text.
  • the area coordinates corresponding to each area text in the machine-printed text must lie completely within the area coordinates of the corresponding area text in the printed text, so that each area text of the machine-printed text can be accurately filled into the corresponding area text of the printed text.
  • text matching is performed between the machine-printed text and the printed text; after matching, if the preset requirement is met, the machine-printed text is filled into the corresponding printed text, which also realizes the re-typesetting of the bill.
  • through the cooperation of the acquisition module 101, the separation module 102, the identification module 103 and the matching module 104, the invoice information extraction device 100 separates the bill image into a machine-printed image and a printed image, then identifies and processes the two images and performs the corresponding matching, improving the accuracy of text identification and re-typesetting the bill information to obtain the bill text.
  • FIG. 3 is a block diagram of a basic structure of a computer device according to this embodiment.
  • the computer device 4 includes a memory 41, a processor 42 and a network interface 43 that communicate with each other through a system bus. It should be noted that only the computer device 4 with components 41-43 is shown in the figure, but it should be understood that not all of the shown components are required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, etc.
  • the computer device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server or another computing device.
  • the computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote control, a touch pad or a voice control device.
  • the memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc.
  • the memory 41 may be an internal storage unit of the computer device 4 , such as a hard disk or a memory of the computer device 4 .
  • the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 41 may also include both the internal storage unit of the computer device 4 and its external storage device.
  • the memory 41 is generally used to store the operating system and various application software installed on the computer device 4 , such as computer-readable instructions of the method for extracting invoice information.
  • the memory 41 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 42 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments. This processor 42 is typically used to control the overall operation of the computer device 4 . In this embodiment, the processor 42 is configured to execute computer-readable instructions or process data stored in the memory 41, for example, computer-readable instructions for executing the method for extracting invoice information.
  • the network interface 43 may include a wireless network interface or a wired network interface, and the network interface 43 is generally used to establish a communication connection between the computer device 4 and other electronic devices.
  • when the computer-readable instructions are executed, the steps of the method for extracting invoice information in the above-mentioned embodiment are implemented.
  • a bill image is acquired, and a pre-trained separation model performs layer separation on it to obtain a machine-printed image and a printed image; separating the machine-printed image from the printed image facilitates the processing of subsequent steps; the machine-printed image and the printed image are then recognized by their corresponding pre-trained recognition models and converted into machine-printed text and printed text, and using the two recognition models improves the text recognition rate for machine-printed and printed images; finally, the machine-printed text and the printed text are correspondingly matched to form the bill text, so that text recognition accuracy is improved and the bill information is re-typeset to obtain the bill text.
  • the present application also provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions that can be executed by at least one processor to cause the at least one processor to execute the steps of the above method for extracting invoice information: a bill image is acquired, and a pre-trained separation model performs layer separation on it to obtain a machine-printed image and a printed image; the machine-printed image and the printed image are separated to facilitate the processing of the subsequent steps; the machine-printed image and the printed image are respectively recognized by corresponding pre-trained recognition models and converted into machine-printed text and printed text.
  • the computer-readable storage medium may be non-volatile or volatile.
  • the method of the above embodiment can be implemented by means of software plus a necessary general hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk or CD-ROM) and includes several instructions to make a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) execute the methods described in the various embodiments of this application.


Abstract

The present invention relates to artificial intelligence technology, is specifically applied to image processing, and provides an invoice information extraction method and apparatus, a computer device and a storage medium. The method comprises: acquiring a bill image (S1); performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained on the basis of a generative adversarial network model (S2); recognizing the machine-printed image and the printed image with corresponding pre-trained recognition models, respectively, and converting the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained on the basis of a convolutional recurrent neural network model (S3); and correspondingly matching the machine-printed text with the printed text to form a bill text (S4). The present invention also relates to blockchain technology: the bill image and the bill text data are stored in a blockchain. The method improves text recognition accuracy and re-typesets the bill information to obtain the bill text.

Description

Invoice information extraction method, apparatus, computer device and storage medium
This application claims priority to Chinese patent application No. 202011487344.1, titled "Invoice Information Extraction Method, Apparatus, Computer Device and Storage Medium" and filed with the China Patent Office on December 16, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of artificial intelligence technology, and in particular to an invoice information extraction method and apparatus, a computer device and a storage medium.
Background
OCR (Optical Character Recognition) is an important research direction in the field of pattern recognition. In recent years, the rapid iteration of mobile devices and the rapid development of the mobile Internet have given OCR a much wider range of application scenarios, extending from character recognition of scanned documents to recognition of text in natural-scene images, such as the text on ID cards, bank cards, house number plates, bills and various web images. The inventor realized that in the prior art, OCR technology is used to automatically recognize and extract the field information on bills and produce structured output, but when fields on a bill overlap or stray across lines, recognition accuracy suffers. Therefore, how to improve the recognition accuracy of bills has become an urgent problem to be solved.
Summary of the Invention
The present application provides an invoice information extraction method and apparatus, a computer device and a storage medium, to solve the prior-art problem that bill image recognition accuracy is low when the printed text and the machine-printed text of a bill overlap or stray across lines.
To solve the above problem, an embodiment of the present application provides an invoice information extraction method, including:
acquiring a bill image;
performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained on the basis of a generative adversarial network model;
recognizing the machine-printed image and the printed image with corresponding pre-trained recognition models, respectively, to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained on the basis of a convolutional recurrent neural network model; and
correspondingly matching the machine-printed text with the printed text to form a bill text.
To solve the above problem, an embodiment of the present application further provides an invoice information extraction apparatus, the apparatus including:
an acquisition module, configured to acquire a bill image;
a separation module, configured to perform layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained on the basis of a generative adversarial network model;
a recognition module, configured to recognize the machine-printed image and the printed image with corresponding pre-trained recognition models, respectively, to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained on the basis of a convolutional recurrent neural network model; and
a matching module, configured to correspondingly match the machine-printed text with the printed text to form a bill text.
To solve the above problem, an embodiment of the present application further provides a computer device, including at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores computer-readable instructions, and the processor, when executing the computer-readable instructions, implements the following steps:
acquiring a bill image;
performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained on the basis of a generative adversarial network model;
recognizing the machine-printed image and the printed image with corresponding pre-trained recognition models, respectively, to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained on the basis of a convolutional recurrent neural network model; and
correspondingly matching the machine-printed text with the printed text to form a bill text.
To solve the above problem, an embodiment of the present application further provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the following steps:
acquiring a bill image;
performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained on the basis of a generative adversarial network model;
recognizing the machine-printed image and the printed image with corresponding pre-trained recognition models, respectively, to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained on the basis of a convolutional recurrent neural network model; and
correspondingly matching the machine-printed text with the printed text to form a bill text.
Compared with the prior art, the invoice information extraction method and apparatus, computer device and storage medium provided by the embodiments of the present application have at least the following beneficial effects:
A bill image is acquired, and a pre-trained separation model performs layer separation on it to obtain a machine-printed image and a printed image; separating the machine-printed image from the printed image facilitates the processing of subsequent steps. The machine-printed image and the printed image are then recognized by their corresponding pre-trained recognition models and converted into machine-printed text and printed text; using the two recognition models improves the text recognition rate for both kinds of image. Finally, the machine-printed text and the printed text are correspondingly matched to form the bill text. By separating the bill image into a machine-printed image and a printed image, recognizing the two images separately and then matching the results, text recognition accuracy is improved, and the bill information is re-typeset to obtain the bill text.
Brief Description of the Drawings
To describe the solutions in the present application more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments of the present application. Obviously, the drawings described below illustrate some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an invoice information extraction method provided by an embodiment of the present application;
FIG. 2 is a schematic block diagram of an invoice information extraction apparatus provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present application. The terms used in the specification are for the purpose of describing specific embodiments only and are not intended to limit the application. The terms "comprising" and "having" and any variations thereof in the description, claims and drawings of the present application are intended to cover a non-exclusive inclusion. The terms "first", "second" and the like in the description, claims or drawings are used to distinguish different objects, not to describe a specific order.
Reference herein to an "embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to a separate or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The present application provides an invoice information extraction method. FIG. 1 is a schematic flowchart of the invoice information extraction method provided by an embodiment of the present application.
In this embodiment, the invoice information extraction method includes:
S1. Acquire a bill image.
The bill image includes images of VAT invoices, medical invoices and the like.
The bill image is a scanned or photographed image file of an issued paper bill, not an electronic invoice file.
Further, before acquiring the bill image, the method further includes:
sending a call request to a database, the call request carrying a signature verification token;
receiving the signature verification result returned by the database, and, when the verification passes, retrieving the bill image from the database.
The signature verification uses RSA asymmetric encryption.
The database stores a large number of bill images produced in practice. Since the information shown on a bill is private, it must be stored encrypted, and a signature verification step is therefore required when acquiring a bill image; the database also stores bill images acquired in real time by the business system.
Signature verification ensures the security of the bill image data.
In another embodiment of the present application, the bill image sent by the business system is received and processed directly; after processing is completed, the result is fed back to the business system directly, or passed on to the next processing system for further processing.
S2. Perform layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained on the basis of a generative adversarial network model.
The generative adversarial network model adopted is pix2pix, which consists of two networks: a generator and a discriminator. pix2pix treats the generator as a mapping that maps an input picture to another desired picture, while the discriminator judges the generated picture against the original picture.
Further, before performing layer separation on the bill image using the pre-trained separation model to obtain the machine-printed image and the printed image, the method further includes:
collecting bill data and a scene-specific corpus;
preprocessing the bill data to obtain a bill template;
filling the scene-specific corpus into the corresponding regions of the bill template according to its attributes to obtain training data; and
inputting the training data into a generative adversarial network model for training to obtain the separation model.
Specifically, different real bills are collected from each region (each province or municipality). A real bill may be an unprinted bill or an issued bill. If it is an issued bill, it is preprocessed with image-editing tools and image enhancement to remove the machine-printed text content, keeping only the layout information of the bill, i.e., an unprinted template; if it is an unprinted bill, the image is enhanced to make it clearer. The processed issued or unprinted bill is used as the bill template.
The present application may collect bill data for a specific scenario for training. For example, if medical bills are collected, the scene-specific corpus is correspondingly a medical-terminology corpus; medical terms and similar corpora are collected from the Internet as an expanded corpus, classified according to their attributes, and filled into the corresponding regions of the bill template according to those attributes, thereby obtaining the training data.
For example, the medical corpus may contain drugs such as cefixime dry suspension with their quantities and prices, various examination items with their fees and counts (MRI examination fee, CT examination fee) and corresponding prices, the total amount written in Chinese capital numerals, and so on.
According to the attributes of the above medical corpus, the entries can be divided into item/specification (drugs such as cefixime dry suspension, examination items), the price corresponding to each item/specification (drug prices, fees of the various examination items), the quantity corresponding to each item/specification (drug quantities, counts of the various examination items), and the total amount (the total amount in Chinese capital numerals). The medical corpus is filled into the corresponding regions of the above bill template according to these attributes to obtain a large volume of training data.
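The training-data generation described above can be sketched as follows. The dictionary representation of a template, the attribute names and the corpus entries are illustrative assumptions for this sketch, not structures defined by the application.

```python
import random

# Toy medical corpus, grouped by attribute (entries are illustrative).
corpus = {
    "item": ["cefixime dry suspension", "MRI examination fee", "CT examination fee"],
    "price": ["23.50", "560.00", "320.00"],
    "quantity": ["2", "1", "1"],
}

def make_training_sample(template_regions, corpus, rng):
    """Fill one randomly chosen corpus entry of the matching attribute into
    each template region, producing one synthetic training sample."""
    return {region: rng.choice(corpus[attribute])
            for region, attribute in template_regions.items()}

# A bill template modeled as region name -> attribute expected in that region.
template = {"item_column": "item", "price_column": "price", "qty_column": "quantity"}
sample = make_training_sample(template, corpus, random.Random(0))
```

Generating many such samples, including the overlapping and straying placements the embodiment describes for abnormal regions, is what yields the large training set for the separation model.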
When the scene-specific corpus is filled into the corresponding regions of the bill template according to its attributes, the corresponding regions include normal regions and abnormal regions. In a normal region, the scene-specific corpus aligns exactly with the text on the bill template to which it belongs; an abnormal region is one where the corpus overlaps the template text or strays out of alignment across lines.
Generating training data from real data, and training the model on that data, yields a separation model with a better separation effect.
Still further, after obtaining the training data, the method further includes:
performing brightness or shadow change processing on a randomly selected part of the training data to obtain training data after illumination change processing; and/or
blurring a randomly selected part of the training data using Gaussian blur or box blur to obtain training data after blur change processing; and/or
performing angle change processing on a randomly selected part of the training data to obtain training data after morphological change processing.
Specifically, digital image processing is applied at random to the training data to simulate situations that may occur in reality. The digital image processing includes one or more of illumination change processing, blur-degree change processing and morphological change processing. Illumination change processing adjusts the brightness or shadows of a picture; blur-degree change processing simulates an unclear photograph, obtained through algorithms such as Gaussian blur or box blur; morphological change processing reflects that the camera is not necessarily parallel to the bill when shooting, producing inconsistent bill shapes, and is obtained through rotation, angle changes and the like. By applying digital image processing to the training data, the real situation is simulated further, so that the model trained on the generative adversarial network performs better and is closer to reality.
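Two of the augmentations above can be sketched on a grayscale image represented as a list of rows of 0-255 integers; this representation and the function names are assumptions for illustration, and a real pipeline would use an image library.

```python
def adjust_brightness(img, factor):
    """Scale every pixel of a grayscale image (list of rows of 0-255 ints),
    clipping to the valid range -- a minimal illumination-change step."""
    return [[min(255, max(0, int(p * factor))) for p in row] for row in img]

def box_blur(img):
    """3x3 box blur with edge clamping -- a minimal blur-change step."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(h - 1, max(0, y + dy))
                    xx = min(w - 1, max(0, x + dx))
                    acc += img[yy][xx]
            row.append(acc // 9)
        out.append(row)
    return out
```

Applying such transforms to randomly chosen samples, together with rotations for the morphological changes, approximates the photographic variation the embodiment wants the separation model to tolerate.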
By further simulating bills as they appear in reality, the trained model is closer to the real situation and achieves a better processing effect.
S3. Recognize the machine-printed image and the printed image with corresponding pre-trained recognition models, respectively, to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained on the basis of a convolutional recurrent neural network model.
The machine-printed image and the printed image are recognized by different recognition models, both trained on the basis of the convolutional recurrent neural network model but on different training data: the recognition model for machine-printed images is trained on images of the corresponding font, and likewise the recognition model for printed images is trained on images of its corresponding font.
The convolutional recurrent neural network model includes a convolutional layer (CNN), a recurrent layer (RNN) and a transcription layer (CTC loss). The convolutional layer uses a deep CNN to extract features from the input image to obtain a feature map; the recurrent layer uses a bidirectional RNN (BLSTM) to predict the feature sequence, learning each feature vector in the sequence and outputting the predicted label distribution; the transcription layer uses the CTC loss to convert the series of label distributions obtained from the recurrent layer into the final label sequence.
The convolutional recurrent neural network model is used to solve image-based sequence recognition problems, especially scene text recognition.
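The transcription step can be illustrated with CTC greedy decoding: take the most likely label at each time step, collapse consecutive repeats, and drop the blank label. The toy alphabet and per-frame scores below are assumptions for illustration, not values from the application.

```python
BLANK = 0  # CTC reserves one label index for "blank"

def ctc_greedy_decode(frame_scores, alphabet):
    """Pick the argmax label per frame, collapse consecutive repeats, and
    remove blanks -- the many-to-one mapping the transcription layer uses."""
    best = [max(range(len(frame)), key=frame.__getitem__) for frame in frame_scores]
    out = []
    prev = None
    for label in best:
        if label != prev and label != BLANK:
            out.append(alphabet[label])
        prev = label
    return "".join(out)

# Toy per-frame scores over the alphabet {blank, 'a', 'b'}:
scores = [
    [0.1, 0.8, 0.1],    # 'a'
    [0.1, 0.7, 0.2],    # 'a' again -> collapsed with the previous frame
    [0.9, 0.05, 0.05],  # blank separates the repeated 'a's
    [0.2, 0.7, 0.1],    # 'a'
    [0.1, 0.1, 0.8],    # 'b'
]
alphabet = {1: "a", 2: "b"}
```

Here the five frames decode to "aab": the blank frame is what lets the model emit the same character twice in a row.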
Further, before recognizing the machine-printed image and the printed image with the corresponding pre-trained recognition models, the method further includes:
dividing the machine-printed image and the printed image into multiple region images based on a pre-trained localization-and-cropping model, and obtaining the region coordinates corresponding to each region image, the localization-and-cropping model being trained on the basis of the DBNet model.
Specifically, the localization-and-cropping model divides the whole machine-printed image and the whole printed image into multiple region images. In this embodiment of the present application the region images are cropped as rectangles, and the coordinate data of the four corner points of each rectangle, i.e., the region coordinates, are obtained. The coordinate data take two adjacent edges of the whole bill as the coordinate axes, with the whole bill lying in the first quadrant, and the machine-printed image and the printed image share the same coordinate axes.
The regions of the machine-printed image are delimited by judging whether there is a gap between adjacent fields.
The regions of the printed image are delimited based on text boxes.
The DBNet model is a text detection model with both high accuracy and high speed.
By dividing both the machine-printed image and the printed image into multiple region images with the localization-and-cropping model, it is convenient, after subsequent text recognition, to match the machine-printed text with the printed text and to fill the machine-printed text into the corresponding printed text.
S4: Match the machine-printed text with the printed text to form the bill text.
After the machine-printed text and the printed text are matched with each other again, the bill text is formed. In the resulting bill text, the machine-printed text in each region is aligned with the corresponding printed text, i.e., the bill is re-typeset. This avoids the problem, in bills obtained directly by machine printing, of machine-printed text spanning lines or covering the printed text, and thereby structures the bill text.
Further, matching the machine-printed text with the printed text to form the bill text includes:
matching each first region text in the machine-printed text with each second region text in the printed text based on the region coordinates;
after the matching is completed, filling each first region text into the corresponding second region text based on the region coordinates, so as to form the bill text.
In this embodiment, each region text corresponding to the machine-printed text is matched with each region text corresponding to the printed text according to the region coordinates. A first center coordinate corresponding to a region text in the machine-printed text is calculated, the second center coordinates corresponding to the region texts in the printed text are calculated, and the distances between the first center coordinate and the multiple second center coordinates are compared; the region text in the machine-printed text is then accurately matched to whichever region text in the printed text is nearest. Each region image is rectangular, and a center coordinate refers to the center of the region coordinates, i.e., the intersection of the diagonals of the rectangle corresponding to the region image.
By filling each machine-printed region text into whichever printed region text is nearest in terms of the distance between region coordinates, the corresponding re-typesetting of the bill is achieved.
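The nearest-center matching described above can be sketched as follows. The function names and sample coordinates are illustrative only; each region is given as the four (x, y) corner points of its rectangle, in the first-quadrant bill coordinate system described earlier.

```python
import math

def region_center(box):
    """Center of a rectangular region given its four (x, y) corner points,
    i.e. the intersection of the rectangle's diagonals."""
    xs = [p[0] for p in box]
    ys = [p[1] for p in box]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

def match_by_center(machine_boxes, printed_boxes):
    """Map each machine-printed region index to the index of the printed
    region whose center is nearest to its own center."""
    matches = {}
    for i, mbox in enumerate(machine_boxes):
        mc = region_center(mbox)
        dists = [math.dist(mc, region_center(pbox)) for pbox in printed_boxes]
        matches[i] = dists.index(min(dists))
    return matches
```

For example, a machine-printed region centered at (25, 2) is matched to the printed region centered at (25, 2) rather than the one centered at (5, 2), and its text can then be filled into that printed region.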
Further, matching the machine-printed text with the printed text to form the bill text includes:
matching each region text in the machine-printed text with each region text in the printed text by using a pre-trained matching model to obtain a matching value, the matching model being trained on the basis of the BIMPM model;
when the matching value is greater than or equal to a preset value, filling each region text in the machine-printed text into the corresponding region text in the printed text based on the region coordinates, so as to form the bill text.
Each region text in the machine-printed text is matched with each region text in the printed text by the matching model to obtain a matching value; when the matching value is greater than or equal to the preset value, each region text in the machine-printed text is filled into the corresponding region text in the printed text. Specifically, the region coordinates corresponding to each region text in the machine-printed text must lie entirely within the region coordinates of the corresponding region text in the printed text, so that each region text in the machine-printed text is accurately filled into the corresponding region text in the printed text.
The BIMPM model is a text matching model.
The matching model performs text matching between the machine-printed text and the printed text; after the matching is completed, if the preset requirement is met, the machine-printed text is filled into the corresponding printed text, which likewise achieves the re-typesetting of the bill.
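The threshold-plus-containment filling can be sketched as below. Since the trained BIMPM matching model is not reproducible here, a simple character-overlap ratio from the standard library's `difflib` stands in for the model's score, and the threshold value 0.6 is an assumption; only the control flow (score at or above the preset value, region entirely inside the printed region) follows the description above.

```python
from difflib import SequenceMatcher

THRESHOLD = 0.6  # assumed preset value

def match_value(machine_text: str, printed_text: str) -> float:
    # Stand-in for the trained BIMPM matching model: any scorer in [0, 1].
    return SequenceMatcher(None, machine_text, printed_text).ratio()

def region_within(inner, outer):
    """True if the inner rectangle lies entirely inside the outer rectangle
    (both given as four (x, y) corner points)."""
    ixs, iys = [p[0] for p in inner], [p[1] for p in inner]
    oxs, oys = [p[0] for p in outer], [p[1] for p in outer]
    return (min(oxs) <= min(ixs) and max(ixs) <= max(oxs) and
            min(oys) <= min(iys) and max(iys) <= max(oys))

def fill_regions(machine_regions, printed_regions):
    """machine_regions / printed_regions: lists of (text, box) pairs.
    Returns {printed region index: machine-printed text} for accepted matches."""
    filled = {}
    for mtext, mbox in machine_regions:
        for j, (ptext, pbox) in enumerate(printed_regions):
            if (match_value(mtext, ptext) >= THRESHOLD
                    and region_within(mbox, pbox)):
                filled[j] = mtext
    return filled
```

A machine-printed region is filled in only when both conditions hold, so text whose score falls below the preset value, or whose rectangle pokes outside the printed region, is left unmatched.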
It should be emphasized that, to further ensure the privacy and security of the data, all data of the bill image may also be stored in a node of a blockchain.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.
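The cryptographic chaining of blocks can be illustrated with a minimal standard-library sketch. The block fields are simplified and hypothetical; a real node would also handle consensus, timestamps, transaction batching, and peer-to-peer transmission.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over the block's contents, excluding its own hash field."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    encoded = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block whose prev_hash links it to the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def chain_valid(chain: list) -> bool:
    """Each block must hash correctly and reference its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Because each block's hash covers its data and its predecessor's hash, tampering with any stored bill record invalidates that block and every block after it, which is what gives the stored bill-image data its anti-counterfeiting property.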
A bill image is acquired, and layer separation is performed on the bill image by a pre-trained separation model to obtain a machine-printed image and a printed image; separating the machine-printed image from the printed image facilitates the processing of subsequent steps. The machine-printed image and the printed image are then recognized by their respective pre-trained recognition models and converted into machine-printed text and printed text; using this dual recognition model improves the text recognition rate for the machine-printed image and the printed image. Finally, the machine-printed text is matched with the printed text to form the bill text. By separating the bill image into a machine-printed image and a printed image, recognizing the two images separately, and then performing the corresponding matching, the text recognition accuracy is improved, and the bill information is re-typeset to obtain the bill text.
FIG. 2 is a functional block diagram of the invoice information extraction apparatus of the present application.
The invoice information extraction apparatus 100 described in this application may be installed in an electronic device. Depending on the implemented functions, the invoice information extraction apparatus 100 may include an acquisition module 101, a separation module 102, a recognition module 103, and a matching module 104. The modules described in this application, which may also be referred to as units, are series of computer-readable instruction segments that can be executed by a processor of the electronic device and can perform fixed functions, and are stored in a memory of the electronic device.
In this embodiment, the functions of the modules/units are as follows:
the acquisition module 101 is configured to acquire a bill image;
further, the invoice information extraction apparatus 100 further includes a request sending module and a calling module;
the request sending module is configured to send a calling request to a database, the calling request carrying a signature verification token;
the calling module receives the signature verification result returned by the database and, when the signature verification result is a pass, calls the bill image in the database;
the signature verification uses RSA asymmetric encryption.
The cooperation of the request sending module and the calling module guarantees the security of the bill image data.
The separation module 102 is configured to perform layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being trained on the basis of a generative adversarial network (GAN) model;
further, the invoice information extraction apparatus 100 further includes a collection module, a preprocessing module, a filling module, and a training module;
the collection module is used to collect bill data and a scene-specific corpus;
the preprocessing module is used to preprocess the bill data to obtain a bill template;
the filling module is used to fill the scene-specific corpus into the corresponding regions of the bill template according to its attributes, to obtain training data;
the training module is used to input the training data into the generative adversarial network model for training, to obtain the separation model.
Specifically, the collection module collects different real bills from each region (each province or municipality). A real bill may be an unprinted bill or an issued bill. If it is an issued bill, the preprocessing module preprocesses it by means such as PS tools and image enhancement, removing the machine-printed text content and retaining only the layout information of the bill, i.e., an unprinted template; if it is an unprinted bill, the image of the unprinted bill is enhanced to make it clearer. The processed issued bill or unprinted bill is used as the bill template.
Through the cooperative use of the collection module, the preprocessing module, the filling module, and the training module, a separation model with a better separation effect is obtained.
Still further, the filling module includes an illumination variation submodule, a blur variation submodule, and a shape variation submodule;
the illumination variation submodule performs brightness or shadow variation on a randomly selected part of the training data to obtain training data after illumination variation; and/or
the blur variation submodule applies Gaussian blur or box blur to a randomly selected part of the training data to obtain training data after blur variation; and/or
the shape variation submodule performs angle variation on a randomly selected part of the training data to obtain training data after shape variation.
Through the cooperation of the illumination variation submodule, the blur variation submodule, and the shape variation submodule, bills under real conditions are further simulated, so that the trained model is closer to real conditions and its processing effect is better.
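For illustration, brightness variation and box blur (two of the augmentations above) can be implemented directly on a grayscale pixel grid. This is a minimal sketch: a production pipeline would normally use an image library such as Pillow or OpenCV, and the window size here is an assumption.

```python
def vary_brightness(img, factor):
    """img: 2-D list of grayscale values in [0, 255]; factor > 1 brightens,
    factor < 1 darkens. Results are clamped to the valid range."""
    return [[min(255, max(0, round(p * factor))) for p in row] for row in img]

def box_blur(img, k=1):
    """Box blur averaging a (2k+1) x (2k+1) window, clamped at the borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - k), min(h, y + k + 1))
                      for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(window) // len(window)
    return out
```

Applying such transforms to randomly selected training samples yields brightened, darkened, or blurred variants of the synthesized bills, approximating the conditions under which real bills are photographed or scanned.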
The recognition module 103 is configured to recognize the machine-printed image and the printed image using their respective pre-trained recognition models and to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being trained on the basis of the convolutional recurrent neural network model;
specifically, the recognition module 103 recognizes the machine-printed image and the printed image with different recognition models, both trained on the basis of the convolutional recurrent neural network model but on different training data: the recognition model corresponding to the machine-printed image is trained with images of the corresponding font, and likewise the recognition model corresponding to the printed image is trained with images of its corresponding font.
进一步的,所述***信息抽取装置100还包括定位切割模块;Further, the invoice information extraction device 100 further includes a positioning and cutting module;
所述定位切割模块基于预训练的定位切割模型将所述机打图像和印刷图像分为多个区域图像,并得到各个所述区域图像对应的区域坐标,所述定位切割模型为基于DBNet模型训练得到。The positioning and cutting module divides the machine-printed image and the printing image into a plurality of regional images based on a pre-trained positioning and cutting model, and obtains the regional coordinates corresponding to each of the regional images, and the positioning and cutting model is based on DBNet model training. get.
具体的定位切割模块通过定位模型将整张机打图像和印刷图像分别分为多个区域图像,在本申请实施例中以矩形的方式来进行区域图像的切分,并得到多个区域图像对应矩形四个点的坐标数据,即区域坐标,所述坐标数据以整个票据的相邻两边为坐标轴,整个票据位于第一象限,从而得到对应的坐标数据,所述机打图像和印刷图像共用同一坐标轴。The specific positioning and cutting module divides the entire machine-printed image and the printed image into multiple regional images through the positioning model. In the embodiment of the present application, the regional images are segmented in a rectangular manner, and the corresponding regional images are obtained. The coordinate data of the four points of the rectangle, that is, the area coordinates, the coordinate data takes the adjacent two sides of the entire bill as the coordinate axis, and the entire bill is located in the first quadrant, so as to obtain the corresponding coordinate data, the machine-printed image and the printed image are shared the same coordinate axis.
定位模块通过定位切割模型将机打图像和印刷图像都分为多个区域图像,便于后续进行文本识别后,机打文本和印刷文本匹配,且便于机打文本填入对应的印刷文本内。The positioning module divides both the machine-printed image and the printed image into multiple regional images through the positioning and cutting model, so that the machine-printed text and the printed text can be matched after subsequent text recognition, and it is convenient for the machine-printed text to be filled in the corresponding printed text.
The matching module 104 is configured to match the machine-printed text with the printed text to form the bill text.
Specifically, after the matching module 104 matches the machine-printed text with the printed text again, the bill text is formed; in the resulting bill text, the machine-printed text in each region is aligned with the corresponding printed text, i.e., the bill is re-typeset. This avoids the problem, in bills obtained directly by machine printing, of machine-printed text spanning lines or covering the printed text, and thereby structures the bill text.
Further, the matching module 104 includes a coordinate matching submodule and a first corresponding filling submodule;
the coordinate matching submodule matches each first region text in the machine-printed text with each second region text in the printed text based on the region coordinates;
after the matching is completed, the first corresponding filling submodule fills each first region text into the corresponding second region text based on the region coordinates, so as to form the bill text.
Specifically, the coordinate matching submodule matches each region text corresponding to the machine-printed text with each region text corresponding to the printed text according to the region coordinates: a first center coordinate corresponding to a region text in the machine-printed text is calculated, the second center coordinates corresponding to the region texts in the printed text are calculated, and the distances between the first center coordinate and the multiple second center coordinates are compared; the first corresponding filling submodule then accurately matches the region text in the machine-printed text to whichever region text in the printed text is nearest. Each region image is rectangular, and a center coordinate refers to the center of the region coordinates, i.e., the intersection of the diagonals of the rectangle corresponding to the region image.
Through the cooperation of the coordinate matching submodule and the first corresponding filling submodule, each machine-printed region text is filled into whichever printed region text is nearest in terms of the distance between region coordinates, achieving the corresponding re-typesetting of the bill.
Further, the matching module 104 includes a text matching submodule and a second corresponding filling submodule;
the text matching submodule matches each region text in the machine-printed text with each region text in the printed text using a pre-trained matching model to obtain a matching value, the matching model being trained on the basis of the BIMPM model;
when the matching value is greater than or equal to a preset value, the second corresponding filling submodule fills each region text in the machine-printed text into the corresponding region text in the printed text based on the region coordinates, so as to form the bill text.
The text matching submodule matches each region text in the machine-printed text with each region text in the printed text and obtains a matching value; when the matching value is greater than or equal to the preset value, the second corresponding filling submodule fills each region text in the machine-printed text into the corresponding region text in the printed text. Specifically, the region coordinates corresponding to each region text in the machine-printed text must lie entirely within the region coordinates of the corresponding region text in the printed text, so that each region text in the machine-printed text is accurately filled into the corresponding region text in the printed text.
Through the cooperation of the text matching submodule and the second corresponding filling submodule, text matching is performed between the machine-printed text and the printed text; after the matching is completed, if the preset requirement is met, the machine-printed text is filled into the corresponding printed text, which likewise achieves the re-typesetting of the bill.
With the above apparatus, the invoice information extraction apparatus 100, through the cooperative use of the acquisition module 101, the separation module 102, the recognition module 103, and the matching module 104, separates the bill image into a machine-printed image and a printed image, recognizes the two images separately, and then performs the corresponding matching, which improves the text recognition accuracy, and re-typesets the bill information to obtain the bill text.
To solve the above technical problems, an embodiment of the present application further provides a computer device. For details, please refer to FIG. 3, which is a block diagram of the basic structure of the computer device of this embodiment.
The computer device 4 includes a memory 41, a processor 42, and a network interface 43 that are communicatively connected to one another through a system bus. It should be noted that only the computer device 4 with the components 41-43 is shown in the figure, but it should be understood that not all of the shown components are required to be implemented, and more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The computer device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The computer device may perform human-computer interaction with a user through a keyboard, a mouse, a remote control, a touch pad, a voice control device, or the like.
The memory 41 includes at least one type of readable storage medium, the readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, for example a hard disk or internal memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device 4. Of course, the memory 41 may also include both the internal storage unit of the computer device 4 and its external storage device. In this embodiment, the memory 41 is generally used to store the operating system and various application software installed on the computer device 4, for example the computer-readable instructions of the method for extracting invoice information. In addition, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may, in some embodiments, be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 42 is generally used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to run the computer-readable instructions stored in the memory 41 or to process data, for example to run the computer-readable instructions of the method for extracting invoice information.
The network interface 43 may include a wireless network interface or a wired network interface, and is generally used to establish a communication connection between the computer device 4 and other electronic devices.
In this embodiment, when the processor executes the computer-readable instructions stored in the memory, the steps of the method for extracting invoice information of the above embodiment are implemented: a bill image is acquired, and layer separation is performed on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image; separating the machine-printed image from the printed image facilitates the processing of subsequent steps; the machine-printed image and the printed image are recognized by respective corresponding pre-trained recognition models and converted into machine-printed text and printed text, and using this dual recognition model improves the text recognition rate for the machine-printed image and the printed image; finally, the machine-printed text is matched with the printed text to form the bill text. By separating the bill image into a machine-printed image and a printed image, recognizing the two images separately, and then performing the corresponding matching, the text recognition accuracy is improved, and the bill information is re-typeset to obtain the bill text.
The present application further provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions executable by at least one processor, so as to cause the at least one processor to perform the steps of the method for extracting invoice information described above: a bill image is acquired, and layer separation is performed on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image; separating the machine-printed image from the printed image facilitates the processing of subsequent steps; the machine-printed image and the printed image are recognized by respective corresponding pre-trained recognition models and converted into machine-printed text and printed text, and using this dual recognition model improves the text recognition rate; finally, the machine-printed text is matched with the printed text to form the bill text. By separating the bill image into a machine-printed image and a printed image, recognizing the two images separately, and then performing the corresponding matching, the text recognition accuracy is improved, and the bill information is re-typeset to obtain the bill text. The computer-readable storage medium may be non-volatile or volatile.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
Obviously, the embodiments described above are only some, not all, of the embodiments of the present application; the accompanying drawings show preferred embodiments of the present application but do not limit the scope of its patent. The present application may be embodied in many different forms; rather, these embodiments are provided so that the understanding of the disclosure of the present application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing specific embodiments or make equivalent replacements for some of the technical features therein. Any equivalent structure made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present application.

Claims (20)

  1. An invoice information extraction method, the method comprising:
    acquiring a bill image;
    performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being obtained by training a generative adversarial network model;
    recognizing the machine-printed image and the printed image with their corresponding pre-trained recognition models to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being obtained by training a convolutional recurrent neural network model;
    matching the machine-printed text with the printed text to form the bill text.
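The four steps of claim 1 can be summarized as a short pipeline sketch. All model objects below (`separation_model`, `machine_ocr`, `printed_ocr`) are hypothetical stand-ins supplied by the caller, not the patent's trained GAN and CRNN models:

```python
# Sketch of the four claimed steps as one pipeline. Every model object
# here is a hypothetical stand-in: the patent's separation model is a
# trained GAN and the recognizers are CRNN-based, neither reproduced here.

def extract_invoice(bill_image, separation_model, machine_ocr, printed_ocr):
    machine_img, printed_img = separation_model(bill_image)  # layer separation
    machine_text = machine_ocr(machine_img)   # region id -> typed value
    printed_text = printed_ocr(printed_img)   # region id -> printed field label
    # pair typed values with the printed labels of the same regions
    return {printed_text[r]: machine_text.get(r, "") for r in printed_text}

# Toy stand-ins so the sketch runs end to end
demo = extract_invoice(
    bill_image="raw scan",
    separation_model=lambda img: ("machine layer", "printed layer"),
    machine_ocr=lambda layer: {"r1": "42.00"},
    printed_ocr=lambda layer: {"r1": "Amount"},
)
```

The region-keyed matching in the last step is made concrete by claims 5 to 7.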
  2. The invoice information extraction method according to claim 1, wherein, before acquiring the bill image, the method further comprises:
    sending a call request to a database, the call request carrying a signature verification token;
    receiving a signature verification result returned by the database and, when the result is a pass, retrieving the bill image from the database;
    wherein the signature verification uses RSA asymmetric encryption.
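Claim 2 only names RSA asymmetric signature verification; the following is a textbook-style toy illustration of that flow. The tiny fixed key and the `sign`/`verify` helpers are invented for illustration and are not the patent's implementation; a real service would use a vetted cryptography library with full-size keys:

```python
import hashlib

# Toy RSA signature check illustrating the claimed verification flow.
# The tiny fixed key (p=61, q=53) is for illustration only.
N, E, D = 61 * 53, 17, 2753  # modulus, public exponent, private exponent

def digest(message: bytes) -> int:
    # hash the message and reduce it into the RSA modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    # holder of the private exponent issues the token
    return pow(digest(message), D, N)

def verify(message: bytes, signature: int) -> bool:
    # anyone with the public exponent can check the token
    return pow(signature, E, N) == digest(message)

token = sign(b"call-request:ticket-images")
```

A tampered signature fails the check because exponentiation by `E` inverts exponentiation by `D` only for the genuine value.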
  3. The invoice information extraction method according to claim 1, wherein, before performing layer separation on the bill image using the pre-trained separation model to obtain the machine-printed image and the printed image, the method further comprises:
    collecting bill data and a scene-specific corpus;
    preprocessing the bill data to obtain a bill template;
    filling the scene-specific corpus into the corresponding areas of the bill template according to attributes to obtain training data;
    inputting the training data into a generative adversarial network model for training to obtain the separation model.
  4. The invoice information extraction method according to claim 3, wherein, after obtaining the training data, the method further comprises:
    applying brightness or shadow variation to a randomly selected portion of the training data to obtain illumination-varied training data; and/or
    blurring a randomly selected portion of the training data using Gaussian blur or box blur to obtain blur-varied training data; and/or
    applying angle variation to a randomly selected portion of the training data to obtain shape-varied training data.
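The three and/or augmentation branches could be sketched in pure NumPy as follows; the helper names are invented, and `np.rot90` stands in for arbitrary-angle rotation (a real pipeline would more likely use an image library such as Pillow or OpenCV):

```python
import numpy as np

# Illumination, blur, and angle variation on a grayscale bill crop.

def vary_brightness(img, factor):
    # brightness/shadow variation: scale pixel values and clamp to 8 bits
    return np.clip(np.rint(img.astype(np.float32) * factor), 0, 255).astype(np.uint8)

def box_blur(img, k=3):
    # average over a k x k neighborhood using padded shifted copies
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    shifts = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(k) for dx in range(k)]
    return (sum(shifts) / (k * k)).astype(np.uint8)

def vary_angle(img, quarter_turns=1):
    # stand-in for arbitrary-angle rotation
    return np.rot90(img, quarter_turns)

sample = np.full((32, 64), 200, dtype=np.uint8)  # uniform gray crop
augmented = [vary_brightness(sample, 0.7), box_blur(sample), vary_angle(sample)]
```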
  5. The invoice information extraction method according to any one of claims 1 to 4, wherein, before recognizing the machine-printed image and the printed image with their corresponding pre-trained recognition models, the method further comprises:
    dividing the machine-printed image and the printed image into a plurality of region images based on a pre-trained localization-and-cropping model, and obtaining the region coordinates corresponding to each region image, the localization-and-cropping model being obtained by training on the DBNet model.
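Once a localization model has produced region coordinates, cutting each region image out of a layer is plain array slicing. The axis-aligned `(x1, y1, x2, y2)` box format below is an assumption for illustration; DBNet itself predicts shrunk text kernels that are post-processed into polygons:

```python
import numpy as np

# Cut region images out of a layer given detector-style boxes.
# Box format (x1, y1, x2, y2) in pixel coordinates is assumed here.

def crop_regions(layer, boxes):
    regions = []
    for (x1, y1, x2, y2) in boxes:
        # rows are the y axis, columns the x axis
        regions.append((layer[y1:y2, x1:x2], (x1, y1, x2, y2)))
    return regions  # list of (region image, region coordinates)

layer = np.zeros((100, 200), dtype=np.uint8)
layer[10:20, 30:90] = 255  # a bright text line
crops = crop_regions(layer, [(30, 10, 90, 20), (0, 50, 50, 60)])
```

Keeping the coordinates alongside each crop is what later lets the machine-printed and printed texts of the same region be paired.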
  6. The invoice information extraction method according to claim 5, wherein matching the machine-printed text with the printed text to form the bill text comprises:
    matching each first region text in the machine-printed text with each second region text in the printed text based on the region coordinates;
    after the matching is completed, filling each first region text into the corresponding second region text based on the region coordinates to form the bill text.
  7. The invoice information extraction method according to claim 5, wherein matching the machine-printed text with the printed text to form the bill text comprises:
    matching each region text in the machine-printed text with each region text in the printed text using a pre-trained matching model to obtain a matching value, the matching model being obtained by training on the BIMPM model;
    when the matching value is greater than or equal to a preset value, filling each region text of the machine-printed text into the corresponding region text of the printed text based on the region coordinates to form the bill text.
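The threshold-based fill-in of claim 7 can be illustrated with a plain string-similarity score standing in for the trained BIMPM matching model; the field names and noisy OCR labels below are invented for the demo:

```python
from difflib import SequenceMatcher

# Stand-in for the claimed matching step: the patent trains a BIMPM
# network to score text pairs; here difflib's similarity ratio is the
# matching value, compared against a preset threshold.

def fill_bill(printed_labels, machine_values, canonical_fields, threshold=0.8):
    bill = {}
    for coord, label in printed_labels.items():
        # score the OCR'd printed label against each known field name
        best = max(canonical_fields,
                   key=lambda field: SequenceMatcher(None, label, field).ratio())
        if SequenceMatcher(None, label, best).ratio() >= threshold:
            # fill the typed value found at the same coordinates into that field
            bill[best] = machine_values.get(coord, "")
    return bill

canonical = ["invoice no", "amount", "date"]
printed = {(30, 10): "lnvoice no", (30, 40): "amouni"}  # noisy OCR labels
machine = {(30, 10): "NO-12345", (30, 40): "42.00"}     # typed values
bill = fill_bill(printed, machine, canonical)
```

A character-level ratio tolerates OCR noise such as `l`/`i` confusion while still rejecting labels that match no known field.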
  8. An invoice information extraction apparatus, the apparatus comprising:
    an acquisition module for acquiring a bill image;
    a separation module for performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being obtained by training a generative adversarial network model;
    a recognition module for recognizing the machine-printed image and the printed image with their corresponding pre-trained recognition models and converting the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being obtained by training a convolutional recurrent neural network model;
    a matching module for matching the machine-printed text with the printed text to form the bill text.
  9. A computer device, comprising: at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores computer-readable instructions, and the processor, when executing the computer-readable instructions, implements the following steps:
    acquiring a bill image;
    performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being obtained by training a generative adversarial network model;
    recognizing the machine-printed image and the printed image with their corresponding pre-trained recognition models to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being obtained by training a convolutional recurrent neural network model;
    matching the machine-printed text with the printed text to form the bill text.
  10. The computer device according to claim 9, wherein, before performing layer separation on the bill image using the pre-trained separation model to obtain the machine-printed image and the printed image, the steps further comprise:
    collecting bill data and a scene-specific corpus;
    preprocessing the bill data to obtain a bill template;
    filling the scene-specific corpus into the corresponding areas of the bill template according to attributes to obtain training data;
    inputting the training data into a generative adversarial network model for training to obtain the separation model.
  11. The computer device according to claim 10, wherein, after obtaining the training data, the steps further comprise:
    applying brightness or shadow variation to a randomly selected portion of the training data to obtain illumination-varied training data; and/or
    blurring a randomly selected portion of the training data using Gaussian blur or box blur to obtain blur-varied training data; and/or
    applying angle variation to a randomly selected portion of the training data to obtain shape-varied training data.
  12. The computer device according to any one of claims 9 to 11, wherein, before recognizing the machine-printed image and the printed image with their corresponding pre-trained recognition models, the steps further comprise:
    dividing the machine-printed image and the printed image into a plurality of region images based on a pre-trained localization-and-cropping model, and obtaining the region coordinates corresponding to each region image, the localization-and-cropping model being obtained by training on the DBNet model.
  13. The computer device according to claim 12, wherein matching the machine-printed text with the printed text to form the bill text comprises:
    matching each first region text in the machine-printed text with each second region text in the printed text based on the region coordinates;
    after the matching is completed, filling each first region text into the corresponding second region text based on the region coordinates to form the bill text.
  14. The computer device according to claim 12, wherein matching the machine-printed text with the printed text to form the bill text comprises:
    matching each region text in the machine-printed text with each region text in the printed text using a pre-trained matching model to obtain a matching value, the matching model being obtained by training on the BIMPM model;
    when the matching value is greater than or equal to a preset value, filling each region text of the machine-printed text into the corresponding region text of the printed text based on the region coordinates to form the bill text.
  15. A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the following steps:
    acquiring a bill image;
    performing layer separation on the bill image using a pre-trained separation model to obtain a machine-printed image and a printed image, the separation model being obtained by training a generative adversarial network model;
    recognizing the machine-printed image and the printed image with their corresponding pre-trained recognition models to convert the machine-printed image and the printed image into machine-printed text and printed text, the recognition models being obtained by training a convolutional recurrent neural network model;
    matching the machine-printed text with the printed text to form the bill text.
  16. The computer-readable storage medium according to claim 15, wherein, before performing layer separation on the bill image using the pre-trained separation model to obtain the machine-printed image and the printed image, the steps further comprise:
    collecting bill data and a scene-specific corpus;
    preprocessing the bill data to obtain a bill template;
    filling the scene-specific corpus into the corresponding areas of the bill template according to attributes to obtain training data;
    inputting the training data into a generative adversarial network model for training to obtain the separation model.
  17. The computer-readable storage medium according to claim 16, wherein, after obtaining the training data, the steps further comprise:
    applying brightness or shadow variation to a randomly selected portion of the training data to obtain illumination-varied training data; and/or
    blurring a randomly selected portion of the training data using Gaussian blur or box blur to obtain blur-varied training data; and/or
    applying angle variation to a randomly selected portion of the training data to obtain shape-varied training data.
  18. The computer-readable storage medium according to any one of claims 15 to 17, wherein, before recognizing the machine-printed image and the printed image with their corresponding pre-trained recognition models, the steps further comprise:
    dividing the machine-printed image and the printed image into a plurality of region images based on a pre-trained localization-and-cropping model, and obtaining the region coordinates corresponding to each region image, the localization-and-cropping model being obtained by training on the DBNet model.
  19. The computer-readable storage medium according to claim 18, wherein matching the machine-printed text with the printed text to form the bill text comprises:
    matching each first region text in the machine-printed text with each second region text in the printed text based on the region coordinates;
    after the matching is completed, filling each first region text into the corresponding second region text based on the region coordinates to form the bill text.
  20. The computer-readable storage medium according to claim 18, wherein matching the machine-printed text with the printed text to form the bill text comprises:
    matching each region text in the machine-printed text with each region text in the printed text using a pre-trained matching model to obtain a matching value, the matching model being obtained by training on the BIMPM model;
    when the matching value is greater than or equal to a preset value, filling each region text of the machine-printed text into the corresponding region text of the printed text based on the region coordinates to form the bill text.
PCT/CN2021/090807 2020-12-16 2021-04-29 Invoice information extraction method and apparatus, computer device and storage medium WO2022126978A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011487344.1A CN112541443B (en) 2020-12-16 2020-12-16 Invoice information extraction method, invoice information extraction device, computer equipment and storage medium
CN202011487344.1 2020-12-16

Publications (1)

Publication Number Publication Date
WO2022126978A1

Family

ID=75018963

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090807 WO2022126978A1 (en) 2020-12-16 2021-04-29 Invoice information extraction method and apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN112541443B (en)
WO (1) WO2022126978A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541443B (en) * 2020-12-16 2024-05-10 平安科技(深圳)有限公司 Invoice information extraction method, invoice information extraction device, computer equipment and storage medium
CN114898385A (en) * 2022-05-07 2022-08-12 微民保险代理有限公司 Data processing method, device, equipment, readable storage medium and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170004374A1 (en) * 2015-06-30 2017-01-05 Yahoo! Inc. Methods and systems for detecting and recognizing text from images
CN110399851A (en) * 2019-07-30 2019-11-01 广东工业大学 A kind of image processing apparatus, method, equipment and readable storage medium storing program for executing
CN111461099A (en) * 2020-03-27 2020-07-28 重庆农村商业银行股份有限公司 Bill identification method, system, equipment and readable storage medium
CN111931784A (en) * 2020-09-17 2020-11-13 深圳壹账通智能科技有限公司 Bill recognition method, system, computer device and computer-readable storage medium
CN112085029A (en) * 2020-08-31 2020-12-15 浪潮通用软件有限公司 Invoice identification method, equipment and medium
CN112541443A (en) * 2020-12-16 2021-03-23 平安科技(深圳)有限公司 Invoice information extraction method and device, computer equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977723B (en) * 2017-12-22 2021-10-22 苏宁云商集团股份有限公司 Large bill picture character recognition method
CN109635627A (en) * 2018-10-23 2019-04-16 中国平安财产保险股份有限公司 Pictorial information extracting method, device, computer equipment and storage medium
CN109919014B (en) * 2019-01-28 2023-11-03 平安科技(深圳)有限公司 OCR (optical character recognition) method and electronic equipment thereof
CN111291629A (en) * 2020-01-17 2020-06-16 平安医疗健康管理股份有限公司 Method and device for recognizing text in image, computer equipment and computer storage medium
CN111652232B (en) * 2020-05-29 2023-08-22 泰康保险集团股份有限公司 Bill identification method and device, electronic equipment and computer readable storage medium
CN111950356B (en) * 2020-06-30 2024-04-19 深圳市雄帝科技股份有限公司 Seal text positioning method and device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222498A (en) * 2022-07-20 2022-10-21 北京令才科技有限公司 Method for comparing, packaging and configuring multiple element arrays
CN115222498B (en) * 2022-07-20 2023-04-18 北京令才科技有限公司 Method for comparing, packaging and configuring multi-element arrays
CN115431653A (en) * 2022-08-13 2022-12-06 绍兴市财税印刷有限公司 Tax bill printing method and equipment based on anti-fake technology

Also Published As

Publication number Publication date
CN112541443B (en) 2024-05-10
CN112541443A (en) 2021-03-23

Similar Documents

Publication Publication Date Title
WO2022126978A1 (en) Invoice information extraction method and apparatus, computer device and storage medium
CN112037077A (en) Seal identification method, device, equipment and storage medium based on artificial intelligence
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
EP4109332A1 (en) Certificate authenticity identification method and apparatus, computer-readable medium, and electronic device
CN109710907A (en) A kind of generation method and equipment of electronic document
CN112052850A (en) License plate recognition method and device, electronic equipment and storage medium
CN112580108B (en) Signature and seal integrity verification method and computer equipment
CN113887408B (en) Method, device, equipment and storage medium for detecting activated face video
CN112330331A (en) Identity verification method, device and equipment based on face recognition and storage medium
CN113033543A (en) Curved text recognition method, device, equipment and medium
CN112581344A (en) Image processing method and device, computer equipment and storage medium
CN112528998A (en) Certificate image processing method and device, electronic equipment and readable storage medium
CN113887438A (en) Watermark detection method, device, equipment and medium for face image
CN112668580A (en) Text recognition method, text recognition device and terminal equipment
CN111062262B (en) Invoice recognition method and invoice recognition device
CN112668575A (en) Key information extraction method and device, electronic equipment and storage medium
CN114708461A (en) Multi-modal learning model-based classification method, device, equipment and storage medium
CN115758451A (en) Data labeling method, device, equipment and storage medium based on artificial intelligence
CN112699646A (en) Data processing method, device, equipment and medium
CN114495146A (en) Image text detection method and device, computer equipment and storage medium
CN112434506A (en) Electronic protocol signing processing method, device, computer equipment and medium
CN112560855A (en) Image information extraction method and device, electronic equipment and storage medium
CN115880702A (en) Data processing method, device, equipment, program product and storage medium
CN115690819A (en) Big data-based identification method and system
CN112395834B (en) Brain graph generation method, device and equipment based on picture input and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21904917; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21904917; Country of ref document: EP; Kind code of ref document: A1)