CN112507864A - Credit archive identification method based on convolutional neural network - Google Patents


Info

Publication number
CN112507864A
CN112507864A CN202011412379.9A CN202011412379A CN112507864A CN 112507864 A CN112507864 A CN 112507864A CN 202011412379 A CN202011412379 A CN 202011412379A CN 112507864 A CN112507864 A CN 112507864A
Authority
CN
China
Prior art keywords
neural network
convolutional neural
credit
training
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011412379.9A
Other languages
Chinese (zh)
Inventor
李明亮
许雷
周沛松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shijiazhuang Harmony Is Science And Technology Co ltd
Hebei GEO University
Original Assignee
Shijiazhuang Harmony Is Science And Technology Co ltd
Hebei GEO University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shijiazhuang Harmony Is Science And Technology Co ltd, Hebei GEO University filed Critical Shijiazhuang Harmony Is Science And Technology Co ltd
Priority to CN202011412379.9A priority Critical patent/CN112507864A/en
Publication of CN112507864A publication Critical patent/CN112507864A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03Credit; Loans; Processing thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/41Analysis of document content
    • G06V30/413Classification of content, e.g. text, photographs or tables

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of computer vision and discloses a credit archive identification method based on a convolutional neural network. Collected credit archive images are geometrically corrected and then expanded with image enhancement techniques to obtain a self-made data set; a convolutional neural network architecture is built on the deep learning open-source framework TensorFlow and trained to obtain a convolutional neural network model; a credit file is then scanned and the scanned picture is loaded into the model for identification and classification. The method offers high identification accuracy, strong generalization and high robustness, makes credit file identification more convenient and secure, and improves working efficiency. The invention is suitable for credit file identification.

Description

Credit archive identification method based on convolutional neural network
Technical Field
The invention belongs to the technical field of computer vision, and relates to credit archive identification, in particular to a credit archive identification method based on a convolutional neural network.
Background
A credit file is formed by an enterprise in the course of transacting credit business with a bank; it records and reflects the important documents and evidence of that business, including related contracts and certificates, the borrower's basic data, the borrower's credit business data, and so on. Data show that credit remains the dominant loan model in China, occupying 78.61% of the loan market. Statistical forecasts put the scale of enterprise credit above the 10-trillion level by 2020, so the industry has huge development potential. However, in the course of a credit transaction the enterprise must submit a large amount of paper credit file material to the lending bank, after which a bank teller manually checks the completeness and compliance of the submitted material to decide whether to grant a loan. The teller therefore spends a great deal of time checking the material submitted by the customer, which leads to low working efficiency. Moreover, when manually checking a large amount of unclassified enterprise credit file material one item at a time, it is difficult for the teller to keep the error rate to a minimum.
With the remarkable progress of deep learning in artificial intelligence in recent years, current deep learning algorithms have achieved excellent performance in image recognition and speech transcription tasks, in some cases exceeding human-level performance, and image recognition methods in industry have largely shifted from traditional methods to deep learning.
In the conventional image recognition field, this task can be handled by directly calling the open-source OCR character recognition framework Tesseract. However, that approach currently cannot recognize handwritten Chinese characters or digits, which causes many shortcomings in the credit file identification scenario: for example, when a customer must sign a paper file before it is scanned, the customer's handwritten signature cannot be recognized, so the file lacks legal effect.
Disclosure of Invention
The invention aims to provide a credit file identification method based on a convolutional neural network, so as to improve the efficiency of a bank teller in checking the compliance and integrity of credit application materials submitted by a customer.
To achieve this purpose, the invention adopts the following technical scheme:
a credit archive identification method based on a convolutional neural network comprises the following steps:
s1, after geometric correction is carried out on the collected credit archive images, the images are expanded through an image enhancement technology to obtain a self-made data set;
s2, selecting training parameters, building a convolutional neural network architecture based on deep learning open source framework Tensorflow, dividing images in a self-made data set into a training set and a testing set, loading the training set to the convolutional neural network for training, and performing visual representation on a training result; fine-tuning the training parameters of the convolutional neural network according to the training result, loading the test set to the convolutional neural network for accuracy rate testing, and fine-tuning the training parameters of the convolutional neural network until the accuracy rate of the test set reaches an expected standard, thus obtaining a convolutional neural network model;
and S3, scanning the credit file, and loading the scanned picture into the convolutional neural network model for identification and classification.
As a limitation: the geometric correction in step S1 is implemented by calling a method of affine transformation in the opencv function library, specifically:
the affine matrix M is solved automatically from the correspondence between the four vertices of the image before and after the transformation:

pos2 = M · pos1,  M = [a11  a12; a21  a22]

wherein pos1 and pos2 denote the corresponding point positions before and after the image transformation, and a11, a12, a21, a22 denote the elements of the affine matrix;

and then the affine transformation of the image is realized with the function cv2.warpAffine(), whose coordinate transformation formula is:

[u2; v2] = [a11  a12; a21  a22] · [u1; v1] + [x; y]

wherein (u1, v1) and (u2, v2) denote the pixel coordinates before and after the transformation, and (x, y) denotes the translation component.
As a further limitation: in step S1, the image size in the homemade data set is adjusted to 32 × 32 pixels, and the img _ to _ array method in the numpy function library is called to store the pixel values of the image in an array form in a 4D tensor with a shape of (128, 32, 32, 3).
As a further limitation: the homemade data set includes 10 credit file categories, which are organization code certificate, tax register certificate, business license, standing document, credit analysis report, loan application form, loan contract, financial statement, low-pressure insurance certificate, and repayment schedule.
As another limitation: in step S2, after normalization processing is performed on the self-made data set, the training set and the test set are distributed according to the ratio of 8:2, and a python self-contained function library matplotlib module is called to visually represent the training result.
As a further limitation: the architecture of the convolutional neural network in step S2 is composed of:
conv(32)+conv(32)+pool(64)+conv(64)+conv(64)+pool(128)+flat()+Den()+Dropout()+Den(10)
wherein conv represents a convolutional layer, pool a pooling layer, Den a fully connected layer, Dropout the Dropout function, and flat() a flattening layer.
As a further limitation: the training of the convolutional neural network in step S2 specifically includes: defining an assumed function for model prediction, assigning weights of the neural network, and performing classification prediction on the input image to obtain a predicted value y _ pred;
and a distance value between the predicted value y_pred and the real value y is obtained with a squared-error cost function, the squared-error loss function being:

J(θ0, θ1) = (1 / 2m) · Σ (i = 1 … m) ( hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾ )²

wherein hθ(x) = θ0 + θ1·x is the linear prediction function, θ0, θ1, θ2, …, θm are the model parameters, and m is the total number of samples;
all weight values corresponding to the loss function are solved analytically; the gradient of the loss function is computed by the chain rule, and the weights are updated step by step against the direction of the gradient with a gradient descent algorithm until the back-propagation algorithm finds the weight parameters that minimize the loss, according to:

θj := θj − α · ∂J(θ0, θ1) / ∂θj

wherein j is 0 or 1, and α is the convergence rate (learning rate);
Dropout is used to control overfitting, the network calculation formulas being:

rj(l) = Bernoulli(p)
ỹ(l) = r(l) ∗ y(l)
zi(l+1) = wi(l+1) · ỹ(l) + bi(l+1)
yi(l+1) = f(zi(l+1))

wherein the Bernoulli function generates the vector rj marking neurons selected for discarding with probability p, wi denotes a weight matrix, rj, ỹ(l), y, zi and bi each denote a one-dimensional vector, and f(zi(l+1)) denotes the ReLU activation function;
and according to the prediction information output by the neural network on the test set, all credit archive images are classified one by one according to the sample labels, and the trained neural network is saved as the convolutional neural network model.
Owing to the adoption of this scheme, compared with the prior art the invention has the following beneficial effects:
Because the credit files cover many categories, the trained convolutional neural network model achieves high identification accuracy; Dropout is used to control overfitting, so the model generalizes better and avoids overfitting, further improving classification accuracy. Normalization of the self-made data set improves the robustness and generalization ability of the model, makes credit file identification more convenient and secure, improves the efficiency with which a bank teller checks the compliance and completeness of credit application materials submitted by a customer, saves time, and lightens the workload of bank staff.
The invention is suitable for credit file identification.
Drawings
The invention is described in further detail below with reference to the figures and the embodiments.
FIG. 1 is a flow diagram of credit profile identification according to an embodiment of the present invention;
FIG. 2 is a flowchart of convolutional neural network model training according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of image expansion according to an embodiment of the present invention;
FIG. 4 is a diagram of a 4D tensor feature model according to an embodiment of the present invention;
fig. 5 shows the evaluation result of the convolutional neural network model according to the embodiment of the present invention.
Detailed Description
The present invention is further described with reference to the following examples, but it should be understood by those skilled in the art that the present invention is not limited to the following examples, and any modifications and equivalent changes based on the specific examples of the present invention are within the scope of the claims of the present invention.
Credit archive identification method based on convolutional neural network
A credit archive identification method based on a convolutional neural network is disclosed, wherein a credit archive identification flow chart is shown in figure 1, a convolutional neural network model training flow chart is shown in figure 2, and the method specifically comprises the following steps:
s1, carrying out geometric correction on the collected credit archive images by calling an affine transformation method in an opencv function library, and then expanding the images by using an image enhancement technology, wherein an image expansion schematic diagram is shown in FIG. 3, so that a self-made data set is obtained, the self-made data set comprises 10 credit archive categories, namely an organization code certificate, a tax registration certificate, a business license, an item setting file, a credit analysis report, a loan application form, a loan contract, a financial statement, a low-pressure insurance certificate and a repayment plan form, and 1796 images are obtained in total; adjusting the size of an image in a homemade data set to 32 × 32 pixels, and calling an img _ to _ array method in a numpy function library to store pixel values of the image in an array form in a 4D tensor with a shape of (128, 32, 32, 3), wherein a 4D tensor feature model diagram is shown in fig. 4, Color channels represent the number of Color channels of the image, Height represents the Height of the image, Width represents the Width of the image, Samples: representing sample data;
the affine transformation method specifically comprises the following steps:
the affine matrix M is solved automatically from the correspondence between the four vertices of the image before and after the transformation:

pos2 = M · pos1,  M = [a11  a12; a21  a22]

wherein pos1 and pos2 denote the corresponding point positions before and after the image transformation, and a11, a12, a21, a22 denote the elements of the affine matrix;

and then the affine transformation of the image is realized with the function cv2.warpAffine(), whose coordinate transformation formula is:

[u2; v2] = [a11  a12; a21  a22] · [u1; v1] + [x; y]

wherein (u1, v1) and (u2, v2) denote the pixel coordinates before and after the transformation, and (x, y) denotes the translation component;
the geometric correction is equivalent to the composition of two translations and one rotation about the origin: the centre (x, y) of the picture is moved to the origin, the rotation transformation is applied, and finally the upper-left corner of the picture is set as its origin, so that the original image is corrected.
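The affine solve described above can be sketched in plain numpy (this mirrors what OpenCV's cv2.getAffineTransform computes from point correspondences; the sample points and the pure-translation example are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def solve_affine(pos1, pos2):
    """Solve the 2x3 affine matrix mapping points pos1 to pos2.

    Three point pairs give six linear equations for the six
    unknowns a11, a12, x, a21, a22, y of the affine transform
    u2 = a11*u1 + a12*v1 + x,  v2 = a21*u1 + a22*v1 + y.
    """
    A, b = [], []
    for (u1, v1), (u2, v2) in zip(pos1, pos2):
        A.append([u1, v1, 1, 0, 0, 0])
        A.append([0, 0, 0, u1, v1, 1])
        b.extend([u2, v2])
    coeffs = np.linalg.solve(np.array(A, float), np.array(b, float))
    return coeffs.reshape(2, 3)  # [[a11, a12, x], [a21, a22, y]]

# Hypothetical correspondence: the image is shifted by (10, 10),
# so the solver should recover identity rotation plus that shift.
pos1 = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
pos2 = [(10.0, 10.0), (110.0, 10.0), (10.0, 110.0)]
M = solve_affine(pos1, pos2)
print(M)  # approx [[1, 0, 10], [0, 1, 10]]
```

The resulting 2x3 matrix is exactly the form cv2.warpAffine() expects as its transformation argument.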
S2, training parameters are selected, including the number of class labels, the initial learning rate, the number of training epochs, the batch size, the optimizer and the weight decay rate; the settings are shown in Table 1. A convolutional neural network architecture is built on the deep learning open-source framework TensorFlow, composed as follows:
conv(32)+conv(32)+pool(64)+conv(64)+conv(64)+pool(128)+flat()+Den()+Dropout()+Den(10)
wherein conv represents a convolution layer, pool a pooling layer, Den a fully connected layer, Dropout the Dropout function, and flat() a flattening layer; the figures in parentheses for the convolution and pooling layers give the size and number of convolution kernels, and for the fully connected layer the number of neurons; Den(10) represents a 10-way fully connected layer with a softmax function, which returns an array of 10 probability values (summing to 1), each representing the probability that the current image belongs to one of the 10 categories;
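One possible reading of the layer string above, sketched with the Keras API bundled with TensorFlow. The 3×3 kernels, 2×2 pooling windows and the 128-unit Den() width are assumptions (the patent gives only the bracketed counts), so this is an illustration rather than the exact claimed network:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of conv(32)+conv(32)+pool+conv(64)+conv(64)+pool+
# flat()+Den()+Dropout()+Den(10) for 32x32x3 inputs.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),     # Den(): width assumed
    layers.Dropout(0.5),                      # keep_prob = 0.5 as in the text
    layers.Dense(10, activation="softmax"),   # Den(10): 10 file categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 10)
```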
TABLE 1 setting of training parameters
After the images in the self-made data set are normalized, they are divided into a training set and a test set in a ratio of 8:2; the training set is loaded into the convolutional neural network, which is trained with the keras framework, and the matplotlib module of the Python function library is called to visualize the training result. The training parameters of the convolutional neural network are fine-tuned according to the training results, and the test set is then loaded into the network for an accuracy test; the test results are shown in Table 2;
TABLE 2 recognition accuracy
As can be seen from Table 2, the recognition accuracy on the test set differs considerably across credit file types: the tax registration certificate is recognized best, at 97.34%, followed by the organization code certificate and the business license at 97.12% and 96.94%; the financial statement is recognized worst, at 90.25%; all categories are recognized with accuracy above 90%;
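The normalization and 8:2 split described above can be sketched as follows (random arrays again stand in for the 1796 archive images; the shuffle seed is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the 1796 archive images and their labels.
images = rng.integers(0, 256, size=(1796, 32, 32, 3)).astype("float32")
labels = rng.integers(0, 10, size=1796)

# Normalization: scale pixel values into [0, 1].
images /= 255.0

# Shuffle, then split 8:2 into training and test sets.
order = rng.permutation(len(images))
split = int(0.8 * len(images))
train_idx, test_idx = order[:split], order[split:]
x_train, y_train = images[train_idx], labels[train_idx]
x_test, y_test = images[test_idx], labels[test_idx]

print(len(x_train), len(x_test))  # 1436 360
```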
The training parameters of the convolutional neural network are fine-tuned until the accuracy on the test set reaches the expected standard. The training of the convolutional neural network specifically comprises: defining a hypothesis function for model prediction, assigning the weights of the neural network, and performing classification prediction on the input image, i.e. forward propagation, to obtain the predicted value y_pred;
and a distance value between the predicted value y_pred and the real value y is obtained with a squared-error cost function, the squared-error loss function being:

J(θ0, θ1) = (1 / 2m) · Σ (i = 1 … m) ( hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾ )²

wherein hθ(x) = θ0 + θ1·x is the linear prediction function, θ0, θ1, θ2, …, θm are the model parameters, and m is the total number of samples;
all weight values corresponding to the loss function are solved analytically; the gradient of the loss function is computed by the chain rule, and the weights are updated step by step against the direction of the gradient with a gradient descent algorithm until the back-propagation algorithm finds the weight parameters that minimize the loss, according to:

θj := θj − α · ∂J(θ0, θ1) / ∂θj

wherein j is 0 or 1, and α is the convergence rate (learning rate);
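The update rule above can be worked through concretely for the linear hypothesis hθ(x) = θ0 + θ1·x (the toy data and learning rate here are illustrative, not from the patent):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, epochs=500):
    """Fit h(x) = theta0 + theta1*x by minimizing the squared-error
    cost J = (1/2m) * sum((h(x_i) - y_i)^2) with gradient descent."""
    theta0, theta1 = 0.0, 0.0
    m = len(x)
    for _ in range(epochs):
        err = theta0 + theta1 * x - y   # h(x_i) - y_i for all samples
        # Partial derivatives of J with respect to theta0 and theta1.
        grad0 = err.sum() / m
        grad1 = (err * x).sum() / m
        theta0 -= alpha * grad0         # step against the gradient
        theta1 -= alpha * grad1
    return theta0, theta1

# Toy data generated by y = 1 + 2x: the solver should recover (1, 2).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x
t0, t1 = gradient_descent(x, y)
print(round(t0, 3), round(t1, 3))  # 1.0 2.0
```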
Dropout is used to control overfitting. The Dropout function in TensorFlow takes two standard parameters: the first, x, is the input, here the data output by the preceding pooling layer; the second, keep_prob, sets the probability that a neuron is kept, and is set to 0.5. The network calculation formulas are:

rj(l) = Bernoulli(p)
ỹ(l) = r(l) ∗ y(l)
zi(l+1) = wi(l+1) · ỹ(l) + bi(l+1)
yi(l+1) = f(zi(l+1))

wherein the Bernoulli function generates the vector rj marking neurons selected for discarding, i.e. a vector of 0s and 1s is randomly generated so that after the vector operation an activation value is set to 0 with probability p; wi denotes a weight matrix; rj, ỹ(l), y, zi and bi each denote a one-dimensional vector; and f(zi(l+1)) denotes the ReLU activation function;
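The four dropout formulas above can be sketched in numpy as a single layer step (the layer widths and random seed are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout_layer(y, w, b, p=0.5):
    """One dropout step following the formulas above:
    r ~ Bernoulli(1 - p) keeps each activation with probability 1 - p,
    then an affine map and ReLU produce the next layer's activations."""
    r = rng.binomial(1, 1.0 - p, size=y.shape)  # 0/1 keep mask
    y_tilde = r * y                             # y~ = r * y (drop ~p of units)
    z = w @ y_tilde + b                         # z = w . y~ + b
    return np.maximum(z, 0.0)                   # ReLU: f(z) = max(z, 0)

y = rng.normal(size=64)           # activations from the previous layer
w = rng.normal(size=(32, 64))     # hypothetical weight matrix
b = np.zeros(32)
out = dropout_layer(y, w, b, p=0.5)
print(out.shape)  # (32,)
```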
According to the prediction information output by the neural network on the test set, all credit archive images are classified one by one according to the sample labels, and the trained neural network is saved as the convolutional neural network model. The evaluation result of the model is shown in FIG. 5, where train_loss denotes the loss function value of the model on the training set, val_loss the loss function value on the validation set, and val_acc the accuracy on the validation set; as FIG. 5 shows, the loss on the training set gradually stabilizes at about 1.1 as the number of iterations increases.
S3, the credit file is scanned, the convolutional neural network model is called, and the scanned picture is loaded into the model for identification and classification; if identification fails, the file is scanned and identified again.
The convolutional neural network used in this embodiment is an AlexNet network.

Claims (10)

1. A credit archive identification method based on a convolutional neural network is characterized by comprising the following steps:
s1, after geometric correction is carried out on the collected credit archive images, the images are expanded through an image enhancement technology to obtain a self-made data set;
s2, selecting training parameters, building a convolutional neural network architecture based on deep learning open source framework Tensorflow, dividing images in a self-made data set into a training set and a testing set, loading the training set to the convolutional neural network for training, and performing visual representation on a training result; fine-tuning the training parameters of the convolutional neural network according to the training result, loading the test set to the convolutional neural network for accuracy rate testing, and fine-tuning the training parameters of the convolutional neural network until the accuracy rate of the test set reaches an expected standard, thus obtaining a convolutional neural network model;
and S3, scanning the credit file, and loading the scanned picture into the convolutional neural network model for identification and classification.
2. The convolutional neural network-based credit archive identification method as claimed in claim 1, wherein the geometric correction in step S1 is implemented by calling the affine transformation method in opencv function library, specifically:
the affine matrix M is solved automatically from the correspondence between the four vertices of the image before and after the transformation:

pos2 = M · pos1,  M = [a11  a12; a21  a22]

wherein pos1 and pos2 denote the corresponding point positions before and after the image transformation, and a11, a12, a21, a22 denote the elements of the affine matrix;

and then the affine transformation of the image is realized with the function cv2.warpAffine(), whose coordinate transformation formula is:

[u2; v2] = [a11  a12; a21  a22] · [u1; v1] + [x; y]

wherein (u1, v1) and (u2, v2) denote the pixel coordinates before and after the transformation, and (x, y) denotes the translation component.
3. The convolutional neural network-based credit profile identification method as claimed in claim 1 or 2, wherein the image size in the homemade data set is adjusted to 32 × 32 pixels in step S1, and the img_to_array method in the numpy function library is called to convert the pixel values of the images into array form and store them in a 4D tensor of shape (128, 32, 32, 3).
4. The convolutional neural network-based credit profile identification method as claimed in claim 1 or 2, wherein the homemade data set includes 10 credit profile categories, namely organization code certificate, tax registration certificate, business license, project approval document, credit analysis report, loan application form, loan contract, financial statement, mortgage insurance certificate, and repayment schedule.
5. The convolutional neural network-based credit profile identification method as claimed in claim 3, wherein the homemade data set includes 10 credit profile categories, namely organization code certificate, tax registration certificate, business license, project approval document, credit analysis report, loan application form, loan contract, financial statement, mortgage insurance certificate, and repayment schedule.
6. The convolutional neural network-based credit archive identification method as claimed in any one of claims 1, 2 and 5, wherein after normalization processing is performed on the homemade data set in step S2, a training set and a test set are allocated according to a ratio of 8:2, and a python self-contained function library matplotlib module is called to perform visual representation on a training result.
7. The convolutional neural network-based credit archive identification method as claimed in claim 3, wherein in step S2, after the self-made data set is normalized, the training set and the test set are distributed according to a ratio of 8:2, and a python self-contained function library matplotlib module is called to visually represent the training result.
8. The convolutional neural network-based credit archive identification method as claimed in claim 4, wherein in step S2, after the self-made data set is normalized, the training set and the test set are distributed according to a ratio of 8:2, and a python self-contained function library matplotlib module is called to visually represent the training result.
9. The convolutional neural network based credit profile identification method as claimed in any one of claims 1, 2, 5, 7 and 8, wherein the architecture of the convolutional neural network in step S2 is composed of:
conv(32)+conv(32)+pool(64)+conv(64)+conv(64)+pool(128)+flat()+Den()+Dropout()+Den(10)
wherein conv represents a convolutional layer, pool a pooling layer, Den a fully connected layer, Dropout the Dropout function, and flat() a flattening layer.
10. The convolutional neural network-based credit profile identification method as claimed in claim 9, wherein the training of the convolutional neural network in step S2 is specifically: defining an assumed function for model prediction, assigning weights of the neural network, and performing classification prediction on the input image to obtain a predicted value y _ pred;
and a distance value between the predicted value y_pred and the real value y is obtained with a squared-error cost function, the squared-error loss function being:

J(θ0, θ1) = (1 / 2m) · Σ (i = 1 … m) ( hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾ )²

wherein hθ(x) = θ0 + θ1·x is the linear prediction function, θ0, θ1, θ2, …, θm are the model parameters, and m is the total number of samples;
all weight values corresponding to the loss function are solved analytically; the gradient of the loss function is computed by the chain rule, and the weights are updated step by step against the direction of the gradient with a gradient descent algorithm until the back-propagation algorithm finds the weight parameters that minimize the loss, according to:

θj := θj − α · ∂J(θ0, θ1) / ∂θj

wherein j is 0 or 1, and α is the convergence rate (learning rate);
Dropout is used to control overfitting, the network calculation formulas being:

rj(l) = Bernoulli(p)
ỹ(l) = r(l) ∗ y(l)
zi(l+1) = wi(l+1) · ỹ(l) + bi(l+1)
yi(l+1) = f(zi(l+1))

wherein the Bernoulli function generates the vector rj marking neurons selected for discarding with probability p, wi denotes a weight matrix, rj, ỹ(l), y, zi and bi each denote a one-dimensional vector, and f(zi(l+1)) denotes the ReLU activation function;
and according to the prediction information output by the neural network in the test set, classifying all credit archive images one by one according to the labels of the samples, and storing the trained neural network, namely a convolutional neural network model.
CN202011412379.9A 2020-12-04 2020-12-04 Credit archive identification method based on convolutional neural network Pending CN112507864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011412379.9A CN112507864A (en) 2020-12-04 2020-12-04 Credit archive identification method based on convolutional neural network


Publications (1)

Publication Number Publication Date
CN112507864A true CN112507864A (en) 2021-03-16

Family

ID=74970222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011412379.9A Pending CN112507864A (en) 2020-12-04 2020-12-04 Credit archive identification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112507864A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590497A (en) * 2017-09-20 2018-01-16 重庆邮电大学 Off-line Handwritten Chinese Recognition method based on depth convolutional neural networks
CN109409421A (en) * 2018-10-09 2019-03-01 杭州诚道科技股份有限公司 Motor vehicle, driver's archival image recognition methods based on convolutional neural networks
CN109492529A (en) * 2018-10-08 2019-03-19 中国矿业大学 A kind of Multi resolution feature extraction and the facial expression recognizing method of global characteristics fusion
US20200160177A1 (en) * 2018-11-16 2020-05-21 Royal Bank Of Canada System and method for a convolutional neural network for multi-label classification with partial annotations
CN111325152A (en) * 2020-02-19 2020-06-23 北京工业大学 Deep learning-based traffic sign identification method
CN111553423A (en) * 2020-04-29 2020-08-18 河北地质大学 Handwriting recognition method based on deep convolutional neural network image processing technology
US20200285916A1 (en) * 2019-03-06 2020-09-10 Adobe Inc. Tag-based font recognition by utilizing an implicit font classification attention neural network
CN111652332A (en) * 2020-06-09 2020-09-11 山东大学 Deep learning handwritten Chinese character recognition method and system based on two classifications

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FURUIT: "A detailed understanding of Dropout" (in Chinese), 《HTTPS://BLOG.CSDN.NET/FU6543210/ARTICLE/DETAILS/84450890》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115511124A (en) * 2022-09-27 2022-12-23 上海网商电子商务有限公司 Customer grading method based on after-sale maintenance records
CN115511124B (en) * 2022-09-27 2023-04-18 上海网商电子商务有限公司 Customer grading method based on after-sale maintenance records

Similar Documents

Publication Publication Date Title
Hosaka Bankruptcy prediction using imaged financial ratios and convolutional neural networks
CN111325203B (en) American license plate recognition method and system based on image correction
CN107067044B (en) Financial reimbursement complete ticket intelligent auditing system
CN110532920B (en) Face recognition method for small-quantity data set based on FaceNet method
CN107194400B (en) Financial reimbursement full ticket image recognition processing method
CN110619059B (en) Building marking method based on transfer learning
US9589185B2 (en) Symbol recognition using decision forests
CN111652273B (en) Deep learning-based RGB-D image classification method
CN111401156B (en) Image identification method based on Gabor convolution neural network
CN110084327B (en) Bill handwritten digit recognition method and system based on visual angle self-adaptive depth network
CN108364037A (en) Method, system and the equipment of Handwritten Chinese Character Recognition
CN111881958A (en) License plate classification recognition method, device, equipment and storage medium
Lin et al. Determination of the varieties of rice kernels based on machine vision and deep learning technology
CN114266757A (en) Diabetic retinopathy classification method based on multi-scale fusion attention mechanism
CN112507864A (en) Credit archive identification method based on convolutional neural network
CN114648667A (en) Bird image fine-granularity identification method based on lightweight bilinear CNN model
CN113591997B (en) Assembly feature graph connection relation classification method based on graph learning convolutional neural network
CN114549928A (en) Image enhancement processing method and device, computer equipment and storage medium
CN113657377A (en) Structured recognition method for airplane ticket printing data image
CN116563862A (en) Digital identification method based on convolutional neural network
CN112232102A (en) Building target identification method and system based on deep neural network and multitask learning
Chen et al. Design and Implementation of Second-generation ID Card Number Identification Model based on TensorFlow
CN114882287A (en) Image classification method based on semantic relation graph
CN113066094B (en) Geographic grid intelligent local desensitization method based on generation countermeasure network
CN111079715B (en) Occlusion robustness face alignment method based on double dictionary learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210316