CN112743993A - Method and device for safely outputting printing information, terminal equipment and medium - Google Patents


Info

Publication number
CN112743993A (application CN202010974703.XA; granted as CN112743993B)
Authority
CN (China)
Prior art keywords
training, characters, printed, tested, neural network
Legal status
Granted; Active
Other languages
Chinese (zh)
Inventors
赵维巍, 袁云欢
Original Assignee
Shenzhen Graduate School, Harbin Institute of Technology
Current Assignee
Shenzhen Hushen Intelligent Material Technology Co., Ltd.
Filing
Application filed by Shenzhen Graduate School, Harbin Institute of Technology; priority to CN202010974703.XA; granted and published as CN112743993B.

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B41 PRINTING; LINING MACHINES; TYPEWRITERS; STAMPS
    • B41J TYPEWRITERS; SELECTIVE PRINTING MECHANISMS, i.e. MECHANISMS PRINTING OTHERWISE THAN FROM A FORME; CORRECTION OF TYPOGRAPHICAL ERRORS
    • B41J3/00 Typewriters or selective printing or marking mechanisms characterised by the purpose for which they are constructed
    • B41J29/00 Details of, or accessories for, typewriters or selective printing mechanisms not otherwise provided for
    • B41J29/38 Drives, motors, controls or automatic cut-off devices for the entire printing mechanism
    • B41J29/393 Devices for controlling or analysing the entire machine; controlling or analysing mechanical parameters involving printing of test patterns


Abstract

The application relates to the technical field of information security and provides a method, a device, a terminal device and a medium for securely outputting printed information. The method is applied to a terminal device and comprises the following steps: acquiring a codebook to be tested printed with invisible ink, wherein the invisible ink has ultraviolet responsiveness; irradiating the codebook to be tested with ultraviolet light to obtain a sample to be tested displaying password characters; and inputting the sample to be tested into a convolutional neural network model to obtain the printed information corresponding to the codebook to be tested, wherein the convolutional neural network model is trained by a machine learning algorithm with training characters as an input training set and the training labels corresponding to the training characters as an output training set. This secure output method can improve the security of printed information.

Description

Method and device for safely outputting printing information, terminal equipment and medium
Technical Field
The application belongs to the technical field of information security, and particularly relates to a method for securely outputting printed information, a device for securely outputting printed information, a terminal device, and a computer-readable storage medium.
Background
The security protection of conventional paper-based information relies primarily on stimuli-responsive functional materials. Such a material responds to an external stimulus (e.g., light irradiation, chemical treatment, or heating) by displaying color or emitting light, thereby revealing the information on the paper. However, this kind of encryption depends only on the properties of the material; its complexity is low and its behavior is highly predictable. Once the responsiveness of the ink is exposed, the information is easily deciphered, presenting a significant risk to commercial and military applications. Although researchers have used fluorescent plate readers to enhance information security, the complexity and security of such encryption still need improvement. Raising the security level of paper-based information therefore remains a challenge. In recent years, artificial intelligence has been applied in many fields, including healthcare, autonomous driving, military systems and network security, and has gradually penetrated materials chemistry, for example in the synthesis of organic molecules and in the analysis of transmission electron microscopy images. Using artificial intelligence to encrypt and decrypt printed information is therefore of great importance for improving its security.
Disclosure of Invention
The application aims to provide a method, a device, a terminal device and a medium for securely outputting printed information, so as to solve the problem of insufficient security of printed information.
To achieve this aim, the application adopts the following technical solutions:
in a first aspect, the present application provides a method for securely outputting print information, which is applied to a terminal device, and includes:
acquiring a codebook to be tested printed with invisible ink, wherein the invisible ink has ultraviolet responsiveness;
irradiating the codebook to be tested with ultraviolet light to obtain a sample to be tested displaying password character information;
and inputting the sample to be tested into a convolutional neural network model to obtain the printed information corresponding to the codebook to be tested, wherein the convolutional neural network model is trained by a machine learning algorithm with training characters as an input training set and the training labels corresponding to the training characters as an output training set.
In this secure output method, based on the correspondence between characters and labels in the convolutional neural network model, the ultraviolet-responsive printed information is developed, input into the model, and the corresponding password labels are output, yielding the true printed information. The printed characters themselves are thus highly secure; on top of that, decrypting the printed information requires the correspondence between printed characters and password labels, which further increases the complexity of the cipher and improves the security of the printed information.
In some embodiments, the convolutional neural network model is trained by:
respectively determining training characters and training labels, and establishing a corresponding relation between the training characters and the training labels;
and based on the corresponding relation, taking the training characters as an input training set and the training labels as an output training set, and performing model training by adopting a convolutional neural network to construct a convolutional neural network model.
In some embodiments, the determining training characters and training labels respectively and establishing a correspondence between the training characters and the training labels includes:
irradiating a training codebook by adopting ultraviolet light to obtain a training sample with printed characters, wherein the training codebook is printed by adopting invisible ink;
carrying out image acquisition on the printed characters displayed in the training sample to obtain a first electronic image containing the printed characters;
extracting printed characters in the first electronic image as training characters;
and determining a training label, and establishing a corresponding relation between the training character and the training label.
In some embodiments, said extracting printed characters in said first electronic image as training characters comprises:
carrying out angle adjustment on the first electronic image to obtain a first angle correction electronic image;
obtaining a plurality of printed characters in the first angle correction electronic image by performing image segmentation on the first angle correction electronic image;
and carrying out pixel adjustment and angle adjustment on each printed character to obtain a plurality of training characters, wherein each printed character corresponds to a plurality of training characters.
In some embodiments, the performing pixel and angle adjustment on each printed character to obtain a plurality of training characters includes:
carrying out pixel adjustment on each printed character to obtain printed characters with various definitions;
and respectively rotating the printed characters with each definition by a plurality of angles to obtain a plurality of printed characters.
In some embodiments, the inputting the sample to be tested into a convolutional neural network model to obtain the printing information corresponding to the codebook to be tested includes:
inputting the sample to be tested into a convolutional neural network model, wherein the convolutional neural network model is used for identifying the password characters in the sample to be tested, and obtaining decrypted printing information according to the identified password characters and the corresponding relation between the training characters and the training labels;
and receiving the decrypted printing information output by the convolutional neural network model.
In some embodiments, the convolutional neural network model identifies the password characters in the sample to be tested by:
the convolutional neural network model acquires an image of the password character displayed in the sample to be detected, and a second electronic image containing the password character is obtained;
and after the convolutional neural network model carries out image segmentation and pixel adjustment on the second electronic image, the password character is identified.
In a second aspect, the present application provides a secure output device of printed information, comprising:
an acquisition module, configured to acquire a codebook to be tested printed with invisible ink, wherein the invisible ink has ultraviolet responsiveness;
an ultraviolet developing module, configured to irradiate the codebook to be tested with ultraviolet light to obtain a sample to be tested displaying password character information;
and a decryption module, configured to input the sample to be tested into a convolutional neural network model to obtain the printed information corresponding to the codebook to be tested, wherein the convolutional neural network model is trained by a machine learning algorithm with training characters as an input training set and training labels as an output training set.
In the secure output device provided by the application, the printed information in the codebook to be tested is developed by the ultraviolet developing module and then output through the decryption module built on the correspondence between characters and labels, realizing two-layer decryption and thereby improving the secure identification of printed information.
In a third aspect, the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
In a fifth aspect, the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to perform the method of the first aspect.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating steps of a method for securely outputting print information according to an embodiment of the present application;
FIG. 2 is a flow chart of a convolutional neural network model training provided in an embodiment of the present application;
FIG. 3 is a flowchart of steps for establishing a correspondence between training characters and training labels according to an embodiment of the present application;
FIG. 4 is a flowchart of the steps provided by one embodiment of the present application for extracting printed characters in a first electronic image as training characters;
FIG. 5 is a sample diagram of a codebook to be tested obtained after printing with invisible ink according to an embodiment of the present application;
FIG. 6A is an ultraviolet developed image of a printed character on a codebook to be tested according to an embodiment of the present application;
FIG. 6B is an illustration of an alternative UV developed view of printed characters on a codebook under test according to one embodiment of the present application;
FIG. 7 is an ultraviolet developed image of a printed character on a training codebook provided by an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a correspondence relationship between training characters and training labels according to an embodiment of the present application;
FIG. 9 is a block diagram of a secure output device for printed information according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present application more clearly apparent, the present application is further described in detail below with reference to the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be understood that, in various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, some or all of the steps may be executed in parallel or executed sequentially, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
With reference to fig. 1, a first aspect of the embodiments of the present application provides a method for securely outputting print information, which is applied to a terminal device, and the method includes:
s110, acquiring the codebook to be tested printed by the invisible ink, wherein the invisible ink has ultraviolet responsiveness.
In this step, the codebook to be tested is printed with invisible ink and contains password character information. Until the invisibility is lifted, the printed information on the codebook to be tested cannot be seen, which provides a first security barrier. In the embodiments of the present application, the invisible ink has ultraviolet responsiveness, so the printed information on the codebook to be tested is visible under ultraviolet light but not under visible light. Exemplarily, fig. 5 shows a codebook to be tested printed with invisible ink using "0", "1", "2", "3", "4", "5", "6", "7", "8" and "Λ", arranged in this order, as the printed characters.
In one possible embodiment, the invisible ink used to print the codebook to be tested is carbon dot ink, i.e., an aqueous solution of carbon dots. In some embodiments, the carbon dot ink is obtained by dissolving the carbon dots in deionized water followed by sonication. In some embodiments, the carbon dots in the carbon dot ink are prepared by microwave pyrolysis of L-cysteine and citric acid, specifically comprising the following steps:
dissolving citric acid and L-cysteine in deionized water, and performing ultrasonic stirring treatment to obtain a mixed solution;
carrying out pyrolysis reaction on the mixed solution under the microwave condition, adding water into a product after the reaction, and carrying out ultrasonic stirring treatment to obtain a reactant solution;
adding alkali into the reactant solution, adjusting the pH value of the solution to be neutral, dialyzing the reactant solution by adopting a dialysis film, and collecting carbon points.
For example, the invisible ink is 25mg/mL carbon dot ink, and the preparation method comprises the following steps:
mixing citric acid and L-cysteine at a mass ratio of 2:1, dissolving them in deionized water at a solute-to-solvent mass-to-volume ratio of 75 mg : 1 mL, and performing ultrasonic stirring to obtain a semitransparent mixed solution;
putting the mixed solution into a microwave oven and pyrolyzing it for 210 seconds at a microwave power of 490 W; adding water to the reacted product and performing ultrasonic stirring until the dark yellow precipitate is completely dissolved, obtaining a reactant solution;
adding sodium hydroxide solution to the reactant solution to adjust its pH to neutral, dialyzing for 24 hours with a dialysis membrane with a molecular weight cut-off (MWCO) of 500 to remove unreacted small molecules, drying the resulting solution in an oven at 40 °C, and collecting the carbon dots;
and dissolving the carbon dots in deionized water, and performing ultrasonic treatment to completely dissolve the carbon dots to obtain the carbon dot ink.
In the embodiments of the present application, the codebook to be tested can be obtained by printing the invisible ink onto filter paper, but the printing carrier is not limited to paper and may be made of other materials. The printing equipment is likewise not strictly limited: a conventional inkjet printer may be used, as may printers with other specific functions.
S120, irradiating the codebook to be tested with ultraviolet light to obtain a sample to be tested on which the password character information is displayed.
In this step, because the password character information on the codebook to be tested is ultraviolet-responsive, it is displayed after the codebook is irradiated with ultraviolet light. In one possible implementation, the sample to be tested displaying the password character information is obtained by capturing an electronic image of the password characters displayed by the codebook under ultraviolet irradiation.
Illustratively, when the printed characters on the codebook to be tested are "1234 Λ 125", the printed characters are displayed under the ultraviolet irradiation treatment, as shown in fig. 6A; when the printed character on the codebook to be tested is "0510355", the printed character is displayed under the ultraviolet irradiation process, as shown in fig. 6B.
S130, inputting the sample to be tested into a convolutional neural network model to obtain the printed information corresponding to the codebook to be tested, wherein the convolutional neural network model is trained by a machine learning algorithm with training characters as an input training set and the training labels corresponding to the training characters as an output training set.
In this step, the convolutional neural network model receives the sample to be tested, analyzes the password character information in it, and outputs the labels corresponding to that information, finally yielding the label information in the codebook to be tested.
In some embodiments, as shown in fig. 2, the convolutional neural network model is trained by:
and S21, respectively determining training characters and training labels, and establishing a corresponding relation between the training characters and the training labels.
In this step, the training characters are the characters printed on a training codebook, i.e., the content displayed under ultraviolet irradiation; they mask the true meaning of the printed information. Illustratively, the training characters may be numbers, symbols, graphics, and the like. The training labels are the labels corresponding to the training characters and visually display what each training character actually expresses. Illustratively, the training labels may be letters, words, radicals, Chinese characters, etc. In the embodiments of the present application, the correspondence between training characters and training labels is established by designating the training character corresponding to each training label.
In some embodiments, as shown in fig. 3, the determining a training character and a training label respectively, and establishing a correspondence between the training character and the training label includes:
and S31, irradiating the training codebook by adopting ultraviolet light to obtain a training sample with printed characters, wherein the training codebook is printed by adopting invisible ink.
In this step, the training codebook is a codebook printed with printed characters using invisible ink. After the training codebook is subjected to ultraviolet irradiation treatment, the printed characters on the training codebook are displayed, so that a training sample with the printed characters displayed is obtained.
In this embodiment, in the step of irradiating the training codebook, the ultraviolet light should be at the wavelength to which the invisible ink responds. Illustratively, when the training codebook is printed with invisible ink responsive to 365 nm ultraviolet light, irradiating it with 365 nm ultraviolet light displays the printed characters on the training codebook.
S32, carrying out image acquisition on the printed characters displayed in the training sample to obtain a first electronic image containing the printed characters.
Illustratively, as shown in fig. 7, when the printed character on the training codebook is "012345678 Λ", the printed character is displayed under the ultraviolet irradiation treatment. A first electronic image as shown in fig. 7 is obtained by image capturing the printed characters displayed in the training sample.
And S33, extracting the printing characters in the first electronic image as training characters.
In this step, the first electronic image contains a plurality of printed characters; one or more of them are obtained by processing the first electronic image and used as training characters. In some embodiments, the first electronic image is processed based on OpenCV to generate the training-set symbols.
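The text names OpenCV but does not detail the processing. As an illustrative sketch only (the function name, threshold, and column-projection approach are assumptions, not from the patent), a developed codebook image can be split into per-character crops by binarizing it and cutting at blank columns:

```python
import numpy as np

def segment_characters(img, thresh=128):
    """Split a row of printed characters into per-character crops.

    Binarize the UV-developed image (bright ink on a dark background),
    then cut wherever a column contains no ink.
    """
    binary = img > thresh
    col_has_ink = binary.any(axis=0)           # column-wise ink projection
    crops, start = [], None
    for x, has_ink in enumerate(col_has_ink):
        if has_ink and start is None:
            start = x                          # a character begins here
        elif not has_ink and start is not None:
            crops.append(img[:, start:x])      # a character just ended
            start = None
    if start is not None:                      # character touching the edge
        crops.append(img[:, start:])
    return crops

# Synthetic image: two bright "characters" separated by a blank gap.
img = np.zeros((20, 30), dtype=np.uint8)
img[5:15, 2:10] = 255
img[5:15, 18:26] = 255
crops = segment_characters(img)
print(len(crops))  # 2
```

In practice the same cut could be made with OpenCV contour detection; the projection method above needs only numpy.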
In some embodiments, as shown in FIG. 4, extracting printed characters in the first electronic image as training characters includes:
and S41, carrying out angle adjustment on the first electronic image to obtain a first angle correction electronic image.
In this step, the display angle of the first electronic image obtained by image acquisition is adjusted to an angle at which the printed characters are displayed more clearly. In some embodiments, the angle of the first electronic image is adjusted through a Hough transform to obtain a first angle-corrected electronic image with a clearer display of the printed characters.
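The Hough-transform step itself is not spelled out in the text (with OpenCV it would typically go through cv2.HoughLines). As a hedged stand-in that needs only numpy, the dominant skew angle of the ink pixels can be estimated from the principal axis of their coordinates:

```python
import numpy as np

def estimate_skew_deg(binary):
    """Estimate the dominant text angle (degrees) via PCA over ink pixels.

    A numpy stand-in for the Hough-transform step: the principal
    eigenvector of the ink-pixel coordinate covariance gives the
    direction along which the text runs.
    """
    ys, xs = np.nonzero(binary)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()]).astype(float)
    cov = pts @ pts.T
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]    # principal direction
    if vx < 0:                                 # fix the sign ambiguity
        vx, vy = -vx, -vy
    return float(np.degrees(np.arctan2(vy, vx)))

# A synthetic line of "ink" drawn at roughly 10 degrees.
img = np.zeros((100, 100), dtype=bool)
for x in range(10, 90):
    y = int(round(50 + np.tan(np.radians(10.0)) * (x - 50)))
    img[y, x] = True
angle = estimate_skew_deg(img)
print(round(angle, 1))
```

The recovered angle can then be used to rotate the image back to horizontal before segmentation.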
And S42, obtaining a plurality of printed characters in the first angle correction electronic image by carrying out image segmentation on the first angle correction electronic image.
In this step, an image segmentation algorithm is used to segment the first angle-corrected electronic image and extract the printed characters it contains. For example, taking fig. 7 as the first angle-corrected electronic image, image segmentation yields the printed characters "0", "1", "2", "3", "4", "5", "6", "7", "8" and "Λ".
S43, pixel adjustment and angle adjustment are carried out on each printed character to obtain a plurality of training characters, wherein each printed character corresponds to a plurality of training characters.
In this embodiment, adjusting the pixels of each printed character adjusts its definition, improving the recognizability of the printed characters during training of the machine learning algorithm; adjusting the angle of each printed character produces samples of the same character displayed at different angles. Using the angle-adjusted samples as training samples gives the resulting convolutional neural network model higher recognition performance on the password characters displayed in the sample to be tested, improving the accuracy of the output when the sample to be tested is input into the model for recognition.
In this embodiment, a plurality of training characters can be formed from the same printed character through pixel adjustment and angle adjustment. Specifically, adjusting the pixels of one printed character N times yields N training characters; further adjusting each pixel-adjusted character to M angles yields M training characters each. In total, after pixel adjustment and angle adjustment, each printed character yields N × M training characters.
In some embodiments, performing pixel and angle adjustments on each printed character to obtain a plurality of training characters comprises:
carrying out pixel adjustment on each printed character to obtain printed characters with various definitions;
and respectively rotating the printed characters with each definition by a plurality of angles to obtain a plurality of printed characters.
Illustratively, the printing characters of each definition are respectively rotated by n1 °, n2 ° and n3 ° to obtain a plurality of printing characters, wherein the value ranges of n1, n2 and n3 satisfy: 0 < n1 < 135 < n2 < 225 < n3 < 360.
In some embodiments, after the pixel adjustment and angle adjustment, brightness adjustment is performed on each printed character to obtain a plurality of characters with different brightness values. Illustratively, adjusting the pixels of one printed character N times yields N training characters; adjusting each pixel-adjusted character to M angles yields M training characters each; and applying L brightness adjustments to each angle-adjusted character yields L training characters with different brightness values each. In total, after pixel, angle and brightness adjustment, each printed character yields N × M × L training characters.
Illustratively, pixel and angle adjustment is performed on each printed character to obtain a plurality of training characters, including:
after adjusting the pixels of each printed character to 30 × 30, each printed character is rotated by 90°, 180° and 270°, and the brightness of each character (unrotated and rotated by 90°, 180° and 270°) is changed to 10 different values. Thus, 4 × 10 = 40 training characters are generated per printed character.
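The example above fixes concrete numbers: 30 × 30 pixels, rotations of 90°, 180° and 270°, and 10 brightness values per orientation, i.e. 4 × 10 = 40 training characters per printed character. A minimal sketch (assuming the character is already resampled to 30 × 30; the brightness scaling range is an assumption):

```python
import numpy as np

def augment(char_img, n_brightness=10):
    """Expand one printed character into rotated, brightness-shifted copies.

    Keeps the character unrotated and rotated by 90/180/270 degrees,
    and applies n_brightness brightness levels to each orientation,
    giving 4 * n_brightness training characters.
    """
    variants = []
    for k in range(4):                          # 0, 90, 180, 270 degrees
        rotated = np.rot90(char_img, k)
        for scale in np.linspace(0.5, 1.0, n_brightness):
            variants.append(np.clip(rotated * scale, 0, 255).astype(np.uint8))
    return variants

char = np.full((30, 30), 200, dtype=np.uint8)   # a dummy 30x30 character
out = augment(char)
print(len(out))  # 40
```

Each printed character thus contributes 40 entries to the input training set, all sharing the same training label.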
And S34, determining the training labels, and establishing the corresponding relation between the training characters and the training labels.
In this step, because of the established correspondence between training characters and training labels, the printed information on the codebook to be tested cannot be obtained even once the responsiveness of the invisible ink is known, providing a second security barrier for the printed information.
Exemplarily, as shown in fig. 8, the training characters "0", "1", "2", "3", "4", "5", "6", "7", "8" and "Λ" are placed in correspondence with the training labels "P", "O", "N", "L", "Y", "E", "D", "G", "A" and "space", respectively.
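This correspondence is just a lookup table. In the hypothetical snippet below, the dict mirrors the fig. 8 pairs; applying it to the printed characters "1234Λ125" of fig. 6A recovers the hidden message:

```python
# Character-to-label codebook taken from the fig. 8 correspondence.
LABELS = {"0": "P", "1": "O", "2": "N", "3": "L", "4": "Y",
          "5": "E", "6": "D", "7": "G", "8": "A", "Λ": " "}

def decode(cipher):
    """Map recognized password characters to their training labels."""
    return "".join(LABELS[c] for c in cipher)

print(decode("1234Λ125"))  # ONLY ONE
```

In the patented method this mapping is never applied directly: it is baked into the trained network, so an attacker who develops the ink still sees only "1234Λ125".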
And S22, based on the corresponding relation, taking the training characters as an input training set and the training labels as an output training set, and performing model training by adopting a convolutional neural network to construct a convolutional neural network model.
In the step, a plurality of training characters are constructed for each printing character to be used as an input training set, corresponding training labels corresponding to the printing characters are used as an output training set, a convolutional neural network is adopted for model training, and a convolutional neural network model which takes the characters as input and the labels corresponding to the characters as output is constructed.
A convolutional neural network is adopted for model training, and the training process can be completed on the terminal device within a few seconds. For example, after 30 training cycles, the network can learn the labels corresponding to all the symbols, and the training accuracy approaches 100%.
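A network of the scale implied here can indeed be trained in seconds. Below is a minimal PyTorch sketch of such a character classifier; the layer sizes, optimizer, and synthetic data are illustrative assumptions, since the patent does not specify an architecture.

```python
import torch
import torch.nn as nn

# A small CNN of the kind trainable on-device in seconds:
# 30x30 single-channel inputs, 10 output classes (one per
# training label in the example codebook of fig. 8).
class CharCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 30x30 -> 15x15
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 15x15 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = CharCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for the 40-variants-per-character training set.
x = torch.rand(40, 1, 30, 30)
y = torch.randint(0, 10, (40,))
for epoch in range(30):                      # "30 training cycles"
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(model(x).shape)   # torch.Size([40, 10])
```

At inference time, `model(x).argmax(dim=1)` yields the class index of each password character, which is then mapped to its training label.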
Based on the convolutional neural network model, the sample to be tested is input into the convolutional neural network model; after the model matches the password characters against the training characters, the labels corresponding to those training characters are output, and the printing information corresponding to the codebook to be tested is finally obtained.
In some embodiments, inputting the sample to be tested into the convolutional neural network model to obtain the printing information corresponding to the codebook to be tested, including:
inputting a sample to be tested into a convolutional neural network model, wherein the convolutional neural network model is used for identifying password characters in the sample to be tested, and obtaining decrypted printing information according to the identified password characters and the corresponding relation between training characters and training labels;
and receiving the decrypted printing information output by the convolutional neural network model.
In some embodiments, the convolutional neural network model identifies the password characters in the sample to be tested by the following steps, including:
and the convolutional neural network model acquires an image of the password character displayed in the sample to be detected, and a second electronic image containing the password character is obtained.
And after the convolutional neural network model carries out image segmentation and pixel adjustment on the second electronic image, the password characters are identified.
In this step, an image segmentation algorithm is adopted to segment the second electronic image, and the password characters contained in the second electronic image are collected. For example, taking fig. 6A as the second electronic image, after image segmentation, the printed characters "1", "2", "3", "4", "Λ", "2", "3" can be obtained. By segmenting the second electronic image shown in fig. 6B, the printed characters "0", "5", "1", "0", "3", "5" and "5" can be obtained.
And after the second electronic image is subjected to image segmentation, pixel adjustment is performed on the acquired password characters to improve their definition.
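The segmentation algorithm itself is not specified in the patent. A minimal column-projection sketch, offered only as an assumption standing in for whatever algorithm is actually used, could look like:

```python
import numpy as np

def segment_characters(page: np.ndarray, threshold: int = 128):
    """Very simple column-projection segmentation: binarise the
    UV-developed line of text, then split it into per-character
    crops wherever a fully blank column gap occurs."""
    mask = page > threshold               # bright (UV-responsive) pixels
    occupied = mask.any(axis=0)           # columns that contain ink
    chars, start = [], None
    for col, on in enumerate(occupied):
        if on and start is None:
            start = col                   # a character begins
        elif not on and start is not None:
            chars.append(page[:, start:col])   # a character ends
            start = None
    if start is not None:                 # character touching the edge
        chars.append(page[:, start:])
    return chars

# Two fake 6x4 glyphs separated by two blank columns.
page = np.zeros((6, 10), dtype=np.uint8)
page[:, 0:4] = 200
page[:, 6:10] = 200
print(len(segment_characters(page)))   # 2
```

Each crop would then be resized to the 30 × 30 input size used during training before being fed to the network.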
According to the method for safely outputting the printing information provided by the embodiment of the application, the ultraviolet-responsive printing information is first developed, then input into the convolutional neural network model, and the corresponding password labels are output based on the corresponding relation between the characters and the labels, so that the real printing information is obtained. The safety output method makes the printed characters themselves highly secure, and on that basis the decryption of the printing information depends on the corresponding relation between the printed characters and the password labels, which further increases the complexity of the password and improves the security performance of the printing information.
According to the safe output method of the printing information, in the testing process the network can identify all codebooks to be tested with high precision after only a few training cycles. Since the neural network is a black box, the trained information is hidden in millions of unstructured parameters, so it is almost impossible to recover the label corresponding to a given symbol from the network itself. Therefore, when the printing information is encrypted, complex symbols and/or shapes can be designed to form an unpredictable and highly complex codebook, giving the encryption greater complexity and unpredictability. The method greatly improves the security level of the information and is beneficial to economic or military applications. In addition, because the symbols are invisible under natural light, the corresponding information can be obtained only under ultraviolet irradiation, and the correct information can be obtained only by inputting it into a specific network, so the method provides higher security than relying on the properties of the material alone.
As shown in fig. 9, a second aspect of the embodiments of the present application provides a secure output apparatus for printed information, including:
the acquiring module 91 is used for acquiring a codebook to be detected, which is printed by adopting invisible ink, wherein the invisible ink has ultraviolet responsiveness;
the ultraviolet developing module 92 is used for irradiating the codebook to be tested by adopting ultraviolet light to obtain a sample to be tested, wherein the sample to be tested is displayed with character information;
and the decryption module 93 is configured to input the sample to be tested into a convolutional neural network model to obtain print information corresponding to the codebook to be tested, where the convolutional neural network model is obtained by training through a machine learning algorithm with training characters as an input training set and training labels as an output training set.
In the secure output apparatus for printed information provided by the embodiment of the application, the printing information in the codebook to be tested is developed by the ultraviolet developing module and then output by the decryption module formed based on the corresponding relation between characters and labels, realizing double-layer decryption and thereby improving the security identification performance of the printing information.
In the embodiment of the present application, the convolutional neural network model may be obtained by calling the following modules:
the device comprises an establishing module, a judging module and a judging module, wherein the establishing module is used for respectively determining training characters and training labels and establishing the corresponding relation between the training characters and the training labels;
and the building module is used for carrying out model training by adopting a convolutional neural network based on the corresponding relation by taking the training characters as an input training set and the training labels as an output training set to build a convolutional neural network model.
In some embodiments, the setup module may include the following sub-modules:
the training sample obtaining submodule is used for irradiating the training codebook by adopting ultraviolet light to obtain the training sample with the printed characters, and the training codebook is printed by adopting invisible ink;
the image acquisition submodule is used for carrying out image acquisition on the printed characters displayed in the training sample to obtain a first electronic image containing the printed characters;
the extraction submodule is used for extracting the printing characters in the first electronic image as training characters;
and the determining submodule is used for determining the training labels and establishing the corresponding relation between the training characters and the training labels.
In some embodiments, the extraction sub-module may include the following elements:
the angle adjusting unit is used for carrying out angle adjustment on the first electronic image to obtain a first angle correction electronic image;
an image dividing unit configured to obtain a plurality of printed characters in the first angle-corrected electronic image by performing image division on the first angle-corrected electronic image;
and the pixel adjustment and angle adjustment unit is used for carrying out pixel adjustment and angle adjustment on each printed character to obtain a plurality of training characters, wherein each printed character corresponds to a plurality of training characters.
In some embodiments, the pixel adjustment and angle adjustment unit may include the following sub-units:
the pixel adjustment subunit is used for carrying out pixel adjustment on each print character to obtain print characters with various definitions;
and the angle rotation subunit is used for respectively rotating the printing characters with each definition by a plurality of angles to obtain a plurality of printing characters.
In some embodiments, the decryption module 93 may include the following sub-modules:
the input and recognition submodule is used for inputting a sample to be detected into the convolutional neural network model, the convolutional neural network model is used for recognizing the password characters in the sample to be detected, and decrypted printing information is obtained according to the recognized password characters and the corresponding relation between the training characters and the training labels;
and the receiving submodule is used for receiving the decrypted printing information output by the convolutional neural network model.
In some embodiments, the input and recognition sub-module may include the following elements:
the image acquisition unit is used for acquiring an image of the password character displayed in the sample to be detected by the convolutional neural network model to obtain a second electronic image containing the password character;
and the image segmentation and pixel adjustment unit is used for carrying out image segmentation and pixel adjustment on the second electronic image by the convolutional neural network model and then identifying the password characters.
For the apparatus embodiment, since it is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to the description of the method embodiment section for relevant points.
Referring to fig. 10, a schematic diagram of a terminal device according to an embodiment of the present application is shown. As shown in fig. 10, the terminal device 100 provided in the present embodiment includes: a processor 110, a memory 120, and a computer program 121 stored in the memory 120 and operable on the processor 110. The processor 110, when executing the computer program 121, implements the steps in the various embodiments of the above-described method for securely outputting print information, such as steps S110 to S130 shown in fig. 1.
Illustratively, the computer program 121 may be divided into one or more modules/units, which are stored in the memory 120 and executed by the processor 110 to accomplish the present application. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which may be used to describe the execution of the computer program 121 in the terminal device. For example, the computer program 121 may be divided into an acquisition module, an ultraviolet development module, and a decryption module, and each module has the following specific functions:
the acquisition module is used for acquiring the codebook to be tested printed by adopting invisible ink, and the invisible ink has ultraviolet responsiveness;
the ultraviolet developing module is used for irradiating the cipher book to be tested by adopting ultraviolet light to obtain a sample to be tested with character information;
and the decryption module is used for inputting the sample to be tested into the convolutional neural network model to obtain the printing information corresponding to the codebook to be tested, wherein the convolutional neural network model is obtained by taking the training characters as an input training set and the training labels as an output training set and training through a machine learning algorithm.
The terminal device 100 may include, but is not limited to, a processor 110 and a memory 120. Those skilled in the art will appreciate that fig. 10 is merely an example of the terminal device 100 and is not intended to limit it; the terminal device 100 may include more or fewer components than those shown, some components may be combined, or different components may be used; for example, the terminal device 100 may also include input-output devices, network access devices, buses, etc.
The processor 110 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 120 may be an internal storage unit of the terminal device 100, such as a hard disk or a memory of the terminal device 100. The memory 120 may also be an external storage device of the terminal device 100, such as a plug-in hard disk provided on the terminal device 100, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, and so on. Further, the memory 120 may also include both an internal storage unit and an external storage device of the terminal device 100. The memory 120 is used to store the computer program 121 and other programs and data required by the terminal device 100, and may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a computer readable storage medium, which stores a computer program, and the computer program is executed by a processor to realize the safe output method of the printing information of the foregoing embodiments.
The embodiment of the present application further provides a computer program product, which, when running on a terminal device, enables the terminal device to execute the method for securely outputting the print information according to the foregoing embodiments.
The present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed.

Claims (10)

1. A method for safely outputting printed information is applied to a terminal device, and comprises the following steps:
acquiring a codebook to be tested printed by adopting invisible ink, wherein the invisible ink has ultraviolet responsiveness;
irradiating the to-be-detected codebook by adopting ultraviolet light to obtain a to-be-detected sample with password characters;
and inputting the sample to be tested into a convolutional neural network model to obtain printing information corresponding to the codebook to be tested, wherein the convolutional neural network model is obtained by training through a machine learning algorithm by taking training characters as an input training set and training labels corresponding to the training characters as an output training set.
2. The method for secure output of printed information according to claim 1, wherein the convolutional neural network model is trained by the steps of:
respectively determining training characters and training labels, and establishing a corresponding relation between the training characters and the training labels;
and based on the corresponding relation, taking the training characters as an input training set and the training labels as an output training set, and performing model training by adopting a convolutional neural network to construct a convolutional neural network model.
3. The method for safely outputting the printed information according to claim 2, wherein the determining the training characters and the training labels respectively, and the establishing the corresponding relationship between the training characters and the training labels comprises:
irradiating a training codebook by adopting ultraviolet light to obtain a training sample with printed characters, wherein the training codebook is printed by adopting invisible ink;
carrying out image acquisition on the printed characters displayed in the training sample to obtain a first electronic image containing the printed characters;
extracting printed characters in the first electronic image as training characters;
and determining a training label, and establishing a corresponding relation between the training character and the training label.
4. A method for secure output of printed information according to claim 3, wherein said extracting printed characters in said first electronic image as training characters comprises:
carrying out angle adjustment on the first electronic image to obtain a first angle correction electronic image;
obtaining a plurality of printed characters in the first angle correction electronic image by performing image segmentation on the first angle correction electronic image;
and carrying out pixel adjustment and angle adjustment on each printed character to obtain a plurality of training characters, wherein each printed character corresponds to a plurality of training characters.
5. The method for securely outputting printed information according to claim 4, wherein the adjusting the pixel and angle of each printed character to obtain a plurality of training characters comprises:
carrying out pixel adjustment on each printed character to obtain printed characters with various definitions;
and respectively rotating the printed characters with each definition by a plurality of angles to obtain a plurality of printed characters.
6. The method for safely outputting the printing information according to any one of claims 1 to 5, wherein the inputting the sample to be tested into a convolutional neural network model to obtain the printing information corresponding to the codebook to be tested comprises:
inputting the sample to be tested into a convolutional neural network model, wherein the convolutional neural network model is used for identifying the password characters in the sample to be tested, and obtaining decrypted printing information according to the identified password characters and the corresponding relation between the training characters and the training labels;
and receiving the decrypted printing information output by the convolutional neural network model.
7. The method for securely outputting printed information according to claim 6, wherein the convolutional neural network model identifies password characters in the sample to be tested by:
the convolutional neural network model acquires an image of the password character displayed in the sample to be detected, and a second electronic image containing the password character is obtained;
and after the convolutional neural network model carries out image segmentation and pixel adjustment on the second electronic image, the password character is identified.
8. A secure output device for printed information, comprising:
the code book acquiring device comprises an acquiring module, a printing module and a control module, wherein the acquiring module is used for acquiring a code book to be detected printed by adopting invisible ink, and the invisible ink has ultraviolet responsiveness;
the ultraviolet developing module is used for irradiating the codebook to be tested by adopting ultraviolet light to obtain a sample to be tested with character information;
and the decryption module is used for inputting the sample to be tested into a convolutional neural network model to obtain printing information corresponding to the codebook to be tested, wherein the convolutional neural network model is obtained by taking training characters as an input training set and training labels as an output training set through machine learning algorithm training.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010974703.XA 2020-09-16 2020-09-16 Method and device for safely outputting printing information, terminal equipment and medium Active CN112743993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010974703.XA CN112743993B (en) 2020-09-16 2020-09-16 Method and device for safely outputting printing information, terminal equipment and medium


Publications (2)

Publication Number Publication Date
CN112743993A true CN112743993A (en) 2021-05-04
CN112743993B CN112743993B (en) 2021-10-01

Family

ID=75645425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010974703.XA Active CN112743993B (en) 2020-09-16 2020-09-16 Method and device for safely outputting printing information, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN112743993B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114474722A (en) * 2022-01-21 2022-05-13 芯体素(杭州)科技发展有限公司 Transparent flexible film surface fine line processing method and device based on 3D printing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040046995A1 (en) * 1999-05-25 2004-03-11 Silverbrook Research Pty Ltd Interactive publication printer and binder
JP2005085246A (en) * 2003-09-04 2005-03-31 Masatoshi Ouchi Method and device for link information print/read and link destination information record/reference
CN102381065A (en) * 2010-08-31 2012-03-21 顾泽苍 Method for realizing variable information printing by invisible two dimensional barcode word stock
CN104239925A (en) * 2013-06-21 2014-12-24 广州市人民印刷厂股份有限公司 Printing and recognition method for invisible and encrypted two-dimension code
CN110399912A (en) * 2019-07-12 2019-11-01 广东浪潮大数据研究有限公司 A kind of method of character recognition, system, equipment and computer readable storage medium
CN111488883A (en) * 2020-04-14 2020-08-04 上海眼控科技股份有限公司 Vehicle frame number identification method and device, computer equipment and storage medium
CN112667979A (en) * 2020-12-30 2021-04-16 网神信息技术(北京)股份有限公司 Password generation method and device, password identification method and device, and electronic device




Similar Documents

Publication Publication Date Title
US10217179B2 (en) System and method for classification and authentication of identification documents using a machine learning based convolutional neural network
Kohli et al. Detecting medley of iris spoofing attacks using DESIST
CN112743993B (en) Method and device for safely outputting printing information, terminal equipment and medium
CA2954089A1 (en) Systems and methods for authentication of physical features on identification documents
CN106030615A (en) Composite information bearing device
CN111931783A (en) Training sample generation method, machine-readable code identification method and device
TWI770947B (en) Verification method and verification apparatus based on attacking image style transfer
CN108521387A (en) A kind of signal modulation pattern recognition methods and device
Lv et al. Chinese character CAPTCHA recognition based on convolution neural network
CN116311214A (en) License plate recognition method and device
CN115422518A (en) Text verification code identification method based on data-free knowledge distillation
Biswas et al. DeepFake detection using 3D-Xception net with discrete Fourier transformation
CN109859372A (en) Watermark recognition methods, device, cloud server and the system of anti-forge paper
CN106934756B (en) Method and system for embedding information in single-color or special-color image
US11955239B2 (en) Systems, methods, and devices for non-human readable diagnostic tests
Struppek et al. Leveraging diffusion-based image variations for robust training on poisoned data
Arenas et al. An analysis of cholesteric spherical reflector identifiers for object authenticity verification
Khuspe et al. Robust image forgery localization and recognition in copy-move using bag of features and SVM
CN115439850A (en) Image-text character recognition method, device, equipment and storage medium based on examination sheet
Bruce et al. Visual representation determines search difficulty: explaining visual search asymmetries
Thai et al. Basic information processing effects from perceptual learning in complex, real-world domains
CN112597810A (en) Identity document authentication method and system
CN113077048B (en) Seal matching method, system, equipment and storage medium based on neural network
CN112132133B (en) Identification image data enhancement method and true-false intelligent identification method
CN115130531B (en) Network structure tracing method of image generation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231011

Address after: No.2 building, Chongwen Park, Nanshan wisdom Park, no.3370 Liuxian Avenue, Fuguang community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Hushen Intelligent Material Technology Co.,Ltd.

Address before: 518000 Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: HARBIN INSTITUTE OF TECHNOLOGY (SHENZHEN)