CN117611953A - Graphic code generation method, graphic code generation device, computer equipment and storage medium - Google Patents


Info

Publication number
CN117611953A
CN117611953A (application CN202410070234.7A)
Authority
CN
China
Prior art keywords
graphic code
style
feature
fusion
graphic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410070234.7A
Other languages
Chinese (zh)
Inventor
周昆
吴海浪
蒋念娟
吕江波
沈小勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Shenzhen Smartmore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Smartmore Technology Co Ltd filed Critical Shenzhen Smartmore Technology Co Ltd
Priority to CN202410070234.7A priority Critical patent/CN117611953A/en
Publication of CN117611953A publication Critical patent/CN117611953A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a graphic code generation method, a graphic code generation device, computer equipment, and a storage medium. The method comprises the following steps: acquiring a target graphic code generation model, obtained by training an initial graphic code generation model on a graphic code set to be trained, together with a binarized graphic code to be processed and a graphic code to be processed corresponding to a target graphic code style; the graphic code set to be trained comprises multiple graphic code groups of different styles, where each group contains graphic codes to be trained of the same style and the binarized graphic codes to be trained corresponding to them; inputting the graphic code to be processed and the binarized graphic code to be processed into the target graphic code generation model, and extracting the graphic code style features of the graphic code to be processed and the graphic code geometric features of the binarized graphic code to be processed; and generating, through the target graphic code generation model, a target-style graphic code based on a fused graphic code feature that combines the style features and the geometric features. By adopting the method and the device, the accuracy of generating graphic codes of a specific style can be improved.

Description

Graphic code generation method, graphic code generation device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for generating a graphic code, a computer device, and a storage medium.
Background
Graphic codes are ubiquitous in daily life, and many fields require decoding and analyzing them; for example, an industrial code scanner decodes the various graphic codes on a target object. However, existing graphic code decoding models require style-specific training samples for different scenes, and such samples are difficult to guarantee; at the same time, existing graphic code generation techniques offer little control over the style of the generated code. As a result, the accuracy of generating graphic codes of a specific style for training a graphic code decoding model is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a graphic code generation method, apparatus, computer device, and storage medium that can improve the accuracy of generating graphic codes of a specific style for training a graphic code decoding model.
In a first aspect, the present application provides a method for generating a graphic code, including:
obtaining a trained target graphic code generation model, and obtaining a binary graphic code to be processed and a graphic code to be processed corresponding to the style of the target graphic code; training of the target graphic code generation model includes: training the initial graphic code generation model based on a graphic code set to be trained to obtain a target graphic code generation model, wherein the graphic code set to be trained comprises a plurality of groups of graphic code groups corresponding to different graphic code styles, and the graphic code groups comprise graphic codes to be trained of the same graphic code style and binary graphic codes to be trained corresponding to the graphic codes to be trained;
Inputting the graphic code to be processed and the binary graphic code to be processed into a target graphic code generation model, and respectively extracting graphic code style characteristics corresponding to the graphic code to be processed and graphic code geometric characteristics corresponding to the binary graphic code to be processed based on the target graphic code generation model;
and fusing the graphic code style characteristics and the graphic code geometric characteristics based on the target graphic code generation model to obtain fused graphic code characteristics, and generating the target style graphic code based on the fused graphic code characteristics.
In a second aspect, the present application provides a graphic code generating apparatus, including:
the acquisition module is used for acquiring a trained target graphic code generation model and acquiring a binary graphic code to be processed and a graphic code to be processed corresponding to the style of the target graphic code; training of the target graphic code generation model includes: training the initial graphic code generation model based on a graphic code set to be trained to obtain a target graphic code generation model, wherein the graphic code set to be trained comprises a plurality of groups of graphic code groups corresponding to different graphic code styles, and the graphic code groups comprise graphic codes to be trained of the same graphic code style and binary graphic codes to be trained corresponding to the graphic codes to be trained;
the extraction module is used for inputting the graphic code to be processed and the binary graphic code to be processed into the target graphic code generation model, and respectively extracting graphic code style characteristics corresponding to the graphic code to be processed and graphic code geometric characteristics corresponding to the binary graphic code to be processed based on the target graphic code generation model;
And the fusion module is used for fusing the graphic code style characteristics and the graphic code geometric characteristics based on the target graphic code generation model to obtain fusion graphic code characteristics and generating the target style graphic code based on the fusion graphic code characteristics.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method described above.
The graphic code generation method, apparatus, computer device, computer-readable storage medium, and computer program product train an initial graphic code generation model to obtain a trained target graphic code generation model capable of generating graphic codes of any specified style. A binarized graphic code to be processed, of any style, and a graphic code to be processed corresponding to the target graphic code style are input into the target graphic code generation model for processing. The model extracts the graphic code style features of the graphic code to be processed and the graphic code geometric features of the binarized graphic code to be processed, fuses the two into a fused graphic code feature that carries both kinds of information with high precision, and generates a target-style graphic code of the same style as the target graphic code based on the fused feature. Because the precision of the feature data is preserved, the precision of the finally generated target-style graphic code is improved. Moreover, to generate many graphic codes of the target style, only one graphic code of that style needs to be acquired, together with multiple easily obtained binarized graphics of any style; the resulting set of target-style graphic codes can then serve as training data for a graphic code decoding model.
By improving the accuracy of generating graphic codes of a specified style, more accurate style-specific graphic code samples are obtained, which ensures the accuracy of the style-specific graphic codes and, in turn, improves the accuracy with which the graphic code decoding model decodes graphic codes of that style.
Drawings
FIG. 1 is an application environment diagram of a method for generating a graphic code according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for generating a graphic code according to an embodiment of the present application;
FIG. 3 is a schematic diagram of graphic code generation according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a feature fusion layer according to an embodiment of the present application;
FIG. 5 is a block diagram of a graphic code generating device according to an embodiment of the present application;
FIG. 6 is an internal block diagram of a computer device according to an embodiment of the present application;
FIG. 7 is an internal block diagram of another computer device according to an embodiment of the present application;
fig. 8 is an internal structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The graphic code generation method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a communication network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
As shown in fig. 2, an embodiment of the present application provides a method for generating a graphic code, which is described by taking application of the method to the terminal 102 or the server 104 in fig. 1 as an example. It is understood that the computer device may include at least one of a terminal and a server. The method comprises the following steps:
s200, acquiring a trained target graphic code generation model, and acquiring a binary graphic code to be processed and a graphic code to be processed corresponding to a target graphic code style; training of the target graphic code generation model includes: training the initial graphic code generation model based on a graphic code set to be trained to obtain a target graphic code generation model, wherein the graphic code set to be trained comprises a plurality of groups of graphic code groups corresponding to different graphic code styles, and the graphic code groups comprise graphic codes to be trained of the same graphic code style and binary graphic codes to be trained corresponding to the graphic codes to be trained.
The target graphic code generation model is a model for generating graphic codes; it may be, but is not limited to, a combination of a ResNet50, a VGG-16, and a network comprising several cross-modal attention layers. The graphic code may be a two-dimensional code. The binarized graphic code to be processed is the binary graphic obtained by binarizing a graphic code; it need not come from a graphic code of the same style as the target graphic code, and may be obtained by binarizing a graphic code of any style. The target graphic code style refers to the local texture structure of the graphic code. Different graphic codes have different local texture structures, such as contiguous square blocks, solid circles, or line frames; two graphic codes whose local texture structures are similar belong to the same style. For example, the information matrices of two graphic codes can be obtained and subtracted to yield a closeness value; if the closeness value reaches a preset closeness threshold, the two graphic codes belong to the same style. An information matrix contains the pixel information and texture information of each image block in the graphic code, and each image block carries a binary label (such as 1 or 0). When determining whether two graphic codes share a style, similarity can be computed between image blocks with the same binary label: blocks labeled 1 are compared with blocks labeled 1, and blocks labeled 0 with blocks labeled 0. An image block may correspond to one module of the two-dimensional code, for example a square block. The target graphic code style may be the specific style required for training a graphic code decoding model. The graphic code set to be trained is the training set for the graphic code generation model; it contains multiple graphic code groups of different styles, and each group contains graphic codes to be trained of the same style together with the binarized graphic codes to be trained obtained by binarizing them, so that the graphic codes to be trained in the training set correspond one-to-one with the binarized graphic codes to be trained. A graphic code group is a data group that distinguishes graphic code styles: the same group contains graphic codes to be trained of one style and the binarized graphic codes to be trained obtained from them.
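As a minimal sketch, the information-matrix comparison described above could look like the following. This is one plausible reading of the translated text, not the patent's exact method: the function name, the element-wise matching rule, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def same_style(matrix_a, matrix_b, closeness_threshold=0.8):
    """Subtract two information matrices and decide from the result whether
    the two graphic codes share a style (illustrative assumption)."""
    matrix_a, matrix_b = np.asarray(matrix_a), np.asarray(matrix_b)
    if matrix_a.shape != matrix_b.shape:
        raise ValueError("information matrices must have the same shape")
    # Fraction of positions where the two matrices (nearly) agree serves as
    # the closeness value between the two graphic codes.
    closeness = float(np.mean(np.abs(matrix_a - matrix_b) < 1e-6))
    return closeness >= closeness_threshold
```

In practice, comparison would be restricted to blocks carrying the same binary label, as the text describes; the sketch compares all positions for brevity.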
Specifically, before the trained target graphic code generation model is obtained, the initial graphic code generation model is trained on the graphic code set to be trained. The set contains graphic code groups of various styles; each group contains graphic codes to be trained of the same style and the binarized graphic codes to be trained obtained by binarizing them, in one-to-one correspondence. Training the initial model on this set yields a target graphic code generation model capable of generating graphic codes of a specified style. In addition, to generate a graphic code of the target style required in practice, a binarized graphic code to be processed of any style and a graphic code to be processed corresponding to the target style can be acquired, providing the data basis for subsequently generating the target-style graphic code.
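The binarization step that pairs each graphic code with its binarized counterpart can be sketched as a simple threshold over a grayscale image. The fixed threshold of 128 is an illustrative assumption; the patent does not specify the binarization procedure.

```python
import numpy as np

def binarize_graphic_code(code_image, threshold=128):
    """Turn a grayscale graphic-code image into a 0/1 binarized graphic
    (0 = dark module, 1 = light module); the threshold is an assumption."""
    return (np.asarray(code_image) >= threshold).astype(np.uint8)
```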
S202, inputting the graphic code to be processed and the binary graphic code to be processed into a target graphic code generation model, and respectively extracting graphic code style characteristics corresponding to the graphic code to be processed and graphic code geometric characteristics corresponding to the binary graphic code to be processed based on the target graphic code generation model.
The graphic code style features characterize the style of the graphic code to be processed. The graphic code geometric features are the binarization features of the binarized graphic code to be processed, i.e. features of the geometric information represented by the position and pixel value of each code point in the binarized graphic code.
Specifically, in order to fuse the feature information related to the graphic code to be processed and the binary graphic code to generate a graphic code with different graphic code content and the same graphic code style according to the fused feature information, the graphic code to be processed and the binary graphic code to be processed can be input into a target graphic code generation model, feature extraction is performed on the graphic code to be processed based on a graphic code style encoder in the target graphic code generation model to obtain graphic code style features, and feature extraction is performed on the binary graphic code to be processed based on a graphic code geometry encoder in the target graphic code generation model to obtain graphic code geometric features, so that high-precision feature information is obtained.
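The two-branch extraction in S202 can be sketched as follows. The patent names ResNet50 for the style encoder and VGG-16-class networks for the geometry encoder; here each branch is a single random linear projection, purely to show the dataflow, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_PIXELS, FEAT_DIM = 16 * 16, 32            # illustrative sizes
W_style = rng.standard_normal((IMG_PIXELS, FEAT_DIM))
W_geometry = rng.standard_normal((IMG_PIXELS, FEAT_DIM))

def extract_features(code_to_process, binarized_code):
    # Style branch: features of the graphic code to be processed.
    style_features = np.asarray(code_to_process).reshape(-1) @ W_style
    # Geometry branch: features of the binarized graphic code to be processed.
    geometry_features = np.asarray(binarized_code).reshape(-1) @ W_geometry
    return style_features, geometry_features
```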
S204, fusing the graphic code style characteristics and the graphic code geometric characteristics based on the target graphic code generation model to obtain fused graphic code characteristics, and generating the target style graphic code based on the fused graphic code characteristics.
The fused graphic code feature combines the graphic code style features and the graphic code geometric features. The target-style graphic code is a graphic code whose style matches the target graphic code style: its graphic content changes, but its style is consistent with that of the target graphic code, so it can serve as style-specific training data for a graphic code decoding model, giving the model stronger decoding capability and better decoding accuracy on graphic codes of that style.
Specifically, the graphic code style characteristics and the graphic code geometric characteristics can be fused based on the characteristic fusion layer in the target graphic code generation model, so that the fused graphic code characteristics fused with the graphic code style characteristics and the graphic code geometric characteristics are obtained, and further, the target style graphic code with the graphic code style consistent with the target graphic code style is generated based on the fused graphic code characteristics through the target graphic code generation model.
The graphic code generation method trains an initial graphic code generation model on a graphic code set to be trained comprising multiple graphic code groups of different styles, each group containing graphic codes to be trained of the same style and the binarized graphic codes to be trained corresponding to them, thereby obtaining a trained target graphic code generation model that can generate graphic codes of any specified style. A binarized graphic code of any style and a graphic code to be processed corresponding to the target style are input into the model for processing. The model extracts the graphic code style features of the graphic code to be processed and the graphic code geometric features of the binarized graphic code to be processed, fuses them into a fused graphic code feature carrying both kinds of high-precision information, and generates a target-style graphic code of the same style as the target graphic code based on that feature. Preserving the precision of the feature data helps improve the precision of the finally generated target-style graphic code. If more graphic codes of the target style are needed, only one graphic code of that style must be acquired, along with multiple easily obtained binarized graphics of any style; the target graphic code generation model then produces multiple target-style graphic codes, whose set can be used as training data for a graphic code decoding model. By improving the precision of generating graphic codes of a specified style,
the method and the device obtain more accurate style-specific graphic code samples, which ensures the accuracy of the style-specific graphic codes and, at the same time, improves the accuracy with which the graphic code decoding model decodes graphic codes of that style.
In some embodiments, the target graphic code generation model includes a graphic code style encoder and a graphic code geometry encoder. Step S202 includes:
and S300, based on the graphic code style encoder, extracting features of the graphic code to be processed to obtain graphic code style features corresponding to the graphic code to be processed.
S302, based on a graphic code geometric encoder, extracting features of the binary graphic code to be processed to obtain graphic code geometric features corresponding to the binary graphic code to be processed.
The graphic code style encoder is a model for extracting the style features of graphic codes, which may be, but is not limited to, ResNet50. The graphic code geometry encoder is a model for extracting the geometric features of the binarized graphic, and may be a convolutional model such as VGG-16, VGG-19, or MobileNet.
Specifically, the target graphic code generation model comprises encoders for extracting different features, and feature extraction can be performed on the graphic code to be processed based on the graphic code style encoder in the target graphic code generation model, so that graphic code style features capable of representing the graphic code style of the graphic code to be processed are obtained; the feature extraction can be carried out on the binary graphic code to be processed based on the graphic code geometric encoder in the target graphic code generation model, so that graphic code geometric features which can represent geometric feature information related to each code point in the binary graphic code to be processed are extracted, and a data basis is provided for subsequent feature fusion and generation of the graphic code of a specific style.
In the above embodiment, by extracting the style characteristics of the graphic code corresponding to the graphic code to be processed based on the style encoder of the graphic code and extracting the geometric characteristics of the graphic code corresponding to the binary graphic code to be processed based on the geometric encoder of the graphic code, high-precision feature information is provided for subsequent feature fusion and generation of the graphic code of the specified style, and the precision of finally generating the graphic code of the specified style is improved to a certain extent.
In some embodiments, the target graphic code generation model includes a preset number of feature fusion layers, the feature fusion layers including a first fusion layer and a second fusion layer; step S204 includes:
s400, taking the style characteristics of the graphic codes as current style fusion characteristics and taking the geometric characteristics of the graphic codes as current geometric fusion characteristics.
S402, determining a current feature fusion layer in sequence, inputting current style fusion features and current geometric fusion features into a first fusion layer in the current feature fusion layer for processing, outputting first style fusion features and first geometric fusion features, inputting the first style fusion features and the first geometric fusion features into a second fusion layer in the current feature fusion layer for processing, and outputting second style fusion features and second geometric fusion features.
S404, taking the second style fusion feature as the current style fusion feature, taking the second geometric fusion feature as the current geometric fusion feature, and taking the feature fusion layer following the current one as the new current feature fusion layer.
S406, returning to the step of inputting the current style fusion feature and the current geometric fusion feature into the first fusion layer in the current feature fusion layer for processing until the second style fusion feature and the second geometric fusion feature corresponding to the last current feature fusion layer are obtained, and taking the fusion feature of the second style fusion feature and the second geometric fusion feature corresponding to the last current feature fusion layer as the fusion graphic code feature.
The current style fusion feature refers to the fusion feature under the current operation; before it is first input into a feature fusion layer, it consists only of the graphic code style features. Likewise, the current geometric fusion feature refers to the fusion feature under the current operation and, before the first feature fusion layer, consists only of the graphic code geometric features; that is, each feature fusion layer correspondingly has two outputs. The first fusion layer and the second fusion layer are neural network layers that fuse the graphic code style features and the graphic code geometric features under the current operation; their fusion processing differs, and each of them also has two outputs. The first style fusion feature is the fusion feature output after the first fusion layer fuses the graphic code style features with the graphic code geometric features; it contains both style and geometric information, and the more first fusion layers it passes through, the more graphic code geometric features are fused into it. The first geometric fusion feature is the fusion feature output after the first fusion layer fuses the graphic code geometric features with the graphic code style features; it likewise contains both kinds of information, and the more first fusion layers it passes through, the more graphic code style features are fused into it. The second style fusion feature refers to the feature obtained by further fusing the graphic code style features and geometric features through the second fusion layer.
The second geometric fusion feature refers to a feature obtained by fusing the geometric feature of the graphic code and the style feature of the graphic code through a second fusion layer.
Specifically, the target graphic code generation model comprises a plurality of feature fusion layers, each comprising a first fusion layer and a second fusion layer that fuse different feature information. Correspondingly, the current feature fusion layer is determined in sequence from the preset number of feature fusion layers. In the first fusion layer, the current style fusion feature and the current geometric fusion feature are fused by combining features weighted with different attention weights, and the first style fusion feature and the first geometric fusion feature are output. These are then input into the second fusion layer, which again fuses features under different attention weights and outputs the second style fusion feature and the second geometric fusion feature. The two outputs are processed in the backward feature fusion layer corresponding to the current feature fusion layer, and so on, until the second style fusion feature and the second geometric fusion feature corresponding to the last feature fusion layer are obtained; their fusion is taken as the fused graphic code feature, from which a more accurate graphic code of the target style can be generated.
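Steps S400 to S406 describe a simple alternating loop over the fusion layers. A minimal sketch of that control flow is below; the actual mixing inside each layer is attention-based, so the scalar blends here are placeholders for illustration only.

```python
import numpy as np

def first_layer(s, g):
    # placeholder mixing: each output blends in some of the other modality
    return 0.7 * s + 0.3 * g, 0.7 * g + 0.3 * s

def second_layer(s, g):
    return 0.6 * s + 0.4 * g, 0.6 * g + 0.4 * s

def fuse(style_feat, geom_feat, num_layers=3):
    """Iterate the preset number of feature fusion layers (steps S400-S406)."""
    s, g = style_feat, geom_feat          # S400: initial current fusion features
    for _ in range(num_layers):           # S402/S404: advance layer by layer
        s, g = first_layer(s, g)          # first fusion layer: two outputs
        s, g = second_layer(s, g)         # second fusion layer: two outputs
    return np.concatenate([s, g])         # S406: final fused graphic code feature
```

The loop structure (two sub-layers per fusion layer, two features carried forward each iteration, one combined feature at the end) matches the steps above.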
In the above embodiment, the graphic code style features and the graphic code geometric features are fused through the multiple feature fusion layers, so as to obtain the fused graphic code features with higher accuracy, ensure the accuracy of the finally generated target style graphic code to a certain extent, and facilitate ensuring that the generated target style graphic code is more suitable for training the graphic code decoding model for decoding the graphic code related to the specific application scene.
In some embodiments, in step S402, inputting the current style fusion feature and the current geometry fusion feature into a first fusion layer of the current feature fusion layers for processing, and outputting the first style fusion feature and the first geometry fusion feature includes:
S500, based on the first fusion layer, respectively determining a first weight style feature, a second weight style feature and a third weight style feature corresponding to the current style fusion feature, and a first weight geometric feature, a second weight geometric feature and a third weight geometric feature corresponding to the current geometric fusion feature.
S502, based on the first fusion layer, fusing the first weight style feature and the second weight style feature corresponding to the current style fusion feature with the third weight geometry feature corresponding to the current geometric fusion feature, so as to obtain the first style fusion feature.
S504, fusing the first weight geometric feature and the second weight geometric feature corresponding to the current geometric fusion feature with the third weight style feature corresponding to the current style fusion feature to obtain the first geometric fusion feature.
The first weight style feature, second weight style feature and third weight style feature are style features weighted with different degrees of attention, set on the current style fusion feature according to different attention requirements; they may be the Q, V and K obtained when the current style fusion feature is processed by an Attention mechanism. Likewise, the first weight geometric feature, second weight geometric feature and third weight geometric feature are geometric features weighted with different degrees of attention, set on the current geometric fusion feature, and may be the Q, V and K obtained when the current geometric fusion feature is processed by an Attention mechanism.
Specifically, both the first fusion layer and the second fusion layer in a feature fusion layer fuse cross-modal feature information. The first fusion layer fuses the first weight style feature and second weight style feature (from the current style fusion feature) with the third weight geometry feature (from the current geometric fusion feature), achieving cross-modal fusion of graphic code style features and graphic code geometric features; the resulting first style fusion feature contains more graphic code geometric information. Symmetrically, fusing the first weight geometric feature and second weight geometric feature with the third weight style feature yields a first geometric fusion feature that contains more graphic code style information.
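The description above reads as a cross-attention exchange between the style branch and the geometry branch. The sketch below illustrates that idea using the conventional cross-attention layout (queries from one branch, keys and values from the other); the patent's exact assignment of Q, V and K may differ, and the dimensions and random projection matrices are purely illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8  # feature dimension (illustrative)
# one projection per weighted feature of each branch (stand-ins for learned weights)
Wq_s, Wv_s, Wk_s = (rng.standard_normal((d, d)) for _ in range(3))
Wq_g, Wv_g, Wk_g = (rng.standard_normal((d, d)) for _ in range(3))

def cross_fuse(style, geom):
    """One pass of a fusion layer as cross-attention: the style branch queries
    against the geometry branch (and vice versa), so each output token mixes
    in information from the other modality."""
    Qs, Vs, Ks = style @ Wq_s, style @ Wv_s, style @ Wk_s
    Qg, Vg, Kg = geom @ Wq_g, geom @ Wv_g, geom @ Wk_g
    scale = np.sqrt(d)
    style_out = softmax(Qs @ Kg.T / scale) @ Vg   # style attends to geometry
    geom_out = softmax(Qg @ Ks.T / scale) @ Vs    # geometry attends to style
    return style_out, geom_out
```

Stacking two such passes per fusion layer, with separate projections, would correspond to the first and second fusion layers described above.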
In the above embodiment, the first fusion layer fuses the style features of different weights and the geometric features of different weights, so that the fused feature information can contain style feature information and geometric feature information of the graphic code as much as possible, thereby providing more accurate feature data for generating the graphic code of the subsequent target graphic code style.
In some embodiments, in step S402, inputting the first style fusion feature and the first geometry fusion feature into a second fusion layer of the current feature fusion layers for processing, and outputting the second style fusion feature and the second geometry fusion feature includes:
S600, based on the second fusion layer, respectively determining a first weight style feature, a second weight style feature and a third weight style feature corresponding to the first style fusion feature, and a first weight geometric feature, a second weight geometric feature and a third weight geometric feature corresponding to the first geometric fusion feature.
S602, based on the second fusion layer, fusing the first weight style feature and the second weight style feature corresponding to the first style fusion feature with the third weight geometry feature corresponding to the first geometric fusion feature to obtain a second style fusion feature.
S604, fusing the first weight geometric feature and the second weight geometric feature corresponding to the first geometric fusion feature with the third weight style feature corresponding to the first style fusion feature to obtain the second geometric fusion feature.
Specifically, in order to obtain fusion features with higher accuracy, the first style fusion feature and the first geometric fusion feature output by the first fusion layer are input into the second fusion layer for further cross-modal feature fusion. The second fusion layer fuses the first weight style feature and second weight style feature corresponding to the first style fusion feature with the third weight geometry feature corresponding to the first geometric fusion feature, obtaining a second style fusion feature that incorporates more graphic code geometric information; symmetrically, it fuses the first weight geometric feature and second weight geometric feature corresponding to the first geometric fusion feature with the third weight style feature corresponding to the first style fusion feature, obtaining a second geometric fusion feature that incorporates more graphic code style information. More accurate feature information is thus provided for the subsequent generation of the graphic code of the target style.
In the above embodiment, the second fusion layer further fuses the style features of different weights and the geometric features of different weights, which is favorable for further improving the accuracy of the feature information obtained by fusion, so as to provide more accurate feature data for the generation of the graphic code of the subsequent target graphic code style.
In some embodiments, the method further comprises:
S700, acquiring a binary graphic code set to be processed, wherein the binary graphic code set to be processed comprises binary graphic codes corresponding to any graphic code style.
S702, inputting the graphic code to be processed and the binary graphic code set to be processed corresponding to the target graphic code style into a target graphic code generation model for processing, and outputting the target style graphic code set corresponding to the target graphic code style.
S704, acquiring an initial graphic code decoding model, and training the initial graphic code decoding model for multiple times based on a target style graphic code set to obtain a target graphic code decoding model, wherein the target graphic code decoding model is used for decoding graphic codes.
The binary graphic code set to be processed refers to the binary graphic codes required for generating graphic codes of a specified style. The binary graphic codes in the set may be obtained by binarizing graphic codes of different styles, or by taking any binary graphic code and modifying its pixel values, for example changing the pixel value 1 at a certain position in the binary graphic code into the pixel value 255; that is, a plurality of binary graphic codes with different graphic contents can be obtained by modifying the pixel values of a single binary graphic code in different ways. The target style graphic code set refers to a set of graphic codes that all have the target graphic code style. The initial graphic code decoding model refers to an untrained graphic code decoding model, and the target graphic code decoding model refers to the trained model used for decoding graphic codes of a specific style.
Specifically, in order to obtain more graphic codes of a specified style and to train the decoding capability of the graphic code decoding model for that style, a set of to-be-processed binary graphic codes is obtained. The graphic code style corresponding to each binary graphic code in the set may be inconsistent with the target graphic code style, and the set may contain a plurality of different binary graphic codes converted from the same binary graphic code in different ways, for example by modifying pixel values at different positions, so as to obtain binary graphic codes with different binary graphic contents. The target style graphic code set is then used to train the initial graphic code decoding model, improving the capability of the resulting target graphic code decoding model to decode graphic codes of the designated target style, and thus better improving its decoding accuracy for graphic codes of that style.
In the above embodiment, the target style graphic code set belonging to the target graphic code style is obtained by acquiring the binary graphic code set to be processed, inputting it together with the graphic code to be processed corresponding to the target style into the target graphic code generation model, and then using the output set to train the initial graphic code decoding model into the target graphic code decoding model. Acquiring the binary graphic code set to be processed is relatively simple: modifying the pixel values of the same binary graphic code in different ways yields a plurality of binary graphic codes, from which the target graphic code generation model produces a target style graphic code set of the designated style. Applying this set to the training of the initial graphic code decoding model better improves the decoding capability and decoding accuracy of the final target graphic code decoding model for graphic codes of the designated style.
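The pixel-modification trick described above can be sketched as follows. This is a toy sketch: the number of flipped positions is arbitrary, and a real pipeline would only apply modifications that keep each code point map valid and decodable.

```python
import numpy as np

def make_variants(binary_code: np.ndarray, n: int, seed: int = 0):
    """Derive n binary graphic codes with different graphic contents from one
    binary code by modifying pixel (code point) values in different ways."""
    rng = np.random.default_rng(seed)
    h, w = binary_code.shape
    variants = []
    for _ in range(n):
        v = binary_code.copy()
        # flip a handful of randomly chosen positions (1 -> 0, 0 -> 1)
        ys = rng.integers(0, h, size=4)
        xs = rng.integers(0, w, size=4)
        v[ys, xs] = 1 - v[ys, xs]
        variants.append(v)
    return variants
```

Each variant then passes through the generation model together with the styled reference code, yielding many styled samples for decoder training.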
In some embodiments, in step S200, training the initial graphic code generation model based on the graphic code set to be trained, to obtain the target graphic code generation model includes:
S800, training the initial graphic code generation model based on the graphic code set to be trained to obtain the graphic code generation model to be fine-tuned.
S802, a target style graphic code set corresponding to a target graphic code style is obtained, wherein the target style graphic code set comprises graphic codes corresponding to the target graphic code style and binarized graphic codes corresponding to the graphic codes.
S804, training the graphic code generation model to be fine-tuned based on the target style graphic code set to obtain the target graphic code generation model.
Wherein the graphic code generation model to be fine-tuned refers to a model whose capability of generating graphic codes of a specific style is to be enhanced. The target style graphic code set refers to a set of graphic codes of the same style but with different graphic contents; it only needs a small number (such as 1 to 5) of graphic codes of the target graphic code style, and each graphic code has a corresponding binarized graphic code, that is, a binary graphic obtained by binarizing the corresponding graphic code.
Specifically, in order to improve the capability of the trained target graphic code generation model to generate graphic codes of the designated target style, and the accuracy of those graphic codes, the initial graphic code generation model is first trained on the multi-style graphic code set to be trained to obtain the graphic code generation model to be fine-tuned. This model is then fine-tuned on a small target style graphic code set corresponding to the target graphic code style to obtain the final target graphic code generation model, so that a model that accurately generates the specific style can be trained with only a small sample set of that style. It should be noted that the target graphic code generation model is not limited to generating graphic codes of the target style; rather, its capability and accuracy for the target graphic code style are higher than for graphics of other styles.
In the above embodiment, by acquiring the target style graphic code set corresponding to the target graphic code style and using it to fine-tune the graphic code generation model obtained from the graphic code set to be trained, a target graphic code generation model capable of generating graphic codes of the target style with higher precision can be trained with only a small amount of sample data, which is beneficial to improving, to a certain extent, the accuracy of the graphic codes of the specific style applied to graphic code decoding model training.
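The two-stage scheme (pretrain on many styles, then fine-tune on 1-5 target-style samples) can be sketched on a toy linear model. This is purely illustrative: the real model is a generative network, and the data, dimensions, and learning rates below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgd_fit(w, X, y, lr, steps):
    """Plain gradient descent on mean squared error; stands in for model training."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

# Stage 1: pretrain on a large multi-style sample set (synthetic data here)
X_pre = rng.standard_normal((200, 4))
w_style = rng.standard_normal(4)                    # "average style" ground truth
w_pre = sgd_fit(np.zeros(4), X_pre, X_pre @ w_style, lr=0.05, steps=300)

# Stage 2: fine-tune on a handful (1-5) of target-style samples,
# starting from the pretrained weights instead of from scratch
X_ft = rng.standard_normal((3, 4))
w_target = w_style + 0.1 * rng.standard_normal(4)   # target style is nearby
w_ft = sgd_fit(w_pre, X_ft, X_ft @ w_target, lr=0.02, steps=50)
```

Starting stage 2 from the pretrained weights is what lets a handful of target-style samples suffice, mirroring the fine-tuning step S804.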
In one embodiment, taking the generation of training data for a graphic code decoding model applied to decoding graphic codes of a specified style as an example, a graphic code set to be trained comprising a large number of arbitrary graphic code styles is obtained. The set comprises graphic code groups of different graphic code styles, and each group comprises graphic codes to be trained of the same style together with the corresponding binary graphic codes to be trained, obtained by binarizing the graphic codes to be trained. The initial graphic code generation model is then trained multiple times on this set to obtain a graphic code generation model to be fine-tuned that can generate graphic codes of any designated style. To improve its accuracy for the target graphic code style, a small number (such as 1 to 5) of target style graphic codes of the target graphic code style are acquired for fine-tuning training, further optimizing and adjusting the model parameters and yielding a target graphic code generation model that generates graphic codes of the target graphic code style with higher precision.
Further, a to-be-processed binary graphic code set containing any graphic code style is obtained; the binary graphic codes in the set may be obtained by binarizing a graphic code of any style and then changing the pixel values corresponding to the code points in the resulting binary graphic code. A graphic code to be processed corresponding to the target graphic code style is acquired and input into the target graphic code generation model together with the binary graphic code set to be processed, generating a target style graphic code set whose codes carry the same content as the binary graphic codes to be processed in the set. This target style graphic code set serves as training data for the graphic code decoding model, improving its decoding capability for graphic codes of the specific style and thus its accuracy at graphic code decoding.
More specifically, the target graphic code generation model comprises a graphic code style encoder, a graphic code geometry encoder and a feature fusion sub-model; the framework of the target graphic code generation model and the data flow for generating a target style graphic code corresponding to the target graphic code style may be as shown in fig. 3. The feature fusion sub-model includes a preset number of feature fusion layers, whose framework may be as shown in fig. 4: in the first fusion layer, Q1, V1 and K1 correspond to the first, second and third weight style features of the current style fusion feature, and Q2, V2 and K2 to the first, second and third weight geometric features of the current geometric fusion feature; in the second fusion layer, Q1, V1 and K1 correspond to the first, second and third weight style features of the first style fusion feature, and Q2, V2 and K2 to the first, second and third weight geometric features of the first geometric fusion feature.
Feature extraction is performed on the graphic code to be processed by the graphic code style encoder in the target graphic code generation model to obtain graphic code style features, and on each binary graphic code to be processed in the binary graphic code set by the graphic code geometry encoder to obtain the corresponding graphic code geometric features. The style features and geometric features are fused by the multiple feature fusion layers in the feature fusion sub-model to obtain fused graphic code features, from which the target style graphic code set is generated. Applying this set to the training of the graphic code decoding model improves its accuracy on the graphic codes of specific styles that need to be decoded in specific scenes.
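Putting the pieces of fig. 3 together, the overall data flow can be sketched as below: one styled reference code plus many binary codes in, one target-style code per binary code out. All component implementations are trivial stand-ins invented for illustration, not the patent's networks.

```python
import numpy as np

def generate_target_style_set(styled_code, binary_set,
                              style_enc, geom_enc, fuse_fn, decoder):
    """End-to-end flow: encode the style once, then for each binary code
    encode its geometry, fuse, and decode a styled output."""
    s = style_enc(styled_code)                 # shared style feature
    out = []
    for b in binary_set:
        g = geom_enc(b)                        # per-code geometry feature
        out.append(decoder(fuse_fn(s, g)))     # fused feature -> styled code
    return out

# toy stand-ins so the flow runs end to end
style_enc = lambda img: img.mean(axis=(0, 1))         # (3,) style vector
geom_enc = lambda b: b.astype(float).mean(axis=0)     # (W,) geometry vector
fuse_fn = lambda s, g: np.concatenate([s, g])         # naive fusion
decoder = lambda f: f[:, None] * f[None, :]           # fake "image" output
```

The resulting list of outputs plays the role of the target style graphic code set fed to the decoding model's training.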
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and their order is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a graphic code generating device. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the one or more graphic code generating devices provided below may refer to the limitation of the graphic code generating method hereinabove, and will not be repeated herein.
As shown in fig. 5, an embodiment of the present application provides an apparatus 500, including:
the obtaining module 502 is configured to obtain a trained target graphic code generation model, and obtain a binary graphic code to be processed and a graphic code to be processed corresponding to a target graphic code style; training of the target graphic code generation model includes: training the initial graphic code generation model based on a graphic code set to be trained to obtain a target graphic code generation model, wherein the graphic code set to be trained comprises a plurality of groups of graphic code groups corresponding to different graphic code styles, and the graphic code groups comprise graphic codes to be trained of the same graphic code style and binary graphic codes to be trained corresponding to the graphic codes to be trained;
the extracting module 504 is configured to input the graphic code to be processed and the binary graphic code to be processed into a target graphic code generating model, and extract graphic code style features corresponding to the graphic code to be processed and graphic code geometric features corresponding to the binary graphic code to be processed, respectively, based on the target graphic code generating model;
and the fusion module 506 is configured to fuse the graphic code style feature and the graphic code geometric feature based on the target graphic code generation model to obtain a fused graphic code feature, and generate the target style graphic code based on the fused graphic code feature.
In some embodiments, the target graphic code generation model includes a graphic code style encoder and a graphic code geometry encoder; inputting the graphic code to be processed and the binary graphic code to be processed into a target graphic code generation model, respectively extracting graphic code style characteristics corresponding to the graphic code to be processed and graphic code geometric characteristics corresponding to the binary graphic code to be processed based on the target graphic code generation model, wherein the extraction module 504 is specifically configured to:
based on the graphic code style encoder, extracting features of the graphic code to be processed to obtain graphic code style features corresponding to the graphic code to be processed;
and extracting features of the binary graphic code to be processed based on the graphic code geometric encoder to obtain graphic code geometric features corresponding to the binary graphic code to be processed.
In some embodiments, the target graphic code generation model includes a preset number of feature fusion layers, and the feature fusion layers include a first fusion layer and a second fusion layer; the graphic code style feature and the graphic code geometric feature are fused based on the target graphic code generation model, so as to obtain a fused graphic code feature aspect, and the fusion module 506 is specifically configured to:
taking the style characteristics of the graphic codes as current style fusion characteristics and taking the geometric characteristics of the graphic codes as current geometric fusion characteristics;
Sequentially determining a current feature fusion layer, inputting current style fusion features and current geometric fusion features into a first fusion layer in the current feature fusion layer for processing, outputting first style fusion features and first geometric fusion features, inputting the first style fusion features and the first geometric fusion features into a second fusion layer in the current feature fusion layer for processing, and outputting second style fusion features and second geometric fusion features;
taking the second style fusion feature as the current style fusion feature, taking the second geometric fusion feature as the current geometric fusion feature, and taking the next feature fusion layer after the current feature fusion layer as the new current feature fusion layer;
and returning to the step of inputting the current style fusion feature and the current geometric fusion feature into a first fusion layer in the current feature fusion layer for processing until a second style fusion feature and a second geometric fusion feature corresponding to the last current feature fusion layer are obtained, and taking the fusion feature of the second style fusion feature and the second geometric fusion feature corresponding to the last current feature fusion layer as the fusion graphic code feature.
In some embodiments, in the aspect of inputting the current style fusion feature and the current geometry fusion feature into the first fusion layer in the current feature fusion layer for processing, and outputting the first style fusion feature and the first geometry fusion feature, the fusion module 506 is specifically further configured to:
Based on the first fusion layer, respectively determining a first weight style feature, a second weight style feature and a third weight style feature corresponding to the current style fusion feature, and a first weight geometric feature, a second weight geometric feature and a third weight geometric feature corresponding to the current geometric fusion feature;
based on the first fusion layer, fusing the first weight style feature, the second weight style feature and the third weight geometry feature corresponding to the current geometry fusion feature to obtain a first style fusion feature;
and fusing the first weight geometric feature, the second weight geometric feature and the third weight style feature corresponding to the current style fusion feature, which are corresponding to the current geometric fusion feature, so as to obtain the first geometric fusion feature.
In some embodiments, in inputting the first style fusion feature and the first geometry fusion feature into the second fusion layer of the current feature fusion layer for processing, and outputting the second style fusion feature and the second geometry fusion feature, the fusion module 506 is specifically further configured to:
based on the second fusion layer, respectively determining a first weight style feature, a second weight style feature and a third weight style feature corresponding to the first style fusion feature, and a first weight geometry feature, a second weight geometry feature and a third weight geometry feature corresponding to the first geometry fusion feature;
Based on the second fusion layer, fusing the first weight style feature, the second weight style feature and the third weight geometry feature corresponding to the first geometry fusion feature corresponding to the first style fusion feature to obtain a second style fusion feature;
and fusing the first weight geometric feature, the second weight geometric feature and the third weight style feature corresponding to the first style fusion feature corresponding to the first geometric fusion feature to obtain a second geometric fusion feature.
In some embodiments, the graphic code generating apparatus further comprises a usage module 508, the usage module 508 being configured to:
acquiring a binary graphic code set to be processed, wherein the binary graphic code set to be processed comprises binary graphic codes corresponding to any graphic code style;
inputting the graphic code to be processed corresponding to the target graphic code style and the binary graphic code set to be processed into a target graphic code generation model for processing, and outputting a target style graphic code set corresponding to the target graphic code style;
and acquiring an initial graphic code decoding model, and training the initial graphic code decoding model for multiple times based on the target style graphic code set to obtain a target graphic code decoding model, wherein the target graphic code decoding model is used for decoding graphic codes.
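The decoder-training use case above reduces to rendering every binary code in the target style with the trained generator and then repeatedly training a decoder on the styled set. A sketch follows; `generator`, `train_step`, and the other callables are hypothetical placeholders, not names from the patent:

```python
def build_style_robust_decoder(generator, style_image, binary_codes,
                               decoder, train_step, rounds=3):
    """Sketch of the use case: the trained generator renders each binary
    graphic code in the target style, and the styled set is then used to
    train the decoder multiple times. All callables are illustrative
    stand-ins for whatever models and training routine are actually used."""
    # target style graphic code set: one styled code per binary payload
    styled_set = [generator(style_image, b) for b in binary_codes]
    for _ in range(rounds):  # "training ... for multiple times"
        for styled, payload in zip(styled_set, binary_codes):
            decoder = train_step(decoder, styled, payload)
    return decoder  # target graphic code decoding model
```

The point of the loop is that the decoder sees the target style during training, so it can later decode graphic codes rendered in that style.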
In some embodiments, the fusion module 506 is specifically further configured to:
training the initial graphic code generation model based on the graphic code set to be trained to obtain a graphic code generation model to be fine-tuned;
acquiring a target style graphic code set corresponding to a target graphic code style, wherein the target style graphic code set comprises graphic codes corresponding to the target graphic code style and binarized graphic codes corresponding to the graphic codes;
training the graphic code generation model to be fine-tuned based on the target style graphic code set to obtain a target graphic code generation model.
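The two-stage schedule described here, pre-training on many styles and then fine-tuning on the single target style, can be sketched as follows; `fit` stands in for whichever training routine is used and is purely illustrative:

```python
def train_target_generator(init_model, multi_style_set, target_style_set, fit):
    """Stage 1: train the initial model on graphic code groups covering
    many styles. Stage 2: fine-tune the result on (styled code,
    binarized code) pairs of the single target style. `fit` is a
    hypothetical training routine returning an updated model."""
    model_to_finetune = fit(init_model, multi_style_set)     # pre-training
    target_model = fit(model_to_finetune, target_style_set)  # fine-tuning
    return target_model
```

The design choice is the usual one for style adaptation: broad pre-training gives the model general style/geometry disentanglement, and a short fine-tune specializes it to the target style without training from scratch.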
The respective modules in the above-described graphic code generating apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In some embodiments, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used to store data related to the execution process. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the steps in the graphics code generation method described above.
In some embodiments, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through Wi-Fi, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement the steps in the graphic code generation method described above. The display unit of the computer device is used to present a visual display, and may be a display screen, a projection device or a virtual reality imaging device. The display screen may be a liquid crystal display screen or an electronic ink display screen; the input device of the computer device may be a touch layer covering the display screen, or keys, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
It will be appreciated by those skilled in the art that the structures shown in fig. 6 or 7 are merely block diagrams of portions of structures related to the aspects of the present application and are not intended to limit the computer devices to which the aspects of the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
In some embodiments, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method embodiments described above when the computer program is executed.
In some embodiments, an internal structural diagram of a computer-readable storage medium is provided as shown in fig. 8, the computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method embodiments described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combinations of technical features are not contradictory, they should be considered to be within the scope of this specification.
The foregoing embodiments represent only a few implementations of the present application, and their descriptions are relatively specific and detailed, but they should not thereby be construed as limiting the scope of the present application. It should be noted that various modifications and improvements may be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of generating a graphic code, the method comprising:
obtaining a trained target graphic code generation model, and obtaining a binary graphic code to be processed and a graphic code to be processed corresponding to the target graphic code style; the training of the target graphic code generation model comprises the following steps: training an initial graphic code generation model based on a graphic code set to be trained to obtain the target graphic code generation model, wherein the graphic code set to be trained comprises a plurality of graphic code groups corresponding to different graphic code styles, and each graphic code group comprises graphic codes to be trained of the same graphic code style and binary graphic codes to be trained corresponding to the graphic codes to be trained;
Inputting the graphic code to be processed and the binarized graphic code to be processed into the target graphic code generation model, and respectively extracting graphic code style characteristics corresponding to the graphic code to be processed and graphic code geometric characteristics corresponding to the binarized graphic code to be processed based on the target graphic code generation model;
and fusing the graphic code style characteristics and the graphic code geometric characteristics based on the target graphic code generation model to obtain fused graphic code characteristics, and generating a target style graphic code based on the fused graphic code characteristics.
2. The method of claim 1, wherein the target graphic code generation model comprises a graphic code style encoder and a graphic code geometry encoder; and inputting the graphic code to be processed and the binarized graphic code to be processed into the target graphic code generation model, and respectively extracting graphic code style characteristics corresponding to the graphic code to be processed and graphic code geometric characteristics corresponding to the binarized graphic code to be processed based on the target graphic code generation model, comprises:
based on the graphic code style encoder, extracting features of the graphic code to be processed to obtain graphic code style features corresponding to the graphic code to be processed;
And based on the graphic code geometric encoder, extracting the characteristics of the binary graphic code to be processed to obtain the graphic code geometric characteristics corresponding to the binary graphic code to be processed.
3. The method of claim 1, wherein the target graphic code generation model comprises a preset number of feature fusion layers, the feature fusion layers comprising a first fusion layer and a second fusion layer; the step of fusing the graphic code style characteristics and the graphic code geometric characteristics based on the target graphic code generation model to obtain fused graphic code characteristics comprises the following steps:
taking the style characteristics of the graphic codes as current style fusion characteristics and taking the geometric characteristics of the graphic codes as current geometric fusion characteristics;
sequentially determining a current feature fusion layer, inputting the current style fusion feature and the current geometric fusion feature into a first fusion layer in the current feature fusion layer for processing, outputting a first style fusion feature and a first geometric fusion feature, inputting the first style fusion feature and the first geometric fusion feature into a second fusion layer in the current feature fusion layer for processing, and outputting a second style fusion feature and a second geometric fusion feature;
Taking the second style fusion feature as the current style fusion feature, taking the second geometric fusion feature as the current geometric fusion feature, and taking the next feature fusion layer after the current feature fusion layer as the current feature fusion layer;
and returning to the step of inputting the current style fusion feature and the current geometric fusion feature into a first fusion layer in the current feature fusion layer for processing until a second style fusion feature and a second geometric fusion feature corresponding to the last current feature fusion layer are obtained, and taking the fusion feature of the second style fusion feature and the second geometric fusion feature corresponding to the last current feature fusion layer as the fusion graphic code feature.
4. The method of claim 3, wherein inputting the current style fusion feature and the current geometry fusion feature into a first fusion layer of the current feature fusion layers for processing, and outputting a first style fusion feature and a first geometry fusion feature comprises:
based on the first fusion layer, respectively determining a first weight style feature, a second weight style feature and a third weight style feature corresponding to the current style fusion feature, and a first weight geometry feature, a second weight geometry feature and a third weight geometry feature corresponding to the current geometry fusion feature;
Based on the first fusion layer, fusing the first weight style feature and the second weight style feature corresponding to the current style fusion feature with the third weight geometry feature corresponding to the current geometry fusion feature to obtain a first style fusion feature;
and fusing the first weight geometric feature and the second weight geometric feature corresponding to the current geometric fusion feature with the third weight style feature corresponding to the current style fusion feature to obtain a first geometric fusion feature.
5. The method of claim 3, wherein inputting the first style fusion feature and the first geometry fusion feature into a second fusion layer of the current feature fusion layers for processing, and outputting a second style fusion feature and a second geometry fusion feature comprises:
based on the second fusion layer, respectively determining a first weight style feature, a second weight style feature and a third weight style feature corresponding to the first style fusion feature, and a first weight geometry feature, a second weight geometry feature and a third weight geometry feature corresponding to the first geometry fusion feature;
Based on the second fusion layer, fusing the first weight style feature and the second weight style feature corresponding to the first style fusion feature with the third weight geometry feature corresponding to the first geometry fusion feature to obtain the second style fusion feature;
and fusing the first weight geometric feature and the second weight geometric feature corresponding to the first geometric fusion feature with the third weight style feature corresponding to the first style fusion feature to obtain the second geometric fusion feature.
6. The method according to claim 1, wherein the method further comprises:
acquiring a binary graphic code set to be processed, wherein the binary graphic code set to be processed comprises binary graphic codes corresponding to any graphic code style;
inputting the graphic code to be processed corresponding to the target graphic code style and the binary graphic code set to be processed into the target graphic code generation model for processing, and outputting the target style graphic code set corresponding to the target graphic code style;
and acquiring an initial graphic code decoding model, and training the initial graphic code decoding model for multiple times based on the target style graphic code set to obtain a target graphic code decoding model, wherein the target graphic code decoding model is used for decoding graphic codes.
7. The method of claim 1, wherein training the initial graphical code generation model based on the set of graphical codes to be trained to obtain the target graphical code generation model comprises:
training the initial graphic code generation model based on the graphic code set to be trained to obtain a graphic code generation model to be fine-tuned;
acquiring a target style graphic code set corresponding to the target graphic code style, wherein the target style graphic code set comprises graphic codes corresponding to the target graphic code style and binarized graphic codes corresponding to the graphic codes;
and training the graphic code generation model to be fine-tuned based on the target style graphic code set to obtain the target graphic code generation model.
8. A graphic code generating apparatus, the apparatus comprising:
the acquisition module is used for acquiring a trained target graphic code generation model and acquiring a binary graphic code to be processed and a graphic code to be processed corresponding to the target graphic code style; the training of the target graphic code generation model comprises the following steps: training an initial graphic code generation model based on a graphic code set to be trained to obtain the target graphic code generation model, wherein the graphic code set to be trained comprises a plurality of graphic code groups corresponding to different graphic code styles, and each graphic code group comprises graphic codes to be trained of the same graphic code style and binary graphic codes to be trained corresponding to the graphic codes to be trained;
The extraction module is used for inputting the graphic code to be processed and the binarized graphic code to be processed into the target graphic code generation model, and respectively extracting graphic code style characteristics corresponding to the graphic code to be processed and graphic code geometric characteristics corresponding to the binarized graphic code to be processed based on the target graphic code generation model;
and the fusion module is used for fusing the graphic code style characteristics and the graphic code geometric characteristics based on the target graphic code generation model to obtain fusion graphic code characteristics, and generating a target style graphic code based on the fusion graphic code characteristics.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202410070234.7A 2024-01-18 2024-01-18 Graphic code generation method, graphic code generation device, computer equipment and storage medium Pending CN117611953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410070234.7A CN117611953A (en) 2024-01-18 2024-01-18 Graphic code generation method, graphic code generation device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117611953A true CN117611953A (en) 2024-02-27

Family

ID=89958160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410070234.7A Pending CN117611953A (en) 2024-01-18 2024-01-18 Graphic code generation method, graphic code generation device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117611953A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376830A (en) * 2018-10-17 2019-02-22 京东方科技集团股份有限公司 Two-dimensional code generation method and device
US20200226440A1 (en) * 2018-10-17 2020-07-16 Boe Technology Group Co., Ltd. Two-dimensional code image generation method and apparatus, storage medium and electronic device
CN109492735A (en) * 2018-11-23 2019-03-19 清华大学 Two-dimensional code generation method and computer readable storage medium
CN110473141A (en) * 2019-08-02 2019-11-19 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110489582A (en) * 2019-08-19 2019-11-22 腾讯科技(深圳)有限公司 Personalization shows the generation method and device, electronic equipment of image
KR102543451B1 (en) * 2022-04-29 2023-06-13 주식회사 이너버즈 Image feature extraction and synthesis system using deep learning and its learning method
CN117011415A (en) * 2022-11-11 2023-11-07 腾讯科技(深圳)有限公司 Method and device for generating special effect text, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination