CN118115509A - Label generation method and device, electronic equipment and storage medium

Info

Publication number
CN118115509A
Authority
CN
China
Prior art keywords
target
image
label
background
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310785253.3A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Hanyin Electronic Technology Co Ltd
Original Assignee
Xiamen Hanyin Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Hanyin Electronic Technology Co Ltd filed Critical Xiamen Hanyin Electronic Technology Co Ltd
Priority to CN202310785253.3A priority Critical patent/CN118115509A/en
Publication of CN118115509A publication Critical patent/CN118115509A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a label generation method and device, an electronic device, and a storage medium, relating to the technical field of label printing. The method comprises the following steps: acquiring a target image, wherein the target image comprises an element area and a background area; extracting at least one target element from the element area, wherein each target element comprises element content and an element position; determining a background image and generating a basic label according to the background image or a preset image, wherein the background image comprises the background area and the element area after filling processing; and drawing the element content of each target element in the basic label according to its element position to generate a target label. The scheme provided by the invention can generate labels efficiently and intelligently, thereby improving the flexibility and diversity of label design, reducing the cost and error rate of manual intervention, and improving the degree of automation and intelligence of the production line.

Description

Label generation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of label printing technologies, and in particular, to a label generating method, a device, an electronic apparatus, and a storage medium.
Background
With the continuous development of computer vision technology and deep learning technology, image recognition has become an important direction of commercial application. In the production and management process, it is often necessary to identify information on the image and convert it into a visual label for printing and management.
Existing label generation schemes are usually implemented with label editing software: the user first imports the elements (such as text, barcodes and pictures) into the label editing software, and then saves the final label through operations such as modifying text, scaling element sizes, and dragging and laying out elements. However, different label editing software uses different editing modes, which often requires considerable learning cost; and for complex labels the number of user operations increases, consuming a great deal of time while greatly increasing the probability of user error.
Disclosure of Invention
The invention provides a label generation method, a label generation device, electronic equipment and a storage medium, which can efficiently and intelligently generate labels, improve flexibility and diversity of label design, reduce cost and error rate of manual intervention, and improve automation degree and intelligent level of a production line.
According to an aspect of the present invention, there is provided a tag generation method including:
acquiring a target image, wherein the target image comprises an element area and a background area;
extracting at least one target element from the element region, wherein each target element includes element content and element location;
determining a background image and generating a basic label according to the background image or a preset image, wherein the background image comprises a background area and an element area after filling treatment;
and drawing the element content of the target element in the basic label according to the element position of each target element to generate a target label.
Optionally, acquiring the target image includes:
acquiring an original image;
and preprocessing the original image to obtain a target image, wherein the preprocessing comprises at least one of clipping processing, graying processing, binarization processing, denoising processing, morphological processing, rotation processing and perspective transformation processing.
Optionally, extracting at least one target element from the element region includes:
Identifying the number of target elements included in the element region, and respectively determining the element type of each target element;
if the element type of the target element is a text type, extracting the element content and the element position of the target element based on a text recognition technology;
if the element type of the target element is a bar code type, extracting the element content of the target element based on a bar code identification technology, and determining the element position of the target element based on a connected domain technology;
if the element type of the target element is the picture type, determining the element position of the target element based on an image recognition technology or a machine learning technology, and extracting the element content of the target element according to the element position of the target element.
Optionally, the method for filling the element region includes any one of the following:
filling the element region according to the color value of the junction of the element region and the background region;
if the background area is a solid color area, filling the element area according to the color value of the background area;
if the background area is formed by the basic image, determining an arrangement rule of the basic image, and filling the element area according to the basic image and the arrangement rule.
Optionally, generating the basic tag according to the background image or the preset image includes:
obtaining a label template, wherein the label template is a universal template or corresponds to the model of the printer;
determining whether the background image meets a preset condition;
if the background image meets the preset condition, generating a basic label according to the background image and the label template;
if the background image does not meet the preset condition, generating a basic label according to the preset image and the label template.
Optionally, after drawing the element content of the target element in the base tag, the method further includes:
And adjusting the element content of the target element and/or the position of the element content of the target element in the base tag in response to an adjustment instruction input by a user.
Optionally, after generating the target tag, the method further includes:
The target label is sent to the printer to cause the printer to print the target label.
According to another aspect of the present invention, there is provided a tag generating apparatus including: the device comprises an image acquisition module, an element extraction module, a background processing module and a label generation module;
The image acquisition module is used for acquiring a target image, wherein the target image comprises an element area and a background area;
An element extraction module for extracting at least one target element from an element region, wherein each target element includes element content and element location;
The background processing module is used for determining a background image, wherein the background image comprises a background area and an element area subjected to filling processing;
The label generation module is used for generating a basic label according to the background image or the preset image, and drawing the element content of the target element in the basic label according to the element position of each target element respectively so as to generate the target label.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the tag generation method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to perform the tag generation method of any of the embodiments of the present invention.
According to the technical scheme, at least one target element is extracted from an element area of a target image through obtaining the target image, a background image is determined, a basic label is generated according to the background image or a preset image, and element contents of the target elements are drawn in the basic label according to element positions of each target element respectively to generate the target label. Therefore, the label can be generated efficiently and intelligently, and the production efficiency and quality are greatly improved; meanwhile, the label template adopted in the process of generating the target label can be a universal template or corresponds to the model of the printer, so that the flexibility and diversity of label design can be improved, and the attractiveness and suitability of the label are ensured; furthermore, the label generation method of the scheme basically does not need manual intervention, thereby greatly reducing the cost and error rate of the manual intervention and improving the automation degree and the intelligent level of the production line.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a label generating method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a target image according to a first embodiment of the present invention;
fig. 3 is a flow chart of a label generating method according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a label generating apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of another label generating apparatus according to the third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "target," "original," "base," and the like in the description and claims of the present invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a schematic flow chart of a label generating method according to a first embodiment of the present invention, where the method may be applied to a case of generating a label based on an image, and the method may be performed by a label generating apparatus, where the label generating apparatus may be implemented in a form of hardware and/or software, and the label generating apparatus may be configured in an electronic device (such as a computer device). As shown in fig. 1, the method includes:
S110, acquiring a target image, wherein the target image comprises an element area and a background area.
In an embodiment, the target image may be an image directly acquired by an image acquisition device (such as a camera, a video camera, etc.), an image directly acquired from a memory, or an image obtained by preprocessing an original image.
Fig. 2 is a schematic diagram of a target image according to a first embodiment of the present invention. As shown in fig. 2, the target image includes an element area 10 and a background area 20. The element area 10 refers to the area where at least one target element of the target image is located; the target element may be a two-dimensional barcode (also referred to as a two-dimensional code), text, a one-dimensional barcode (also referred to simply as a barcode), a picture, or a graphic, as shown in fig. 2, which is not particularly limited in the embodiment of the present invention. The background area is the area other than the element area.
The target image is assumed to be an image obtained by preprocessing an original image, and the original image may be an image directly acquired by an image acquisition device or an image directly acquired from a memory. The preprocessing may include at least one of clipping processing, graying processing, binarizing processing, denoising processing, morphological processing, rotation processing, perspective transformation processing.
Specifically, the clipping processing is a process of processing an image in response to a clipping operation by a user, and the image after the clipping processing may be any shape such as a rectangle, a circle, or the like.
The graying process may convert a color image into a grayscale image. The binarization process may convert the image into a black-and-white binary image. Graying and/or binarization facilitate the subsequent extraction of target elements and are better suited to printers that use black-and-white printing modes, such as thermal printers.
The denoising process is a process of removing noise in an image by means of a filter or the like.
Morphological processing is a process of changing the morphology of an image by using operations such as dilation, erosion, etc., to further reduce noise and divide an element region from a background region.
When the target element included in the image is detected to be a character, and the direction of the character is not the preset direction, the rotation process can rotate the image to the correct direction, so that the subsequent process is facilitated.
When it is judged that the outermost peripheral shape of the image is not rectangular, the perspective transformation processing may change the image to rectangular for the subsequent processing.
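As a minimal sketch of the graying and binarization steps above (a production pipeline would more likely call a library such as OpenCV, e.g. `cv2.cvtColor` and `cv2.threshold`; the helper names and list-of-lists image representation here are illustrative only):

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale
    using the common ITU-R BT.601 luminance weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Convert a grayscale image to a black-and-white binary image:
    pixels at or above the threshold become 255 (white), others 0 (black)."""
    return [[255 if px >= threshold else 0 for px in row]
            for row in gray_image]

# Example: a 1x2 image with one dark and one light pixel.
img = [[(10, 10, 10), (240, 240, 240)]]
binary = binarize(to_gray(img))  # binary is [[0, 255]]
```

A fixed threshold is the simplest choice; the adaptive-threshold variant mentioned later in the description would compute a per-neighborhood threshold instead.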
S120, extracting at least one target element from the element area, wherein each target element comprises element content and element positions.
In particular, a method of extracting at least one target element from an element region may include: identifying the number of target elements included in the element region, and respectively determining the element type of each target element; if the element type of the target element is a text type, extracting the element content and the element position of the target element based on a text recognition technology; if the element type of the target element is a bar code type, extracting the element content of the target element based on a bar code identification technology, and determining the element position of the target element based on a connected domain technology; if the element type of the target element is the picture type, determining the element position of the target element based on an image recognition technology or a machine learning technology, and extracting the element content of the target element according to the element position of the target element.
Assume that the element region is identified as including 3 target elements, whose element types are a text type, a barcode type, and a picture type, respectively. For the target element of the text type, a text recognition technology (such as Tesseract OCR, Microsoft OCR, OCRopus, or another text recognition engine) may be adopted to directly obtain the element content and the element position of the target element; for the target element of the barcode type, a barcode recognition technology (such as ML Kit, wechat_qrcode, or ScanKit) may be used to extract the element content of the target element, and a connected domain technology may be used to determine the element position of the target element, wherein the barcode type includes at least one-dimensional barcodes and two-dimensional barcodes; for the target element of the picture type, an image recognition technology (such as threshold segmentation, edge detection, or region growing) or a machine learning technology may be adopted to determine the element position of the target element, and the target image is then cropped according to that element position to extract the element content of the target element.
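The type-based dispatch described above can be sketched as follows. The extractor callables here are stubs standing in for real engines (e.g. Tesseract for text, ZBar/ZXing for barcodes, an object detector for pictures); all names and the toy region format are illustrative, not from the patent:

```python
def extract_elements(region, classify, extractors):
    """Return a list of (content, position) pairs for the elements in a
    region. `classify` yields (element, type_name) pairs; `extractors`
    maps a type name ("text", "barcode", "picture") to a callable that
    returns (content, position) for that element."""
    results = []
    for element, type_name in classify(region):
        results.append(extractors[type_name](element))
    return results

# Stub usage: a toy "region" holding one text and one barcode element.
stub_region = ["HELLO", "|||code|||"]
classify = lambda region: [(region[0], "text"), (region[1], "barcode")]
extractors = {
    "text": lambda e: (e, (0, 0)),                 # stands in for an OCR engine
    "barcode": lambda e: (e.strip("|"), (0, 40)),  # stands in for a decoder
}
elements = extract_elements(stub_region, classify, extractors)
```

The point of the dispatcher is that each element type keeps its own extraction strategy while the caller receives a uniform (content, position) result.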
S130, determining a background image and generating a basic label according to the background image or a preset image, wherein the background image comprises a background area and an element area after filling processing.
Since the background image includes the background region and the element region after the filling process, the element region is first subjected to the filling process before the background image is determined. Specifically, the method for filling the element region comprises any one of the following steps:
Method 1: and filling the element region according to the color value of the junction of the element region and the background region.
Method 2: and if the background area is a solid color area, filling the element area according to the color value of the background area.
Method 3: if the background area is formed by the basic image, determining an arrangement rule of the basic image, and filling the element area according to the basic image and the arrangement rule.
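Fill methods 2 and 3 can be sketched as follows, on a grayscale image stored as nested lists with the element region given as (top, left, bottom, right), bottom/right exclusive. The function names and region format are illustrative, not defined by the patent:

```python
def fill_solid(image, region, color):
    """Method 2: fill the element region with the solid background's color value."""
    top, left, bottom, right = region
    for y in range(top, bottom):
        for x in range(left, right):
            image[y][x] = color
    return image

def fill_tiled(image, region, tile):
    """Method 3: fill the element region by repeating a base tile. The
    arrangement rule here is simple row-major tiling from the image origin,
    so the pattern stays aligned with the surrounding background."""
    top, left, bottom, right = region
    th, tw = len(tile), len(tile[0])
    for y in range(top, bottom):
        for x in range(left, right):
            image[y][x] = tile[y % th][x % tw]
    return image
```

Method 1 (filling from the boundary color values) would sample pixels along the region's border instead of taking a single color or tile.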
In an embodiment, the method for generating the basic tag according to the background image or the preset image may include: obtaining a label template, wherein the label template is a universal template or corresponds to the model of the printer; determining whether the background image meets a preset condition; if the background image meets the preset condition, generating a basic label according to the background image and the label template; if the background image does not meet the preset condition, generating a basic label according to the preset image and the label template.
The preset image may be an image stored in the tag generation apparatus in advance, the number of the preset images being at least one.
The preset condition may be based on a user's selection, or may be a preset average color value for the background image. For example, when the preset condition is based on a user's selection, whether to use the background image or the preset image to generate the basic label may be determined according to the user's selection; when the preset condition is a preset average color value for the background image, the actual average color value of the background image may be compared with the preset average color value: when the actual average color value is less than or equal to the preset average color value, the basic label is generated according to the background image and the label template, and when the actual average color value is greater than the preset average color value, the basic label is generated according to the preset image and the label template.
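The average-color-value branch can be sketched in a few lines, over a grayscale background image stored as nested lists (function names are illustrative):

```python
def average_color(gray_image):
    """Mean gray value of the background image."""
    pixels = [px for row in gray_image for px in row]
    return sum(pixels) / len(pixels)

def choose_base(background, preset, threshold):
    """Return the background image when its average color value does not
    exceed the preset threshold; otherwise fall back to the preset image."""
    return background if average_color(background) <= threshold else preset
```

A low average value corresponds to a darker, ink-heavy background; the threshold lets the scheme reject backgrounds that would print poorly and use the preset image instead.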
It should be noted that steps S120 and S130 have no fixed execution order: S120 may be executed before S130, S130 may be executed before S120, or the two steps may be executed simultaneously.
And S140, respectively drawing the element content of the target element in the basic label according to the element position of each target element to generate the target label.
In one embodiment, the element position of a target element refers to its relative position within the target image. This ensures that, after the element content of each target element is drawn in the basic label according to its element position, the content appears at an accurate position in the basic label.
Example two
Fig. 3 is a schematic flow chart of a label generating method according to a second embodiment of the present invention, and the label generating method is described in detail on the basis of the first embodiment. As shown in fig. 3, the method includes:
S201, acquiring an original image.
The original image may be an image directly captured by an image capturing device (e.g., camera, video camera, etc.), or may be an image directly captured from a memory.
S202, preprocessing an original image to obtain a target image, wherein the preprocessing comprises at least one of clipping processing, graying processing, binarization processing, denoising processing, morphological processing, rotation processing and perspective transformation processing, and the target image comprises an element area and a background area.
The clipping processing is a process of processing an image in response to a clipping operation by a user, and the image after the clipping processing may be any shape such as a rectangle, a circle, or the like.
The graying process may convert a color image into a grayscale image. The binarization process may convert the image into a black-and-white binary image. Graying and/or binarization facilitate the subsequent extraction of target elements and are better suited to printers that use black-and-white printing modes, such as thermal printers. Illustratively, the present invention may employ a convolutional neural network (CNN) or an adaptive threshold binarization algorithm for the binarization process, during which the CNN receives a set of grayscale images as input and outputs a set of binarized images. To achieve this, a CNN typically uses pooling layers (Pooling Layer) to reduce the dimensions of the image and convolutional layers (Convolutional Layer) to learn the features in the image. The CNN then maps the features to the binarized output using a fully connected layer (Fully Connected Layer).
The denoising process is a process of removing noise in an image by means of a filter or the like. Illustratively, the present invention may employ Convolutional Neural Networks (CNNs) for denoising processes, during which the CNNs may perform noise cancellation by learning the difference between the noise and the signal. CNNs can accept a set of noisy images as input and then learn to convert them to a set of denoised images as output. To achieve this, CNNs typically use a deconvolution layer (Deconvolutional Layer) to eliminate noise, which enlarges the input image to the original size and restores the detail of the original image as much as possible.
Morphological processing is a process of changing the morphology of an image by using operations such as dilation, erosion, etc., to further reduce noise and divide an element region from a background region.
When the target element included in the image is detected to be text and the direction of the text is not the preset direction, the rotation process can rotate the image to the correct direction, facilitating subsequent processing. For example, text orientation may be detected using a gradient-based method or a convolutional neural network method. The gradient-based method mainly detects the direction of the text by computing image gradients, while the convolutional-neural-network-based method learns features of text orientation by training a neural network and then performs direction detection on the picture.
When the outermost peripheral shape of the image is judged not to be rectangular, the perspective transformation process may convert the image into a rectangle for subsequent processing. For example, a deep learning technique may be used to automatically determine whether perspective transformation is required: a CNN model may be trained to automatically detect whether the outermost peripheral region of the image is rectangular and, if not, automatically perform the perspective transformation process.
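A rough sketch of the rectangularity check that decides whether perspective transformation is needed (pure Python; a real pipeline would approximate the outer contour, e.g. with `cv2.approxPolyDP`, and warp non-rectangular images with a homography via `cv2.getPerspectiveTransform`):

```python
def is_axis_aligned_rectangle(corners, tol=2):
    """Rough check that four corner points (ordered TL, TR, BR, BL) form an
    axis-aligned rectangle within `tol` pixels: the two top corners share a
    y value, as do the two bottom corners, and likewise for the left/right
    x values. If this fails, a perspective warp would be applied."""
    (tlx, tly), (trx, try_), (brx, bry), (blx, bly) = corners
    return (abs(tly - try_) <= tol and abs(bly - bry) <= tol and
            abs(tlx - blx) <= tol and abs(trx - brx) <= tol)
```

This is only the decision step; the warp itself solves for the 3x3 homography mapping the four detected corners onto the corners of the output rectangle.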
S203, the number of target elements included in the element area is identified, and the element type of each target element is determined.
The element types of the target element include text type, bar code type and picture type.
S204, if the element type of the target element is a text type, extracting the element content and the element position of the target element based on a text recognition technology.
The text recognition technique may be implemented using an open-source tool such as Tesseract, OCRopus, or EasyOCR, or using a commercial OCR engine such as ABBYY FineReader or Adobe Acrobat Pro DC.
S205, if the element type of the target element is a bar code type, extracting the element content of the target element based on a bar code identification technology, and determining the element position of the target element based on a connected domain technology.
Barcode recognition techniques may be implemented using open-source tools such as ZBar and ZXing, or using commercial engines such as Dynamsoft Barcode Reader and Scandit. The connected domain technique is an image processing technique for dividing an image into multiple connected regions composed of adjacent pixels with similar values. In an image, a connected domain refers to a set of pixels having the same pixel value or gray value.
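A minimal pure-Python version of connected-domain labeling (4-connectivity, breadth-first search); libraries such as OpenCV expose the same operation as `cv2.connectedComponentsWithStats`, which also returns the bounding box of each region and thus the element position:

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected regions of foreground pixels (value 1) in a binary
    image stored as nested lists. Returns (label grid, component count)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] == 1 and labels[sy][sx] == 0:
                current += 1               # start a new component
                queue = deque([(sy, sx)])
                labels[sy][sx] = current
                while queue:               # flood-fill its 4-neighbors
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w and
                                binary[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

For a barcode element, the bounding box of its component (min/max row and column per label) gives the element position described in S205.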
S206, if the element type of the target element is the picture type, determining the element position of the target element based on an image recognition technology or a machine learning technology, and extracting the element content of the target element according to the element position of the target element.
Image recognition techniques may include algorithms such as threshold segmentation, edge detection, region growing, image classification, object detection, and image segmentation, for example YOLO, Mask R-CNN, and Fast R-CNN.
S207, determining a background image, wherein the background image comprises a background area and an element area after filling processing.
Specifically, the method for filling the element region comprises any one of the following steps: filling the element region according to the color value of the junction of the element region and the background region; if the background area is a solid color area, filling the element area according to the color value of the background area; if the background area is formed by the basic image, determining an arrangement rule of the basic image, and filling the element area according to the basic image and the arrangement rule.
S208, acquiring a label template, wherein the label template is a universal template or corresponds to the model of the printer.
A generic template refers to a template that can be matched to most printers. When the label template corresponds to the model of the printer, the model of one printer can correspond to at least one template; when the model of one printer corresponds to a plurality of templates, the user can select a label template from the plurality of templates.
S209, determining whether the background image meets a preset condition. If yes, executing S210; if not, S211 is executed.
The preset condition may be based on a user's selection, or may be a preset average color value for the background image. For example, when the preset condition is based on a user's selection, whether to use the background image or the preset image to generate the basic label may be determined according to the user's selection; when the preset condition is a preset average color value for the background image, the actual average color value of the background image may be compared with the preset average color value: when the actual average color value is less than or equal to the preset average color value, the basic label is generated according to the background image and the label template, and when the actual average color value is greater than the preset average color value, the basic label is generated according to the preset image and the label template.
S210, generating a basic label according to the background image and the label template.
S211, generating a basic label according to the preset image and the label template.
The preset image may be an image stored in the tag generation apparatus in advance, the number of the preset images being at least one.
S212, drawing the element content of each target element in the basic label according to the element position of each target element.
In one embodiment, the element position of a target element refers to its relative position within the target image. This ensures that, after the element content of each target element is drawn in the basic label according to its element position, the content appears at an accurate position in the basic label.
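The relative-position bookkeeping can be sketched as two small conversions (illustrative helper names; positions as (x, y) pixel pairs and sizes as (width, height)):

```python
def to_relative(position, image_size):
    """Convert an absolute (x, y) pixel position in the target image into
    fractions of the image width and height."""
    (x, y), (w, h) = position, image_size
    return (x / w, y / h)

def to_absolute(rel_position, label_size):
    """Scale a relative position back to pixel coordinates in the base
    label, so elements land in the right place even when the label and
    the source image differ in resolution."""
    (rx, ry), (w, h) = rel_position, label_size
    return (round(rx * w), round(ry * h))
```

For example, an element at (50, 20) in a 200x100 target image maps to relative position (0.25, 0.2), which places it at (100, 60) in a 400x300 basic label.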
S213, adjusting the element content of the target element and/or the position of the element content of the target element in the basic tag in response to an adjusting instruction input by a user.
In this way, the position of the element content of the target element in the basic label better matches the user's requirements, and errors that may arise during text recognition are reduced.
S214, generating and storing the target label.
The target label may be stored in XML format, picture format, or JSON format.
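For the JSON option, serialization might look like the following sketch. The schema (field names `width`, `height`, `elements`) is hypothetical, since the patent only names the storage formats, not a layout.

```python
import json

def serialize_label(elements, size):
    """Store the target label as JSON: label geometry plus one entry per
    element. Field names are illustrative, not specified by the patent."""
    doc = {
        "width": size[0],
        "height": size[1],
        "elements": [
            {"type": t, "content": c, "x": x, "y": y}
            for t, c, (x, y) in elements
        ],
    }
    return json.dumps(doc, ensure_ascii=False)
```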
S215, sending the target label to the printer so that the printer prints the target label.
Therefore, the label can be generated efficiently and intelligently, and the production efficiency and quality are greatly improved; meanwhile, the label template adopted in the process of generating the target label can be a universal template or corresponds to the model of the printer, so that the flexibility and diversity of label design can be improved, and the attractiveness and suitability of the label are ensured; furthermore, the label generation method of the scheme basically does not need manual intervention, thereby greatly reducing the cost and error rate of the manual intervention and improving the automation degree and the intelligent level of the production line.
The embodiment of the invention provides a label generation method, which comprises the steps of obtaining a target image, wherein the target image comprises an element area and a background area; extracting at least one target element from the element region, wherein each target element includes element content and element location; determining a background image and generating a basic label according to the background image or a preset image, wherein the background image comprises a background area and an element area after filling treatment; and drawing the element content of the target element in the basic label according to the element position of each target element to generate a target label. According to the technical scheme, at least one target element is extracted from an element area of a target image through obtaining the target image, a background image is determined, a basic label is generated according to the background image or a preset image, and element contents of the target elements are drawn in the basic label according to element positions of each target element respectively to generate the target label. Therefore, the label can be generated efficiently and intelligently, and the production efficiency and quality are greatly improved; meanwhile, the label template adopted in the process of generating the target label can be a universal template or corresponds to the model of the printer, so that the flexibility and diversity of label design can be improved, and the attractiveness and suitability of the label are ensured; furthermore, the label generation method of the scheme basically does not need manual intervention, thereby greatly reducing the cost and error rate of the manual intervention and improving the automation degree and the intelligent level of the production line.
Example III
Fig. 4 is a schematic structural diagram of a label generating apparatus according to a third embodiment of the present invention. As shown in fig. 4, the apparatus includes: an image acquisition module 401, an element extraction module 402, a background processing module 403 and a label generation module 404.
An image acquisition module 401, configured to acquire a target image, where the target image includes an element area and a background area;
An element extraction module 402 for extracting at least one target element from an element region, wherein each target element comprises element content and an element location;
A background processing module 403, configured to determine a background image, where the background image includes a background area and an element area after being filled;
The tag generation module 404 is configured to generate a base tag according to the background image or the preset image, and draw the element content of the target element in the base tag according to the element position of each target element, so as to generate the target tag.
Optionally, the image obtaining module 401 is specifically configured to obtain an original image; and preprocessing the original image to obtain a target image, wherein the preprocessing comprises at least one of clipping processing, graying processing, binarization processing, denoising processing, morphological processing, rotation processing and perspective transformation processing.
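A few of the listed preprocessing steps can be sketched in pure Python as below; a production implementation would typically rely on an image-processing library, and these helper names are illustrative only.

```python
def to_gray(rgb):
    """Graying: luminance-weighted average per (r, g, b) pixel."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb]

def binarize(gray, thresh=128):
    """Binarization: map each pixel to 0 or 255 around a fixed threshold."""
    return [[255 if v >= thresh else 0 for v in row] for row in gray]

def rotate90(img):
    """Rotation (90 degrees clockwise) as a simple geometric correction."""
    return [list(row) for row in zip(*img[::-1])]
```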
Optionally, the element extraction module 402 is specifically configured to identify the number of target elements included in the element area, and determine an element type of each target element respectively; if the element type of the target element is a text type, extracting the element content and the element position of the target element based on a text recognition technology; if the element type of the target element is a bar code type, extracting the element content of the target element based on a bar code identification technology, and determining the element position of the target element based on a connected domain technology; if the element type of the target element is the picture type, determining the element position of the target element based on an image recognition technology or a machine learning technology, and extracting the element content of the target element according to the element position of the target element.
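The connected-domain step used to locate an element such as a bar code can be sketched as a bounding box over the foreground pixels of a binary mask. This simplified stand-in treats all foreground pixels as one component rather than running a full connected-component analysis.

```python
def bounding_box(mask):
    """Return (left, top, right, bottom) of the foreground (1) pixels in a
    binary mask, or None if the mask is empty — a simplified stand-in for
    connected-domain position finding."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return None
    return min(xs), min(ys), max(xs), max(ys)
```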
Optionally, the background processing module 403 is configured to perform a filling process on the element area, and the method for performing the filling process on the element area includes any one of the following: filling the element region according to the color value of the junction of the element region and the background region; if the background area is a solid color area, filling the element area according to the color value of the background area; if the background area is formed by the basic image, determining an arrangement rule of the basic image, and filling the element area according to the basic image and the arrangement rule.
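The second and third filling strategies can be sketched as follows. `fill_solid` and `fill_tiled` operate on a 2D list of color values and are illustrative names; tiling aligned to the image origin is shown as one possible arrangement rule.

```python
def fill_solid(img, region, color):
    """Solid-color background: paint the element region with the
    background's color value. `region` is (x0, y0, x1, y1), inclusive."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            img[y][x] = color
    return img

def fill_tiled(img, region, tile):
    """Patterned background: repeat the base image over the element region
    following its arrangement rule (here, simple origin-aligned tiling)."""
    th, tw = len(tile), len(tile[0])
    x0, y0, x1, y1 = region
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            img[y][x] = tile[y % th][x % tw]
    return img
```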
Optionally, the label generating module 404 is specifically configured to obtain a label template, where the label template is a general template, or the label template corresponds to a model of the printer; determining whether the background image meets a preset condition; if the background image meets the preset condition, generating a basic label according to the background image and the label template; if the background image does not meet the preset condition, generating a basic label according to the preset image and the label template.
Optionally, the tag generation module 404 is further configured to, after the element content of the target element is drawn in the base tag, adjust the element content of the target element and/or the position of the element content of the target element in the base tag in response to an adjustment instruction input by the user.
Optionally, on the basis of fig. 4, fig. 5 is a schematic structural diagram of another label generating apparatus according to the third embodiment of the present invention. As shown in fig. 5, the apparatus further includes: a communication module 405.
And a communication module 405, configured to send the target label to the printer after the label generating module 404 generates the target label, so that the printer prints the target label.
The label generating device provided by the embodiment of the invention can execute the label generating method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example IV
Fig. 6 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any other suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the tag generation method.
In some embodiments, the tag generation method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the tag generation method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the tag generation method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special purpose or general purpose, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system, which overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A tag generation method, comprising:
Acquiring a target image, wherein the target image comprises an element area and a background area;
Extracting at least one target element from the element region, wherein each target element includes element content and element location;
Determining a background image and generating a basic label according to the background image or a preset image, wherein the background image comprises the background area and the element area after filling;
And drawing the element content of the target element in the basic label according to the element position of each target element so as to generate a target label.
2. The tag generation method according to claim 1, wherein the acquiring the target image includes:
Acquiring an original image;
and preprocessing the original image to obtain the target image, wherein the preprocessing comprises at least one of clipping processing, graying processing, binarization processing, denoising processing, morphological processing, rotation processing and perspective transformation processing.
3. The tag generation method of claim 1, wherein the extracting at least one target element from the element region comprises:
Identifying the number of the target elements included in the element region, and respectively determining the element type of each target element;
if the element type of the target element is a text type, extracting the element content and the element position of the target element based on a text recognition technology;
If the element type of the target element is a bar code type, extracting the element content of the target element based on a bar code identification technology, and determining the element position of the target element based on a connected domain technology;
If the element type of the target element is the picture type, determining the element position of the target element based on an image recognition technology or a machine learning technology, and extracting the element content of the target element according to the element position of the target element.
4. The tag generation method according to claim 1, wherein the method of performing the filling process on the element region includes any one of:
Filling the element region according to the color value of the junction of the element region and the background region;
If the background area is a solid color area, filling the element area according to the color value of the background area;
And if the background area is formed by the basic image, determining an arrangement rule of the basic image, and filling the element area according to the basic image and the arrangement rule.
5. The tag generation method according to claim 1 or 4, wherein the generating a base tag from the background image or a preset image includes:
Obtaining a label template, wherein the label template is a universal template or corresponds to the model of a printer;
determining whether the background image meets a preset condition;
If the background image meets the preset condition, generating a basic label according to the background image and the label template;
And if the background image does not meet the preset condition, generating a basic label according to the preset image and the label template.
6. The tag generation method according to claim 1, characterized by further comprising, after drawing the element content of the target element in the base tag:
And adjusting the element content of the target element and/or the position of the element content of the target element in the base tag in response to an adjustment instruction input by a user.
7. The tag generation method according to claim 1, further comprising, after generating the target tag:
and sending the target label to a printer so that the printer prints the target label.
8. A label generating apparatus, comprising: an image acquisition module, an element extraction module, a background processing module and a label generation module;
The image acquisition module is used for acquiring a target image, wherein the target image comprises an element area and a background area;
The element extraction module is used for extracting at least one target element from the element area, wherein each target element comprises element content and element positions;
The background processing module is used for determining a background image, wherein the background image comprises the background area and the element area after filling processing;
The label generating module is used for generating a basic label according to the background image or the preset image, and drawing the element content of each target element in the basic label according to the element position of each target element so as to generate a target label.
9. An electronic device, the electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the tag generation method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the tag generation method of any one of claims 1-7.
CN202310785253.3A 2023-06-29 2023-06-29 Label generation method and device, electronic equipment and storage medium Pending CN118115509A (en)

Publications (1)

Publication Number Publication Date
CN118115509A 2024-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination