CN112686355B - Image processing method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number: CN112686355B
Application number: CN202110033115.0A
Authority: CN (China)
Prior art keywords: image, target area, target, color, anchor point
Legal status: Active (granted)
Inventors: 董方亮, 王俞, 张术景, 王欣燕
Assignee (original and current): Rootcloud Technology Co Ltd
Priority: CN202110033115.0A
Other versions: CN112686355A
Other languages: Chinese (zh)

Landscapes

  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The present application provides an image processing method and apparatus, an electronic device, and a readable storage medium. The image processing method includes: acquiring an original image; determining a target area of the original image; generating an anchor point based on attribute information of the target area and an association relationship between the target area and target content; and fusing the anchor point into the original image to generate a fused image. With this image processing method and apparatus, electronic device, and readable storage medium, the concealment and security of the scanned image can be improved, and the scanned image can be prevented from appearing abrupt when displayed.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of electronic technology, and in particular to an image processing method and apparatus, an electronic device, and a readable storage medium.
Background
At present, obtaining information stored in scanned images (such as text, numerical values, and web links) by scanning specific images is widely used in various fields.
Taking two-dimensional code images as an example, they include stacked/row-type two-dimensional bar codes (also called stacked or layer-type two-dimensional bar codes), whose encoding principle is to stack a one-dimensional bar code into two or more rows as required. They inherit some characteristics of the one-dimensional bar code in encoding design, verification principle, and reading mode, and their reading equipment and bar-code printing are compatible with one-dimensional bar code technology. However, because the number of rows increases, the rows must be distinguished, so the decoding algorithm and software are not identical to those of the one-dimensional bar code. Commonly used row-type two-dimensional bar codes include Code 16K, Code 49, PDF417, etc. (as shown in fig. 1).
Existing scanned images have the following drawbacks: (1) a single display form, namely black and white blocks; (2) when a scanned image is placed on a background image (for example, a two-dimensional code embedded on a poster or an advertisement), it appears very abrupt and easily distracts the reader's attention; (3) it affects the original interface style of the background image in which it is embedded; (4) the scanned information is plain text that cannot be encrypted, or encryption can only be achieved at the cost of exposing the encrypted content.
Disclosure of Invention
In view of the foregoing, it is an object of the present application to provide an image processing method, apparatus, electronic device, and readable storage medium, which overcome at least one of the above-mentioned drawbacks.
The embodiment of the application provides an image processing method, which comprises the following steps: acquiring an original image; determining a target area of the original image; generating an anchor point based on the attribute information of the target area and the association relation between the target area and target content; and fusing the anchor point into the original image to generate a fused image.
Optionally, the attribute information may include an image color characteristic of the target area, and the association relationship may be determined based on the image color characteristic of the target area.
Optionally, the association relationship between the target area and the target content may be determined by binary coding based on the image color characteristics of the target area.
Optionally, the step of determining the association relationship between the target area and the target content by using a binary coding manner based on the image color characteristic of the target area may include: dividing the target area to obtain a plurality of color blocks; respectively corresponding the image color characteristics of each color block in the plurality of color blocks to binary values according to a preset coding rule to form binary codes; the binary code is associated with the target content to represent the target content through the target region.
Optionally, the attribute information may include position information of the target area, and the anchor point may be generated based on the position information and the association relationship.
The embodiment of the application also provides an image processing method, which comprises the following steps: identifying an anchor point in the fused image; determining a target area of the fusion image indicated by the anchor point and an association relationship between the target area and target content based on the identification result; determining target content associated with the target area based on the association relation; and outputting the target content.
Optionally, the anchor point may indicate a plurality of color blocks in the target area of the fused image and an association relationship between each color block and the target content. In this case, the step of determining the target content associated with the target area based on the association relationship may include: determining the binary value corresponding to the image color characteristic of each of the plurality of color blocks based on a decoding rule corresponding to the encoding rule used to encode the target area, so as to form a binary code; and determining the target content associated with the binary code as the target content associated with the target area.
The embodiment of the application also provides an image processing device, which comprises: the image acquisition module acquires an original image; the area determining module is used for determining a target area of the original image; the anchor point generation module is used for generating an anchor point based on the attribute information of the target area and the association relation between the target area and the target content; and the image fusion module fuses the anchor points into the original image to generate a fused image.
An embodiment of the present application also provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate over the bus, and when the machine-readable instructions are executed by the processor, the steps of the image processing method described above are performed.
An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the image processing method described above.
Compared with the prior art, the image processing method and apparatus provided by the embodiments of the present application can prevent the scanned image from appearing abrupt in the background image into which it is embedded, improve the concealment and security of the scanned image, and enrich its display form.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 shows an example of a prior art two-dimensional bar code;
FIG. 2 shows a flowchart of an image processing method provided by an embodiment of the present application;
FIG. 3 is a flowchart showing steps for determining an association relationship between a target area and target content according to an embodiment of the present application;
FIG. 4 shows an exemplary diagram of an image processing method provided by an embodiment of the present application;
FIG. 5 shows a flowchart of another image processing method provided by an embodiment of the present application;
FIG. 6 illustrates a flowchart of steps provided by embodiments of the present application to determine target content associated with a target region;
fig. 7 is a schematic diagram showing the structure of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram showing the structure of another image processing apparatus provided in the embodiment of the present application;
fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. Based on the embodiments of the present application, every other embodiment that a person skilled in the art would obtain without making any inventive effort is within the scope of protection of the present application.
Fig. 2 shows a flowchart of an image processing method of an embodiment of the present application.
As shown in fig. 2, in step S101, an original image is acquired.
In this step, the original image refers to the image used to generate the encoded image. The original image may be acquired in various ways: an image may be captured by an image capturing apparatus and determined as the original image, a locally stored image may be used, or the original image may be obtained from the Internet; the present application is not limited in this respect.
In step S102, a target area of the original image is determined.
In this step, the image enclosed by the target area may be a partial image of the original image or the original image itself (i.e., the complete original image).
Here, the target area of the original image may be determined in various ways.
In an example, the target region of the original image may be determined based on user input on the original image.
As an example, the input may be a sliding operation, a track of the sliding operation on the original image is tracked, and an area surrounded by the track of the sliding operation is determined as a target area. Alternatively, the input may be a click operation, a coordinate position of the click operation on the original image may be determined, and an area within a predetermined range centered on the coordinate position may be determined as the target area.
It should be understood that the above-listed ways of determining the target area of the original image are merely examples, and the present application is not limited thereto, and those skilled in the art may determine the target area of the original image in various custom ways.
In another example, a default selection rule may be used to select a target region from an original image.
For example, a predetermined position on the original image (e.g., at the first pixel position in the upper left corner of the original image) may be determined as the start position of the target area, a first number of pixels may be selected in the length direction of the original image from the predetermined position to determine the length of the target area, and a second number of pixels may be selected in the width direction of the original image from the predetermined position to determine the width of the target area, thereby selecting the target area from the original image.
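The default selection rule above can be sketched as follows; the function name and the `length_px`/`width_px` parameters are illustrative assumptions, not terms from the patent.

```python
def select_default_target_area(image_w, image_h, length_px=8, width_px=4,
                               start=(0, 0)):
    """Select a target area (x, y, w, h) starting at a predetermined
    position (here the top-left pixel): take a first number of pixels
    along the length direction and a second number along the width
    direction, clipped to the image bounds."""
    x, y = start
    w = min(length_px, image_w - x)
    h = min(width_px, image_h - y)
    return (x, y, w, h)
```

On a 100x50 image with the defaults this yields the region (0, 0, 8, 4).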
In step S103, an anchor point is generated based on the attribute information of the target area and the association relationship between the target area and the target content.
In a preferred example, the attribute information of the target area may include an image color characteristic of the target area, in which case the association relationship between the target area and the target content is determined based on the image color characteristic of the target area.
In an embodiment, the attribute information of the target area may further include location information of the target area in addition to the image color characteristics of the target area, for determining the location of the target area in the original image. As an example, the location information may include, but is not limited to, at least one of: the starting point position of the target area, the length of the target area, the width of the target area and the boundary information of the target area.
In this case, the anchor point may be generated based on the positional information of the target area and the association relationship between the target area and the target content.
In the embodiment of the present application, the association relationship between the target area and the target content may be established based on the image color characteristics of the target area through various encoding methods. In a preferred embodiment, the association relationship between the target area and the target content may be determined by a binary coding method based on the image color characteristics of the target area.
A process of establishing an association relationship between a target area and target content by a binary encoding method will be described with reference to fig. 3.
Fig. 3 is a flowchart illustrating steps for determining an association relationship between a target area and target content according to an embodiment of the present application.
As shown in fig. 3, in step S31, the target area is divided to obtain a plurality of color blocks.
In this step, the target area may be divided in various ways; moreover, those skilled in the art may determine the number of color blocks as needed.
In one example, a plurality of color blocks may be obtained by dividing the target area into equal parts.
In another example, a plurality of color blocks may be obtained based on the image color distribution of the target area; for example, consecutive pixels with the same or similar image color characteristics are grouped into one color block.
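The color-distribution-based division could be sketched as follows, assuming (purely as an illustration) that intensities within a fixed tolerance count as "similar"; the patent does not specify a similarity criterion.

```python
def split_by_color(pixels, tol=10):
    """Group consecutive pixels whose intensities differ by at most
    `tol` from the previous pixel into one color block. The tolerance
    rule is an assumed example of 'same or similar' color."""
    blocks = [[pixels[0]]]
    for p in pixels[1:]:
        if abs(p - blocks[-1][-1]) <= tol:
            blocks[-1].append(p)   # similar enough: same block
        else:
            blocks.append([p])     # color jump: start a new block
    return blocks
```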
In step S32, a binary code is formed based on the image color characteristics of each of the plurality of color blocks.
In one embodiment, the image color characteristic of each of the plurality of color blocks is mapped to a binary value according to a predetermined encoding rule to form a binary code.
That is, each color block corresponds to one binary value (0 or 1). For example, the color blocks may be encoded according to the image color distribution, or encoding color blocks that do not affect the original image may be added, so that the image color characteristic of each color block corresponds to a binary value.
In the above step, each color block is encoded according to the target content; the size of each color block, its image color characteristics, its position information, and the like serve as the encoding basis and are converted into the binary code.
In step S33, a binary code is associated with the target content to represent the target content through the target area.
That is, a binary code associated with the target content is generated based on the image color characteristics of the color blocks, so that the target area is associated with the target content and the target content is represented through the target area. In this case, the image of the target area serves as a two-dimensional code image.
In this step, the binary code may be associated directly with the target content, or with a content address at which the target content is stored, so that the target content is acquired through the associated content address.
Through the steps of determining the color blocks, determining the image color characteristic of each color block, and determining the binary value corresponding to each color block, the target area of the original image is binary-coded.
For the above case where color blocks are divided, an anchor point may be generated based on the position information of each color block and the association relationship between each color block and the target content.
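Steps S31-S33 can be sketched as follows for a single row of grayscale pixels. The equal division and the mean-intensity threshold stand in for the "predetermined encoding rule", which the patent does not fix; the names are illustrative.

```python
def encode_target_area(pixels, num_blocks=8, threshold=128):
    """Divide the pixels into equal color blocks (S31) and map each
    block to a bit (S32): mean intensity >= threshold -> '1', else '0'.
    The resulting bit string is the binary code that gets associated
    with the target content (S33)."""
    block_size = len(pixels) // num_blocks
    bits = []
    for i in range(num_blocks):
        block = pixels[i * block_size:(i + 1) * block_size]
        mean = sum(block) / len(block)
        bits.append('1' if mean >= threshold else '0')
    return ''.join(bits)
```

For example, the 16-pixel row `[0, 0, 255, 255, 0, 0, 0, 0, 255, 255, 0, 0, 255, 255, 0, 0]` encodes to `'01001010'` under this rule.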
Returning to fig. 2, in step S104, the anchor point is fused into the original image, and a fused image is generated.
Here, the anchor point may be rendered into the original image using various image rendering methods, thereby generating a fused image that carries the encoded information; that is, the fused image contains one anchor point.
In an example, the anchor point may be an identifier. Preferably, to make the color distribution of the anchor point coordinate with that of the fused image, the area occupied by the anchor point in the fused image may be made as small as possible, for example smaller than or equal to a set value; those skilled in the art may choose this value according to actual requirements. In a preferred example, the set value may be the area occupied by one pixel.
In the embodiments of the present application, the position information of the target area and the association relationship between the target area and the target content may be reflected in an attribute value of the anchor point (such as its color).
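One way an anchor's attribute value could carry position information is to pack coordinates into its RGB color. The 12-bit field layout below is purely an assumed example; the patent does not specify any concrete encoding.

```python
def pack_anchor_rgb(start_x, start_y):
    """Pack a 12-bit x and a 12-bit y start position into one 24-bit
    RGB color (assumed layout: x in the high 12 bits, y in the low 12)."""
    assert 0 <= start_x < 4096 and 0 <= start_y < 4096
    v = (start_x << 12) | start_y
    return ((v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF)

def unpack_anchor_rgb(rgb):
    """Recover (start_x, start_y) from an anchor pixel's RGB color."""
    r, g, b = rgb
    v = (r << 16) | (g << 8) | b
    return ((v >> 12) & 0xFFF, v & 0xFFF)
```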
Fig. 4 shows an exemplary diagram of an image processing method provided in an embodiment of the present application.
Taking the example shown in fig. 4 as an example, in this example, assuming that the original image 1 is a sunflower image, a target area 11 is determined from the sunflower image, and attribute information of the target area 11 includes position information (e.g., a start position, a length, a width of the target area 11) and image color characteristics.
In this example, the target area 11 is divided into 8 color blocks, and the image color characteristic of each color block is mapped to a binary value in a binary encoding manner, for example 0, 1, 0, 0, 1, 0, 1, 0 from left to right, thereby forming the 8-bit binary code 01001010.
The generated anchor point is rendered into the original image and may be placed at any position in the fused image; as an example, it may be placed at the start position of the target area. Because the area occupied by the anchor point is small (for example, a single pixel), it cannot be recognized by the naked eye, so the overall picture of the fused image is not disturbed.
The target area 11 shown in the above example is only used to assist in understanding the image processing procedure of the present application and is not displayed in the original image 1 or the fused image. Further, the number of color blocks and the binary values listed in the above example are merely examples, and the present application is not limited thereto.
In addition, in the prior art the two-dimensional code image has a single display form, namely black and white blocks; as the example in fig. 4 shows, the image processing method provided by this embodiment uses the image in the target area of the original image itself as the encoded image, thereby enriching the display form of the encoded image.
With the image processing method provided by this embodiment, the two-dimensional code image no longer needs to be displayed abruptly on the background image as in the prior art: an embedded two-dimensional code that is invisible or hard to recognize with the naked eye carries the encoded information, so embedding the code neither pollutes the original image nor distracts the reader.
Fig. 5 is a flowchart of an image processing method according to another embodiment of the present application. The image processing method is an image decoding process corresponding to the image encoding process shown in fig. 2.
As shown in fig. 5, in step S201, an anchor point in the fused image is identified.
Here, the fused image may be scanned with various scanning devices to identify anchor points in the fused image.
In step S202, based on the recognition result, the target region of the fusion image indicated by the anchor point and the association relationship between the target region and the target content are determined.
In the embodiments of the present application, the anchor point indicates the position information of the target area; accordingly, the target area may be determined from the fused image based on the identified position information.
For the case where color blocks were divided during image encoding, the anchor point may indicate the position information of each color block, and each color block may then be determined from the fused image based on the identified position information. In this case, the anchor point also indicates the association relationship between each color block and the target content.
In step S203, the target content associated with the target area is determined based on the association relationship between the target area and the target content.
For example, the association between the target region and the target content may indicate the association between the image color characteristics of the target region and the target content.
For the case where color blocks were divided during image encoding, the target content associated with the target area may be determined based on the association relationship between each color block and the target content, with reference to the method shown in fig. 6.
Fig. 6 shows a flowchart of the steps provided by an embodiment of the present application to determine target content associated with a target region.
As shown in fig. 6, in step S41, each color block is determined from the fused image based on the position information of each color block indicated by the anchor point.
In step S42, a binary code is formed based on the image color characteristics of each color block.
For example, the binary value corresponding to the image color characteristic of each color block is determined based on a decoding rule corresponding to the encoding rule used to encode the target area, forming a binary code.
In step S43, the target content associated with the binary code is determined as the target content associated with the target area.
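Steps S41-S43 can be sketched as follows, again assuming for illustration that each color block's mean intensity maps to one bit; the lookup table and its URL are hypothetical.

```python
def decode_color_blocks(blocks, threshold=128):
    """S42: recover one bit per color block using a decoding rule that
    mirrors an assumed mean-intensity encoding rule."""
    return ''.join('1' if sum(b) / len(b) >= threshold else '0'
                   for b in blocks)

# S43: a hypothetical table associating binary codes with target
# content (or with a content address, as the description allows).
CONTENT_TABLE = {'01001010': 'https://example.com/target-content'}

def resolve_content(blocks):
    """Map the decoded binary code to its associated target content."""
    return CONTENT_TABLE.get(decode_color_blocks(blocks))
```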
Returning to fig. 5, in step S204, the target content is output.
Here, the target content may be output in various ways. For example, it may be provided to the user who made the identification request: it may be transmitted for display to that user's terminal, or to another terminal the user can view; the present application is not limited in this respect. The terminal may be the scanning device that scans the fused image, or another device.
According to the image processing method and apparatus, electronic device, and readable storage medium described above, generating the encoded image from the original image, that is, constructing from the original image a fused image carrying the encoded image, solves the problem of the single display form of two-dimensional code images in the prior art.
In addition, generating an anchor point integrated with the original image addresses the abrupt display and poor security of scanned images (such as two-dimensional code images) in the prior art, improving their concealment and security and preventing them from appearing abrupt when displayed.
Based on the same inventive concept, the embodiment of the present application further provides an image processing device corresponding to the image processing method, and since the principle of solving the problem by the device in the embodiment of the present application is similar to that of the image processing method in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 7 and 8, fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and fig. 8 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application.
As shown in fig. 7, an image processing apparatus 700 provided in an embodiment of the present application includes: an image acquisition module 701, a region determination module 702, an anchor point generation module 703, and an image fusion module 704.
Further, the image acquisition module 701 acquires an original image.
The region determination module 702 determines a target region of the original image.
The anchor point generation module 703 generates an anchor point based on attribute information of the target region and an association relationship between the target region and the target content.
The image fusion module 704 fuses the anchor points to the original image, generating a fused image.
As shown in fig. 8, another image processing apparatus 800 provided in an embodiment of the present application includes: an image recognition module 801, an information determination module 802, a content determination module 803, and a content output module 804.
Further, the image recognition module 801 recognizes anchor points in the fused image.
The information determination module 802 determines a target area of the fusion image indicated by the anchor point and an association relationship between the target area and the target content based on the identification result.
The content determination module 803 determines target content associated with the target region based on an association relationship between the target region and the target content.
The content output module 804 outputs the target content.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device 900 includes a processor 910, a memory 920, and a bus 930.
The memory 920 stores machine-readable instructions executable by the processor 910, when the electronic device 900 is running, the processor 910 communicates with the memory 920 through the bus 930, and when the machine-readable instructions are executed by the processor 910, the steps of the image processing method in the method embodiments shown in fig. 2 and fig. 5 may be executed, and detailed implementation manners may refer to the method embodiments and are not repeated herein.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the image processing methods in the method embodiments shown in fig. 2 and fig. 5 may be performed; for specific implementation details, refer to the method embodiments, which are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present application, intended to illustrate the technical solutions of the present application rather than limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features within the technical scope disclosed in the present application. Such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. An image processing method, comprising:
acquiring an original image;
determining a target area of the original image;
generating an anchor point based on attribute information of the target area and an association relation between the target area and target content, wherein the attribute information comprises an image color characteristic of the target area, the anchor point is an identifiable mark indicating a coded image, and the coded image is the image corresponding to the target area in the original image;
fusing the anchor point into the original image to generate a fused image,
wherein the association relationship between the target area and the target content is determined by:
dividing the target area to obtain a plurality of color blocks;
mapping the image color characteristic of each of the plurality of color blocks to a binary value according to a preset coding rule, so as to form a binary code; and
associating the binary code with the target content, so that the target content is represented by the target area.
2. The image processing method according to claim 1, wherein the attribute information includes position information of the target area, and the anchor point is generated based on the position information and the association relation.
3. An image processing method, comprising:
identifying an anchor point in a fused image, wherein the anchor point is an identifiable mark indicating a coded image;
determining, based on the identification result, a target area of the fused image indicated by the anchor point and an association relation between the target area and target content, wherein the coded image is the image corresponding to the target area in the original image;
determining target content associated with the target area based on the association relation;
outputting the target content,
wherein the association relationship between the target area and the target content is determined by:
dividing the target area to obtain a plurality of color blocks;
mapping the image color characteristic of each of the plurality of color blocks to a binary value according to a preset coding rule, so as to form a binary code; and
associating the binary code with the target content, so that the target content is represented by the target area.
4. The image processing method according to claim 3, wherein the anchor point indicates a plurality of color blocks in the target area of the fused image and an association relation between each color block and the target content,
wherein the step of determining, based on the association relation, the target content associated with the target area comprises:
determining, based on a decoding rule corresponding to the coding rule used to encode the target area, the binary value corresponding to the image color characteristic of each of the plurality of color blocks, so as to form a binary code; and
determining the target content associated with the binary code as the target content associated with the target area.
5. An image processing apparatus, comprising:
an image acquisition module, configured to acquire an original image;
an area determination module, configured to determine a target area of the original image;
an anchor point generation module, configured to generate an anchor point based on attribute information of the target area and an association relation between the target area and target content, wherein the attribute information comprises an image color characteristic of the target area, the anchor point is an identifiable mark indicating a coded image, and the coded image is the image corresponding to the target area in the original image; and
an image fusion module, configured to fuse the anchor point into the original image to generate a fused image,
wherein the anchor point generation module determines the association relation between the target area and the target content by:
dividing the target area to obtain a plurality of color blocks;
mapping the image color characteristic of each of the plurality of color blocks to a binary value according to a preset coding rule, so as to form a binary code; and
associating the binary code with the target content, so that the target content is represented by the target area.
6. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the method of any one of claims 1 to 2 or the steps of the method of any one of claims 3 to 4.
7. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, performs the steps of the method according to any one of claims 1 to 2 or the steps of the method according to any one of claims 3 to 4.
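The color-block coding scheme recited in claims 1 and 4 (divide the target area into color blocks, map each block's color feature to a binary value under a preset coding rule, and associate the resulting binary code with target content) can be sketched as follows. The patent does not disclose a concrete coding rule, so this is only an illustrative assumption: a mean-luminance threshold stands in for the "preset coding rule", and every name here (`encode_blocks`, `registry`, the example URL) is hypothetical rather than taken from the patent.

```python
# Hypothetical sketch of the claimed color-block coding scheme.
# Assumption: the "image color characteristic" of a block is its mean
# Rec. 601 luminance, and the preset coding rule maps bright -> '1',
# dark -> '0'. The patent leaves the actual rule unspecified.

def mean_luminance(block):
    """Average Rec. 601 luminance of a block given as a list of (R, G, B) pixels."""
    lums = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in block]
    return sum(lums) / len(lums)

def encode_blocks(blocks, threshold=128):
    """Preset coding rule (assumed): each block contributes one bit."""
    return ''.join('1' if mean_luminance(b) >= threshold else '0' for b in blocks)

def decode_blocks(blocks, registry, threshold=128):
    """Decoding rule corresponding to the coding rule (claim 4): rebuild the
    binary code from the color blocks and look up the associated target content."""
    return registry.get(encode_blocks(blocks, threshold))

# Four 2x2 color blocks: white, black, light gray, dark gray.
blocks = [
    [(255, 255, 255)] * 4,
    [(0, 0, 0)] * 4,
    [(200, 200, 200)] * 4,
    [(10, 10, 10)] * 4,
]
# Hypothetical association between a binary code and target content (claim 1).
registry = {'1010': 'https://example.com/target-content'}

code = encode_blocks(blocks)              # '1010'
content = decode_blocks(blocks, registry)  # the associated target content
```

In this sketch the association relation is a plain lookup table; the anchor point itself (the identifiable mark fused into the original image) is outside the scope of the example.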
CN202110033115.0A 2021-01-12 2021-01-12 Image processing method and device, electronic equipment and readable storage medium Active CN112686355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110033115.0A CN112686355B (en) 2021-01-12 2021-01-12 Image processing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112686355A CN112686355A (en) 2021-04-20
CN112686355B true CN112686355B (en) 2024-01-05

Family

ID=75457339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110033115.0A Active CN112686355B (en) 2021-01-12 2021-01-12 Image processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112686355B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105229673A (en) * 2013-04-03 2016-01-06 诺基亚技术有限公司 A kind of device and the method be associated
CN110413719A (en) * 2019-07-25 2019-11-05 Oppo广东移动通信有限公司 Information processing method and device, equipment, storage medium
CN111191557A (en) * 2019-12-25 2020-05-22 深圳市优必选科技股份有限公司 Mark identification positioning method, mark identification positioning device and intelligent equipment
CN111353321A (en) * 2020-02-24 2020-06-30 口碑(上海)信息技术有限公司 Data generating and analyzing method and device, computer storage medium and electronic device
CN111507446A (en) * 2019-01-31 2020-08-07 北京骑胜科技有限公司 Two-dimensional code generation and identification method and device
CN111643888A (en) * 2019-03-04 2020-09-11 仁宝电脑工业股份有限公司 Game device and method for identifying game device
CN112037160A (en) * 2020-08-31 2020-12-04 维沃移动通信有限公司 Image processing method, device and equipment

Similar Documents

Publication Publication Date Title
CN111191414B (en) Page watermark generation method, identification method, device, equipment and storage medium
CN108229596B (en) Combined two-dimensional code, electronic certificate carrier, generating and reading device and method
CN109960957B (en) Incomplete two-dimensional code and generation, repair and identification methods, devices and systems thereof
US20090232351A1 (en) Authentication method, authentication device, and recording medium
CN106384143B (en) Dynamic electronic two-dimensional code generation method and identification method
JP2005094107A (en) Printed matter processing system, watermark embedded document printer, watermark embedded document reader, printed matter processing method, information reader, and information reading method
US20190066254A1 (en) Image processing device, image processing method, and program
CN111860727B (en) Two-dimensional code generation method, two-dimensional code verification device and computer readable storage medium
CN112508145B (en) Electronic seal generation and verification method and device, electronic equipment and storage medium
KR101587501B1 (en) Method of authenticating goods using identification code image and apparatus performing the same
CN111738898A (en) Text digital watermark embedding \ extracting method and device
KR100855668B1 (en) Image processing apparatus, control method therefor, and computer-readable storage medium
CN108256608A (en) A kind of two dimensional image code and its recognition methods and equipment
CN112686355B (en) Image processing method and device, electronic equipment and readable storage medium
CN110335189A (en) Fill method, apparatus, computer equipment and the storage medium of anti-counterfeiting information
CN116910778A (en) Method and storage medium for steganographic marking based on picture pixel value
CN110955889A (en) Electronic document tracing method based on digital fingerprints
CN116127419A (en) Data processing method, data identification method, font file generation method and device
CN107247984B (en) Coding method of visual two-dimensional code
WO2022123635A1 (en) Signature generation device, authentication device, and program
CN111860726B (en) Two-dimensional code display method, verification method, device and computer readable storage medium
CN115511030A (en) Anti-counterfeiting verification method and device and electronic equipment
CN115035523A (en) Data identification method and mobile terminal
CN109840574B (en) Two-dimensional code information hiding method and device, electronic equipment and storage medium
CN114330621A (en) Two-dimensional code anti-counterfeiting method and device based on identification information and storage medium

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: Room 303-309, No.3, Pazhou Avenue East Road, Haizhu District, Guangzhou City, Guangdong Province 510000

Applicant after: Shugen Internet Co.,Ltd.

Address before: Unit 12-30, 4th floor, Xigang office building, Guangzhou international media port, 218 and 220 Yuejiang West Road, Haizhu District, Guangzhou City, Guangdong Province 510000

Applicant before: IROOTECH TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant