CN108134906B - Image processing method and system - Google Patents

Image processing method and system

Info

Publication number
CN108134906B
Authority
CN
China
Prior art keywords
image
identification information
designated image
designated
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711401209.9A
Other languages
Chinese (zh)
Other versions
CN108134906A (en)
Inventor
柯海滨
许枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201711401209.9A priority Critical patent/CN108134906B/en
Publication of CN108134906A publication Critical patent/CN108134906A/en
Application granted granted Critical
Publication of CN108134906B publication Critical patent/CN108134906B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The present disclosure provides an image processing method, including: acquiring a designated image and additional information associated with the designated image, generating identification information of the additional information based on the additional information, and fusing the identification information into the designated image to obtain a target image containing the identification information and the designated image. Because the identification information is generated from the acquired additional information associated with the designated image and is organically fused into the designated image, the method can at least partially overcome the technical problem in the related art that the additional information is easily lost when the designated image is processed. It achieves effective binding between the designated image and its associated additional information, and the information behind the designated image can still be obtained, through the additional information organically fused with it, when the designated image is processed. In addition, the present disclosure also provides an image processing system.

Description

Image processing method and system
Technical Field
The present disclosure relates to an image processing method and an image processing system.
Background
For convenience when browsing pictures, in practical applications any image is generally accompanied by additional information, in addition to the image itself, that describes, for example, the content of the image and the time at which it was shot.
However, in implementing the embodiments of the present disclosure, the inventors found at least the following problem in the related art: the additional information of an image is easily lost when the image is processed (for example, transmitted, stored, or printed).
In view of the above problems in the related art, no effective solution has been proposed at present.
Disclosure of Invention
One aspect of the present disclosure provides an image processing method, including: acquiring a designated image and additional information associated with the designated image, generating identification information of the additional information based on the additional information, and fusing the identification information into the designated image to obtain a target image comprising the identification information and the designated image.
Optionally, the additional information associated with the designated image includes at least one of the following: acquisition information of the designated image, text information for describing the designated image, and voice information output by the user when the designated image is captured.
Optionally, the method further includes: determining a display position of the identification information in the designated image, and fusing the identification information into the designated image according to the display position.
Optionally, the method further includes: determining an image layout of the designated image, determining a first display position of the identification information in the designated image according to the image layout, and fusing the identification information to the first display position in the designated image.
Optionally, the method further includes: re-fusing the identification information to a second display position in the designated image when a drag operation for instructing dragging of the identification information from the first display position to the second display position in the designated image is detected.
Optionally, the method further includes: in the case where it is detected that there is a printing operation for instructing printing of the target image, the identification information is separated from the target image, an image portion remaining after the separation of the identification information from the target image is printed on one side of a preset sheet, and the identification information is printed on the other side of the preset sheet.
Optionally, the method further includes: determining a display state of the identification information, and displaying the identification information according to the display state when the target image is displayed.
Optionally, the display state includes a transparent state with a preset transparency, and the method further includes: when the target image is displayed, displaying the identification information in the corresponding transparent state according to the preset transparency, and, when the preset transparency needs to be changed to another transparency, displaying the identification information on the target image in the transparent state corresponding to the other transparency.
Another aspect of the present disclosure provides an image processing system including: the image processing device comprises an acquisition module, a generation module and a first fusion module, wherein the acquisition module is used for acquiring a designated image and additional information related to the designated image, the generation module is used for generating identification information of the additional information based on the additional information, and the first fusion module is used for fusing the identification information into the designated image to obtain a target image containing the identification information and the designated image.
Optionally, the system further includes: a determining module for determining the display position of the identification information in the designated image, and a second fusing module for fusing the identification information into the designated image according to the display position.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1 schematically illustrates a system architecture suitable for the image processing method and system according to an embodiment of the present disclosure;
FIG. 2A schematically illustrates a flow chart of an image processing method according to an embodiment of the present disclosure;
fig. 2B schematically shows an effect diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 3A schematically illustrates a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 3B schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 3C schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 3D schematically illustrates a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 3E schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 3F schematically illustrates a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 4A schematically illustrates a block diagram of an image processing system according to an embodiment of the present disclosure;
FIG. 4B schematically shows a block diagram of an image processing system according to another embodiment of the present disclosure; and
FIG. 5 schematically illustrates a block diagram of a computer system suitable for implementing embodiments of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", "B", or "A and B".
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The present disclosure provides an image processing method, including: acquiring a designated image and additional information associated with the designated image; generating identification information of the additional information based on the additional information; and fusing the identification information into the designated image to obtain a target image containing the identification information and the designated image. By generating the identification information from the acquired additional information associated with the designated image and organically fusing it into the designated image, the image processing method provided by the present disclosure can at least partially overcome the technical problem in the related art that the additional information is easily lost when the designated image is processed (for example, transmitted, stored, or printed). It not only achieves effective binding between the designated image and its associated additional information, but also ensures that the information behind the designated image can be obtained, through the organically fused additional information, when the designated image is processed.
Fig. 1 schematically illustrates a system architecture 100 suitable for an image processing method and system thereof according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the image processing method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the image processing system provided by the embodiment of the present disclosure may be generally disposed in the server 105. The image processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the image processing system provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the image processing method provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the image processing system provided by the embodiment of the present disclosure may also be provided in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, the designated image to be processed may be originally stored in any one of the terminal apparatuses 101, 102, or 103 (for example, but not limited to the terminal apparatus 101), or stored on an external storage apparatus and may be imported into the terminal apparatus 101. Then, the terminal device 101 may locally execute the image processing method provided by the embodiment of the present disclosure, or send the designated image to be processed to another terminal device, server, or server cluster, and execute the image processing method provided by the embodiment of the present disclosure by another terminal device, server, or server cluster that receives the designated image to be processed.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2A schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2A, the image processing method may include operations S210 to S230, in which:
in operation S210, a designated image and additional information associated with the designated image are acquired.
In operation S220, identification information of the additional information is generated based on the additional information.
In operation S230, the identification information is fused into the designated image, and a target image including the identification information and the designated image is obtained.
According to the embodiments of the present disclosure, the designated image is determined by the user's needs in practical applications and is not limited herein. The designated image may include, but is not limited to, an image with any attributes or any frame of a video in any format. The specific acquisition manner of the designated image is likewise not limited: it may be acquired directly, for example by an image acquisition device of an electronic device, or indirectly, for example downloaded via the Internet or transmitted by another electronic device.
The designated image may be a picture in any format, such as JPG, PNG, GIF or TIFF; it may be in color or black-and-white, and may be transparent or non-transparent, none of which is specifically limited herein.
In the embodiments of the present disclosure, the image processing method will be described in detail using, as the designated image, an ARGB four-channel image, that is, an RGB (red, green, blue) three-channel image with an added transparency channel A, stored as a JPG picture whose pixels can be represented by a 1024 × 1024 matrix. When the A value of a pixel is 0, the pixel is completely transparent; when the A value is not 0, the pixel is opaque or translucent.
According to the embodiments of the present disclosure, the additional information associated with the designated image may be any information related to the designated image; it is determined by the user's requirements for the designated image and is not limited herein. For example, the additional information may be information related to the acquisition mode of the designated image, information related to the acquisition process of the designated image, or information describing the designated image.
According to the embodiments of the present disclosure, the identification information may be any information with an identification function that is generated based on the additional information, such as a two-dimensional code. Those skilled in the art may choose an appropriate generation method according to the actual situation, for example according to the kind of identification information used. For instance, after entering the text information describing the designated image of FIG. 2B, such as "Summer holiday, the afternoon of June 28, 2007, a walk on the country lane!", the user may right-click and select a "generate two-dimensional code" function, or use any other known method for generating a two-dimensional code, which will not be described here again.
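As a minimal illustrative sketch only (not part of the original disclosure), the two-dimensional code could be generated from the descriptive text with an off-the-shelf generator; the third-party Python package qrcode, the example text and the file name below are assumptions.

```python
# Hypothetical sketch: generate a two-dimensional code (the identification
# information) from the additional information, here descriptive text.
import qrcode

additional_info = "Summer holiday, the afternoon of June 28, 2007, a walk on the country lane!"

identification = qrcode.make(additional_info)   # returns a PIL image of the code
identification.save("identification.png")
```

Any other generator producing a pixel matrix of the code would serve equally well for the fusion steps described below.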
It should be noted that the attributes of the identification information are not limited in this disclosure; for example, it may be color or black-and-white, transparent or non-transparent, and these attributes may be combined arbitrarily, such as color translucent or black-and-white transparent. The embodiments of the present disclosure will be described below taking the widely used two-dimensional code as an example of the identification information.
According to the embodiments of the present disclosure, there are various ways to fuse the identification information into the designated image to obtain a target image containing both; for example, the identification information may be turned into a watermark of the designated image and fused with it by image processing.
According to the embodiments of the present disclosure, because the identification information is generated based on the acquired additional information associated with the designated image and organically fused into the designated image, the technical problem in the related art that the additional information is easily lost when the designated image is processed can be at least partially overcome; effective binding between the designated image and its associated additional information is achieved, and the information behind the designated image can be obtained, through the organically fused additional information, when the designated image is processed.
Fig. 2B schematically shows an effect diagram 200 of an image processing method according to an embodiment of the present disclosure.
As shown in FIG. 2B, a designated image 210 and additional information 220 associated with the designated image 210 are acquired, identification information 230 is generated based on the additional information 220, and the identification information 230 is organically fused into the designated image 210.
It should be understood that the designated image 210, the additional information 220 and the identification information 230 in FIG. 2B are only illustrative; that is, the specific position of the identification information 230 in the designated image 210, as well as its size, number and attributes, are shown only as examples and are not intended to limit the scope of the claimed embodiments of the present disclosure. According to actual needs, identification information 230 of any display size, any attributes and any number may be placed at any position in the designated image 210; the sizes and attributes of multiple pieces of identification information may be the same or different and may be set or changed according to the user's preference, which will not be described here again.
The image processing method shown in fig. 2A is further described with reference to fig. 3A to 3F in conjunction with specific embodiments.
According to an embodiment of the present disclosure, the additional information associated with the designated image may include, but is not limited to, capture information of the designated image, text information for describing the designated image, and voice information output by a user when the designated image is captured.
It should be noted that the acquisition information of the designated image may include, but is not limited to, related setting information of the electronic device, such as a camera model, an exposure mode, a shutter speed, a sensitivity, an exposure compensation, and the like, when the designated image is acquired, and may also include, but is not limited to, related information of an image attribute, such as a picture format, a picture size, a resolution, or a pixel value size, when the designated image is acquired.
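Purely for illustration, acquisition information of this kind can often be read from the image file's EXIF metadata, for example with the Pillow library; the file name and the reliance on EXIF are assumptions, not something fixed by the disclosure.

```python
# Hypothetical sketch: collect capture-related metadata of the designated image.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("designated.jpg")               # assumed file name
exif = img.getexif()

# Map numeric EXIF tag ids to readable names (camera model, exposure settings, ...).
acquisition_info = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
acquisition_info["size"] = img.size              # image attributes such as resolution
print(acquisition_info)
```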
The text information for describing the designated image may include, but is not limited to, text information for describing the designated image input by the user himself or may also include, but is not limited to, text information for describing the designated image input and stored by other users, and relevant attributes of the text information, such as word number, font, input mode, and the like, are not limited, as long as the text information describes the designated image.
The voice information output by the user when the specified image is captured may include, but is not limited to, the voice information input by the user himself for describing the specified image, and may also include, but is not limited to, the interactive voice information of the user himself and other users for the current image capturing scene, which is not limited herein.
Through the embodiments of the present disclosure, because the additional information associated with the designated image may include, but is not limited to, the acquisition information of the designated image, text information describing the designated image and voice information output by the user when the designated image is captured, various kinds of additional information can be provided, meeting different users' requirements for the additional information of the designated image and improving the user experience.
Fig. 3A schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 3A, the image processing method may include operations S210, S220, S230, S311, and S312. Operations S210, S220, and S230 are similar to the previous embodiments, and are not described herein again. Wherein:
in operation S311, a display position of the identification information in the designated image is determined.
In operation S312, the identification information is fused into the designated image according to the display position.
According to the embodiment of the present disclosure, the identification information may be displayed at a default display position, or may be set at a display position in the designated image according to actual needs, where the default display position of the identification information may be determined according to a layout of the designated image, for example, the default display position may be located at a lower right corner of the designated image, or may be located at a lower left corner of the designated image, which is not limited herein.
Specifically, the pixels of the designated image can be represented by a 1024 × 1024 matrix and the pixels of the identification information, such as the two-dimensional code, by a 50 × 50 matrix. A 50 × 50 region is cut out at any position in the 1024 × 1024 designated image, and the pixel values in the cut-out region are replaced by the pixel values of the 50 × 50 two-dimensional code matrix, thereby fusing the identification information into the designated image. Methods for fusing identification information of any size into a designated image can be developed accordingly in the spirit of the embodiments of the present disclosure and will not be described here again.
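A minimal sketch of this region-replacement fusion, assuming the designated image is held as a 1024 × 1024 ARGB pixel array and the two-dimensional code as a 50 × 50 ARGB array; the array names and the example offsets are illustrative only.

```python
import numpy as np

def fuse(designated: np.ndarray, identification: np.ndarray, top: int, left: int) -> np.ndarray:
    """Replace a region of the designated image with the identification pixels.

    designated:     (1024, 1024, 4) ARGB pixel matrix of the designated image
    identification: (50, 50, 4) ARGB pixel matrix of the two-dimensional code
    (top, left):    display position at which the region is cut out
    """
    target = designated.copy()
    h, w = identification.shape[:2]
    # Cut a 50 x 50 region at the display position and replace its pixel values
    # with the pixel values of the two-dimensional code matrix.
    target[top:top + h, left:left + w] = identification
    return target

# e.g. fuse the code near the lower-right corner of the designated image:
# target = fuse(designated, identification, top=1024 - 60, left=1024 - 60)
```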
Through the embodiments of the present disclosure, the identification information is fused to a designated display position, so that its position can be set or changed according to actual needs. This meets the aesthetic requirements of different users, enables diverse visual effects and improves the user experience.
Fig. 3B schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 3B, the image processing method may include operations S210, S220, S230, S311, S312, S321, S322, and S323. Operations S210, S220, S230, S311, and S312 are similar to those of the previous embodiments, and are not repeated here.
In operation S321, an image layout of the designated image is determined.
In operation S322, a first display position of the identification information in the designated image is determined according to the image layout.
In operation S323, the identification information is fused to the first display position in the designated image.
According to an embodiment of the present disclosure, a method of determining the display position of the identification information is provided, namely determining a first display position of the identification information in the designated image according to the image layout of the designated image.
The embodiment of the present disclosure does not limit the specific method for determining the image layout, and any known method for determining the image layout according to the designated image is within the scope of the present disclosure.
For example, for a designated image containing a person, the layout of the image, such as a symmetric, V-shaped, trisection (rule-of-thirds) or diagonal layout, may be determined according to the size and position of the person's face; this is not limited herein.
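One illustrative way to turn such a layout analysis into a first display position, assuming a face bounding box is available from some face-detection step (the detection itself and the helper name are assumptions), is to place the code in the corner farthest from the face:

```python
def first_display_position(image_w, image_h, face_box, code_size=50, margin=10):
    """Choose the corner of the designated image farthest from the detected face.

    face_box: (x, y, w, h) bounding box of the face in pixel coordinates.
    Returns (top, left) for the identification information.
    """
    face_cx = face_box[0] + face_box[2] / 2
    face_cy = face_box[1] + face_box[3] / 2
    left = margin if face_cx > image_w / 2 else image_w - code_size - margin
    top = margin if face_cy > image_h / 2 else image_h - code_size - margin
    return top, left
```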
According to the embodiment of the disclosure, the technical scheme that the first display position of the identification information in the designated image is determined based on the image layout is adopted, so that the display position of the identification information in the designated image is harmonious and beautiful, and the visual experience of a user is improved on the basis of realizing the organic integration of the designated image and the additional information.
Fig. 3C schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 3C, the method may include operations S210, S220, S230, S311, S312, S321, S322, S323, and S341. Operations S210, S220, S230, S311, S312, S321, S322, and S323 are similar to those of the previous embodiments, and are not repeated here.
In operation S341, in the case where it is detected that there is a drag operation for instructing to drag the identification information from the first display position to the second display position in the designated image, the identification information is re-fused onto the second display position in the designated image.
According to the embodiments of the present disclosure, the identification information may be displayed not only at the first position but also, as needed, at a second position different from the first position. Specifically, if an indication for dragging the identification information from the first display position to a second display position in the designated image is detected, the identification information displayed at the first display position can be dragged to the second display position and fused again into the designated image at that position; the fusion method is as described above and is not repeated here.
It can be understood that the indication may be a drag command selected from the right-click menu of the identification information, or a double-click on the identification information that switches it from its current state to a draggable state, which is not limited herein.
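Under the same assumptions as the earlier fuse() sketch, the re-fusion can be illustrated as restoring the pixels at the first position from a kept copy of the original designated image and then fusing the code at the second position; keeping that copy is an implementation assumption.

```python
def refuse(designated_original, target, identification, first_pos, second_pos):
    """Move the identification information from first_pos to second_pos.

    designated_original: the designated image before any fusion (kept as a copy)
    target:              the current target image with the code at first_pos
    """
    h, w = identification.shape[:2]
    t1, l1 = first_pos
    # Restore the original pixel values at the first display position ...
    target[t1:t1 + h, l1:l1 + w] = designated_original[t1:t1 + h, l1:l1 + w]
    # ... and fuse the identification information at the second display position.
    t2, l2 = second_pos
    target[t2:t2 + h, l2:l2 + w] = identification
    return target
```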
Through the embodiments of the present disclosure, an interactive way of changing the display position of the identification information is provided, so that the user can change the display position according to actual needs, giving a better user experience.
Fig. 3D schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 3D, the method may further include operations S210, S220, S230, S311, S312, S341, S342, and S343. Operations S210, S220, S230, S311, and S312 are similar to those of the previous embodiments, and are not repeated here.
In operation S341, in the case where it is detected that there is a printing operation for instructing printing of the target image, the identification information is separated from the target image.
In operation S342, the image portion remaining after the identification information is separated from the target image is printed on one side of preset paper.
In operation S343, identification information is printed on the other side of the preset paper.
In a designated image into which the additional information has been fused, the additional information and the designated image are effectively bound to each other; during transmission and storage they are transmitted and stored together, so the additional information is not easily lost.
Specifically, when printing, the identification information is separated from the target image. Since the pixel values at the display position were replaced by the pixel values of the identification information, the pixels occupied by the identification information need to be restored to the corresponding pixel values of the designated image; the designated image is then printed on one side of the preset paper and the identification information on the other side.
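The separation for printing can be sketched in the same spirit, again assuming the original designated image (or at least the covered region) and the display position are kept; the two returned images would then be printed on the two sides of the preset paper.

```python
def separate_for_printing(designated_original, target, identification, pos):
    """Split the target image into the restored designated image and the code."""
    h, w = identification.shape[:2]
    top, left = pos
    restored = target.copy()
    # Restore the pixels occupied by the identification information to the
    # corresponding pixel values of the designated image.
    restored[top:top + h, left:left + w] = designated_original[top:top + h, left:left + w]
    return restored, identification   # one side of the paper / the other side
```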
Through the embodiments of the present disclosure, a way of printing a designated image fused with additional information is provided, so that the designated image and the additional information can be printed on the two sides of the preset paper respectively, meeting different users' printing requirements and giving a better user experience.
Fig. 3E schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 3E, the image processing method may include operations S210, S220, S230, S351, and S352. Operations S210, S220, and S230 are similar to the previous embodiments, and are not described herein again.
In operation S351, a presentation state of the identification information is determined.
In operation S352, the identification information is displayed according to the display state when the target image is displayed.
In addition to the display position, the display state of the identification information can also be set and changed by the user. The display state may include, but is not limited to, the color mode in which the identification information is displayed, the transparency with which it is displayed, and the size at which it is displayed; the identification information can be presented to the user according to different display states.
Through the embodiments of the present disclosure, by displaying the identification information according to its display state when the target image is displayed, a variety of display effects of the identification information are offered to the user, adding to the intelligent experience and improving the user experience.
Fig. 3F schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 3F, the image processing method may include operations S210, S220, S230, S351, S352, S361, and S362. Operations S210, S220, S230, S351, and S352 are similar to those of the previous embodiments, and are not repeated here.
In operation S361, when the target image is displayed, the identification information is displayed in a corresponding transparent state with a preset transparency.
In operation S362, in case that the preset transparency needs to be changed to another transparency, the identification information is displayed in another corresponding transparent state on the target image with the other transparency.
According to the embodiments of the present disclosure, for the ARGB four-channel designated image stored as a JPG whose pixels can be represented by a 1024 × 1024 matrix, if the A value of a pixel is 0 the pixel is completely transparent, and if the A value is not 0 the pixel is opaque or translucent. The identification information can therefore be displayed with a preset transparency, and the display can be changed to another transparency.
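A sketch of displaying the identification information with a preset transparency over the region it covers, using ordinary alpha blending; the value range [0, 1] for the transparency parameter and the function name are assumptions consistent with the ARGB description above.

```python
import numpy as np

def display_with_transparency(designated_region, identification_rgb, preset_alpha):
    """Blend the identification information over the region it covers.

    designated_region:  (50, 50, 3) RGB pixels of the designated image under the code
    identification_rgb: (50, 50, 3) RGB pixels of the two-dimensional code
    preset_alpha:       transparency in [0, 1]; 0 = fully transparent, 1 = opaque
    """
    blended = (preset_alpha * identification_rgb.astype(float)
               + (1.0 - preset_alpha) * designated_region.astype(float))
    return blended.astype(np.uint8)
```

Changing the preset transparency to another transparency then simply means blending again with the new value.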
Through the embodiments of the present disclosure, because the identification information is displayed with a preset transparency that can be changed, users can adjust the display state of the identification information according to personal preference, which increases their sense of participation and gives a good user experience.
Fig. 4A schematically illustrates a block diagram of an image processing system according to an embodiment of the present disclosure.
As shown in fig. 4A, the image processing system 400 may include an acquisition module 410, a generation module 420, and a first fusion module 430. Wherein: the obtaining module 410 is used for obtaining the specified image and the additional information associated with the specified image. The generating module 420 is configured to generate identification information of the additional information based on the additional information. The first fusion module 430 is configured to fuse the identification information into the designated image, so as to obtain a target image including the identification information and the designated image.
It is understood that the obtaining module 410, the generating module 420 and the first fusing module 430 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present invention, at least one of the obtaining module 410, the generating module 420, and the first fusing module 430 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in a suitable combination of three implementations of software, hardware, and firmware. Alternatively, at least one of the obtaining module 410, the generating module 420 and the first fusing module 430 may be at least partially implemented as a computer program module, which, when executed by a computer, may perform the functions of the respective module.
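As a software-only illustration of how the three modules of FIG. 4A might be composed (a sketch under assumed module interfaces, not the hardware or firmware variants mentioned above):

```python
class ImageProcessingSystem:
    """Minimal composition of the acquisition, generation and fusion modules."""

    def __init__(self, acquisition_module, generation_module, fusion_module):
        self.acquisition_module = acquisition_module   # obtains image + additional info
        self.generation_module = generation_module     # builds identification info, e.g. a QR code
        self.fusion_module = fusion_module              # fuses the code into the image

    def process(self, source):
        designated, additional = self.acquisition_module.acquire(source)
        identification = self.generation_module.generate(additional)
        return self.fusion_module.fuse(designated, identification)
```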
According to the embodiments of the present disclosure, because the identification information is generated based on the acquired additional information associated with the designated image and organically fused into the designated image, the technical problem in the related art that the additional information is easily lost when the designated image is processed can be at least partially overcome; effective binding between the designated image and its associated additional information is achieved, and the information behind the designated image can be obtained, through the organically fused additional information, when the designated image is processed.
Fig. 4B schematically shows a block diagram of an image processing system according to another embodiment of the present disclosure.
As shown in fig. 4B, the image processing system 400 may include a determination module 440 and a second fusion module 450 in addition to the acquisition module 410, the generation module 420, and the first fusion module 430. Wherein: the determining module 440 is configured to determine a display position of the identification information in the designated image. The second fusing module 450 is configured to fuse the identification information into the designated image according to the display position.
Through the embodiments of the present disclosure, the identification information is fused to a designated display position, so that its position can be set or changed according to actual needs. This meets the aesthetic requirements of different users, enables diverse visual effects and improves the user experience.
FIG. 5 schematically illustrates a block diagram of a computer system suitable for implementing embodiments of the present disclosure. The computer system illustrated in FIG. 5 is only an example and should not impose any limitation on the scope of use or functionality of the embodiments of the present disclosure.
As shown in FIG. 5, a computer system 500 according to an embodiment of the present disclosure includes a processor 510 and a computer-readable storage medium 520. The computer system 500 may perform the image processing method described above with reference to FIGS. 2A and 3A to 3F, including: acquiring a designated image and additional information associated with the designated image; generating identification information of the additional information based on the additional information; and fusing the identification information into the designated image to obtain a target image containing the identification information and the designated image. By generating the identification information based on the acquired additional information associated with the designated image and organically fusing it into the designated image, the image processing method provided by the present disclosure can at least partially overcome the technical problem in the related art that the additional information is easily lost when the designated image is processed (for example, transmitted, stored or printed), and achieves the technical effect that the information behind the designated image can be obtained, through the organically fused additional information, when the designated image is processed.
In particular, processor 510 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 510 may also include on-board memory for caching purposes. Processor 510 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows described with reference to fig. 2A and 3A-3F in accordance with embodiments of the present disclosure.
Computer-readable storage medium 520 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 520 may include a computer program 521, which computer program 521 may include code/computer-executable instructions that, when executed by the processor 510, cause the processor 510 to perform a method flow, such as described above in connection with fig. 2A and 3A-3F, and any variations thereof.
The computer program 521 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, the code in the computer program 521 may include one or more program modules, for example a module 521A, a module 521B, and so on. It should be noted that the division and number of modules are not fixed, and those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation; when these program modules are executed by the processor 510, they enable the processor 510 to perform, for example, the method flows described above in connection with FIGS. 2A and 3A to 3F and any variations thereof.
According to an embodiment of the disclosure, processor 510 may perform the method flows described above in conjunction with fig. 2A and 3A-3F, and any variations thereof.
According to an embodiment of the present invention, at least one of the obtaining module 410, the generating module 420 and the first fusing module 430 may be implemented as a computer program module described with reference to fig. 5, which, when executed by the processor 510, may implement the respective operations described above.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure can be combined in various ways, even if such combinations are not expressly recited in the present disclosure. In particular, such combinations may be made without departing from the spirit or teaching of the present disclosure, and all of them fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (8)

1. An image processing method comprising:
acquiring a designated image and additional information associated with the designated image;
generating identification information of the additional information and determining a display state of the identification information based on the additional information, wherein the display state comprises a display color mode and a display size;
fusing the identification information into the designated image to obtain a target image containing the identification information and the designated image, and displaying the identification information according to the display state when the target image is displayed;
separating the identification information from the target image in a case where it is detected that there is a printing operation for instructing printing of the target image;
printing, on one side of preset paper, the image portion remaining after the identification information is separated from the target image; and
printing the identification information on the other side of the preset paper.
2. The method of claim 1, wherein the additional information associated with the designated image includes information of at least one of:
acquisition information of the designated image;
text information for describing the designated image; and
voice information output by a user when the designated image is acquired.
3. The method of claim 1, wherein the method further comprises:
determining a display position of the identification information in the designated image; and
fusing the identification information into the designated image according to the display position.
4. The method of claim 3, wherein the method further comprises:
determining an image layout of the designated image;
determining a first display position of the identification information in the designated image according to the image layout; and
fusing the identification information to the first display position in the designated image.
5. The method of claim 4, wherein the method further comprises:
in a case where it is detected that there is a drag operation for instructing dragging of the identification information from the first display position to a second display position in the designated image, the identification information is re-fused onto the second display position in the designated image.
6. The method of claim 1, wherein the presentation state comprises a transparent state having a preset transparency, the method further comprising:
when the target image is displayed, displaying the identification information in a corresponding transparent state according to the preset transparency; and
in a case where the preset transparency needs to be changed to another transparency, displaying the identification information on the target image in a corresponding transparent state with the other transparency.
7. An image processing system comprising:
an acquisition module for acquiring a designated image and additional information associated with the designated image;
a generating module for generating identification information of the additional information and determining a display state of the identification information based on the additional information, wherein the display state comprises a display color mode and a display size; and
a first fusion module for fusing the identification information into the designated image to obtain a target image containing the identification information and the designated image, and displaying the identification information according to the display state when the target image is displayed; separating the identification information from the target image in a case where it is detected that there is a printing operation for instructing printing of the target image; printing, on one side of preset paper, the image portion remaining after the identification information is separated from the target image; and printing the identification information on the other side of the preset paper.
8. The system of claim 7, wherein the system further comprises:
a determination module, configured to determine a display position of the identification information in the designated image; and
a second fusion module, configured to fuse the identification information into the designated image according to the display position.
CN201711401209.9A 2017-12-21 2017-12-21 Image processing method and system Active CN108134906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711401209.9A CN108134906B (en) 2017-12-21 2017-12-21 Image processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711401209.9A CN108134906B (en) 2017-12-21 2017-12-21 Image processing method and system

Publications (2)

Publication Number Publication Date
CN108134906A CN108134906A (en) 2018-06-08
CN108134906B true CN108134906B (en) 2020-12-18

Family

ID=62392159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711401209.9A Active CN108134906B (en) 2017-12-21 2017-12-21 Image processing method and system

Country Status (1)

Country Link
CN (1) CN108134906B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389550B (en) * 2018-09-17 2023-12-26 联想(北京)有限公司 Data processing method, device and computing equipment
CN110008364B (en) * 2019-03-25 2023-05-02 联想(北京)有限公司 Image processing method, device and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4447799B2 (en) * 2000-04-26 2010-04-07 キヤノン株式会社 Imaging apparatus and control method thereof
CN101950410A (en) * 2010-09-08 2011-01-19 优视科技有限公司 Digital image processing method and device based on mobile terminal and mobile terminal
CN102364521A (en) * 2011-10-21 2012-02-29 吴思 Distributed image information management method based on secondary information fusion
CN103985082B (en) * 2014-05-29 2017-02-15 中国工商银行股份有限公司 Verification method and device for electronic certificate information

Also Published As

Publication number Publication date
CN108134906A (en) 2018-06-08

Similar Documents

Publication Publication Date Title
JP6479142B2 (en) Image identification and organization according to layout without user intervention
US10621954B2 (en) Computerized system and method for automatically creating and applying a filter to alter the display of rendered media
CN110458918B (en) Method and device for outputting information
US9619713B2 (en) Techniques for grouping images
US20190235740A1 (en) Rotatable Object System For Visual Communication And Analysis
US9729792B2 (en) Dynamic image selection
CN105573694B (en) Multiple display rendering of digital content
US20150248722A1 (en) Web based interactive multimedia system
US20220215192A1 (en) Two-dimensional code display method, apparatus, device, and medium
CN109389550B (en) Data processing method, device and computing equipment
US20180034979A1 (en) Techniques for capturing an image within the context of a document
CN108134906B (en) Image processing method and system
JP7471510B2 (en) Method, device, equipment and storage medium for picture to video conversion - Patents.com
US20150177944A1 (en) Capturing objects in editable format using gestures
KR101434234B1 (en) Method and apparatus for digital photo frame service providing feedback information
US8964063B2 (en) Camera resolution modification based on intended printing location
CN111212269A (en) Unmanned aerial vehicle image display method and device, electronic equipment and storage medium
US20140063240A1 (en) Systems, apparatuses, and methods for branding and/or advertising through immediate user interaction, social networking, and image sharing
US20230288786A1 (en) Graphic user interface system for improving throughput and privacy in photo booth applications
CN113094339B (en) File processing method, computer and readable storage medium
US10733637B1 (en) Dynamic placement of advertisements for presentation in an electronic device
US20190114814A1 (en) Method and system for customization of pictures on real time dynamic basis
CN117255219A (en) Comment processing method and device, electronic equipment and storage medium
CN115359139A (en) Method and device for drawing pixel picture
CN117909003A (en) System and method for reconstructing a physical instant analog print experience of a digital photograph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant