CN112764700A - Image display processing method, device, electronic equipment and storage medium - Google Patents

Image display processing method, device, electronic equipment and storage medium

Info

Publication number
CN112764700A
Authority
CN
China
Prior art keywords
image
contact
input
tag
setting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011622738.3A
Other languages
Chinese (zh)
Inventor
李明政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011622738.3A priority Critical patent/CN112764700A/en
Publication of CN112764700A publication Critical patent/CN112764700A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1415 Digital output to display device; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image display processing method and device, an electronic device, and a storage medium, and belongs to the technical field of communication. The method includes: receiving a first input; performing image recognition on a first image in response to the first input to determine a first image tag corresponding to the first image; and acquiring a first contact matching the first image tag, and, in a case where the first image is published through a target application, setting the display permission of the first image for the first contact in the target application to not-displayed. The method and device can simplify the operation process of setting viewing permissions for image content.

Description

Image display processing method, device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image display processing method and device, an electronic device and a storage medium.
Background
Images are the basis of human vision, and in practical applications a great deal of information can be read from an image, for example the people and text in a photo, which may even include private information such as ID or certificate numbers. To protect image privacy, other users' permission to view private photos needs to be restricted.
In the related art, other users' viewing permission for encrypted images is restricted by setting an encryption password for an album, or social friends' viewing permission for image content published on a social network is restricted by setting viewing permissions for those friends in the social network. In either approach, the user typically has to manually select the photos to be encrypted and set an encryption password, or, before publishing a social post, manually select social friends from a contact list to restrict their viewing permission for the image content in the post, so the operation process is cumbersome.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image display processing method, an image display processing apparatus, an electronic device, and a storage medium, which can solve the problem in the related art that the process of setting viewing permissions for image content is cumbersome.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image display processing method, including:
receiving a first input;
performing image recognition on a first image in response to the first input to determine a first image tag corresponding to the first image;
and acquiring a first contact matching the first image tag, and, in a case where the first image is published through a target application, setting the display permission of the first image for the first contact in the target application to not-displayed.
In a second aspect, an embodiment of the present application provides an image display processing apparatus, including:
the first receiving module is used for receiving a first input;
the determining module is used for responding to the first input, carrying out image recognition on a first image so as to determine a first image label corresponding to the first image;
and the first setting module is used for acquiring a first contact matching the first image tag, and, in a case where the first image is published through a target application, setting the display permission of the first image for the first contact in the target application to not-displayed.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In an embodiment of the present application, a first input is received; image recognition is performed on a first image in response to the first input to determine a first image tag corresponding to the first image; and a first contact matching the first image tag is acquired, and, in a case where the first image is published through a target application, the display permission of the first image for the first contact in the target application is set to not-displayed. In the prior art, for an image published in an application, the user has to manually select the image and the contact and set the first image to be invisible to the selected contact. In the embodiment of the application, by contrast, the user only needs to perform the first input to trigger recognition of the image content of the first image, so that the image tag pre-associated with that content is determined and the display permission of the matching contact for the first image is closed in the application, which makes the operation process simpler and more convenient.
Drawings
Fig. 1 is a flowchart of an image display processing method according to an embodiment of the present application;
fig. 2a is the first application scenario diagram of an image display processing method according to an embodiment of the present application;
fig. 2b is a second application scenario diagram of an image display processing method according to an embodiment of the present application;
fig. 2c is a third application scenario diagram of an image display processing method according to an embodiment of the present application;
FIG. 3 is a flow chart of another image display processing method provided in the embodiments of the present application;
fig. 4 is an application scene diagram of another image display processing method provided in the embodiment of the present application;
fig. 5 is the first structural diagram of an image display processing apparatus according to an embodiment of the present application;
fig. 6 is a second structural diagram of an image display processing apparatus according to an embodiment of the present application;
fig. 7 is a third structural diagram of an image display processing apparatus according to an embodiment of the present application;
fig. 8 is a fourth structural diagram of an image display processing apparatus according to an embodiment of the present application;
fig. 9 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 10 is a block diagram of another electronic device provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, "first", "second", and the like are generally used to distinguish objects of one class and do not limit their number; for example, a first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The image display processing method, the image display processing apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, which is a flowchart of an image display processing method according to an embodiment of the present disclosure, as shown in fig. 1, the image display processing method may include the following steps:
step 101, receiving a first input.
In a specific implementation, the first input may include one or more touch inputs on the first image, such as a click or a long press, for example an operation performed by the user to select the first image. For instance, when a social post is being published, image recognition may be triggered on the images to be published after they are selected, or, after the images to be published are selected, the user may trigger image understanding by long-pressing one or more of them. Of course, the first input may also include a pressing operation on a hardware button, which is not specifically limited herein.
And 102, responding to the first input, and performing image recognition on the first image to determine a first image label corresponding to the first image.
In a specific implementation, performing image recognition on the first image may also be referred to as performing image content understanding on the first image. Specifically, the content of the first image may be recognized in multiple dimensions based on a deep learning algorithm (e.g., a deep neural network), for example the actions and attributes of people, the scene, and the objects in the image, and the resulting image content information may include at least one of an image tag, person information, text, and the like. For example, in a case where a Nezha figurine is recognized in the picture, a "figurine" tag and a "Nezha" tag may be output.
For convenience of description, the following embodiments take an image tag as an example of the image identification information. In addition, when the image content of one image is recognized by the deep learning algorithm, multiple image tags may be output, and this step may further include selecting one or some of the multiple output image tags as the image tag(s) of the first image.
As an optional implementation, the performing image recognition on the first image to determine a first image tag corresponding to the first image includes:
carrying out image recognition on the first image to obtain image content information of the first image;
determining the first image tag based on the image content information.
In an implementation, the first image may be subjected to image recognition using a neural network model, so as to determine an image tag of the first image based on image information recognized from the first image, and use the image tag of the first image as the first image tag.
In practice, one image often contains multiple kinds of content; for example, a landscape image may include buildings, mountains, trees, people, and the like. In this case, the image tags obtained by performing image recognition on the image may include multiple tags. In implementation, all or some of the image tags of the first image may be used as the first image tag, or the image tags of the first image may be displayed separately so as to receive the user's selection of at least one of them, and the image tag selected by the user is used as the first image tag.
In this embodiment, the number of the image tags of the first image may be multiple, and the user may select one or a part of the image tags from the first image tag set as the first image tags through a selection operation.
For example, as shown in fig. 2a, after 3 images 21 to be published are selected in the social application, the user may press the images 21 to trigger image recognition on the pressed images, so that their corresponding image tags 22 are displayed below the images. Multiple image tags 22 may be recognized for each image, and the user may click one or more of them to select them; a selected image tag 22 is displayed differently from the other image tags and is determined as a final image tag (i.e., a first image tag). In this way, after the user posts the 3 images to the social network, each posted image is not visible to the contacts associated with its final image tag.
In addition, the above neural network model may be a VGG16 network. The VGG16 network includes an encoding (Encode) part and a decoding (Decode) part: the encoding part may scan the image with multiple convolutional neural network (CNN) layers and perform max pooling to obtain a high-dimensional feature vector of the image, and the decoding part may use fully connected layers and a softmax function to classify the image content according to that feature vector. The top n categories with the largest values in the softmax output are then mapped to existing labels, where the existing labels are all labels that the neural network model can recognize. Here n may be a preset value, for example 3 image labels are always output; alternatively, the n categories may be those whose scores exceed a preset threshold.
In application, the neural network model may be obtained through pre-training, where the training samples in the training process include labeled images, that is, each training sample includes an image and an image label associated with the image.
In practical applications, the first image may be recognized by using other image recognition models besides the neural network model, and the image recognition model is not particularly limited herein.
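As an illustrative, non-authoritative sketch only (the application provides no code), the tag-recognition step described above might look roughly as follows, assuming PyTorch/torchvision; the LABELS list, the recognize_tags function, and the re-headed classifier are hypothetical stand-ins for the model and label set the description refers to, and the new classification head would have to be fine-tuned on tag-annotated images, as described above, before its output is meaningful.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Hypothetical "existing labels" the model can recognize (illustrative only).
LABELS = ["figurine", "Nezha", "landscape", "building", "person", "computer", "fitness"]

# Standard VGG-style preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def recognize_tags(image_path: str, top_n: int = 3) -> list[str]:
    """Return the top-n image tags predicted for one image (sketch)."""
    model = models.vgg16(weights="IMAGENET1K_V1")             # CNN encoder + FC decoder
    model.classifier[6] = torch.nn.Linear(4096, len(LABELS))  # re-headed for the tag set;
                                                               # must be fine-tuned in practice
    model.eval()
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]              # softmax over tag categories
    top = torch.topk(probs, k=min(top_n, len(LABELS)))         # top-n categories -> tags
    return [LABELS[i] for i in top.indices.tolist()]
```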
Step 103, acquiring a first contact matching the first image tag, and, in a case where the first image is published through the target application, setting the display permission of the first image for the first contact in the target application to not-displayed.
In implementation, the first contact matches the first image tag. Specifically, contacts in a contact list may be associated with image tags in a manner preset by the user, or multiple contacts in the same chat group may be associated with the same image tag, so that an associated contact and image tag match each other.
In addition, in a case where the first image is published through the target application, setting the display permission of the first image for the first contact in the target application to not-displayed may be understood as follows: the first contact's display permission for the first image in the application is set to not-displayed, so that the first contact cannot view the first image in the application. Alternatively, in a case where first information including the first image is published through the target application, the display permission of the first image for the first contact is set to not-displayed in the target application while the rest of the first information remains visible to the first contact. Alternatively, the display permission of the whole first information for the first contact is set to not-displayed in the target application.
In implementation, an image tag set which can be identified by the deep learning algorithm may be acquired in advance, and in the process of setting an image tag associated with a contact, the image tags in the image tag set may be displayed, so that a user selects one or more image tags from the image tag set to be associated with the contact.
For example, as shown in fig. 2b, after a contact is selected, an interface as shown in fig. 2b is displayed, which contains all the image tags 23 in the image tag set. When the user clicks the image tags "figurine", "game" and "fitness", these 3 tags 23 are displayed differently from the other, unselected tags, for example highlighted, and the selected contact is then associated with the 3 tags by touching the confirm button 24.
It should be noted that the above-mentioned contact may be a contact in a social application. For example, user A and user B are social friends, and user A associates user B's contact information in advance with the tags "beauty", "computer" and "figurine"; then, when a social post (social dynamic) published by user A includes images about beauty, computers or figurines, user B cannot view those images in user A's posts on the social platform.
It should also be noted that, when the same published content includes other images in addition to the first image, the first contact's permission to view those other images need not be restricted in the application.
As an optional implementation, the first image is an image published to a social network in a social dynamic state, the social dynamic state further includes a second image, and the setting, in the target application, the display permission of the first image to the first contact to be not displayed includes:
displaying the second image and not displaying the first image for the first contact in the social network.
In this embodiment, after the first contact logs in to their social account, they cannot view the first image on the social network and can only view the second image, whereas other friend contacts that do not match the first image tag can view both the first image and the second image on the social network after logging in to their social accounts.
Further, in implementation, a user may publish a plurality of images in the same social dynamic state, and perform image recognition on each image to obtain an image tag of each image, so that different viewing permissions may be set for different images.
For example: in a case that the first image includes a first sub-image and a second sub-image, the performing image recognition on the first image to determine a first image tag corresponding to the first image includes:
respectively carrying out image recognition on the first sub-image and the second sub-image, determining a first sub-image label corresponding to the first sub-image, and determining a second sub-image label corresponding to the second sub-image, wherein the first image label comprises the first sub-image label and the second sub-image label;
the acquiring of a first contact matching the first image tag and the setting, in a case where the first image is published through a target application, of the display permission of the first image for the first contact in the target application to not-displayed include:
when the first sub-image and the second sub-image are published through an application, closing, in the application, the first contact's viewing permission for the first sub-image, and closing, in the application, the second contact's viewing permission for the second sub-image, the second contact being a contact that matches the second sub-image tag.
In this embodiment, multiple images in the same social dynamic state may be invisible (i.e., masked) to different social contacts at the same time.
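A minimal sketch of the matching and permission logic described above, assuming an in-memory CONTACT_TAGS store of the user-preset contact-to-tag associations; Post, apply_display_permissions and visible_images are hypothetical names and only illustrate the idea of hiding each image from the contacts whose associated tags match that image's tags.

```python
from dataclasses import dataclass, field

# Hypothetical store of user-preset associations: contact -> image tags.
CONTACT_TAGS: dict[str, set[str]] = {
    "user_b": {"beauty", "computer", "figurine"},
    "user_c": {"fitness"},
}

@dataclass
class Post:
    """One social post ("social dynamic") containing one or more images."""
    images: list[dict]  # each item: {"id": str, "tags": set[str]}
    hidden_from: dict[str, set[str]] = field(default_factory=dict)  # image id -> blocked contacts

def apply_display_permissions(post: Post) -> None:
    """When the post is published, block each image for contacts whose tags match it."""
    for image in post.images:
        blocked = {c for c, tags in CONTACT_TAGS.items() if tags & image["tags"]}
        if blocked:
            post.hidden_from[image["id"]] = blocked

def visible_images(post: Post, contact: str) -> list[str]:
    """Image ids of the post that a given contact is still allowed to view."""
    return [img["id"] for img in post.images
            if contact not in post.hidden_from.get(img["id"], set())]

# Usage: user_b sees only the second image of a two-image post.
post = Post(images=[{"id": "img1", "tags": {"computer"}}, {"id": "img2", "tags": {"tree"}}])
apply_display_permissions(post)
print(visible_images(post, "user_b"))   # ['img2']
print(visible_images(post, "user_d"))   # ['img1', 'img2']
```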
As an optional implementation, after the image recognition is performed on the first image in response to the first input to determine a first image tag corresponding to the first image, the method further includes:
receiving a second input of the user to the second contact;
in response to the second input, setting the first image to be invisible to the second contact and associating the second contact with the first image tag.
In implementation, there may be cases where the second contact has not previously been associated with the first image tag but the first image needs to be made invisible to the second contact. In that case, the second contact's viewing permission for the first image can be set in the permission setting interface of the first image, and when the first image is set to be invisible to the second contact, the second contact is automatically associated with the first image tag. In this way, other images corresponding to the first image tag can subsequently be set to be invisible to the second contact automatically.
For example, as shown in fig. 2c, after the first image tag corresponding to the first image is determined, the contact list 25 may be opened and a contact 26 selected from it, so that while the first image is set to be invisible to the selected contact 26, the contact 26 is also associated with the first image tag 27. After the contact 26 is associated with the first image tag 27, the first image tag 27 may be displayed in the display area of the contact 26.
In this embodiment, the second contact is automatically associated with the first image tag when the first image is set to be invisible to the second contact. This avoids having to quit the permission setting interface of the first image, open the contact's associated-tag setting interface, associate the second contact with the first image tag there, and then re-enter the permission setting interface of the first image to set the first image to be invisible to the second contact, thereby simplifying the operation of associating the second contact with the first image tag.
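Continuing the earlier sketch (same hypothetical Post and CONTACT_TAGS), the second-input behaviour described above could look like the following: hide one image from a newly selected contact and, at the same time, record the tag association so that later images carrying the same tags are hidden from that contact automatically.

```python
def hide_and_associate(post: Post, image_id: str, image_tags: set[str], contact: str) -> None:
    """Second-input handler (sketch): make one image invisible to a contact and
    associate that contact with the image's tags for future publications."""
    post.hidden_from.setdefault(image_id, set()).add(contact)   # per-image permission
    CONTACT_TAGS.setdefault(contact, set()).update(image_tags)  # persistent tag association
```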
In an embodiment of the present application, a first input is received; image recognition is performed on a first image in response to the first input to determine a first image tag corresponding to the first image; and a first contact associated with the first image tag is acquired, and the first image is set to be invisible to the first contact. In the prior art, the user has to manually select an image, manually select a contact, and set the first image to be invisible to the selected contact. In the embodiment of the application, by contrast, the user only needs to perform the first input to trigger recognition of the image content of the first image, so that the image tag matching that content is determined and the viewing permission of the contact pre-associated with the image tag for the first image is set to invisible, which makes the operation process simpler and more convenient.
Referring to fig. 3, a flowchart of another image display processing method according to an embodiment of the present application is shown in fig. 3, where the image display processing method includes the following steps:
step 301, receiving a first input.
Step 302, responding to the first input, performing image recognition on a first image to determine a first image tag corresponding to the first image, wherein the first image is an image in an image library.
The step 301 is the same as the step 101 in the method embodiment shown in fig. 1, and the step 302 is the same as the step 102 in the method embodiment shown in fig. 1, except that the embodiment of the present application is applied to setting viewing permissions of images in an image library by different users.
In application, the image display processing method in the embodiment of the present application can hide different images in the image library from different authenticated contacts. For example, the device owner stores private images such as certificate photos in the image library and associates the image tags of those private images in advance with the face recognition information of friends and relatives; when a friend or relative accesses the image library through face verification, the private images are hidden, protecting the owner's privacy.
Further, in view of different application scenarios, the first input in the embodiment of the present application may be an operation that triggers image recognition on all or part of images in the image library.
In an optional embodiment, before the performing, in response to the first input, image recognition on the first image to determine a first image tag corresponding to the first image, the method further includes:
and displaying an image recognition touch area under the condition that the first image is selected from the image library, wherein the first input is a preset touch operation on the image recognition touch area.
In a specific implementation, the image recognition touch area may be a touch button displayed in the image library, or a floating ball control displayed in a floating manner. In addition, long-pressing the floating ball control may display an image recognition setting window, in which image recognition can be configured to run on images subsequently added to the image library so as to obtain the image tags corresponding to the newly added images.
For example, as shown in fig. 4, after at least one image 41 is selected from the album, a hover ball control 42 may be displayed. If the user clicks the hover ball control 42, image recognition is performed on the selected image 41 to obtain its corresponding image tag. If the user long-presses the hover ball control 42, a pull-down option window is displayed, which may include a first option to recognize only the currently selected pictures and a second option to also recognize pictures added later. If the user clicks the first option, image recognition is performed on the selected image 41 to obtain its corresponding image tag. If the user clicks the second option, image recognition is performed on the selected image 41 to obtain its corresponding image tag, and image recognition is also performed on images subsequently added to the image library to obtain the image tags corresponding to the newly added images.
Of course, in a specific implementation, the image recognition function may be turned on in advance, so as to automatically trigger the image recognition process for the image newly added to the image library after turning on the image recognition function, which is not limited in this respect.
Step 303, receiving a third input of the first contact.
In a specific implementation, the third input may be an unlocking operation of the electronic device or a decryption input of the encrypted image library.
Step 304, responding to the third input, performing face recognition on the first contact person to obtain first face recognition information, and performing unlocking verification based on the first face recognition information, wherein an association relationship between the first face recognition information and the first image tag is stored in advance.
In a specific implementation, the first face recognition information and the first image tag may be pre-associated in a manner preset by the user (that is, the first contact corresponding to the first face recognition information is pre-associated with the first image tag); alternatively, after owner verification is performed and the owner's face recognition information is stored, newly added face recognition information may be associated with the first image tag by default, thereby pre-associating the first face recognition information with the first image tag.
The above-mentioned process of face recognition is the same as the process of acquiring a face image by a camera in the prior art and learning the face image to obtain face recognition information, and is not specifically described here.
In addition, the unlocking verification may be verification of the screen-lock state of the device or verification of the encryption state of the image library, which is not specifically limited here.
Step 305, hiding the first image in the image library interface under the condition that the unlocking verification is passed.
Wherein a user viewing the image library through face verification will not be able to view the first image after the first image is hidden within the image library interface.
In actual use, when another user performs face recognition again in unlocking verification next time, if the face recognition information of the other user is not associated with the first image tag, the first image is displayed in the image library interface.
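The gallery-side flow of steps 303 to 305 might be sketched as follows, assuming a face-recognition model that already produces an embedding for the captured face; FACE_DB, HIDDEN_TAGS, identify_viewer and gallery_view are hypothetical names, and the cosine-similarity match merely stands in for whatever unlocking verification the device actually uses.

```python
import numpy as np

# Hypothetical stores: a face embedding per known contact, and the image tags
# whose images should be hidden from that contact (the pre-stored association).
FACE_DB: dict[str, np.ndarray] = {}
HIDDEN_TAGS: dict[str, set[str]] = {}

def identify_viewer(embedding: np.ndarray, threshold: float = 0.6) -> str | None:
    """Match the captured face embedding against known contacts (cosine similarity)."""
    best, best_score = None, threshold
    for contact, ref in FACE_DB.items():
        score = float(np.dot(embedding, ref) /
                      (np.linalg.norm(embedding) * np.linalg.norm(ref) + 1e-9))
        if score > best_score:
            best, best_score = contact, score
    return best

def gallery_view(images: list[dict], embedding: np.ndarray) -> list[str]:
    """Image ids shown to a viewer who unlocked the gallery by face verification:
    images whose tags are associated with this viewer are hidden."""
    viewer = identify_viewer(embedding)
    hidden = HIDDEN_TAGS.get(viewer, set()) if viewer else set()
    return [img["id"] for img in images if not (img["tags"] & hidden)]
```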
In the embodiment of the application, a user's verification information can be associated with image tags in advance, so that the same image library hides the images corresponding to the associated image tags for different viewing users; the same image library thus displays different images to different viewers, and the operation of setting viewing permissions for the image library is simplified.
It should be noted that, in the image display processing method provided in the embodiments of the present application, the execution subject may be an image display processing apparatus, or a control module in the image display processing apparatus for executing the image display processing method. In the embodiments of the present application, the image display processing apparatus executing the image display processing method is taken as an example to describe the image display processing apparatus provided in the embodiments of the present application.
Referring to fig. 5, which is a structural diagram of an image display processing apparatus according to an embodiment of the present disclosure, as shown in fig. 5, the image display processing apparatus 500 includes:
a first receiving module 501, configured to receive a first input;
a determining module 502, configured to perform image recognition on a first image in response to the first input to determine a first image tag corresponding to the first image;
a first setting module 503, configured to acquire a first contact matching the first image tag, and, in a case where the first image is published through a target application, set the display permission of the first image for the first contact in the target application to not-displayed.
Optionally, as shown in fig. 6, the determining module 502 includes:
an image understanding unit 5021, configured to perform image recognition on the first image to obtain image content information of the first image;
a determining unit 5022, configured to determine the first image tag based on the image content information.
Optionally, as shown in fig. 7, the image display processing apparatus 500 further includes:
a second receiving module 504, configured to receive a second input from the user to the second contact;
a second setting module 505, configured to set the first image to be invisible to the second contact and associate the second contact with the first image tag in response to the second input.
Optionally, the first image is an image published to a social dynamic state of a social network, and the first setting module 503 is specifically configured to:
displaying the second image and not displaying the first image for the first contact in the social network.
Optionally, the first image is an image in an image library, as shown in fig. 8, the image display processing apparatus 500 further includes:
a third receiving module 506, configured to receive a third input of the first contact;
a verification module 507, configured to perform face recognition on the first contact in response to the third input to obtain first face recognition information, and perform unlocking verification based on the first face recognition information, where an association relationship between the first face recognition information and the first image tag is stored in advance;
the first setting module 503 is specifically configured to:
and hiding the first image in the image library interface under the condition that the unlocking verification is passed.
The image display processing apparatus 500 provided in the embodiment of the present application can perform each process in the method embodiments shown in fig. 1 or fig. 3, and can also simplify the operation process of setting the viewing permission of the image content, and for avoiding repetition, details are not described here again.
The image display processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image display processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image display processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1 or fig. 3, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and executable on the processor 901, where the program or the instruction is executed by the processor 901 to implement each process of the foregoing image display processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The user input unit 1007 is used for receiving a first input;
a processor 1010 configured to perform image recognition on a first image in response to the first input to determine a first image tag corresponding to the first image;
the processor 1010 is further configured to acquire a first contact matching the first image tag, and, in a case where the first image is published through the target application, set the display permission of the first image for the first contact in the target application to not-displayed.
Optionally, the performing, by the processor 1010, image recognition on the first image to determine a first image tag corresponding to the first image includes:
a processor 1010, configured to perform image recognition on the first image to obtain image content information of the first image;
a processor 1010 further configured to determine the first image tag based on the image content information.
Optionally, after the processor 1010 performs the image recognition on the first image in response to the first input to determine a first image tag corresponding to the first image:
a user input unit 1007, configured to receive a second input from the user to the second contact;
a processor 1010 further configured to set the first image to be invisible to the second contact and associate the second contact with the first image tag in response to the second input.
Optionally, the first image is an image published to a social network in a social dynamic state, the social dynamic state further includes a second image, and the setting, performed by the processor 1010, of the display permission of the first image for the first contact in the application to not-displayed includes:
displaying the second image and not displaying the first image for the first contact in the social network.
Optionally, the first image is an image in an image library; after the processor 1010 determines the first image tag:
the user input unit 1007 is further used for receiving a third input of the first contact;
the processor 1010 is further configured to perform face recognition on the first contact in response to the third input to obtain first face recognition information, and perform unlocking verification based on the first face recognition information, where an association relationship between the first face recognition information and the first image tag is stored in advance;
the acquiring of a first contact associated with the first image tag and the setting of the first image to be invisible to the first contact, performed by the processor 1010, include:
and hiding the first image in the image library interface under the condition that the unlocking verification is passed.
The electronic device provided in the embodiment of the application can implement each process in the method embodiment shown in fig. 1 or fig. 3, and can simplify the operation process of setting the viewing permission of the image content, and has the same beneficial effects as those in the method embodiment shown in fig. 1 or fig. 3, and is not described herein again to avoid repetition.
It should be understood that, in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) and a microphone, and the Graphics Processing Unit processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel and other input devices. Touch panels, also known as touch screens. The touch panel may include two parts of a touch detection device and a touch controller. Other input devices may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image display processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above image display processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image display processing method, characterized in that the method comprises:
receiving a first input;
performing image recognition on a first image in response to the first input to determine a first image tag corresponding to the first image;
and acquiring a first contact matching the first image tag, and, in a case where the first image is published through a target application, setting the display permission of the first image for the first contact in the target application to not-displayed.
2. The method of claim 1, wherein the image recognizing the first image to determine the first image tag corresponding to the first image comprises:
carrying out image recognition on the first image to obtain image content information of the first image;
determining the first image tag based on the image content information.
3. The method of claim 1, wherein after said image recognizing the first image to determine a first image tag to which the first image corresponds in response to the first input, the method further comprises:
receiving a second input of the user to the second contact;
in response to the second input, setting the first image to be invisible to the second contact and associating the second contact with the first image tag.
4. The method of claim 2 or 3, wherein the first image is an image published to a social network in a social dynamic state, the social dynamic state further comprises a second image, and the setting of the display permission of the first image to the first contact in the target application program to be not displayed comprises:
displaying the second image and not displaying the first image for the first contact in the social network.
5. An image display processing apparatus characterized by comprising:
the first receiving module is used for receiving a first input;
the determining module is used for responding to the first input, carrying out image recognition on a first image so as to determine a first image label corresponding to the first image;
and the first setting module is used for acquiring a first contact matching the first image tag, and, in a case where the first image is published through a target application, setting the display permission of the first image for the first contact in the target application to not-displayed.
6. The apparatus of claim 5, wherein the determining module comprises:
the image understanding unit is used for carrying out image recognition on the first image to obtain image content information of the first image;
a determining unit configured to determine the first image tag based on the image content information.
7. The apparatus of claim 5, further comprising:
the second receiving module is used for receiving a second input of the user to the second contact;
a second setting module to set the first image to be invisible to the second contact and to associate the second contact with the first image tag in response to the second input.
8. The apparatus of claim 6 or 7, wherein the first image is an image published to a social network in a social dynamic state, the social dynamic state further comprising a second image;
the first setting module is specifically configured to:
displaying the second image and not displaying the first image for the first contact in the social network.
9. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image display processing method according to any one of claims 1 to 4.
10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image display processing method according to any one of claims 1 to 4.
CN202011622738.3A 2020-12-31 2020-12-31 Image display processing method, device, electronic equipment and storage medium Pending CN112764700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011622738.3A CN112764700A (en) 2020-12-31 2020-12-31 Image display processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011622738.3A CN112764700A (en) 2020-12-31 2020-12-31 Image display processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112764700A true CN112764700A (en) 2021-05-07

Family

ID=75698591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011622738.3A Pending CN112764700A (en) 2020-12-31 2020-12-31 Image display processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112764700A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115037711A (en) * 2022-06-07 2022-09-09 元心信息科技集团有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN115037711B (en) * 2022-06-07 2024-03-29 元心信息科技集团有限公司 Data processing method, device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN112422817B (en) Image processing method and device
CN111857511B (en) Wallpaper display control method and device and electronic equipment
CN112788178B (en) Message display method and device
CN112099704A (en) Information display method and device, electronic equipment and readable storage medium
CN112163239A (en) Privacy information protection method and device and electronic equipment
CN112783594A (en) Message display method and device and electronic equipment
CN112351327A (en) Face image processing method and device, terminal and storage medium
CN112462990A (en) Image sending method and device and electronic equipment
CN112533072A (en) Image sending method and device and electronic equipment
CN112311795A (en) Account management method and device and electronic equipment
CN111651106A (en) Unread message prompting method, unread message prompting device, unread message prompting equipment and readable storage medium
CN112929494B (en) Information processing method, information processing apparatus, information processing medium, and electronic device
CN112764700A (en) Image display processing method, device, electronic equipment and storage medium
CN111898159B (en) Risk prompting method, risk prompting device, electronic equipment and readable storage medium
US20230216684A1 (en) Integrating and detecting visual data security token in displayed data via graphics processing circuitry using a frame buffer
CN111897474A (en) File processing method and electronic equipment
CN112035877A (en) Information hiding method and device, electronic equipment and readable storage medium
CN113840035B (en) Information sharing method and device, electronic equipment and readable storage medium
CN114020391A (en) Information display method and device, electronic equipment and readable storage medium
US10733491B2 (en) Fingerprint-based experience generation
CN113821138A (en) Prompting method and device and electronic equipment
CN113157966A (en) Display method and device and electronic equipment
CN111797383A (en) Password verification method and device and electronic equipment
CN111913627A (en) Recording file display method and device and electronic equipment
CN112765447B (en) Data searching method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination