WO2018177002A1 - Method for displaying social information, computer device, and storage medium - Google Patents

Method for displaying social information, computer device, and storage medium

Info

Publication number
WO2018177002A1
WO2018177002A1 (Application PCT/CN2018/073824)
Authority
WO
WIPO (PCT)
Prior art keywords
image
user
face
terminal
preset
Prior art date
Application number
PCT/CN2018/073824
Other languages
English (en)
French (fr)
Inventor
杨田从雨
陈宇
张浩
华有为
薛丰
肖鸿志
冯绪
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2018177002A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval of still image data; Database structures therefor; File system structures therefor
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval characterised by using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/01: Social networking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation

Definitions

  • the present application relates to the field of information processing technologies, and in particular, to a method for displaying social information, a computer device, and a storage medium.
  • the social information of the user of the social software includes the personal information set by the user on the social software, the published dynamic message, and the like. Dynamic messages can be visual information in many forms such as text, audio, video or web links.
  • Traditional social software displays a specific user's information in much the same way: a virtual button for displaying that user's social information is usually provided on a social webpage or on the display interface of a social application, and when the virtual button is clicked, the user's social information is displayed. Because this display process always requires a click operation on the virtual button, the operation of displaying a user's social information is relatively cumbersome.
  • a method of displaying social information, a computer device, and a storage medium are provided.
  • A method of displaying social information includes: acquiring a frame image that contains a face image in a preset area of a scannable visible area; extracting facial feature data of the face image contained in the frame image; querying, according to the facial feature data, a user image that matches the face image, and acquiring a user identifier corresponding to the user image; and acquiring and displaying social information associated with the user identifier.
  • A computer device includes a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the steps of: acquiring a frame image that contains a face image in a preset area of a scannable visible area; extracting facial feature data of the face image contained in the frame image; querying, according to the facial feature data, a user image that matches the face image, and acquiring a user identifier corresponding to the user image; and acquiring and displaying social information associated with the user identifier.
  • One or more non-transitory readable storage media store computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: acquiring a frame image that contains a face image in a preset area of a scannable visible area; extracting facial feature data of the face image contained in the frame image; querying, according to the facial feature data, a user image that matches the face image, and acquiring a user identifier corresponding to the user image; and acquiring and displaying social information associated with the user identifier.
  • FIG. 1 is an application environment diagram of a method for displaying social information in an embodiment
  • FIG. 2 is an internal structural diagram of a terminal in an embodiment
  • FIG. 3 is a flow chart of a method for displaying social information in an embodiment
  • FIG. 4 is a flow chart of a method for displaying social information in another embodiment
  • FIG. 5 is a schematic diagram of an interface of an image scanning portal provided by a social network application in an embodiment
  • FIG. 6 is a schematic diagram of a scanning interface in an embodiment
  • FIG. 7 is a schematic diagram of an interface of displaying social information in an embodiment
  • FIG. 8 is a structural block diagram of a social information display apparatus in an embodiment
  • FIG. 9 is a structural block diagram of a device for displaying social information in another embodiment.
  • FIG. 10 is a structural block diagram of a device for displaying social information in still another embodiment.
  • the method for displaying social information provided by the embodiment of the present application can be applied to an application environment as shown in FIG. 1 .
  • the terminal 110 can establish a communication connection with the server 120 via a network.
  • Terminal 110 includes, but is not limited to, a cell phone, a handheld game console, a tablet, a personal digital assistant, or a portable wearable device.
  • The terminal 110 may acquire a frame image that contains a face image in a preset area of the scannable visible area, extract facial feature data of the face image contained in the frame image, query a user image that matches the face image according to the facial feature data, acquire the user identifier corresponding to the user image, and then obtain the social information associated with the user identifier from a local cache or from the server 120 and display it.
  • the social information associated with the user of the social network application may be stored in the server 120, and the social information includes, but is not limited to, user profile, user latest dynamic information, and the like.
  • FIG. 2 is a schematic diagram showing the internal structure of a terminal in an embodiment.
  • The terminal includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and a camera that are connected through a system bus.
  • the non-volatile storage medium of the terminal stores an operating system and computer readable instructions.
  • the computer readable instructions are used to implement a method for displaying social information provided by the following embodiments.
  • the processor of the terminal is used to provide computing and control capabilities to support the operation of the entire terminal.
  • The internal memory of the terminal provides an environment for running the computer readable instructions stored in the non-volatile storage medium; the internal memory may store computer readable instructions that, when executed by the processor, cause the processor to perform a method for displaying social information.
  • the network interface of the terminal is used for network communication with the server, such as sending facial feature data or pictures to the server, receiving social information sent by the server, and the like.
  • the camera of the terminal is used to scan a target object in the visible area to generate a frame image.
  • the display screen of the terminal may be a touch screen, such as a capacitive screen or an electronic screen, and a corresponding instruction may be generated by receiving a click operation of a control applied to the touch screen. For example, receiving a click operation of a control for entering an image scanning state displayed on the touch screen, generating a scan instruction, and scanning a real scene in the visible area according to the scan instruction.
  • FIG. 2 is only a block diagram of part of the structure related to the solution of the present application and does not limit the terminal to which the solution is applied. A specific terminal may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • As shown in FIG. 3, a method for displaying social information is provided; the method is described as applied to the terminal shown in FIG. 1 and includes the following steps:
  • Step S302: Acquire a frame image that contains a face image in a preset area of the scannable visible area.
  • the terminal may invoke the camera to turn on the camera scan mode, and scan the target object in the visible area in real time, and generate a frame image in real time according to a certain frame rate, and the generated frame image may be cached locally in the terminal.
  • The visible area refers to the area, presented on the display interface of the terminal, that the camera can scan and capture.
  • The preset area is a partial area of the visible area, for example a partial area located in the middle of the visible area.
  • the terminal may detect whether the generated frame image has a face image at a position corresponding to the preset area of the scan visible area, and if yes, acquire the generated frame image.
  • the camera may be a camera built into the terminal or an external camera associated with the terminal.
  • the terminal can be a smart phone, and the camera can be a camera on a smart wearable device (such as smart glasses).
  • the terminal receives the real scene scanned by the camera through the connection with the smart wearable device, and generates a frame image.
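  • As an illustration of how step S302 could be implemented, the following Python sketch (not part of the patent) uses OpenCV with a Haar-cascade face detector, assumes the camera is the device's default camera, and treats the preset area as a centered rectangle covering half of the visible area; all function and parameter names are illustrative.

```python
import cv2

# Hypothetical sketch of step S302: grab frames from the camera and keep only
# those whose detected face lies inside a preset (centered) region.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preset_region(frame, scale=0.5):
    """Centered rectangle covering `scale` of the frame in each dimension."""
    h, w = frame.shape[:2]
    rw, rh = int(w * scale), int(h * scale)
    x0, y0 = (w - rw) // 2, (h - rh) // 2
    return x0, y0, x0 + rw, y0 + rh

def face_in_preset_region(frame):
    """Return a face bounding box that lies inside the preset region, else None."""
    x0, y0, x1, y1 = preset_region(frame)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
        if x >= x0 and y >= y0 and x + w <= x1 and y + h <= y1:
            return (x, y, w, h)
    return None

def acquire_frame_with_face(camera_index=0):
    """Scan frames in real time and return the first frame whose preset
    region contains a face image, together with the face box (cf. S302)."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                return None, None
            box = face_in_preset_region(frame)
            if box is not None:
                return frame, box
    finally:
        cap.release()
```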
  • Step S304 extracting face feature data of the face image included in the frame image.
  • In an embodiment, the image data of the image within the preset area may be extracted, and whether the image data contains facial feature data is detected; if so, it is determined that the frame image contains a face image in the corresponding preset area, and the facial feature data is then further extracted from the image data.
  • The facial feature data may be one or more types of feature information reflecting the person's gender, the contour of the face, the hairstyle, the glasses, the nose, the mouth, the distances between the facial organs, and the like.
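  • As a minimal, hypothetical continuation of the sketch above, step S304 can be approximated by turning the face crop into a fixed-length feature vector. A production system would use a trained face-embedding or facial-landmark model; the normalized grayscale crop below is only a stand-in that keeps the sketch runnable.

```python
import cv2
import numpy as np

def extract_face_features(frame, face_box, size=64):
    """Stand-in for step S304: turn the face crop into a fixed-length feature
    vector.  A real system would use a trained face-embedding or landmark
    model; a normalized, resized grayscale crop is used here only to keep
    the sketch runnable."""
    x, y, w, h = face_box
    face = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    vec = cv2.resize(gray, (size, size)).astype(np.float32).ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```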
  • Step S306 querying a user image that matches the face image according to the face feature data, and acquiring a user identifier corresponding to the user image.
  • a corresponding user image is set in advance for each user identification in the social application.
  • the user identifier may be a login account of a user of the social application.
  • Social applications include, but are not limited to, Instant Messaging (IM) applications, SNS (Social Network Service) applications, or live broadcast applications.
  • the user identifier is a unique identifier in the social application for identifying the user, and may be composed of one or more of a preset number of digits, letters, special characters, and the like.
  • the user image may be a real face image for reflecting the corresponding user, and has a corresponding relationship with the user identifier.
  • The user image may be an image selected by the corresponding user from the profile pictures or previously published pictures that the user has uploaded, or an image automatically selected by the system, and serves as the corresponding user avatar.
  • the terminal may query the user image that matches the face image from the local cache and/or the background server corresponding to the social application, and obtain the user identifier corresponding to the matched user avatar.
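  • A possible shape for the matching in step S306 is sketched below: the extracted feature vector is compared with precomputed feature vectors of the stored user avatars using cosine similarity, and the first user identifier whose matching degree exceeds a threshold is returned. The dictionary of avatar features, the threshold value, and the function names are assumptions, not part of the patent.

```python
import numpy as np

def match_user(face_features, avatar_features, threshold=0.9):
    """Sketch of step S306: compare the extracted facial feature data with the
    precomputed feature vector of each stored user avatar and return the
    user identifier of the first avatar whose matching degree (cosine
    similarity here) exceeds the threshold, or None if nothing matches.

    `avatar_features` is assumed to map user_id -> feature vector."""
    for user_id, feats in avatar_features.items():
        denom = np.linalg.norm(face_features) * np.linalg.norm(feats)
        if denom == 0:
            continue
        similarity = float(np.dot(face_features, feats) / denom)
        if similarity >= threshold:
            return user_id
    return None
```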
  • Step S308 acquiring social information associated with the user identifier and displaying.
  • the user identification is set to associate social information of the corresponding user.
  • Social information includes personal data set by the user on social software, and published dynamic messages.
  • the personal data includes information such as name, nickname, gender, birthday, avatar, hobbies, and location;
  • The dynamic messages include content or updates posted by the user on the platform of the social application, which may be visual information in multiple forms such as text, audio, video, or webpage links.
  • the dynamic message may exist in the form of a Feeds page.
  • the information publishing website integrates all or part of the information into an RSS (Really Simple Syndication) file, which is called a feed.
  • Each user's dynamic messages can be sorted in the feed in reverse chronological order according to publication time.
  • The terminal may obtain the social information associated with the user identifier from a local cache or from the server and display it. Specifically, some or all of the personal profile associated with the user identifier may be displayed, and the dynamic messages are displayed in reverse chronological order.
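  • The retrieval and ordering described for step S308 might look like the following sketch, which checks a local cache first, falls back to an assumed server call, and sorts the dynamic messages in reverse chronological order; `SocialInfo`, `fetch_from_server`, and the field names are illustrative.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SocialInfo:
    profile: dict                              # e.g. nickname, avatar, birthday
    feed: list = field(default_factory=list)   # dynamic messages

def get_social_info(user_id, local_cache, fetch_from_server) -> Optional[SocialInfo]:
    """Sketch of step S308: prefer the local cache, fall back to the server
    (`fetch_from_server` is an assumed callable), and return the dynamic
    messages sorted in reverse chronological order."""
    info = local_cache.get(user_id) or fetch_from_server(user_id)
    if info is None:
        return None
    info.feed.sort(key=lambda msg: msg["published_at"], reverse=True)
    return info

# Illustrative usage with an in-memory cache and no server:
# cache = {"uid_42": SocialInfo(profile={"nickname": "Ann"},
#                               feed=[{"text": "hi", "published_at": "2018-01-02"}])}
# info = get_social_info("uid_42", cache, lambda uid: None)
```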
  • In the method for displaying social information provided by this embodiment, a frame image containing a face image in a preset area of the scannable visible area is acquired; facial feature data of the face image contained in the frame image is extracted; a user image matching the face image is queried according to the facial feature data, and the user identifier corresponding to the user image is acquired; and the social information associated with the user identifier is acquired and displayed. As a result, simply aiming the camera at a user's face is enough to display that user's social information, which simplifies the operation of displaying a user's social information and improves the convenience of the display.
  • In an embodiment, before step S302, the method for displaying social information further includes: entering an image scanning state through an image scanning portal provided by the social network application. Step S302 then includes: acquiring, in the image scanning state, a frame image that contains a face image in the preset area of the scannable visible area.
  • the terminal can provide an image scan portal on the interface of the social application.
  • Specifically, the scan instruction may be generated when the terminal detects an operation on a control of the social application for starting a face scan to view social information, or a preset gesture or voice command for starting such a scan. According to the scan instruction, the camera associated with the terminal is turned on, and the image scanning state is entered from the image scanning portal.
  • the terminal can scan the scanable visible area of the camera in real time through the camera in the image scanning state, and display the scanned frame image on the display screen of the terminal.
  • the scanned object is a human face, and when the face image is included in the preset area, the frame image is acquired for matching the social user.
  • In an embodiment, the image scanning state entered through the image scanning portal may be an augmented-reality image scanning state. In this state, the acquired frame image whose preset area within the scannable visible area contains the face image is subjected to virtual reality processing, so that the processed frame image serves as the background image for the associated social information displayed subsequently.
  • By providing an image scanning portal to enter the image scanning state and acquiring, in that state, a frame image whose preset area within the scannable visible area contains a face image, the image scanning state can be entered conveniently. At the same time, the user's face is coupled with the user's social information and serves as the entrance to that information, which improves the accuracy of the social information display.
  • step S308 includes acquiring social information associated with the user identification, and displaying the social information on the frame image within the scanned visible area in an image scanning state.
  • the acquired social information is social information that has open permissions for the user identity of the currently logged-in user of the terminal.
  • Open permissions include full open access to social information, partial open access, and no open access.
  • A user may set open permissions for the social information he or she publishes, granting the same or different permissions to all or some of the users who have a social relationship, such as a friend relationship, with the user and to all or some of the users who do not.
  • For example, a partial open permission may be set to allow users who have no social relationship with the publisher to view the published dynamic messages or part of the personal profile, and so on.
  • Specifically, the terminal may acquire the social information associated with the user identifier corresponding to the matched user image, where the social information is the social information that the open permission corresponding to that user identifier allows to be obtained, and display the social information on the frame image within the scannable visible area.
  • In an embodiment, the frame image acquired in the scanning state may be used as the background image of the associated social information, and the acquired social information is superimposed on this background image. In the image scanning state, this combines the social information with the real scene; such an augmented-reality display projects the user's personal social information around the face and increases the diversity of the social information display.
  • In an embodiment, step S302 includes: detecting whether the similarity between a preset number of consecutively generated frame images is greater than a similarity threshold, and if so, acquiring, from the preset number of consecutively generated frame images, one frame image that contains a face image in the preset area of the scannable visible area.
  • Specifically, frame images may be generated at a default, relatively low frame rate, and the currently generated frame image is compared with the preceding preset number of frame images to detect the similarity between the current frame image and each of those preceding frames.
  • The terminal also sets a similarity threshold and compares each detected similarity with it. If the similarities between the current frame image and the preceding preset number of frame images are all greater than the similarity threshold, the terminal determines that the current image scanning state is stable.
  • A frame image is then selected from the current frame image and the preceding preset number of frame images, and it is verified that the selected frame image contains a face image in the preset area. Checking the similarity of consecutively generated frames in this way improves the stability of image scanning.
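  • One way to realize the similarity-based stability check is sketched below, using grayscale-histogram correlation as the similarity measure (the patent does not fix the metric) and a ring buffer of the last preset number of frames; the thresholds are illustrative.

```python
import cv2
from collections import deque

def frame_similarity(frame_a, frame_b, bins=64):
    """Similarity of two frames via grayscale-histogram correlation (one
    plausible metric; the patent does not fix the measure)."""
    hists = []
    for f in (frame_a, frame_b):
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [bins], [0, 256])
        hists.append(cv2.normalize(h, h).flatten())
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

class StabilityDetector:
    """Keeps the last `preset_count` frames and reports a stable scanning
    state only when the current frame is similar enough to every one of them."""
    def __init__(self, preset_count=10, similarity_threshold=0.95):
        self.previous = deque(maxlen=preset_count)
        self.threshold = similarity_threshold

    def is_stable(self, frame):
        stable = (len(self.previous) == self.previous.maxlen and
                  all(frame_similarity(frame, p) > self.threshold
                      for p in self.previous))
        self.previous.append(frame)
        return stable
```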
  • In an embodiment, step S302 includes: detecting whether, within a preset duration, the offset of the camera scanning the target object in the scannable visible area is less than an offset threshold, and if so, acquiring, from the plurality of frame images generated within the preset duration, one frame image that contains a face image in the preset area of the scannable visible area.
  • Specifically, the terminal may obtain the camera offset detected in real time by a detection device, associated with the camera, that measures imaging offset data.
  • The offset reflects the real-time spatial movement of the camera forward, backward, up, down, left, and right.
  • The detection device can be a gyroscope built into the terminal.
  • Each offset detected within the preset duration is compared with the preset offset threshold; when all of them are smaller than the threshold, the terminal determines that the current image scanning state is stable and acquires, from the frame images generated within the preset duration, one frame image that contains a face image in the preset area of the scannable visible area. This offset-based check can likewise improve the stability of image scanning.
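  • The offset-based check could be sketched as follows, assuming a `read_offset` callable that returns a camera displacement value derived from the terminal's gyroscope; the duration, threshold, and polling interval are illustrative values.

```python
import time

def camera_is_steady(read_offset, duration=1.5, offset_threshold=0.05,
                     poll_interval=0.05):
    """Offset-based stability check: `read_offset` is an assumed callable that
    returns the camera's current displacement (e.g. derived from the
    terminal's gyroscope).  The camera is considered steady only if every
    sample taken within the preset duration stays below the threshold."""
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        if abs(read_offset()) >= offset_threshold:
            return False
        time.sleep(poll_interval)
    return True
```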
  • In an embodiment, step S304 includes: extracting the facial feature data of the face image from a frame image in which the proportion of the contained face image within the preset area exceeds a preset ratio and the sharpness of the face image exceeds a sharpness threshold.
  • the terminal may further detect a ratio of a face image included in the preset area to the preset area in the frame image, and a sharpness of the face image.
  • In an embodiment, the preset area is a fixed area within the scannable visible area, so the portion of the frame image corresponding to the preset area also occupies a fixed proportion of the frame image. The terminal may count the number of pixels belonging to each face image within the preset area and compute the ratio of that number to the total number of pixels in the frame image. From the proportion of the face image in the frame image and the proportion of the preset area in the scannable visible area, the proportion of the face image within the preset area can be calculated and then compared with the preset ratio.
  • the terminal further detects whether the sharpness of the face image exceeds a preset sharpness threshold.
  • The sharpness reflects the illumination and resolution of the image: the higher the resolution, provided the light intensity lies within a certain range, the higher the sharpness.
  • When it is detected that the proportion of the face image within the preset area exceeds the preset ratio and the sharpness of the face image exceeds the preset sharpness threshold, the facial feature data of the face image is extracted, which ensures the quality of the extracted facial feature data.
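  • The proportion and sharpness conditions can be approximated as below, using the face bounding box's share of the preset region as the proportion and the variance of the Laplacian as a common sharpness proxy; the minimum ratio and sharpness values are illustrative, since the patent does not fix them.

```python
import cv2

def face_ratio_in_region(face_box, region_box):
    """Fraction of the preset region covered by the face bounding box."""
    fx, fy, fw, fh = face_box
    rx0, ry0, rx1, ry1 = region_box
    region_area = max((rx1 - rx0) * (ry1 - ry0), 1)
    return (fw * fh) / region_area

def face_sharpness(frame, face_box):
    """Variance of the Laplacian of the face crop, a common sharpness proxy."""
    x, y, w, h = face_box
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def face_is_usable(frame, face_box, region_box,
                   min_ratio=0.2, min_sharpness=100.0):
    """Extract features only from frames whose face occupies enough of the
    preset region and is sharp enough (thresholds are illustrative)."""
    return (face_ratio_in_region(face_box, region_box) >= min_ratio and
            face_sharpness(frame, face_box) >= min_sharpness)
```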
  • As shown in FIG. 4, another method for displaying social information is provided.
  • the method specifically includes the following steps:
  • Step S402 entering an image scanning state by using an image scanning portal provided by the social network application.
  • the terminal may provide a control for opening the face scan to view the social information on the interface of the social application, and the control is an image scanning portal that enters an image scanning state.
  • When a click operation on this control is detected, a scan instruction may be generated; according to the scan instruction, the camera associated with the terminal is turned on, the image scanning state is entered from the image scanning portal, and the real scene within the scannable visible area is scanned.
  • the scanned real scene is presented as a frame image on the display screen of the terminal.
  • For example, as shown in FIG. 5, the image scanning portal can be displayed on the interface of the social application used for selecting a social type entry.
  • The personal information display area 510 can display the personal information of the user of the social application logged in on the terminal, and the area 520 can provide social-type entries such as "Talk", "Photo", "Video", "Live", "Check in", "Dynamic album", "Log", and "AR Camera", displayed in the form of corresponding controls. By receiving a click operation on one of these controls, the terminal enters the viewing interface of the corresponding social type from that entry and displays the related social information of the logged-in user's friends and/or non-friends.
  • the "AR camera” is an image scanning portal provided by a social network application.
  • the image scanning state can be entered from the portal by receiving a click operation on the "AR Camera” control.
  • the various social type entries shown in FIG. 5 are merely an example, and the embodiment is not limited to this particular social type presentation. On the basis of the embodiment shown in Fig. 5, it is also possible to increase or decrease the social type.
  • the "AR camera” is also only the social type name provided by one of the embodiments. In other embodiments, it may also be presented in other forms, such as using a normal camera as an image scanning portal.
  • Step S404: Acquire, in the image scanning state, a frame image that contains a face image in the preset area of the scannable visible area.
  • the terminal scans the scanable visible area in the image scanning state, generates a frame image in real time according to a preset frame rate, and displays it in the display screen of the terminal.
  • The terminal may perform augmented reality (AR) processing on the real scene within the scannable visible area for an augmented-reality style of display, generate a frame image from the processed real scene, cache the frame image, and display it on the display screen of the terminal.
  • the real scene in the visible area of the scan is a real scene including a human face in a preset area on the terminal display interface, and the frame image is generated in real time according to a preset frame rate.
  • As shown in FIG. 6, in the scanning state the terminal projects prompt information instructing the user to aim the camera at the face image to be scanned.
  • the projected prompt message is "Please align the finder frame to scan the face of the friend and start scanning.”
  • the “framing frame” is the preset area 610 in the scan visible area described above. The user can align the camera with the face 620 to be recognized in the real scene such that it is presented in the preset area 610 of the display interface.
  • the extraction of facial feature data of the face image included in the frame image may be performed upon detecting that the current scan state is in a steady state.
  • whether the current image scanning state of the terminal is in a stable state can be determined by detecting the similarity between the preset number of frame images continuously generated. Or it is determined whether the current image scanning state is in a stable state by detecting the magnitude of the offset of the camera.
  • Specifically, the user can keep the camera aimed at the face to be recognized for a preset duration, and the terminal can detect whether, within that duration, the similarities between the preset number of consecutively generated frame images all exceed the preset similarity threshold; if so, it is determined that the current image scanning state is stable.
  • it may be detected whether the offset of the camera is less than the offset threshold within the preset duration, and if so, it is determined that the current image scanning state is in a stable state.
  • When the state is determined not to be stable, the process returns to step S404 described above.
  • the face feature data of the face image included in the frame image in the preset area is extracted.
  • In an embodiment, step S404 includes: detecting, in the image scanning state, whether the similarity between a preset number of consecutively generated frame images is greater than the similarity threshold, and if so, acquiring, from the preset number of consecutively generated frame images, one frame image that contains a face image in the preset area of the scannable visible area.
  • The preset number can be 10 or 20, for example, or can be determined from the frame rate, such as the number of frame images generated within a preset duration (for example 1 second, 1.5 seconds, or 2 seconds).
  • the terminal may calculate the similarity between the currently generated frame image and the preset number of buffers generated before it in the image scanning state. If it is calculated that the similarity between the current frame image and the previous preset number of frame images is greater than the preset similarity threshold, it is determined that the current image scanning state is in a stable state.
  • a frame image is selected from the current frame image and the previous preset number of frame images, and the frame image is detected to include a face image in the preset area.
  • the currently generated frame image may be used as the selected frame image.
  • a frame image with the clearest face image included in the preset area in the preset number of frame images may be generated as the selected frame image.
  • In an embodiment, step S404 includes: detecting, in the image scanning state, whether the offset of the camera scanning the target object in the scannable visible area is less than the offset threshold within a preset duration, and if so, acquiring, from the plurality of frame images generated within that duration, one frame image that contains a face image in the preset area of the scannable visible area.
  • Specifically, the offset of the camera within the preset duration can be detected in real time by the gyroscope built into the terminal. If the offset is less than the preset offset threshold, the current image scanning state is likewise determined to be stable, and one frame image that contains a face image in the preset area of the scannable visible area may be acquired from the plurality of frame images generated within the preset duration.
  • the preset duration can be a default or a custom duration, such as 1.5 seconds.
  • the currently generated frame image can be taken as the selected frame image.
  • the frame image with the clearest face image included in the preset area may be used as the selected frame image.
  • Step S406 extracting facial feature data of the face image included in the frame image.
  • In an embodiment, in the image scanning state, a scan instruction may be received, and the facial feature data of the face image contained in the frame image is extracted according to the scan instruction.
  • Specifically, a control for issuing the scan instruction that starts the face recognition scan may be displayed on the display interface in the image scanning state. When a click operation on the control is received, the scan instruction is generated, and from then on the facial feature data of the contained face image is extracted from the preset area of the generated frame image. As shown in FIG. 6, when a click operation on the "Start scan" control 630 is detected, a scan instruction is generated, and the facial feature data of the face image contained in the preset area 610 of the frame image is extracted. Because the feature extraction is triggered by the received scan instruction, the terminal does not need to recognize faces in every frame in real time, which reduces the resources consumed by face recognition.
  • In an embodiment, step S406 includes: extracting the facial feature data of the face image from a frame image in which the proportion of the contained face image within the preset area exceeds the preset ratio and the sharpness of the face image exceeds the sharpness threshold.
  • Specifically, the terminal may score each generated frame image according to the proportion that the face image within the preset area occupies in that area and the sharpness of the face image, select, from the preset number of generated frame images, the frame image with the highest score that also exceeds a preset score threshold, and extract the facial feature data of the face image from it.
  • A score above the preset threshold means that, in the corresponding frame image, the proportion of the contained face image within the preset area exceeds the preset ratio and the sharpness of the face image exceeds the sharpness threshold. Extracting the facial feature data from the highest-scoring frame image further improves the quality of the facial feature data.
  • Step S408 querying a user image that matches the face image according to the face feature data, and acquiring a user identifier corresponding to the user image.
  • In an embodiment, the terminal may preferentially read the locally cached user avatars of users of the social application and detect whether one of them matches the face image. If so, the user identifier corresponding to the locally matched user avatar is obtained. Otherwise, the facial feature data may be uploaded to the connected background server of the social application, the server is used to query whether one of the users' avatars matches the face image, and the user identifier corresponding to the user avatar matched on the server is obtained from the server.
  • Specifically, the degree of matching between the facial feature data of the face image and the facial feature data contained in each user image to be compared may be computed.
  • When it is first detected that the matching degree between the facial feature data contained in one of the user avatars and the facial feature data of the face image exceeds a preset matching threshold, the two are determined to match, and the corresponding user identifier is obtained according to the correspondence between the user avatar and the user identifier.
  • The facial feature data contained in each user avatar may also be stored directly on the terminal or the server, so that when the matching degree is compared, the facial feature data of the user avatar can be obtained directly instead of being repeatedly extracted from the avatar.
  • If no matching user avatar is found, prompt information indicating that no matching user was found may be displayed on the scanning interface. For example, the message "No corresponding friend was found, please rescan" can be projected on the scanning interface.
  • In an embodiment, the scope of the queried user avatars includes the user avatars corresponding to user identifiers that have a social relationship, such as a friend relationship chain, with the user identifier of the currently logged-in user of the terminal.
  • It may also include the user avatars corresponding to user identifiers that have no social relationship with the user identifier of the currently logged-in user of the terminal, that is, the user avatars corresponding to the user identifiers of all registered users on the server.
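  • The local-cache-first lookup order described above might be wired together as in the following sketch, which reuses the match_user helper from the earlier sketch and falls back to an assumed `query_server` callable; neither name refers to a real API.

```python
def find_matching_user(face_features, local_avatar_features, query_server,
                       threshold=0.9):
    """Local-cache-first lookup: try the locally cached avatar features, then
    fall back to the social application's background server.  `query_server`
    is an assumed callable taking the facial feature data and returning a
    user identifier or None; it does not refer to any real API."""
    user_id = match_user(face_features, local_avatar_features, threshold)
    if user_id is not None:
        return user_id
    return query_server(face_features)   # e.g. uploads the feature data
```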
  • Step S410 Acquire social information associated with the user identifier; display the social information on the frame image in the scan visible area in an image scanning state.
  • the acquired social information is social information with open permissions for the user identifier of the currently logged-in user of the terminal.
  • the frame image may be subjected to transparency and/or blurring or the like such that the frame image as the background image has a certain transparency to enhance the sharpness of the social information superimposed thereon.
  • Specifically, the partial image of the frame image on which the social information is to be superimposed may be subjected to transparency or blurring processing, the transparency of the frame image is adjusted to a preset transparency, and the social information is superimposed and displayed on the frame image.
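  • The transparency/blurring treatment of the background frame image could be approximated as follows with OpenCV, blending a blurred patch of the frame toward white before drawing the social information text on top; the alpha value, blur kernel, and font settings are illustrative.

```python
import cv2

def overlay_panel(frame, x, y, w, h, alpha=0.6, blur_ksize=21):
    """Blur and lighten the part of the frame image on which the social
    information will be drawn, so text rendered on top stays legible."""
    out = frame.copy()
    roi = out[y:y + h, x:x + w]
    blurred = cv2.GaussianBlur(roi, (blur_ksize, blur_ksize), 0)
    white = blurred.copy()
    white[:] = 255
    # Blend the blurred patch toward white to imitate a translucent panel.
    out[y:y + h, x:x + w] = cv2.addWeighted(white, alpha, blurred, 1 - alpha, 0)
    return out

def draw_social_text(frame, lines, x, y, line_height=28):
    """Render the social information text lines onto the processed frame."""
    for i, text in enumerate(lines):
        cv2.putText(frame, text, (x, y + i * line_height),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (30, 30, 30), 2)
    return frame
```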
  • In an embodiment, information such as the shooting angle and/or offset of the terminal's camera may be detected, and the acquired social information is displayed on the frame image within the scannable visible area using a display style that matches the shooting angle and/or offset.
  • While the social information is displayed, it may be rotated or shifted correspondingly as the shooting angle and/or offset changes, so that the way the social information is displayed rotates with the rotation of the electronic device, further improving the diversity of the social information display.
  • the acquired profile information and the dynamic message may be projected around the preset area of the frame image.
  • the display area of the social information may be divided into a personal information information display area 640 and a dynamic message display area 650.
  • the person profile information display area 630 is disposed at an upper portion and a lower portion of the preset area; and the dynamic message display area 640 is projected at a lower portion of the preset area.
  • the profile information display area 630 can display one or more brief information such as a user's nickname, avatar, birthday reminder, and the like.
  • part of the person's profile information such as the user's nickname and birthday reminder is displayed on the upper part of the face 620 in the real scene
  • the user's avatar is displayed on the lower part of the face 620 in the real scene.
  • the dynamic message display area 640 can display various forms of visual information such as text, audio, video or webpage links posted by the user on the platform of the social application in reverse order according to the time of publication.
  • The terminal can also respond correspondingly to received instructions for interacting with the social information.
  • The interaction operations include a sliding operation on the displayed social information, a detailed-information viewing operation, commenting on or liking the social information, and the like. This embodiment is not limited to this particular form of social information presentation. On the basis of the embodiment shown in FIG. 7, the specific social information displayed may be added to or reduced, and the social information may be displayed according to other layouts.
  • In other embodiments, the social information may also be presented in other forms, for example by presenting some or all of the profile information in the lower part of the preset area and/or displaying some or all of the acquired dynamic messages in the upper part of the preset area.
  • In the method for displaying social information provided by this embodiment, the image scanning state is entered through the image scanning portal provided by the social network application; in the image scanning state, a frame image whose preset area within the scannable visible area contains a face image is acquired, and the facial feature data of the face image contained in that frame image is extracted; a user image matching the face image is queried according to the facial feature data, and the user identifier corresponding to the user image is acquired; and in the image scanning state, the social information is displayed on the frame image within the scannable visible area. This not only simplifies the operation of displaying a user's social information but also combines the displayed social information with the frame image that contains the face image in the preset area, forming an augmented-reality display and improving the precision of the social information display.
  • In an embodiment, after step S410, the method further includes: when it is detected that the frame image generated in real time does not contain the face image in the preset area, closing the display of the social information.
  • While the social information is displayed, the terminal keeps scanning the real scene in the preset area of the visible area and generating frame images, and detects whether the face image described above is still contained in the preset area of the frame image. If not, the camera has deviated from the face it was aimed at, and the terminal can close the display of the queried social information.
  • In an embodiment, when it is detected that the preset area of the frame image no longer contains the face image, the deviation duration of the camera is counted, and the display of the queried social information is closed only if the same face image is not detected in the preset area of any frame image scanned in real time before the deviation duration reaches the preset deviation-duration threshold.
  • the deviation duration threshold may be any suitable duration, such as 1 second or 2 seconds, and may be the same as the preset duration described above.
  • Reserving this deviation-duration threshold prevents the display from being closed by temporary deviations such as sudden camera shake and improves the stability of the social information display. Within the reserved threshold, the terminal can also match a newly aimed-at face to prepare for displaying the social information of the new user, which improves the coherence of switching between different users' social information.
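  • The deviation-duration behaviour can be captured by a small tracker like the one below, which keeps the overlay visible until the face has been absent from the preset area for longer than an assumed threshold; names and the 1.5-second default are illustrative.

```python
import time

class FaceTracker:
    """Keep the social-information overlay visible until the face has been
    absent from the preset area for longer than the deviation-duration
    threshold, so brief camera shake does not dismiss it."""
    def __init__(self, deviation_threshold=1.5):
        self.deviation_threshold = deviation_threshold
        self._missing_since = None

    def update(self, face_present):
        """Return True while the social information should stay displayed."""
        now = time.monotonic()
        if face_present:
            self._missing_since = None
            return True
        if self._missing_since is None:
            self._missing_since = now
        return (now - self._missing_since) < self.deviation_threshold
```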
  • In an embodiment, after step S410, the method further includes: when it is detected that the frame image generated in real time does not contain the face image in the preset area, jumping to the display interface of the social information of the corresponding user and suspending the image scanning.
  • The camera of the computer device may be moved so that the face in the preset area of the visible area disappears, or the scanned object itself may move so that the face disappears from the preset area.
  • When it is detected that the face in the preset area has disappeared, it can be determined that scanning of the target object has ended, so the image scanning can be terminated.
  • At the same time, for the target object that has been scanned and the user whose face has been recognized, the terminal jumps to the display interface of that user's social information so that the social information can be browsed normally.
  • the social information displayed on the display interface after the jump may be the same as the social information superimposed on the frame image. Moreover, since the superimposed display is not required, the social information displayed may be further enlarged and displayed according to a preset ratio, so that the social information that is originally superimposed and displayed on the local area image is enlarged and displayed on the entire display interface according to a corresponding ratio. To further improve the convenience of reading.
  • the method further includes: receiving an instruction generated by an interaction operation on the social information; in response to the instruction, jumping to a presentation interface of the social information corresponding to the instruction, and suspending the image scanning.
  • the terminal may also receive an instruction generated by an interaction operation of the social information.
  • the interaction includes operations such as a sliding operation of the displayed social information, a detailed information viewing operation, a comment or a like of the social information.
  • the terminal may jump to the display interface of the corresponding social information, and perform appropriate enlargement processing on the displayed social information, so as to facilitate the user to view. For example, if the interaction operation is a comment operation on a dynamic message published by the user, jump to the detailed display interface of the dynamic message.
  • the image scanning can be suspended, the tracking of the real scene can be cancelled, and when the closing instruction for the displayed social information is received, the image scanning state can be restored to display the social information of the next user.
  • In an embodiment, the method for displaying social information further includes: acquiring user vital sign data associated with the user identifier, and displaying the user vital sign data on the frame image displayed within the scannable visible area.
  • the terminal may further retrieve, according to the acquired user identifier, whether there is vital sign data associated with the user identifier.
  • Vital sign data includes, but is not limited to, one or more of exercise data and health monitoring data.
  • The exercise data includes data such as the user's step count, cycling mileage, and calories burned;
  • the health monitoring data includes data such as the heartbeat, body temperature, and blood glucose parameters of the user.
  • the terminal may query other applications that are associated with the user identifier and obtain the physical data detected by the other application. This other application can be a sports application or a health monitoring application.
  • For example, the terminal may detect whether the user identifier is also used to identify a user of a certain sports application, and if so, obtain the vital sign data associated with the user identifier from the local cache or from the background server corresponding to that sports application.
  • the terminal may display the user's vital sign data while displaying the social information, and display the user's vital sign data on the frame image displayed in the scanned visible area.
  • For example, the vital sign data can be displayed within the user information area 630 on the display interface.
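  • The lookup of vital sign data from associated applications might be sketched as follows, where `linked_apps` maps application names to assumed callables that return the data recorded for the user identifier; the structure of the returned data is illustrative.

```python
def collect_vital_sign_data(user_id, linked_apps):
    """Gather vital sign data from applications associated with the user
    identifier.  `linked_apps` maps an application name to an assumed
    callable that returns that application's data for the identifier, or
    None if the identifier is unknown there."""
    data = {}
    for app_name, fetch in linked_apps.items():
        app_data = fetch(user_id)
        if app_data:
            data[app_name] = app_data   # e.g. {"steps": 8200, "heart_rate": 72}
    return data
```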
  • a terminal is provided.
  • The terminal includes a frame image acquisition module 802, a face feature data extraction module 804, a user identifier query module 806, and a display module 808, wherein:
  • the frame image obtaining module 802 is configured to acquire a frame image including a face image in a preset area of the scanable visible area.
  • the face feature data extraction module 804 is configured to extract face feature data of the face image included in the frame image.
  • the user identifier querying module 806 is configured to query a user image that matches the face image according to the face feature data, and acquire a user identifier corresponding to the user image.
  • the display module 808 is configured to acquire and display social information associated with the user identifier.
  • another terminal is provided, and the terminal further includes:
  • the image scanning module 810 is configured to enter an image scanning state by using an image scanning portal provided by the social network application.
  • the frame image acquisition module 802 is further configured to acquire, in an image scanning state, a frame image that includes a face image in a preset area in the scan visible area.
  • the user identification query module 806 is further configured to acquire social information associated with the user identification, and display the social information on the frame image in the scanned visible area in an image scanning state.
  • In an embodiment, the frame image acquisition module 802 is further configured to: detect whether the similarity between a preset number of consecutively generated frame images is greater than the similarity threshold, and if so, acquire, from the preset number of consecutively generated frame images, one frame image that contains a face image in the preset area of the scannable visible area; or detect whether the offset of the camera scanning the target object in the scannable visible area is less than the offset threshold within a preset duration, and if so, acquire, from the plurality of frame images generated within the preset duration, one frame image that contains a face image in the preset area of the scannable visible area.
  • In an embodiment, the facial feature data extraction module 804 is further configured to extract the facial feature data of the face image contained in a frame image in which the proportion of the face image within the preset area exceeds the preset ratio and the sharpness of the face image exceeds the sharpness threshold.
  • another terminal is provided, where the terminal further includes:
  • the vital sign data obtaining module 812 is configured to obtain user sign data associated with the user identifier.
  • the presentation module 808 is also operative to display user sign data on the frame image displayed in the scanned viewable area.
  • The terminal acquires a frame image containing a face image in the preset area of the scannable visible area, extracts the facial feature data of the face image contained in the frame image, queries a user image matching the face image according to the facial feature data, acquires the user identifier corresponding to the user image, and acquires and displays the social information associated with the user identifier.
  • Simply aiming the camera at a user's face is therefore enough to display that user's social information, which simplifies the operation of displaying a user's social information and improves the convenience and efficiency of the display.
  • Each of the above terminals may be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above modules may be embedded in the hardware in the terminal or in the memory in the terminal, or may be stored in the memory in the terminal in a software form, so that the processor calls the execution of the operations corresponding to the above modules.
  • the processor can be a central processing unit (CPU), a microprocessor, a single chip microcomputer, or the like.
  • A computer device includes a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the method for displaying social information described in the embodiments of the present application.
  • the computer device may be the terminal in the above embodiment.
  • one or more non-transitory readable storage mediums storing computer readable instructions that, when executed by one or more processors, cause one or more processors to execute The steps of the method for displaying social information described in the embodiments of the present application.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Library & Information Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for displaying social information includes: a terminal acquiring a frame image that contains a face image in a preset area of a scannable visible area; the terminal extracting facial feature data of the face image contained in the frame image; the terminal querying, according to the facial feature data, a user image that matches the face image and acquiring a user identifier corresponding to the user image; and the terminal acquiring and displaying social information associated with the user identifier.

Description

Method for displaying social information, computer device, and storage medium
This application claims priority to Chinese Patent Application No. 2017101990799, entitled "Method, apparatus, and computer device for displaying a user's social information" and filed with the Chinese Patent Office on March 29, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information processing technologies, and in particular to a method for displaying social information, a computer device, and a storage medium.
Background
The social information of a user of social software includes the personal profile the user has set on the social software, the dynamic messages the user has published, and the like. A dynamic message can be visual information in many forms, such as text, audio, video, or a webpage link. Traditional social software displays a specific user's information in much the same way: a virtual button for displaying that user's social information is usually provided on a social webpage or on the display interface of a social application, and when a click instruction acting on the virtual button is received, the user's social information is displayed.
Because the traditional process of displaying a user's social information always requires a click operation on this virtual button, the operation of such a display method is relatively cumbersome.
Summary
According to various embodiments disclosed in the present application, a method for displaying social information, a computer device, and a storage medium are provided.
A method for displaying social information includes:
acquiring a frame image that contains a face image in a preset area of a scannable visible area;
extracting facial feature data of the face image contained in the frame image;
querying, according to the facial feature data, a user image that matches the face image, and acquiring a user identifier corresponding to the user image; and
acquiring and displaying social information associated with the user identifier.
A computer device includes a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the following steps:
acquiring a frame image that contains a face image in a preset area of a scannable visible area;
extracting facial feature data of the face image contained in the frame image;
querying, according to the facial feature data, a user image that matches the face image, and acquiring a user identifier corresponding to the user image; and
acquiring and displaying social information associated with the user identifier.
One or more non-volatile readable storage media store computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
acquiring a frame image that contains a face image in a preset area of a scannable visible area;
extracting facial feature data of the face image contained in the frame image;
querying, according to the facial feature data, a user image that matches the face image, and acquiring a user identifier corresponding to the user image; and
acquiring and displaying social information associated with the user identifier.
The details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the present application will become apparent from the specification, the accompanying drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of a method for displaying social information in an embodiment;
FIG. 2 is a diagram of the internal structure of a terminal in an embodiment;
FIG. 3 is a flowchart of a method for displaying social information in an embodiment;
FIG. 4 is a flowchart of a method for displaying social information in another embodiment;
FIG. 5 is a schematic diagram of an interface of an image scanning portal provided by a social network application in an embodiment;
FIG. 6 is a schematic diagram of a scanning interface in an embodiment;
FIG. 7 is a schematic diagram of an interface for displaying social information in an embodiment;
FIG. 8 is a structural block diagram of an apparatus for displaying social information in an embodiment;
FIG. 9 is a structural block diagram of an apparatus for displaying social information in another embodiment; and
FIG. 10 is a structural block diagram of an apparatus for displaying social information in still another embodiment.
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例所提供的社交信息的展示方法,可应用于如图1所示的应用环境中。参考图1,终端110可通过网络与服务器120建立通信相连。终端110包括但不限于手机、掌上游戏机、平板电脑、个人数字助理或便携式穿戴设备等。终端110可获取在扫描可视区域的预设区域中包含人脸图像的帧图像;提取帧图像所包含的人脸图像的人脸特征数据;根据人脸特征数据查询与人脸图像相匹配的用户图像,获取与用户图像对应的用户标识;并从本地缓存中或者服务器120上获取与用户标识关联的社交信息,并展示该社交信息。服务器120中可存储社交网络应用的用户所关联的社交信息,这些社交信息包括但不限于用户个人资料、用户最新动态信息等。
图2为一个实施例中终端的内部结构示意图。该终端包括通过***总线连接的处理器、非易失性存储介质、内存储器和网络接口、显示屏和摄像头。其中,该终端的非易失性存储介质存储有操作***和计算机可读指令。该计 算机可读指令用于实现以下各实施例提供的一种社交信息的展示方法。该终端的处理器用于提供计算和控制能力,支撑整个终端的运行。该终端的内存储器为非易失性存储介质中的计算机可读指令的运行提供环境,该内存储器中可存储有计算机可读指令,该计算机指令可读指令被处理器执行时,可使得处理器执行一种社交信息的展示方法。该终端的网络接口用于与服务器进行网络通信,比如向服务器发送人脸特征数据或图片等,接收服务器发送的社交信息等。该终端的摄像头用于对可视区域中的目标物体进行扫描,生成帧图像。终端的显示屏可以是触摸屏,比如为电容屏或电子屏,可通过接收作用于该触摸屏上显示的控件的点击操作,生成相应的指令。比如接收作用于该触摸屏上显示的用于进入图像扫描状态的控件的点击操作,生成扫描指令,根据该扫描指令扫描可视区域内的实景。
本领域技术人员可以理解,图2中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的终端的限定,具体的终端可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,如图3所示,提供了一种社交信息的展示方法,该方法应用于图1所示的终端中来举例说明。包括:
步骤S302,获取在扫描可视区域的预设区域中包含人脸图像的帧图像。
在一个实施例中,终端可调用摄像头开启摄像扫描模式,并实时扫描可视区域内中的目标对象,并按照一定的帧率实时地生成帧图像,所生成的帧图像可缓存在终端本地。其中,可视区域是指在终端的显示界面上所呈现出的摄像头可扫描到拍摄到的区域。预设区域为在可视区域上的某一局部区域,例如,可处于该可视区域的中间位置的局部区域。终端可检测所生成的帧图像在与该扫描可视区域的预设区域对应的位置处,是否存在人脸图像,若是,则获取所生成的帧图像。
在一个实施例中,摄像头可以是终端内置的摄像头,或者外置的与终端关联的摄像头。比如,终端可为智能手机,摄像头可为智能穿戴设备(比如 智能眼镜)上的摄像头,终端通过与该智能穿戴设备的连接,接收该摄像头所扫描到的实景,生成帧图像。
步骤S304,提取帧图像所包含的人脸图像的人脸特征数据。
在一个实施例中,可提取该预设区域中的图像的图像数据,并检测该图像数据是否包含人脸特征数据,若是,则判定该帧图像在对应的预设区域内包含人脸图像。并进一步从该图像数据中提取人脸特征数据。其中,人脸特征数据可以是用于反映出人的性别、人脸的轮廓、发型、眼镜、鼻子、嘴以及各个脸部器官之间的距离等其中的一种或多种特征信息。
步骤S306,根据人脸特征数据查询与人脸图像相匹配的用户图像,获取与用户图像对应的用户标识。
在一个实施例中,预先针对社交应用中的每个用户标识设置了对应的用户图像。其中,用户标识可为社交应用的用户的登录账号。社交应用包括但不限于即时通信(Instant Messaging,IM)应用、SNS(Social Network Service,社交网站)应用或者直播应用等。用户标识为社交应用中用于标识用户且具有唯一性,可由预设位数的数字、字母和特殊字符等其中的一种或多种构成。该用户图像可以是用于反映对应用户的真实人脸图像,与用户标识具有对应关系。可从用户所上传的个人资料、历史发表的图片信息中,由对应用户自定义选取的图像,或由***自动地分析选取的一张图片,作为相应的用户头像。终端可从本地缓存和/或该社交应用对应的后台服务器中查询与该人脸图像相匹配的用户图像,获取所匹配到的用户头像对应的用户标识。
步骤S308,获取与用户标识关联的社交信息并展示。
在一个实施例中,用户标识被设置用于关联对应用户的社交信息。社交信息包括用户在社交软件上设置的个人资料以及发表的动态消息等。其中,个人资料包括姓名、昵称、性别、生日、头像、兴趣爱好和所在地等信息;动态消息包括用户在该社交应用的平台上发布的内容或动态,可以是文字、音频、视频或网页链接等多种形式的可视信息。
在一个实施例中,该动态消息可以Feeds页面的形式存在。信息发布网 站将网站全部或者部分信息整合到一个RSS(Really Simple Syndication,简易信息聚合)文件中,这个文件就被称之为Feed。每个用户的动态消息可按照发布时间在该Feeds进行倒序排序。终端可从本地缓存或者服务器上获取与该用户标识相关联的社交信息,并进行展示。具体的,可展示与该用户标识关联的部分或全部的个人资料,并对动态消息按倒序排序展示。
本实施例所提供的社交信息的展示方法,通过获取在扫描可视区域的预设区域中包含人脸图像的帧图像;提取帧图像所包含的人脸图像的人脸特征数据;根据人脸特征数据查询与人脸图像相匹配的用户图像,获取与用户图像对应的用户标识;获取与用户标识关联的社交信息并展示。使得只需将摄像头对准用户人脸,即可实现对该用户的社交信息的展示,简化了对用户的社交信息的展示的操作,提高了对社交信息的展示的便利性。
在一个实施例中,在上述的步骤S302之前,上述的社交信息的展示方法还包括:通过社交网络应用提供的图像扫描入口进入图像扫描状态;步骤S302包括:在图像扫描状态下获取扫描可视区域内的预设区域包含人脸图像的帧图像。
在一个实施例中,终端可在社交应用的界面上提供图像扫描入口。具体的,可通过检测到的作用于该社交应用的开启人脸扫描查看社交信息的控件的操作、预设的开启人脸扫描查看社交信息的手势或语音等生成扫描指令,并根据该扫描指令,开启与终端关联的摄像头,从该图像扫描入口进入图像扫描状态。
终端可在该图像扫描状态下,通过摄像头实时地对摄像头的扫描可视区域进行扫描,并将扫描得到的帧图像展示在终端的显示屏上。其中,扫描的对象为人脸,并在检测到实时呈现的该帧图像上,处于该预设区域中包含人脸图像时,获取该帧图像,用以进行社交用户的匹配。
在一个实施例中,通过该图像扫描入口进入图像扫描状态可为进行增强现实的图像扫描状态,在该图像扫描状态下,将所获取的扫描可视区域内的预设区域包含人脸图像的帧图像进行虚拟现实处理,使得将该处理后的帧图 像作为后续展示关联到的社交信息的背景图像。
在一个实施例中,通过提供图像扫描入口进入图像扫描状态,在图像扫描状态下获取扫描可视区域内的预设区域包含人脸图像的帧图像,可便利地进入图像扫描状态。同时还实现了将用户人脸与其社交信息的耦合,将该人脸作为其社交信息的入口,提高了对社交信息的展示的准确性。
在一个实施例中,步骤S308包括:获取与用户标识关联的社交信息,在图像扫描状态下将社交信息展示在扫描可视区域内的帧图像上。
在一个实施例中,所获取的社交信息为终端的当前登录用户的用户标识具有开放权限的社交信息。开放权限包括对社交信息的完全开放权限、部分开放权限以及无开放权限等。用户可针对其所发布的社交信息设置对外开放权限,包括针对与其具有好友关系等社交关系的全部或部分的用户,和不具有社交关系的全部或部分的用户设置相同或不同的开放权限。例如,可设置部分开放权限为允许不具有社交关系的用户展示其所发布的动态信息或部分个人信息等等。
终端可获取所匹配的用户图像对应的用户标识关联的社交信息,该社交信息为与该用户标识对应的开放权限所允许获取的社交信息,将该社交信息展示在扫描可视区域内的帧图像上。
在一个实施例中,可将在扫描状态下所获取的帧图像作为展示所关联到的社交信息的背景图像,并在该背景图像上叠加展示所获取的社交信息,形成在图像扫描状态下,将社交信息与实景结合,对社交信息的增强现实的展示,可实现在人脸的周围投射出其个人的社交信息,提高了对社交信息的展示的多样性。
在一个实施例中,步骤S302包括:检测连续生成的预设数量的帧图像之间的相似度是否大于相似度阈值,若是,则获取连连续生成的预设数量的帧图像中,其中一幅在扫描可视区域的预设区域中包含人脸图像的帧图像。
在一个实施例中,可按照一个默认的较低的帧率生成帧图像,将当前生成的帧图像与其在前的预设数量的帧图像进行对比,检测当前生成的帧图像 与在前的预设数量的帧图像的之间的相似度。
终端还进一步设置了相似度阈值,比较所检测出的每个相似度与该相似度阈值的大小,若当前的帧图像与在前的预设数量的帧图像之间的相似度,均大于该相似度阈值,则判定当前的图像扫描状态处于稳定状态。从当前的帧图像与在前的预设数量的帧图像中,选取一张帧图像,检测该帧图像在预设区域中包含人脸图像。
通过检测连续生成的预设数量的帧图像之间的相似度,当该相似度均大于预设相似度时,在预设数量的帧图像中,获取其中一幅在扫描可视区域的预设区域中包含人脸图像的帧图像。可提高图像扫描的稳定性。
在一个实施例中,步骤S302包括:检测在预设时长之内,扫描可视区域内的目标物体的摄像头的偏移量是否小于偏移量阈值,若是,则从预设时长之内,获取生成的多个帧图像中,其中一幅在扫描可视区域的预设区域中包含人脸图像的帧图像。
在一个实施例中,终端可获取与摄像头关联的摄像偏移数据的检测设备所实时检测到的摄像头的偏移量。该偏移量用于反映摄像头在前后上下左右等空间上的实时变化量。该检测设备可为终端中内置的陀螺仪。通过比较在预设时长之间的所检测到的每份偏移量和预设的偏移量阈值的大小,当均小于该偏移量阈值时,判定终端处于终端当前的图像扫描状态处于稳定状态。并在判定为处于稳定状态时,从预设时长之内,获取生成的多个帧图像中,其中一幅在扫描可视区域的预设区域中包含人脸图像的帧图像。
通过检测在预设时长之内摄像头的偏移量的大小,当偏移量小于预设的偏移量阈值时,从该预设时长之内生成的多个帧图像中,获取其中一幅在扫描可视区域的预设区域中包含人脸图像的帧图像,也可提高图像扫描的稳定性。
在一个实施例中,步骤S304包括:提取帧图像中所包含的人脸图像在预设区域中所占比例超过预设比例,且人脸图像的清晰度超过清晰度阈值的帧图像中所包含的人脸图像的人脸特征数据。
终端可进一步检测帧图像中,在预设区域内包含的人脸图像占该预设区域的比例,以及该人脸图像的清晰度。
在一个实施例中,预设区域为扫描可视区域中的固定的区域。因而该帧图像上的预设区域的图像在帧图像上的占比也为相应的固定占比。可识别出处于该预设区域内的每个人脸图像所包含的像素点的数量,并检测该像素点的数量在整个帧图像所包含的像素点中的占比,根据该人脸图像在帧图像上的占比,和预设区域在扫描可视区域上的占比,可计算出人脸图像在预设区域内的占比。并检测该占比和预设占比的大小。
终端还进一步检测该人脸图像的清晰度是否超过预设的清晰度阈值。其中,该清晰度用于反映图像的光照和分辨率。当分辨率越大,且光照强度处于某一强度范围内,则该清晰度越高。当检测到人脸图像在预设区域的占比超过预设比例、清晰度超过预设的清晰度阈值时,提取该人脸图像的人脸特征数据,从而保证了所提取的人脸特征数据的质量。
在一个实施例中,如图4所示,提供了另一种社交信息的展示方法。该方法具体包括如下步骤:
步骤S402,通过社交网络应用提供的图像扫描入口进入图像扫描状态。
在一个实施例中,终端可在该社交应用的界面上提供用于开启人脸扫描查看社交信息的控件,该控件即为一种进入图像扫描状态的图像扫描入口。当检测到作用于该控件的点击操作时,可生成扫描指令,并根据该扫描指令,开启与终端关联的摄像头,从该图像扫描入口进入图像扫描状态,对扫描可视区域内的实景进行扫描。并将所扫描的实景以帧图像呈现在终端的显示屏上。
举例来说,如图5所示,可通过在社交应用的社交类型入口选择的界面上展示该图像扫描入口。其个人信息展示区域510可展示终端所登录的该社交应用的用户的个人信息,区域520可提供包括“说说”、“照片”、“视频”、“直播”、“签到”、“动感影集”、“日志”和“AR相机”等社交类型的入口,并以相应的控件形式展示。通过接收作用于上述的各个社交类型的控件的点击操作, 从对应的入口进入相应的社交类型的查看界面,并展示所登录的用户的好友和/或非好友的相关类型的社交信息。其中,该“AR相机”即为社交网络应用提供的图像扫描入口。通过接收作用于该“AR相机”控件的点击操作,可从该入口进入图像扫描状态。应当说明的是,图5所示的各种社交类型入口仅仅是一个示例,本实施例并不局限于这种特定的社交类型呈现形式。在图5所示实施例的基础上,还可以增加或者减少社交类型。而“AR相机”也仅仅是其中一个实施例搜提供的社交类型名称,在其他实施例中,也可以其他形式呈现,例如采用普通相机作为图像扫描入口。
步骤S404,在图像扫描状态下获取扫描可视区域内的预设区域包含人脸图像的帧图像。
在一个实施例中,终端在进入图像扫描状态下,对扫描可视区域进行扫描,按照预设的帧率实时生成帧图像,并展示在终端的显示屏中。其中,终端可以以增强现实(Augmented Reality,AR)的展示形式,对扫描可视区域中的实景进行增强现实的处理,根据处理后的实景生成帧图像,缓存该帧图像,并将其展示在终端的显示屏幕中。
具体的,扫描的可视区域内的实景为呈现在终端显示界面上的预设区域中包含人脸的实景,并按照预设的帧率实时地生成帧图像。如图6所示,终端在扫描状态下,投射出需要对准人脸图像扫描的提示信息。比如所投射的提示信息为“请将取景框对准要扫描好友的脸部并开始扫描”。其中,“取景框”即为上述的扫描可视区域中的预设区域610。用户可将摄像头对准实景中待识别的人脸620,使得将其呈现在展示界面的预设区域610中。
在一个实施例中,可在检测到当前的扫描状态处于稳定状态时,进行对帧图像所包含的人脸图像的人脸特征数据的提取。其中,可通过检测连续生成的预设数量的帧图像之间的相似度来判定终端当前的图像扫描状态是否处于稳定状态。或者通过检测摄像头的偏移量的大小来判定当前的图像扫描状态是否处于稳定状态。
具体的,用户可将摄像头对准待识别的人脸的状态保持预设时长,终端 可检测在该预设时长之内,连续生成的预设数量的帧图像之间的相似度是否均超过预设的相似度阈值,若是,则判定当前的图像扫描状态处于稳定状态。或者,可检测在该预设时长之内,摄像头的偏移量是否小于偏移量阈值,若是,则判定当前的图像扫描状态处于稳定状态。当判定不处于稳定状态时,返回继续执行上述的步骤S404。在判定处于稳定状态时,提取帧图像在该预设区域中所包含的人脸图像的人脸特征数据。
在一个实施例中,步骤S404包括:在图像扫描状态下检测连续生成的预设数量的帧图像之间的相似度是否大于相似度阈值,若是,则获取连连续生成的预设数量的帧图像中,其中一幅在扫描可视区域的预设区域中包含人脸图像的帧图像。
预设数量可为10幅,或20幅等,还可根据帧率所确定,比如可为在预设时长(比如为1秒、1.5秒或2秒等)之内所生成的帧图像的数量。
终端可在图像扫描状态下,计算当前生成的帧图像和缓存的在其之前生成的预设数量的之间的相似度。若计算出当前的帧图像与在前的预设数量的帧图像之间的相似度,均大于预设的相似度阈值,则判定当前的图像扫描状态处于稳定状态。从当前的帧图像与在前的预设数量的帧图像中,选取一张帧图像,检测该帧图像在预设区域中包含人脸图像。具体的,可将当前生成的帧图像作为选取的帧图像。或者可将生成预设数量的帧图像中,在预设区域中包含的人脸图像最清晰的帧图像作为选取的帧图像。
In one embodiment, step S404 includes: in the image scanning state, detecting whether, within a preset duration, the offset of the camera that scans the target object in the viewport is smaller than an offset threshold, and if so, acquiring, from the frame images generated within the preset duration, one frame image whose preset region of the scanning viewport contains a face image.
In the image scanning state, the camera offset within the preset duration may be detected in real time by the gyroscope built into the terminal. If the offset is smaller than the preset offset threshold, the current image scanning state is likewise determined to be stable, and one frame image whose preset region of the scanning viewport contains a face image may be acquired from the frame images generated within the preset duration.
The preset duration may be a default or user-defined duration, for example 1.5 seconds. The currently generated frame image may be used as the selected frame image, or, among the generated frame images, the frame image whose face image in the preset region is sharpest may be used as the selected frame image.
Step S406: extracting the face feature data of the face image contained in the frame image.
In one embodiment, a scanning instruction may be received in the image scanning state, and the face feature data of the face image contained in the frame image is extracted in accordance with that scanning instruction.
Specifically, a control for issuing the scanning instruction that starts face-recognition scanning may be displayed on the display interface in the image scanning state. When a tap on the control is received, the scanning instruction is generated, and extraction of the face feature data of the face image contained in the preset region of the frame images generated after the scanning instruction is received begins. As shown in FIG. 6, when a tap on the "Start Scanning" control 630 is detected, the scanning instruction is generated and the face feature data of the face image contained in the preset region 610 of the frame image is extracted.
Because the face feature data is extracted in response to the received scanning instruction, there is no need to determine in real time whether a frame image contains a face, which reduces the resources the terminal spends on face recognition.
In one embodiment, step S406 includes: extracting the face feature data of the face image contained in a frame image in which the proportion of the preset region occupied by the face image exceeds a preset proportion and the sharpness of the face image exceeds a sharpness threshold.
The terminal may score the generated preset number of frame images according to the proportion of the preset region occupied by the face image located in that region and the sharpness of the face image, select the frame image with the highest score that also exceeds a preset score threshold, and extract the face feature data of the face image from it. A score exceeding the preset threshold indicates that, in the corresponding frame image, the proportion of the preset region occupied by the face image exceeds the preset proportion and the sharpness of the face image exceeds the sharpness threshold. Extracting the face feature data from the highest-scoring frame image whose score exceeds the preset threshold further improves the quality of the face feature data.
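One plausible way to combine the two criteria into a single score is a weighted sum, as sketched below. The weights, the score threshold, and the normalization scale are assumptions made for illustration; the disclosure only requires that the chosen frame satisfy both the proportion and sharpness thresholds.

```python
def score_frame(proportion: float, sharpness: float,
                w_proportion: float = 0.5, w_sharpness: float = 0.5,
                sharpness_scale: float = 300.0) -> float:
    """Weighted score in [0, 1]; sharpness is normalized by an assumed scale."""
    return (w_proportion * min(proportion, 1.0)
            + w_sharpness * min(sharpness / sharpness_scale, 1.0))

def select_best_frame(candidates, score_threshold: float = 0.6):
    """candidates: iterable of (frame, proportion, sharpness) tuples.

    Returns the highest-scoring frame whose score exceeds the threshold, else None.
    """
    best_frame, best_score = None, score_threshold
    for frame, proportion, sharpness in candidates:
        s = score_frame(proportion, sharpness)
        if s > best_score:
            best_frame, best_score = frame, s
    return best_frame
```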
Step S408: querying, according to the face feature data, a user image that matches the face image, and obtaining the user identifier corresponding to that user image.
In one embodiment, the terminal may preferentially read the locally cached user avatars of users of the social application and detect whether any of those avatars matches the face image. If so, the user identifier corresponding to the locally matched avatar is obtained. Otherwise, the face feature data may be uploaded to the connected backend server of the social application; the server queries whether any user's avatar matches the face image, and the user identifier corresponding to the avatar matched on the server is obtained.
Specifically, the matching degree between the face feature data of the face image and the face feature data contained in each user image to be compared may be computed. The first time an avatar is found whose face feature data has a matching degree with the face image's face feature data exceeding a preset matching-degree threshold, the two are determined to match, and the corresponding user identifier is obtained from the correspondence between avatars and user identifiers. The terminal or the server may also directly store the face feature data contained in each user avatar, so that during matching the avatar's face feature data can be retrieved directly rather than repeatedly extracted from the avatar.
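The local-first, then-server lookup can be sketched as below. The feature vectors are assumed to be fixed-length embeddings compared by cosine similarity, and `query_server_for_match` is a hypothetical stand-in for the social application's backend call; neither the embedding model nor the server API is specified in the disclosure.

```python
import numpy as np

MATCH_THRESHOLD = 0.8   # assumed matching-degree threshold

def matching_degree(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def find_user_id(face_features: np.ndarray,
                 local_avatar_features: dict,
                 query_server_for_match):
    """Return the user identifier of the first avatar whose match exceeds the threshold."""
    # 1. Try the locally cached avatar feature data first.
    for user_id, avatar_features in local_avatar_features.items():
        if matching_degree(face_features, avatar_features) > MATCH_THRESHOLD:
            return user_id
    # 2. Fall back to the social application's backend server (hypothetical API).
    return query_server_for_match(face_features, threshold=MATCH_THRESHOLD)
```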
If no matching user avatar is found, prompt information indicating that no matching user was found may be displayed on the scanning interface; for example, a prompt such as "No matching friend was found, please scan again" may be projected onto the scanning interface.
In one embodiment, the range of user avatars queried includes the avatars of user identifiers that have a social relationship, such as a friend relationship chain, with the user identifier of the user currently logged in on the terminal. It may also include the avatars of user identifiers that have no social relationship with the currently logged-in user's identifier, that is, the avatars of all registered user identifiers on the server.
Step S410: obtaining the social information associated with the user identifier, and displaying the social information, in the image scanning state, on the frame image within the scanning viewport.
In one embodiment, the obtained social information is social information for which the user identifier of the user currently logged in on the terminal has been granted open permission. The frame image may be processed with transparency and/or blurring so that, as the background image, it has a certain degree of transparency, thereby improving the legibility of the social information superimposed on it. Specifically, the transparency or blurring processing may be applied to the local portion of the frame image used for superimposing the social information, the transparency of the frame image may be adjusted to a preset transparency, and the social information may then be superimposed on the frame image for display.
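The overlay step amounts to alpha-blending a semi-transparent panel onto the region of the frame where the social information will be drawn. The sketch below uses OpenCV's `addWeighted` for the blend and `putText` for the text; the panel geometry, alpha value, and font settings are illustrative assumptions rather than values from the disclosure.

```python
import cv2
import numpy as np

def overlay_social_info(frame: np.ndarray,
                        lines: list,
                        panel,                 # (x, y, w, h) of the overlay region
                        alpha: float = 0.45) -> np.ndarray:
    """Blend a dimmed panel into the given region and draw text lines on it."""
    x, y, w, h = panel
    out = frame.copy()

    # Dim only the local region used for the overlay so the text stays legible.
    region = out[y:y + h, x:x + w]
    dimmed = cv2.addWeighted(region, 1.0 - alpha,
                             np.zeros_like(region), alpha, 0)
    out[y:y + h, x:x + w] = dimmed

    # Draw each line of social information onto the panel.
    for i, text in enumerate(lines):
        cv2.putText(out, text, (x + 10, y + 30 + 28 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    return out
```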
In one embodiment, information such as the shooting angle and/or offset of the terminal's camera may be detected, and the obtained social information may be displayed on the frame image within the scanning viewport in a presentation style that matches that shooting angle and/or offset. While the social information is displayed, it may be correspondingly rotated or shifted as the shooting angle and/or offset changes, so that the presentation of the social information rotates with the rotation of the electronic device, which further increases the diversity of the display of social information.
Specifically, the obtained profile information and feed messages may be projected around the preset region of the frame image. As shown in FIG. 7, the display area for social information may be divided into a profile information display area 640 and a feed message display area 650. The profile information display area 640 may be placed above and below the preset region, and the feed message display area 650 may be projected below the preset region. The profile information display area 640 may show one or more items of brief information about the user, such as the nickname, avatar, and birthday reminder; for example, part of the profile information such as the user's nickname and birthday reminder may be displayed above the face 620 in the real scene, and the user's avatar may be displayed below the face 620 in the real scene. The feed message display area 650 may display, in reverse chronological order of publication, visual information in various forms published by the user on the social application's platform, such as text, audio, video, or web links, and may respond appropriately to received instructions for interacting with the social information, where the interactions include sliding the displayed social information, viewing details, commenting on or liking the social information, and so on. This embodiment is not limited to this particular presentation of social information: on the basis of the embodiment shown in FIG. 7, the specific social information displayed may be added to or reduced, and the social information may be displayed in other layouts. In other embodiments, the social information may also be presented in other forms, for example some or all of the profile information may be presented below the preset region, and/or some or all of the retrieved feed messages may be displayed above the preset region.
With the method for displaying social information provided in this embodiment, the image scanning state is entered through the image scanning entry provided by the social network application; in the image scanning state, a frame image whose preset region within the scanning viewport contains a face image is acquired; the face feature data of the face image contained in the frame image is extracted; a user image matching the face image is queried according to the face feature data, and the user identifier corresponding to the user image is obtained; and, in the image scanning state, the social information is displayed on the frame image within the scanning viewport. This not only simplifies the operations required to display a user's social information, but also combines the displayed social information with the frame image whose preset region contains the face image to form an augmented-reality presentation, improving the precision of the display of social information.
In one embodiment, after step S410 the method further includes: when it is detected that the preset region of a frame image generated in real time no longer contains the face image, closing the display of the social information.
While the social information is being displayed, the real scene in the preset region of the viewport continues to be scanned and frame images continue to be generated, and whether the preset region of each frame image still contains the aforementioned face image is detected. If it does not, the camera has drifted away from the face it was aimed at, and the terminal may close the display of the queried social information.
In one embodiment, when it is detected that the preset region of a frame image no longer contains the face image, the duration of the camera's deviation is counted, and the display of the queried social information is closed only if the same face image is still not detected in the preset region of the frame images scanned in real time by the time the deviation duration reaches a preset deviation-duration threshold. The deviation-duration threshold may be any suitable duration, for example 1 second or 2 seconds, or it may be the same as the aforementioned preset duration.
Reserving such a deviation-duration threshold prevents the display from being closed by momentary deviations such as sudden camera shake, improving the stability of the display of social information. Moreover, a newly targeted face can be matched within the reserved deviation-duration threshold in preparation for displaying the corresponding new user's social information, which also improves the continuity of switching between different users' social information.
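The grace-period behaviour is essentially a debounce on "face lost" events. The small state holder below illustrates one way to implement it; the 1.5-second threshold and the callback name are assumptions, not values from the disclosure.

```python
import time

class OverlayController:
    """Close the social-information overlay only after the face has been lost
    continuously for longer than a deviation-duration threshold."""

    def __init__(self, deviation_threshold_s: float = 1.5):
        self.deviation_threshold_s = deviation_threshold_s
        self.face_lost_since = None

    def on_frame(self, face_present: bool, close_overlay) -> None:
        now = time.monotonic()
        if face_present:
            self.face_lost_since = None            # face is back, reset the timer
            return
        if self.face_lost_since is None:
            self.face_lost_since = now             # start counting the deviation
        elif now - self.face_lost_since >= self.deviation_threshold_s:
            close_overlay()                        # grace period exceeded
            self.face_lost_since = None
```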
In one embodiment, after step S410 the method further includes: when it is detected that the preset region of a frame image generated in real time no longer contains the face image, jumping to the display interface for the corresponding user's social information, and aborting the image scanning.
The face in the preset region of the viewport may disappear because the computer device moves the camera, or because the subject actually being scanned moves. When the face in the preset region is detected to have disappeared, it can be determined that scanning of the target object has ended, and the image scanning can therefore be terminated. At the same time, for the target object that has already been scanned and the user whose face has already been recognized, the terminal jumps to that user's social information display interface, so that the user's social information can be browsed normally.
In one embodiment, the social information shown on the display interface after the jump may be the same as the social information superimposed on the frame image. Since superimposed display is no longer required, the displayed social information may further be enlarged at a preset ratio, so that the social information previously superimposed on a local region of the image is enlarged at a corresponding ratio to fill the entire display interface, further improving readability.
In one embodiment, after step S410 the method further includes: receiving an instruction generated by an interaction performed on the social information; and, in response to the instruction, jumping to the social information display interface corresponding to the instruction and aborting the image scanning.
The terminal may receive an instruction generated by an interaction acting on the social information, where the interaction includes sliding the displayed social information, viewing details, commenting on or liking the social information, and so on.
According to the instruction, the terminal may jump to the corresponding social information display interface and appropriately enlarge the displayed social information for easier viewing. For example, if the interaction is a comment operation on a feed message published by the user, the terminal jumps to the detailed display interface of that feed message. At the same time, the image scanning may be aborted and the tracking of the real scene cancelled; after an instruction to close the displayed social information is received, the image scanning state may be restored so that the next user's social information can be displayed.
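The scan/detail-view hand-off can be modelled as a small state machine, as in the illustrative sketch below; the state names and callback hooks are assumptions introduced only to make the control flow concrete.

```python
from enum import Enum, auto

class ScanState(Enum):
    SCANNING = auto()      # camera scanning, overlay may be shown
    DETAIL_VIEW = auto()   # jumped to a social-information display interface

class ScanSession:
    def __init__(self, stop_scanning, resume_scanning, open_detail_interface):
        self.state = ScanState.SCANNING
        self.stop_scanning = stop_scanning
        self.resume_scanning = resume_scanning
        self.open_detail_interface = open_detail_interface

    def on_interaction(self, instruction) -> None:
        """User interacted with the overlaid social information (e.g. comment, like)."""
        if self.state is ScanState.SCANNING:
            self.stop_scanning()                   # abort image scanning and scene tracking
            self.open_detail_interface(instruction)
            self.state = ScanState.DETAIL_VIEW

    def on_detail_closed(self) -> None:
        """Detail interface closed: return to the image scanning state."""
        if self.state is ScanState.DETAIL_VIEW:
            self.resume_scanning()
            self.state = ScanState.SCANNING
```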
In one embodiment, after the user identifier corresponding to the user image is obtained, the method for displaying social information further includes: obtaining user vital-sign data associated with the user identifier; and displaying the user vital-sign data on the frame image displayed in the scanning viewport.
The terminal may further search, according to the obtained user identifier, for vital-sign data associated with that user identifier. The vital-sign data includes, but is not limited to, one or more of exercise data and health-monitoring data, where the exercise data includes data such as the number of steps walked, cycling mileage, and calories burned, and the health-monitoring data includes data such as heart rate, body temperature, and blood glucose parameters. The terminal may query other applications associated with the user identifier and obtain the vital-sign data detected by those applications, which may be fitness applications or health-monitoring applications.
For example, the terminal may detect whether the user identifier is also used to identify the user in a certain fitness application; if so, the user vital-sign data associated with the user identifier may be obtained from the local cache or from the backend server corresponding to that fitness application.
The terminal may display the user vital-sign data at the same time as the social information, showing it on the frame image displayed in the scanning viewport; for example, the vital-sign data may be shown within the profile information display area 640 shown in FIG. 7. Displaying the user vital-sign data further enriches the information displayed for the matched user.
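A cache-first lookup of the associated vital-sign data might look like the sketch below. The data fields, the cache, and the `fitness_backend.fetch` call are hypothetical placeholders; the disclosure does not name a specific fitness application or API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VitalSignData:
    steps: Optional[int] = None           # exercise data: steps walked
    cycling_km: Optional[float] = None    # exercise data: cycling mileage
    heart_rate: Optional[int] = None      # health monitoring: heart rate

def get_vital_signs(user_id: str, local_cache: dict, fitness_backend):
    """Return vital-sign data for user_id, preferring the local cache."""
    if user_id in local_cache:
        return local_cache[user_id]
    # Hypothetical backend call of an associated fitness application.
    data = fitness_backend.fetch(user_id)
    if data is not None:
        local_cache[user_id] = data       # cache for subsequent scans
    return data
```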
In one embodiment, as shown in FIG. 8, a terminal is provided. The terminal includes a frame image acquisition module 802, a face feature data extraction module 804, a user identifier query module 806, and a display module 808, where:
the frame image acquisition module 802 is configured to acquire a frame image in which the preset region of the scanning viewport contains a face image;
the face feature data extraction module 804 is configured to extract the face feature data of the face image contained in the frame image;
the user identifier query module 806 is configured to query, according to the face feature data, a user image matching the face image, and to obtain the user identifier corresponding to the user image; and
the display module 808 is configured to obtain and display the social information associated with the user identifier.
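For orientation only, the four modules of FIG. 8 can be wired together roughly as below; the class and method names mirror the module names but are otherwise invented for this sketch, and each module body is reduced to a placeholder.

```python
class Terminal:
    """Skeleton wiring of the modules in FIG. 8 (illustrative only)."""

    def __init__(self, frame_module, feature_module, query_module, display_module):
        self.frame_module = frame_module        # frame image acquisition module 802
        self.feature_module = feature_module    # face feature data extraction module 804
        self.query_module = query_module        # user identifier query module 806
        self.display_module = display_module    # display module 808

    def show_social_info(self) -> None:
        frame = self.frame_module.acquire_face_frame()
        if frame is None:
            return
        features = self.feature_module.extract(frame)
        user_id = self.query_module.match_user(features)
        if user_id is not None:
            self.display_module.show(user_id, frame)
```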
In one embodiment, as shown in FIG. 9, another terminal is provided. This terminal further includes:
an image scanning module 810, configured to enter the image scanning state through the image scanning entry provided by the social network application.
The frame image acquisition module 802 is further configured to acquire, in the image scanning state, a frame image in which the preset region within the scanning viewport contains a face image.
In one embodiment, the user identifier query module 806 is further configured to obtain the social information associated with the user identifier and, in the image scanning state, display the social information on the frame image within the scanning viewport.
In one embodiment, the frame image acquisition module 802 is further configured to detect whether the similarities between a preset number of consecutively generated frame images are all greater than a similarity threshold and, if so, acquire, from the preset number of consecutively generated frame images, one frame image whose preset region of the scanning viewport contains a face image; or to detect whether, within a preset duration, the offset of the camera that scans the target object in the viewport is smaller than an offset threshold and, if so, acquire, from the frame images generated within the preset duration, one frame image whose preset region of the scanning viewport contains a face image.
In one embodiment, the face feature data extraction module 804 is further configured to extract the face feature data of the face image contained in a frame image in which the proportion of the preset region occupied by the face image exceeds a preset proportion and the sharpness of the face image exceeds a sharpness threshold.
In one embodiment, as shown in FIG. 10, yet another terminal is provided. This terminal further includes:
a vital-sign data acquisition module 812, configured to obtain the user vital-sign data associated with the user identifier.
The display module 808 is further configured to display the user vital-sign data on the frame image shown in the scanning viewport.
The above terminal acquires a frame image in which the preset region of the scanning viewport contains a face image, extracts the face feature data of the face image contained in the frame image, queries, according to the face feature data, a user image matching the face image and obtains the user identifier corresponding to the user image, and obtains and displays the social information associated with the user identifier. As a result, simply aiming the camera at a user's face is enough to display that user's social information, which simplifies the operations required to display a user's social information and improves the convenience and efficiency of displaying social information.
Each of the modules in the above terminal may be implemented entirely or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the terminal in hardware form, or stored in a memory of the terminal in software form, so that the processor can invoke them to perform the operations corresponding to each module. The processor may be a central processing unit (CPU), a microprocessor, a microcontroller, or the like.
In one embodiment, a computer device is provided, including a memory and a processor. The memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the method for displaying social information described in the embodiments of this application. The computer device may be the terminal in the above embodiments.
In one embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided; when executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the steps of the method for displaying social information described in the embodiments of this application.
A person of ordinary skill in the art will understand that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations are not contradictory, they should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of this application and are described in relative detail, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this patent application shall be defined by the appended claims.

Claims (20)

  1. A method for displaying social information, comprising:
    acquiring, by a terminal, a frame image in which a preset region of a scanning viewport contains a face image;
    extracting, by the terminal, face feature data of the face image contained in the frame image;
    querying, by the terminal according to the face feature data, a user image matching the face image, and obtaining a user identifier corresponding to the user image; and
    obtaining and displaying, by the terminal, social information associated with the user identifier.
  2. The method according to claim 1, wherein before the acquiring, by the terminal, a frame image in which a preset region of a scanning viewport contains a face image, the method further comprises:
    entering, by the terminal, an image scanning state through an image scanning entry provided by a social network application;
    and the acquiring, by the terminal, a frame image in which a preset region of a scanning viewport contains a face image comprises:
    acquiring, by the terminal in the image scanning state, a frame image in which the preset region within the scanning viewport contains a face image.
  3. The method according to claim 2, wherein the obtaining and displaying, by the terminal, social information associated with the user identifier comprises:
    obtaining, by the terminal, the social information associated with the user identifier, and displaying, in the image scanning state, the social information on the frame image within the scanning viewport.
  4. The method according to claim 3, wherein after the displaying, in the image scanning state, the social information on the frame image within the scanning viewport, the method further comprises:
    when it is detected that the preset region of a frame image generated in real time does not contain the face image, jumping to a display interface for the corresponding user's social information, and aborting the image scanning.
  5. The method according to claim 3, wherein after the displaying, in the image scanning state, the social information on the frame image within the scanning viewport, the method further comprises:
    receiving an instruction generated by an interaction performed on the social information; and, in response to the instruction, jumping to a social information display interface corresponding to the instruction, and aborting the image scanning.
  6. The method according to claim 1, wherein the acquiring, by the terminal, a frame image in which a preset region of a scanning viewport contains a face image comprises:
    detecting, by the terminal, whether similarities between a preset number of consecutively generated frame images are all greater than a similarity threshold, and if so, acquiring, from the preset number of consecutively generated frame images, one frame image whose preset region of the scanning viewport contains a face image.
  7. The method according to claim 1, wherein the acquiring, by the terminal, a frame image in which a preset region of a scanning viewport contains a face image comprises:
    detecting, by the terminal, whether, within a preset duration, an offset of a camera scanning a target object in the viewport is smaller than an offset threshold, and if so, acquiring, from frame images generated within the preset duration, one frame image whose preset region of the scanning viewport contains a face image.
  8. The method according to claim 1, wherein the extracting, by the terminal, face feature data of the face image contained in the frame image comprises:
    extracting, by the terminal, the face feature data of the face image contained in a frame image in which a proportion of the preset region occupied by the face image exceeds a preset proportion and a sharpness of the face image exceeds a sharpness threshold.
  9. The method according to claim 1, wherein after the obtaining, by the terminal, the user identifier corresponding to the user image, the method further comprises:
    obtaining, by the terminal, user vital-sign data associated with the user identifier; and
    displaying, by the terminal, the user vital-sign data on the frame image displayed in the scanning viewport.
  10. A terminal, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
    acquiring a frame image in which a preset region of a scanning viewport contains a face image;
    extracting face feature data of the face image contained in the frame image;
    querying, according to the face feature data, a user image matching the face image, and obtaining a user identifier corresponding to the user image; and
    obtaining and displaying social information associated with the user identifier.
  11. The terminal according to claim 10, wherein the computer-readable instructions, when executed by the processor, cause the processor to perform the following steps:
    entering an image scanning state through an image scanning entry provided by a social network application; and
    acquiring, in the image scanning state, a frame image in which the preset region within the scanning viewport contains a face image.
  12. The terminal according to claim 11, wherein the computer-readable instructions, when executed by the processor, cause the processor to perform the following step:
    obtaining the social information associated with the user identifier, and displaying, in the image scanning state, the social information on the frame image within the scanning viewport.
  13. The terminal according to claim 12, wherein the computer-readable instructions, when executed by the processor, cause the processor to perform the following steps:
    when it is detected that the preset region of a frame image generated in real time does not contain the face image, jumping to a display interface for the corresponding user's social information, and aborting the image scanning; and/or
    receiving an instruction generated by an interaction performed on the social information; and, in response to the instruction, jumping to a social information display interface corresponding to the instruction, and aborting the image scanning.
  14. The terminal according to claim 10, wherein the computer-readable instructions, when executed by the processor, cause the processor to perform the following step:
    detecting whether similarities between a preset number of consecutively generated frame images are all greater than a similarity threshold, and if so, acquiring, from the preset number of consecutively generated frame images, one frame image whose preset region of the scanning viewport contains a face image; or
    detecting whether, within a preset duration, an offset of a camera scanning a target object in the viewport is smaller than an offset threshold, and if so, acquiring, from frame images generated within the preset duration, one frame image whose preset region of the scanning viewport contains a face image.
  15. The terminal according to claim 10, wherein the computer-readable instructions, when executed by the processor, cause the processor to perform the following step:
    extracting the face feature data of the face image contained in a frame image in which a proportion of the preset region occupied by the face image exceeds a preset proportion and a sharpness of the face image exceeds a sharpness threshold.
  16. The terminal according to claim 10, wherein the computer-readable instructions, when executed by the processor, cause the processor to perform the following steps:
    obtaining user vital-sign data associated with the user identifier; and
    displaying the user vital-sign data on the frame image displayed in the scanning viewport.
  17. One or more non-volatile readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
    acquiring a frame image in which a preset region of a scanning viewport contains a face image;
    extracting face feature data of the face image contained in the frame image;
    querying, according to the face feature data, a user image matching the face image, and obtaining a user identifier corresponding to the user image; and
    obtaining and displaying social information associated with the user identifier.
  18. The storage media according to claim 17, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    entering an image scanning state through an image scanning entry provided by a social network application; and
    acquiring, in the image scanning state, a frame image in which the preset region within the scanning viewport contains a face image.
  19. The storage media according to claim 17, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following step:
    extracting the face feature data of the face image contained in a frame image in which a proportion of the preset region occupied by the face image exceeds a preset proportion and a sharpness of the face image exceeds a sharpness threshold.
  20. The storage media according to claim 17, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    obtaining user vital-sign data associated with the user identifier; and
    displaying the user vital-sign data on the frame image displayed in the scanning viewport.