CN112818719B - Method and equipment for identifying two-dimensional code

Method and equipment for identifying two-dimensional code

Info

Publication number
CN112818719B
CN112818719B (application number CN202011618821.3A)
Authority
CN
China
Prior art keywords
user
track
dimensional code
video frame
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011618821.3A
Other languages
Chinese (zh)
Other versions
CN112818719A (en)
Inventor
黄永生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202011618821.3A
Publication of CN112818719A
Priority to PCT/CN2021/125287
Application granted
Publication of CN112818719B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 Methods for optical code recognition
    • G06K 7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K 7/1417 2D bar codes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The application aims to provide a method and equipment for identifying a two-dimensional code, wherein the method includes: during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting region according to the track, and performing a two-dimensional code identification operation on the intercepting region in the video stream; and, if the identification succeeds, processing the two-dimensional code information obtained by the identification. The method and the equipment make identification of a two-dimensional code during a video call simple, convenient and accurate, which provides great convenience for the users participating in the video call. Because the two-dimensional code identification operation is performed only on the video frame image area corresponding to the intercepting region in the video stream, rather than on the entire display area of the video stream, the identification speed of the two-dimensional code can be increased and the identification precision and efficiency can be improved.

Description

Method and equipment for identifying two-dimensional code
Technical Field
The application relates to the field of communications, and in particular to a technique for identifying two-dimensional codes.
Background
With the development of the times, the two-dimensional code has been widely applied in different scenarios across many industries and touches almost every aspect of daily life. By scanning a two-dimensional code, a user can obtain the corresponding content, for example for mobile payment or information identification, which greatly improves the convenience of everyday life.
Disclosure of Invention
An object of the present application is to provide a method and equipment for identifying a two-dimensional code.
According to one aspect of the present application, there is provided a method for identifying a two-dimensional code, applied to a first user equipment, the method including:
during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting region according to the track, and performing a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, processing the two-dimensional code information obtained by the identification.
According to another aspect of the present application, there is provided a method for identifying a two-dimensional code, applied to a second user equipment, the method including:
during a video call between a first user and a second user, in response to a track drawing operation performed by the second user on the video stream of the second user, obtaining the track drawn by the second user, determining an intercepting region according to the track, and performing a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, sending the two-dimensional code information obtained by the identification to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
According to one aspect of the present application, there is provided a first user equipment for identifying a two-dimensional code, the equipment including:
a module 11, configured to, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream; and
a module 12, configured to, if the identification succeeds, process the two-dimensional code information obtained by the identification.
According to another aspect of the present application, there is provided a second user equipment for identifying a two-dimensional code, the equipment including:
a module 21, configured to, during a video call between a first user and a second user, in response to a track drawing operation performed by the second user on the video stream of the second user, obtain the track drawn by the second user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream; and
a module 22, configured to, if the identification succeeds, send the two-dimensional code information obtained by the identification to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
According to an aspect of the present application, there is provided an apparatus for identifying a two-dimensional code, wherein the apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, process the two-dimensional code information obtained by the identification.
According to another aspect of the present application, there is provided an apparatus for identifying a two-dimensional code, wherein the apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
during a video call between a first user and a second user, in response to a track drawing operation performed by the second user on the video stream of the second user, obtain the track drawn by the second user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, send the two-dimensional code information obtained by the identification to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
According to one aspect of the present application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, process the two-dimensional code information obtained by the identification.
According to another aspect of the present application, there is provided a computer readable medium storing instructions that, when executed, cause a system to:
during a video call between a first user and a second user, in response to a track drawing operation performed by the second user on the video stream of the second user, obtain the track drawn by the second user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, send the two-dimensional code information obtained by the identification to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
According to one aspect of the present application, there is provided a computer program product including a computer program which, when executed by a processor, performs a method including:
during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting region according to the track, and performing a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, processing the two-dimensional code information obtained by the identification.
According to another aspect of the present application, there is provided a computer program product including a computer program which, when executed by a processor, performs a method including:
during a video call between a first user and a second user, in response to a track drawing operation performed by the second user on the video stream of the second user, obtaining the track drawn by the second user, determining an intercepting region according to the track, and performing a two-dimensional code identification operation on the intercepting region in the video stream; and
if the identification succeeds, sending the two-dimensional code information obtained by the identification to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
Compared with the prior art, in the present application, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, the track drawn by the first user is obtained, an intercepting region is determined according to the track, and a two-dimensional code identification operation is performed on the intercepting region in the video stream. The second user therefore only needs to aim the camera at the two-dimensional code to be shown to the first user, without exiting the video call and without providing the two-dimensional code to the first user by photographing it or taking a screenshot; the first user only needs to draw a track on the screen around the two-dimensional code presented by the second user, and the first user equipment can then identify the two-dimensional code quickly and conveniently, so that identifying a two-dimensional code during a video call becomes extremely simple, convenient and accurate. In addition, the two-dimensional code identification operation is performed only on the video frame image area corresponding to the intercepting region in the video stream, rather than on the entire display area of the video stream, so that the identification speed of the two-dimensional code can be increased and the identification precision and efficiency can be improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
Fig. 1 shows a flowchart of a method for identifying a two-dimensional code applied to a first user equipment according to an embodiment of the present application;
Fig. 2 shows a flowchart of a method for identifying a two-dimensional code applied to a second user equipment according to an embodiment of the present application;
Fig. 3 shows a structural diagram of a first user equipment for identifying a two-dimensional code according to an embodiment of the present application;
Fig. 4 shows a structural diagram of a second user equipment for identifying a two-dimensional code according to an embodiment of the present application;
Fig. 5 shows an exemplary system that may be used to implement various embodiments described herein.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the present application, the terminal, the devices of the service network, and the trusted party each include one or more processors (e.g., central processing units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device.
The device referred to in the present application includes, but is not limited to, a user equipment, a network device, or a device formed by integrating a user equipment and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product that can perform human-computer interaction with a user (for example, through a touch pad), such as a smart phone or a tablet computer, and the mobile electronic product may adopt any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing, namely a virtual supercomputer composed of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, wireless ad hoc networks, and the like. Preferably, the device may also be a program running on the user equipment, the network device, or a device formed by integrating the user equipment with the network device, with a touch terminal, or by integrating the network device with a touch terminal through a network.
Of course, those skilled in the art will appreciate that the above devices are merely examples; other existing devices or devices that may appear in the future, if applicable to the present application, are also intended to be within the scope of protection of the present application and are incorporated herein by reference.
In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Fig. 1 shows a flowchart of a method for identifying a two-dimensional code applied to a first user equipment according to an embodiment of the present application, where the method includes step S11 and step S12. In step S11, during a video call between a first user and a second user, the first user equipment, in response to a track drawing operation performed by the first user on the video stream of the second user, obtains the track drawn by the first user, determines an intercepting region according to the track, and performs a two-dimensional code identification operation on the intercepting region in the video stream; in step S12, if the identification succeeds, the first user equipment processes the two-dimensional code information obtained by the identification.
In step S11, during a video call between a first user and a second user, the first user equipment, in response to a track drawing operation performed by the first user on the video stream of the second user, obtains the track drawn by the first user, determines an intercepting region according to the track, and performs a two-dimensional code identification operation on the intercepting region in the video stream. In some embodiments, during the video call, the second user aims the front camera of the second user equipment at the two-dimensional code to be shown to the first user, and the code is sent to the first user equipment as part of the video stream, so that it appears in the video picture of the second user presented on the first user equipment; the second user does not need to exit the video call at all, and does not need to provide the two-dimensional code to the first user by photographing it or taking a screenshot. In some embodiments, when the first user sees the two-dimensional code in the video picture of the second user, the first user may perform a track drawing operation on the video picture with a finger (for example, pressing the finger at a certain position on the screen of the first user equipment and moving it while keeping it pressed); the first user equipment then obtains the track drawn by the first user, and, in response to a track drawing end event corresponding to the track drawing operation (for example, the finger being lifted from the screen of the first user equipment), determines the corresponding intercepting region according to the track currently drawn by the first user. In some embodiments, the track drawn by the first user may or may not be displayed on the screen of the first user equipment. In some embodiments, if the track currently drawn by the first user is closed, the area enclosed by the closed track is determined as the intercepting region. In some embodiments, if the track currently drawn by the first user is not closed, the drawing start point (for example, where the finger was pressed) and the drawing end point (for example, where the finger was lifted) may be connected by a virtual straight line to obtain a virtual closed track corresponding to the drawn track, and the area enclosed by the virtual closed track is determined as the intercepting region. In some embodiments, if the track currently drawn by the first user is not closed, a virtual tangent extension line may be drawn at the drawing start point and at the drawing end point, a virtual closed track corresponding to the drawn track is obtained from the two virtual tangent extension lines and the screen boundary of the first user equipment, and the area enclosed by the virtual closed track is determined as the intercepting region. In some embodiments, the two-dimensional code identification operation is performed on the video frame image area corresponding to the intercepting region in the video picture of the second user, so as to identify the two-dimensional code information contained in the two-dimensional code displayed in that area.
In some embodiments, the two-dimensional code identification operation is performed only on the video frame image area corresponding to the intercepting region in the video picture of the second user, rather than on all display areas of the video picture of the second user, so that the identification speed of the two-dimensional code can be increased and the identification precision and efficiency can be improved.
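For illustration only, the following is a minimal Python sketch of this step, assuming OpenCV (cv2) and NumPy are available; the function names, the point format of the track and the frame format (a BGR image) are assumptions of this sketch, not part of the claimed method.

    import cv2
    import numpy as np

    def close_track(points):
        # If the drawn track is not closed, virtually connect the drawing end point
        # back to the drawing start point with a straight line.
        pts = np.asarray(points, dtype=np.int32)
        if not np.array_equal(pts[0], pts[-1]):
            pts = np.vstack([pts, pts[:1]])
        return pts

    def identify_in_region(frame_bgr, track_points):
        # Perform the two-dimensional code identification only inside the intercepting
        # region enclosed by the (virtually closed) track, not on the whole frame.
        pts = close_track(track_points)
        x, y, w, h = cv2.boundingRect(pts)
        roi = frame_bgr[y:y + h, x:x + w].copy()
        mask = np.zeros(roi.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [(pts - np.array([x, y])).astype(np.int32)], 255)
        roi = cv2.bitwise_and(roi, roi, mask=mask)        # keep only pixels inside the track
        data, _, _ = cv2.QRCodeDetector().detectAndDecode(roi)
        return data or None                               # decoded text, or None if nothing found

Calling identify_in_region(current_frame, track) mirrors the behaviour described above: only the video frame image area corresponding to the intercepting region is scanned, not the entire display area.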
In step S12, if the identification succeeds, the first user equipment processes the two-dimensional code information obtained by the identification. In some embodiments, if the identification succeeds, the two-dimensional code information obtained by the identification may be processed directly, or it may be processed according to the user authorization information or user identification information of the first user (for example, a token or a UUID (Universally Unique Identifier)), or it may be processed according to the real personal identity information of the first user bound to the video call application.
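As a purely illustrative sketch of this processing step (the dispatch rules, the Authorization header and the helper names are assumptions of the sketch, not the claimed behaviour):

    import uuid
    from urllib.parse import urlparse

    def process_qr_info(payload, user_token=None):
        # Decide how to handle the identified two-dimensional code information,
        # optionally attaching the first user's authorization or identification data.
        if payload.startswith(("http://", "https://")):
            headers = {"Authorization": user_token} if user_token else {}
            return ("open_url", urlparse(payload).geturl(), headers)
        # Otherwise treat the payload as opaque business data (payment code, contact card, ...).
        return ("raw_payload", payload, {"trace_id": str(uuid.uuid4())})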
According to the present application, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, the track drawn by the first user is obtained, an intercepting region is determined according to the track, and a two-dimensional code identification operation is performed on the intercepting region in the video stream. The second user therefore only needs to aim the camera at the two-dimensional code to be shown to the first user, without exiting the video call and without providing the two-dimensional code by photographing it or taking a screenshot; the first user only needs to draw a track on the screen around the two-dimensional code presented by the second user, and the first user equipment can identify the two-dimensional code quickly and conveniently, making the identification of a two-dimensional code during a video call extremely simple, convenient and accurate. Moreover, the two-dimensional code identification operation is performed only on the video frame image area corresponding to the intercepting region in the video stream, rather than on the entire display area of the video stream, so that the identification speed of the two-dimensional code can be increased and the identification precision and efficiency can be improved.
In some embodiments, step S11 includes: during a video call between a first user and a second user, the first user equipment, in response to a track drawing start trigger operation performed by the first user on the video stream of the second user, pauses playback of the video stream of the second user; in response to the track drawing operation performed by the first user on the video stream of the second user, obtains the track drawn by the first user and determines an intercepting region according to the track; and performs a two-dimensional code identification operation on the intercepting region in the current video frame corresponding to the video stream, and resumes playback of the video stream of the second user. In some embodiments, the track drawing start trigger operation may be the first user's finger being pressed at a certain position on the screen of the first user equipment, or the finger, after pressing, being moved while kept pressed by a distance greater than or equal to a predetermined distance threshold (e.g., 10 pixels, 1 cm, etc.), or the first user clicking a specific button on the current page (e.g., a "start drawing track" button). In some embodiments, after the intercepting region is determined according to the track currently drawn by the first user, a two-dimensional code identification operation is performed on the image area corresponding to the intercepting region in the current video frame image, so as to identify the two-dimensional code information contained in the two-dimensional code displayed in that area, where the current video frame image corresponds to the video picture of the second user at the moment the video stream of the second user was paused. In some embodiments, playback of the video stream of the second user resumes after the identification succeeds. In some embodiments, playback also resumes directly after the identification fails; alternatively, playback is not resumed directly after a failure, the first user may perform the track drawing operation on the current video frame again, the first user equipment re-determines the intercepting region and retries the two-dimensional code identification operation on the corresponding image area of the current video frame, and playback of the video stream of the second user resumes once the number of failed identifications reaches a predetermined threshold. In some embodiments, after the identification fails, playback is not resumed directly; instead, a "resume playback" button is placed on the current page, and playback of the video stream of the second user resumes after the user clicks the button.
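A minimal control-flow sketch of this pause-scan-resume variant, assuming a hypothetical player object with pause(), resume() and current_frame() methods and reusing identify_in_region() from the earlier sketch:

    MAX_FAILED_ATTEMPTS = 3   # assumed threshold before playback resumes automatically

    def scan_while_paused(player, wait_for_track):
        # Pause the second user's video stream, identify within the drawn region on the
        # frozen current frame, then resume playback whether or not identification succeeds.
        player.pause()
        frame = player.current_frame()
        try:
            for _ in range(MAX_FAILED_ATTEMPTS):
                track = wait_for_track()              # blocks until the first user finishes drawing
                result = identify_in_region(frame, track)
                if result is not None:
                    return result
            return None
        finally:
            player.resume()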
In some embodiments, step S11 includes: during a video call between a first user and a second user, the first user equipment, in response to a track drawing start trigger operation performed by the first user on the video stream of the second user, acquires a first current video frame image corresponding to the video stream and presents it on the video stream; in response to the track drawing operation performed by the first user on the first current video frame image, obtains the track drawn by the first user and determines an intercepting region according to the track; and performs a two-dimensional code identification operation on the intercepting region in the first current video frame image, and then cancels the presentation of the first current video frame image. In some embodiments, in response to the track drawing start trigger operation performed by the first user on the video stream of the second user, the current video frame image corresponding to the video stream is acquired and presented as an overlay on the video stream; after the intercepting region is determined according to the track currently drawn by the first user, a two-dimensional code identification operation is performed on the image area of the current video frame image corresponding to the intercepting region, so as to identify the two-dimensional code information contained in the two-dimensional code displayed in that area. In some embodiments, the current video frame image is hidden after the identification succeeds. In some embodiments, the current video frame image is also hidden directly after the identification fails; alternatively, it is not hidden directly after a failure, the first user may perform the track drawing operation on the current video frame again, the first user equipment re-determines the intercepting region and retries the two-dimensional code identification operation on the corresponding image area of the current video frame, and the current video frame image is hidden once the number of failed identifications reaches a predetermined threshold. In some embodiments, the current video frame image is not hidden directly after the identification fails; instead, a predetermined button is placed on the current page, and the current video frame image is hidden after the user clicks the button.
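The snapshot-overlay variant can be sketched the same way; here overlay.show() and overlay.hide() stand in for presenting and cancelling the first current video frame image and are hypothetical names:

    def scan_on_snapshot(player, overlay, wait_for_track):
        # Freeze a copy of the current frame on top of the live stream instead of pausing it.
        snapshot = player.current_frame().copy()
        overlay.show(snapshot)
        try:
            return identify_in_region(snapshot, wait_for_track())
        finally:
            overlay.hide()                            # cancel presentation of the frame image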
In some embodiments, step S11 includes: during a video call between a first user and a second user, the first user equipment, in response to a track drawing operation performed by the first user on the video stream of the second user, obtains the track drawn by the first user, determines an intercepting region according to the track, and performs a two-dimensional code identification operation on the intercepting region in the current video frame corresponding to the video stream. In some embodiments, in response to a track drawing end event corresponding to the track drawing operation (for example, the first user's finger being lifted from the screen of the first user equipment), the corresponding intercepting region is determined according to the track currently drawn by the first user, and a two-dimensional code identification operation is performed on the image area corresponding to the intercepting region in the current video frame image corresponding to the video stream of the second user, so as to identify the two-dimensional code information contained in the two-dimensional code displayed in that area.
In some embodiments, obtaining the track drawn by the first user and determining the intercepting region according to the track includes: obtaining the track drawn by the first user, detecting, in response to a track drawing end event corresponding to the track drawing operation, whether the track drawn by the first user is closed, and if so, determining the area enclosed by the drawn track as the intercepting region. In some embodiments, the track drawing end event corresponding to the track drawing operation may be the first user's finger being lifted from the screen of the first user equipment, the finger moving out of the display area of the video picture of the second user, or the time for which the finger remains pressed at a certain position on the screen exceeding a predetermined duration threshold. In some embodiments, whether the track currently drawn by the first user is closed may be determined by detecting whether the track intersects itself; if it does, the track is determined to be closed, and the enclosed area is determined as the intercepting region.
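One way to detect whether the drawn track is closed is to test whether any two non-adjacent track segments intersect; the following is an illustrative sketch under that assumption (collinear edge cases are ignored):

    def segments_intersect(p1, p2, p3, p4):
        # Proper-intersection test for segments p1-p2 and p3-p4 using orientation signs.
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
        d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    def track_is_closed(points):
        # The track is treated as closed if it intersects itself anywhere.
        segments = list(zip(points, points[1:]))
        for i in range(len(segments)):
            for j in range(i + 2, len(segments)):     # skip adjacent segments sharing a point
                if segments_intersect(*segments[i], *segments[j]):
                    return True
        return False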
In some embodiments, the method further includes: if the track drawn by the first user is not closed, the first user equipment determines a virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track, and determines the area enclosed by the virtual closed track as the intercepting region. In some embodiments, if the track drawn by the first user is not closed, the drawing start point and the drawing end point may be connected by a virtual straight line to obtain the corresponding virtual closed region, and the area enclosed by the virtual closed track is determined as the intercepting region. In some embodiments, if the track drawn by the first user is not closed, a virtual tangent extension line may be drawn at the drawing start point and at the drawing end point respectively, the corresponding virtual closed region is obtained from the two virtual tangent extension lines and the screen boundary of the first user equipment or the boundary of the video picture of the second user, and the area enclosed by the virtual closed track is determined as the intercepting region.
In some embodiments, the determining the virtual closed area corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: and connecting the drawing starting point and the drawing ending point through a virtual straight line to obtain a virtual closed region corresponding to the track. In some embodiments, the virtual straight line may or may not be displayed on the first user device screen.
In some embodiments, determining the virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: drawing a virtual tangent extension line at the drawing start point and at the drawing end point respectively, and obtaining the virtual closed region corresponding to the track from the two virtual tangent extension lines and the boundary of the video stream. In some embodiments, the virtual tangent extension lines may or may not be displayed on the screen of the first user equipment. In some embodiments, the boundary of the video stream may be the boundary of the screen of the first user equipment or the boundary of the video picture of the second user.
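A simplified sketch of this tangent-extension closure, assuming the track is a list of (x, y) points inside a frame of the given width and height; a complete implementation would additionally insert any frame corners lying between the two boundary hit points, which this sketch omits:

    import numpy as np

    def ray_to_boundary(point, direction, width, height):
        # Extend a ray from `point` along `direction` until it first reaches the frame boundary.
        px, py = float(point[0]), float(point[1])
        dx, dy = float(direction[0]), float(direction[1])
        ts = []
        if dx > 0: ts.append((width - 1 - px) / dx)
        if dx < 0: ts.append(-px / dx)
        if dy > 0: ts.append((height - 1 - py) / dy)
        if dy < 0: ts.append(-py / dy)
        t = min(t for t in ts if t >= 0) if ts else 0.0
        return (px + t * dx, py + t * dy)

    def close_with_tangents(points, width, height):
        # Close an open track by extending virtual tangent lines at both end points
        # out to the boundary of the video stream.
        pts = np.asarray(points, dtype=float)
        start_hit = ray_to_boundary(pts[0], pts[0] - pts[1], width, height)
        end_hit = ray_to_boundary(pts[-1], pts[-1] - pts[-2], width, height)
        return np.vstack([[start_hit], pts, [end_hit]]).astype(np.int32)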
In some embodiments, obtaining the track drawn by the first user and determining the intercepting region according to the track includes: obtaining the track drawn by the first user, and determining the area enclosed by the track drawn by the first user as the intercepting region in response to a track closing event corresponding to the track drawing operation. In some embodiments, a track closing event corresponding to the track drawing operation performed by the first user on the video picture of the second user may be used directly as the track drawing end event: as soon as the track currently drawn by the first user becomes closed, the area it encloses is determined as the intercepting region.
In some embodiments, the method further includes: if no two-dimensional code information is identified in the intercepting region in the current video frame, the first user equipment performs a two-dimensional code identification operation on the intercepting region in a target video frame preceding the current video frame in the video stream. In some embodiments, if no two-dimensional code is identified in the image area corresponding to the intercepting region in the current video frame image corresponding to the video stream of the second user, a two-dimensional code identification operation is performed on the image area corresponding to the intercepting region in a target video frame image preceding the current video frame image in the video stream of the second user. In some embodiments, the target video frame image may be the video frame image of the video stream of the second user at the start time point of the track drawing operation of the first user. In some embodiments, the target video frame image may also be the video frame image immediately preceding the current video frame image in the video stream of the second user.
In some embodiments, if no two-dimensional code information is identified in the intercepting region in the current video frame, performing a two-dimensional code identification operation on the intercepting region in a target video frame preceding the current video frame in the video stream includes: if no two-dimensional code information is identified in the intercepting region in the current video frame, acquiring the video frame immediately preceding the current video frame and performing a two-dimensional code identification operation on the intercepting region in that preceding video frame, and repeating this step frame by frame until two-dimensional code information is identified from the intercepting region in a target video frame. In some embodiments, the target video frame image is first the video frame image immediately preceding the current video frame image in the video stream of the second user, and a two-dimensional code identification operation is performed on the image area corresponding to the intercepting region in that target video frame image; if no two-dimensional code is identified in it, the target video frame image is set to the video frame image immediately preceding it and the identification operation is performed again, and so on, until two-dimensional code information is identified from the image area corresponding to the intercepting region in a target video frame.
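An illustrative sketch of this frame-by-frame fallback, assuming the first user equipment keeps a small rolling buffer of recently displayed (timestamp, frame) pairs and reusing identify_in_region() from the earlier sketch; the buffer size is an assumption:

    from collections import deque

    FRAME_BUFFER = deque(maxlen=90)   # assumed: about 3 seconds of frames at 30 fps

    def identify_with_backtracking(current_frame, track):
        # Try the current frame first, then step back through earlier buffered frames
        # until two-dimensional code information is identified or the buffer is exhausted.
        result = identify_in_region(current_frame, track)
        if result is not None:
            return result
        for _, earlier_frame in reversed(FRAME_BUFFER):
            result = identify_in_region(earlier_frame, track)
            if result is not None:
                return result
        return None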
In some embodiments, if the two-dimensional code information is not identified in the truncated area in the current video frame, performing a two-dimensional code identification operation on the truncated area in a target video frame preceding the current video frame in the video stream includes: acquiring a starting time point of the track drawing operation; and acquiring a target video frame corresponding to the starting time point from the video stream, and executing two-dimensional code identification operation on the intercepted area in the target video frame. In some embodiments, in response to a track drawing operation performed by a first user on a video frame of a second user, a starting point in time of the track drawing operation is recorded, which may be recorded in a memory, or may also be recorded locally to the first user device. In some embodiments, a starting time point of the track drawing operation is read, a video frame image corresponding to the video stream of the second user at the starting time point is determined to be a target video frame image, and a two-dimensional code identification operation is performed on an image area corresponding to the intercepting area in the target video frame image.
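The variant that targets the frame displayed when the drawing started can be sketched as follows, recording the start time point in memory and reusing FRAME_BUFFER from the previous sketch; all names are hypothetical:

    import time

    track_drawing_start = None   # start time point of the track drawing operation

    def on_track_drawing_start():
        global track_drawing_start
        track_drawing_start = time.monotonic()   # could equally be stored locally on the device

    def frame_at_drawing_start():
        # Pick the buffered frame whose timestamp is closest to the recorded start time point.
        if track_drawing_start is None or not FRAME_BUFFER:
            return None
        return min(FRAME_BUFFER, key=lambda item: abs(item[0] - track_drawing_start))[1]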
In some embodiments, the method further comprises: and if the first user equipment does not recognize the two-dimensional code information in the intercepting area in the current video frame, executing two-dimensional code recognition operation on all display areas of the current video frame. In some embodiments, if the two-dimensional code is not recognized in the image area corresponding to the truncated area in the current video frame image corresponding to the video stream of the second user, performing a two-dimensional code recognition operation on all display areas in the current video frame image.
In some embodiments, the method further includes: if the identification succeeds, the first user equipment generates an identification success prompt message and sends it to the second user equipment corresponding to the second user, so that the identification success prompt message is presented on the second user equipment. In some embodiments, if the two-dimensional code is identified successfully, an identification success prompt message is generated, sent to the second user equipment and presented there, so that the second user no longer needs to keep aiming the camera of the second user equipment at the two-dimensional code to be shown to the first user; the identification success prompt message may be sent to the second user equipment directly or via a server. In some embodiments, the identification success prompt may be presented on the second user equipment in a visual form (e.g., text, an icon, or text plus an icon) or played back as voice.
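A sketch of such an identification success prompt message, assuming a hypothetical signaling_channel.send() on the call's existing signalling path (direct or relayed via the server); the message fields are assumptions of this sketch:

    import json

    def notify_identification_success(signaling_channel, via_server=False):
        # Tell the second user equipment that identification succeeded, so the second user
        # can stop aiming the camera at the two-dimensional code.
        signaling_channel.send(json.dumps({
            "type": "qr_identification_succeeded",
            "text": "Two-dimensional code identified successfully.",
            "via_server": via_server,
        }))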
In some embodiments, the method further includes: the first user equipment sends the track drawn by the first user in real time to the second user equipment corresponding to the second user, so that the track drawn by the first user is presented on the second user equipment in real time. In some embodiments, while the first user performs the track drawing operation on the video picture of the second user, the first user equipment sends the track drawn by the first user to the second user equipment in real time, where it is displayed.
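Similarly, the real-time track synchronisation could send each newly drawn point in a normalised form so the second user equipment can redraw it at its own resolution; the message format below is an assumption of this sketch:

    import json

    def send_track_point(signaling_channel, x, y, picture_width, picture_height):
        # Stream one drawn point to the second user equipment for real-time presentation.
        signaling_channel.send(json.dumps({
            "type": "track_point",
            "x": x / picture_width,
            "y": y / picture_height,
        }))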
Fig. 2 shows a flowchart of a method for identifying a two-dimensional code applied to a second user equipment according to an embodiment of the present application, where the method includes step S21 and step S22. In step S21, during a video call between a first user and a second user, the second user equipment, in response to a track drawing operation performed by the second user on the video stream of the second user, obtains the track drawn by the second user, determines an intercepting region according to the track, and performs a two-dimensional code identification operation on the intercepting region in the video stream; in step S22, if the identification succeeds, the second user equipment sends the two-dimensional code information obtained by the identification to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
In step S21, during a video call between a first user and a second user, the second user equipment, in response to a track drawing operation performed by the second user on the video stream of the second user, obtains the track drawn by the second user, determines an intercepting region according to the track, and performs a two-dimensional code identification operation on the intercepting region in the video stream. In some embodiments, during the video call, the second user aims the camera of the second user equipment at the two-dimensional code, so that the two-dimensional code appears in the video picture of the second user presented on the second user equipment; the second user does not need to exit the video call at all and does not need to acquire the two-dimensional code by photographing it or taking a screenshot. Preferably, the second user switches the front camera currently used by the second user equipment to the rear camera and aims the rear camera at the two-dimensional code; preferably, the second user also switches the picture currently presented on the second user equipment from the video picture of the first user to the video picture of the second user. The remaining operations are the same as or similar to those in the foregoing embodiments and are not described in detail here.
In step S22, if the identification succeeds, the second user equipment sends the two-dimensional code information obtained by the identification to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information. The related operations are the same as or similar to those in the foregoing embodiments and are not described in detail here.
Fig. 3 shows a structural diagram of a first user equipment for identifying a two-dimensional code according to an embodiment of the present application, where the first user equipment includes a module 11 and a module 12. The module 11 is configured to, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream; the module 12 is configured to, if the identification succeeds, process the two-dimensional code information obtained by the identification.
The module 11 is configured to, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the video stream. In some embodiments, during the video call, the second user aims the camera of the second user equipment at the two-dimensional code to be shown to the first user, and the code is sent to the first user equipment as part of the video stream, so that it appears in the video picture of the second user presented on the first user equipment; the second user does not need to exit the video call at all and does not need to provide the two-dimensional code to the first user by photographing it or taking a screenshot. In some embodiments, when the first user sees the two-dimensional code in the video picture of the second user, the first user may perform a track drawing operation on the video picture with a finger (for example, pressing the finger at a certain position on the screen of the first user equipment and moving it while keeping it pressed); the first user equipment then obtains the track drawn by the first user, and, in response to a track drawing end event corresponding to the track drawing operation (for example, the finger being lifted from the screen), determines the corresponding intercepting region according to the track currently drawn by the first user. In some embodiments, the track drawn by the first user may or may not be displayed on the screen of the first user equipment. In some embodiments, if the track currently drawn by the first user is closed, the area enclosed by the closed track is determined as the intercepting region. In some embodiments, if the track currently drawn by the first user is not closed, the drawing start point (for example, where the finger was pressed) and the drawing end point (for example, where the finger was lifted) may be connected by a virtual straight line to obtain a virtual closed track corresponding to the drawn track, and the area enclosed by the virtual closed track is determined as the intercepting region. In some embodiments, if the track currently drawn by the first user is not closed, a virtual tangent extension line may be drawn at the drawing start point and at the drawing end point, a virtual closed track corresponding to the drawn track is obtained from the two virtual tangent extension lines and the screen boundary of the first user equipment, and the area enclosed by the virtual closed track is determined as the intercepting region. In some embodiments, the two-dimensional code identification operation is performed on the video frame image area corresponding to the intercepting region in the video picture of the second user, so as to identify the two-dimensional code information contained in the two-dimensional code displayed in that area.
In some embodiments, the two-dimensional code identification operation is performed only on the video frame image area corresponding to the intercepting region in the video picture of the second user, rather than on all display areas of the video picture of the second user, so that the identification speed of the two-dimensional code can be increased and the identification precision and efficiency can be improved.
The module 12 is configured to, if the identification succeeds, process the two-dimensional code information obtained by the identification. In some embodiments, if the identification succeeds, the two-dimensional code information obtained by the identification may be processed directly, or it may be processed according to the user authorization information or user identification information of the first user (for example, a token or a UUID (Universally Unique Identifier)), or it may be processed according to the real personal identity information of the first user bound to the video call application.
In some embodiments, the module 11 is configured to: during a video call between a first user and a second user, in response to a track drawing start trigger operation performed by the first user on the video stream of the second user, pause playback of the video stream of the second user; in response to the track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user and determine an intercepting region according to the track; and perform a two-dimensional code identification operation on the intercepting region in the current video frame corresponding to the video stream, and resume playback of the video stream of the second user. The related operations are the same as or similar to those of the embodiment shown in Fig. 1 and are therefore not described in detail here, and are incorporated herein by reference.
In some embodiments, the module 11 is configured to: during a video call between a first user and a second user, in response to a track drawing start trigger operation performed by the first user on the video stream of the second user, acquire a first current video frame image corresponding to the video stream and present it on the video stream; in response to the track drawing operation performed by the first user on the first current video frame image, obtain the track drawn by the first user and determine an intercepting region according to the track; and perform a two-dimensional code identification operation on the intercepting region in the first current video frame image, and cancel the presentation of the first current video frame image. The related operations are the same as or similar to those of the embodiment shown in Fig. 1 and are therefore not described in detail here, and are incorporated herein by reference.
In some embodiments, the module 11 is configured to: during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user, determine an intercepting region according to the track, and perform a two-dimensional code identification operation on the intercepting region in the current video frame corresponding to the video stream. The related operations are the same as or similar to those of the embodiment shown in Fig. 1 and are therefore not described in detail here, and are incorporated herein by reference.
In some embodiments, the obtaining the track drawn by the first user, determining the intercepting region according to the track, includes: and obtaining the track drawn by the first user, responding to a track drawing ending event corresponding to the track drawing operation, detecting whether the track drawn by the first user is closed, and if so, determining an area enclosed by the drawn track as an intercepting area. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the equipment is further configured to: if the track drawn by the first user is not closed, determine a virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track, and determine the area enclosed by the virtual closed track as the intercepting region. The related operations are the same as or similar to those of the embodiment shown in Fig. 1 and are therefore not described in detail here, and are incorporated herein by reference.
In some embodiments, the determining the virtual closed area corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: and connecting the drawing starting point and the drawing ending point through a virtual straight line to obtain a virtual closed region corresponding to the track. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, determining the virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: drawing a virtual tangent extension line at the drawing start point and at the drawing end point respectively, and obtaining the virtual closed region corresponding to the track from the two virtual tangent extension lines and the boundary of the video stream. The related operations are the same as or similar to those of the embodiment shown in Fig. 1 and are therefore not described in detail here, and are incorporated herein by reference.
In some embodiments, the obtaining the track drawn by the first user, determining the intercepting region according to the track, includes: and obtaining the track drawn by the first user, and determining an area surrounded by the track drawn by the first user as an intercepting area in response to a track closing event corresponding to the track drawing operation. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: and if the two-dimensional code information is not recognized in the intercepting region in the current video frame, executing two-dimensional code recognition operation on the intercepting region in a target video frame before the current video frame in the video stream. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, if the two-dimensional code information is not identified in the intercepting area in the current video frame, performing a two-dimensional code identification operation on the intercepting area in a target video frame preceding the current video frame in the video stream includes: if the two-dimensional code information is not recognized in the intercepting area in the current video frame, acquiring a previous video frame corresponding to the current video frame and executing the two-dimensional code recognition operation on the intercepting area in that previous video frame, and repeating this process frame by frame until the two-dimensional code information is recognized from the intercepting area in the target video frame. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, if the two-dimensional code information is not identified in the intercepting area in the current video frame, performing a two-dimensional code identification operation on the intercepting area in a target video frame preceding the current video frame in the video stream includes: acquiring a starting time point of the track drawing operation; and acquiring a target video frame corresponding to the starting time point from the video stream, and executing the two-dimensional code identification operation on the intercepting area in the target video frame. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
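The two fallback strategies above (stepping back frame by frame, or jumping to the frame captured at the starting time point of the track drawing operation) might be combined roughly as follows, reusing the decode_qr_in_region sketch given earlier; frame_buffer is a hypothetical list of (timestamp, frame) pairs maintained by the caller and is not defined in this disclosure:

    def decode_with_frame_fallback(frame_buffer, track_points, draw_start_ts=None):
        """Walk backwards through buffered frames, stopping at the frame captured when drawing started."""
        for ts, frame in reversed(frame_buffer):  # frame_buffer: [(timestamp, frame), ...], newest last
            if draw_start_ts is not None and ts < draw_start_ts:
                break                             # do not look further back than the drawing start time
            data = decode_qr_in_region(frame, track_points)
            if data:
                return data
        return None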
In some embodiments, the apparatus is further to: and if the two-dimensional code information is not recognized in the intercepting area in the current video frame, executing two-dimensional code recognition operation on all display areas of the current video frame. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
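A short sketch of this whole-frame fallback, again reusing decode_qr_in_region from the earlier illustration:

    import cv2

    def decode_with_full_frame_fallback(frame, track_points):
        """Try the intercepting area first, then scan all display areas of the current frame."""
        data = decode_qr_in_region(frame, track_points)
        if data:
            return data
        whole, _, _ = cv2.QRCodeDetector().detectAndDecode(frame)
        return whole or None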
In some embodiments, the apparatus is further to: if the identification is successful, generating identification success prompt information, and sending the identification success prompt information to second user equipment corresponding to the second user so as to present the identification success prompt information on the second user equipment. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the apparatus is further to: and sending the track drawn by the first user to second user equipment corresponding to the second user in real time so as to present the track drawn by the first user on the second user equipment in real time. The related operations are the same as or similar to those of the embodiment shown in fig. 1, and thus are not described in detail herein, and are incorporated by reference.
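One way such real-time mirroring could be realized is to forward each sampled track point over the call's existing signalling or data channel; the transport function send_to_peer below is hypothetical and merely stands in for whatever messaging mechanism the video call already provides:

    import json
    import time

    def sync_track_point(send_to_peer, x, y):
        """Forward one sampled track point so the second user equipment can render the stroke in real time."""
        message = {
            "type": "track_point",
            "x": int(x),
            "y": int(y),
            "ts": time.time(),                    # lets the receiver order and animate the stroke
        }
        send_to_peer(json.dumps(message))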
Fig. 4 shows a structure diagram of a second user equipment for identifying two-dimensional codes according to an embodiment of the present application, where the second user equipment includes a second-first module 21 and a second-second module 22. The second-first module 21 is configured to, in a video call process between a first user and a second user, obtain a track drawn by the second user in response to a track drawing operation of the second user on a video stream of the second user, determine an interception area according to the track, and perform a two-dimensional code recognition operation on the interception area in the video stream; and the second-second module 22 is configured to send the two-dimensional code information obtained by identification to the first user equipment corresponding to the first user if the identification is successful, so that the first user equipment processes the two-dimensional code information.
The second-first module 21 is configured to, in a video call process between a first user and a second user, respond to a track drawing operation of the second user on a video stream of the second user, obtain a track drawn by the second user, determine an interception area according to the track, and perform a two-dimensional code recognition operation on the interception area in the video stream. In some embodiments, during the video call between the first user and the second user, the second user aims the camera of the second user equipment at the two-dimensional code, so that the two-dimensional code is displayed in the video picture of the second user presented on the second user equipment; the second user does not need to exit the video call at all, and does not need to acquire the two-dimensional code by photographing or taking a screenshot. Preferably, the second user switches the front camera currently used by the second user equipment to the rear camera and aims the rear camera at the two-dimensional code; preferably, the second user also switches the video picture currently presented on the second user equipment from the video picture of the first user to the video picture of the second user. The related operations are the same as or similar to those in the foregoing embodiments, and will not be described in detail herein.
And the second-second module 22 is configured to send the two-dimensional code information obtained by identification to the first user equipment corresponding to the first user if the identification is successful, so that the first user equipment processes the two-dimensional code information. The related operations are the same as or similar to those in the foregoing embodiments, and will not be described in detail herein.
FIG. 5 illustrates an exemplary system that can be used to implement various embodiments described herein.
In some embodiments, as shown in fig. 5, the system 300 can function as any of the devices of the various described embodiments. In some embodiments, system 300 can include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described herein.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as, for example, a suitable DRAM. In some embodiments, the system memory 315 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or which may be accessed by the device without being part of the device. For example, NVM/storage 320 may be accessed over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, the system 300 may be, but is not limited to being: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
The present application also provides a computer readable storage medium storing computer code which, when executed, performs a method as claimed in any preceding claim.
The present application also provides a computer program product which, when executed by a computer device, performs a method as claimed in any preceding claim.
The present application also provides a computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
The one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions as described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the form of computer program instructions present in a computer readable medium includes, but is not limited to, source files, executable files, installation package files, etc., and accordingly, the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media whereby a communication signal containing, for example, computer readable instructions, data structures, program modules, or other data, is transferred from one system to another. Communication media may include wired transmission media such as cables and wires (e.g., optical fiber, coaxial cable, etc.) and wireless (non-wired transmission) media capable of transmitting energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared waves. Computer readable instructions, data structures, program modules, or other data may be embodied as a modulated data signal, for example, in a wireless medium, such as a carrier wave or similar mechanism, such as that embodied as part of spread spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); nonvolatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); or other media, now known or later developed, that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the present application as described above.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.

Claims (16)

1. A method for identifying a two-dimensional code, applied to a first user equipment, wherein the method comprises the following steps:
in the video call process of a first user and a second user, responding to track drawing operation of the first user on a video stream of the second user, obtaining a track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
if the identification is successful, processing the two-dimensional code information obtained by the identification;
in the video call process of a first user and a second user, responding to a track drawing operation of the first user on a video stream of the second user, obtaining a track drawn by the first user, determining an intercepting area according to the track, and executing a two-dimensional code recognition operation on the intercepting area in the video stream, wherein the method comprises the following steps:
in the video call process of a first user and a second user, responding to track drawing operation of the first user on a video stream of the second user, obtaining a track drawn by the first user, determining an intercepting region according to the track, and executing two-dimensional code identification operation on the intercepting region in a current video frame corresponding to the video stream;
Wherein the method further comprises:
and if the two-dimensional code information is not recognized in the intercepting region in the current video frame, executing two-dimensional code recognition operation on the intercepting region in a target video frame before the current video frame in the video stream.
2. The method of claim 1, wherein the responding to the track drawing operation of the first user for the video stream of the second user in the video call process of the first user and the second user, obtaining the track drawn by the first user, determining an interception area according to the track, and executing two-dimensional code recognition operation on the interception area in the video stream comprises:
in the video call process of a first user and a second user, responding to track drawing start triggering operation of the first user on the video stream of the second user, and suspending playing of the video stream of the second user;
responding to the track drawing operation of the first user on the video stream of the second user, obtaining the track drawn by the first user, and determining an intercepting region according to the track;
and executing two-dimensional code identification operation on the intercepting region in the current video frame corresponding to the video stream, and recovering to play the video stream of the second user.
3. The method of claim 1, wherein the responding to the track drawing operation of the first user for the video stream of the second user in the video call process of the first user and the second user, obtaining the track drawn by the first user, determining an interception area according to the track, and executing two-dimensional code recognition operation on the interception area in the video stream comprises:
in the video call process of a first user and a second user, responding to track drawing start triggering operation of the first user on a video stream of the second user, acquiring a first current video frame image corresponding to the video stream, and presenting the first current video frame image on the video stream;
responding to the track drawing operation of the first user on the first current video frame image, obtaining a track drawn by the first user, and determining an intercepting region according to the track;
and executing two-dimensional code identification operation on the intercepting region in the first current video frame image, and canceling presentation of the first current video frame image.
4. The method according to any one of claims 1 to 3, wherein the obtaining the track drawn by the first user, determining an intercepting region according to the track, comprises:
And obtaining the track drawn by the first user, responding to a track drawing ending event corresponding to the track drawing operation, detecting whether the track drawn by the first user is closed, and if so, determining an area enclosed by the drawn track as an intercepting area.
5. The method of claim 4, wherein the method further comprises:
if the drawn track of the first user is not closed, determining a virtual closed track corresponding to the track according to a drawing starting point and a drawing ending point corresponding to the track, and determining an area surrounded by the virtual closed track as an intercepting area.
6. The method of claim 5, wherein the determining the virtual closed track corresponding to the track according to the drawing starting point and the drawing ending point corresponding to the track comprises:
connecting the drawing starting point and the drawing ending point through a virtual straight line to obtain the virtual closed track corresponding to the track.
7. The method of claim 5, wherein the determining the virtual closed track corresponding to the track according to the drawing starting point and the drawing ending point corresponding to the track comprises:
respectively drawing a virtual tangent extension line at the drawing starting point and the drawing ending point, and obtaining the virtual closed track corresponding to the track according to the two drawn virtual tangent extension lines and the boundary of the video stream.
8. The method according to any one of claims 1 to 3, wherein the obtaining the track drawn by the first user, determining an intercepting region according to the track, comprises:
and obtaining the track drawn by the first user, and determining an area surrounded by the track drawn by the first user as an intercepting area in response to a track closing event corresponding to the track drawing operation.
9. The method of claim 1, wherein the performing the two-dimensional code identification operation on the intercepting region in the target video frame preceding the current video frame in the video stream if no two-dimensional code information is identified in the intercepting region in the current video frame comprises:
if the two-dimensional code information is not recognized in the intercepting region in the current video frame, acquiring a previous video frame corresponding to the current video frame and executing the two-dimensional code recognition operation on the intercepting region in that previous video frame, and repeating this process frame by frame until the two-dimensional code information is recognized from the intercepting region in the target video frame.
10. The method of claim 1, wherein the performing the two-dimensional code identification operation on the intercepting region in the target video frame preceding the current video frame in the video stream if no two-dimensional code information is identified in the intercepting region in the current video frame comprises:
acquiring a starting time point of the track drawing operation;
and acquiring a target video frame corresponding to the starting time point from the video stream, and executing two-dimensional code identification operation on the intercepting region in the target video frame.
11. The method of claim 1, wherein the method further comprises:
and if the two-dimensional code information is not recognized in the intercepting area in the current video frame, executing two-dimensional code recognition operation on all display areas of the current video frame.
12. The method of claim 1, wherein the method further comprises:
if the identification is successful, generating identification success prompt information, and sending the identification success prompt information to second user equipment corresponding to the second user so as to present the identification success prompt information on the second user equipment.
13. The method of claim 1, wherein the method further comprises:
And sending the track drawn by the first user to second user equipment corresponding to the second user in real time so as to present the track drawn by the first user on the second user equipment in real time.
14. A method for identifying a two-dimensional code, applied to a second user equipment, wherein the method comprises the following steps:
in the video call process of a first user and a second user, responding to track drawing operation of the second user on a video stream of the second user, obtaining a track drawn by the second user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in a current video frame corresponding to the video stream;
if the identification is successful, the two-dimensional code information obtained through the identification is sent to first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information;
wherein the method further comprises:
and if the two-dimensional code information is not recognized in the intercepting region in the current video frame, executing two-dimensional code recognition operation on the intercepting region in a target video frame before the current video frame in the video stream.
15. An apparatus for identifying a two-dimensional code, wherein the apparatus comprises:
A processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any one of claims 1 to 14.
16. A computer readable medium storing instructions which, when executed by a computer, cause the computer to perform the operations of the method of any one of claims 1 to 14.
CN202011618821.3A 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code Active CN112818719B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011618821.3A CN112818719B (en) 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code
PCT/CN2021/125287 WO2022142620A1 (en) 2020-12-30 2021-10-21 Method and device for recognizing qr code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011618821.3A CN112818719B (en) 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code

Publications (2)

Publication Number Publication Date
CN112818719A CN112818719A (en) 2021-05-18
CN112818719B (en) 2023-06-23

Family

ID=75855836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011618821.3A Active CN112818719B (en) 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code

Country Status (2)

Country Link
CN (1) CN112818719B (en)
WO (1) WO2022142620A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818719B (en) * 2020-12-30 2023-06-23 上海掌门科技有限公司 Method and equipment for identifying two-dimensional code
CN113592468B (en) * 2021-07-12 2022-11-01 见面(天津)网络科技有限公司 Online payment method and device based on two-dimensional code

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090761A (en) * 2014-07-10 2014-10-08 福州瑞芯微电子有限公司 Screenshot application device and method
CN104573608A (en) * 2015-01-23 2015-04-29 苏州海博智能系统有限公司 Coded message scanning method and device
CN109286848A (en) * 2018-10-08 2019-01-29 腾讯科技(深圳)有限公司 A kind of exchange method, device and the storage medium of terminal video information
CN110659533A (en) * 2019-08-26 2020-01-07 福建天晴数码有限公司 Method for identifying two-dimensional code in video and computer readable storage medium
CN111935439A (en) * 2020-08-12 2020-11-13 维沃移动通信有限公司 Identification method and device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4293111B2 (en) * 2004-10-27 2009-07-08 株式会社デンソー Camera driving device, camera driving program, geometric shape code decoding device, and geometric shape code decoding program
CN101510269B (en) * 2009-02-18 2011-02-02 华为终端有限公司 Method and device for acquiring two-dimensional code in video
CN109636512A (en) * 2018-11-29 2019-04-16 苏宁易购集团股份有限公司 A kind of method and apparatus for realizing shopping process by video
CN111770380A (en) * 2020-01-16 2020-10-13 北京沃东天骏信息技术有限公司 Video processing method and device
CN112818719B (en) * 2020-12-30 2023-06-23 上海掌门科技有限公司 Method and equipment for identifying two-dimensional code

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090761A (en) * 2014-07-10 2014-10-08 福州瑞芯微电子有限公司 Screenshot application device and method
CN104573608A (en) * 2015-01-23 2015-04-29 苏州海博智能系统有限公司 Coded message scanning method and device
CN109286848A (en) * 2018-10-08 2019-01-29 腾讯科技(深圳)有限公司 A kind of exchange method, device and the storage medium of terminal video information
CN110659533A (en) * 2019-08-26 2020-01-07 福建天晴数码有限公司 Method for identifying two-dimensional code in video and computer readable storage medium
CN111935439A (en) * 2020-08-12 2020-11-13 维沃移动通信有限公司 Identification method and device and electronic equipment

Also Published As

Publication number Publication date
WO2022142620A1 (en) 2022-07-07
CN112818719A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN112818719B (en) Method and equipment for identifying two-dimensional code
CN112822431B (en) Method and equipment for private audio and video call
CN110795004B (en) Social method and device
CN110336733B (en) Method and equipment for presenting emoticon
CN110780955B (en) Method and equipment for processing expression message
CN113655927A (en) Interface interaction method and device
CN112822430B (en) Conference group merging method and device
CN111932230A (en) Method and equipment for modifying red envelope
CN114153535B (en) Method, apparatus, medium and program product for jumping pages on an open page
CN109636922B (en) Method and device for presenting augmented reality content
CN112261236B (en) Method and equipment for mute processing in multi-person voice
CN113157162B (en) Method, apparatus, medium and program product for revoking session messages
CN110460642B (en) Method and device for managing reading mode
CN112702257B (en) Method and device for deleting friend application
CN112788004B (en) Method, device and computer readable medium for executing instructions by virtual conference robot
CN112684961B (en) Method and equipment for processing session information
CN112787831B (en) Method and device for splitting conference group
CN111680249B (en) Method and device for pushing presentation information
CN110780788B (en) Method and device for executing touch operation
CN115278333B (en) Method, device, medium and program product for playing video
CN114338579B (en) Method, equipment and medium for dubbing
CN112685121B (en) Method and equipment for presenting session entry
CN114301861B (en) Method, equipment and medium for presenting mail
CN113535021B (en) Method, apparatus, medium, and program product for transmitting session message
CN111818013B (en) Method and device for adding friends

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant