WO2021047567A1 - Callback stream processing method and device - Google Patents

Callback stream processing method and device

Info

Publication number
WO2021047567A1
Authority
WO
WIPO (PCT)
Prior art keywords
callback
stream
electronic device
data
function
Prior art date
Application number
PCT/CN2020/114342
Other languages
English (en)
French (fr)
Inventor
陈谭坤
武华阳
沈日胜
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to JP2022516016A (JP7408784B2)
Priority to EP20862922.0A (EP4020966A4)
Publication of WO2021047567A1
Priority to US17/693,032 (US11849213B2)

Classifications

    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/62: Control of camera parameters via user interfaces
    • H04N 23/617: Upgrading or updating of programs or applications for camera control
    • H04N 23/63 (incl. 23/631, 23/632, 23/633): Control of cameras or camera modules by using electronic viewfinders, including graphical user interfaces for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to capturing, and for displaying additional information relating to control or operation of the camera
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 5/2222: Studio circuitry, devices, and equipment; prompting
    • H04M 1/72403: User interfaces for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72448: User interfaces for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 2250/52: Details of telephonic subscriber devices including functional features of a camera
    • H04L 51/046: Real-time or near real-time messaging; interoperability with other network applications or services

Definitions

  • the embodiments of the present application relate to the field of electronic technology, and in particular, to a method and device for processing a callback stream.
  • an APP can use the camera to take pictures, scan codes, conduct video chats, or perform augmented reality (AR) processing.
  • users operate the camera through APPs more and more frequently.
  • scenarios in which an APP uses the camera generally involve the camera preview stream (previewStream), and some scenarios also involve the callback stream (callbackStream).
  • scenarios such as code scanning, video chat, or AR involve callback streams, which may be referred to as callback stream scenarios.
  • the APP can call the openCamera() interface to start the camera, and then the APP can call the startPreview() interface to start previewing.
  • a hardware abstraction layer (HAL) in the electronic device can create a preview stream.
  • the APP can also perform initialization processing such as auto focus.
  • the APP can set the callback function.
  • the HAL layer can first stop the previous preview stream, then create a new preview stream, and create a callback stream after the new preview stream is successfully created.
  • the APP obtains the callback stream data through the callback function and processes the data accordingly.
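The sequential (prior-art) flow described above can be summarized in a short sketch. The class and step names below are illustrative stand-ins for the Android interfaces mentioned in the text (openCamera(), startPreview(), the callback setter), not the actual framework implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the sequential prior-art flow: the callback stream is created only
// after the callback function is set, the old preview stream is stopped, and a
// new preview stream is successfully created.
public class LegacyScanFlow {
    public final List<String> steps = new ArrayList<>();

    public void run() {
        steps.add("openCamera");           // APP calls openCamera() to start the camera
        steps.add("startPreview");         // APP calls startPreview(); HAL creates a preview stream
        steps.add("autoFocus");            // APP performs initialization such as auto focus
        steps.add("setCallback");          // APP sets the callback function
        steps.add("stopPreviewStream");    // HAL first stops the previous preview stream
        steps.add("createPreviewStream");  // HAL creates a new preview stream
        steps.add("createCallbackStream"); // callback stream created only after the new preview stream
    }
}
```

The key property of this flow, and the source of the latency the embodiments address, is that stream creation is strictly serialized behind the callback registration.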
  • the embodiment of the present application provides a method and device for processing a callback stream.
  • the electronic device can set up the preview stream and the callback stream in parallel, so as to start the callback stream earlier and obtain the callback stream data sooner.
  • the electronic device can directly return the callback stream data to the application through the callback function for processing. This shortens the time it takes the electronic device to create the preview stream and the callback stream, reduces the time for the application to obtain the callback stream data, cuts the application's processing time and the user's waiting time, and improves the user experience.
  • an embodiment of the present application provides a callback stream processing method, which is applied to an electronic device, and the method includes: the electronic device detects a first operation of a user using a first function of a first application. In response to the first operation, the electronic device starts the camera application and displays the first interface corresponding to the first function. After that, the electronic device creates the preview stream and the first callback stream in parallel. Then, the electronic device obtains first data, which is callback stream data corresponding to the first callback stream. The electronic device sets a callback function, and provides the first data to the first application program for processing through the callback function, so as to realize the first function.
  • the electronic device can create the preview stream and the callback stream in parallel, thereby reducing the time it takes to create the two streams and the time for the first application to obtain the callback stream data. This shortens the time for the first application to process the callback stream data and realize the first function, reduces the user's waiting time, and improves the user experience.
  • the first application is just a name to distinguish it from other applications.
  • the first application may instead be called a target application, a certain application, a specific application, or a preset application, or may be a concrete application such as WeChat, Alipay, QQ, FaceTime, Skype, Taobao, Meituan, or Jingdong.
  • the first function is only used to distinguish it from other functions.
  • the first function may instead be called a specific function, a preset function, a target function, or a certain function, for example a code scanning function, video chat function, augmented reality (AR) function, smart object recognition function, question scanning function, bank card scanning function, or document scanning function.
  • the first operation is just a name to distinguish it from other operations.
  • the first operation can be replaced by an operation such as an input operation, a tap operation, a gesture input operation, or a voice input operation.
  • the electronic device includes a parameter matching library, the parameter matching library includes at least one scene parameter group, and the scene parameter group includes the package name, activity name, and page name of the application.
  • the method further includes: the electronic device obtains the first scene parameter group corresponding to the first interface. The electronic device determines that the first scene parameter group matches the scene parameter group in the parameter matching library.
  • when the electronic device determines that the first scene parameter group corresponding to the first interface matches a scene parameter group in the parameter matching library, it can determine that the current scenario is a callback stream scenario, and can therefore create the preview stream and the first callback stream in parallel.
  • the method further includes: the electronic device obtains the first package name of the first application corresponding to the first interface.
  • the electronic device searches for the reference scene parameter group including the first package name from the parameter matching library.
  • the electronic device determining that the first scene parameter group matches the scene parameter group in the parameter matching library includes: the electronic device determines that the first scene parameter group matches the reference scene parameter group.
  • the electronic device can first narrow the many scene parameter groups in the parameter matching library down to the few reference scene parameter groups that include the first package name, and then use this small set to quickly determine whether the current scenario is a callback stream scenario.
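The two-step lookup described above can be sketched as follows. The data layout (scene parameter groups encoded as "packageName|activityName|pageName" strings) and all names are illustrative; the patent does not prescribe a concrete data structure:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two-step match: first filter the parameter matching library by
// package name, then compare the full scene parameter group (package name,
// activity name, page name) against that small reference set only.
public class SceneMatcher {
    // Each entry is one scene parameter group: "packageName|activityName|pageName".
    private final List<String> library = new ArrayList<>();

    public void add(String pkg, String activity, String page) {
        library.add(pkg + "|" + activity + "|" + page);
    }

    // Step 1: keep only the reference groups that include the first package name.
    public List<String> byPackage(String pkg) {
        List<String> refs = new ArrayList<>();
        for (String entry : library) {
            if (entry.startsWith(pkg + "|")) refs.add(entry);
        }
        return refs;
    }

    // Step 2: decide whether the current scene is a callback stream scenario
    // by checking it against the narrowed reference set.
    public boolean isCallbackScene(String pkg, String activity, String page) {
        return byPackage(pkg).contains(pkg + "|" + activity + "|" + page);
    }
}
```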
  • At least one scene parameter group in the parameter matching library is integrated into the packaging file of the first application. Or, at least one scene parameter group in the parameter matching library is pushed by the server. Or, at least one parameter group in the parameter matching library is obtained through learning by the electronic device.
  • the electronic device can obtain the reference scene parameter group in the parameter matching library in a variety of ways.
  • the method further includes: if the first scene parameter group does not match the scene parameter group in the parameter matching library, the electronic device creates a preview stream. If the electronic device sets the callback function, the electronic device creates a second callback stream. Then, the electronic device obtains second data, which is callback stream data corresponding to the second callback stream. The electronic device provides the second data to the first application program for processing through the callback function. The electronic device saves the first scene parameter group in the parameter matching library.
  • once the electronic device has decided to create the callback stream, it knows that the current scenario is a callback stream scenario, so it can add the first scene parameter group to the parameter matching library; later, the electronic device can use the first scene parameter group in the library to determine whether a scenario is a callback stream scenario.
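The fallback-and-learn path described above can be sketched as follows. The method and action names are illustrative assumptions; the point is only the control flow: on a library miss, the streams are created sequentially and the scene parameter group is recorded, so the next launch of the same scene takes the parallel path:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of self-learning: unknown scenes fall back to sequential creation
// (preview stream, then second callback stream) and are then saved to the
// parameter matching library for future parallel creation.
public class LearningLibrary {
    private final Set<String> library = new HashSet<>();
    public final List<String> actions = new ArrayList<>();

    public void handleLaunch(String pkg, String activity, String page) {
        String group = pkg + "|" + activity + "|" + page;
        if (library.contains(group)) {
            actions.add("createPreviewAndCallbackInParallel");
        } else {
            actions.add("createPreview");        // preview stream first
            actions.add("createSecondCallback"); // callback stream only after the callback is set
            library.add(group);                  // learn: save the scene parameter group
        }
    }
}
```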
  • before the electronic device acquires the first data, the method further includes: the electronic device performs focusing through the camera application.
  • the electronic device can return the callback stream data corresponding to the clear image obtained after the focusing is completed to the first application program, so that the first application program can process according to the callback stream data, thereby realizing the first function.
  • the first function is a code scanning function, a video chat function, an augmented reality AR function, a smart object recognition function, a question scanning function, a bank card scanning function, or a document scanning function.
  • the electronic device can implement various camera-related functions through the above callback stream processing method.
  • the operating system of the electronic device includes a camera service
  • the method further includes: the camera service sets identification information.
  • the electronic device creates the preview stream and the first callback stream in parallel, including: the first application program instructs to start the preview.
  • the camera service creates the preview stream and the first callback stream in parallel according to the identification information.
  • the camera service deletes the identification information after creating the first callback stream.
  • the electronic device can create a preview stream and a callback stream in parallel through the operating system and identification information.
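The identification-information mechanism in the bullets above can be sketched as follows. All class and method names are illustrative, not the actual camera framework: the camera service sets a flag when a callback stream scene is recognized, creates both streams concurrently when the preview is started, and then deletes the flag:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of parallel stream creation gated by "identification information".
public class CameraServiceSketch {
    private final AtomicBoolean callbackSceneFlag = new AtomicBoolean(false);

    // "The camera service sets identification information."
    public void markCallbackScene() { callbackSceneFlag.set(true); }

    // Called when the first application instructs to start the preview.
    // Returns true if the streams were created in parallel.
    public boolean startPreview() {
        boolean parallel = callbackSceneFlag.get();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<?> preview = pool.submit(() -> { /* create preview stream */ });
            if (parallel) {
                Future<?> callback = pool.submit(() -> { /* create first callback stream */ });
                callback.get();
            }
            preview.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        callbackSceneFlag.set(false); // "delete the identification information"
        return parallel;
    }
}
```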
  • acquiring the first data by the electronic device includes: acquiring the first data by a camera service.
  • the electronic device setting the callback function includes: the first application program setting the callback function.
  • the electronic device provides the first data to the first application through the callback function for processing, including: the camera service provides the first data to the first application through the callback function for processing.
  • the electronic device can create the callback stream and obtain the callback stream data through internal modules such as the camera service and the first application.
  • an embodiment of the present application provides a method for scanning a QR code, which is applied to an electronic device. The method includes: after detecting a user's first operation on the code scanning function, the electronic device starts the camera application in response to the first operation.
  • the electronic device displays the first interface corresponding to the code scanning function, and the camera of the electronic device is aimed at the two-dimensional code.
  • the electronic device obtains the first data according to the preview stream.
  • the electronic device displays the first image of the two-dimensional code according to the first data.
  • the electronic device acquires the second data according to the callback stream; the second data and the first data have different data formats. Then, the electronic device completes the code scanning recognition according to the second data and displays the second interface after the code scan succeeds.
  • the electronic device can create the preview stream and the callback stream in parallel, thereby reducing the time it takes to create the two streams and the time for the first application to obtain the callback stream data. This shortens the time for the first application to process the callback stream data and realize the QR code scanning function, thereby reducing the user's waiting time and improving the user experience.
  • the method further includes: the electronic device acquires the third data according to the callback stream, and the electronic device discards the third data.
  • the third data is callback stream data obtained by the electronic device according to the callback stream before focusing. Since the callback stream data acquired before the focusing is completed will not be reported to the APP, the electronic device can discard the callback stream data.
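The discard behavior described above can be sketched minimally. The names are illustrative; the only logic shown is that callback stream frames arriving before focusing completes are dropped rather than reported to the APP:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: pre-focus callback stream data ("third data") is discarded; only
// frames acquired after focusing completes are delivered through the callback.
public class PreFocusFrameFilter {
    private boolean focused = false;
    public final List<String> delivered = new ArrayList<>();

    public void onFocusComplete() { focused = true; }

    public void onCallbackFrame(String frame) {
        if (!focused) return;  // discard data acquired before focus completes
        delivered.add(frame);  // report post-focus data to the APP
    }
}
```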
  • the method further includes: the electronic device acquires fourth data according to the preview stream, and displays the second image of the QR code according to the fourth data; the pixel values of the pixels in the first image and the second image are different.
  • the fourth data is preview stream data obtained by the electronic device according to the preview stream after focusing is completed.
  • the first image of the QR code, described by the preview stream data acquired before focusing, is usually blurred, while the second image of the QR code, described by the preview stream data acquired after focusing is completed, is usually clear; therefore the pixel values of the pixels in the first image and the second image are different.
  • an embodiment of the present application provides an electronic device, including: a screen for displaying an interface; one or more processors; and a memory in which codes are stored.
  • the electronic device is caused to perform the following steps: detecting the first operation of the user using the first function of the first application; in response to the first operation, launching the camera application; displaying the first interface corresponding to the first function; creating the preview stream and the first callback stream in parallel; obtaining the first data, which is the callback stream data corresponding to the first callback stream; setting the callback function; and providing the first data to the first application through the callback function for processing, in order to realize the first function.
  • the electronic device includes a parameter matching library, the parameter matching library includes at least one scene parameter group, and the scene parameter group includes a package name, an activity name, and a page name of the application program.
  • the electronic device is also caused to perform the following steps: before creating the preview stream and the first callback stream in parallel, obtain the first scene parameter group corresponding to the first interface, and determine that the first scene parameter group matches a scene parameter group in the parameter matching library.
  • when the code is executed by the electronic device, the electronic device is also caused to perform the following steps: after displaying the first interface corresponding to the first function, obtain the first package name of the first application corresponding to the first interface, and find, from the parameter matching library, the reference scene parameter group that includes the first package name. Determining that the first scene parameter group matches a scene parameter group in the parameter matching library includes: determining that the first scene parameter group matches the reference scene parameter group.
  • At least one scene parameter group in the parameter matching library is integrated into the packaging file of the first application; or, at least one scene parameter group in the parameter matching library is pushed by the server; or, At least one parameter group in the parameter matching library is obtained through learning by the electronic device.
  • when the code is executed by the electronic device, the electronic device is also caused to perform the following steps: if the first scene parameter group does not match any scene parameter group in the parameter matching library, create a preview stream; if the callback function is set, create the second callback stream; obtain the second data, which is the callback stream data corresponding to the second callback stream; provide the second data to the first application through the callback function for processing; and save the first scene parameter group in the parameter matching library.
  • when the code is executed by the electronic device, the electronic device is further caused to perform the following step: before acquiring the first data, perform focusing through the camera application.
  • the operating system of the electronic device includes a camera service, and when the code is executed by the electronic device, the electronic device is also caused to perform the following steps: after it is determined that the first scene parameter group matches a scene parameter group in the parameter matching library, the camera service sets identification information. Creating the preview stream and the first callback stream in parallel includes: the first application instructs to start the preview; the camera service creates the preview stream and the first callback stream in parallel according to the identification information; and the camera service deletes the identification information after the first callback stream is created.
  • acquiring the first data specifically includes: acquiring the first data by the camera service.
  • Setting the callback function specifically includes: the first application setting the callback function.
  • Providing the first data to the first application program for processing through the callback function specifically includes: the camera service provides the first data to the first application program for processing through the callback function.
  • the first function is a code scanning function, a video chat function, an augmented reality AR function, a smart object recognition function, a question scanning function, a bank card scanning function, or a document scanning function.
  • an embodiment of the present application provides an electronic device, including: a screen for displaying an interface; one or more processors; and a memory in which codes are stored.
  • the electronic device is caused to perform the following steps: detecting the user's first operation on the code scanning function; in response to the first operation, launching the camera application; displaying the first interface corresponding to the code scanning function, with the camera of the electronic device aimed at the QR code; creating the preview stream and the callback stream in parallel; obtaining the first data according to the preview stream; displaying the first image of the QR code according to the first data; after the electronic device completes focusing, obtaining the second data according to the callback stream, where the second data and the first data have different data formats; and completing the code scanning recognition according to the second data and displaying the second interface after the code scan succeeds.
  • when the code is executed by the electronic device, the electronic device is further caused to perform the following steps: before the electronic device completes focusing, obtain the third data according to the callback stream, and discard the third data.
  • when the code is executed by the electronic device, the electronic device is also caused to perform the following steps: after the electronic device completes focusing, obtain the fourth data according to the preview stream, and display the second image of the QR code according to the fourth data; the pixel values of the pixels in the first image and the second image are different.
  • an embodiment of the present application provides a callback stream processing device, which is included in an electronic device.
  • the device has the function of realizing the behavior of the electronic device in any of the above aspects and possible design methods.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions. For example, application modules/units, framework modules/units, camera modules/units, scene recognition modules/units, etc.
  • an embodiment of the present application provides an electronic device, including: one or more processors; and a memory, in which code is stored.
  • the electronic device is caused to execute the callback stream processing method or the method of scanning the two-dimensional code in any one of the foregoing aspects or any possible implementation manner.
  • an embodiment of the present application provides a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to execute the callback stream processing method or the QR code scanning method in any of the above aspects or any possible implementation manner.
  • the embodiments of the present application provide a computer program product, which, when run on a computer, causes the computer to make an electronic device execute the callback stream processing method or the QR code scanning method in any of the above aspects or any possible implementation manner.
  • the embodiments of the present application provide a chip system applied to an electronic device. The chip system includes one or more interface circuits and one or more processors, interconnected by wires. The interface circuit is used to receive a signal from the memory of the electronic device and send the signal to the processor, the signal including computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device executes the callback stream processing method or the QR code scanning method in any of the above aspects or any possible implementation manner.
  • FIGS. 1A-1C are schematic diagrams of creating a preview stream and a callback stream in the prior art;
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 3 is a schematic structural diagram of another electronic device provided by an embodiment of the application.
  • FIG. 4 is a scanning flowchart provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of a set of interfaces provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of another set of interfaces provided by an embodiment of the application.
  • FIG. 7A is a schematic diagram of creating a preview stream and a callback stream according to an embodiment of the application.
  • FIG. 7B is another schematic diagram of creating a preview stream and a callback stream provided by an embodiment of the application.
  • FIG. 7C is another schematic diagram of creating a preview stream and a callback stream according to an embodiment of the application.
  • FIG. 7D is a schematic diagram of a code scanning interface provided by an embodiment of the application.
  • FIG. 7E is a schematic diagram of another code scanning interface provided by an embodiment of this application.
  • FIG. 8A is a sequence diagram for processing callback stream requests provided in the prior art
  • FIG. 8B is a sequence diagram for processing a callback stream request provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of an interface after a successful code scan provided by an embodiment of the application.
  • FIGS. 10A-10B are schematic diagrams of creating a preview stream and a callback stream according to an embodiment of the application;
  • FIG. 11A is a schematic diagram of an interface during a scanning process provided by the prior art.
  • FIG. 11B is a schematic diagram of an interface during a scanning process according to an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of another electronic device provided by an embodiment of the application.
  • FIG. 13 is a flowchart of callback stream processing provided by an embodiment of the application.
  • FIG. 14 is a schematic diagram of another interface provided by an embodiment of this application.
  • the HAL layer can update the previous preview stream, and then start to create the callback stream.
  • the APP obtains the callback stream data through the callback function, and performs processing according to the callback stream data.
  • as shown in FIG. 1C, before the APP sets the callback function, processing such as focusing or initialization may be skipped; the preview stream is directly stopped or updated, and creation of the callback stream begins.
  • the preview stream is a data stream
  • the preview stream data includes data information of the image collected by the camera.
  • the preview stream is used to return the preview image captured by the camera to the APP so that the APP can display the preview image on the screen.
  • the callback stream is also a data stream, and the callback stream data includes the data information of the image collected by the camera.
  • the callback stream data is used to return the data information of the image collected by the camera to the APP, so that the APP can process the image data collected by the camera to achieve specific functions.
  • the electronic device may perform recognition processing during code scanning according to the callback stream data, or perform video encoding/decoding and upload processing during video chat, so as to realize the code scanning or video chat function.
  • the preview stream data and the callback stream data may include the data information of the two-dimensional code image collected by the camera.
  • the preview stream data is continuously acquired and updated; after the creation of the callback stream is completed, the callback stream data is continuously acquired and updated.
  • the preview stream and the callback stream are processed differently after being returned to the APP, so the data information of the QR code image collected by the camera differs between the two streams.
  • the specific data format can also be different.
  • the data format of the preview stream may be format 33 HAL_PIXEL_FORMAT_BLOB, or format 34 HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
  • the data format of the callback stream can be format 35 HAL_PIXEL_FORMAT_YCbCr_420_888.
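For reference, the three HAL pixel-format codes quoted above can be collected into a lookup table. The table values come from the text; the helper function is illustrative only:

```python
# HAL pixel-format codes mentioned in the description above.
HAL_PIXEL_FORMATS = {
    33: "HAL_PIXEL_FORMAT_BLOB",
    34: "HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED",
    35: "HAL_PIXEL_FORMAT_YCbCr_420_888",
}

def describe_stream(kind, fmt):
    # Render a human-readable description of a stream's pixel format.
    return f"{kind} stream uses format {fmt} ({HAL_PIXEL_FORMATS[fmt]})"
```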
  • the upper-level APP of the electronic device needs to call the lower-level interface, and the lower-level interface also needs to notify the underlying hardware to make the corresponding settings, so as to complete the creation of the preview stream and the callback stream.
  • and obtain the preview stream data and the callback stream data. Therefore, the creation process of the preview stream and the callback stream takes a certain amount of time.
  • As shown in FIG. 1A, the time from when the electronic device starts the camera through the APP to when the preview stream and the callback stream are successfully created, and further to when the APP obtains the callback stream data, can be about 1070 ms.
  • the time from when the electronic device starts the camera through the APP to when the preview stream and the callback stream are successfully created, and further to when the APP obtains the callback stream data, can be approximately 920 ms.
  • the time from when the electronic device starts the camera through the APP to when the preview stream and the callback stream are successfully created, and further to when the APP obtains the callback stream data, can be approximately 560 ms.
  • the embodiment of the present application provides a method for processing a callback stream, which can be applied to an electronic device.
  • the operating system can also create the callback stream when the APP instructs to create the preview stream; that is, the electronic device can set up the preview stream and the callback stream in parallel, so as to start the callback stream in advance and obtain the callback stream data.
  • the electronic device can directly return the callback stream data obtained in advance to the APP for processing through the callback function.
  • the method for parallel creation of the preview stream and the callback stream provided by the embodiment of the present application can reduce the time consumed by the electronic device to create the preview stream and the callback stream, and reduce the time the APP takes to obtain the callback stream data, thereby reducing the time for the APP to process the callback stream data to implement camera-related functions, reducing the user's waiting time and improving the user experience.
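The parallel-creation idea can be illustrated with a toy timing model. This is not the patent's actual implementation: the stream names, durations, and thread-based setup are placeholders standing in for the real HAL work:

```python
import threading
import time

# Toy model: the preview stream and the callback stream are set up in two
# threads instead of one after the other, so the callback stream is ready
# sooner. The sleep durations are fake stand-ins for real setup work.

def create_stream(name, duration, done):
    time.sleep(duration)  # stand-in for the real stream-creation work
    done[name] = True

def create_streams_in_parallel():
    done = {}
    threads = [
        threading.Thread(target=create_stream, args=("preview", 0.05, done)),
        threading.Thread(target=create_stream, args=("callback", 0.05, done)),
    ]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done, time.monotonic() - start

done, elapsed = create_streams_in_parallel()
```

With sequential creation the two setup delays would add up; in the parallel sketch the total elapsed time is roughly the longer of the two, which mirrors the latency reduction the embodiment describes.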
  • the camera-related functions that can be implemented by the APP calling the camera to obtain the callback stream may include: scanning codes (two-dimensional codes, barcodes, etc.), video chat, AR, smart identification, scanning questions, scanning bank cards or scanning documents, etc.
  • the APP can be WeChat, Alipay, QQ, facetime, skype, Taobao, Meituan, or JD.
  • the electronic device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), smart home equipment, etc.
  • the embodiment of this application does not specifically limit the device type of the electronic device.
  • FIG. 2 shows a schematic structural diagram of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, they can be called directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
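The cache behavior described here has a familiar software analogy: a small memo table that serves repeated requests without recomputation. The sketch below is purely illustrative (a CPU cache works in hardware, not as a Python dict):

```python
# Software analogy for the cache: recently used results are kept in a small
# table, so a repeated request is served without fetching or recomputing.

class TinyCache:
    def __init__(self):
        self._store = {}
        self.hits = 0  # number of requests served from the cache

    def get(self, key, compute):
        if key in self._store:
            self.hits += 1                    # repeated access avoided
        else:
            self._store[key] = compute(key)   # fetch/compute only once
        return self._store[key]
```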
  • the processor 110 may include one or more interfaces.
  • the interface can include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a bidirectional synchronous serial bus, which includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may couple the touch sensor 180K, the charger, the flash, the camera 193, etc., respectively through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both I2S interface and PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect earphones and play audio through earphones. This interface can also be used to connect to other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After the low-frequency baseband signal is processed by the baseband processor, it is passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • Wireless communication technologies can include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS can include a global positioning system (GPS), etc.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the display screen 194 may be used to display the interface of the application program, display the interface after the application program starts the camera, and so on.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, and the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is also called a camera and is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
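The YUV-to-RGB conversion step mentioned above can be illustrated with the common full-range BT.601 formulas. This is one possible convention; the DSP in a given device may use a different matrix:

```python
def yuv_to_rgb(y, u, v):
    # Convert one full-range BT.601 YUV sample (each 0..255) to RGB.
    # Coefficients follow the common BT.601 full-range convention.
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)
```

A neutral sample (u = v = 128) maps to a gray value equal to Y, which is a quick sanity check on the coefficients.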
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • the camera may capture image data, and return a preview stream and a callback stream to the electronic device 100.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • Through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, for example, image recognition, face recognition, speech recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, and the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 110 can run instructions stored in the internal memory 121, so that when an application calls the camera in a callback stream scenario, the preview stream and the callback stream are created in parallel, starting the callback stream in advance and obtaining callback stream data. After the application sets the callback function, the callback stream data obtained in advance is returned directly to the application through the callback function for processing. This reduces the time consumed in creating the preview stream and the callback stream, and reduces the time for the application to obtain the callback stream data.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the electronic device 100 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • The user can make a sound with the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive materials.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example, when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
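The two-branch pressure rule above can be sketched as a simple dispatch. The threshold value is invented for illustration; the patent does not specify one:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value, not from the patent

def message_icon_action(touch_intensity):
    # Map touch intensity on the short-message icon to an instruction,
    # following the two-branch rule: below the first pressure threshold,
    # view the message; at or above it, create a new message.
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"
    return "create_message"
```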
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • In some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
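One common way to derive altitude from a pressure reading is the standard barometric formula. The sketch below assumes a sea-level reference pressure of 1013.25 hPa; a real device may use a calibrated or locally reported reference instead:

```python
def altitude_from_pressure(p_hpa, p0_hpa=1013.25):
    # Estimate altitude in meters from air pressure (hPa) using the
    # standard barometric formula; p0_hpa is the sea-level pressure.
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```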
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • The electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D, and accordingly set features such as automatic unlocking upon flip opening.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and apply to applications such as horizontal and vertical screen switching, pedometers and so on.
  • The distance sensor 180F is used to measure distance; the electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
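The reflected-light decision above reduces to a threshold comparison; the threshold value here is invented for illustration:

```python
REFLECTED_LIGHT_THRESHOLD = 0.6  # illustrative threshold, not from the patent

def object_near(reflected_light):
    # Sufficient reflected light -> an object is near the device;
    # insufficient reflected light -> no object nearby.
    return reflected_light >= REFLECTED_LIGHT_THRESHOLD
```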
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
• when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 due to low temperature.
  • the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
• the touch sensor 180K is also called a "touch panel".
• the touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form a touchscreen, also called a "touch screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
• the touch sensor 180K can detect a user's touch operation that invokes a camera-related function of an application program, where the function needs to be implemented by starting the camera.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
• the bone conduction sensor 180M may also be provided in an earphone to form a bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations that act on different applications can correspond to different vibration feedback effects.
• for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
• different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
• the same SIM card interface 195 can accept multiple cards inserted at the same time. The multiple cards can be of the same type or of different types.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the touch sensor 180K or other detection components can detect the user's operation of using the application program and camera-related functions, and the function needs to start the camera.
• when determining that the current scene is a callback-stream scene, the processor 110 can create the preview stream and the callback stream in parallel, so as to pull up the callback stream in advance and obtain the callback stream data.
• subsequently, after the application program sets the callback function, the callback stream data obtained in advance is directly returned to the application through the callback function for processing. This reduces the time the electronic device spends creating the preview stream and the callback stream, shortens the time for the application to obtain the callback stream data, shortens the time for the application to implement camera-related functions, reduces the user's waiting time, and improves the user experience.
• in order to implement the callback stream processing function, the electronic device includes hardware and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software-driven hardware depends on the specific application and design constraint conditions of the technical solution. Those skilled in the art can use different methods for each specific application in combination with the embodiments to implement the described functions, but such implementation should not be considered as going beyond the scope of the present application.
  • the electronic device may be divided into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • an electronic device may include an application program layer, an application program framework layer, a HAL layer, a kernel layer, and a hardware layer.
  • the application layer, application framework layer, HAL layer and kernel layer are software layers.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application layer includes the first APP that can call the camera to realize the first function.
  • the first APP needs to create a callback stream when calling the camera.
  • the first APP may be an application program with functions such as scanning codes, video chatting, AR, smart object recognition, scanning questions, scanning bank cards, or scanning documents.
  • the first APP can be Alipay, WeChat, QQ, facetime or skype.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include an activity thread (ActivityThread), a camera service (CameraService) and a scene recognition system.
  • the operating system will provide corresponding activity threads at the application framework layer to support the application's related scheduling, execution of activities, broadcasting, and related operations of activity management requests.
  • the camera service can be used to manage the camera, including the start and stop of the camera, the creation of the preview stream and the callback stream, the acquisition of the preview stream data and the callback stream data, and the reporting of the callback stream data to the upper application through the callback function.
  • the scene recognition system is used to identify whether the current application scene is a callback flow scene.
• the API interfaces in the application framework layer may include the first API interface between the activity thread and the scene recognition system, the second API interface between the activity thread and the camera service, and so on.
  • the application framework layer can also include a window manager, a content provider, a view system, a phone manager, a resource manager, or a notification manager, etc.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • Data can include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
• the notification manager can also present notifications that appear in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, a text message is prompted in the status bar, a prompt sound is played, the electronic device vibrates, or the indicator light flashes.
  • the HAL layer is used to abstract the underlying hardware to provide the upper layer with an abstracted unified camera service.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes camera drivers, display drivers, audio drivers, and sensor drivers.
  • the hardware layer includes hardware such as cameras, displays, speakers, and sensors.
  • the camera is used to capture still images or videos.
  • the camera can collect an image of a two-dimensional code.
• an object generates an optical image through the lens, and the image is projected onto the photosensitive element.
  • the photosensitive element can be a CCD or CMOS phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • the camera may be the camera 193 shown in FIG. 2.
  • the electronic device is a mobile phone with the structure shown in FIG. 2 and FIG. 3, the first function is a code scanning function, and the first APP is WeChat as an example, to describe the callback stream processing method provided in the embodiment of the present application.
  • the first API interface and the second API interface may be in the form of a software development kit (SDK).
  • the first API interface may be a HWSDK interface
  • the second API interface may be a CameraSDK (or camera SDK) interface.
  • the callback stream processing method provided by the embodiment of the present application may include:
• after the mobile phone detects the user's operation of opening WeChat, the operating system generates an activity thread corresponding to WeChat.
• the user can instruct the mobile phone to open WeChat in a variety of ways; for example, by tapping the WeChat icon, by a voice command, or by an air gesture.
  • the user's operation of opening WeChat may be an operation of the user clicking on the WeChat icon 501.
  • the operating system of the mobile phone can generate an activity thread corresponding to WeChat.
  • the screen displays the WeChat interface.
  • the screen may be the display screen 194 shown in FIG. 2 or a touch screen composed of the display screen 194 and the touch sensor 180K.
  • the WeChat interface displayed by the mobile phone can be seen in (b) in FIG. 5.
• the activity thread sends the first package name (packageName) corresponding to WeChat to the scene recognition system through the HWSDK interface.
  • the package name is the identification information of the application, which is used to uniquely identify an application.
  • the activity thread can obtain the first package name corresponding to WeChat and send it to the scene recognition system in the mobile phone.
  • the scene recognition system searches for a reference scene parameter group corresponding to the first package name.
  • the scene recognition system includes a parameter matching library, and the parameter matching library includes at least one scene parameter group corresponding to a callback flow scene.
  • the scene described by the scene parameter group in the parameter matching library is a callback flow scene.
  • Each scene parameter group includes a package name.
  • a large number of scene parameter groups may be stored in the scene recognition system, and the scene recognition system can select a small number of reference scene parameter groups including the first package name from the large number of scene parameter groups.
  • the parameter matching library in the scene recognition system can be integrated in the package file of the first APP, and obtained with the download of the first APP.
  • the server can train and learn through artificial intelligence AI to obtain a parameter matching library corresponding to the callback flow scene.
  • the mobile phone can also request the server to obtain the parameter matching library.
  • the parameter matching library may also be pushed to the mobile phone periodically by the server, or pushed to the mobile phone by the server when the scene parameter group corresponding to the callback stream scene is updated.
  • the parameter matching library may also be saved when the mobile phone recognizes the callback flow scene during the use process. Moreover, the parameter matching library can also be updated as the mobile phone continuously trains and learns the callback stream scene.
• after WeChat receives the user's first operation of using the WeChat code scanning function, it can call the openCamera interface to start the camera (that is, the camera application) to perform the code scanning operation through the camera.
• then, step 412 can be executed to instruct the preview to start. The time from WeChat starting the camera to instructing the preview to start is t1; for example, t1 can be hundreds of milliseconds.
• in parallel, steps 406 to 411 can be executed to determine whether the current scene is a callback-stream scene.
• the time from WeChat starting the camera to obtaining the result of whether the current scene is a callback-stream scene is t2, and t2 is less than t1.
• for example, t2 can be several milliseconds.
• therefore, the mobile phone can perform step 412 and subsequent steps after determining that the current scene is a callback-stream scene, thereby creating the preview stream and the first callback stream in parallel.
  • the screen displays the first interface corresponding to the code scanning function in response to the first operation.
  • the first operation of the user using the WeChat code scanning function may be the operation of clicking the scan control 502 for the user.
  • the first interface may be a code scanning interface after the code scanning function is enabled on the mobile phone.
  • the active thread obtains the first scene parameter group corresponding to the first interface.
  • the first scene parameter group is used to describe the current scene and state of the first interface.
  • each scene parameter group may include an activity name (activityName) in addition to the package name.
  • the activity name can be used to identify the activity corresponding to the currently displayed interface (Activity).
  • the combination of different package names and activity names can describe different interface scenarios.
  • the first scene parameter group corresponding to the first interface includes the first package name and the first activity name.
  • the same package name and the same activity name may correspond to interfaces of multiple callback flow scenes, and cannot uniquely identify the interface of one callback flow scene. Therefore, the scene parameter group may also include a fragment name (fragmentName), also called a page name.
• the first interface corresponding to scanning a code may refer to (a) in FIG. 6, and the first interface corresponding to AR may refer to (b) in FIG. 6.
• the first interface corresponding to scanning a code and the first interface corresponding to AR correspond to the same package name and the same activity name, but to different page names. Therefore, the first scene parameter group corresponding to the first interface includes the first package name, the first activity name, and the first page name.
• the activity thread sends the first scene parameter group to the scene recognition system through the HWSDK interface.
  • the scene recognition system determines whether the first scene parameter group matches the scene parameter group in the parameter matching library.
• the activity thread may send the first scene parameter group corresponding to the first interface to the scene recognition system, so that the scene recognition system determines whether the first scene parameter group matches a scene parameter group in the parameter matching library.
• based on the reference scene parameter groups found earlier, the scene recognition system can conveniently and quickly determine whether the reference scene parameter groups include the first scene parameter group, that is, determine whether the first scene parameter group matches a scene parameter group in the parameter matching library.
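The two-stage lookup described above (pre-filter by package name in steps 403 to 404, then match the full scene parameter group) can be sketched as follows. This is an illustrative model only; the class and field names (SceneParams, ParameterMatchLibrary) and the activity/fragment values are assumptions, not identifiers from the patent.

```python
# Illustrative sketch of the scene-recognition lookup; names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class SceneParams:
    package_name: str                    # uniquely identifies the application
    activity_name: Optional[str] = None  # identifies the displayed Activity
    fragment_name: Optional[str] = None  # distinguishes pages sharing an Activity

class ParameterMatchLibrary:
    def __init__(self, callback_scene_params: List[SceneParams]):
        # every group stored here describes a callback-stream scene
        self._groups = callback_scene_params

    def reference_groups(self, package_name: str) -> List[SceneParams]:
        """Stage 1: pre-filter by package name so the later per-interface
        match only scans a small number of reference groups."""
        return [g for g in self._groups if g.package_name == package_name]

    def is_callback_scene(self, first: SceneParams,
                          reference: List[SceneParams]) -> bool:
        """Stage 2: the current scene is a callback-stream scene iff the
        first scene parameter group appears in the reference set."""
        return first in reference

# Demo with made-up activity/fragment names (the WeChat package name is real).
lib = ParameterMatchLibrary([
    SceneParams("com.tencent.mm", "ScanUI", "scan_code"),
    SceneParams("com.tencent.mm", "ScanUI", "scan_ar"),
])
ref = lib.reference_groups("com.tencent.mm")
print(lib.is_callback_scene(
    SceneParams("com.tencent.mm", "ScanUI", "scan_code"), ref))  # True
```

The frozen dataclass gives value equality, so the stage-2 match is a simple membership test over the pre-filtered reference groups.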
  • the mobile phone may not perform the above steps 403 to 404.
  • the mobile phone may sequentially determine whether the first scene parameter group matches the scene parameter group in the parameter matching library.
• when the scene recognition system determines that the first scene parameter group matches a scene parameter group in the parameter matching library, it notifies the activity thread through the first API interface that the current scene is a callback-stream scene.
• if the first scene parameter group matches a scene parameter group in the parameter matching library, this indicates that the scene corresponding to the first interface is a callback-stream scene.
  • the mobile phone can start the callback flow optimization process, thereby pulling up the callback flow in advance.
• the activity thread indicates the first information to the camera service through the CameraSDK interface.
• the first information may be used to indicate that the current scene is a callback-stream scene, and to instruct the camera service to start the callback stream optimization process and pull up the callback stream in advance.
  • WeChat can call the startPreview() interface in the CameraSDK interface to instruct to start the preview.
• after WeChat instructs the preview to start, the camera service creates a preview stream and a first callback stream in parallel according to the first information, and sends requests for creating the preview stream and the first callback stream to the camera driver through the HAL layer.
  • WeChat can call the startPreview() interface to instruct to start the preview, and notify the camera service through the internal processing logic of the code.
• the camera service determines, based on the first information indicated by the activity thread, that the current scene is a callback-stream scene, and therefore starts the callback stream optimization process and creates the preview stream and the first callback stream in parallel. In this way, the operating system can pull up the first callback stream before WeChat sets the callback function.
  • the parallel creation of the preview stream and the first callback stream by the camera service includes, as shown in FIG. 7A, the camera service simultaneously creates the preview stream and the first callback stream.
• alternatively, the camera service creating the preview stream and the first callback stream in parallel includes: after the camera service starts to create the preview stream, it starts to create the first callback stream before creation of the preview stream is completed. That is, the creation processes of the preview stream and the first callback stream overlap in time.
• in this case, there is a time interval T between the moment the first callback stream starts to be created and the moment the preview stream starts to be created, and T is less than the duration required to create the preview stream; thus the creation processes of the preview stream and the first callback stream overlap in time.
• alternatively, the camera service creating the preview stream and the first callback stream in parallel includes: after the camera service starts to create the first callback stream, it starts to create the preview stream before creation of the first callback stream is completed. In this case, there is a time interval T between the moment the preview stream starts to be created and the moment the first callback stream starts to be created, and T is less than the duration required to create the first callback stream; thus the creation processes of the preview stream and the first callback stream again overlap in time.
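The temporal overlap described above can be modeled with two threads, one per stream. This is a toy illustration, not Huawei's implementation; the sleep durations stand in for the real per-stream setup work.

```python
# Toy model: the two stream-creation processes run with overlapping intervals,
# so total latency is roughly max(durations) rather than their sum.
import threading
import time

def create_stream(name: str, duration_s: float, done: dict) -> None:
    time.sleep(duration_s)            # stand-in for the real setup work
    done[name] = time.monotonic()     # record when creation completed

done: dict = {}
start = time.monotonic()
preview = threading.Thread(target=create_stream, args=("preview", 0.10, done))
callback = threading.Thread(target=create_stream, args=("callback", 0.30, done))
preview.start()
callback.start()   # started before the preview stream finishes: intervals overlap
preview.join()
callback.join()
elapsed = time.monotonic() - start
# Parallel cost is about max(0.10, 0.30) = 0.30 s, not the serial 0.40 s.
print(f"total {elapsed:.2f}s")
```

Serial creation would cost the sum of the two durations plus any gap between them; overlapping the intervals bounds the cost by the longer setup.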
• the first information in step 411 may be a flag bit; in step 413, the mobile phone creates the first callback stream according to the flag bit; after step 413, the mobile phone can delete the flag bit, to prevent the camera service from directly pulling up the callback stream in advance based on this flag bit the next time, in a non-callback-stream scenario.
  • the camera driver issues a request for creating a preview stream and a first callback stream to the camera hardware.
  • the camera hardware performs corresponding settings for the preview stream, obtains the preview stream data, and returns the preview stream and a notification message that the preview stream is successfully created to the camera service through the HAL layer.
• creating the preview stream, from the start of creation to successful creation, requires a certain amount of time, for example about 100 ms. It is understandable that after the preview stream is successfully created, the preview stream data is acquired continuously.
  • the camera hardware performs corresponding settings for the first callback stream, obtains the callback stream data corresponding to the first callback stream, and returns the callback stream data of the first callback stream and the notification message that the first callback stream is successfully created to the camera service through the HAL layer .
• creating the first callback stream, from the start of creation to successful creation, also requires a certain amount of time, for example about 300 ms. It is understandable that after the first callback stream is successfully created, the callback stream data of the first callback stream is acquired continuously.
  • the camera service returns the preview stream data to WeChat.
  • the camera service returns the preview stream data to WeChat so that WeChat can perform related processing based on the preview stream data.
  • WeChat may display the first image on the first interface according to the preview stream data.
  • the first interface after displaying the first image can be seen in FIG. 7D.
  • the mobile phone can also perform initialization processing such as focusing according to its own business logic.
  • the method may further include step 418:
  • WeChat can call the autoFocus() interface in the CameraSDK interface to instruct autofocus processing, and notify the camera service through the internal processing logic of the code.
  • the camera service can instruct the camera hardware to perform focus processing through the HAL layer.
  • WeChat can show the user a clear preview image after focusing through the first interface, that is, the latest preview stream data in the preview stream is the data information of the clear image collected by the camera hardware.
  • the second image displayed on the first interface on the screen after the focus processing can be seen in FIG. 7E.
  • the second image displayed by WeChat according to the preview stream data includes a clear two-dimensional code image.
  • the first image displayed by WeChat according to the preview stream data may include a blurred QR code image, may also include only a part of the QR code image, or may include a relatively clear QR code image.
  • the pixel values of the pixels of the two-dimensional code in the first image and the second image are different.
  • WeChat can determine the timing of setting the callback function according to its own business logic (for example, after focus processing), and call the callback function interface in the CameraSDK interface to set the callback function at this timing.
  • the callback function interface may include a setCallback() interface, a setPreviewCallback() interface, a setOneShotPreviewCallback() interface, or an addCallbackBuffer() interface.
  • the camera service directly returns the callback stream data obtained after creating the first callback stream to WeChat through the callback function.
• after WeChat calls the callback function interface, it can notify the camera service through the internal processing logic of the code. Ordinarily, if the camera service determines that no callback stream currently exists, it starts to create one. In the embodiment of the present application, however, the camera service determines that the first callback stream has already been created, so it does not create a callback stream again and does not need to issue a callback stream creation request to the bottom layer; instead, the callback stream data obtained from the first callback stream created in advance can be directly returned to WeChat through the callback function for processing. Therefore, after WeChat sets the callback function, the camera service does not create the callback stream again, and no time is consumed creating it again.
  • the callback stream data of the first callback stream is continuously acquired and updated.
• the callback stream data obtained before focusing is completed may be invalid information corresponding to a blurred QR code image; it may contain only part of the QR code information; or it may be the same as the data obtained after focusing is completed, containing complete QR code information.
• before focusing is completed, the callback stream data obtained by the camera service can be discarded; after focusing is completed, the latest callback stream data obtained by the camera service can be reported to WeChat for processing.
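The buffering behaviour just described can be sketched as a small state machine: pre-focus frames are dropped, post-focus frames are buffered, and the latest buffered frame is delivered as soon as the application registers its callback. The class and method names are hypothetical, chosen only for illustration.

```python
# Hedged model of the camera-service buffering; not the actual implementation.
class CameraServiceModel:
    def __init__(self):
        self._callback = None   # the application's callback function, if set
        self._latest = None     # most recent usable callback-stream frame
        self._focused = False   # whether autofocus has completed

    def on_focus_complete(self) -> None:
        self._focused = True

    def on_callback_frame(self, frame) -> None:
        if not self._focused:
            return              # pre-focus frames may be blurred: discard them
        self._latest = frame
        if self._callback:
            self._callback(frame)   # stream already exists: deliver directly

    def set_callback(self, fn) -> None:
        # No new callback stream is created here; the pre-created stream's
        # latest data is handed over immediately.
        self._callback = fn
        if self._latest is not None:
            fn(self._latest)

svc = CameraServiceModel()
svc.on_callback_frame("blurred")   # dropped: focusing not yet complete
svc.on_focus_complete()
svc.on_callback_frame("sharp-qr")  # buffered, no callback registered yet
received = []
svc.set_callback(received.append)  # app sets callback later; data returned at once
print(received)                    # ['sharp-qr']
```

The key point the sketch makes concrete: setting the callback late costs nothing extra, because the stream and its data already exist by then.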
• in this way, the first callback stream can be created in parallel when WeChat instructs the preview stream to be created, so that the first callback stream is pulled up in advance and the callback stream data is obtained.
• after WeChat sets the callback function, the camera service no longer creates a callback stream, but directly returns the callback stream data obtained in advance to WeChat through the callback function.
• the mobile phone creates the preview stream and the first callback stream in parallel, which reduces the time it takes the mobile phone to create the preview stream and the callback stream, thereby reducing the length of time WeChat takes to obtain the callback stream data.
  • FIG. 8A and FIG. 8B respectively show the working sequence diagrams of the mobile phone processor obtained by using the mobile phone performance analysis tool corresponding to the prior art and the embodiment of the present application.
  • position 1 indicates the operation of applying for memory after the camera is started
• position 2 indicates the operation of processing a callback stream creation request. Comparing FIG. 8A and FIG. 8B, it can be seen that in the prior art the callback stream is not created until long after the camera is started, whereas with the method provided in the embodiment of the present application the callback stream is pre-created quickly after the camera is started.
  • WeChat processes the acquired callback stream data to realize the code scanning function.
• after WeChat obtains the callback stream data, it can perform identification and other processing, so as to realize the code scanning function of WeChat.
• a two-dimensional code records data symbol information using black and white patterns distributed on a plane (in two-dimensional directions) in a specific geometric pattern according to a certain rule.
  • WeChat can analyze the black and white pixel matrix of the two-dimensional code image described by the callback stream data to obtain the data symbol information corresponding to the black and white pixel matrix, so as to perform further processing such as identification and linking according to the data symbol information.
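As a much-simplified illustration of "analysing the black-and-white pixel matrix", the sketch below maps black/white cells to bits and packs them into data bytes. Real QR decoding additionally involves finder patterns, mask removal, and Reed-Solomon error correction, all omitted here; the demo matrix is fabricated for the example.

```python
# Simplified stand-in for QR matrix analysis: black cell -> 1, white cell -> 0,
# then pack each run of 8 bits into one data byte (MSB first).
def matrix_to_bytes(matrix):
    bits = [1 if cell == "black" else 0 for row in matrix for cell in row]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# Two fabricated 8-cell rows encoding 'H' (01001000) and 'i' (01101001).
demo = [
    ["white", "black", "white", "white", "black", "white", "white", "white"],
    ["white", "black", "black", "white", "black", "white", "white", "black"],
]
print(matrix_to_bytes(demo).decode("ascii"))  # Hi
```

This is only the final bit-extraction step of the pipeline; the point is that once the callback stream delivers a sharp frame, the data symbol information is recoverable from the pixel matrix alone.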
  • the screen displays the second interface after successfully scanning the code.
  • the mobile phone scans the QR code of Huawei's recruitment official account
  • the second interface after the successful scanning of the code may be the official account page of Huawei's recruitment as shown in FIG. 9.
• the mobile phone creates the preview stream and the first callback stream in parallel.
  • the camera service does not need to create a callback stream, but can directly return the callback stream data of the previously created callback stream to WeChat.
  • the mobile phone takes less time to create the preview stream and the callback stream, and it takes less time for WeChat to obtain the callback stream data.
• in the prior art, the mobile phone creates the callback stream serially after creating the preview stream, and there may be a long time interval between creating the preview stream and creating the callback stream, so it takes a long time for WeChat to obtain the callback stream data.
  • the callback stream processing procedures provided in the embodiment of the present application can be seen in FIG. 10A.
  • The time from when the phone starts the camera through the application to the successful creation of the preview stream and callback stream can be approximately 410 ms, and the time from when the phone starts the camera through the application to when the application obtains the callback stream data can be approximately 570 ms; 410 ms and 570 ms are significantly less than the 1070 ms and 920 ms of the prior art.
  • the callback flow processing flow provided in the embodiment of the present application can be seen in FIG. 10B.
  • the time from when the phone starts the camera through the application to the successful creation of the preview stream and the callback stream, and the time from when the phone starts the camera through the application to when the application obtains the callback stream data can be approximately 410ms, which is significantly less than 560ms.
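  The savings can be sketched with the approximate per-step costs given elsewhere in the description (starting the camera ~110 ms, creating or stopping a preview stream ~100 ms each, creating a callback stream ~300 ms, auto-focusing ~360 ms). This is a rough timing model for illustration only, not measured data:

```python
# Illustrative per-step costs (ms) taken from the description.
START, PREVIEW, STOP, CALLBACK, FOCUS = 110, 100, 100, 300, 360

# Prior art (FIG. 1A style): start camera, create preview, focus, then stop
# the old preview, create a new preview, and only then create the callback.
serial = START + PREVIEW + FOCUS + STOP + PREVIEW + CALLBACK

# This application: preview and callback streams are created in parallel, so
# having both streams is bounded by the slower of the two creations.
parallel = START + max(PREVIEW, CALLBACK)

print(serial, parallel)  # 1070 410
```

  The model reproduces the 1070 ms (FIG. 1A) and 410 ms figures quoted in the text; the FIG. 1B and FIG. 1C variants differ only in which preview-stream steps appear in the serial sum.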
  • By setting up the preview stream and the callback stream in parallel, the embodiment of this application saves the time of creating the two streams, so the application obtains the callback stream data sooner and the code scanning process is faster.
  • The sequence diagram of the code scanning process in the prior art can be seen in FIG. 11A, and the sequence diagram of the code scanning process corresponding to the method provided in this embodiment of the application can be seen in FIG. 11B.
  • Comparing FIG. 11A and FIG. 11B, the method provided by the embodiment of the present application makes the code scanning time shorter and the code scanning speed faster.
  • After the mobile phone starts the camera through WeChat, if it determines that the current scenario is a callback stream scenario, the phone also creates the first callback stream in parallel when WeChat instructs the preview stream to be created, so that callback stream data can be obtained in advance. Later, after WeChat sets the callback function according to its own business processing logic, the camera service does not need to create a callback stream but can directly return the previously obtained callback stream data to WeChat through the callback function.
  • In this way, the mobile phone creates the preview stream and the callback stream in parallel, reducing the time needed to create them and thereby reducing the time for WeChat to obtain the callback stream data.
  • In another case, in step 409 above, the scene recognition system determines that the first scene parameter group does not match any scene parameter group in the parameter matching library.
  • In that case, the mobile phone can create a preview stream, or create a preview stream and a second callback stream, according to the prior-art process, and then implement the related functions according to the acquired data.
  • If the first APP creates the second callback stream based on its own business logic, this indicates that the first interface is a callback stream business scenario.
  • The phone may simply not have been trained on this scenario before, or the parameter matching library may not yet contain the scene parameter group for the second callback stream.
  • Therefore, the mobile phone can save the first scene parameter group into the parameter matching library so that, next time, it can determine from the saved group that the interface scene corresponding to the first scene parameter group is a callback stream scenario, and pull up the callback stream early and quickly.
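  The matching and learning steps above can be sketched as follows. This is a minimal model under the assumption that the parameter matching library is simply a set of (package name, activity name, page name) tuples; the concrete names used are hypothetical:

```python
# Minimal sketch of the parameter matching library: each entry is a
# (package name, activity name, page name) scene parameter group.
class ParamMatchLib:
    def __init__(self):
        self.groups = set()

    def matches(self, group):
        """Is this interface a known callback-stream scene?"""
        return group in self.groups

    def learn(self, group):
        """Save a newly observed callback-stream scene parameter group."""
        self.groups.add(group)

lib = ParamMatchLib()
scan = ("com.tencent.mm", "ScanActivity", "ScanPage")  # assumed names
assert not lib.matches(scan)   # first time: fall back to the serial flow
lib.learn(scan)                # save the first scene parameter group
assert lib.matches(scan)       # next time: pre-create the callback stream
```

  The description also mentions narrowing the search by package name first; that refinement would amount to indexing the set by the first tuple element.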
  • the above description is mainly based on an example in which the callback stream scenario is a code scanning scenario.
  • the callback stream processing method provided in the embodiment of the present application can also be applied to other callback stream scenarios.
  • Using the callback stream processing method provided in this embodiment of the application can likewise reduce the time the mobile phone takes to create the preview stream and the callback stream, and reduce the time the Skype application takes to obtain the callback stream data, so that Skype can encode, compress, and upload the callback stream data faster. This reduces Skype's processing time, so the chat peer receives the image data sent by the local end quickly, the image displayed by the peer during the chat is closer to real time, and the user experience is better.
  • the electronic device may include an application program, a framework module, a camera module, and a scene recognition module.
  • the application program may be the above-mentioned first APP.
  • the framework module can include a variety of API interfaces, which can provide a basic framework for the application program, and the collaboration process between the modules is completed in the framework module.
  • the application program can call the API interface of the framework module to start the camera.
  • the framework module can pass the current interface scene to the scene recognition module to obtain the scene recognition result of whether it is a callback flow scene.
  • the framework module determines whether to start the callback flow optimization process according to the scene recognition result. If it is determined to start the callback flow optimization process, the framework module notifies the camera module to start the callback flow optimization process.
  • the framework module can also receive the preview data stream and the callback data stream uploaded by the camera module, and upload them to the application program for use and processing.
  • the scene recognition module may include an AI module and a recognition module.
  • the AI module can be used to train the scene parameter group of the callback flow scene, and store the training result in the parameter matching library.
  • the recognition module can be used to identify the callback stream scene according to the first scene parameter group provided by the framework module, combined with the scene parameter groups in the parameter matching library, and return the corresponding recognition result to the framework module.
  • the camera module may include camera services, HAL layer, and drivers and hardware.
  • the camera module is used to capture the image and data of the camera-related functions of the application and return it to the framework module; and to implement the optimized processing flow of the callback flow.
  • the optimization processing flow of the callback flow includes: when the camera service receives the notification that the framework module starts optimization, it sets the identification information.
  • The camera service starts the callback stream optimization process, creates the preview stream and the callback stream in parallel, then sends the creation requests for the preview stream and the callback stream to the driver through the HAL layer, and waits for the preview stream data and callback stream data to be returned.
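  A minimal sketch of this optimized flow follows, under the assumption that the camera service can be modeled as a small class with a flag standing in for the identification information and threads standing in for the HAL/driver round trips (this is not the actual Android camera service):

```python
import threading
import time

# Sketch (assumed structure): when notified of a callback-stream scene, the
# camera service sets a flag; start_preview then creates the preview stream
# and the callback stream in parallel and clears the flag.
class CameraService:
    def __init__(self):
        self.optimize = False        # the "identification information"
        self.streams = []

    def notify_callback_scene(self):
        self.optimize = True

    def _create(self, name, cost):
        time.sleep(cost)             # stand-in for the HAL/driver round trip
        self.streams.append(name)

    def start_preview(self):
        t1 = threading.Thread(target=self._create, args=("preview", 0.01))
        t1.start()
        t2 = None
        if self.optimize:            # pre-create the callback stream in parallel
            t2 = threading.Thread(target=self._create, args=("callback", 0.03))
            t2.start()
            self.optimize = False    # delete the identification information
        t1.join()
        if t2:
            t2.join()

svc = CameraService()
svc.notify_callback_scene()
svc.start_preview()
print(sorted(svc.streams))  # ['callback', 'preview']
```

  Without the notification, only the preview stream is created, which mirrors the non-optimized branch of the description.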
  • the above is mainly from the perspective of each module of the electronic device to describe the callback stream processing method provided in the embodiment of the present application.
  • the operations performed by the modules of the electronic device are also the operations performed by the electronic device.
  • the electronic device may use the process shown in FIG. 13 to execute the callback flow processing method provided in the above embodiments of the present application.
  • the method can include:
  • the electronic device displays the interface of the first application.
  • the first application program may be WeChat, and the interface of the first application program may be the interface shown in (b) in FIG. 5.
  • the electronic device detects a first operation of the user using the first function of the first application program.
  • the first application program may be WeChat
  • the first function may be a code scanning function
  • the first operation may be an operation of the user clicking the scan control 502 shown in (c) in FIG. 5.
  • the electronic device starts the camera application.
  • the electronic device displays a first interface corresponding to the first function.
  • the first interface may be the code scanning interface shown in (d) in FIG. 5.
  • the electronic device creates a preview stream and a first callback stream in parallel.
  • the electronic device acquires first data, where the first data is callback stream data corresponding to the first callback stream.
  • the electronic device sets a callback function.
  • the electronic device provides the first data to the first application through the callback function for processing, so as to implement the first function.
  • the method may further include:
  • the electronic device displays a second interface after implementing the first function.
  • the second interface may be the interface shown in FIG. 9 after the code scan is successful.
  • the second interface may no longer be displayed.
  • For steps 1303 to 1309, refer to the related description of steps 405 to 422 above; details are not repeated here.
  • the mobile phone may also display the first interface corresponding to the first function in response to the user's first operation of using the first function of the first application without displaying the interface of the first application.
  • the first interface is an interface related to the camera function of the first application. That is, the mobile phone may not perform step 1301, and directly perform step 1302 to step 1309.
  • For example, the user's first operation of using the first function of the first application program may be an operation of clicking the WeChat Scan shortcut icon 1401.
  • In this case, the first interface may be the interface shown in (d) in FIG. 5.
  • After the electronic device starts the camera through the first application, if it determines that the current scenario is a callback stream scenario, the electronic device also creates the first callback stream in parallel when the first application instructs the preview stream to be created, so that callback stream data can be obtained in advance. Subsequently, after the first application sets the callback function according to its own business processing logic, the electronic device no longer needs to create a callback stream but directly returns the previously acquired callback stream data to the first application through the callback function.
  • The electronic device creates the preview stream and the callback stream in parallel, which reduces the time it takes to create them, thereby reducing the time for the first application to obtain the callback stream data, shortening the time the first application needs to process that data to realize the first function, reducing the user's waiting time, and improving the user experience.
  • the embodiments of the present application also provide an electronic device, including one or more processors; a memory; and one or more computer programs.
  • One or more computer programs are stored in the memory, and the one or more computer programs include instructions.
  • the electronic device is caused to execute each step in the foregoing embodiment, so as to implement the foregoing callback flow processing method.
  • An embodiment of the present application also provides a computer storage medium that stores computer instructions; when the computer instructions run on an electronic device, the electronic device executes the above related method steps to implement the callback stream processing method in the above embodiments.
  • the embodiment of the present application also provides a computer program product, which when the computer program product runs on a computer, causes the computer to execute the above-mentioned related steps, so as to implement the callback flow processing method executed by the electronic device in the above-mentioned embodiment.
  • the embodiments of the present application also provide a device.
  • the device may specifically be a chip, component or module.
  • the device may include a processor and a memory connected to each other.
  • the memory is used to store computer execution instructions.
  • the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the callback flow processing method executed by the electronic device in the foregoing method embodiments.
  • The electronic device, computer storage medium, computer program product, and chip provided in this embodiment are all used to execute the corresponding method provided above; for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding method, which are not repeated here.
  • the disclosed device and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division of modules or units is only a logical function division; in actual implementation there may be other ways of division. For example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place, or they may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • Based on this understanding, the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, or the like) or a processor execute all or part of the steps of the methods in the various embodiments of this application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.


Abstract

Embodiments of this application provide a callback stream processing method and device, relating to the field of electronic technology. In a callback stream scenario, an electronic device can set up the preview stream and the callback stream in parallel; after the application sets a callback function, the electronic device can return the callback stream data directly to the application through the callback function for processing, which shortens the time taken to create the preview stream and the callback stream and reduces the time needed to obtain callback stream data and the user's waiting time. The specific solution is: after detecting a first operation in which the user uses a first function of a first application, the electronic device starts the camera application and displays a first interface corresponding to the first function; creates a preview stream and a first callback stream in parallel; obtains first data, the first data being the callback stream data of the first callback stream; sets a callback function; and provides the first data to the first application through the callback function for processing. The embodiments of this application are used for processing callback streams.

Description

Callback stream processing method and device
This application claims priority to Chinese Patent Application No. 201910867438.2, entitled "Callback stream processing method and device" and filed with the China National Intellectual Property Administration on September 12, 2019, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of electronic technology, and in particular to a callback stream processing method and device.
Background
At present, more and more applications (APPs) on electronic devices include functions that use the camera. For example, an APP may use the camera for photographing, code scanning, video chatting, or augmented reality (AR) processing. Users invoke the camera through APPs more and more frequently.
Scenarios in which an APP uses the camera generally involve the camera's preview stream (previewStream), and some also involve a callback stream (callbackStream). For example, code scanning, video chatting, and AR involve the callback stream and may be called callback stream scenarios.
In the prior art, for a callback stream scenario, referring to FIG. 1A, the APP may call the openCamera() interface to start the camera and then call the startPreview() interface to start previewing. After previewing starts, the hardware abstraction layer (HAL) in the electronic device can create the preview stream. The APP may then perform initialization processing such as auto-focusing, after which it may set a callback function. After the callback function is set, the HAL layer first stops the previous preview stream, then creates a new preview stream, and only after the new preview stream is successfully created does it create a callback stream. After the callback stream is created successfully, the APP obtains the callback stream data through the callback function and processes it.
In the solution shown in FIG. 1A, when the user invokes the camera through the APP, the APP needs a long time to obtain the callback stream data before it can process that data; the APP's processing therefore takes a long time and the user experience is poor.
Summary
Embodiments of this application provide a callback stream processing method and device. In a callback stream scenario, an electronic device can set up the preview stream and the callback stream in parallel, thereby pulling up the callback stream in advance and obtaining callback stream data early. After the application sets a callback function, the electronic device can return the callback stream data directly to the application through the callback function for processing, which shortens the time the electronic device takes to create the preview stream and the callback stream, reduces the time the application takes to obtain callback stream data, reduces the application's processing time and the user's waiting time, and improves the user experience.
To achieve the above objective, the embodiments of this application adopt the following technical solutions:
In one aspect, an embodiment of this application provides a callback stream processing method applied to an electronic device. The method includes: the electronic device detects a first operation in which a user uses a first function of a first application. In response to the first operation, the electronic device starts the camera application and displays a first interface corresponding to the first function. The electronic device then creates a preview stream and a first callback stream in parallel. The electronic device obtains first data, the first data being the callback stream data corresponding to the first callback stream. The electronic device sets a callback function and provides the first data to the first application through the callback function for processing, so as to implement the first function.
In this way, compared with the prior art in which the electronic device creates the preview stream and the callback stream serially, the electronic device can create them in parallel, which reduces the time taken to create the two streams, shortens the time for the first application to obtain the callback stream data and to process it to implement the first function, and thus reduces the user's waiting time and improves the user experience.
It can be understood that "first application" is merely a name used to distinguish it from other applications; it may be replaced by a target application, a certain application, a specific application, a preset application, or an application such as WeChat, Alipay, QQ, FaceTime, Skype, Taobao, Meituan, or JD.
It can be understood that "first function" is merely used to distinguish it from other functions; it may be replaced by a specific function, a preset function, a target function, a certain function, a code scanning function, a video chat function, an augmented reality AR function, an object recognition function, a question scanning function, a bank card scanning function, or a document scanning function.
It can be understood that "first operation" is likewise merely a name used for distinction; it may be replaced by an input operation, a tap operation, a gesture input operation, a voice input operation, or the like.
In one possible implementation, the electronic device includes a parameter matching library, the parameter matching library includes at least one scene parameter group, and a scene parameter group includes an application's package name, activity name, and page name. Before the electronic device creates the preview stream and the first callback stream in parallel, the method further includes: the electronic device obtains a first scene parameter group corresponding to the first interface, and the electronic device determines that the first scene parameter group matches a scene parameter group in the parameter matching library.
That is, when the electronic device determines that the first scene parameter group corresponding to the first interface matches a scene parameter group in the parameter matching library, it can determine that the current scenario is a callback stream scenario, and can therefore create the preview stream and the first callback stream in parallel.
In another possible implementation, after the electronic device displays the first interface corresponding to the first function, the method further includes: the electronic device obtains a first package name of the first application corresponding to the first interface, and looks up, in the parameter matching library, reference scene parameter groups that include the first package name. The electronic device determining that the first scene parameter group matches a scene parameter group in the parameter matching library includes: the electronic device determines that the first scene parameter group matches a reference scene parameter group.
In this solution, the electronic device can first narrow the large number of scene parameter groups in the parameter matching library down to the smaller set of reference scene parameter groups that include the first package name, so that it can quickly determine from this smaller set whether the current scenario is a callback stream scenario.
In another possible implementation, at least one scene parameter group in the parameter matching library is integrated in the first application's package file; or at least one scene parameter group in the parameter matching library is pushed by a server; or at least one parameter group in the parameter matching library is obtained by the electronic device through learning.
That is, the electronic device can obtain the reference scene parameter groups in the parameter matching library in multiple ways.
In another possible implementation, the method further includes: if the first scene parameter group does not match any scene parameter group in the parameter matching library, the electronic device creates a preview stream. If the electronic device has set a callback function, the electronic device creates a second callback stream. The electronic device then obtains second data, the second data being the callback stream data corresponding to the second callback stream. The electronic device provides the second data to the first application through the callback function for processing, and saves the first scene parameter group into the parameter matching library.
In this solution, the electronic device determines that the current scenario is a callback stream scenario after determining that a callback stream was created, so it can add the first scene parameter group to the parameter matching library, allowing it to later determine from that group whether a scenario is a callback stream scenario.
In another possible implementation, before the electronic device obtains the first data, the method further includes: the electronic device focuses through the camera application.
In this way, the electronic device can return to the first application the callback stream data corresponding to the clear image captured after focusing is completed, so that the first application can process that callback stream data to implement the first function.
In another possible implementation, the first function is a code scanning function, a video chat function, an augmented reality AR function, an object recognition function, a question scanning function, a bank card scanning function, or a document scanning function.
That is, the electronic device can implement a variety of camera-related functions through the above callback stream processing method.
In another possible implementation, the operating system of the electronic device includes a camera service. After the electronic device determines that the first scene parameter group matches a scene parameter group in the parameter matching library, the method further includes: the camera service sets identification information. The electronic device creating the preview stream and the first callback stream in parallel includes: the first application instructs previewing to start; the camera service creates the preview stream and the first callback stream in parallel according to the identification information; and the camera service deletes the identification information after creating the first callback stream.
That is, the electronic device can create the preview stream and the callback stream in parallel through the operating system and the identification information.
In another possible implementation, the electronic device obtaining the first data includes: the camera service obtains the first data. The electronic device setting the callback function includes: the first application sets the callback function. The electronic device providing the first data to the first application through the callback function for processing includes: the camera service provides the first data to the first application through the callback function for processing.
That is, the electronic device can create the callback stream and obtain the callback stream data specifically through internal modules such as the camera service and the first application.
In another aspect, an embodiment of this application provides a method for scanning a two-dimensional code, applied to an electronic device. The method includes: after detecting a user's first operation on the code scanning function, the electronic device starts the camera application in response to the first operation. The electronic device displays a first interface corresponding to the code scanning function, with the camera of the electronic device aimed at the two-dimensional code. After creating a preview stream and a callback stream in parallel, the electronic device obtains first data according to the preview stream, and displays a first image of the two-dimensional code according to the first data. After completing focusing, the electronic device obtains second data according to the callback stream, the second data having a different data format from the first data. The electronic device then completes code scanning recognition according to the second data and displays a second interface after the code is scanned successfully.
In this way, compared with creating the preview stream and the callback stream serially, the electronic device can create them in parallel, reducing the time taken to create the two streams, shortening the time for the first application to obtain and process the callback stream data to implement the two-dimensional code scanning function, and thus reducing the user's waiting time for the code scanning function and improving the user experience.
In one possible implementation, before the electronic device completes focusing, the method further includes: the electronic device obtains third data according to the callback stream, and the electronic device discards the third data.
The third data is the callback stream data that the electronic device obtains according to the callback stream before focusing is completed. Because callback stream data obtained before focusing is completed is not reported to the APP, the electronic device can discard it.
In another possible implementation, after the electronic device completes focusing, the method further includes: the electronic device obtains fourth data according to the preview stream; the electronic device displays a second image of the two-dimensional code according to the fourth data; the pixel values of the pixels in the first image and the second image are different.
The fourth data is the preview stream data that the electronic device obtains according to the preview stream after focusing is completed. The first image of the two-dimensional code, described by stream data obtained before focusing is completed, is usually blurry, while the second image, described by stream data obtained after focusing is completed, is usually clear; the pixel values of the pixels in the first image and the second image therefore differ.
In another aspect, an embodiment of this application provides an electronic device, including: a screen for displaying interfaces; one or more processors; and a memory in which code is stored. When the code is executed by the electronic device, the electronic device is caused to perform the following steps: detecting a first operation in which a user uses a first function of a first application; starting the camera application in response to the first operation; displaying a first interface corresponding to the first function; creating a preview stream and a first callback stream in parallel; obtaining first data, the first data being the callback stream data corresponding to the first callback stream; setting a callback function; and providing the first data to the first application through the callback function for processing, so as to implement the first function.
In one possible implementation, the electronic device includes a parameter matching library, the parameter matching library includes at least one scene parameter group, and a scene parameter group includes an application's package name, activity name, and page name. When the code is executed by the electronic device, the electronic device is further caused to perform the following steps: before creating the preview stream and the first callback stream in parallel, obtaining a first scene parameter group corresponding to the first interface, and determining that the first scene parameter group matches a scene parameter group in the parameter matching library.
In another possible implementation, when the code is executed by the electronic device, the electronic device is further caused to perform the following steps: after displaying the first interface corresponding to the first function, obtaining a first package name of the first application corresponding to the first interface, and looking up, in the parameter matching library, reference scene parameter groups that include the first package name. Determining that the first scene parameter group matches a scene parameter group in the parameter matching library includes: determining that the first scene parameter group matches a reference scene parameter group.
In another possible implementation, at least one scene parameter group in the parameter matching library is integrated in the first application's package file; or at least one scene parameter group in the parameter matching library is pushed by a server; or at least one parameter group in the parameter matching library is obtained by the electronic device through learning.
In another possible implementation, when the code is executed by the electronic device, the electronic device is further caused to perform the following steps: if the first scene parameter group does not match any scene parameter group in the parameter matching library, creating a preview stream; if a callback function has been set, creating a second callback stream; obtaining second data, the second data being the callback stream data corresponding to the second callback stream; providing the second data to the first application through the callback function for processing; and saving the first scene parameter group into the parameter matching library.
In another possible implementation, when the code is executed by the electronic device, the electronic device is further caused to perform the following step: focusing through the camera application before obtaining the first data.
In another possible implementation, the operating system of the electronic device includes a camera service. When the code is executed by the electronic device, the electronic device is further caused to perform the following steps: after it is determined that the first scene parameter group matches a scene parameter group in the parameter matching library, the camera service sets identification information. Creating the preview stream and the first callback stream in parallel specifically includes: the first application instructs previewing to start; the camera service creates the preview stream and the first callback stream in parallel according to the identification information; and the camera service deletes the identification information after creating the first callback stream.
In another possible implementation, obtaining the first data specifically includes: the camera service obtains the first data. Setting the callback function specifically includes: the first application sets the callback function. Providing the first data to the first application through the callback function for processing specifically includes: the camera service provides the first data to the first application through the callback function for processing.
In another possible implementation, the first function is a code scanning function, a video chat function, an augmented reality AR function, an object recognition function, a question scanning function, a bank card scanning function, or a document scanning function.
In another aspect, an embodiment of this application provides an electronic device, including: a screen for displaying interfaces; one or more processors; and a memory in which code is stored. When the code is executed by the electronic device, the electronic device is caused to perform the following steps: detecting a user's first operation on the code scanning function; starting the camera application in response to the first operation; displaying a first interface corresponding to the code scanning function, with the camera of the electronic device aimed at the two-dimensional code; creating a preview stream and a callback stream in parallel, and obtaining first data according to the preview stream; displaying a first image of the two-dimensional code according to the first data; after the electronic device completes focusing, obtaining second data according to the callback stream, the second data having a different data format from the first data; and completing code scanning recognition according to the second data and displaying a second interface after the code is scanned successfully.
In one possible implementation, when the code is executed by the electronic device, the electronic device is further caused to perform the following steps: before the electronic device completes focusing, obtaining third data according to the callback stream, and discarding the third data.
In another possible implementation, when the code is executed by the electronic device, the electronic device is further caused to perform the following steps: after the electronic device completes focusing, obtaining fourth data according to the preview stream, and displaying a second image of the two-dimensional code according to the fourth data; the pixel values of the pixels in the first image and the second image are different.
In another aspect, an embodiment of this application provides a callback stream processing apparatus included in an electronic device. The apparatus has the function of implementing the behavior of the electronic device in any of the methods in the above aspects and possible designs. The function can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above function, for example an application module/unit, a framework module/unit, a camera module/unit, or a scene recognition module/unit.
In another aspect, an embodiment of this application provides an electronic device, including one or more processors and a memory in which code is stored. When the code is executed by the electronic device, the electronic device is caused to perform the callback stream processing method or the two-dimensional code scanning method in any of the above aspects or possible implementations.
In another aspect, an embodiment of this application provides a computer storage medium including computer instructions. When the computer instructions run on an electronic device, the electronic device is caused to perform the callback stream processing method or the two-dimensional code scanning method in any of the above aspects or possible implementations.
In yet another aspect, an embodiment of this application provides a computer program product. When the computer program product runs on a computer, the computer causes an electronic device to perform the callback stream processing method or the two-dimensional code scanning method in any of the above aspects or possible implementations.
In another aspect, an embodiment of this application provides a chip system applied to an electronic device. The chip system includes one or more interface circuits and one or more processors interconnected through lines. The interface circuit is configured to receive signals from the memory of the electronic device and send signals to the processor, the signals including the computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device is caused to perform the callback stream processing method or the two-dimensional code scanning method in any of the above aspects or possible implementations.
For the beneficial effects corresponding to the other aspects above, refer to the description of the beneficial effects of the method aspects; details are not repeated here.
Brief Description of the Drawings
FIG. 1A-FIG. 1C are schematic diagrams of creating a preview stream and a callback stream in the prior art;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of another electronic device according to an embodiment of this application;
FIG. 4 is a scanning flowchart according to an embodiment of this application;
FIG. 5 is a set of schematic interface diagrams according to an embodiment of this application;
FIG. 6 is another set of schematic interface diagrams according to an embodiment of this application;
FIG. 7A is a schematic diagram of creating a preview stream and a callback stream according to an embodiment of this application;
FIG. 7B is a schematic diagram of another way of creating a preview stream and a callback stream according to an embodiment of this application;
FIG. 7C is a schematic diagram of yet another way of creating a preview stream and a callback stream according to an embodiment of this application;
FIG. 7D is a schematic diagram of a code scanning interface according to an embodiment of this application;
FIG. 7E is a schematic diagram of another code scanning interface according to an embodiment of this application;
FIG. 8A is a sequence diagram of processing a callback stream request in the prior art;
FIG. 8B is a sequence diagram of processing a callback stream request according to an embodiment of this application;
FIG. 9 is a schematic diagram of an interface after a code is scanned successfully according to an embodiment of this application;
FIG. 10A-FIG. 10B are schematic diagrams of creating a preview stream and a callback stream according to an embodiment of this application;
FIG. 11A is a schematic diagram of an interface during scanning in the prior art;
FIG. 11B is a schematic diagram of an interface during scanning according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of another electronic device according to an embodiment of this application;
FIG. 13 is a callback stream processing flowchart according to an embodiment of this application;
FIG. 14 is a schematic diagram of another interface according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "multiple" means two or more than two.
In a callback stream scenario, different APPs create the callback stream at different specific times after creating the preview stream, according to their own business logic. For example, in some prior art, referring to FIG. 1B, after the APP sets a callback function, the HAL layer may update the previous preview stream and then begin creating the callback stream; the APP then obtains the callback stream data through the callback function and processes it. In some other prior art, referring to FIG. 1C, before setting the callback function, the APP may skip processing such as focusing or initialization, and directly stop or update the preview stream before beginning to create the callback stream.
The preview stream is a data stream; preview stream data includes the data information of images captured by the camera. The preview stream is used to return the preview picture captured by the camera to the APP so that the APP can display a preview image on the screen.
The callback stream is also a data stream; callback stream data likewise includes the data information of images captured by the camera. Callback stream data is used to return the data information of the captured images to the APP so that the APP can process the captured image data to implement a specific function. For example, the electronic device can use callback stream data for recognition processing during code scanning, or for video encoding/decoding and uploading during video chatting, so as to implement the code scanning or video chat function.
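  The callback pattern described above, and the core idea of this application of handing back data obtained in advance, can be sketched as follows. This is a simplified model, not the Android camera API: `set_callback`, `deliver`, and the buffering of pre-fetched frames are assumptions used purely for illustration:

```python
# Hedged sketch: the APP registers a callback function, and the camera layer
# hands each callback-stream frame to it. Frames obtained before the APP
# registers (the "pre-created" callback stream) are buffered and returned
# immediately once the callback is set.
class Camera:
    def __init__(self):
        self.on_frame = None
        self.buffered = []           # frames obtained before registration

    def set_callback(self, fn):
        self.on_frame = fn
        for frame in self.buffered:  # pre-fetched data returned right away
            fn(frame)
        self.buffered.clear()

    def deliver(self, frame):
        (self.on_frame or self.buffered.append)(frame)

seen = []
cam = Camera()
cam.deliver("frame0")            # arrives before the callback is set
cam.set_callback(seen.append)    # APP sets callback per its business logic
cam.deliver("frame1")
print(seen)  # ['frame0', 'frame1']
```

  The point of the sketch is only the ordering: because the stream already exists, the APP receives data as soon as it registers, instead of waiting for a stream to be created at that moment.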
For example, in a code scanning scenario, the preview stream data and callback stream data may include the data information of the two-dimensional code image captured by the camera. Moreover, after the preview stream is created, the preview stream data is continuously obtained and updated; after the callback stream is created, the callback stream data is continuously obtained and updated.
It should be noted that because the preview stream and the callback stream serve different purposes and are processed differently after being returned to the APP, the specific data formats of the two-dimensional code image data they carry may also differ. For example, the data format of the preview stream may be format 33 HAL_PIXEL_FORMAT_BLOB or format 34 HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED, and the data format of the callback stream may be format 35 HAL_PIXEL_FORMAT_YCbCr_420_888.
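  For reference, the pixel-format constants named above can be written out as follows (the numeric values are the ones cited in the text; the `STREAM_FORMATS` grouping is only an illustrative summary, not an API):

```python
# HAL pixel-format constants as named in the description; the preview and
# callback streams request different formats because their data is consumed
# differently after being returned to the APP.
HAL_PIXEL_FORMAT_BLOB = 33
HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED = 34
HAL_PIXEL_FORMAT_YCbCr_420_888 = 35

STREAM_FORMATS = {
    "preview": (HAL_PIXEL_FORMAT_BLOB,
                HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED),
    "callback": (HAL_PIXEL_FORMAT_YCbCr_420_888,),
}
print(STREAM_FORMATS["callback"])  # (35,)
```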
It should also be noted that when creating the preview stream and the callback stream, the APP at the upper layer of the electronic device needs to call lower-layer interfaces, and the lower-layer interfaces in turn need to notify the underlying hardware to perform corresponding settings before the creation of the preview stream and the callback stream can be completed and the preview stream data and callback stream data obtained. The creation of the preview stream and the callback stream therefore takes a certain amount of time.
In the prior art shown in FIG. 1A-FIG. 1C, in a callback stream scenario, after starting the camera through the APP, the electronic device creates the preview stream and the callback stream serially, and a long time passes after the preview stream is created before the callback stream can be created and the callback stream data obtained. The APP therefore needs a long time before it can process the callback stream data and implement its camera-related function.
For example, in FIG. 1A-FIG. 1C, starting the camera takes about 110 ms, creating and stopping the preview stream each take about 100 ms, updating the preview stream takes about 50 ms, creating the callback stream takes about 300 ms, and auto-focusing takes about 360 ms. Thus, in FIG. 1A, the time from starting the camera through the APP to successfully creating the preview stream and callback stream, and the time from starting the camera through the APP to the APP obtaining the callback stream data, can both be about 1070 ms. In FIG. 1B, both times can be about 920 ms. In FIG. 1C, both times can be about 560 ms.
An embodiment of this application provides a callback stream processing method applicable to an electronic device. After the electronic device starts the camera through an APP, if it determines that the current scenario is a callback stream scenario, then when the APP instructs the preview stream to be created, the operating system can also create the callback stream; that is, the electronic device can set up the preview stream and the callback stream in parallel, pulling up the callback stream in advance and obtaining callback stream data early. Then, after the APP sets the callback function, the electronic device can directly return the callback stream data obtained in advance to the APP through the callback function for processing. Compared with the serial creation of the preview stream and the callback stream in the prior art, the parallel creation method provided in this embodiment can reduce the time the electronic device takes to create the two streams and the time the APP takes to obtain the callback stream data, thereby shortening the time the APP needs to process the callback stream data to implement camera-related functions, reducing the user's waiting time, and improving the user experience.
The camera-related functions an APP can implement by invoking the camera and obtaining the callback stream may include: code scanning (two-dimensional codes, barcodes, etc.), video chatting, AR, object recognition, question scanning, bank card scanning, or document scanning. For example, the APP may be WeChat, Alipay, QQ, FaceTime, Skype, Taobao, Meituan, or JD.
For example, the electronic device may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or a smart home device. The embodiments of this application do not specifically limit the device type of the electronic device.
For example, FIG. 2 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of the application does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated into one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from this memory, avoiding repeated access, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple groups of I2C buses and may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface so that the processor 110 and the touch sensor 180K communicate over the I2C bus interface, implementing the touch function of the electronic device 100.
The I2S interface can be used for audio communication. In some embodiments, the processor 110 may include multiple groups of I2S buses and may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface to implement answering calls through a Bluetooth headset.
The PCM interface can also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel communication. In some embodiments, the UART interface is typically used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface to implement playing music through a Bluetooth headset.
The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193. MIPI interfaces include the camera serial interface (CSI), the display serial interface (DSI), and the like. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to implement the shooting function of the electronic device 100, and communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface can be configured by software, and can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100, to transfer data between the electronic device 100 and peripheral devices, or to connect headphones and play audio through them. The interface can also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the application are merely illustrative and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may use interface connection modes different from those in the above embodiment, or a combination of multiple interface connection modes.
The charging management module 140 is configured to receive charging input from a charger, which may be a wireless or wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
The power management module 141 connects the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and so on. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may be provided in the same device.
The wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization; for example, the antenna 1 can be multiplexed as a diversity antenna for the wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. It can also amplify signals modulated by the modem processor and convert them into electromagnetic waves radiated through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be sent into a medium/high-frequency signal; the demodulator demodulates received electromagnetic wave signals into low-frequency baseband signals and transmits the demodulated low-frequency baseband signals to the baseband processor for processing. After processing by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B) or displays images or videos through the display screen 194. In some embodiments, the modem processor may be an independent device; in other embodiments, it may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. It can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flex light-emitting diode (FLED), Miniled, MicroLed, Micro-oLed, quantum dot light emitting diodes (QLED), and so on. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
In the embodiments of this application, the display screen 194 may be used to display the interfaces of applications, the interface after an application starts the camera, and so on.
The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and so on.
The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the image's noise, brightness, and skin tone, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
In the embodiments of this application, the camera can capture image data and return the preview stream and the callback stream to the electronic device 100.
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also learn continuously. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving music and video files in the external memory card.
The internal memory 121 can be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and applications required by at least one function (such as a sound playback function or an image playback function). The data storage area can store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
In the embodiments of this application, the processor 110 can run the instructions stored in the internal memory 121 so that, when an application invokes the camera in a callback stream scenario, the preview stream and the callback stream are created in parallel, pulling up the callback stream in advance and obtaining callback stream data early; later, after the application sets the callback function, the callback stream data obtained in advance is returned directly to the application through the callback function for processing, which reduces the time taken to create the preview stream and the callback stream and the time the application needs to obtain the callback stream data.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.
The audio module 170 is used to convert digital audio information into analog audio signal output and to convert analog audio input into digital audio signals. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
The speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals. The electronic device 100 can be used to listen to music or answer hands-free calls through the speaker 170A.
The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 answers a call or a voice message, the receiver 170B can be placed close to the ear to hear the voice.
The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, can implement a noise reduction function. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and so on.
The headset jack 170D is used to connect a wired headset. The headset jack 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A senses pressure signals and can convert them into electrical signals. In some embodiments, the pressure sensor 180A may be disposed on the display 194. There are many types of pressure sensor 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display 194, the electronic device 100 detects the strength of the touch operation through the pressure sensor 180A, and may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different strengths may correspond to different operation instructions. For example, when a touch operation whose strength is less than a first pressure threshold acts on the icon of a messaging application, an instruction to view the message is executed; when a touch operation whose strength is greater than or equal to the first pressure threshold acts on the icon, an instruction to create a new message is executed.
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (namely the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the jitter angle of the electronic device 100, calculates from the angle the distance the lens module needs to compensate, and lets the lens counteract the jitter of the electronic device 100 through reverse motion, thereby achieving stabilization. The gyroscope sensor 180B may also be used in navigation and motion-sensing gaming scenarios.
The barometric pressure sensor 180C measures air pressure. In some embodiments, the electronic device 100 calculates the altitude from the air pressure value measured by the barometric pressure sensor 180C, to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover through the magnetic sensor 180D, and then set features such as automatic unlocking on flip-open according to the detected opening/closing state of the case or cover.
The acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in all directions (generally on three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device, and applied to landscape/portrait switching, pedometers, and the like.
The distance sensor 180F measures distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The electronic device 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 can determine that there is no object nearby. The electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the device close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used for automatic unlocking and screen locking in leather case mode and pocket mode.
The ambient light sensor 180L senses ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display 194 according to the perceived ambient light, and can also automatically adjust the white balance when taking photos. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H collects fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 180J detects temperature. In some embodiments, the electronic device 100 executes a temperature handling policy based on the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 lowers the performance of a processor near the temperature sensor 180J, to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as the "touch panel". The touch sensor 180K may be disposed on the display 194, and the touch sensor 180K and the display 194 form a touchscreen, also called a "touch screen". The touch sensor 180K detects touch operations acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display 194.
For example, the touch sensor 180K may detect a touch operation by the user intended to use a camera-related function of an application, a function that requires starting the camera.
The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the human pulse and receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset to form a bone conduction headset. The audio module 170 can parse out a voice signal from the vibration signal of the vocal-part vibrating bone acquired by the bone conduction sensor 180M, to implement a voice function. The application processor can parse heart rate information from the blood pressure pulsation signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 can generate vibration prompts. The motor 191 may be used for incoming-call vibration prompts and for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects, and touch operations acting on different areas of the display 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may likewise correspond to different vibration feedback effects. Touch vibration feedback effects may also be customized.
The indicator 192 may be an indicator light, and may be used to indicate charging status and battery changes, and to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 connects a SIM card. A SIM card can be inserted into or removed from the SIM card interface 195 to make contact with or be separated from the electronic device 100. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the cards may be of the same or different types. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card. The eSIM card may be embedded in the electronic device 100 and cannot be separated from it.
In the embodiments of this application, the touch sensor 180K or another detection component can detect an operation by the user to use a camera-related function of an application, a function that requires starting the camera. After the application starts the camera, when determining that the current scenario is a callback stream scenario, the processor 110 can create a preview stream and a callback stream in parallel, thereby bringing up the callback stream in advance and obtaining callback stream data early. Later, after the application sets a callback function, the callback stream data obtained in advance can be returned directly to the application through the callback function for processing. This reduces the time the electronic device spends creating the preview stream and the callback stream, reduces the time the application takes to obtain callback stream data, shortens the duration of operations by which the application implements camera-related functions, reduces the user's waiting time, and improves user experience.
It can be understood that, to implement the callback stream processing function, the electronic device includes corresponding hardware and/or software modules for executing each function. In combination with the algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementation should not be considered beyond the scope of this application.
In this embodiment, the electronic device may be divided into functional modules according to the above method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated in one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is merely a logical functional division; there may be other division manners in actual implementation.
For example, in one division manner, referring to FIG. 3, the electronic device may include an application layer, an application framework layer, an HAL layer, a kernel layer, and a hardware layer, where the application layer, the application framework layer, the HAL layer, and the kernel layer are software layers.
The application layer may include a series of application packages. For example, the application packages may include applications such as camera, gallery, calendar, phone, maps, navigation, WLAN, Bluetooth, music, video, and messaging.
In the embodiments of this application, the application layer includes a first APP that can invoke the camera to implement a first function, and the first APP needs to create a callback stream when invoking the camera. For example, the first APP may be an application with functions such as code scanning, video chat, AR, intelligent object recognition, question scanning, bank card scanning, or document scanning. For instance, the first APP may be Alipay, WeChat, QQ, FaceTime, or Skype.
The application framework layer provides an application programming interface (API) and a programming framework for applications at the application layer. The application framework layer includes some predefined functions.
The application framework layer may include an activity thread (ActivityThread), a camera service (CameraService), and a scene recognition system.
After an application is started, the operating system provides a corresponding activity thread at the application framework layer to support scheduling related to the application, executing activities, broadcasts, operations related to activity management requests, and the like.
The camera service may be used to manage the camera, including starting and stopping the camera, creating the preview stream and the callback stream, obtaining preview stream data and callback stream data, and reporting callback stream data to the upper-layer application through the callback function.
The scene recognition system is used to identify whether the current application scenario is a callback stream scenario.
The API interfaces in the application framework layer may include a first API interface between the activity thread and the scene recognition system, a second API interface between the activity thread and the camera service, and the like.
In addition, the application framework layer may also include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager manages window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface including a message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager provides the communication functions of the electronic device, for example management of call states (including connected, hung up, and so on).
The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example to announce that a download is complete or as a message reminder. The notification manager may also present notifications in the form of charts or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications appearing on the screen in the form of dialog windows, for example prompting text information in the status bar, playing a prompt tone, vibrating the electronic device, or blinking the indicator light.
The HAL layer abstracts the underlying hardware to provide a unified, abstracted camera service to the upper layers.
The kernel layer is the layer between hardware and software. The kernel layer includes the camera driver, the display driver, the audio driver, the sensor driver, and the like.
The hardware layer includes hardware such as the camera, the display, the speaker, and the sensors. The camera captures still images or video. For example, in a code scanning scenario, the camera can capture the image of a two-dimensional code. An object is projected through the lens as an optical image onto the photosensitive element. The photosensitive element may be a CCD or CMOS phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. For example, the camera may be the camera 193 shown in FIG. 2.
The following describes the callback stream processing method provided in the embodiments of this application by taking the electronic device as a mobile phone with the structures shown in FIG. 2 and FIG. 3, the first function as a code scanning function, and the first APP as WeChat as an example.
In some embodiments, the first API interface and the second API interface may take the form of a software development kit (SDK). For example, the first API interface may be an HWSDK interface, and the second API interface may be a CameraSDK (or camera SDK) interface.
Referring to FIG. 4, the callback stream processing method provided in the embodiments of this application may include:
401. After the mobile phone detects the user's operation to open WeChat, the operating system generates an activity thread corresponding to WeChat.
The user can indicate opening WeChat in many ways, for example by tapping the WeChat icon, by a voice instruction, or by an air gesture.
For example, referring to (a) in FIG. 5, the operation of opening WeChat may be the user tapping the WeChat icon 501.
After the mobile phone detects the user's operation to open WeChat, the operating system of the mobile phone can generate the activity thread corresponding to WeChat.
402. The screen displays the WeChat interface.
The screen may be the display 194 shown in FIG. 2, or a touchscreen formed by combining the display 194 and the touch sensor 180K. For example, the WeChat interface displayed by the mobile phone may be as shown in (b) in FIG. 5.
403. The activity thread sends the first package name (packageName) corresponding to WeChat to the scene recognition system through the HWSDK interface.
The package name is identification information of an application and uniquely identifies an application. After the mobile phone opens WeChat, the activity thread can obtain the first package name corresponding to WeChat and send it to the scene recognition system in the mobile phone.
404. The scene recognition system looks up the reference scene parameter groups corresponding to the first package name.
The scene recognition system includes a parameter matching library, which includes at least one scene parameter group corresponding to callback stream scenarios. The scenarios described by the scene parameter groups in the parameter matching library are callback stream scenarios. Each scene parameter group includes a package name.
The scene recognition system may store a large number of scene parameter groups, from which it can select a smaller number of reference scene parameter groups that include the first package name.
The parameter matching library in the scene recognition system may be integrated in the packaging file of the first APP and obtained when the first APP is downloaded.
Alternatively, a server may perform training and learning through artificial intelligence (AI) to obtain the parameter matching library corresponding to callback stream scenarios, and the mobile phone may request the parameter matching library from the server. Or the parameter matching library may be pushed to the mobile phone by the server periodically, or pushed when the scene parameter groups corresponding to callback stream scenarios are updated.
Or again, the parameter matching library may be saved by the mobile phone during use when it identifies a callback stream scenario by itself, and the parameter matching library may be updated as the mobile phone continuously trains on and learns callback stream scenarios.
405. WeChat starts the camera.
After receiving the user's first operation to use WeChat's code scanning function, WeChat can call the openCamera interface to start the camera (that is, the camera application) to perform the scan through the camera.
After starting the camera, WeChat may execute step 412 to indicate starting the preview. A time t1 elapses between WeChat starting the camera and indicating the start of the preview; for example, t1 may be on the order of hundreds of milliseconds.
After starting the camera, WeChat may also execute steps 406 to 411 to determine whether the current scenario is a callback stream scenario. The time from WeChat starting the camera to obtaining the result of whether the current scenario is a callback stream scenario is t2, and t2 is less than t1; for example, t2 may be a few milliseconds. In this way, the mobile phone can execute step 412 and subsequent steps after determining that the current scenario is a callback stream scenario, thereby creating the preview stream and the first callback stream in parallel.
406. After the mobile phone detects the user's first operation to use WeChat's code scanning function, the screen displays, in response to the first operation, the first interface corresponding to the code scanning function.
For example, referring to (c) in FIG. 5, the first operation to use WeChat's code scanning function may be the user tapping the Scan control 502.
For example, referring to (d) in FIG. 5, the first interface may be the scanning interface displayed after the mobile phone enables the code scanning function.
407. The activity thread obtains the first scene parameter group corresponding to the first interface.
The first scene parameter group describes the scenario and state the first interface is currently in. In addition to the package name, each scene parameter group may also include an activity name (activityName). The activity name can identify the activity (Activity) corresponding to the currently displayed interface. Different combinations of package name and activity name can describe different interface scenarios. For example, the first scene parameter group corresponding to the first interface includes the first package name and the first activity name.
In some embodiments, the same package name and the same activity name may correspond to multiple callback stream scenario interfaces and thus cannot uniquely identify one of them; therefore, the scene parameter group may also include a fragment name (fragmentName), also called a page name. For example, in another APP, the first interface corresponding to code scanning may be as shown in (a) in FIG. 6, and the first interface corresponding to AR may be as shown in (b) in FIG. 6. The first interface for code scanning and the first interface for AR correspond to the same package name and the same activity name but to different page names. Therefore, the first scene parameter group corresponding to the first interface includes the first package name, the first activity name, and the first page name.
408. The activity thread sends the first scene parameter group to the scene recognition system through the HWSDK interface.
409. The scene recognition system determines whether the first scene parameter group matches a scene parameter group in the parameter matching library.
The activity thread can send the first scene parameter group corresponding to the first interface to the scene recognition system, so that the scene recognition system determines whether the first scene parameter group matches a scene parameter group in the parameter matching library.
Having already found, in steps 403 and 404, the smaller set of reference scene parameter groups that include the first package name, the scene recognition system can conveniently and quickly determine whether the reference scene parameter groups include the first scene parameter group, that is, whether the first scene parameter group matches a scene parameter group in the parameter matching library.
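The two-stage match described above — first pre-filtering the library by package name (steps 403-404), then matching the full scene parameter group against the pre-filtered set (step 409) — can be sketched as follows. The library contents, package names, and activity/fragment names below are hypothetical:

```python
# Hypothetical parameter matching library: each scene parameter group is a
# (packageName, activityName, fragmentName) tuple describing a callback
# stream scenario.
PARAM_LIBRARY = {
    ("com.example.chat", "ScanActivity", "QrFragment"),
    ("com.example.chat", "ScanActivity", "ArFragment"),
    ("com.example.pay", "CaptureActivity", "MainFragment"),
}

def reference_groups(package_name):
    # Step 404: pre-filter the library down to groups with this package name.
    return {g for g in PARAM_LIBRARY if g[0] == package_name}

def is_callback_scene(group):
    # Step 409: match the full group against the pre-filtered reference set.
    return group in reference_groups(group[0])

print(is_callback_scene(("com.example.chat", "ScanActivity", "QrFragment")))
print(is_callback_scene(("com.example.chat", "ChatActivity", "QrFragment")))
```

The pre-filter matters when the library is large: the final tuple comparison only runs over the handful of groups sharing the application's package name.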
In some embodiments, the mobile phone may also skip steps 403 and 404; in that case, in step 410 the mobile phone can check the first scene parameter group against the scene parameter groups in the parameter matching library one by one.
410. If the scene recognition system determines that the first scene parameter group matches a scene parameter group in the parameter matching library, it notifies the activity thread through the first API interface that the current scenario is a callback stream scenario.
If the first scene parameter group matches a scene parameter group in the parameter matching library, this indicates that the scenario corresponding to the first interface is a callback stream scenario.
If the scenario corresponding to the first interface is a callback stream scenario, the mobile phone can start the callback stream optimization procedure and thus bring up the callback stream in advance.
411. The activity thread indicates first information to the camera service through the CameraSDK interface.
The first information may be used to indicate that the current scenario is a callback stream scenario and to instruct the camera service to start the callback stream optimization procedure and bring up the callback stream in advance.
412. WeChat indicates starting the preview through the CameraSDK interface.
WeChat may call the startPreview() interface in the CameraSDK interface to indicate starting the preview.
413. After WeChat indicates starting the preview, the camera service creates the preview stream and the first callback stream in parallel according to the first information, and delivers the requests to create the preview stream and the first callback stream to the camera driver through the HAL layer.
WeChat can call the startPreview() interface to indicate starting the preview and notify the camera service through internal code processing logic. The camera service determines, according to the first information indicated by the activity thread, that the current scenario is a callback stream scenario, and therefore starts the callback stream optimization processing procedure and creates the preview stream and the first callback stream in parallel. In this way, the operating system can bring up the first callback stream before WeChat sets the callback function.
Creating the preview stream and the first callback stream in parallel by the camera service includes, as shown in FIG. 7A, the camera service creating the preview stream and the first callback stream at the same time.
Alternatively, it includes the camera service beginning to create the first callback stream after beginning to create the preview stream but before the preview stream creation completes, that is, the creation processes of the preview stream and the first callback stream overlap in time. For example, referring to FIG. 7B, the moment of beginning to create the first callback stream lags the moment of beginning to create the preview stream by a duration T, and T is less than the duration required by the preview stream creation process, that is, the two creation processes overlap in time.
Alternatively, it includes the camera service beginning to create the preview stream after beginning to create the first callback stream but before the first callback stream creation completes, that is, the creation processes overlap in time. For example, referring to FIG. 7C, the moment of beginning to create the preview stream lags the moment of beginning to create the first callback stream by a duration T, and T is less than the duration required by the first callback stream creation process, that is, the two creation processes overlap in time.
In some embodiments, the first information in step 411 may be a flag. In step 413, the mobile phone sets up the first callback stream according to the flag. After step 413, the mobile phone may delete the flag, to prevent the camera service from bringing up a callback stream in advance directly according to the flag next time in a non-callback-stream scenario.
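The lifecycle of that flag — set when the callback stream scenario is recognized, consumed during parallel creation, then deleted so a later non-callback-stream session is unaffected — can be modeled with a minimal sketch. This is an illustrative stand-in, not the actual Android CameraService:

```python
class CameraService:
    """Toy model of the flag-guarded optimization:
    set flag -> create both streams together -> delete flag."""

    def __init__(self):
        self.optimize_flag = False

    def indicate_first_information(self):
        # Step 411: the activity thread marks the callback stream scenario.
        self.optimize_flag = True

    def start_preview(self):
        # Step 413: create streams, in parallel only when the flag is set.
        if self.optimize_flag:
            streams = ["preview", "callback"]  # brought up together
            self.optimize_flag = False         # delete the flag after use
        else:
            streams = ["preview"]              # callback stream created later, on demand
        return streams

service = CameraService()
service.indicate_first_information()
print(service.start_preview())  # flag set: both streams
print(service.start_preview())  # flag already consumed: preview only
```

Deleting the flag after one use is the point of the sketch: without it, a later preview in a non-callback-stream scenario would pay for a callback stream it never needs.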
414. The camera driver delivers the requests to create the preview stream and the first callback stream to the camera hardware.
415. The camera hardware performs the corresponding settings for the preview stream, obtains preview stream data, and returns the preview stream and a notification message of successful preview stream creation to the camera service through the HAL layer.
Creating the preview stream, from start to success, takes a certain time, for example about 100 ms. It can be understood that after the preview stream is created successfully, preview stream data is obtained continuously.
416. The camera hardware performs the corresponding settings for the first callback stream, obtains the callback stream data corresponding to the first callback stream, and returns the callback stream data of the first callback stream and a notification message of successful first callback stream creation to the camera service through the HAL layer.
Creating the first callback stream, from start to success, also takes a certain time, for example about 300 ms. It can be understood that after the first callback stream is created successfully, its callback stream data is obtained continuously.
417. The camera service returns the preview stream data to WeChat.
The camera service returns the preview stream data to WeChat so that WeChat can perform related processing according to the preview stream data. For example, WeChat may display a first image on the first interface according to the preview stream data. The first interface after the first image is displayed may be as shown in FIG. 7D.
Optionally, the mobile phone may also perform initialization processing such as focusing according to its own service logic. For example, the method may further include step 418:
418. WeChat indicates focus processing through the CameraSDK interface.
WeChat may call the autoFocus() interface in the CameraSDK interface to indicate automatic focus processing and notify the camera service through internal code processing logic. The camera service can instruct the camera hardware through the HAL layer to perform the focus processing.
After focusing is completed, WeChat can present the clear, focused preview image to the user through the first interface; that is, the latest preview stream data in the preview stream is the data of the clear image captured by the camera hardware. For example, the second image displayed on the first interface after focus processing may be as shown in FIG. 7E.
After focusing is completed, the second image displayed by WeChat according to the preview stream data includes a clear two-dimensional code image. Before focusing is completed, the first image displayed by WeChat according to the preview stream data may include a blurred two-dimensional code image, may include only part of the two-dimensional code image, or may include a relatively clear two-dimensional code image. Correspondingly, the pixel values of the two-dimensional code's pixels in the first image and the second image are different.
419. WeChat sets a callback function through the CameraSDK interface.
WeChat can determine, according to its own service logic, the timing for setting the callback function (for example after focus processing), and at that time call a callback function interface in the CameraSDK interface to set the callback function. For example, the callback function interface may include the setCallback() interface, the setPreviewCallback() interface, the setOneShotPreviewCallback() interface, or the addCallbackBuffer() interface.
420. The camera service directly returns, through the callback function, the callback stream data obtained after creating the first callback stream to WeChat.
After WeChat calls the callback function interface, the camera service is notified through internal code processing logic. If the camera service determines that no callback stream is currently created, it begins creating one. In the embodiments of this application, however, the camera service determines that the first callback stream has already been created, so it does not create a callback stream again and does not need to deliver a callback stream creation request to the lower layers; instead, it can directly return, through the callback function, the callback stream data obtained by the pre-created first callback stream to WeChat for processing. Therefore, after WeChat sets the callback function, the camera service does not create a callback stream again and incurs no additional creation time.
It is worth noting that after the first callback stream is created successfully, its callback stream data is continuously obtained and updated both before and after focusing completes. In the code scanning scenario, the callback stream data obtained before focusing completes may be invalid information corresponding to a blurred two-dimensional code image, may contain only part of the two-dimensional code information, or may, like the preview stream data obtained after focusing completes, include the complete two-dimensional code information. But before focusing completes, since the callback stream data is not reported to the APP, the callback stream data obtained by the camera service can be discarded; after focusing completes, the latest callback stream data obtained by the camera service can be reported to WeChat for processing.
It can be understood that in scenarios that do not require focusing, such as video chat, the callback stream data is continuously returned to the video chat application and is not discarded.
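The gating just described — discarding callback stream data before focusing completes in scan-type scenarios, while delivering every frame in focus-free scenarios such as video chat — can be sketched as follows (an illustrative model, not the real camera pipeline; the frame labels are invented):

```python
class CallbackStream:
    """Toy model: frames arriving before focus completes are dropped in
    scan-type scenarios; afterwards each frame is reported to the app
    through its callback. Focus-free scenarios deliver every frame."""

    def __init__(self, needs_focus):
        self.focused = not needs_focus  # focus-free streams start "ready"

    def complete_focus(self):
        self.focused = True

    def on_frame(self, frame, app_callback):
        if not self.focused:
            return  # pre-focus frame: discarded, never reported to the APP
        app_callback(frame)

received = []
scan = CallbackStream(needs_focus=True)
scan.on_frame("blurry-qr", received.append)   # dropped
scan.complete_focus()
scan.on_frame("sharp-qr", received.append)    # reported

chat = CallbackStream(needs_focus=False)      # e.g. video chat: no focus gate
chat.on_frame("frame-0", received.append)
print(received)
```

Only the post-focus scan frame and the video-chat frame reach the application, mirroring the two cases in the text.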
In this way, after the mobile phone starts the camera through WeChat, if it determines that the current scenario is a callback stream scenario, it can create the first callback stream in parallel when WeChat indicates creating the preview stream, thereby bringing up the first callback stream in advance and obtaining callback stream data. Later, after WeChat sets the callback function, the camera service does not create a callback stream again but directly returns the callback stream data obtained in advance to WeChat through the callback function. Thus, compared with creating the preview stream and callback stream serially as in the prior art, creating them in parallel reduces the time the mobile phone spends creating the preview stream and the callback stream, and therefore reduces the time WeChat takes to obtain the callback stream data.
For example, FIG. 8A and FIG. 8B show working sequence diagrams of the mobile phone processor obtained with a performance analysis tool, corresponding to the prior art and to the embodiments of this application respectively. Position 1 indicates the memory allocation operation after the camera starts; position 2 indicates the operation of processing the callback stream creation request. Comparing FIG. 8A and FIG. 8B shows that in the prior art the callback stream only begins to be created a long time after the camera starts, whereas with the method provided in the embodiments of this application, the callback stream is created in advance soon after the camera starts.
421. WeChat processes the obtained callback stream data to implement the code scanning function.
After obtaining the callback stream data, WeChat can perform recognition and other processing to implement its code scanning function. A two-dimensional code records data symbol information with black-and-white graphics distributed in a plane (two dimensions) using specific geometric figures according to certain rules. WeChat can parse the black-and-white pixel matrix of the two-dimensional code image described by the callback stream data, obtain the data symbol information corresponding to the black-and-white pixel matrix, and then perform further processing such as recognition and linking according to that data symbol information.
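The parsing step can be hinted at with a deliberately tiny sketch: thresholding a grayscale matrix into black/white modules and packing modules into a data byte. A real two-dimensional code decoder additionally handles locator patterns, masking, and error correction, none of which is shown here; the sample matrix is invented:

```python
def matrix_to_bits(gray_matrix, threshold=128):
    """Threshold grayscale pixels: dark modules -> 1, light modules -> 0."""
    return [[1 if px < threshold else 0 for px in row] for row in gray_matrix]

def bits_to_byte(bits):
    """Pack the first eight bits (row-major order) into one data byte."""
    flat = [b for row in bits for b in row][:8]
    value = 0
    for b in flat:
        value = (value << 1) | b
    return value

gray = [
    [10, 200, 30, 240],
    [250, 20, 220, 15],
]
bits = matrix_to_bits(gray)
print(bits)                # [[1, 0, 1, 0], [0, 1, 0, 1]]
print(bits_to_byte(bits))  # 0b10100101 = 165
```

The point is only that the callback stream hands the application raw pixel data, and the application's own logic turns that matrix into data symbol information.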
422. The screen displays the second interface after the code is scanned successfully.
For example, if the mobile phone scans the two-dimensional code of the Huawei Recruitment official account, the second interface after the scan succeeds may be the Huawei Recruitment official account page shown in FIG. 9.
In the solution described in steps 401 to 422, the mobile phone sets up the preview stream and the first callback stream in parallel. After WeChat indicates setting the callback function, the camera service does not need to create a callback stream again and can directly return the callback stream data of the previously created callback stream to WeChat. The mobile phone spends less time creating the preview stream and callback stream, and WeChat obtains the callback stream data sooner. In the prior art, by contrast, the mobile phone sets up the callback stream serially after creating the preview stream, and a long interval may separate the serial creation of the preview stream and the callback stream, so WeChat takes longer to obtain the callback stream data.
For example, in contrast to the callback stream processing procedures shown in FIG. 1A and FIG. 1B, the callback stream processing procedure provided in the embodiments of this application may be as shown in FIG. 10A. In the situation shown in FIG. 10A, the time from the mobile phone starting the camera through the application to successfully creating the preview stream and callback stream may be about 410 ms, and the time from starting the camera through the application to the application obtaining the callback stream data may be about 570 ms; 410 ms and 570 ms are both clearly less than 1070 ms and 920 ms.
In contrast to the callback stream processing procedure shown in FIG. 1C, the procedure provided in the embodiments of this application may be as shown in FIG. 10B. In the situation shown in FIG. 10B, the time from starting the camera through the application to successfully creating the preview stream and callback stream, and the time from starting the camera through the application to the application obtaining the callback stream data, may both be about 410 ms, clearly less than 560 ms.
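The shape of these comparisons can be reproduced with a back-of-the-envelope timing model. The ~100 ms preview setup and ~300 ms callback setup come from steps 415 and 416 above; the 160 ms serial gap is an assumed example value, chosen only to illustrate the comparison, not a figure from the application:

```python
def serial_total(t_preview, gap_before_callback, t_callback):
    # Prior art: preview first, then (after the app's own logic runs) the
    # callback stream, so the setup times and the gap add up.
    return t_preview + gap_before_callback + t_callback

def parallel_total(t_preview, t_callback, stagger=0):
    # Proposed: the creations overlap in time, so the total is roughly the
    # longer branch plus any stagger between the two start moments.
    return max(t_preview, stagger + t_callback)

t_pre, t_cb = 100, 300  # example setup times from the text (ms)
print(serial_total(t_pre, 160, t_cb))   # serial: 560 ms
print(parallel_total(t_pre, t_cb))      # parallel: 300 ms
```

Whatever the exact gap, the serial total always carries the sum of both setups while the parallel total carries only the larger one, which is why the measured figures above shrink.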
It can be seen that, compared with setting up the preview stream and callback stream serially as in the prior art, setting them up in parallel in the embodiments of this application saves creation time, so the application obtains the callback stream data sooner and the code scanning process is faster.
For example, in the code scanning scenario, the sequence diagram of the scanning process corresponding to the prior art may be as shown in FIG. 11A, and the sequence diagram corresponding to the method provided in the embodiments of this application may be as shown in FIG. 11B. Comparing FIG. 11A and FIG. 11B shows that the method provided in the embodiments of this application makes code scanning take less time and run faster.
Therefore, in the solution provided in the embodiments of this application, after the mobile phone starts the camera through WeChat, if it determines that the current scenario is a callback stream scenario, the mobile phone also creates the first callback stream in parallel when WeChat indicates creating the preview stream, so callback stream data can be obtained in advance. Later, after WeChat sets the callback function according to its own service processing logic, the camera service does not need to create a callback stream again and can directly return the previously obtained callback stream data to WeChat through the callback function. Compared with creating the preview stream and callback stream serially as in the prior art, creating them in parallel reduces the time spent creating the preview stream and callback stream, thereby reducing the time WeChat takes to obtain the callback stream data, shortening the time WeChat takes to process the callback stream data and implement callback stream services such as code scanning, reducing the user's waiting time when using functions such as code scanning, and improving user experience.
In addition, if the parameter matching library on the mobile phone is empty (for example, no parameter matching library is set on the mobile phone), or the parameter matching library does not include the first scene parameter group, the scene recognition system determines in step 409 above that the first scene parameter group does not match any scene parameter group in the parameter matching library. In this case, the mobile phone can create the preview stream, or the preview stream and a second callback stream, according to the prior-art procedure, and then implement the related functions according to the obtained data.
In the subsequent procedure, if the first APP creates the second callback stream according to its own service logic, this indicates that the first interface is a callback stream service scenario; the mobile phone may not have been trained previously, or the parameter matching library may not yet have learned the first scene parameter group. The mobile phone can therefore save the first scene parameter group into the parameter matching library, so that next time, based on the first scene parameter group saved in the parameter matching library, it can determine that the interface scenario corresponding to the first scene parameter group is a callback stream scenario and bring up the callback stream in advance and quickly.
The above description mainly uses the code scanning scenario as an example of a callback stream scenario. The callback stream processing method provided in the embodiments of this application can also be applied to other callback stream scenarios. For example, in a Skype video chat scenario, the method can likewise reduce the time the mobile phone spends creating the preview stream and callback stream and reduce the time the Skype application takes to obtain callback stream data, so the Skype application can more quickly encode, compress, and upload the callback stream data. This reduces Skype's processing time, allows the chat peer to quickly receive the image data sent by the local end, makes the images displayed at the peer more real-time during the chat, and improves user experience.
As another example, in another division manner, referring to FIG. 12, the electronic device may include an application, a framework module, a camera module, and a scene recognition module.
The application may be the first APP described above. The framework module may include multiple API interfaces and can provide the basic framework for the application, and the cooperation flows between the modules are all completed in the framework module.
Specifically, the application can call an API interface of the framework module to start the camera. The framework module can pass the current interface scenario to the scene recognition module to obtain the scene recognition result of whether it is a callback stream scenario. According to the scene recognition result, the framework module determines whether to start the callback stream optimization procedure; if it determines to start it, the framework module notifies the camera module to start the callback stream optimization procedure. In addition, the framework module can also receive the preview data stream and callback data stream uploaded by the camera module, and upload them to the application for use and processing.
The scene recognition module may include an AI module and a recognition module. The AI module may be used to train on the scene parameter groups of callback stream scenarios and store the training results in the parameter matching library. The recognition module may be used to perform callback stream scenario recognition based on the first scene parameter group provided by the framework module, in combination with the scene parameter groups in the parameter matching library, and return the corresponding recognition result to the framework module.
The camera module may include the camera service, the HAL layer, and the driver and hardware. The camera module is used to capture the images and data for the application's camera-related functions and return them to the framework module, and to implement the callback stream optimization processing procedure.
The callback stream optimization processing procedure includes: after the camera service receives the notification from the framework module to start the optimization, it sets the identification information. When the framework module enters the preview procedure, the camera service starts the callback stream optimization procedure, creates the preview stream and callback stream in parallel, then sends the creation requests for the preview stream and callback stream to the driver through the HAL layer, and waits for the preview stream data and callback stream data to be returned.
The above mainly describes the callback stream processing method provided in the embodiments of this application from the perspective of the modules of the electronic device. The operations executed by the modules of the electronic device are the operations executed by the electronic device. From the perspective of the electronic device, the electronic device may use the procedure shown in FIG. 13 to execute the callback stream processing method provided in the above embodiments of this application. The method may include:
1301. The electronic device displays an interface of a first application.
For example, the first application may be WeChat, and the interface of the first application may be the interface shown in (b) in FIG. 5.
1302. The electronic device detects a first operation by the user to use a first function of the first application.
For example, the first application may be WeChat, the first function may be the code scanning function, and the first operation may be the user tapping the Scan control 502 shown in (c) in FIG. 5.
1303. In response to the first operation, the electronic device starts the camera application.
1304. The electronic device displays a first interface corresponding to the first function.
For example, the first interface may be the scanning interface shown in (d) in FIG. 5.
1305. The electronic device creates a preview stream and a first callback stream in parallel.
1306. The electronic device obtains first data, the first data being the callback stream data corresponding to the first callback stream.
1307. The electronic device sets a callback function.
1308. The electronic device provides the first data to the first application through the callback function for processing, to implement the first function.
In some embodiments, the method may further include:
1309. The electronic device displays a second interface after implementing the first function.
For example, the second interface may be the interface shown in FIG. 9 after the code is scanned successfully. In video chat or other callback stream scenarios, the second interface may not be displayed after the first function is implemented.
For the description of steps 1303 to 1309, refer to the related description of steps 405 to 422 above; details are not repeated here.
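Steps 1301-1308 can be strung together in a short, purely illustrative sketch, with plain Python functions and invented names standing in for the device's components (nothing here is the real Android API):

```python
import threading

def process_scan(detect_operation):
    """Walk through steps 1302-1308 as a linear flow, logging each stage."""
    log = []
    if not detect_operation():                 # 1302: first operation detected?
        return log
    log.append("camera started")               # 1303: start the camera application
    log.append("first interface shown")        # 1304: display the first interface

    streams = {}
    def create(name):
        streams[name] = f"{name} data"
    threads = [threading.Thread(target=create, args=(n,))
               for n in ("preview", "callback")]
    for t in threads:                          # 1305: create both streams in parallel
        t.start()
    for t in threads:
        t.join()

    first_data = streams["callback"]           # 1306: obtain the first data
    callback = log.append                      # 1307: the app sets its callback
    callback(f"app processed {first_data}")    # 1308: data handed to the app
    return log

result = process_scan(lambda: True)
print(result)
```

The one structural point the sketch preserves is the ordering: both streams exist (step 1305) before the application ever registers its callback (step 1307), so step 1308 can hand over data immediately.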
In some other embodiments, the mobile phone may also, without displaying the interface of the first application, display the first interface corresponding to the first function in response to the user's first operation to use the first function of the first application. The first interface is the interface of the first application related to the camera function. That is, the mobile phone may skip step 1301 and directly execute steps 1302 to 1309.
For example, referring to FIG. 14, the first operation to use the first function of the first application may be the user tapping the WeChat Scan shortcut icon 1401, and the first interface may be the interface shown in (d) in FIG. 5.
In the solution described by the procedure shown in FIG. 13, after the electronic device starts the camera through the first application, if it determines that the current scenario is a callback stream scenario, the electronic device also creates the first callback stream in parallel when the first application indicates creating the preview stream, so callback stream data can be obtained in advance. Later, after the first application sets the callback function according to its own service processing logic, the electronic device does not need to create a callback stream again and can directly return the previously obtained callback stream data to the first application through the callback function. Therefore, compared with creating the preview stream and callback stream serially, creating them in parallel can reduce the time the electronic device spends creating the preview stream and callback stream, thereby reducing the time the first application takes to obtain the callback stream data, shortening the time the first application takes to process the callback stream data and implement the first function, reducing the user's waiting time when using the first function, and improving user experience.
In addition, an embodiment of this application further provides an electronic device, including one or more processors, a memory, and one or more computer programs. The one or more computer programs are stored in the memory and include instructions. When the instructions are executed by the one or more processors, the electronic device is caused to execute the steps in the above embodiments to implement the above callback stream processing method.
An embodiment of this application further provides a computer storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to execute the above related method steps to implement the callback stream processing method in the above embodiments.
An embodiment of this application further provides a computer program product which, when run on a computer, causes the computer to execute the above related steps to implement the callback stream processing method executed by the electronic device in the above embodiments.
In addition, an embodiment of this application further provides an apparatus, which may specifically be a chip, a component, or a module. The apparatus may include a processor and a memory connected to each other, where the memory stores computer-executable instructions. When the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the callback stream processing method executed by the electronic device in the above method embodiments.
The electronic device, computer storage medium, computer program product, and chip provided in this embodiment are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Through the description of the above implementations, a person skilled in the art can understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the apparatus can be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic. For example, the division of modules or units is merely a logical functional division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections of apparatuses or units may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (25)

  1. A callback stream processing method, applied to an electronic device, wherein the method comprises:
    detecting a first operation by a user to use a first function of a first application;
    starting a camera application in response to the first operation;
    displaying a first interface corresponding to the first function;
    creating a preview stream and a first callback stream in parallel;
    obtaining first data, the first data being callback stream data corresponding to the first callback stream;
    setting a callback function; and
    providing the first data to the first application through the callback function for processing, to implement the first function.
  2. The method according to claim 1, wherein the electronic device comprises a parameter matching library, the parameter matching library comprises at least one scene parameter group, and the scene parameter group comprises a package name, an activity name, and a page name of an application; and before the creating a preview stream and a first callback stream in parallel, the method further comprises:
    obtaining a first scene parameter group corresponding to the first interface; and
    determining that the first scene parameter group matches a scene parameter group in the parameter matching library.
  3. The method according to claim 2, wherein after the displaying a first interface corresponding to the first function, the method further comprises:
    obtaining a first package name of the first application corresponding to the first interface; and
    looking up, in the parameter matching library, reference scene parameter groups comprising the first package name; and
    the determining that the first scene parameter group matches a scene parameter group in the parameter matching library comprises:
    determining that the first scene parameter group matches a reference scene parameter group.
  4. The method according to claim 2 or 3, wherein the at least one scene parameter group in the parameter matching library is integrated in a packaging file of the first application;
    or, the at least one scene parameter group in the parameter matching library is pushed by a server;
    or, the at least one parameter group in the parameter matching library is obtained through learning by the electronic device.
  5. The method according to any one of claims 2 to 4, wherein the method further comprises:
    if the first scene parameter group does not match any scene parameter group in the parameter matching library, creating a preview stream;
    if a callback function is set, creating a second callback stream;
    obtaining second data, the second data being callback stream data corresponding to the second callback stream;
    providing the second data to the first application through the callback function for processing; and
    saving the first scene parameter group into the parameter matching library.
  6. The method according to any one of claims 1 to 5, wherein before the obtaining first data, the method further comprises:
    focusing through the camera application.
  7. The method according to any one of claims 1 to 6, wherein the first function is a code scanning function, a video chat function, an augmented reality (AR) function, an intelligent object recognition function, a question scanning function, a bank card scanning function, or a document scanning function.
  8. The method according to any one of claims 2 to 7, wherein an operating system of the electronic device comprises a camera service, and after the determining that the first scene parameter group matches a scene parameter group in the parameter matching library, the method further comprises:
    setting, by the camera service, identification information; and
    the creating a preview stream and a first callback stream in parallel comprises:
    indicating, by the first application, to start a preview;
    creating, by the camera service, the preview stream and the first callback stream in parallel according to the identification information; and
    deleting, by the camera service, the identification information after creating the first callback stream.
  9. The method according to claim 8, wherein the obtaining first data comprises:
    obtaining, by the camera service, the first data;
    the setting a callback function comprises:
    setting, by the first application, the callback function; and
    the providing the first data to the first application through the callback function for processing comprises:
    providing, by the camera service, the first data to the first application through the callback function for processing.
  10. A method for scanning a two-dimensional code, applied to an electronic device, wherein the method comprises:
    detecting a first operation by a user on a code scanning function;
    starting a camera application in response to the first operation;
    displaying a first interface corresponding to the code scanning function, a lens of the camera being aimed at a two-dimensional code;
    creating a preview stream and a callback stream in parallel;
    obtaining first data according to the preview stream;
    displaying a first image of the two-dimensional code according to the first data;
    after the electronic device completes focusing, obtaining second data according to the callback stream, the second data and the first data having different data formats; and
    displaying, according to the second data, a second interface after the code is scanned successfully.
  11. The method according to claim 10, wherein before the electronic device completes focusing, the method further comprises:
    obtaining third data according to the callback stream; and
    discarding the third data.
  12. The method according to claim 10 or 11, wherein after the electronic device completes focusing, the method further comprises:
    obtaining fourth data according to the preview stream; and
    displaying a second image of the two-dimensional code according to the fourth data, pixel values of pixels in the first image and the second image being different.
  13. An electronic device, comprising:
    a screen, configured to display interfaces;
    one or more processors; and
    a memory storing code;
    wherein when the code is executed by the electronic device, the electronic device is caused to perform the following steps:
    detecting a first operation by a user to use a first function of a first application;
    starting a camera application in response to the first operation;
    displaying a first interface corresponding to the first function;
    creating a preview stream and a first callback stream in parallel;
    obtaining first data, the first data being callback stream data corresponding to the first callback stream;
    setting a callback function; and
    providing the first data to the first application through the callback function for processing, to implement the first function.
  14. The electronic device according to claim 13, wherein the electronic device comprises a parameter matching library, the parameter matching library comprises at least one scene parameter group, and the scene parameter group comprises a package name, an activity name, and a page name of an application; and when the code is executed by the electronic device, the electronic device is further caused to perform the following steps:
    before the creating a preview stream and a first callback stream in parallel, obtaining a first scene parameter group corresponding to the first interface; and
    determining that the first scene parameter group matches a scene parameter group in the parameter matching library.
  15. The electronic device according to claim 14, wherein when the code is executed by the electronic device, the electronic device is further caused to perform the following steps:
    after the displaying a first interface corresponding to the first function, obtaining a first package name of the first application corresponding to the first interface; and
    looking up, in the parameter matching library, reference scene parameter groups comprising the first package name; and
    the determining that the first scene parameter group matches a scene parameter group in the parameter matching library comprises:
    determining that the first scene parameter group matches a reference scene parameter group.
  16. The electronic device according to claim 14 or 15, wherein the at least one scene parameter group in the parameter matching library is integrated in a packaging file of the first application;
    or, the at least one scene parameter group in the parameter matching library is pushed by a server;
    or, the at least one parameter group in the parameter matching library is obtained through learning by the electronic device.
  17. The electronic device according to any one of claims 14 to 16, wherein when the code is executed by the electronic device, the electronic device is further caused to perform the following steps:
    if the first scene parameter group does not match any scene parameter group in the parameter matching library, creating a preview stream;
    if a callback function is set, creating a second callback stream;
    obtaining second data, the second data being callback stream data corresponding to the second callback stream;
    providing the second data to the first application through the callback function for processing; and
    saving the first scene parameter group into the parameter matching library.
  18. The electronic device according to any one of claims 13 to 17, wherein when the code is executed by the electronic device, the electronic device is further caused to perform the following step:
    before the obtaining first data, focusing through the camera application.
  19. The electronic device according to any one of claims 14 to 18, wherein an operating system of the electronic device comprises a camera service, and when the code is executed by the electronic device, the electronic device is further caused to perform the following step:
    after the determining that the first scene parameter group matches a scene parameter group in the parameter matching library, setting, by the camera service, identification information; and
    the creating a preview stream and a first callback stream in parallel specifically comprises:
    indicating, by the first application, to start a preview;
    creating, by the camera service, the preview stream and the first callback stream in parallel according to the identification information; and
    deleting, by the camera service, the identification information after creating the first callback stream.
  20. The electronic device according to claim 19, wherein the obtaining first data specifically comprises:
    obtaining, by the camera service, the first data;
    the setting a callback function specifically comprises:
    setting, by the first application, the callback function; and
    the providing the first data to the first application through the callback function for processing specifically comprises:
    providing, by the camera service, the first data to the first application through the callback function for processing.
  21. An electronic device, comprising:
    a screen, configured to display interfaces;
    one or more processors; and
    a memory storing code;
    wherein when the code is executed by the electronic device, the electronic device is caused to perform the following steps:
    detecting a first operation by a user on a code scanning function;
    starting a camera application in response to the first operation;
    displaying a first interface corresponding to the code scanning function, a lens of the camera being aimed at a two-dimensional code;
    creating a preview stream and a callback stream in parallel;
    obtaining first data according to the preview stream;
    displaying a first image of the two-dimensional code according to the first data;
    after the electronic device completes focusing, obtaining second data according to the callback stream, the second data and the first data having different data formats; and
    displaying, according to the second data, a second interface after the code is scanned successfully.
  22. The electronic device according to claim 21, wherein when the code is executed by the electronic device, the electronic device is further caused to perform the following steps:
    before the completing of the focusing, obtaining third data according to the callback stream; and
    discarding the third data.
  23. The electronic device according to claim 21 or 22, wherein when the code is executed by the electronic device, the electronic device is further caused to perform the following steps:
    after the electronic device completes focusing, obtaining fourth data according to the preview stream; and
    displaying a second image of the two-dimensional code according to the fourth data, pixel values of the first image and the second image being different.
  24. A computer storage medium, comprising computer instructions which, when run on an electronic device, cause the electronic device to execute the callback stream processing method according to any one of claims 1 to 9, or the method for scanning a two-dimensional code according to any one of claims 10 to 12.
  25. A computer program product which, when run on a computer, causes the computer to execute the callback stream processing method according to any one of claims 1 to 9, or the method for scanning a two-dimensional code according to any one of claims 10 to 12.
PCT/CN2020/114342 2019-09-12 2020-09-10 一种回调流的处理方法及设备 WO2021047567A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022516016A JP7408784B2 (ja) 2019-09-12 2020-09-10 コールバックストリーム処理方法およびデバイス
EP20862922.0A EP4020966A4 (en) 2019-09-12 2020-09-10 REMINDER FLOW PROCESSING METHOD AND DEVICE
US17/693,032 US11849213B2 (en) 2019-09-12 2022-03-11 Parallel preview stream and callback stream processing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910867438.2A CN112492193B (zh) 2019-09-12 2019-09-12 一种回调流的处理方法及设备
CN201910867438.2 2019-09-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/693,032 Continuation US11849213B2 (en) 2019-09-12 2022-03-11 Parallel preview stream and callback stream processing method and device

Publications (1)

Publication Number Publication Date
WO2021047567A1 true WO2021047567A1 (zh) 2021-03-18

Family

ID=74865786

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114342 WO2021047567A1 (zh) 2019-09-12 2020-09-10 一种回调流的处理方法及设备

Country Status (5)

Country Link
US (1) US11849213B2 (zh)
EP (1) EP4020966A4 (zh)
JP (1) JP7408784B2 (zh)
CN (2) CN112492193B (zh)
WO (1) WO2021047567A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194211A (zh) * 2021-03-25 2021-07-30 深圳市优***科技股份有限公司 一种扫描头的控制方法及***
CN113766120A (zh) * 2021-08-09 2021-12-07 荣耀终端有限公司 拍摄模式的切换方法及电子设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220446A (zh) * 2021-03-26 2021-08-06 西安神鸟软件科技有限公司 一种图像或视频数据处理方法及终端设备
CN115543649B (zh) * 2022-08-31 2023-11-03 荣耀终端有限公司 一种数据获取方法及电子设备
CN116720533B (zh) * 2022-09-19 2024-05-17 荣耀终端有限公司 扫码方法、电子设备及可读存储介质
CN116193001B (zh) * 2023-02-16 2023-11-03 中国人民解放军61660部队 一种用于实现NDIS6-Hooking的方法
CN116341586B (zh) * 2023-02-27 2023-12-01 荣耀终端有限公司 扫码方法、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903039A (zh) * 2014-03-26 2014-07-02 深圳大学 隐形编码全息防伪膜、其制作方法及识别***
US20180046353A1 (en) * 2016-08-12 2018-02-15 Line Corporation Method and system for video recording
CN108845861A (zh) * 2018-05-17 2018-11-20 北京奇虎科技有限公司 虚拟摄像头的实现方法及装置
CN109669783A (zh) * 2017-10-13 2019-04-23 阿里巴巴集团控股有限公司 数据处理方法和设备

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9153074B2 (en) * 2011-07-18 2015-10-06 Dylan T X Zhou Wearable augmented reality eyeglass communication device including mobile phone and mobile computing via virtual touch screen gesture control and neuron command
JP4300811B2 (ja) * 2003-02-03 2009-07-22 コニカミノルタホールディングス株式会社 撮像装置及び携帯端末
US9766089B2 (en) * 2009-12-14 2017-09-19 Nokia Technologies Oy Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image
US20110161875A1 (en) * 2009-12-29 2011-06-30 Nokia Corporation Method and apparatus for decluttering a mapping display
US20110279446A1 (en) * 2010-05-16 2011-11-17 Nokia Corporation Method and apparatus for rendering a perspective view of objects and content related thereto for location-based services on mobile device
WO2012164149A1 (en) * 2011-05-31 2012-12-06 Nokia Corporation Method and apparatus for controlling a perspective display of advertisements using sensor data
US9342610B2 (en) * 2011-08-25 2016-05-17 Microsoft Technology Licensing, Llc Portals: registered objects as virtualized, personalized displays
KR101371958B1 (ko) * 2012-08-31 2014-03-07 주식회사 팬택 콜백 정보 표시 장치 및 상기 장치의 동작 방법
CN103716691B (zh) 2012-09-29 2017-10-27 腾讯科技(深圳)有限公司 一种视频采集方法和装置
EP2909699A1 (en) 2012-10-22 2015-08-26 VID SCALE, Inc. User presence detection in mobile devices
US9124762B2 (en) * 2012-12-20 2015-09-01 Microsoft Technology Licensing, Llc Privacy camera
US20140278053A1 (en) * 2013-03-14 2014-09-18 Samsung Electronics Co., Ltd. Navigation system with dynamic update mechanism and method of operation thereof
CN103345385A (zh) * 2013-07-29 2013-10-09 北京汉邦高科数字技术股份有限公司 一种串行事件转换成并行事件的方法
CN103995787A (zh) 2014-05-15 2014-08-20 Tcl集团股份有限公司 一种摄像头应用的调控方法及装置
US9703615B2 (en) 2014-07-18 2017-07-11 Facebook, Inc. Stepped sizing of code execution
CN104184947B (zh) * 2014-08-22 2019-01-29 惠州Tcl移动通信有限公司 一种远程拍照调焦的方法及***
KR20160041435A (ko) * 2014-10-07 2016-04-18 엘지전자 주식회사 이동 단말기 및 그것의 제어방법
CN105959630B (zh) 2016-05-17 2019-03-12 中国人民解放军海军航空大学 基于远距离光电摄像的飞机姿态近距离观测***与方法
CN107800943B (zh) * 2016-08-30 2020-04-03 贵州火星探索科技有限公司 一种摄像***及其控制方法
CN106603910B (zh) * 2016-11-28 2020-09-04 上海传英信息技术有限公司 一种基于智能终端的照片保存方法
CN107360400B (zh) * 2017-07-27 2021-05-28 上海传英信息技术有限公司 一种用于智能设备摄像头的录像方法及录像装置
US10613870B2 (en) * 2017-09-21 2020-04-07 Qualcomm Incorporated Fully extensible camera processing pipeline interface
CN108197029B (zh) * 2018-01-08 2021-06-01 华为技术有限公司 一种获取进程信息的方法和设备
CN108737881A (zh) * 2018-04-27 2018-11-02 晨星半导体股份有限公司 一种信号源实时动态预览方法及***

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903039A (zh) * 2014-03-26 2014-07-02 深圳大学 隐形编码全息防伪膜、其制作方法及识别***
US20180046353A1 (en) * 2016-08-12 2018-02-15 Line Corporation Method and system for video recording
CN109669783A (zh) * 2017-10-13 2019-04-23 阿里巴巴集团控股有限公司 数据处理方法和设备
CN108845861A (zh) * 2018-05-17 2018-11-20 北京奇虎科技有限公司 虚拟摄像头的实现方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4020966A4

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194211A (zh) * 2021-03-25 2021-07-30 深圳市优***科技股份有限公司 一种扫描头的控制方法及***
CN113194211B (zh) * 2021-03-25 2022-11-15 深圳市优***科技股份有限公司 一种扫描头的控制方法及***
CN113766120A (zh) * 2021-08-09 2021-12-07 荣耀终端有限公司 拍摄模式的切换方法及电子设备

Also Published As

Publication number Publication date
EP4020966A4 (en) 2022-10-26
US20220201201A1 (en) 2022-06-23
CN112492193A (zh) 2021-03-12
EP4020966A1 (en) 2022-06-29
US11849213B2 (en) 2023-12-19
JP2022547572A (ja) 2022-11-14
CN114615423B (zh) 2024-05-14
CN114615423A (zh) 2022-06-10
JP7408784B2 (ja) 2024-01-05
CN112492193B (zh) 2022-02-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20862922

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022516016

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020862922

Country of ref document: EP

Effective date: 20220324