CN114520867B - Camera control method based on distributed control and terminal device

Info

Publication number
CN114520867B
Authority
CN
China
Prior art keywords
camera
local
authorized
task
virtual
Prior art date
Legal status
Active
Application number
CN202011308989.4A
Other languages
Chinese (zh)
Other versions
CN114520867A (en)
Inventor
占航
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011308989.4A
Priority to CN202210973225.XA
Priority to PCT/CN2021/130672
Publication of CN114520867A
Application granted
Publication of CN114520867B

Classifications

    • G05B 19/042: Programme control other than numerical control, i.e. in sequence controllers or logic controllers, using digital processors
    • G05B 19/0423: Programme control using digital processors; Input/output
    • H04N 23/60: Cameras or camera modules comprising electronic image sensors; Control of cameras or camera modules
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • Y02P 90/02: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation; Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to a camera control method based on distributed control and a terminal device. The method, applied to a first device, comprises the following steps: displaying candidate cameras that can be controlled by the first device, wherein the candidate cameras comprise a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device; determining, from the candidate cameras, a selected camera and a target task that the selected camera needs to execute; and generating a first task command according to a first camera identifier of the selected camera in the first device and the target task, and sending the first task command to the selected camera, so that the selected camera controls execution of the target task according to the first task command, wherein each first virtual camera of the first device is used for controlling the mapped local camera of a second device with which it has at least one level of mapping relationship. The method and the device provided by the application realize direct and/or indirect control of the local cameras of a plurality of second devices through one first device, and meet the camera control requirements of different application scenarios.

Description

Camera control method based on distributed control and terminal device
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a camera control method based on distributed control and a terminal device.
Background
With the development of cameras, more and more types of devices are equipped with cameras, for example, smart TVs equipped with cameras, Bluetooth cameras, home cameras, road monitoring cameras, unmanned aerial vehicles equipped with cameras, and the like. In the related art, a device equipped with a camera can be remotely controlled through a terminal device such as a mobile phone or through a system, so that tasks such as photo taking and video shooting can be executed. Taking a control terminal controlling a plurality of unmanned aerial vehicles to take photos as an example, in the related art each unmanned aerial vehicle exchanges network signals with the control terminal over Ethernet: the control terminal sends control instructions such as a photographing instruction to each unmanned aerial vehicle over the network, the unmanned aerial vehicle parses the instruction and performs the photographing action, and then transmits the photo data back to the control terminal over the network. To realize this control, every unmanned aerial vehicle must contain a network module and be able to establish a network connection with the control terminal, so that only direct control is possible; the control terminal can control only the unmanned aerial vehicles that are directly connected to it through the network, and cannot control unmanned aerial vehicles that cannot be directly connected to the control terminal through the network. How to realize indirect control of camera devices on the basis of direct control of camera devices, so as to meet the requirements of different usage scenarios of camera devices, is a technical problem to be solved urgently.
Disclosure of Invention
In view of this, a camera control method based on distributed control and a terminal device are provided.
In a first aspect, an embodiment of the present application provides a camera control method based on distributed control, which is applied to a first device, and the method includes:
displaying candidate cameras which can be controlled by the first device, wherein the candidate cameras comprise a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device;
determining, according to the detected task creation operation for the candidate cameras, a selected camera and a target task that the selected camera needs to execute from the candidate cameras;
generating a first task command according to a first camera identification of the selected camera in the first device and the target task;
sending the first task command to the selected camera to cause the selected camera to control execution of the target task according to the first task command,
wherein the first device comprises at least one first virtual camera, each first virtual camera is used for realizing the control of the local camera of the mapped second device, at least one level of mapping relation exists between each first virtual camera and the local camera of the mapped second device,
when the first virtual camera and the local camera of the mapped second device are in a multi-level mapping relationship, the second device and the first device are in different local area networks.
By the method provided by the first aspect, the local cameras in a plurality of second devices can be controlled through one first device, and each second device can be directly connected with the first device through the same local area network, or, when it is in a different local area network, indirectly connected with the first device by means of intermediate devices, so that distributed and hierarchical control over the local cameras of different second devices is realized, and the camera control requirements of different application scenarios can be met.
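For illustration only, the following Kotlin sketch models the candidate-camera list and the generation of a first task command described above; all type and function names are assumptions made for explanation and are not defined by the application.

    // Hypothetical data model; names are illustrative, not part of the claimed method.
    data class CameraId(val mappingLevel: Int, val identity: String)      // "first camera identifier"

    sealed class CandidateCamera {
        abstract val id: CameraId
        data class Local(override val id: CameraId) : CandidateCamera()                              // local camera of the first device
        data class Virtual(override val id: CameraId, val remoteDevice: String) : CandidateCamera()  // maps a second device's local camera
    }

    data class FirstTaskCommand(val target: CameraId, val taskDescription: String)

    // Display step: list every camera the first device can control.
    fun displayCandidates(candidates: List<CandidateCamera>) = candidates.forEach { println(it) }

    // Command generation: the command carries the first camera identifier plus the target task.
    fun buildFirstTaskCommand(selected: CandidateCamera, taskDescription: String) =
        FirstTaskCommand(selected.id, taskDescription)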
According to a first aspect, in a first possible implementation manner of the method, the method further includes:
when a device connection request is detected, searching for a third device which can be connected with the first device and satisfies connection conditions, wherein the connection conditions include: the third device is provided with a local camera and/or at least one second virtual camera is created in the third device;
sending a first authorization control request to the third device, and receiving a first authorization indication returned by the third device in response to the first authorization control request;
after determining an authorized first authorized camera according to the first authorization indication, obtaining a second camera identifier of the first authorized camera in the third device, where the first authorized camera includes a local camera of the third device and/or a local camera of a fourth device mapped by the second virtual camera;
determining a first camera identification of the first authorized camera in the first device according to the second camera identification, and creating a first virtual camera for controlling the first authorized camera according to the first camera identification of the first authorized camera,
wherein each second virtual camera is used for realizing the control of the local camera of the fourth device mapped by the second virtual camera, and at least one level of mapping relation exists between the second virtual camera and the mapped local camera of the fourth device.
By a first possible implementation, a first virtual camera is created that is capable of controlling an authorized camera.
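A minimal Kotlin sketch of this authorization handshake follows; the type names, fields, and transport are assumptions for explanation and are not an API defined by the application.

    // Illustrative sketch of searching a third device, requesting authorization, and creating first virtual cameras.
    data class FirstAuthorizationRequest(val requestedCameras: List<String>)
    data class FirstAuthorizationIndication(val authorizedSecondCameraIds: List<String>)   // second camera identifiers

    interface ThirdDevice {
        fun meetsConnectionCondition(): Boolean   // has a local camera and/or at least one second virtual camera
        fun authorize(request: FirstAuthorizationRequest): FirstAuthorizationIndication
    }

    class FirstDevice {
        private val firstVirtualCameras = mutableListOf<String>()   // identified by first camera identifiers

        fun onDeviceConnectionRequest(reachable: List<ThirdDevice>, request: FirstAuthorizationRequest) {
            val third = reachable.firstOrNull { it.meetsConnectionCondition() } ?: return
            val indication = third.authorize(request)
            for (secondId in indication.authorizedSecondCameraIds) {
                val firstId = toFirstCameraId(secondId)      // derive the first camera identifier (see later sketches)
                firstVirtualCameras += firstId               // create a first virtual camera controlling the authorized camera
            }
        }

        // Placeholder derivation; the mapping-level and identity rules are sketched in later implementations.
        private fun toFirstCameraId(secondCameraId: String): String = "mapped:$secondCameraId"
    }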
According to the first possible implementation manner, in a second possible implementation manner of the method, sending a first authorization control request to the third device, and receiving a first authorization indication returned by the third device in response to the first authorization control request includes:
selecting a first request camera according to a request operation for the local camera of the third device and/or the local camera of a fourth device mapped by the second virtual camera that can be controlled by the third device;
generating the first authorization control request according to the first request camera, and sending the first authorization control request to the third device, so that the third device generates the first authorization indication according to the detected authorization operation for the first authorization control request.
Through a second possible implementation manner, the first request camera can be selected according to the needs of the user, and the authorization control requirements of different users are met.
According to the first possible implementation manner, in a third possible implementation manner of the method, determining, according to the second camera identifier, a first camera identifier of the first authorized camera in the first device, and creating, according to the first camera identifier of the first authorized camera, a first virtual camera for controlling the first authorized camera includes:
when the first authorized camera comprises a local camera of a fourth device mapped by a second virtual camera, determining a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera according to a mapping relation level which indicates a mapping relation between the first authorized camera and the second virtual camera in the second camera identification;
determining a first identity identifier of the first authorized camera in the first device according to the identity identifier in the second camera identifier and the identity identifier of the existing camera in the first device corresponding to the first mapping relation level;
and determining a first camera identifier of the first authorized camera in the first device according to the first mapping relation level and the first identity identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identifier.
Through a third possible implementation manner, after the first virtual camera is created, the first device may directly determine the mapping relationship level between the first device and the first authorized camera according to the first camera identifier of the first virtual camera and the first mapping relationship level, so as to facilitate sending of the command and receiving of the data.
According to a third possible implementation manner, in a fourth possible implementation manner of the method, determining, according to the second camera identifier, a first camera identifier of the first authorized camera in the first device, and creating, according to the first camera identifier of the first authorized camera, a first virtual camera for controlling the first authorized camera includes:
determining a first level of mapping as a first level of mapping between the first authorized camera and a first virtual camera that needs to be created to control the first authorized camera when the first authorized camera includes a local camera of the third device.
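A short Kotlin sketch of how the first mapping-relationship level could be derived in the third and fourth implementations; incrementing the level by one for an already-mapped camera is an assumption consistent with the multi-level forwarding described in this application.

    // Derive the first mapping-relationship level for the first camera identifier.
    data class CameraIdentifier(val mappingLevel: Int, val identity: String)

    fun firstMappingLevel(secondCameraId: CameraIdentifier, isLocalCameraOfThirdDevice: Boolean): Int =
        if (isLocalCameraOfThirdDevice) 1                    // fourth implementation: local camera of the third device
        else secondCameraId.mappingLevel + 1                 // third implementation: one more mapping hop than in the third device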
According to a third possible implementation manner, in a fifth possible implementation manner of the method, determining, according to an identity identifier in the second camera identifier and an identity identifier of an existing camera in the first device corresponding to the first mapping relationship level, a first identity identifier of the first authorized camera in the first device includes:
when the identity identifier in the second camera identifier already exists among the identity identifiers of the existing cameras corresponding to the first mapping relationship level in the first device, creating a first identity identifier of the first authorized camera in the first device according to a preset identity identifier creation rule; or
And when the identity identifier in the second camera identifier is different from the identity identifier of the existing camera corresponding to the first mapping relation level in the first device, determining the identity identifier in the second camera identifier as the first identity identifier of the first authorized camera in the first device.
Through the fifth possible implementation manner, uniqueness of the first identity of the first authorized camera among identities of all cameras corresponding to the first mapping relation level that can be controlled by the first device can be ensured, so that the cameras of the same mapping relation level that can be controlled by the first device can be distinguished.
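The identity rule of the fifth implementation can be sketched as follows in Kotlin; the suffix scheme stands in for the "preset identity identifier creation rule", which the application does not fix, so it is an assumption.

    // Reuse the identity from the second camera identifier when it does not collide at this mapping level;
    // otherwise create a new, unique one.
    fun firstIdentity(candidate: String, identitiesAtLevel: Set<String>): String {
        if (candidate !in identitiesAtLevel) return candidate
        var n = 1
        while ("$candidate-$n" in identitiesAtLevel) n++     // assumed creation rule: append a numeric suffix
        return "$candidate-$n"
    }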
According to the first aspect, in a sixth possible implementation manner of the method, determining, according to a detected task creation operation for the candidate cameras, a selected camera and a target task that the selected camera needs to execute from the candidate cameras includes:
determining a selected camera according to the detected selection operation for the candidate cameras;
determining a target task to be executed by the selected camera according to the task setting operation for the selected camera,
the task parameters of the target task comprise at least one of a task type, execution time information and camera parameter setting when the selected camera executes the target task, wherein the task type comprises at least one of the following: a photographing task, a shooting task and an image previewing task.
Through a sixth possible implementation manner, the target task can be accurately set so that the selected camera can execute the target task.
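One possible Kotlin encoding of these task parameters is sketched below; the field names and example values are assumptions chosen for illustration.

    // Illustrative encoding of the target task parameters in the sixth implementation.
    enum class TaskType { PHOTO, VIDEO, PREVIEW }

    data class ExecutionTime(
        val startMillis: Long? = null,       // e.g. when to start shooting
        val durationMillis: Long? = null,    // e.g. video length
        val photoCount: Int? = null          // e.g. number of photos to take
    )

    data class TargetTask(
        val type: TaskType,
        val time: ExecutionTime? = null,
        val cameraParams: Map<String, String> = emptyMap()   // e.g. "mode" to "night", "resolution" to "4000x3000"
    )

    // Example: a night-mode photographing task taking three photos.
    val nightShots = TargetTask(TaskType.PHOTO, ExecutionTime(photoCount = 3), mapOf("mode" to "night"))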
In a seventh possible implementation manner of the method according to the first aspect, sending the first task command to the selected camera to cause the selected camera to control the target task to be executed according to the first task command includes at least one of:
when the selected camera comprises a local camera of the first device, sending the first task command to the local camera of the first device to cause the local camera of the first device to perform a target task indicated by the first task command;
when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a primary mapping relationship, forwarding the first task command to the local camera of the second device through the first virtual camera of the first device corresponding to the selected camera, so that the local camera of the second device executes a target task indicated by the first task command;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera, and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relationship, determining, according to a first mapping relationship level between the selected camera and the corresponding first virtual camera, at least one intermediate device that completes forwarding of the first task command, and forwarding the first task command to the local camera of the second device sequentially through the virtual camera corresponding to the selected camera in each intermediate device, so that the local camera of the second device executes the target task indicated by the first task command.
With a seventh possible implementation, the control of the local camera of the device in the different local area network from the first device may be achieved by forwarding the first task command to the selected camera via the at least one intermediate device.
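The dispatch logic of the seventh implementation can be sketched in Kotlin as below; the convention that a level-n mapping needs n - 1 intermediate devices is an assumption consistent with the scenario of Fig. 4, and the function names are illustrative.

    // Decide the forwarding chain for a first task command from the mapping-relationship level.
    fun forwardingChain(mappingLevel: Int, intermediates: List<String>): List<String> =
        if (mappingLevel <= 1) emptyList()                 // local camera (level 0) or directly connected second device (level 1)
        else intermediates.take(mappingLevel - 1)          // multi-level mapping: forward hop by hop through each intermediate device

    fun main() {
        println(forwardingChain(1, emptyList()))           // device B1 in Fig. 4: []
        println(forwardingChain(2, listOf("C1")))          // device B2 in Fig. 4: [C1]
    }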
In an eighth possible implementation manner of the method according to the first aspect or the seventh possible implementation manner, the method further includes:
when target task data obtained by the selected camera executing the target task is received, image and/or video display is carried out according to the target task data,
wherein, receiving the target task data comprises at least one of the following modes:
when the selected camera comprises a local camera of the first device, directly receiving target task data sent by the local camera of the first device;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a primary mapping relationship, directly receiving, by using the first virtual camera corresponding to the selected camera, the target task data sent by the local camera of the second device;
and when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, receiving target task data transmitted by the local camera of the second device and forwarded by the at least one intermediate device by using the first virtual camera corresponding to the selected camera.
Through an eighth possible implementation manner, the first device may receive the target task data after the selected camera executes the target task, and perform image and/or video display according to the target task data.
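As a companion to the command-routing sketch above, the return path of the eighth implementation can be sketched as follows; the data type and callback shape are assumptions for illustration.

    // Target task data returns along the reverse of the command path; the first device then displays it.
    data class TargetTaskData(val payload: ByteArray, val isVideo: Boolean)

    fun onTargetTaskData(data: TargetTaskData, display: (ByteArray, Boolean) -> Unit) {
        // Whether the data arrived directly (zero- or one-level mapping) or via intermediate devices
        // (multi-level mapping), the corresponding first virtual camera hands it over in the same form.
        display(data.payload, data.isVideo)
    }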
In a ninth possible implementation form of the method according to the first aspect, the method further comprises:
when a second authorization control request from a fifth device is received, an authorization prompt is displayed according to a second request camera in the second authorization control request;
determining an authorized second authorized camera according to the detected authorization operation for the second request camera;
and generating a second authorization indication according to the first camera identification of the second authorization camera in the first device, and sending the second authorization indication to the fifth device, so that the fifth device creates a virtual camera for controlling the second authorization camera according to the second authorization indication.
Through the ninth possible implementation manner, the first device can directly control the local camera of the first device and the local camera of the second device mapped by the set first virtual camera according to the command sent by the user, and meanwhile, the first device can establish a control relationship with the fifth device and be controlled by the fifth device.
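A brief Kotlin sketch of the ninth implementation follows: the first device answers a fifth device's authorization request with the first camera identifiers of the cameras the user approved. All names are illustrative assumptions.

    data class SecondAuthorizationRequest(val requestedCameras: List<String>)
    data class SecondAuthorizationIndication(val authorizedFirstCameraIds: List<String>)

    fun handleSecondAuthorizationRequest(
        request: SecondAuthorizationRequest,
        promptUser: (List<String>) -> List<String>           // display the authorization prompt, return the approved cameras
    ): SecondAuthorizationIndication {
        val approved = promptUser(request.requestedCameras)  // detected authorization operation
        return SecondAuthorizationIndication(approved)       // returned so the fifth device can create its virtual cameras
    }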
In a second aspect, an embodiment of the present application provides a terminal device, where the terminal device may perform the camera control method based on distributed control according to the first aspect or one or more of multiple possible implementations of the first aspect.
In a third aspect, an embodiment of the present application provides a computer program product, which includes computer readable code or a non-transitory computer readable storage medium carrying computer readable code, and when the computer readable code runs in an electronic device, a processor in the electronic device executes a distributed control based camera control method according to the first aspect or one or more of the possible implementations of the first aspect.
These and other aspects of the present application will be more readily apparent from the following description of the embodiment(s).
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the application and, together with the description, serve to explain the principles of the application.
Fig. 1 shows a schematic structural diagram of a terminal device according to an embodiment of the present application.
Fig. 2 shows a block diagram of a software structure of a terminal device according to an embodiment of the present application.
Fig. 3 illustrates a flowchart of a camera control method based on distributed control according to an embodiment of the present application.
Fig. 4 illustrates an application scenario diagram of a camera control method based on distributed control according to an embodiment of the present application.
Fig. 5 illustrates a process diagram of determining a target task according to an embodiment of the present application.
Fig. 6 illustrates a flowchart of a camera control method based on distributed control according to an embodiment of the present application.
Fig. 7 shows a schematic diagram of selecting a camera for authorization according to an embodiment of the present application.
Fig. 8 is a schematic diagram illustrating an implementation process of a camera control method based on distributed control according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
In order to solve the above technical problem, the present application provides a camera control method based on distributed control, and the camera control method based on distributed control according to the embodiments of the present application can implement multi-level indirect control of a camera device, and is suitable for use scenarios of different camera devices.
The devices related to the present application (including the first device, the second device, the third device, the fourth device, and the like) may be devices having a wireless connection function, that is, devices that can connect to other devices through wireless connection modes such as Wi-Fi and Bluetooth; the devices of the present application may also have a function of communicating through a wired connection. A device of the present application may have a touch screen, a non-touch screen, or no screen at all. A touch-screen device can be controlled by clicking, sliding, and other operations performed on the display screen with a finger, a stylus, or the like; a non-touch-screen device can be connected with input devices such as a mouse, a keyboard, or a touch panel, and be controlled through the input devices; a device without a screen may be, for example, a Bluetooth speaker.
For example, a terminal device related to the present application may be a smart phone, a netbook, a tablet computer, a notebook computer, a wearable electronic device (such as a smart band or a smart watch), a TV, a virtual reality device, a speaker, an electronic ink reader, and the like.
Fig. 1 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. Taking the terminal device as a mobile phone as an example, fig. 1 shows a schematic structural diagram of a mobile phone 200.
The mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, a key 290, a motor 291, an indicator 292, a camera 293, a display 294, a SIM card interface 295, and the like. The sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity light sensor 280G, a fingerprint sensor 280H, and a touch sensor 280K (of course, the mobile phone 200 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, and the like, which are not shown in the figure).
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 200. In other embodiments of the present application, handset 200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. Wherein the controller can be the neural center and the command center of the cell phone 200. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 210, and thereby improves the efficiency of the system.
The processor 210 may execute the camera control method based on distributed control provided in the embodiment of the present application, so as to facilitate multi-level indirect control of the camera device, and is suitable for use scenarios of different camera devices. The processor 210 may include different devices, for example, when the CPU and the GPU are integrated, the CPU and the GPU may cooperate to execute the camera control method based on distributed control provided in the embodiment of the present application, for example, part of algorithms in the camera control method based on distributed control is executed by the CPU, and another part of algorithms is executed by the GPU, so as to obtain faster processing efficiency.
The display screen 294 is used to display images, video, and the like. The display screen 294 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the cell phone 200 may include 1 or N display screens 294, where N is a positive integer greater than 1. The display screen 294 may be used to display information input by or provided to the user as well as various graphical user interfaces (GUIs). For example, the display 294 may display a photograph, video, web page, or file, among others. As another example, the display 294 may display a graphical user interface. The graphical user interface comprises a status bar, a hidden navigation bar, a time and weather widget, and application icons, such as a browser icon. The status bar includes the name of the operator (e.g., China Mobile), the mobile network (e.g., 4G), the time, and the remaining power. The navigation bar includes a back key icon, a home key icon, and a forward key icon. Further, it is understood that in some embodiments, a Bluetooth icon, a Wi-Fi icon, an add-on icon, etc. may also be included in the status bar. It will also be appreciated that in other embodiments, a Dock bar may be included in the graphical user interface, a commonly used application icon may be included in the Dock bar, and so on. When the processor 210 detects a touch event of a finger (or a stylus, etc.) of a user with respect to an application icon, in response to the touch event, a user interface of an application corresponding to the application icon is opened and displayed on the display 294.
In the embodiment of the present application, the display screen 294 may be an integrated flexible display screen, or a spliced display screen formed by two rigid screens and a flexible screen located between the two rigid screens may be adopted.
After the processor 210 runs the camera control method based on distributed control provided in the embodiments of the present application, the terminal device, acting as a first device, may establish a communication connection through the antenna 1 and the antenna 2 with a second device that can be directly connected, and, according to the camera control method based on distributed control provided in the embodiments of the present application, the first device controls the local camera of that second device as well as the local camera of a second device that cannot directly establish a communication connection with the first device.
The camera 293 (a front camera or a rear camera, or one camera that can serve as both a front camera and a rear camera) is used for capturing still images or video. In general, the camera 293 may include a lens group and a photosensitive element (image sensor). The lens group includes a plurality of lenses (convex or concave) for collecting an optical signal reflected by an object to be photographed and transferring the collected optical signal to the image sensor. The image sensor generates an original image of the object to be photographed according to the optical signal.
The internal memory 221 may be used to store computer-executable program code, which includes instructions. The processor 210 executes various functional applications and data processing of the cellular phone 200 by executing instructions stored in the internal memory 221. The internal memory 221 may include a program storage area and a data storage area. The storage program area may store an operating system, codes of application programs (such as a camera application, a WeChat application, and the like), and the like. The data storage area can store data (such as images, videos and the like acquired by a camera application) and the like created in the use process of the mobile phone 200.
The internal memory 221 may further store one or more computer programs 1310 corresponding to the camera control method based on distributed control provided in the embodiments of the present application. The one or more computer programs 1310 are stored in the internal memory 221 and configured to be executed by the one or more processors 210. The one or more computer programs 1310 include instructions that can be used to perform the steps in the embodiments corresponding to fig. 3 and fig. 6, and may include one or more modules that perform these steps, so as to realize multi-level indirect control of camera devices, which is suitable for use scenarios of different camera devices.
In addition, the internal memory 221 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like.
Of course, the code of the camera control method based on distributed control provided in the embodiment of the present application may also be stored in the external memory. In this case, the processor 210 may execute the code of the camera control method based on the distributed control stored in the external memory through the external memory interface 220.
The function of the sensor module 280 is described below.
The gyro sensor 280A may be used to determine the motion attitude of the cellular phone 200. In some embodiments, the angular velocity of the cell phone 200 about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 280A. I.e., the gyro sensor 280A may be used to detect the current state of motion of the handset 200, such as shaking or standing still.
When the display screen in the embodiment of the present application is a foldable screen, the gyro sensor 280A may be used to detect a folding or unfolding operation acting on the display screen 294. The gyro sensor 280A may report the detected folding operation or unfolding operation as an event to the processor 210 to determine the folded state or unfolded state of the display screen 294.
The acceleration sensor 280B can detect the magnitude of acceleration of the mobile phone 200 in various directions (typically three axes); that is, the acceleration sensor 280B may be used to detect the current motion state of the mobile phone 200, such as shaking or stationary. When the display screen in the embodiment of the present application is a foldable screen, the acceleration sensor 280B may be used to detect a folding or unfolding operation acting on the display screen 294. The acceleration sensor 280B may report the detected folding operation or unfolding operation as an event to the processor 210 to determine the folded state or unfolded state of the display screen 294.
The proximity light sensor 280G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The mobile phone emits infrared light outwards through the light emitting diode. The handset uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the handset. When insufficient reflected light is detected, the handset can determine that there are no objects near the handset. When the display screen in the embodiment of the present application is a foldable display screen, the proximity optical sensor 280G may be disposed on a first screen of the foldable display screen 294, and the proximity optical sensor 280G may detect a folding angle or an unfolding angle of the first screen and the second screen according to an optical path difference of the infrared signal.
The gyro sensor 280A (or the acceleration sensor 280B) may transmit the detected motion state information (such as an angular velocity) to the processor 210. The processor 210 determines whether the mobile phone 200 is currently in the hand-held state or the tripod state (for example, when the angular velocity is not 0, it indicates that the mobile phone 200 is in the hand-held state) based on the motion state information.
The fingerprint sensor 280H is used to collect a fingerprint. The mobile phone 200 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The touch sensor 280K is also referred to as a "touch panel". The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, which is also called a "touch screen". The touch sensor 280K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operations may be provided through the display screen 294. In other embodiments, the touch sensor 280K can be disposed on the surface of the mobile phone 200 at a different location than the display 294.
Illustratively, the display 294 of the cell phone 200 displays a home interface that includes icons of a plurality of applications (e.g., a camera application, a WeChat application, etc.). The user clicks the icon of the camera application in the home interface through the touch sensor 280K, which triggers the processor 210 to start the camera application and open the camera 293. The display screen 294 then displays an interface of the camera application, such as a viewfinder interface.
The wireless communication function of the mobile phone 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 251 can provide a solution including 2G/3G/4G/5G wireless communication applied to the handset 200. The mobile communication module 251 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 251 can receive electromagnetic waves from the antenna 1, and filter, amplify, etc. the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 251 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be disposed in the same device as at least some of the modules of the processor 210. In this embodiment, the mobile communication module 251 may be further configured to perform information interaction, such as a first task command and target task data, with other terminal devices.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 270A, the receiver 270B, etc.) or displays images or video through the display screen 294. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 251 or other functional modules, independent of the processor 210.
The wireless communication module 252 may provide solutions for wireless communication applied to the mobile phone 200, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 252 may be one or more devices that integrate at least one communication processing module. The wireless communication module 252 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signal, and transmits the processed signal to the processor 210. The wireless communication module 252 may also receive a signal to be transmitted from the processor 210, perform frequency modulation on the signal, amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves. In this embodiment of the present application, the wireless communication module 252 is configured to transmit data with other terminal devices under the control of the processor 210; for example, when the processor 210 executes the camera control method based on distributed control provided in this embodiment of the present application, the processor in the first device may control the wireless communication module 252 to send the first task command to the local camera of a second device that establishes a direct communication connection with the first device, and to send the first task command, through the mapping of the first virtual camera, to the local camera of a second device that cannot directly establish a communication connection with the first device, so as to implement multi-level indirect control of camera devices, which is suitable for use scenarios of different camera devices.
In addition, the mobile phone 200 can implement an audio function through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, and the application processor. Such as music playing, recording, etc. The cellular phone 200 may receive a key 290 input, generating a key signal input related to user settings and function control of the cellular phone 200. The cell phone 200 can generate a vibration alert (e.g., an incoming call vibration alert) using the motor 291. The indicator 292 in the mobile phone 200 may be an indicator light, and may be used to indicate a charging status, a power change, or an indication message, a missed call, a notification, or the like. The SIM card interface 295 in the handset 200 is used to connect a SIM card. The SIM card can be attached to and detached from the mobile phone 200 by being inserted into the SIM card interface 295 or being pulled out from the SIM card interface 295.
It should be understood that, in practical applications, the mobile phone 200 may include more or fewer components than those shown in fig. 1, and the embodiment of the present application is not limited thereto. The illustrated handset 200 is merely an example, and the handset 200 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The software system of the terminal device may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of a terminal device.
Fig. 2 shows a block diagram of a software structure of a terminal device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include phone, camera, gallery, calendar, talk, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephone manager is used for providing a communication function of the terminal equipment. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scrollbar text in a status bar at the top of the system, such as a notification of a running application in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the terminal device vibrates, an indicator light flickers, and the like.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), a 2D graphics engine (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide a fusion of the 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The embodiment of the application provides a camera control method based on distributed control, which can realize control of local cameras in a plurality of second devices through one first device, and the second devices can be directly connected with the first devices through the same local area network or indirectly connected with the first devices in different local area networks by means of intermediate devices, so that distributed and hierarchical control of the local cameras of different second devices is realized, and camera control requirements of different application scenes can be met.
Fig. 3 illustrates a flowchart of a camera control method based on distributed control according to an embodiment of the present application. Fig. 4 illustrates an application scenario diagram of a camera control method based on distributed control according to an embodiment of the present application. As shown in fig. 3, the method is applied to the first device, and includes steps S11 to S14.
In step S11, candidate cameras that can be controlled by the first device are displayed, where the candidate cameras include a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device. Wherein the first device comprises at least one first virtual camera, each first virtual camera is used for realizing the control of the local camera of the mapped second device, and at least one level of mapping relation exists between each first virtual camera and the local camera of the mapped second device. When the first virtual camera and the local camera of the mapped second device are in a multi-level mapping relationship, the second device and the first device are in different local area networks.
In this embodiment, the mapping relationship between the first virtual camera and the mapped local camera of the second device may indicate the number of mappings through which the first virtual camera, under the control of the first device, controls the mapped local camera of the second device; a command sent to the camera, and the data obtained by the camera executing the task, complete their transmission only after being forwarded a corresponding number of times. The mapping relationship between the first virtual camera and the mapped local camera of the second device may also be regarded as the mapping relationship between the first device and the local camera of the second device mapped by the first virtual camera. The lower the mapping relationship level, the smaller the number of mappings between the first virtual camera and the mapped local camera of the second device, and the fewer times transmission and forwarding are required. The mapping relationship levels include a zero-level mapping relationship, a one-level mapping relationship, a two-level mapping relationship, ..., and an N-level mapping relationship; the one-level mapping relationship is lower than the two-level mapping relationship, and the two-level mapping relationship and mapping relationships above the two-level one are multi-level mapping relationships.
To further explain the mapping relationship, the following description is made with reference to the application scenario example given in fig. 4. As shown in fig. 4, the first device A includes the local camera 100 and at least one first virtual camera 200. The mapping relationship between the local camera 100 of the first device A and the first device A is the zero-level mapping relationship. The second device mapped by each first virtual camera 200 may be a device capable of directly connecting with the first device through a network; as shown in fig. 4, the local camera in the device B1 and the corresponding first virtual camera 201 are in a first-level (primary) mapping relationship, and the device B1 and the first device A may be in the same local area network. The second device mapped by a first virtual camera 200 may also be a device that cannot be directly connected to the first device through a network and communicates with the first device A indirectly through other devices, such as the devices B2 and B3 shown in fig. 4. The device B2 is mapped to the first virtual camera 202, and the first device A implements control of the local camera of the device B2 through the first virtual camera 202 along a communication path that passes through an intermediate device (for example, first virtual camera 202 → second virtual camera 301 of the device C1 → local camera of the device B2), including sending a first task command and receiving target task data; the device B2 and the first virtual camera 202 are in a two-level mapping relationship, the device B2 and the first device A are in different local area networks, the device B2 and the device C1 may be in the same local area network, and the device C1 and the first device A may be in the same local area network. The device B3 is mapped to the first virtual camera 203, and the first device A implements control of the local camera of the device B3 through the first virtual camera 203 along a communication path that passes through several intermediate devices (for example, first virtual camera 203 → second virtual camera 302 of the device C2 → third virtual camera 400 of the device C3 → ... → local camera of the device B3), including sending a first task command and receiving target task data; the device B3 and the first virtual camera 203 are in an x-level mapping relationship (x being the number of virtual cameras on the path), the device B3 and the first device A are in different local area networks, the device C2 and the device C3 may be in the same local area network, and the device C2 and the first device A may be in the same local area network.
The device B1, the device B2, and the device B3 may be devices of the same or different types; for example, the device B1 is a mobile phone, the device B2 is a monitoring camera device, and the device B3 is an unmanned aerial vehicle with a camera. The local camera of the device C1, the local camera of the device C2, and the local camera of the device C3 may also be mapped into the first device A after being authorized, so as to form virtual cameras; for simplicity of illustration, these mapping relationships are not shown in fig. 4. A person skilled in the art may implement them according to the method provided in the present application, by analogy with the implementation example in which the device B1, the device B2, and the device B3 are mapped into the first device A, and details are not described here.
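To make the topology of fig. 4 easier to follow, the sketch below is a purely explanatory data structure (device names and the 'route' lists are taken from the example above; the field names are assumptions) recording, for each candidate camera of the first device A, which device's local camera it maps and which intermediate devices sit on the path:

```python
# Illustrative model of the Fig. 4 scenario; purely explanatory, not claim language.
candidate_cameras = {
    "local camera 100":         {"maps": "first device A", "level": 0, "route": []},
    "first virtual camera 201": {"maps": "device B1", "level": 1, "route": []},            # same LAN as A
    "first virtual camera 202": {"maps": "device B2", "level": 2, "route": ["device C1"]},  # different LAN, via C1
    "first virtual camera 203": {"maps": "device B3", "level": "x", "route": ["device C2", "device C3", "..."]},
}

for name, info in candidate_cameras.items():
    print(f"{name}: controls the local camera of {info['maps']}, "
          f"mapping relationship level {info['level']}, intermediate devices {info['route'] or 'none'}")
```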
In step S12, according to the detected task creating operation for the candidate cameras, a selected camera and a target task that the selected camera needs to execute are determined from the candidate cameras.
In one possible implementation, step S12 may include: determining a selected camera according to the detected selection operation aiming at the camera to be selected; and determining a target task required to be executed by the selected camera according to the task setting operation aiming at the selected camera. The task parameters of the target task may include at least one of a task type, execution time information, and camera parameter settings when the selected camera executes the target task, where the task type includes at least one of: a photographing task, a shooting task, and an image preview task. In this way, the target task can be accurately set so that the selected camera performs the target task.
The execution time information and the camera parameters need to be set according to the task type. For the photographing task, the execution time information may include the number of photos to be taken and the time at which they are to be taken, and the camera parameters may include the photographing pixels, the exposure duration, the photographing mode (such as a day mode, a night mode, a panorama mode, and the like), and other parameters for configuring the camera to take photos. For the shooting task, the execution time information may include the start and stop time of shooting, the shooting duration, and the like, and the camera parameters may include the shooting pixels, the shooting mode (such as a day mode, a night mode, and the like), and other parameters for configuring the camera to shoot video. For the image preview task, the execution time information may include the shooting time of the photo to be previewed (a photo framed by opening the camera viewfinder in real time or a photo previously taken by the camera) or the shooting time and duration of the video to be previewed (a video shot by opening the camera in real time or a video previously shot by the camera), and the camera parameters may include the shooting pixels, the exposure duration, the shooting mode (such as a day mode, a night mode, a panorama mode, and the like), and other parameters for configuring the camera.
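A minimal sketch of such task parameters is given below, assuming hypothetical field names (TargetTask, execution_time, camera_params); the actual parameter format is not prescribed by this application:

```python
# Hypothetical container for the task parameters of a target task.
from dataclasses import dataclass, field

@dataclass
class TargetTask:
    task_type: str                  # "photographing", "shooting" or "image preview"
    execution_time: dict = field(default_factory=dict)   # e.g. number/time of photos, start-stop time, duration
    camera_params: dict = field(default_factory=dict)    # e.g. pixels, exposure duration, day/night/panorama mode

# Example: a 30-second shooting (video) task in night mode
video_task = TargetTask(
    task_type="shooting",
    execution_time={"start": "21:00:00", "duration_s": 30},
    camera_params={"pixels": "1920x1080", "mode": "night"},
)
```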
In this embodiment, when the first device is provided with a display screen, the cameras to be selected may be displayed on the display screen as pictures and/or text. The first device may also prompt the user to choose among the cameras to be selected by voice, using a loudspeaker or the like provided in the first device. Fig. 5 illustrates a process diagram for determining a target task according to an embodiment of the application. As shown in fig. 4 and 5, the first device is the first device A shown in fig. 4. The cameras to be selected, namely the "local camera 100", the "local camera of the device B1 mapped by the first virtual camera 201", the "local camera of the device B2 mapped by the first virtual camera 202", and the "local camera of the device B3 mapped by the first virtual camera 203", are displayed in the interface T1, and the selected camera chosen by the user, such as the "local camera of the device B1 mapped by the first virtual camera 201", is determined according to detected trigger operations such as clicks or slides on the selection control K. The interface T1 on the display screen may then be switched to the interface T2, in which the task types "photographing, shooting, image preview" are displayed, and the task type selected by the user is determined to be the "shooting task" according to the detected trigger operations such as clicks or slides on the selection control K in the interface T2. The interface T2 may then be switched to the interface T3, in which a camera parameter setting prompt corresponding to the task type "shooting task" is displayed; as shown in fig. 5, the "duration" item and the blank box after it prompt the user to determine the shooting duration by direct input or pull-down selection, and the "camera parameters" item and the blank boxes after it prompt the user to determine the camera parameters for shooting by direct input or pull-down selection. Those skilled in the art may set the implementation of step S11 and step S12 according to actual needs, and the present application is not limited to this.
In step S13, a first task command is generated according to the first camera identification of the selected camera in the first device and the target task.
In this embodiment, the generated first task command includes a target task that needs to be executed by the selected camera and a first camera identifier. The first camera identifier includes a first mapping relationship level and a first identity identifier. The first mapping relationship level may be used to represent the mapping relationship between the selected camera and the corresponding first virtual camera, or the mapping relationship between the selected camera and the first device. The first identity identifier may be used to distinguish the selected camera from the existing cameras in the first device that correspond to the same first mapping relationship level. The mapping relationship level and the identity identifier may be set by those skilled in the art according to actual needs, and the present application is not limited thereto. In this way, the different cameras controlled by the first device can be distinguished through the first camera identifier, which also ensures accurate delivery of the first task command.
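For illustration only, the sketch below assembles a first task command from a first camera identifier and a target task; the one-digit-level plus three-digit-identity layout is borrowed from the numerical example given later in this description (e.g. "1002") and is an assumption rather than a required format:

```python
# Hypothetical sketch of step S13: building a first task command.
def make_first_camera_id(mapping_level: int, identity: int) -> str:
    """Combine the first mapping relationship level with the first identity identifier."""
    return f"{mapping_level}{identity:03d}"          # e.g. level 1, identity 2 -> "1002"

def make_first_task_command(first_camera_id: str, target_task: dict) -> dict:
    """The first task command carries the first camera identifier and the target task."""
    return {"camera_id": first_camera_id, "task": target_task}

cmd = make_first_task_command(make_first_camera_id(1, 2),
                              {"task_type": "shooting", "duration_s": 30})
print(cmd)
```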
In step S14, the first task command is sent to the selected camera, so that the selected camera controls the target task to be executed according to the first task command.
In a possible implementation manner, the step S14 may include at least one of the following operations one, two, and three:
operation one, when the selected camera includes the local camera of the first device, the first task command is sent to the local camera of the first device, so that the local camera of the first device executes the target task indicated by the first task command. For example, as shown in fig. 4, the first device A may directly transmit the first task command to its own local camera 100.
Operation two, when the selected camera includes a local camera of the second device to which the first virtual camera is mapped, and the selected camera and the first virtual camera corresponding to the selected camera are in a primary mapping relationship, forwarding the first task command to the local camera of the second device through the first virtual camera corresponding to the selected camera in the first device, so that the local camera of the second device executes the target task indicated by the first task command. For example, as shown in fig. 4, the first device A may control the first virtual camera 201 to transmit the first task command to the "local camera of the device B1".
Operation three, when the selected camera includes a local camera of a second device mapped by the first virtual camera, and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relationship, at least one intermediate device for forwarding the first task command is determined according to the first mapping relationship level between the selected camera and the corresponding first virtual camera, and the first task command is forwarded to the local camera of the second device sequentially through the virtual camera corresponding to the selected camera in each intermediate device, so that the local camera of the second device executes the target task indicated by the first task command. During forwarding, each intermediate device may forward the first task command, as well as the target task data and command results described below, directly in a transparent transmission manner, or may encrypt them in a preset encryption manner before forwarding. For example, as shown in fig. 4, the first device A may control the first virtual camera 202 to send the first task command to the "second virtual camera 301 of the device C1", and the second virtual camera 301 of the device C1 forwards it further to the "local camera of the device B2"; in this process the intermediate device is the device C1. Likewise, the first device A may control the first virtual camera 203 to send the first task command to the "second virtual camera 302 of the device C2", the second virtual camera 302 of the device C2 forwards it to the "third virtual camera 400 of the device C3", and so on, until the first task command reaches the local camera of the device B3; in this process the intermediate devices are the device C2, the device C3, and so on. In this way, by forwarding the first task command to the selected camera via at least one intermediate device, control of the local camera of a device in a different local area network from the first device can be achieved.
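The three operations above can be pictured as one dispatch routine that either executes the command locally or forwards it hop by hop along the chain of virtual cameras. The sketch below is a simplified, assumed model (the class names Camera and VirtualCamera are illustrative) of the forwarding chain A → C1 → B2 from fig. 4:

```python
# Illustrative forwarding chain for operations one to three; not claim language.
class Camera:
    def __init__(self, name): self.name = name
    def handle(self, cmd):    # the mapped local camera finally executes the target task
        print(f"{self.name} executes task {cmd['task']}")

class VirtualCamera:
    """Forwards a first task command one hop closer to the mapped local camera."""
    def __init__(self, name, next_hop): self.name, self.next_hop = name, next_hop
    def handle(self, cmd):
        print(f"{self.name} forwards the first task command")
        self.next_hop.handle(cmd)   # pass-through (could also encrypt before forwarding)

# Usage: first device A -> first virtual camera 202 -> second virtual camera 301 (device C1) -> local camera of B2
local_b2 = Camera("local camera of device B2")
vc_301 = VirtualCamera("second virtual camera 301 (device C1)", local_b2)
vc_202 = VirtualCamera("first virtual camera 202 (device A)", vc_301)
vc_202.handle({"camera_id": "2001", "task": "shooting"})
```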
In a possible implementation manner, when the selected camera needs to execute, in the same time period, conflicting target tasks issued by different devices, the target task of the device with the higher priority may be executed according to the priorities of the devices that issued the target tasks, or one of the target tasks may be selected at random for execution. Alternatively, to avoid the situation in which the selected camera needs to execute conflicting target tasks from different devices in the same time period, the device where the selected camera is located may, when granting authorization, allow each camera it can control to be controlled through a virtual camera by only one device.
Fig. 6 illustrates a flowchart of a camera control method based on distributed control according to an embodiment of the present application. In one possible implementation, as shown in fig. 6, the method may further include a "virtual camera creation step" comprising steps S15 to S18. The "virtual camera creation step" may be performed before step S11 (as shown in fig. 6) or after step S11 (not shown in the figure). When the third device described below is the same device as the second device described above, the "virtual camera creation step" may be performed before step S11. When the third device described below is a different device from the second device described above, the "virtual camera creation step" may be performed before, after, or simultaneously with step S11, and the present application does not limit this. A first virtual camera capable of controlling the authorized camera is created through the "virtual camera creation step".
In step S15, when a device connection request is detected, a third device that can be connected to the first device and satisfies a connection condition is searched for, where the connection condition may include: the third device is provided with a local camera and/or at least one second virtual camera is created in the third device. Each second virtual camera is used for controlling the local camera of the fourth device mapped by that second virtual camera, and the second virtual camera and the local camera of the mapped fourth device have at least a one-level mapping relationship. Setting the connection condition ensures that the determined third device has a camera that the first device can control by creating a virtual camera. The third device may be any device such as a mobile phone or an unmanned aerial vehicle, and the type of the third device is not limited in the present application.
For example, as shown in fig. 4, it is assumed that the third devices that the first device A determines after searching, and that satisfy the connection condition, are the device B1 and the device C1. The first device A may then access these third devices to determine which cameras each third device is capable of controlling. For example, it may be determined through this access that the device B1 can control only its local camera, while the device C1 can control its local camera and the local camera of the device B2 mapped by the second virtual camera 301.
In step S16, a first authorization control request is sent to the third device, and a first authorization indication returned by the third device in response to the first authorization control request is received.
In one possible implementation, step S16 may include: selecting a first request camera according to a request operation for a local camera of the third device which can be controlled by the third device and/or a local camera of a fourth device mapped by the second virtual camera; the first authorization control request is generated according to the first request camera, and is sent to the third device, so that the third device generates the first authorization indication according to the detected authorization operation aiming at the first authorization control request. Therefore, the first request camera can be selected according to the needs of the user, and the authorization control requirements of different users are met.
In this implementation, information on the third devices to which the first device can be connected and the cameras that each third device can control may be displayed on the display screen of the first device, with reference to the display manner of the interface T1 in fig. 5. The third device to be connected and the first request camera controllable by that third device are then determined according to detected operations such as clicks (that is, the request operation), the first authorization control request is generated accordingly, and the first authorization control request is sent to the corresponding third device. After receiving the first authorization control request, the third device generates a first authorization indication according to an authorization operation of its user (this process may refer to the implementation of the first device responding to the second authorization control request described below). Alternatively, the information on the third devices and the cameras they control may be played to the user by voice, and the third device to be connected and the first request camera controllable by it may be determined according to the voice uttered by the user in response, that is, according to the voice recognition result. The manner of generating the first authorization control request according to the request operation may be set by a person skilled in the art according to actual needs, and the present application is not limited thereto.
In a possible implementation manner, after the third devices are determined, a first authorization control request for every camera controlled by each detected third device may be sent directly to that third device, without generating the first authorization control request from a request operation of the user. This speeds up the creation of the virtual camera and simplifies the operations required of the user during the creation process.
In step S17, after an authorized first authorized camera is determined according to the first authorization indication, a second camera identifier of the first authorized camera in the third device is obtained, where the first authorized camera includes a local camera of the third device and/or a local camera of a fourth device mapped by the second virtual camera.
In this implementation, the second camera identifier of the first authorized camera in the third device also includes a mapping relationship level and an identity identifier, and the meaning represented by the second camera identifier is described in the above description of the first mapping relationship level and the first identity identifier, which is not described herein again.
In step S18, a first camera identifier of the first authorized camera in the first device is determined according to the second camera identifier, and a first virtual camera for controlling the first authorized camera is created according to the first camera identifier of the first authorized camera.
In one possible implementation, step S18 may include: when the first authorized camera comprises a local camera of a fourth device mapped by a second virtual camera, determining a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera according to a mapping relation level which indicates a mapping relation between the first authorized camera and the second virtual camera in the second camera identification; determining a first identity identifier of the first authorized camera in the first device according to an identity identifier in the second camera identifier and an identity identifier of an existing camera in the first device corresponding to the first mapping relation level; and determining a first camera identifier of the first authorized camera in the first device according to the first mapping relation level and the first identity identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identifier. In this way, after the first virtual camera is created, the first device may directly determine the mapping relationship level between the first device and the first authorized camera according to the first camera identifier and the first mapping relationship level of the first virtual camera, so as to facilitate command transmission and data reception.
In a possible implementation manner, determining, according to an identity identifier in the second camera identifier and an identity identifier of an existing camera in the first device corresponding to the first mapping relationship level, a first identity identifier of the first authorized camera in the first device may include: when the identity identifier in the second camera identifier exists in the existing identity identifiers of the cameras corresponding to the first mapping relation level in the first device, creating a first identity identifier of the first authorized camera in the first device according to a preset identity identifier creation rule; or when the identity in the second camera identifier is different from the identity of the existing camera in the first device corresponding to the first mapping relation level, determining the identity in the second camera identifier as the first identity of the first authorized camera in the first device. In this way, the uniqueness of the first identity of the first authorized camera among the identities of all cameras corresponding to the first mapping level that can be controlled by the first device can be ensured, so as to facilitate the differentiation of cameras of the same mapping level that can be controlled by the first device.
In a possible implementation manner, step S18 may further include:
determining the first-level mapping relationship as the first mapping relationship level between the first authorized camera and the first virtual camera that needs to be created to control the first authorized camera, when the first authorized camera includes a local camera of the third device.
In this implementation, the first mapping relationship level may be obtained by increasing, by one level, the mapping relationship level in the second camera identifier that indicates the mapping relationship between the first authorized camera and the second virtual camera. Different mapping relationship levels may be distinguished by different characters such as numbers or letters, and the identity identifiers of different cameras at the same mapping relationship level may likewise be distinguished by different characters such as numbers or letters; the identity identifier that a camera already has in the device where it is located may also be used for this purpose.
For example, with reference to the first device A in fig. 4, assume that the identity identifier of the local camera in the device B1 is "2", the identity identifier of the local camera in the device C1 is "2", the identity identifier of the local camera in the device B2 is "1", and the identity identifier of the local camera in the first device A is "3". The zero-level, first-level, and second-level mapping relationships ... are represented by 0000, 1000, 2000, ..., respectively. Then:
in device B1, the camera identification of its local camera may be 0002.
In device B2, the camera identification of its local camera may be 0001.
In the device C1, the camera identification of the local camera thereof may be 0002, and the camera identification of the local camera of the device B2 mapped by the second virtual camera 301 may be 1001.
In the first device A, for the cameras whose identity identifiers do not conflict, the first camera identifier of its own local camera is 0003, the first camera identifier of the local camera of the device C1 may be 1002, and the first camera identifier of the local camera of the device B2 may be 2001. Since the identity identifier "2" of the local camera of the device B1 is already occupied at the first-level mapping relationship by the local camera of the device C1, the identity identifier of the local camera of the device B1 may be adjusted, for example from "2" to "4", so that the first camera identifier of the local camera of the device B1 in the first device A is 1004.
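A small sketch of this identifier derivation, using the example values above and an assumed conflict-resolution rule (the application only requires "a preset identity identifier creation rule"; the rule chosen here is illustrative), could look like this:

```python
# Illustrative derivation of first camera identifiers in the first device A.
def derive_first_camera_id(level: int, identity: int, used_ids: set) -> str:
    """Keep the original identity if it is free at this mapping relationship level;
    otherwise pick a new one according to a preset rule (here: the next free number)."""
    while f"{level}{identity:03d}" in used_ids:
        identity += 1
    first_camera_id = f"{level}{identity:03d}"
    used_ids.add(first_camera_id)
    return first_camera_id

used = {"0003"}                                    # local camera of the first device A
print(derive_first_camera_id(1, 2, used))          # local camera of device C1 -> "1002"
print(derive_first_camera_id(2, 1, used))          # local camera of device B2 -> "2001"
print(derive_first_camera_id(1, 2, used))          # local camera of device B1 -> "1002" taken, adjusted
```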
In one possible implementation, the method may further include: receiving a command result returned by the selected camera after it receives the first task command; according to the command result, information such as whether the selected camera has successfully executed the target task and whether the first task command has been received may be determined. Receiving the command result may include at least one of the following:
when the selected camera comprises a local camera of the first device, directly receiving a command result from the local camera of the first device;
when the selected camera comprises a local camera of second equipment mapped by a first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a primary mapping relation, directly receiving a command result sent by the local camera of the second equipment by using the first virtual camera corresponding to the selected camera;
and when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, the first virtual camera corresponding to the selected camera is used for receiving a command result which is transmitted by the local camera of the second device and forwarded by the at least one intermediate device in sequence.
In one possible implementation, the method may further include: and when target task data obtained by the selected camera executing the target task is received, displaying images and/or videos according to the target task data. The first device may store target task data in addition to the presentation of the image and/or video.
Wherein, receiving the target task data comprises at least one of the following modes:
when the selected camera comprises a local camera of the first device, directly receiving target task data sent by the local camera of the first device;
when the selected camera comprises a local camera of second equipment mapped by a first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a primary mapping relation, directly receiving target task data sent by the local camera of the second equipment by using the first virtual camera corresponding to the selected camera;
and when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, receiving target task data transmitted by the local camera of the second device and forwarded by the at least one intermediate device by using the first virtual camera corresponding to the selected camera.
In one possible implementation, the method may further include:
when a second authorization control request from a fifth device is received, an authorization prompt is displayed according to a second request camera in the second authorization control request;
determining an authorized second authorized camera according to the detected authorized operation aiming at the second request camera;
and generating a second authorization indication according to the first camera identification of the second authorization camera in the first device, and sending the second authorization indication to the fifth device, so that the fifth device creates a virtual camera for controlling the second authorization camera according to the second authorization indication.
Therefore, the first device can directly control the local camera of the first device and the local camera of the second device mapped by the set first virtual camera according to the command sent by the user, and meanwhile, the first device can establish a control relation with the fifth device and be controlled by the fifth device. Wherein the fifth device may or may not be provided with a local camera itself.
For example, fig. 7 shows a schematic diagram of selectively authorizing cameras according to an embodiment of the present application. As shown in fig. 7, assume that the second request cameras in the second authorization control request are the local camera 100 of the first device A, the local camera of the device B1 mapped by the first virtual camera 201, the local camera of the device B2 mapped by the first virtual camera 202, and the local camera of the device B3 mapped by the first virtual camera 203 in fig. 4. An interface T4 may be displayed on the display screen of the first device A, and the second authorized camera authorized by the user, such as the local camera of the device B1 mapped by the first virtual camera 201, is determined according to detected trigger operations such as clicks or slides on the selection control K of each second request camera. The first device A may then generate a second authorization indication according to the first camera identifier, in the first device A, of the local camera of the device B1 mapped by the first virtual camera 201, and send the second authorization indication to the fifth device D, so that the fifth device D creates a virtual camera 401 for controlling the second authorized camera according to the second authorization indication (refer to the implementation of step S17 and step S18 above).
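As a non-authoritative sketch of this authorization response (the function and field names below are assumptions), the first device could assemble the second authorization indication from the cameras the user ticks in interface T4 as follows:

```python
# Hypothetical sketch: building a second authorization indication on the first device.
def build_second_authorization_indication(requested, authorized_by_user, first_camera_ids):
    """requested: second request cameras named in the second authorization control request;
    authorized_by_user: the subset the user ticks in the authorization prompt (interface T4);
    first_camera_ids: first camera identifiers of those cameras in the first device."""
    granted = [cam for cam in requested if cam in authorized_by_user]
    return {"authorized_cameras": {cam: first_camera_ids[cam] for cam in granted}}

indication = build_second_authorization_indication(
    requested=["local camera 100", "B1 via 201", "B2 via 202", "B3 via 203"],
    authorized_by_user={"B1 via 201"},
    first_camera_ids={"local camera 100": "0003", "B1 via 201": "1004",
                      "B2 via 202": "2001", "B3 via 203": "3001"},   # "3001" is an illustrative value
)
print(indication)   # the fifth device D uses this to create its virtual camera 401
```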
Fig. 8 is a schematic diagram illustrating an implementation process of a camera control method based on distributed control according to an embodiment of the present application. As shown in fig. 8, a process is shown in which a first device a controls a local camera of a second device B2 through an intermediate device C1, wherein,
in the first task command issuing process:
a "camera application" in an application layer of the first device a detects a task setting operation for a preview task, a photographing task, and a photographing task issued by a user, and then a "camera service" module in a service layer of the first device a generates a photographing command 1 (hereinafter and in the drawings, also referred to as command 1), a photographing command 2 (hereinafter and in the drawings, also referred to as command 2), and a preview command 3 (hereinafter and in the drawings, also referred to as command 3) according to the detected task device operation, where each command carries task parameters required for executing the task (see above for a process of determining the command and the task parameters, which is not described herein again). The generation time of the camera shooting command 1, the shooting command 2, and the preview command 3, or the time when the user issues the three commands, are the same or different, and fig. 8 shows the three commands for the issuing process of the target tasks of three different task types, but actually the issuing times of the three commands do not affect each other. A virtual camera device (i.e., the second virtual camera 301 in fig. 4) in the virtual camera HAL in the HAL Layer (Hardware Abstraction Layer) of the first device a sends "command 1, command 2, and/or command 3" to a "distributed device virtualization platform service" in the service Layer of the first device a, and the "distributed device virtualization platform service" of the first device a sends "command 1, command 2, and/or command 3" to the intermediate device C1 in a transparent manner through its own transparent transmission pipeline.
The 'multi-device virtualization module' in the application layer of the intermediate device C1 receives 'command 1, command 2, and/or command 3' and transmits them to the 'distributed device virtualization platform service' of its own service layer in a transparent transmission manner through the transparent transmission pipeline. The 'distributed device virtualization platform service' in turn sends 'command 1, command 2, and/or command 3' to the second device B2 through the transparent transmission pipeline in a transparent transmission manner.
The "multi-device virtualization module" of the second device B2 application layer receives "command 1, command 2, and/or command 3" through the pass-through pipe, and then sends "command 1, command 2, and/or command 3" to the "camera service" of the service layer. The "camera service" issues a specific target task to be executed to its camera device (i.e. the local camera of B2 shown in fig. 4) according to "command 1, command 2 and/or command 3", so that the camera device can control the sensor of the camera hardware to perform specific operations such as shooting, view-finding and the like, and process data collected by the sensor through an Image Signal Processing ISP (Image Signal Processing) of the camera hardware, thereby generating target task data. The target task data may include data corresponding to command 1, data corresponding to command 2, and/or data corresponding to command 3.
In the target task data uploading process:
the phase 'camera device' of the second device B2 sends target task data of different task types to corresponding buffers of its 'camera service', and then the 'multi-device virtualization module' of the second device B2 recalls the target task data and sends the target task data to the 'distributed device virtualization platform service' of the intermediate device in a transparent transmission manner through a transparent transmission pipeline.
After the distributed device virtualization platform service of the intermediate device C1 receives the target task data through the transparent transmission pipeline, the target task data is sent to the first device a through the transparent transmission pipeline of the multi-device virtualization module of the intermediate device C1 in a transparent transmission manner.
The distributed device virtualization platform service of the first device A receives target task data through a transparent transmission pipeline of the distributed device virtualization platform service, processes the target task data, determines which task type the target task data corresponds to, and sends the target task data corresponding to different task types to corresponding buffer areas of a camera service in a service layer of the first device A, so that a camera application of the first device A can perform preview display, photo display and/or video display when the target task data is determined to be stored in the corresponding buffer areas and/or a display instruction of a user is received.
In this way, control of the local camera of the second device B2, which is not in the same local area network as the first device A, is achieved by the first device A.
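The layered relay of fig. 8 can be summarized as a chain of pass-through pipes in which the command travels downstream (A to C1 to B2) and the target task data comes back up the same chain. The sketch below is a deliberately simplified, assumed model of that flow; the class name PassThroughNode and the print statements are illustrative only:

```python
# Illustrative model of the Fig. 8 relay through transparent transmission pipelines.
class PassThroughNode:
    def __init__(self, name, downstream=None):
        self.name, self.downstream = name, downstream
    def send_down(self, command: str) -> str:
        print(f"{self.name}: forwarding '{command}' downstream")
        if self.downstream is not None:
            return self.downstream.send_down(command)      # command 1/2/3 travels A -> C1 -> B2
        return f"target task data for '{command}'"          # B2 executes and produces the data
    # The return value propagates back up the same chain, mirroring the upload
    # of target task data from device B2 through device C1 to the first device A.

device_b2 = PassThroughNode("second device B2 (camera service + camera device)")
device_c1 = PassThroughNode("intermediate device C1 (pass-through pipeline)", downstream=device_b2)
device_a  = PassThroughNode("first device A (virtual camera HAL)", downstream=device_c1)
data = device_a.send_down("command 1, command 2 and/or command 3")
print(data)
```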
An embodiment of the present application provides a terminal device, including: a local camera; a first virtual camera; a processor and a memory for storing processor-executable instructions; wherein the processor is configured to implement the above-described distributed control-based camera control method when executing the instructions.
Embodiments of the present application provide a non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
Embodiments of the present application provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an erasable Programmable Read-Only Memory (EPROM or flash Memory), a Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a Memory stick, a floppy disk, a mechanical coding device, a punch card or an in-groove protrusion structure, for example, having instructions stored thereon, and any suitable combination of the foregoing.
The computer readable program instructions or code described herein may be downloaded to the respective computing/processing device from a computer readable storage medium, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present application may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry can execute computer-readable program instructions to implement aspects of the present application by utilizing state information of the computer-readable program instructions to personalize custom electronic circuitry, such as programmable logic circuits, Field-Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs).
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It is also noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware (e.g., a Circuit or an ASIC) for performing the corresponding function or action, or by combinations of hardware and software, such as firmware.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A camera control method based on distributed control is applied to a first device, and the method comprises the following steps:
displaying candidate cameras which can be controlled by the first device, wherein the candidate cameras comprise a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device;
according to the detected task creating operation aiming at the camera to be selected, determining a selected camera and a target task to be executed by the selected camera from the camera to be selected;
generating a first task command according to a first camera identification of the selected camera in the first device and the target task;
sending the first task command to the selected camera to cause the selected camera to control execution of the target task according to the first task command,
wherein the first device comprises at least one first virtual camera, each first virtual camera is used for realizing the control of the local camera of the mapped second device, at least one level of mapping relation exists between each first virtual camera and the local camera of the mapped second device,
when the first virtual camera and the mapped local camera of the second device are in a multi-level mapping relationship, the second device and the first device are in different local area networks, the second device establishes indirect connection with the first device through at least one intermediate device, and the at least one intermediate device utilizes the respective virtual camera to forward data between the first virtual camera and the mapped local camera of the second device.
2. The method of claim 1, further comprising:
when a device connection request is detected, searching for a third device which can be connected with the first device and satisfies connection conditions, wherein the connection conditions include: the third device is provided with a local camera and/or at least one second virtual camera is created in the third device;
sending a first authorization control request to the third equipment, and receiving a first authorization indication returned by the third equipment in response to the first authorization control request;
after determining an authorized first authorized camera according to the first authorization indication, obtaining a second camera identifier of the first authorized camera in the third device, where the first authorized camera includes a local camera of the third device and/or a local camera of a fourth device mapped by the second virtual camera;
determining a first camera identification of the first authorized camera in the first device according to the second camera identification, and creating a first virtual camera for controlling the first authorized camera according to the first camera identification of the first authorized camera,
wherein each second virtual camera is used for realizing the control of the local camera of the fourth device mapped by the second virtual camera, and the second virtual camera and the local camera of the mapped fourth device have at least one-level mapping relation.
3. The method of claim 2, wherein sending a first authorization control request to the third device and receiving a first authorization indication returned by the third device in response to the first authorization control request comprises:
selecting a first request camera according to a request operation for a local camera of the third device which can be controlled by the third device and/or a local camera of a fourth device mapped by the second virtual camera;
the first authorization control request is generated according to the first request camera, and is sent to the third device, so that the third device generates the first authorization indication according to the detected authorization operation aiming at the first authorization control request.
4. The method of claim 2, wherein determining a first camera identifier of the first authorized camera in the first device according to the second camera identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identifier of the first authorized camera comprises:
when the first authorized camera comprises a local camera of a fourth device mapped by a second virtual camera, determining a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera according to a mapping relation level which indicates a mapping relation between the first authorized camera and the second virtual camera in the second camera identification;
determining a first identity identifier of the first authorized camera in the first device according to the identity identifier in the second camera identifier and the identity identifier of the existing camera in the first device corresponding to the first mapping relation level;
and determining a first camera identifier of the first authorized camera in the first device according to the first mapping relation level and the first identity identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identifier.
5. The method of claim 4, wherein determining a first camera identifier of the first authorized camera in the first device according to the second camera identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identifier of the first authorized camera comprises:
determining the first-level mapping relationship as the first mapping relationship level between the first authorized camera and the first virtual camera that needs to be created to control the first authorized camera, when the first authorized camera includes a local camera of the third device.
6. The method of claim 4, wherein determining the first identity of the first authorized camera in the first device according to the identity of the second camera identity and the identity of an existing camera in the first device corresponding to the first mapping relationship level comprises:
when the identity identifier in the second camera identifier exists in the existing identity identifiers of the cameras corresponding to the first mapping relation level in the first device, creating a first identity identifier of the first authorized camera in the first device according to a preset identity identifier creation rule; or
when the identity identifier in the second camera identifier is different from the identity identifier of the existing camera in the first device corresponding to the first mapping relation level, determining the identity identifier in the second camera identifier as the first identity identifier of the first authorized camera in the first device.
7. The method according to claim 1, wherein determining a selected camera and a target task that the selected camera needs to execute from the cameras to be selected according to the detected task creating operation for the cameras to be selected comprises:
determining a selected camera according to the detected selection operation aiming at the camera to be selected;
determining a target task to be executed by the selected camera according to the task setting operation aiming at the selected camera,
the task parameters of the target task comprise at least one of a task type, execution time information and camera parameter setting when the selected camera executes the target task, wherein the task type comprises at least one of the following: a photographing task, a shooting task, and an image preview task.
8. The method of claim 1, wherein sending the first task command to the selected camera to cause the selected camera to control the target task to be performed according to the first task command comprises at least one of:
when the selected camera comprises a local camera of the first device, sending the first task command to the local camera of the first device to cause the local camera of the first device to perform a target task indicated by the first task command;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and the selected camera and a first virtual camera corresponding to the selected camera are in a primary mapping relation, forwarding the first task command to the local camera of the second device through the first virtual camera corresponding to the selected camera in the first device, so that the local camera of the second device executes a target task indicated by the first task command;
when the selected camera comprises a local camera of second equipment mapped by a first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relationship, determining at least one intermediate equipment completing forwarding of the first task command according to a first mapping relationship level between the selected camera and the corresponding first virtual camera, and sequentially forwarding the first task command to a local camera of the second equipment through the virtual camera corresponding to the selected camera in each intermediate equipment so that the local camera of the second equipment executes a target task indicated by the first task command.
9. The method according to claim 1 or 8, characterized in that the method further comprises:
when target task data obtained by the selected camera executing the target task is received, image and/or video display is carried out according to the target task data,
wherein, receiving the target task data comprises at least one of the following modes:
when the selected camera comprises a local camera of the first device, directly receiving target task data sent by the local camera of the first device;
when the selected camera comprises a local camera of second equipment mapped by a first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a primary mapping relation, directly receiving target task data sent by the local camera of the second equipment by using the first virtual camera corresponding to the selected camera;
and when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, the first virtual camera corresponding to the selected camera is used for receiving target task data which are transmitted by the local camera of the second device and forwarded by the at least one intermediate device in sequence.
10. The method of claim 1, further comprising:
when a second authorization control request from a fifth device is received, an authorization prompt is displayed according to a second request camera in the second authorization control request;
determining an authorized second authorized camera according to the detected authorized operation aiming at the second request camera;
and generating a second authorization indication according to the first camera identification of the second authorization camera in the first device, and sending the second authorization indication to the fifth device, so that the fifth device creates a virtual camera for controlling the second authorization camera according to the second authorization indication.
11. A terminal device, comprising:
a local camera;
a first virtual camera;
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1-10 when executing the instructions.
12. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1-10.
CN202011308989.4A 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment Active CN114520867B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011308989.4A CN114520867B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment
CN202210973225.XA CN115484404B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment
PCT/CN2021/130672 WO2022105716A1 (en) 2020-11-20 2021-11-15 Camera control method based on distributed control, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011308989.4A CN114520867B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210973225.XA Division CN115484404B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Publications (2)

Publication Number Publication Date
CN114520867A CN114520867A (en) 2022-05-20
CN114520867B true CN114520867B (en) 2023-02-03

Family

ID=81594926

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011308989.4A Active CN114520867B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment
CN202210973225.XA Active CN115484404B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210973225.XA Active CN115484404B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Country Status (2)

Country Link
CN (2) CN114520867B (en)
WO (1) WO2022105716A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116366957B (en) * 2022-07-21 2023-11-14 荣耀终端有限公司 Virtualized camera enabling method, electronic equipment and cooperative work system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012123131A (en) * 2010-12-08 2012-06-28 Nec Access Technica Ltd Camera synchronization system, controlling device, and camera synchronization method used for them
CN107154868A (en) * 2017-04-24 2017-09-12 北京小米移动软件有限公司 Smart machine control method and device
CN111083364A (en) * 2019-12-18 2020-04-28 华为技术有限公司 Control method, electronic equipment, computer readable storage medium and chip
WO2020107040A2 (en) * 2020-02-20 2020-05-28 Futurewei Technologies, Inc. Integration of internet of things devices

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPQ412899A0 (en) * 1999-11-18 1999-12-09 Prescient Networks Pty Ltd A gateway system for interconnecting wireless ad-hoc networks
US8429630B2 (en) * 2005-09-15 2013-04-23 Ca, Inc. Globally distributed utility computing cloud
WO2014032259A1 (en) * 2012-08-30 2014-03-06 Motorola Mobility Llc A system for controlling a plurality of cameras in a device
CN104639418B (en) * 2015-03-06 2018-04-27 北京深思数盾科技股份有限公司 The method and system that structure LAN is transmitted into row information
CA2979406C (en) * 2015-03-12 2024-02-27 Alarm.Com Incorporated Virtual enhancement of security monitoring
US20170332009A1 (en) * 2016-05-11 2017-11-16 Canon Canada Inc. Devices, systems, and methods for a virtual reality camera simulator
CN109600549A (en) * 2018-12-14 2019-04-09 北京小米移动软件有限公司 Photographic method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115484404A (en) 2022-12-16
CN115484404B (en) 2023-06-02
CN114520867A (en) 2022-05-20
WO2022105716A1 (en) 2022-05-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant