CN115794019A - Projection method, projection device, electronic equipment and readable storage medium


Info

Publication number
CN115794019A
Authority
CN
China
Prior art keywords
display area
virtual display
equipment
preset
input
Prior art date
Legal status
Pending
Application number
CN202211581824.3A
Other languages
Chinese (zh)
Inventor
张鑫
晏燕楠
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202211581824.3A
Publication of CN115794019A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

This application discloses a projection method, a projection apparatus, an electronic device, and a readable storage medium, and belongs to the field of communication technology. The method is applied to a first device that can display a virtual display area, where the virtual display area includes a preset area and the first device is communicatively connected to a second device. The method includes: receiving a first input of the second device in the preset area; and, in response to the first input, projecting display content of the second device onto the virtual display area.

Description

Projection method, projection device, electronic equipment and readable storage medium
Technical Field
This application belongs to the field of communication technology, and in particular relates to a projection method, a projection apparatus, an electronic device, and a readable storage medium.
Background
With the rapid development of intelligent terminal devices such as smart glasses, new forms of smart glasses based on technologies such as augmented reality (AR) and mixed reality (MR) are emerging. With AR smart glasses, the scenes and objects the user sees are partly real and partly virtual: virtual information is brought into the real world. MR smart glasses combine virtual reality (VR) technology with AR technology.
AR smart glasses and MR devices can, to a certain extent, support interaction between a user's smart terminal and a virtual picture. However, the user must first enter a preset interface and then manually select and pair the devices to interconnect the smart terminal and the smart glasses; only after interconnection can the data content to be interacted with be projected onto the smart glasses. This projection workflow is tedious and complicated. Moreover, for the user, the augmented reality effect is not realistic enough. How to simplify the projection flow and make augmented reality more realistic is a technical problem to be solved.
Disclosure of Invention
Embodiments of this application aim to provide a projection method, a projection apparatus, an electronic device, and a readable storage medium that can solve the problem that the augmented reality effect of existing smart devices is not realistic enough.
In a first aspect, an embodiment of this application provides a projection method applied to a first device, where the first device can display a virtual display area, the virtual display area includes a preset area, and the first device is communicatively connected to a second device. The projection method includes: receiving a first input of the second device in the preset area; and, in response to the first input, projecting display content of the second device onto the virtual display area.
In a second aspect, an embodiment of this application provides a projection apparatus applied to a first device, where the first device can display a virtual display area, the virtual display area includes a preset area, and the first device is communicatively connected to a second device. The projection apparatus includes: a receiving module configured to receive a first input of the second device in the preset area; and a projection module configured to project display content of the second device onto the virtual display area in response to the first input.
In a third aspect, an embodiment of this application provides an electronic device including a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium on which a program or instructions are stored, where the program or instructions, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of this application provides a chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, an embodiment of this application provides a computer program product stored on a storage medium, where the program product is executed by at least one processor to implement the method according to the first aspect.
In the embodiments of this application, the projection method is applied to a first device that can display a virtual display area, where the virtual display area includes a preset area and the first device is communicatively connected to a second device. A first input of the second device in the preset area is received, and in response to the first input, the display content of the second device is projected onto the virtual display area. Because an interaction mode is adopted in which the real second device touches the virtual display area of the first device, the display content of the second device can be projected directly, which greatly simplifies the projection flow and improves the interaction efficiency between the devices. The "augmented reality" of the first device thereby becomes more realistic, so that the user of the first device has a more immersive experience of the virtual projection picture, and the experience of using the first device is improved.
Drawings
Fig. 1 is a schematic flow chart of a projection method according to an embodiment of the present application.
Fig. 2 is a schematic application scenario diagram of a projection method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a user operation of the second device according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of a projection system according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a projection apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 7 is a hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of this application are described clearly below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are some, but not all, of the embodiments of this application. All other embodiments derived by a person of ordinary skill in the art from the embodiments of this application fall within the scope of protection of this application.
The terms "first", "second", and the like in the description and claims of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of this application can be practiced in orders other than those illustrated or described here. Objects distinguished by "first", "second", and the like are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The projection method, apparatus, system, and electronic device provided in the embodiments of this application are described in detail below with reference to the accompanying drawings, through specific embodiments and their application scenarios.
In an embodiment of this application, a projection method is applied to a first device, where the first device can display a virtual display area, the virtual display area includes a preset area, and the first device is communicatively connected to a second device. The projection method includes: receiving a first input of the second device in the preset area; and, in response to the first input, projecting display content of the second device onto the virtual display area.
Referring now to Fig. 1, Fig. 1 is a schematic flowchart of a projection method according to an embodiment of this application; the projection method is applied to a first device.
The first device may be smart glasses, such as AR smart glasses or MR smart glasses, which project a virtual display area and display it to the user wearing the glasses through a corresponding augmented reality technology. For example, the virtual display area may display a virtual computer screen or a virtual screen in another mode, and the user may operate the virtual display area through the smart glasses or through a terminal device associated with the smart glasses.
The preset area of the virtual display area is an area used to trigger the first device to execute the projection operation: the first device executes the projection operation triggered by an input of the second device in the preset area. The second device may be a mobile electronic device, such as a terminal device.
As shown in Fig. 1, the projection method of this embodiment includes the following steps 102 and 104.
Step 102: receive a first input of the second device in the preset area.
The first input is an operation performed, through the second device, by the user holding the second device on the preset area of the virtual display area displayed by the first device, for example, an operation of the second device touching the preset area.
Step 104: in response to the first input, project the display content of the second device onto the virtual display area.
That is, when the second device touches the preset area of the first device, the user holding the second device wants to transfer the display content of the second device to the first device and have it displayed in the virtual display area of the first device. In response to the touch operation, the display content of the second device is projected onto the virtual display area of the first device.
If the second device provides an input in the virtual display area of the first device but the input is not in the preset area, the projection operation is not executed.
Optionally, the first device is provided with a camera, and before receiving the first input of the second device in the preset area, the method further includes: acquiring, through the camera, a video frame of the virtual display area while the second device performs a target input; identifying, from the video frame, the actual position of the second device when performing the target input in the virtual display area; determining whether the actual position falls within the preset area; and, when the actual position falls within the preset area, determining that the target input is the first input in the preset area.
In the above embodiment, the camera of the first device monitors whether there is a touch operation of the second device on the preset area of the first device.
The camera of the first device monitors the virtual display area projected by the first device, so as to monitor the user's operations in the virtual display area and determine whether to respond to an operation for user interaction.
For a first device such as smart glasses, cameras may be disposed on one side or on both sides of the glasses. Video frames of the virtual display area are collected by the cameras, and from the images of the video frames it is identified whether the second device touches the virtual display area of the first device and whether the touched area is the preset area.
Optionally, identifying, from the video frame, the actual position of the second device when performing the target input in the virtual display area includes: extracting, through a simultaneous localization and mapping (SLAM) algorithm, features of the second device, features of the virtual display area, and position-change features of the second device relative to the virtual display area from the video frame; and identifying the actual position at which the second device operates in the virtual display area by inputting the features of the second device and the position-change features into a neural network recognition model, where the neural network recognition model is trained on sample data including second-device features and position-change features, labeled with whether the second device operates at a target position in the virtual display area.
The simultaneous localization and mapping (SLAM) algorithm extracts typical features of objects, such as a feature data set of the second device and a feature data set of the virtual display area, as well as features of the position change of the second device relative to the virtual display area. These features are input into a trained neural network recognition model, which judges whether an action of the second device touching the virtual display area exists and, if so, the actual position on the virtual display area touched by the second device.
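This application does not specify the architecture of the neural network recognition model or the layout of the SLAM features. Purely as an illustrative sketch (the feature dimension, layer sizes, and output format below are assumptions, not taken from this application), the recognition step could be modeled as a small network that consumes the concatenated SLAM feature vectors and outputs a touch probability together with a predicted touch position:

```python
import torch
import torch.nn as nn

# Illustrative only: architecture, dimensions, and output layout are assumptions.
class TouchRecognizer(nn.Module):
    """Maps SLAM-derived features (second-device features, virtual-display-area
    features, and position-change features, concatenated into one vector) to a
    touch probability and a predicted 3D touch position."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 4),  # [touch logit, x, y, z]
        )

    def forward(self, features):
        out = self.net(features)
        touch_prob = torch.sigmoid(out[..., 0])  # probability that a touch occurred
        position = out[..., 1:]                  # predicted actual position
        return touch_prob, position

model = TouchRecognizer()
features = torch.randn(1, 128)  # stand-in for real SLAM feature vectors
prob, pos = model(features)
```

Training such a model would use the sample data described above: feature vectors labeled with whether, and where, the second device operated in the virtual display area.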
When it is recognized that the second device has touched the virtual display area, it is further determined whether the actual position corresponding to the touch operation falls within the preset area of the virtual display area.
Optionally, determining whether the actual position falls within the preset area includes: determining a first orientation of the actual position relative to the first device, where an orientation includes a direction and three-dimensional coordinates; comparing the first orientation with a preset second orientation, where the second orientation is the orientation of a preset position in the preset area relative to the first device; and, when the angle difference between the direction of the first orientation and the direction of the second orientation satisfies a first angle threshold and the coordinate difference between the three-dimensional coordinates of the first orientation and the three-dimensional coordinates of the second orientation satisfies a first coordinate threshold, determining that the actual position falls within the preset area.
In this embodiment, the preset position may be a reference point in the preset area, for example, the center point of the preset area. The second orientation is the orientation of this reference point relative to the first device, that is, the three-dimensional coordinates of the reference point and its direction angle relative to the first device; the preset range around the reference point is regarded as the preset area.
After the actual position at which the second device provides an input in the virtual display area is determined, the first orientation of the actual position relative to the first device is compared with the second orientation of the reference point relative to the first device, and the coordinate difference and angle difference between the first orientation and the second orientation are determined. If the coordinate difference and the angle difference do not exceed the maximum differences between the reference point and the other positions in the preset area, namely the preset coordinate threshold and the preset angle threshold, it is judged that the actual position falls within the preset area, that is, that the second device has performed the first input on the preset area of the first device; otherwise, it is judged that the second device has not performed the first input on the preset area of the first device.
The second orientation of the preset position in the preset area relative to the first device is known in advance, while the first orientation of the actual position relative to the first device is determined by detection by the first device when an input is provided in the virtual display area.
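As a minimal sketch of this threshold check (the function name, the representation of an orientation as a unit direction vector plus 3D coordinates, and the threshold values are all illustrative assumptions, not specified by this application), the comparison could be written as:

```python
import math

# Illustrative sketch; thresholds and the orientation representation are assumptions.
def falls_in_preset_area(first_orientation, second_orientation,
                         angle_threshold_deg=5.0, coord_threshold_m=0.05):
    """Compare the actual touch position's orientation (first orientation) with
    the preset reference orientation (second orientation), each given as a
    (unit direction vector, 3D coordinates) pair relative to the first device."""
    dir1, pos1 = first_orientation
    dir2, pos2 = second_orientation

    # Angle difference between the two direction vectors.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(dir1, dir2))))
    angle_diff = math.degrees(math.acos(dot))

    # Euclidean distance between the two sets of 3D coordinates.
    coord_diff = math.dist(pos1, pos2)

    return angle_diff <= angle_threshold_deg and coord_diff <= coord_threshold_m
```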
Optionally, determining the first orientation of the actual position relative to the first device includes: determining the first orientation of the actual position relative to the first device based on the disparity between the lateral coordinates of the actual position imaged on the views of the two cameras of the first device and the distance from the actual position to the imaging plane.
In this embodiment, two cameras are disposed on the left and right sides of the first device, respectively. The touch position is imaged on the view of each camera; the disparity between the lateral coordinates of the touch position in the two imaged views is calculated; and, combined with the distance from the touch position to the imaging plane, the orientation of the touch position relative to the first device can be calculated.
The preset area is an area set in advance for the second device to trigger the data interaction operation of projecting display content, so the orientation of the preset position in the preset area relative to the first device is known. Of course, the orientation of the preset position relative to the first device may also be determined through the binocular ranging algorithm described above.
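A minimal sketch of this binocular ranging, assuming rectified camera views with known focal length, baseline, and principal point (all parameter names below are illustrative assumptions), uses the standard stereo relation Z = f * B / d:

```python
# Illustrative sketch; assumes rectified stereo views and known intrinsics.
def triangulate_touch_point(x_left, x_right, y_left,
                            focal_px, baseline_m, cx, cy):
    """Estimate the 3D position of the touch point relative to the left camera
    from its pixel coordinates in the two camera views."""
    disparity = x_left - x_right            # parallax between lateral coordinates
    if disparity <= 0:
        return None                         # no valid depth (match failed)
    z = focal_px * baseline_m / disparity   # distance to the imaging plane
    x = (x_left - cx) * z / focal_px        # lateral offset
    y = (y_left - cy) * z / focal_px        # vertical offset
    return (x, y, z)
```

From this 3D position, the direction component of the first orientation follows directly as the normalized vector from the device origin to the point.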
As shown in Fig. 2, when a user wearing the smart glasses 20 (i.e., the first device) holds the terminal device 10 (i.e., the second device) and performs a "touch" operation toward the preset area B in the virtual display area A projected by the smart glasses 20, the action of the terminal device 10 touching the preset area B in the virtual display area A can be recognized through the camera 22 on the smart glasses 20. That is, the smart glasses 20 can determine that the user requests to project the display content of the terminal device 10 onto the virtual display area A of the smart glasses 20.
After it is recognized through the above steps that the second device has touched the preset area in the virtual display area of the first device, in step 104 the display content of the second device can be projected onto the virtual display area of the first device in response to the first input. Before projection, the first device needs to identify the second device that performed the first input and interact with it over the communication connection in order to receive the display content to be projected.
There may be multiple devices around the first device, and the first device needs to identify, from among the surrounding devices, the actual second device that touched its preset area.
Optionally, before projecting the display content of the second device onto the virtual display area in response to the first input, the method further includes: determining a third orientation, relative to the first device, of each device within a preset distance of the first device; comparing the direction of each third orientation with the direction of the second orientation, and comparing the three-dimensional coordinates of each third orientation with the three-dimensional coordinates of the second orientation; and, when there is a device for which the angle difference between the direction of its third orientation and the direction of the second orientation satisfies a second angle threshold and the coordinate difference between the three-dimensional coordinates of its third orientation and the three-dimensional coordinates of the second orientation satisfies a second coordinate threshold, identifying that device as the second device.
Identifying the second device that performed the first input in the preset area of the first device can thus be accomplished through the orientation of the second device relative to the first device.
Optionally, determining the third orientation, relative to the first device, of each device within the preset distance of the first device includes: determining the third orientation of each device relative to the first device through ultra-wideband (UWB) positioning technology.
Ultra-wideband (UWB) can achieve indoor centimeter-level positioning accuracy, and the orientations of multiple devices relative to the first device can be identified in real time through UWB technology.
The preset distance of the first device may be the distance that can be covered by the UWB base station. The first device and the surrounding devices turn on UWB communication and can thereby communicate with each other. When the first device recognizes that a device has touched the preset area, it is triggered to search the surrounding devices through UWB positioning technology and to calculate the orientation, including the three-dimensional coordinates and the direction, of each device relative to the first device.
Then, the third orientation of each device relative to the first device is compared with the preset second orientation, that is, the orientation of the preset position in the preset area relative to the first device: the direction of each third orientation is compared with the direction of the second orientation, and the three-dimensional coordinates of each third orientation are compared with those of the second orientation, so as to calculate the angle difference and the three-dimensional coordinate difference between the second orientation and the third orientation of each device. If the angle difference and the three-dimensional coordinate difference of a device both satisfy the corresponding preset thresholds, its third orientation is close to the second orientation, and the device can be judged to be the second device that touched the preset area in the virtual display area. The second device that performed the first input in the preset area is thereby identified.
Through UWB technology, the second device can be identified accurately, so that the subsequent display content data of the second device is transmitted to the first device rather than to other, unintended devices.
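A minimal sketch of this identification step follows (the data layout and threshold values are illustrative assumptions; in practice the per-device orientations would come from the UWB positioning layer):

```python
import math

# Illustrative sketch; device data layout and thresholds are assumptions.
def identify_second_device(devices, second_orientation,
                           angle_threshold_deg=10.0, coord_threshold_m=0.1):
    """Pick, from the UWB-located devices, the one whose third orientation
    matches the second orientation (the preset area's orientation relative to
    the first device). `devices` maps device id -> (direction, coordinates)."""
    ref_dir, ref_pos = second_orientation
    for device_id, (dev_dir, dev_pos) in devices.items():
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(dev_dir, ref_dir))))
        angle_diff = math.degrees(math.acos(dot))
        coord_diff = math.dist(dev_pos, ref_pos)
        if angle_diff <= angle_threshold_deg and coord_diff <= coord_threshold_m:
            return device_id  # this device touched the preset area
    return None               # no device matched; do not project
```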
Optionally, in step 104, projecting the display content of the second device onto the virtual display area in response to the first input includes: in response to the first input, sending a notification to the second device to trigger the second device to send data corresponding to the display content; and receiving the data and projecting it onto the virtual display area.
After identifying the second device, the first device can then perform data interaction with the second device.
The first device sends a notification to the identified second device. Upon receiving the notification, the second device knows that its display content can now be projected onto the first device, and can therefore send the display content data to be projected to the first device. The interaction between the first device and the second device may transmit data through UWB communication or WIFI communication.
As shown in Fig. 4, the second device 10 and the first device 20 are both provided with a UWB communication interface and a WIFI communication interface. With the corresponding communication interfaces turned on, data interaction between the two devices can be achieved through UWB or WIFI communication in the corresponding mode.
After receiving the display content data transmitted by the second device, the first device may, according to the data interaction mode selected on the second device, either project the data onto the virtual display area or store it as shared data. Of course, the second device may also not select a data interaction mode, in which case, after the second device performs an operation such as a touch, the display content data transmitted by the second device is directly projected onto the virtual display area of the first device.
When multiple data interaction modes are available, the user of the second device can select the data interaction mode to be executed, where the data interaction modes include a data screen-casting mode and a data sharing mode. If the data screen-casting mode is selected on the second device, the first device projects the corresponding display content data onto the virtual display area after receiving it. If the data sharing mode is selected on the second device, the first device stores the corresponding display content data after receiving it.
The user may select the corresponding data interaction mode through the user interface (UI) of the second device. As shown in Fig. 3, the user determines the data interaction mode or function triggered by touching the preset area by tapping the "share" or "screen-cast" control on the UI. For example, with the picture 12 displayed on the UI of Fig. 3, if the user selects the "share" control, the data interaction mode is determined to be the data sharing mode and the picture 12 is transmitted to the first device for storage; if the user selects the "screen-cast" control, the data interaction mode is determined to be the data screen-casting mode and the display content of the picture 12 is projected onto the virtual display area of the first device.
In this way, through the operation of the second device 10 touching the preset area B of the virtual display area of the first device, the picture 12 displayed on the second device 10 can be transmitted to the first device 20 and projected onto the virtual display area of the first device 20.
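A minimal sketch of this mode dispatch on the first device's side (the mode names and the `virtual_display` / `storage` interfaces are hypothetical, introduced only for illustration):

```python
# Illustrative sketch; mode names and interfaces are hypothetical.
def handle_received_content(mode, data, virtual_display, storage):
    """Dispatch received display-content data according to the data
    interaction mode selected on the second device's UI."""
    if mode == "screen_cast":
        virtual_display.project(data)  # data screen-casting mode
    elif mode == "share":
        storage.save(data)             # data sharing mode
    else:
        # No mode selected: default to projecting onto the virtual display area.
        virtual_display.project(data)
```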
In the embodiments of this application, the projection method is applied to a first device that can display a virtual display area, where the virtual display area includes a preset area and the first device is communicatively connected to a second device. A first input of the second device in the preset area is received, and in response to the first input, the display content of the second device is projected onto the virtual display area. Because an interaction mode is adopted in which the real second device touches the virtual display area of the first device, the display content of the second device can be projected directly, which greatly simplifies the projection flow and improves the interaction efficiency between the devices. The "augmented reality" of the first device thereby becomes more realistic, so that the user of the first device has a more immersive experience of the virtual projection picture, and the experience of using the first device is improved.
The projection method provided in the embodiments of this application may be executed by a projection apparatus. In the embodiments of this application, the projection apparatus is described by taking, as an example, the case in which the projection apparatus executes the projection method.
As shown in Fig. 5, an embodiment of this application provides a projection apparatus 800 applied to a first device, where the first device can display a virtual display area, the virtual display area includes a preset area, and the first device is communicatively connected to a second device. The projection apparatus includes: a receiving module 820 configured to receive a first input of the second device in the preset area; and a projection module 840 configured to project the display content of the second device onto the virtual display area in response to the first input.
Optionally, the first device is provided with a camera, and the projection apparatus 800 further includes a determining module configured to: acquire, through the camera, before the first input of the second device in the preset area is received, a video frame of the virtual display area while the second device performs a target input; identify, from the video frame, the actual position of the second device when performing the target input in the virtual display area; determine whether the actual position falls within the preset area; and, when the actual position falls within the preset area, determine that the target input is the first input in the preset area.
Optionally, the determining module is specifically configured to: determine a first orientation of the actual position relative to the first device, where an orientation includes a direction and three-dimensional coordinates; compare the first orientation with a preset second orientation, where the second orientation is the orientation of a preset position in the preset area relative to the first device; and, when the angle difference between the direction of the first orientation and the direction of the second orientation satisfies a first angle threshold and the coordinate difference between the three-dimensional coordinates of the first orientation and the three-dimensional coordinates of the second orientation satisfies a first coordinate threshold, determine that the actual position falls within the preset area.
Optionally, the determining module is specifically configured to: extract, through a simultaneous localization and mapping (SLAM) algorithm, features of the second device, features of the virtual display area, and position-change features of the second device relative to the virtual display area from the video frame; and identify the actual position at which the second device operates in the virtual display area by inputting the features of the second device and the position-change features into a neural network recognition model, where the neural network recognition model is trained on sample data including second-device features and position-change features, labeled with whether the second device operates at a target position in the virtual display area.
Optionally, the determining module is specifically configured to: determine the first orientation of the actual position relative to the first device based on the disparity between the lateral coordinates of the actual position imaged on the views of the two cameras of the first device and the distance from the actual position to the imaging plane.
Optionally, the projection apparatus 800 further includes an identification module configured to: determine, before the display content of the second device is projected onto the virtual display area in response to the first input, a third orientation, relative to the first device, of each device within a preset distance of the first device; compare the direction of each third orientation with the direction of the second orientation, and compare the three-dimensional coordinates of each third orientation with the three-dimensional coordinates of the second orientation; and, when there is a device for which the angle difference between the direction of its third orientation and the direction of the second orientation satisfies a second angle threshold and the coordinate difference between the three-dimensional coordinates of its third orientation and the three-dimensional coordinates of the second orientation satisfies a second coordinate threshold, identify that device as the second device.
Optionally, the identification module is specifically configured to: determine the third orientation, relative to the first device, of each device within the preset distance of the first device through ultra-wideband (UWB) positioning technology.
Optionally, the projection module 840 is specifically configured to: in response to the first input, send a notification to the second device to trigger the second device to send data corresponding to the display content; and receive the data and project it onto the virtual display area.
In the embodiments of this application, the projection apparatus is applied to a first device that can display a virtual display area, where the virtual display area includes a preset area and the first device is communicatively connected to a second device. A first input of the second device in the preset area is received, and in response to the first input, the display content of the second device is projected onto the virtual display area. Because an interaction mode is adopted in which the real second device touches the virtual display area of the first device, the display content of the second device can be projected directly, which greatly simplifies the projection flow and improves the interaction efficiency between the devices. The "augmented reality" of the first device thereby becomes more realistic, so that the user of the first device has a more immersive experience of the virtual projection picture, and the experience of using the first device is improved.
The projection apparatus in the embodiments of this application may be an electronic device, or a component in an electronic device such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a personal computer (PC); the embodiments of this application are not specifically limited in this regard.
The projection apparatus in the embodiments of this application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of this application are not specifically limited in this regard.
The projection apparatus provided in the embodiments of this application can implement each process implemented by the method embodiments of Figs. 1 to 3; to avoid repetition, the details are not described here again.
Optionally, as shown in Fig. 4, an embodiment of this application further provides a projection system 100, which includes the first device 20 and the second device 10 of any of the embodiments described above.
Optionally, as shown in Fig. 6, an embodiment of this application further provides an electronic device 900, which includes a processor 940 and a memory 920, where the memory 920 stores a program or instructions executable on the processor 940. When the program or instructions are executed by the processor 940, the steps of the projection method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, the details are not described here again.
It should be noted that the electronic devices in the embodiments of this application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in Fig. 7 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown, combine certain components, or use a different arrangement of components, which is not described here again.
The electronic device 1000 can display a virtual display area, where the virtual display area includes a preset area, and the electronic device 1000 is communicatively connected to a second device.
The user input unit 1007 is configured to receive a first input of the second device in the preset area.
The processor 1010 is configured to project the display content of the second device onto the virtual display area in response to the first input.
Optionally, the electronic device 1000 is provided with a camera, and the processor 1010 is further configured to: acquire, through the camera, before the first input of the second device in the preset area is received, a video frame of the virtual display area while the second device performs a target input; identify, from the video frame, the actual position of the second device when performing the target input in the virtual display area; determine whether the actual position falls within the preset area; and, when the actual position falls within the preset area, determine that the target input is the first input in the preset area.
Optionally, the processor 1010 is specifically configured to: determine a first orientation of the actual position relative to the first device, where an orientation includes a direction and three-dimensional coordinates; compare the first orientation with a preset second orientation, where the second orientation is the orientation of a preset position in the preset area relative to the first device; and, when the angle difference between the direction of the first orientation and the direction of the second orientation satisfies a first angle threshold and the coordinate difference between the three-dimensional coordinates of the first orientation and the three-dimensional coordinates of the second orientation satisfies a first coordinate threshold, determine that the actual position falls within the preset area.
Optionally, the processor 1010 is specifically configured to: extract, through a simultaneous localization and mapping (SLAM) algorithm, features of the second device, features of the virtual display area, and position-change features of the second device relative to the virtual display area from the video frame; and identify the actual position at which the second device operates in the virtual display area by inputting the features of the second device and the position-change features into a neural network recognition model, where the neural network recognition model is trained on sample data including second-device features and position-change features, labeled with whether the second device operates at a target position in the virtual display area.
Optionally, the processor 1010 is specifically configured to: determine the first orientation of the actual position relative to the first device based on the disparity between the lateral coordinates of the actual position imaged on the views of the two cameras of the first device and the distance from the actual position to the imaging plane.
Optionally, the processor 1010 is further configured to: determine, before the display content of the second device is projected onto the virtual display area in response to the first input, a third orientation, relative to the first device, of each device within a preset distance of the first device; compare the direction of each third orientation with the direction of the second orientation, and compare the three-dimensional coordinates of each third orientation with the three-dimensional coordinates of the second orientation; and, when there is a device for which the angle difference between the direction of its third orientation and the direction of the second orientation satisfies a second angle threshold and the coordinate difference between the three-dimensional coordinates of its third orientation and the three-dimensional coordinates of the second orientation satisfies a second coordinate threshold, identify that device as the second device.
Optionally, the processor 1010 is specifically configured to: determine the third orientation, relative to the first device, of each device within the preset distance of the first device through ultra-wideband (UWB) positioning technology.
Optionally, the processor 1010 is specifically configured to: in response to the first input, send a notification to the second device to trigger the second device to send data corresponding to the display content; and receive the data and project it onto the virtual display area.
In this embodiment, the electronic device can display a virtual display area that includes a preset area, and the electronic device is communicatively connected to a second device. A first input of the second device in the preset area is received, and in response to the first input, the display content of the second device is projected onto the virtual display area. Because an interaction mode is adopted in which the real second device touches the virtual display area, the display content of the second device can be projected directly, which greatly simplifies the projection flow and improves the interaction efficiency between the devices. The "augmented reality" of the electronic device thereby becomes more realistic, so that its user has a more immersive experience of the virtual projection picture, and the experience of using the device is improved.
It should be understood that, in this embodiment, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042, where the graphics processing unit 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen and may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions (such as a sound playing function and an image playing function) required for at least one function. Further, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 1009 in the embodiments of this application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units. Optionally, the processor 1010 integrates an application processor, which mainly handles operations involving the operating system, the user interface, and application programs, and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It should be understood that the modem processor may alternatively not be integrated into the processor 1010.
An embodiment of this application further provides a readable storage medium, where a program or instructions are stored on the readable storage medium. When the program or instructions are executed by a processor, each process of the projection method embodiments is implemented and the same technical effects can be achieved; to avoid repetition, the details are not described here again.
The processor is the processor in the electronic device of the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement each process of the projection method embodiments, with the same technical effects.
It should be understood that the chip mentioned in the embodiments of this application may also be referred to as a system-on-chip or a system-on-a-chip.
An embodiment of this application provides a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement each process of the projection method embodiments, with the same technical effects; to avoid repetition, the details are not described here again.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of this application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of this application may be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of this application.
While the embodiments of this application have been described above with reference to the accompanying drawings, the embodiments are intended to be illustrative rather than restrictive, and various changes and modifications may be made by those skilled in the art without departing from the scope of the appended claims.

Claims (10)

1. A projection method, applied to a first device, wherein the first device can display a virtual display area, the virtual display area comprises a preset area, and the first device is communicatively connected to a second device, the projection method comprising:
receiving a first input of the second device in the preset area; and
in response to the first input, projecting display content of the second device onto the virtual display area.
2. The method of claim 1, wherein the first device is provided with a camera, and before the receiving of the first input of the second device in the preset area, the method further comprises:
acquiring, through the camera, a video frame of the virtual display area while the second device performs a target input;
identifying, from the video frame, an actual position of the second device when performing the target input in the virtual display area;
determining whether the actual position falls within the preset area; and
determining that the target input is the first input in the preset area when the actual position falls within the preset area.
3. The method of claim 2, wherein the determining whether the actual position falls within the preset area comprises:
determining a first orientation of the actual position relative to the first device, wherein an orientation comprises a direction and three-dimensional coordinates;
comparing the first orientation with a preset second orientation, wherein the second orientation is an orientation of a preset position in the preset area relative to the first device; and
determining that the actual position falls within the preset area when an angle difference between the direction of the first orientation and the direction of the second orientation satisfies a first angle threshold and a coordinate difference between the three-dimensional coordinates of the first orientation and the three-dimensional coordinates of the second orientation satisfies a first coordinate threshold.
4. The method of claim 2, wherein the identifying, from the video frame, of the actual position of the second device when performing the target input in the virtual display area comprises:
extracting, through a simultaneous localization and mapping (SLAM) algorithm, features of the second device, features of the virtual display area, and position-change features of the second device relative to the virtual display area from the video frame; and
identifying the actual position at which the second device operates in the virtual display area by inputting the features of the second device and the position-change features into a neural network recognition model, wherein the neural network recognition model is trained on sample data comprising second-device features and position-change features, labeled with whether the second device operates at a target position in the virtual display area.
5. The method of claim 3, wherein the determining of the first orientation of the actual position relative to the first device comprises:
determining the first orientation of the actual position relative to the first device based on a disparity between lateral coordinates of the actual position imaged on the views of the two cameras of the first device and a distance from the actual position to the imaging plane.
6. The method of claim 3, wherein before the projecting of the display content of the second device onto the virtual display area in response to the first input, the method further comprises:
determining a third orientation, relative to the first device, of each device within a preset distance of the first device;
comparing the direction of each third orientation with the direction of the second orientation, and comparing the three-dimensional coordinates of each third orientation with the three-dimensional coordinates of the second orientation; and
identifying, as the second device, a device for which an angle difference between the direction of its third orientation and the direction of the second orientation satisfies a second angle threshold and a coordinate difference between the three-dimensional coordinates of its third orientation and the three-dimensional coordinates of the second orientation satisfies a second coordinate threshold.
7. The method of claim 6, wherein the determining of the third orientation, relative to the first device, of each device within the preset distance of the first device comprises:
determining the third orientation of each device relative to the first device through ultra-wideband (UWB) positioning technology.
8. A projection apparatus, applied to a first device, wherein the first device can display a virtual display area, the virtual display area comprises a preset area, and the first device is communicatively connected to a second device, the projection apparatus comprising:
a receiving module configured to receive a first input of the second device in the preset area; and
a projection module configured to project display content of the second device onto the virtual display area in response to the first input.
9. An electronic device, comprising a processor and a memory, wherein the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the projection method according to any one of claims 1-7.
10. A readable storage medium, wherein a program or instructions are stored on the readable storage medium, and the program or instructions, when executed by a processor, implement the steps of the projection method according to any one of claims 1-7.
CN202211581824.3A (filed 2022-12-09, priority date 2022-12-09): Projection method, projection device, electronic equipment and readable storage medium. Published as CN115794019A; status: Pending.

Priority Applications (1)

Application Number: CN202211581824.3A; Priority Date: 2022-12-09; Filing Date: 2022-12-09; Title: Projection method, projection device, electronic equipment and readable storage medium

Publications (1)

Publication Number: CN115794019A; Publication Date: 2023-03-14

Family

ID=85418362

Family Applications (1)

Application Number: CN202211581824.3A; Title: Projection method, projection device, electronic equipment and readable storage medium; Priority Date: 2022-12-09; Filing Date: 2022-12-09; Status: Pending

Country Status (1)

CN: CN115794019A

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination