CN115840552A - Display method and device and first electronic equipment

Info

Publication number: CN115840552A
Application number: CN202211714169.4A
Authority: CN (China)
Inventor: 李翔
Applicant (Assignee): Vivo Mobile Communication Co Ltd
Filing date: 2022-12-27
Publication date: 2023-03-24
Family ID: 85577571
Legal status: Pending
Classification: User Interface Of Digital Computer (AREA)
Abstract

The application discloses a display method, a display device, and a first electronic device, and belongs to the field of image display. The method comprises the following steps: acquiring a first image corresponding to a first viewing angle of a first camera; acquiring position information and modeling information of a target object, wherein the target object is occluded by a first object within the first viewing angle and the modeling information is used to establish an object model of the target object; and generating and displaying a second image based on the position information and modeling information of the target object and the first image, the second image including the target object.

Description

Display method and device and first electronic equipment
Technical Field
The application belongs to the field of image display, and particularly relates to a display method and device, and a first electronic device.
Background
Extended Reality (XR) refers to a human-computer interactive environment, generated by computer technology and electronic devices, in which a real physical-world scene is combined with a virtual application scene.
However, taking XR glasses as an example of the electronic device: when the user uses the XR glasses, the XR glasses can only present to the user the objects in the real physical-world scene captured by the camera in real time. Occluded objects therefore cannot be presented to the user, and the user cannot know the position of an occluded object within the current viewing-angle range.
Disclosure of Invention
An object of the embodiments of the present application is to provide a display method and apparatus, and a first electronic device, which can solve the problem that an electronic device cannot help a user locate an occluded object.
In a first aspect, an embodiment of the present application provides a display method, where the method includes: acquiring a first image corresponding to a first viewing angle of a first camera; acquiring position information and modeling information of a target object, wherein the target object is occluded by a first object within the first viewing angle, and the modeling information is used to establish an object model of the target object; and generating and displaying a second image based on the position information and modeling information of the target object and the first image, the second image including the target object.
In a second aspect, an embodiment of the present application provides a display device, including an acquisition module and a processing module. The acquisition module is configured to acquire a first image corresponding to a first viewing angle of a first camera, and to acquire position information and modeling information of a target object, wherein the target object is occluded by a first object within the first viewing angle and the modeling information is used to establish an object model of the target object. The processing module is configured to generate and display a second image based on the position information and modeling information of the target object and the first image, wherein the second image includes the target object.
In a third aspect, embodiments of the present application provide a first electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the first electronic device may acquire a first image corresponding to a first viewing angle of a first camera, and acquire position information and modeling information of a target object that is occluded by a first object within the first viewing angle, so as to generate and display a second image based on the position information and modeling information of the target object and the first image, the second image including the target object. Because the first electronic device can acquire both the first image and the position and modeling information of the occluded target object, it can generate and display a second image that includes the target object, thereby helping the user locate the occluded target object within the first viewing angle.
Drawings
Fig. 1 is a schematic diagram of a display method provided in an embodiment of the present application;
Fig. 2 is a first schematic diagram of an example of displaying a target object according to an embodiment of the present application;
Fig. 3 is a second schematic diagram of an example of displaying a target object according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a display device according to an embodiment of the present application;
Fig. 5 is a first schematic diagram of a hardware structure of a first electronic device according to an embodiment of the present application;
Fig. 6 is a second schematic diagram of a hardware structure of a first electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein; the terms "first", "second", and the like generally denote one class of objects and do not limit the number of objects, e.g., the first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The display method in the embodiments of the application can be applied to scenarios in which an electronic device displays an occluded object.
Currently, when an electronic device (e.g., XR glasses) displays an application scene, the application-scene content is sometimes mixed with the real physical-world scene before display; this is common in the boundary system of a Virtual Reality (VR) device or the visual experience of Augmented Reality (AR). For example, when XR glasses mix application-scene content with the real physical-world scene, the real physical-world scene is generally treated as the background and the application-scene content as the foreground, so as to approximate what real human eyes would see. However, with this method the XR glasses can only present to the user the objects in the real physical-world scene captured by the camera in real time; occluded objects cannot be presented, so the user cannot know the position of an occluded object within the current viewing-angle range.
In this embodiment, when the target object (i.e., the occluded object) is outside the first viewing-angle range of the first camera, the XR glasses may acquire the first image corresponding to the first viewing angle of the first camera, together with the position information and modeling information of the target object occluded by the first object within the first viewing angle, and then generate and display a second image including the target object based on that information and the first image. In this way, the XR glasses can help the user locate the occluded target object within the first viewing angle.
The execution body of the display method provided by the embodiments of the application is a display device. The display device may be the first electronic device itself, or may be a function control module inside the first electronic device; this is not limited in the embodiments of the application. The technical solutions provided by the embodiments of the present application are described below taking the first electronic device as an example.
An embodiment of the present application provides a display method, and fig. 1 shows a flowchart of the display method provided in the embodiment of the present application. As shown in fig. 1, the display method provided in the embodiment of the present application may include steps 201 to 203 described below.
Step 201, a first electronic device acquires a first image corresponding to a first viewing angle of a first camera.
Optionally, in this embodiment of the application, the first camera is a camera of a first electronic device.
Optionally, in this embodiment of the application, the first image may be a two-dimensional image acquired by the first electronic device within the first viewing angle through the first camera.
Optionally, in this embodiment of the application, the first image may be a three-dimensional image constructed by the first electronic device based on the two-dimensional image within the first viewing angle acquired by the first camera.
Illustratively, the first electronic device may convert the two-dimensional image within the first viewing angle acquired by the first camera into a three-dimensional image through Structure from Motion (SfM).
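For illustration only (not part of the disclosure), a minimal two-view SfM step might look like the following sketch. It assumes OpenCV is available and that the camera intrinsics matrix K is known; the function name and the choice of ORB features are assumptions.

```python
# Illustrative two-view Structure-from-Motion sketch (assumes OpenCV and
# a known 3x3 intrinsics matrix K): recover the relative camera pose from
# two frames and triangulate matched keypoints into 3D points.
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    # Detect and match ORB features between the two frames
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix and relative pose (rotation R, translation t)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the correspondences into 3D (camera-1 coordinates)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 points
```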
Step 202, the first electronic device obtains position information and modeling information of the target object.
In the embodiment of the present application, the target object is occluded by the first object within the first viewing angle, and the modeling information is used to establish an object model of the target object.
Optionally, in this embodiment of the application, the target object may include at least one of the following: other users, plants, animals, buildings, electronic products, tables and chairs, etc.
Illustratively, the first object may include at least one of: other users, plants, animals, buildings, electronic products, tables and chairs, etc.
It should be noted that "the target object is occluded by the first object within the first viewing angle" may be understood as meaning that the target object is outside the first viewing angle of the first camera, that the target object is not included in the display scene of the first electronic device, or that the target object is not included in the image acquired by the first electronic device through the first camera within the first viewing angle; that is, the user cannot observe the target object through the display screen of the first electronic device.
Optionally, in this embodiment of the application, the first electronic device may obtain an image acquired by the second camera, and then obtain the position information of the target object based on the image and the position information of the second camera.
Illustratively, the second camera may be any one of: a camera of a target electronic device connected with the first electronic device, a network camera, a local camera, and the like.
Illustratively, the distance between the second camera and the first electronic device is smaller than or equal to a preset threshold.
The target electronic device is a mobile electronic device (e.g., a mobile phone, a tablet computer, a notebook computer, a palm computer, etc.), and may also be a non-mobile electronic device (e.g., a personal computer, a television, etc.).
For example, when using the first electronic device, the user may trigger the first electronic device to establish a connection with the second camera or a target electronic device corresponding to the second camera, so as to obtain a scene within the second viewing angle of the second camera in real time, that is, obtain an image acquired by the second camera based on the second viewing angle in real time.
Illustratively, the target object is within the second view angle of the second camera, and it is understood that the target object is included in the image captured by the second camera.
For example, after the first electronic device obtains the image including the target object captured by the second camera, it may analyze the image to obtain the position information of the target object relative to the second camera, and then calculate the position information of the target object relative to the first electronic device based on the position information of the second camera and the position information of the target object relative to the second camera.
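A minimal sketch of that last calculation, under the assumption that the second camera's pose relative to the first electronic device is known as a rotation matrix and a translation vector (all names here are hypothetical):

```python
import numpy as np

def target_in_first_device_frame(p_target_cam2, R_cam2_to_dev1, t_cam2_in_dev1):
    """Rigid transform of the target's position from the second camera's
    coordinate frame into the first electronic device's frame:
    p_dev1 = R * p_cam2 + t."""
    return R_cam2_to_dev1 @ np.asarray(p_target_cam2, float) + t_cam2_in_dev1

# Example: the target is 2 m in front of the second camera, which sits 1 m
# to the right of the first device and faces the same direction.
p_dev1 = target_in_first_device_frame(
    [0.0, 0.0, 2.0], np.eye(3), np.array([1.0, 0.0, 0.0]))
```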
Optionally, in this embodiment of the application, the first electronic device may obtain modeling information of the target object based on an image acquired by the second camera.
Illustratively, the modeling information of the target object includes data for rendering the target object, such as color data and texture data.
For example, after the first electronic device obtains the image including the target object captured by the second camera, it may process the image using a picture modeling technique to obtain the modeling information of the target object.
Illustratively, the first electronic device may directly acquire modeling information of a target object stored in advance.
Optionally, in this embodiment, the first electronic device may obtain at least one of the position information of the target object and the modeling information of the target object based on the map data of the target map.
Illustratively, the target map contains the target object.
For example, the map data of the target map may be map data prepared in advance, stored locally or obtained from a network.
For example, the first electronic device may obtain the name of the target object and query the map data of the target map with that name, thereby obtaining the location information of the target object.
For example, the first electronic device may query the map data of the target map according to the name of the target object to obtain picture information associated with the target object, and then obtain the modeling information of the target object from that picture information using a picture modeling technique.
For example, the first electronic device may directly acquire modeling information of the target object from map data of the target map according to a name of the target object, the modeling information of the target object being stored in the map data in advance.
Step 203, the first electronic device generates and displays a second image based on the position information and modeling information of the target object, and the first image.
In an embodiment of the present application, the second image includes a target object.
Optionally, in this embodiment, the first electronic device may process the first image based on the position information and the modeling information of the target object to generate and display the second image.
By way of example, the second image may be understood as a perspective image of the target object, obtained by perspective projection of the object model of the target object onto the first object.
Optionally, in this embodiment of the application, after the first electronic device generates and displays the second image, some resources of the first electronic device may be released, for example memory or Central Processing Unit (CPU) occupancy.
Optionally, in this embodiment of the application, after the first electronic device generates and displays the second image, if the window displaying the second image completely covers the foreground application of the first electronic device, the first electronic device may pause running the foreground application.
For example, the window displaying the second image may be shown at a default size, and the user may adjust the display size of the window.
Optionally, in this embodiment of the application, the first electronic device generating and displaying the second image including the target object may be understood as entering a perspective mode to perspectively display the occluded target object.
Illustratively, the user may make an input to the first electronic device to trigger the first electronic device to exit the perspective mode, i.e., to cancel the display of the second image.
Illustratively, the input may be any one of a voice input, a key input, a gesture input, and the like, which may be determined according to actual use requirements; the embodiments of the present application are not limited thereto.
For example, the user may trigger the first electronic device to exit the perspective mode through a voice input such as "exit the perspective mode" or "cancel relocating the XX item".
For example, the user may press a physical key on a commonly used device associated with the first electronic device, such as a mobile phone, an earphone, or a watch, or perform a specific gesture input, to trigger the first electronic device to exit the perspective mode.
For example, the first electronic device may actively exit the perspective mode when it detects that the target object is no longer occluded by other objects.
Illustratively, the released resources may be reloaded after the first electronic device exits the perspective mode.
For example, after the first electronic device exits the perspective mode, if the foreground application is in the suspended running state, the first electronic device may re-run the foreground application.
The embodiment of the application provides a display method, in which the first electronic device may acquire a first image corresponding to a first viewing angle of a first camera, and acquire position information and modeling information of a target object that is occluded by a first object within the first viewing angle, so as to generate and display a second image based on the position information and modeling information of the target object and the first image, the second image including the target object. Because the first electronic device can acquire both the first image and the position and modeling information of the occluded target object, it can generate and display a second image that includes the target object, thereby helping the user locate the occluded target object within the first viewing angle.
Optionally, in this embodiment of the application, the first camera is a camera of the first electronic device and the target object is a second electronic device; in this case, step 202 may be specifically implemented by the following step 202a.
Step 202a, under the condition that the second electronic device receives the information, the first electronic device acquires the position information and the modeling information of the second electronic device.
Optionally, in this embodiment of the application, the second electronic device may be a mobile electronic device or a non-mobile electronic device.
Optionally, in this embodiment of the present application, when using the first electronic device, a user may trigger the first electronic device to establish a connection (for example, a wired or wireless connection) with the second electronic device, so that the first electronic device can determine when the second electronic device receives information and thereby obtain the location information and modeling information of the second electronic device.
For example, in a case where the first electronic device establishes a connection with the second electronic device, the first electronic device may directly acquire the location information of the second electronic device (i.e., the target object).
Optionally, in this embodiment of the present application, the information may be a voice call, a video call, a notification, and the like.
In the embodiment of the application, when the second electronic device receives the information, the first electronic device can acquire the position information and the modeling information of the second electronic device, so that the second image is generated and displayed based on the position information and the modeling information of the second electronic device and the first image, and thus, a user can be quickly helped to locate the second electronic device to view the received information.
Alternatively, in this embodiment of the application, the step 202 may be specifically implemented by the step 202b described below.
Step 202b, in a case where the acquired user voice information includes the target object, the first electronic device acquires the position information and modeling information of the target object.
Optionally, in this embodiment of the application, when a user needs to locate some occluded objects, that is, target objects, voice input may be performed on the first electronic device to trigger the first electronic device to acquire position information and modeling information of the target objects, so that a second image including the target objects can be generated and displayed based on the acquired position information and modeling information of the target objects and the first image.
For example, when the user needs to locate a target object, such as an "XX" item, the user can trigger the first electronic device to acquire the position information and modeling information of the target object through voice inputs such as "help me find the XX item", "locate the XX item", or "where is the XX item".
In the embodiment of the application, when the user voice information recognized by the first electronic device includes the target object, the first electronic device can acquire the position information and the modeling information of the target object, so that the second image is generated and displayed based on the position information and the modeling information of the target object and the first image, and thus, the user can be quickly helped to locate the target object.
Optionally, in this embodiment of the application, when the user needs to locate the target object, a key input or gesture input may be performed on the first electronic device, which may be determined according to actual use requirements; the embodiments of the present application are not limited thereto.
For example, the user may press a physical key on a commonly used device associated with the first electronic device, such as a mobile phone, an earphone, or a watch, or perform a specific gesture input, to trigger the first electronic device to acquire the position information and modeling information of the target object.
Alternatively, in this embodiment of the present application, step 203 may be specifically implemented by the following step 203a.
Step 203a, in a case where the target object poses a potential safety hazard to the user, the first electronic device generates and displays a second image based on the position information and modeling information of the target object, and the first image.
Optionally, in this embodiment of the application, when it is determined that the target object is outside the first viewing angle of the first camera, that is, the target object is occluded, the first electronic device may determine that the target object poses a potential safety hazard to the user.
For example, in a case where the image acquired by the first electronic device in real time does not include the target object and the target object is within the second viewing angle of the second camera, the first electronic device may determine that the target object is occluded.
It can be understood that when the first electronic device detects, through the second camera, that the target object is within the second viewing angle of the second camera, but the image within the first viewing angle acquired by the first camera does not include the target object, the first electronic device may determine that the target object is outside the first viewing angle of the first camera. That is, the first electronic device is approaching the target object, but the target object is occluded by other objects; the first electronic device may therefore determine that the target object poses a potential safety hazard to the user.
For example, as shown in fig. 2 (A), the current environment includes an object 11 and a tree 12, and the first camera 13 may capture an image 14 within the corresponding first viewing angle; as shown in fig. 2 (B), the image 14 includes the object 11. At this time, if the first electronic device detects through the second camera that the tree 12 is within the second viewing angle corresponding to the second camera, the first electronic device may determine that it is approaching the tree 12 but the tree 12 is occluded by another object, and therefore that the target object (the tree 12) poses a potential safety hazard to the user.
In the embodiment of the application, when the target object is identified as posing a potential safety hazard to the user, the first electronic device can generate and display the second image based on the acquired position information and modeling information of the target object and the first image. This helps the user quickly locate the target object and avoid the safety hazard.
Optionally, in this embodiment of the application, when the target object is a second electronic device that has established a connection with the first electronic device and receives information, the first electronic device may acquire the position information of the second electronic device and use it to detect whether the second electronic device is outside the first viewing angle of the first camera. When the second electronic device is determined to be outside the first viewing angle of the first camera, the first electronic device generates and displays a second image including the second electronic device based on the acquired position information and modeling information of the second electronic device and the first image.
For example, the first electronic device may detect whether the second electronic device is outside the first view angle range of the first camera in combination with the position information of the second electronic device and the image within the first view angle captured by the first camera.
For example, if the image within the first viewing angle acquired by the first camera does not include the second electronic device, while the position information of the second electronic device indicates that it is within a preset range of the first electronic device, the first electronic device may determine that the second electronic device is outside the first viewing angle of the first camera.
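A sketch of such a check is shown below; the field-of-view value, range threshold, and frame convention (camera looking along +z) are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def likely_occluded(p_dev1, seen_in_image, fov_deg=90.0, max_range_m=5.0):
    """Return True when the target is within the first camera's viewing
    angle and preset range but absent from the captured image, i.e. it is
    probably occluded. p_dev1 is the target's position in the first
    device's frame, with the camera looking along +z."""
    x, y, z = p_dev1
    dist = float(np.linalg.norm(p_dev1))
    if z <= 0.0 or dist > max_range_m:
        return False  # behind the camera or beyond the preset range
    # Angle between the optical axis (+z) and the direction to the target
    angle = np.degrees(np.arccos(z / dist))
    return angle <= fov_deg / 2.0 and not seen_in_image
```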
Alternatively, in this embodiment of the present application, step 203 may be specifically implemented by the following steps 203b and 203c.
Step 203b, the first electronic device adds the image of the target object in the first image based on the position information and the modeling information of the target object to generate a third image.
Optionally, in this embodiment of the application, in a case that the first image is a three-dimensional image constructed by the first electronic device based on the two-dimensional image within the first view angle acquired by the first camera, the first electronic device may add the three-dimensional image of the target object in the first image based on the position information and the modeling information of the target object to generate the third image.
It is to be understood that the third image includes a three-dimensional image of the target object, and the third image is a three-dimensional image.
Illustratively, the third image is a three-dimensional image corresponding to the first viewing angle.
For example, after the first electronic device obtains the first image, it may determine the depth information of the target object in the first image according to the position information of the target object; it can then obtain a three-dimensional image of the target object from the modeling information and the depth information of the target object. Finally, the first electronic device may determine the placement direction of the target object in the real physical-world scene from the target object as seen within the second viewing angle of the second camera, and add the three-dimensional image of the target object to the first image according to that direction to obtain the third image.
For example, assume the target object is a car. When the car is not within the first viewing angle of the first camera but is within the second viewing angle of the second camera, the first electronic device may determine that the car is occluded by other objects within the first viewing angle. If, within the second viewing angle of the second camera, the right side of the car is visible, it may be determined that the car is placed in the real physical-world scene with its right side facing the second camera. The first electronic device may therefore add the three-dimensional image of the car to the first image with its right side facing the second camera, that is, according to the car's placement direction in the real physical-world scene, to generate the third image.
It is to be appreciated that after generating the third image, the first electronic device may derive a relative position of the target object in the scene within the first perspective based on the third image.
Step 203c, the first electronic device processes the third image based on the first viewing angle, and generates and displays a second image.
Optionally, in this embodiment of the application, the first electronic device may acquire, by using the first camera, an image within a first view angle, and then process, based on the first view angle, the third image and the image within the first view angle acquired by the first camera, so as to generate and display the second image including the target object.
For example, the first electronic device may perform perspective projection on the third image and the image acquired by the first camera within the first view angle based on the first view angle to generate and display a second image including the target object.
By way of example, the second image may be understood as a perspective image of the target object, which is obtained by perspective projection of an object model of the target object on the first object.
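As an illustrative sketch of that projection step, model points can be mapped to pixel coordinates with a standard pinhole model; the intrinsics matrix K below is a hypothetical example, not a value from the disclosure.

```python
import numpy as np

def project_model_points(points_3d, K):
    """Pinhole projection of N x 3 model points (in the first camera's
    frame, z > 0) onto the image plane: [u, v, 1]^T ~ K [x, y, z]^T."""
    pts = np.asarray(points_3d, float)
    uvw = (K @ pts.T).T              # N x 3 homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:]   # N x 2 pixel coordinates

# Hypothetical intrinsics: 800 px focal length, principal point at (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
pixels = project_model_points([[0.5, 0.2, 3.0]], K)  # -> about (453, 293)
```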
Optionally, in this embodiment of the application, the first electronic device may project the body of the target object in the third image onto the first image based on the first viewing angle, to generate and display the second image including the target object.
For example, the first electronic device may perform image rendering processing on the third image and the first image based on the first perspective to generate and display a second image including the target object.
Illustratively, the first electronic device may synthesize the third image and the first image through the Open Graphics Library (OpenGL) based on the first viewing angle, to generate and display the second image including the target object.
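The disclosure only names OpenGL for this synthesis; as a hedged stand-in, the see-through effect can be approximated by alpha-blending a rendering of the occluded object over the first image. The blend factor and mask are illustrative assumptions.

```python
import numpy as np

def composite_perspective(first_image, object_layer, object_mask, alpha=0.6):
    """Blend a rendered image of the occluded object (object_layer, with a
    boolean H x W object_mask) semi-transparently over the first image so
    the object appears 'through' the occluder. Images are H x W x 3 uint8."""
    base = first_image.astype(float)
    layer = object_layer.astype(float)
    m = object_mask[..., None]  # broadcast the mask over color channels
    blended = np.where(m, alpha * layer + (1.0 - alpha) * base, base)
    return blended.astype(np.uint8)
```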
For example, referring to fig. 2, the first electronic device may construct a first image corresponding to the first viewing angle based on the image 14. It may then add a three-dimensional image of the tree 12 to the first image, based on the position information and modeling information of the tree 12, to generate a third image, from which the relative position of the tree 12 in the scene within the first viewing angle can be obtained. The first electronic device may then perform perspective projection on the third image and the first image based on the first viewing angle to obtain a perspective image 15 of the tree 12 within the first viewing angle, that is, the second image. As shown in fig. 3, the tree 12 is displayed in perspective on the object 11, making it easy for the user to quickly locate the tree 12.
Fig. 4 shows a schematic diagram of a possible structure of the display device according to the embodiment of the present application. As shown in fig. 4, the display device 70 may include: an acquisition module 71 and a processing module 72.
The acquisition module 71 is configured to acquire a first image corresponding to a first viewing angle of a first camera, and to acquire position information and modeling information of a target object, wherein the target object is occluded by a first object within the first viewing angle and the modeling information is used to establish an object model of the target object. The processing module 72 is configured to generate and display a second image including the target object based on the position information and modeling information of the target object and the first image.
The embodiment of the application provides a display device. Because the display device can acquire a first image corresponding to a first viewing angle of a first camera, together with the position information and modeling information of a target object occluded by a first object within the first viewing angle, it can generate and display a second image including the target object based on that information and the first image. In this way, the display device can help the user locate the occluded target object within the first viewing angle.
In a possible implementation manner, the first camera is a camera of the first electronic device and the target object is a second electronic device, and the obtaining module 71 is specifically configured to obtain the position information and modeling information of the second electronic device in a case where the second electronic device receives information.
In a possible implementation manner, the obtaining module 71 is specifically configured to obtain the position information and the modeling information of the target object when the obtained user voice information includes the target object.
In a possible implementation manner, the processing module 72 is specifically configured to generate and display the second image based on the position information and modeling information of the target object and the first image, in a case where the target object is identified as posing a potential safety hazard to the user.
In one possible implementation, the processing module 72 is specifically configured to add an image of the target object in the first image to generate a third image based on the position information and the modeling information of the target object; and processing the third image based on the first viewing angle to generate and display the second image.
The display device in the embodiment of the present application may be the first electronic device, or may be a component in the first electronic device, such as an integrated circuit or a chip. The first electronic device may be a terminal, or may be a device other than a terminal. By way of example, the first electronic Device may be a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic Device, a Mobile Internet Device (MID), an AR/VR Device, a robot, a wearable Device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a kiosk, etc., and the embodiments of the present application are not limited in particular.
The display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The display device provided in the embodiment of the present application can implement each process implemented by the method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 5, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901 and a memory 902, where the memory 902 stores a program or an instruction that can be executed on the processor 901, and when the program or the instruction is executed by the processor 901, the steps of the foregoing method embodiment are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not described here again.
It should be noted that the first electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic diagram of a hardware structure of a first electronic device for implementing an embodiment of the present application.
The first electronic device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the first electronic device 100 may further include a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The first electronic device structure shown in fig. 6 does not constitute a limitation of the first electronic device; the first electronic device may include more or fewer components than shown, combine some components, or arrange the components differently, which is not described here again.
The processor 110 is configured to obtain a first image corresponding to a first viewing angle of the first camera; acquiring position information and modeling information of a target object, wherein the target object is shielded by a first object in a first visual angle, and the modeling information is used for establishing an object model of the target object; and generating and displaying a second image including the target object based on the position information and the modeling information of the target object and the first image.
The embodiment of the application provides a first electronic device. Because the first electronic device can acquire a first image corresponding to a first viewing angle of a first camera, and the position information and modeling information of a target object occluded by a first object within the first viewing angle, it can generate and display a second image including the target object based on the acquired position information, modeling information, and first image, thereby helping the user locate the occluded target object within the first viewing angle.
Optionally, the first camera is a camera of the first electronic device and the target object is a second electronic device, and the processor 110 is specifically configured to obtain the position information and modeling information of the second electronic device when the second electronic device receives information.
Optionally, the processor 110 is specifically configured to, when the obtained user voice information includes a target object, obtain position information and modeling information of the target object.
Optionally, the processor 110 is specifically configured to generate and display the second image based on the position information and modeling information of the target object and the first image, when the target object is identified as posing a potential safety hazard to the user.
Optionally, the processor 110 is specifically configured to add an image of the target object in the first image based on the position information and the modeling information of the target object to generate a third image; and processing the third image based on the first viewing angle to generate and display the second image.
The first electronic device provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and can achieve the same technical effect, and for avoiding repetition, details are not repeated here.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 109 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous Dynamic RAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), or Direct Rambus RAM (DRRAM). The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 110 may include one or more processing units. Optionally, the processor 110 integrates an application processor, which mainly handles operations related to the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication signals, such as a baseband processor.
It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium; when the program or instruction is executed by a processor, the processes of the foregoing method embodiments are implemented with the same technical effects, which are not repeated here to avoid repetition.
The processor is the processor in the first electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A display method, the method comprising:
acquiring a first image corresponding to a first viewing angle of a first camera;
acquiring position information and modeling information of a target object, wherein the target object is occluded by a first object within the first viewing angle, and the modeling information is used to establish an object model of the target object;
generating and displaying a second image including the target object based on the position information and modeling information of the target object and the first image.
2. The method of claim 1, wherein the first camera is a camera of a first electronic device, the target object is a second electronic device, and the obtaining the position information and the modeling information of the target object comprises:
in a case where the second electronic device receives information, acquiring, by the first electronic device, the position information and modeling information of the second electronic device.
3. The method of claim 1, wherein the obtaining of the position information and the modeling information of the target object comprises:
in a case where the acquired user voice information includes the target object, acquiring the position information and modeling information of the target object.
4. The method of claim 1, wherein generating and displaying a second image based on the position information and modeling information of the target object and the first image comprises:
in a case where the target object poses a potential safety hazard to the user, generating and displaying a second image based on the position information and modeling information of the target object and the first image.
5. The method of any of claims 1 to 4, wherein generating and displaying a second image based on the position information and modeling information of the target object and the first image comprises:
adding an image of the target object in the first image based on the position information and modeling information of the target object to generate a third image;
and processing the third image based on the first viewing angle, and generating and displaying a second image.
6. A display device, characterized in that the display device comprises: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring a first image corresponding to a first viewing angle of the first camera, and acquiring position information and modeling information of a target object, wherein the target object is occluded by a first object within the first viewing angle, and the modeling information is used to establish an object model of the target object;
the processing module is used for generating and displaying a second image based on the position information and the modeling information of the target object and the first image, wherein the second image comprises the target object.
7. The apparatus according to claim 6, wherein the first camera is a camera of a first electronic device and the target object is a second electronic device, and the obtaining module is specifically configured to obtain the position information and modeling information of the second electronic device in a case where the second electronic device receives information.
8. The apparatus according to claim 6, wherein the obtaining module is specifically configured to obtain the position information and the modeling information of the target object when the obtained user voice information includes the target object.
9. The apparatus according to claim 6, wherein the processing module is specifically configured to generate and display a second image based on the position information and the modeling information of the target object and the first image when it is identified that the target object has a safety risk to a user.
10. The apparatus according to any of claims 6 to 9, wherein the processing module is specifically configured to add an image of the target object in the first image based on the position information and modeling information of the target object to generate a third image; and process the third image based on the first viewing angle to generate and display a second image.
11. A first electronic device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the display method according to any one of claims 1-5.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination