CN111736692A - Display method, display device, storage medium and head-mounted device - Google Patents

Display method, display device, storage medium and head-mounted device Download PDF

Info

Publication number
CN111736692A
Authority
CN
China
Prior art keywords
image
unit
scene
projection
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010485463.7A
Other languages
Chinese (zh)
Other versions
CN111736692B (en)
Inventor
杜鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010485463.7A priority Critical patent/CN111736692B/en
Publication of CN111736692A publication Critical patent/CN111736692A/en
Application granted granted Critical
Publication of CN111736692B publication Critical patent/CN111736692B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The disclosure provides a display method, a display device, a storage medium and a head-mounted device, and relates to the technical field of virtual reality and augmented reality. The display method is applied to a head-mounted device that includes a display unit, a camera unit and a projection unit; the method comprises the following steps: controlling the projection unit to project a first image onto a real scene; acquiring a scene image captured by the camera unit; determining a second image according to the scene image; and displaying the second image on the display unit. With the method and the device, the displayed content seen by the user is better fused and matched with the real scene, the user's immersive viewing experience is improved, and content sharing is achieved.

Description

Display method, display device, storage medium and head-mounted device
Technical Field
The present disclosure relates to the field of virtual reality and augmented reality technologies, and in particular, to a display method, a display device, a storage medium, and a head-mounted device.
Background
AR (Augmented Reality) technology displays information of the real world and information of a virtual world in a fused manner, so that, after being superimposed on each other, the real world and the virtual world can exist on the same screen and in the same space.
In the related art, the FOV (Field of View) provided by the head-mounted devices used for AR is limited: the FOV of a head-mounted device is generally between 40 and 50 degrees, whereas the FOV of the human eye is usually above 140 degrees. The display content of the AR device is therefore not wide enough, so the real world and the virtual world cannot be well matched, the user's sense of immersion during use is low, and the user experience is affected.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a display method, a display device, a storage medium and a head-mounted device, thereby alleviating, at least to some extent, the problem of low immersion caused by the insufficient FOV of head-mounted devices in the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a display method applied to a head mounted device including a display unit, an image pickup unit, and a projection unit; the method comprises the following steps: controlling the projection unit to project the first image to a real scene; acquiring a scene image acquired by the camera unit; determining a second image according to the scene image; and displaying the second image in the display unit.
According to a second aspect of the present disclosure, there is provided a display apparatus applied to a head mounted device including a display unit, an image pickup unit, and a projection unit; the device comprises: the projection control module is used for controlling the projection unit to project the first image to a real scene; the scene image acquisition module is used for acquiring a scene image acquired by the camera shooting unit; the second image determining module is used for determining a second image according to the scene image; and the second image display module displays the second image in the display unit.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the display method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided a head mounted device comprising: a processing unit; a storage unit for storing executable instructions of the processing unit; a display unit; an image pickup unit; and a projection unit; wherein the processing unit is configured to execute the display method of the first aspect and possible implementations thereof via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
On one hand, by projecting the first image into the real scene and displaying the second image on the device, the display content seen by the user can be better fused and matched with the real scene, thereby breaking through the limitation of the device's FOV and improving the user's viewing immersion. On the other hand, after the first image is projected into the real scene, content sharing can be achieved without another device, which broadens the usage scenarios of the head-mounted device and increases its practicality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a schematic view of a field of view of a human eye;
FIG. 2 illustrates a block diagram of augmented reality glasses in the present exemplary embodiment;
FIG. 3 illustrates a flow chart of a display method in the present exemplary embodiment;
FIG. 4 shows a schematic diagram of capturing an image of a scene in this exemplary embodiment;
FIG. 5 is a diagram illustrating a method of determining a second image according to one exemplary embodiment;
fig. 6 shows a screen actually viewed by a user in the present exemplary embodiment;
fig. 7 shows a structural diagram of a display device in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The human eye is not a simple static optical system; its visual field is built up by eye scanning, and by nature horizontal scanning movements are more frequent and easier. In a normal state, the horizontal width covered by a single scan of the human eye is 120 degrees, and the limit is close to 180 degrees. Fig. 1 shows the various visual fields of the human eye. Generally, only the central part of the image mapped onto the retina can be clearly distinguished: the region of about 10-20 degrees is the character recognition area; between 10 and 30 degrees is the letter recognition area, where the human eye can notice the existence and motion of an object at a glance and distinguish it without rotating the eyeball or head, although the resolving power is lower than in the character recognition area; between 30 and 50 degrees is the color discrimination area, where the resolving power is further reduced and the eyeball or head must be rotated to see an object or action clearly; the binocular visual field reaches 62 degrees, and the visual limit of a single eye reaches 94-104 degrees. Although the human eye cannot resolve images over such a large range, it still produces a certain perception; for example, when an object with a particularly strong color contrast appears at a position of 60 degrees, the human eye perceives contrast and incongruity.
For current head-mounted devices, due to the limitation of the optical system, the FOV of the optical display module is generally about 40 degrees; that is, a virtual picture can be seen through the optical display module only within a display area of 40 degrees. This FOV can cover the area that the human eye distinguishes clearly, but cannot cover the larger peripheral-vision area of the human eye. Therefore, when the user wears the device to view content, a strong contrast easily forms in the area not covered by the FOV of the optical display module, especially at the junction of the covered and uncovered areas, which reduces the user's sense of immersion.
In addition, in the related art, the display scheme of a head-mounted device is a near-eye display scheme, i.e., the display content of the glasses can only be seen when the eyes are close to the optical display module, so only the user can see the displayed content and it cannot be shared with others. Based on existing head-mounted devices, one approach is to send the display content to another device for sharing, for example by wireless communication, but the process is cumbersome, sending is inconvenient when the file is large, and private content is easily leaked; another approach is to cast the screen to another display device, but that device must support screen casting and the two devices must be configured to work with each other, which is also difficult to implement. It can be seen that there is currently no effective solution for sharing the display content of a head-mounted device.
In view of one or more of the above problems, the exemplary embodiments of the present disclosure first provide a head-mounted device; the unit configuration inside the head-mounted device is described below by taking the augmented reality glasses 200 in fig. 2 as an example. Those skilled in the art will appreciate that, in practice, the head-mounted device may include more or fewer components than shown, some components may be combined or separated, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is shown schematically and does not constitute a structural limitation of the head-mounted device. In other embodiments, the head-mounted device may also adopt an interfacing arrangement different from that in fig. 2, or a combination of multiple interfacing arrangements.
As shown in fig. 2, the augmented reality glasses 200 may include a storage unit 210, a processing unit 220, a display unit 230, a camera unit 240, and a projection unit 250, and optionally, the augmented reality glasses 200 may further include an audio unit 260, a communication unit 270, and a sensor unit 280.
The storage unit 210 is used for storing executable instructions, and may include an operating system code, a program code, and data generated during the running of the program, such as user data in the program. The storage unit 210 may be disposed in the mirror body between the two lenses, or disposed at other positions. The Storage unit 210 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like.
The Processing Unit 220 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Processor (AP), a modem Processor, an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor and/or a Neural Network Processor (NPU), and the like. The different processors may be implemented as separate units or may be integrated in one processing unit. The processing unit 220 may be disposed in the lens body between the two lenses, or disposed at other positions. Processing unit 220 may execute executable instructions on storage unit 210 to execute corresponding program commands.
The display unit 230 is used to display images, videos, and the like, and is generally provided in the form of a lens, or a certain display area is provided on the lens. The present exemplary embodiment uses an optical see-through display mode: the user can see the real scene through the lens, and the processing unit 220 transmits the virtual image to the display unit 230 for display, so that the user sees a superimposed effect of real and virtual images. The display unit 230 thus has a "See-Through" function, through which both the real external world and virtual information can be seen, realizing the fusion and "enhancement" of reality and virtuality. In an alternative embodiment, as shown in fig. 2, the display unit 230 may include a miniature display screen (Display) 2301 and a lens (Lens) 2302. The micro display 2301 is used to provide display content, and may be a self-luminous active device such as a light-emitting diode panel, or a liquid crystal display illuminated by an external light source, or the like; the lens 2302 allows human eyes to see the real scene, thereby superimposing the real scene image and the virtual image.
The image capturing unit 240 is composed of a lens, a photosensitive element, and the like, and may be located between the two lenses, or on the left and right sides of the lenses, with the camera lens generally facing directly ahead. When the user wears the augmented reality glasses 200, the camera unit 240 can capture still images or video of the area in front, for example an image of the scene in front of the user; if the user makes a gesture in front of himself, the camera unit 240 can capture a gesture image of the user. Further, as shown in fig. 2, the camera unit 240 may include a depth camera 2401, for example a TOF (Time of Flight) camera, a binocular camera, or the like, which can detect depth information (i.e., the axial distance from the augmented reality glasses 200) of each part or each object in the scene image, so as to obtain richer image information; for example, after a gesture image is captured, accurate gesture recognition can be performed according to the depth information of the gesture. In one embodiment, one camera may be disposed on each of the left and right sides of the lenses to simulate the binocular configuration of the human eye, so that the captured images are closer to what the human eye views.
The projection unit 250 is used to project images onto an area outside the head-mounted device 200, and may be located between the two lenses or on the left and right sides of the lenses, generally facing forward so as to project images into the front area. The projection unit 250 may adopt a micro-projector based on a DMD (Digital Micromirror Device), LCOS (Liquid Crystal on Silicon) or LBS (Laser Beam Scanning) scheme. The processing unit 220 transmits an image to the projection unit 250, and the projection unit 250 projects the image onto a screen, a wall or the like in front according to the corresponding projection parameters.
The audio unit 260 is used for converting a digital audio signal into an analog audio signal for output, converting an analog audio input into a digital audio signal, and encoding and decoding the audio signal. In some embodiments, the audio unit 260 may be disposed in the processing unit 220, or some functional modules of the audio unit 260 may be disposed in the processing unit 220. As shown in fig. 2, audio unit 260 may generally include a microphone 2601 and an earphone 2602. The microphone 2601 may be disposed at the bottom of one or both side temples of the augmented reality glasses 200 near the user's mouth, and the earphone 2602 may be disposed at the middle rear end of one or both side temples of the augmented reality glasses 200 near the user's ears. In addition, the audio unit 260 may also include a speaker, a power amplifier, and other components to achieve audio output.
The Communication unit 270 may provide solutions for Wireless Communication including a Wireless Local Area Network (WLAN) (e.g., a Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), a Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like, so that the augmented reality glasses 200 are connected to the internet or form a connection with other devices.
The sensor unit 280 is composed of different types of sensors for implementing different functions. For example, the sensor unit 280 may include at least one touch sensor 2801 disposed on the outer side of one temple to form a touch sensing area that is convenient for the user to touch, implementing a function similar to the touch screen of a mobile phone, so that the user can perform interactive control through touch operations in the touch sensing area.
In addition, the sensor unit 280 may further include other sensors, such as a pressure sensor 2802 for detecting the strength of the pressing operation of the user, a fingerprint sensor 2803 for detecting fingerprint data of the user, and the like.
In an optional embodiment, the augmented reality glasses 200 may further include a USB (Universal Serial Bus) interface 290, which conforms to the USB standard specification and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 290 may be used to connect a charger to charge the augmented reality glasses 200, to connect an earphone to play audio through the earphone, or to connect other electronic devices such as a computer or a peripheral device. The USB interface 290 may be disposed on the bottom of one or both temples of the augmented reality glasses 200, or at another suitable location.
In an alternative embodiment, the augmented reality glasses 200 may further include a charging management unit 2901 for receiving charging input from a charger to charge the battery 2902. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management unit 2901 may receive charging input of a wired charger through the USB interface 290. In some wireless charging embodiments, the charging management unit 2901 may receive wireless charging input through a wireless charging coil of the augmented reality glasses 200. The charging management unit 2901 may also supply power to the device while charging the battery 2902.
Exemplary embodiments of the present disclosure provide a display method, which may be applied to the head-mounted device, such as the augmented reality glasses 200 in fig. 2.
The display method will be described in detail with reference to fig. 3. As shown in fig. 3, the method may include the following steps S310 to S340:
in step S310, the projection unit is controlled to project the first image to a real scene.
The real scene refers to the real world environment where the head-mounted device is currently located, such as a real living room, a conference room, a classroom and the like.
The first image is content or a part of the content that needs to be displayed by the head-mounted device, for example, an image to be displayed is taken as the first image, or a part of the image to be displayed is divided into the first image.
In an alternative embodiment, before performing step S310, the image to be displayed may be divided into a first portion and a second portion, and the first portion is used as the first image.
The image to be displayed may be an image selected by the user; for example, when pictures in a local album of the head-mounted device are played as a slide show, each picture is used in turn as the image to be displayed. The image to be displayed may also be a frame of a video; for example, when a video is viewed through the head-mounted device, each frame of the video may be used in turn as the image to be displayed. In an alternative embodiment, the image to be displayed may also be a virtual image generated by the head-mounted device, for example a virtual image matching the real scene. For example, an image of the real scene may be captured by the camera unit, and the processing unit may analyze the real-scene image, identify targets therein, such as a plane, a table or a specific object, and generate a virtual image at the corresponding position of the target according to the setting of the AR program, such as generating a virtual projection on the plane, a virtual character standing on the table, or a virtual frame around the specific real object. The process of generating the virtual image may also be implemented by a machine learning model: the head-mounted device may run a machine learning model trained on large amounts of real-world images and virtual images to learn the relationship between real-world features and virtual images; in application, the captured real-scene image is input into the model, which generates a suitable virtual image by recognizing the targets and features in the real-scene image. The generated virtual image serves as the image to be displayed.
The first part and the second part may be the background part and the foreground part of the image to be displayed, respectively; for example, target detection is performed on the image to be displayed, the detected part is extracted as the foreground part, the remaining part is the background part, and the background part serves as the first image. The first part and the second part may also be two fixed-position parts of the image to be displayed; for example, a fixed-size part in the middle area of the image to be displayed is divided off as the second part, and the remaining part is the first part (as illustrated in the sketch below). The first part and the second part may also be different layers of the image to be displayed; for example, a specific texture layer of the image to be displayed is separated as the first part, and the remaining layers form the second part.
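As a concrete illustration of the fixed-position variant above, the following Python sketch splits an image to be displayed into a first part (to be projected) and a second part (to be shown on the display unit). The fixed-size central region, the center_frac parameter and the function name are assumptions chosen for illustration, not details of the disclosure.

```python
import numpy as np

def split_image_to_display(image: np.ndarray, center_frac: float = 0.4):
    """Split an image to be displayed into a first part (projected into the
    real scene) and a second part (shown on the near-eye display).

    Sketch of the fixed-position variant: a fixed-size central region becomes
    the second part, and the frame with that region blanked out becomes the
    first part, so the projected and displayed contents do not overlap.
    """
    h, w = image.shape[:2]
    ch, cw = int(h * center_frac), int(w * center_frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2

    # Second part: the fixed-size central region of the image to be displayed.
    second_part = image[y0:y0 + ch, x0:x0 + cw].copy()

    # First part: the full frame with the central region blanked out.
    first_part = image.copy()
    first_part[y0:y0 + ch, x0:x0 + cw] = 0
    return first_part, second_part
```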
In step S320, a scene image captured by the camera unit is acquired.
Fig. 4 shows a schematic diagram of capturing a scene image. When the user wears the head-mounted device, the camera unit can capture the real scene in front of the user in real time, form a video stream, and transmit it to the processing unit. In the present exemplary embodiment, after the first image is projected onto the front area of the real scene, the captured scene image may also contain the content of the first image.
In an optional embodiment, the camera unit may include a depth camera, and the acquired scene image may carry depth information.
In an alternative embodiment, before performing step S330, the projection parameters may be adjusted according to the scene image; then, step S310 is executed again, i.e. the projection unit is controlled to re-project the first image by using the adjusted projection parameters, and step S320 is executed again, i.e. the image of the scene captured by the camera unit is acquired again.
The adjusting of the projection parameters according to the scene image may include any one or more of the following:
(1) When the first image in the scene image is detected to be blurred, adjust the projection focal length. Specifically, the portion corresponding to the first image may be extracted from the scene image and its image gradient calculated, for example by a Tenengrad, Brenner or Laplacian gradient function; if the gradient is lower than a certain value, the first image is determined to be blurred. When adjusting the projection focal length, the focal length may first be adjusted in one direction, for example increased, the scene image captured again, and the image gradient of the first-image portion recalculated; if the gradient increases, the adjustment direction is generally correct, and if it decreases, the direction should generally be reversed. By repeating this adjustment, the best projection focal length (generally matching the projection distance) can be found, making the first image as sharp as possible (see the sketch below).
(2) When the first image in the scene image is detected to be distorted, adjust the projection angle. In the case where the projection unit itself has already been distortion-corrected (e.g., keystone-corrected), if the first image in the scene image is still distorted, the projection area may not lie on a flat plane; by adjusting the projection angle, the projection area can be moved onto a flat (or nearly flat) plane.
(3) Adjust the projection size according to the proportion of the first image in the scene image. Generally, if the proportion of the first image in the scene image is too small, the projection size is increased so that the first image occupies a larger area. In an alternative embodiment, the first image may exactly cover the whole scene image; specifically, the proportion of the first image in the scene image is brought to, or close to, 100% by increasing the projection size.
In practical applications, the above modes can be combined arbitrarily, or other modes can be adopted to adjust the projection parameters. The present disclosure is not limited thereto.
By adjusting the projection parameters, the first image and the scene image can be optimized to achieve the best display effect.
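The following sketch illustrates mode (1) above, using the Laplacian gradient function mentioned in the text (via OpenCV) as the sharpness measure and a simple one-step adjustment of the projection focal length. The callbacks capture_scene, extract_first_image and set_focus, as well as the threshold and step values, are hypothetical stand-ins for the camera unit, the first-image detector and the projector driver.

```python
import cv2
import numpy as np

def laplacian_sharpness(region: np.ndarray) -> float:
    """Image gradient via the Laplacian gradient function; higher means sharper."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def adjust_focus_once(capture_scene, extract_first_image, set_focus,
                      focus: float, step: float = 0.05,
                      blur_threshold: float = 50.0) -> float:
    """One iteration of mode (1): if the projected first image looks blurred,
    nudge the projection focal length in one direction and reverse the
    adjustment if the gradient drops."""
    before = laplacian_sharpness(extract_first_image(capture_scene()))
    if before >= blur_threshold:
        return focus  # the first image is already considered sharp enough

    set_focus(focus + step)
    after = laplacian_sharpness(extract_first_image(capture_scene()))
    if after < before:
        # The gradient decreased, so the adjustment direction was wrong: reverse it.
        set_focus(focus - step)
        return focus - step
    return focus + step
```

Repeating this step until the gradient stops improving approximates the search for the best projection focal length described in mode (1).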
In step S330, a second image is determined according to the scene image.
The second image is the display content having a complementary relationship or a matching relationship with the first image. When determining the second image, the second image needs to be matched with the first image in the scene image.
In an alternative embodiment, as shown with reference to fig. 5, step S330 may include the following steps S510 and S520:
step S510, intercepting a sub-image at a preset position in the scene image according to a preset size;
in step S520, a second image is determined according to the sub-images.
The preset position and the preset size can be determined according to the position and the size of the display area of the display unit. The display area may be a display area provided by the microdisplay 2301 in fig. 2 for displaying the display content of the head mounted device, typically a digital image frame processed by the processing unit.
In the present exemplary embodiment, the scene image simulates the external image seen by human eyes. Because the positions of the camera and the lens do not completely coincide, there may be a difference between the captured scene image and the external image actually seen by the human eye; this difference can be eliminated by fine-tuning the scene image. On the premise that the scene image is basically consistent with the external image seen by the human eye, the position and size of the display area as mapped onto the retina can be calculated from the projection relationship among the human eye, the lens and the projection plane (for example, using the empirical distance between the human eye and the lens together with the projection focal length), thereby obtaining the preset position and the preset size; this part is also the portion of the view occluded by the display area. It should be noted that, if the display unit of the head-mounted device has two display areas, for example one display area on each of the left and right lenses, two sets of preset positions and preset sizes may be obtained correspondingly, and two sub-images are cropped accordingly.
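As a rough sketch of one way the preset position and size might be derived, assume the scene image is already aligned with the eye's view and that both the camera and the display area can be described by angular fields of view; the crop rectangle then follows from the ratio of the two FOVs. The function name, parameters and the centered placement are illustrative assumptions, not the disclosure's exact projection calculation.

```python
import math

def preset_crop_rect(scene_w: int, scene_h: int,
                     camera_hfov_deg: float, camera_vfov_deg: float,
                     display_hfov_deg: float, display_vfov_deg: float,
                     center=(0.5, 0.5)):
    """Estimate the preset position (top-left corner) and preset size of the
    sub-image from the angular extent of the display area relative to the
    camera's field of view (pinhole-style mapping, centered by default)."""
    w = int(scene_w * math.tan(math.radians(display_hfov_deg / 2))
            / math.tan(math.radians(camera_hfov_deg / 2)))
    h = int(scene_h * math.tan(math.radians(display_vfov_deg / 2))
            / math.tan(math.radians(camera_vfov_deg / 2)))
    x = int(scene_w * center[0] - w / 2)
    y = int(scene_h * center[1] - h / 2)
    return (x, y), (w, h)
```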
After the sub-image is captured in the scene image, the second image may be determined according to the sub-image, for example, a background in the sub-image is used to generate a matched virtual character or virtual object as the second image.
In one embodiment, the sub-image may be directly used as the second image; the scene seen by the user, superimposed with this second image, then restores the original scene image and has a certain stereoscopic effect.
In another embodiment, a text, an icon, a frame, or other virtual effects (such as a virtual cartoon image) may be added to the sub-image to obtain the second image.
In another embodiment, if the first image is derived from the first part of the image to be displayed, the second part of the image to be displayed may be adjusted according to the sub-image to obtain the second image. Specifically, the size of the second part may be adaptively adjusted according to the size of the sub-image, or the area of the image to be displayed corresponding to the sub-image may be determined and the content of the second part within that area retained as the second image, and so on.
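The sketch below puts steps S510 and S520 together: it crops the sub-image at the preset position and size, then either uses it directly or blends a virtual overlay (text, icon, frame or cartoon image rendered as an RGBA layer) onto it to obtain the second image. The RGBA overlay convention and the function name are assumptions made for illustration.

```python
import numpy as np

def determine_second_image(scene_image: np.ndarray,
                           preset_pos: tuple, preset_size: tuple,
                           overlay: np.ndarray = None) -> np.ndarray:
    """Crop the sub-image at the preset position/size (S510) and determine the
    second image from it (S520), optionally blending in a virtual overlay."""
    x, y = preset_pos
    w, h = preset_size
    sub_image = scene_image[y:y + h, x:x + w].copy()

    if overlay is not None:
        # overlay: RGBA image of the same size as the sub-image; alpha-blend
        # the virtual content onto the cropped scene background.
        alpha = overlay[..., 3:4].astype(np.float32) / 255.0
        sub_image = (overlay[..., :3] * alpha
                     + sub_image * (1.0 - alpha)).astype(np.uint8)
    return sub_image
```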
In step S340, the second image is displayed on the display unit.
Fig. 6 shows the picture actually viewed by the user. Through the lens, the user views the real scene onto which the first image has been projected, i.e., the first image superimposed on the real scene, and through the display unit the user views the second image. Because the second image is matched to the scene image, it is effectively "embedded" into the first image and the real scene; from the display unit to the area outside it, the image content is continuous and the style is consistent, causing no contrast, abruptness or incongruity for the user and thus bringing a better viewing experience.
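Tying the steps together, the following loop sketches how S310 to S340 could be sequenced per frame, reusing the determine_second_image helper sketched above. The device interface (device.project, device.capture, device.display_crop_rect, device.show) is hypothetical shorthand for the projection unit, camera unit and display unit.

```python
def display_loop(device, image_source):
    """Per-frame sequencing of steps S310-S340 (illustrative only)."""
    for first_image in image_source:
        device.project(first_image)             # S310: project the first image into the real scene
        scene_image = device.capture()          # S320: acquire the scene image from the camera unit
        pos, size = device.display_crop_rect()  # preset position and size of the display area
        second_image = determine_second_image(scene_image, pos, size)  # S330: determine the second image
        device.show(second_image)               # S340: display the second image on the display unit
```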
In summary, in the present exemplary embodiment, on one hand, by projecting the first image into the real scene and displaying the second image on the device, the display content seen by the user can be better fused and matched with the real scene, thereby breaking through the limitation of the device's FOV and improving the user's immersive viewing experience. On the other hand, after the first image is projected into the real scene, content sharing can be achieved without another device, which broadens the usage scenarios of the head-mounted device and increases its practicality.
Exemplary embodiments of the present disclosure also provide a display apparatus that may be applied to a head-mounted device, such as the augmented reality glasses 200 shown in fig. 2. As shown in fig. 7, the display device 700 may include:
a projection control module 710 for controlling the projection unit to project the first image to a real scene;
a scene image acquiring module 720, configured to acquire a scene image acquired by the camera unit;
a second image determining module 730, configured to determine a second image according to the scene image;
the second image display module 740 displays a second image on the display unit.
In an alternative embodiment, the second image determining module 730 is configured to:
intercepting a subimage at a preset position in a scene image according to a preset size;
and determining a second image according to the sub-images.
In an alternative embodiment, the display device 700 may further include:
the first image determining module is used for dividing the image to be displayed into a first part and a second part, and taking the first part as a first image.
The second image determining module 730 is further configured to adjust the second portion according to the sub-image, and use the adjusted second portion as a second image.
In an optional implementation manner, the first image determining module is further configured to use a virtual image matched with the real scene as an image to be displayed.
In an optional implementation manner, the second image determining module 730 is further configured to determine a preset position and a preset size according to the position and the size of the display area of the display unit.
In an alternative embodiment, the projection control module 710 is further configured to:
adjusting projection parameters according to the scene image;
and controlling the projection unit to project the first image again by adopting the adjusted projection parameters.
The scene image obtaining module 720 is further configured to obtain the scene image captured by the camera unit again.
Further, the projection control module 710 may adjust the projection parameters in any one or more of the following manners:
when the first image in the scene image is detected to be blurred, adjusting the projection focal length;
when the first image in the scene image is detected to be distorted, adjusting the projection angle;
and adjusting the projection size according to the proportion of the first image in the scene image.
In addition, the specific details of each part in the above device have been described in detail in the method part embodiment, and the details that are not disclosed may refer to the method part embodiment, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device. The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A display method, applied to a head-mounted device, the head-mounted device comprising a display unit, a camera unit and a projection unit, the method comprising the following steps:
controlling the projection unit to project a first image onto a real scene;
acquiring a scene image acquired by the camera unit;
determining a second image according to the scene image;
and displaying the second image in the display unit.
2. The method of claim 1, wherein determining a second image according to the scene image comprises:
intercepting a subimage at a preset position in the scene image according to a preset size;
and determining the second image according to the sub-image.
3. The method of claim 2, wherein before controlling the projection unit to project the first image onto the real scene, the method further comprises:
dividing an image to be displayed into a first part and a second part, and taking the first part as the first image;
the determining the second image according to the sub-image comprises:
adjusting the second part according to the sub-image, and taking the adjusted second part as the second image.
4. The method according to claim 3, characterized in that the image to be displayed is obtained by:
taking a virtual image matching the real scene as the image to be displayed.
5. The method of claim 2, further comprising:
determining the preset position and the preset size according to the position and the size of the display area of the display unit.
6. The method of any of claims 1 to 5, wherein prior to determining a second image according to the scene image, the method further comprises:
adjusting projection parameters according to the scene image;
and controlling the projection unit to re-project the first image by adopting the adjusted projection parameters, and acquiring the scene image acquired by the camera unit again.
7. The method of claim 6, wherein the adjusting projection parameters according to the scene image comprises any one or more of:
when the first image in the scene image is detected to be blurred, adjusting a projection focal length;
when the first image in the scene image is detected to be distorted, adjusting a projection angle;
and adjusting the projection size according to the proportion of the first image in the scene image.
8. A display device, applied to a head-mounted device, the head-mounted device comprising a display unit, a camera unit and a projection unit, the device comprising:
the projection control module is used for controlling the projection unit to project the first image to a real scene;
the scene image acquisition module is used for acquiring a scene image acquired by the camera shooting unit;
the second image determining module is used for determining a second image according to the scene image;
and a second image display module, configured to display the second image on the display unit.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
10. A head-mounted device, comprising:
a processing unit;
a storage unit for storing executable instructions of the processing unit;
a display unit;
an image pickup unit; and
a projection unit;
wherein the processing unit is configured to perform the method of any of claims 1 to 7 via execution of the executable instructions.
CN202010485463.7A 2020-06-01 2020-06-01 Display method, display device, storage medium and head-mounted device Active CN111736692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010485463.7A CN111736692B (en) 2020-06-01 2020-06-01 Display method, display device, storage medium and head-mounted device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010485463.7A CN111736692B (en) 2020-06-01 2020-06-01 Display method, display device, storage medium and head-mounted device

Publications (2)

Publication Number Publication Date
CN111736692A (en) 2020-10-02
CN111736692B (en) 2023-01-31

Family

ID=72646645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010485463.7A Active CN111736692B (en) 2020-06-01 2020-06-01 Display method, display device, storage medium and head-mounted device

Country Status (1)

Country Link
CN (1) CN111736692B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510203A (en) * 2020-11-16 2022-05-17 荣耀终端有限公司 Electronic device, inter-device screen cooperation method and medium thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107250891A (en) * 2015-02-13 2017-10-13 Otoy公司 Being in communication with each other between head mounted display and real-world objects
CN108427194A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of display methods and equipment based on augmented reality
CN110244840A (en) * 2019-05-24 2019-09-17 华为技术有限公司 Image processing method, relevant device and computer storage medium
CN110286906A (en) * 2019-06-25 2019-09-27 网易(杭州)网络有限公司 Method for displaying user interface, device, storage medium and mobile terminal
CN111142673A (en) * 2019-12-31 2020-05-12 维沃移动通信有限公司 Scene switching method and head-mounted electronic equipment

Also Published As

Publication number Publication date
CN111736692B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
CN111415422B (en) Virtual object adjustment method and device, storage medium and augmented reality equipment
JP7316360B2 (en) Systems and methods for augmented reality
CN107924584B (en) Augmented reality
JP7408678B2 (en) Image processing method and head mounted display device
CN108292489B (en) Information processing apparatus and image generating method
US10943409B2 (en) Information processing apparatus, information processing method, and program for correcting display information drawn in a plurality of buffers
US8576276B2 (en) Head-mounted display device which provides surround video
US20130321390A1 (en) Augmented books in a mixed reality environment
US11487354B2 (en) Information processing apparatus, information processing method, and program
US11749141B2 (en) Information processing apparatus, information processing method, and recording medium
KR20140125183A (en) Eye-glasses which attaches projector and method of controlling thereof
US20180192031A1 (en) Virtual Reality Viewing System
US20210063746A1 (en) Information processing apparatus, information processing method, and program
CN108989784A (en) Image display method, device, equipment and the storage medium of virtual reality device
CN111736692B (en) Display method, display device, storage medium and head-mounted device
US11488365B2 (en) Non-uniform stereo rendering
US20190028690A1 (en) Detection system
US20210400234A1 (en) Information processing apparatus, information processing method, and program
CN111208964A (en) Low-vision aiding method, terminal and storage medium
US20240155093A1 (en) Device, system, camera device, and method for capturing immersive images with improved quality
US11615767B2 (en) Information processing apparatus, information processing method, and recording medium
CN109313823A (en) Information processing unit, information processing method and program
CN117376591A (en) Scene switching processing method, device, equipment and medium based on virtual reality
JP2013131884A (en) Spectacles
JP2021068296A (en) Information processing device, head-mounted display, and user operation processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant