CN111818326A - Image processing method, device, system, terminal device and storage medium

Info

Publication number: CN111818326A
Authority: CN (China)
Prior art keywords: content, screen, image, head, terminal device
Legal status: Granted
Application number: CN201910295517.0A
Other languages: Chinese (zh)
Other versions: CN111818326B (en)
Inventors: 贺杰, 戴景文
Current Assignee: Guangdong Virtual Reality Technology Co Ltd
Original Assignee: Guangdong Virtual Reality Technology Co Ltd
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910295517.0A
Priority to PCT/CN2019/130646 (WO2020140905A1)
Publication of CN111818326A
Application granted; publication of CN111818326B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/15 Processing image signals for colour aspects of image signals
    • H04N13/363 Image reproducers using image projection screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the present application discloses an image processing method, apparatus, system, terminal device and storage medium, relating to the field of display technologies. The image processing method is applied to a terminal device communicatively connected to a head-mounted display device, and comprises the following steps: acquiring the relative spatial position relationship between the terminal device and the head-mounted display device; acquiring, according to the relative spatial position relationship, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device, the projection area being the area onto which the virtual content observed by the human eye through the head-mounted display device projects on the screen; acquiring, from the screen content to be displayed on the screen, the image content corresponding to the projection area; and performing specified processing on the image content and displaying the screen content containing the processed image content, where the hue difference between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold. The method can improve the sense of reality when virtual content is displayed.

Description

Image processing method, device, system, terminal device and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to an image processing method, apparatus, system, terminal device, and storage medium.
Background
With the development of science and technology, intelligent machines and intelligent information processing have become increasingly widespread, and technologies that recognize user images through image acquisition devices, such as machine vision, to realize human-computer interaction are becoming ever more important. Augmented Reality (AR) constructs virtual content that does not exist in the real environment by means of computer graphics and visualization technology, accurately fuses the virtual content into the real environment through image recognition and positioning technology, merges the virtual content and the real environment into a whole through a display device, and presents the result to the user for a realistic sensory experience. How to improve the display effect of virtual content is therefore an important research direction for augmented reality and mixed reality.
Disclosure of Invention
The embodiments of the present application provide an image processing method, apparatus, system, terminal device and storage medium, which can reduce the interference of the content displayed on the screen of the terminal device with the virtual content displayed by the head-mounted display device, and improve the sense of reality and the display effect when the virtual content is displayed.
In a first aspect, an embodiment of the present application provides an image processing method applied to a terminal device communicatively connected to a head-mounted display device. The method includes: acquiring the relative spatial position relationship between the terminal device and the head-mounted display device; acquiring, according to the relative spatial position relationship, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device, the projection area being the area onto which the virtual content observed by the human eye through the head-mounted display device projects on the screen; acquiring, from the screen content to be displayed on the screen, the image content corresponding to the projection area; and performing specified processing on the image content and displaying the screen content containing the processed image content, where the hue difference between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold.
In a second aspect, an embodiment of the present application provides an image processing method applied to a head-mounted display device communicatively connected to a terminal device. The method includes: displaying the virtual content; acquiring the relative spatial position relationship between the terminal device and the head-mounted display device; acquiring, according to the relative spatial position relationship, the projection area of the virtual content on the screen of the terminal device, the projection area being the area onto which the virtual content observed by the human eye through the head-mounted display device projects on the screen; and sending data of the projection area to the terminal device, the data instructing the terminal device to perform specified processing on the image content corresponding to the projection area in the screen content to be displayed and to display the screen content containing the processed image content, where the hue difference between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold.
In a third aspect, an embodiment of the present application provides an image processing apparatus applied to a terminal device communicatively connected to a head-mounted display device. The apparatus includes a position acquisition module, an area acquisition module, a content acquisition module and an image processing module. The position acquisition module is configured to acquire the relative spatial position relationship between the terminal device and the head-mounted display device. The area acquisition module is configured to acquire, according to the relative spatial position relationship, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device, the projection area being the area onto which the virtual content observed by the human eye through the head-mounted display device projects on the screen. The content acquisition module is configured to acquire, from the screen content to be displayed on the screen, the image content corresponding to the projection area. The image processing module is configured to perform specified processing on the image content and display the screen content containing the processed image content, where the hue difference between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold.
In a fourth aspect, an embodiment of the present application provides a display system including a terminal device and a head-mounted display device communicatively connected to each other. The head-mounted display device is configured to display virtual content. The terminal device is configured to acquire the relative spatial position relationship between the terminal device and the head-mounted display device; acquire, according to the relative spatial position relationship, the projection area of the virtual content on the screen of the terminal device, the projection area being the area onto which the virtual content observed by the human eye through the head-mounted display device projects on the screen; acquire, from the screen content to be displayed on the screen, the image content corresponding to the projection area; and perform specified processing on the image content and display the screen content containing the processed image content, where the hue difference between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold.
In a fifth aspect, an embodiment of the present application provides a terminal device including one or more processors, a memory, and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and are configured to perform the image processing method provided in the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to execute the image processing method provided in the first aspect.
According to the solution provided by the embodiments of the present application, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device is acquired, the image content corresponding to the projection area is acquired from the screen content to be displayed on the screen, specified processing is then performed on the image content, and the screen content containing the processed image content is displayed. In this way, when the head-mounted display device displays the virtual content, the interference of the screen content displayed by the terminal device with the virtual content is reduced, and the sense of reality and the display effect of the virtual content in augmented reality are improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 shows a schematic diagram of an application environment suitable for an embodiment of the present application.
Fig. 2 shows a schematic diagram of another application scenario suitable for an embodiment of the present application.
Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of a projection area according to an embodiment of the present application.
Fig. 5 shows a schematic diagram of a display effect according to an embodiment of the present application.
Fig. 6 shows a flowchart of an image processing method according to another embodiment of the present application.
Fig. 7 shows a flowchart of step S210 of the image processing method according to an embodiment of the present application.
Fig. 8 shows a flowchart of step S220 of the image processing method according to an embodiment of the present application.
Fig. 9 shows a schematic diagram of a projection area according to an embodiment of the present application.
Fig. 10 shows a flowchart of step S222 of the image processing method according to an embodiment of the present application.
Fig. 11 shows a flowchart of step S223 of the image processing method according to an embodiment of the present application.
Fig. 12 shows a schematic diagram of a display effect according to an embodiment of the present application.
Fig. 13 shows a flowchart of another image processing method according to an embodiment of the present application.
Fig. 14 shows a flowchart of an image processing method according to yet another embodiment of the present application.
Fig. 15 shows a block diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 16 shows a block diagram of a display system according to an embodiment of the present application.
Fig. 17 shows a block diagram of a terminal device for executing an image processing method according to an embodiment of the present application.
Fig. 18 shows a storage unit for storing or carrying program code implementing an image processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
A head-mounted display device can display virtual content to achieve an augmented reality display effect in which the virtual content is displayed superimposed on the real world. When the head-mounted display device displays virtual content, the virtual content can be controlled through a mobile terminal connected to the head-mounted display device. The mobile terminal may include a display screen on which corresponding screen content can be displayed. For example, when the virtual content displayed by the head-mounted display device is an animal, a grass field may be displayed on the screen of the mobile terminal. The screen content displayed by the mobile terminal matches the virtual content, improving the display effect of the virtual content and giving the user a science-fiction-like viewing experience.
Through long-term research, the inventors have found that when the display screen of the mobile terminal and the head-mounted display device display content at the same time, the virtual content displayed by the head-mounted display device is generally rendered with a transparent texture, so the user can see the screen content displayed by the mobile terminal through the virtual content. The screen content displayed by the mobile terminal may therefore interfere with the virtual content displayed by the head-mounted display device, weakening the sense of reality of the virtual content as seen by the user. Accordingly, the inventors propose the image processing method, apparatus, system, terminal device and storage medium of the embodiments of the present application, so as to improve the sense of reality of virtual content in augmented reality.
An application scenario of the image processing method provided in the embodiment of the present application is described below.
Referring to fig. 1, a schematic diagram of an application scenario of the image processing method provided in the embodiment of the present application is shown. The application scenario includes a display system 10, which comprises a terminal device 100 and a head-mounted display device 200 connected to the terminal device 100.
In this embodiment, the terminal device 100 has a display function and can be controlled by a user. It may be an electronic device capable of running application programs, such as a mobile phone, a smart watch, a tablet computer or an electronic reader, or another electronic device that includes a display screen, such as a desktop display device, which is not limited herein.
In the embodiment of the present application, the head-mounted display device 200 may be an integrated head-mounted display device, or an external/plug-in head-mounted display device; that is, the head-mounted display device 200 may include only a display module, a communication module, a camera and the like for display purposes, while the processor, memory and the like of the terminal device 100 connected to the head-mounted display device 200 are used to control the displayed virtual content. The display module may include a display screen (or a projection device) and display lenses to present the virtual content.
The head-mounted display device 200 connected to the terminal device 100 can exchange information and instructions with the terminal device 100. The exchanged information may include the virtual content displayed by the head-mounted display device 200. The terminal device 100 and the head-mounted display device 200 may be connected through wireless communication such as Bluetooth, WiFi (Wireless Fidelity) or ZigBee, or may be connected through a wired connection such as a USB (Universal Serial Bus) interface. For example, referring to fig. 2, when the terminal device 100 is a mobile phone or a tablet computer, the head-mounted display device 200 communicates with the tablet computer or mobile phone over a wired USB connection. Of course, the connection manner between the terminal device 100 and the head-mounted display device 200 is not limited in the embodiment of the present application.
In some embodiments, a marker 101 is disposed on the terminal device 100. The marker 101 may include at least one sub-marker having one or more feature points. The marker 101 may be integrated into the terminal device 100, attached to the terminal device 100 (for example, pasted on), or displayed on the display screen of the terminal device 100. When the marker 101 is within the visual field of the head-mounted display device 200, the head-mounted display device 200 may use the marker 101 as a target marker and capture an image containing the target marker. The captured image of the target marker can then be recognized, and spatial position information such as the relative position and orientation between the target marker and the head-mounted display device 200 can be obtained from the recognition result, thereby yielding the relative spatial position information between the terminal device 100 and the head-mounted display device 200. The head-mounted display device 200 may display a corresponding virtual object based on this relative spatial position information, and can capture images of the target marker in real time to position and track the terminal device 100. It should be understood that the specific marker 101 is not limited in the embodiment of the present application; it only needs to be trackable by the head-mounted display device 200.
In some embodiments, the head-mounted display device 200 can also track the outline of the terminal device 100 to determine the relative spatial position relationship between the terminal device 100 and the head-mounted display device 200.
In some embodiments, the head-mounted display device 200 can also determine the relative spatial position relationship between the terminal device 100 and the head-mounted display device 200 from light spots disposed on the terminal device 100.
For example, referring again to fig. 1, in one embodiment the terminal device 100 is connected to the head-mounted display device 200 through wireless communication. The head-mounted display device 200 scans the marker 101 on the terminal device 100 and displays a virtual animal 301. Through the head-mounted display device 200 being worn, the user sees the virtual animal 301 superimposed on the terminal device 100 in real space, while a lawn image matching the virtual animal 301 is displayed on the screen of the terminal device 100. The correspondence between the virtual animal 301 and the displayed lawn image embodies the augmented reality display of virtual content and the cooperation between the screen content displayed by the terminal device 100 and the virtual content, improving the display effect of the virtual content.
Based on the above display system, an embodiment of the present application provides an image processing method applied to the terminal device and the head-mounted display device of the display system. A specific image processing method is described below.
Referring to fig. 3, an image processing method provided in an embodiment of the present application is applicable to the terminal device described above, where the terminal device is communicatively connected to a head-mounted display device such as the head-mounted display device 200 described above. The image processing method may include:
Step S110: acquiring the relative spatial position relationship between the terminal device and the head-mounted display device.
In the embodiment of the present application, when the terminal device needs to acquire the projection area of the virtual content on its screen, it may acquire the relative spatial position relationship between the terminal device and the head-mounted display device so as to obtain the spatial position information of the terminal device. The relative spatial position relationship includes the relative position information between the terminal device and the head-mounted display device, posture information and the like, where the posture information may be the orientation and rotation angle of the terminal device relative to the head-mounted display device.
In some embodiments, light spots may be disposed on the terminal device. The head-mounted display device collects an image of the light spots on the terminal device through its image acquisition device and sends the light spot image to the terminal device; the terminal device can identify the light spots in the image and determine the relative spatial position relationship between the terminal device and the head-mounted display device accordingly. When the light spots are infrared, an infrared camera may be disposed on the head-mounted display device to capture the image of the infrared light spots. The light spots disposed on the terminal device may be a single light spot or a light spot sequence consisting of a plurality of light spots.
In one embodiment, the light spots may be arranged on the housing of the terminal device, for example around its screen. The light spots may also be arranged on a protective case of the terminal device; a protective case containing the light spots can be fitted when the terminal device is in use, so that the terminal device can be positioned and tracked. The arrangement of the light spots may vary and is not limited herein. For example, in order to obtain the posture information of the terminal device in real time, different light spots may be arranged around the screen of the terminal device, for example different numbers of light spots, or light spots of different colors, so that the head-mounted display device can determine its relative spatial position with respect to the terminal device from the distribution of the light spots in the light spot image.
In some embodiments, the identification of the light spot image may be performed in the head-mounted display device. The head-mounted display device may obtain the relative spatial position relationship from the identification result of the light spot image and then transmit the data of the relative spatial position relationship to the terminal device, so that the terminal device acquires the relative spatial position relationship.
In other embodiments, the terminal device includes an inertial measurement unit (IMU). The measurement data of the inertial measurement unit of the terminal device may be obtained first, and the relative spatial position relationship between the terminal device and the head-mounted display device may then be determined according to the measurement data.
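The patent does not spell out how the IMU data are fused into a relative pose. As one hedged illustration only: if both devices can report a world-frame orientation quaternion (the terminal's coming from its IMU), the relative orientation is the Hamilton product of the HMD quaternion's conjugate with the terminal quaternion; position would still have to come from visual tracking. A minimal Python sketch, with all names illustrative:

```python
import numpy as np

def relative_orientation(q_hmd, q_terminal):
    """Relative rotation of the terminal w.r.t. the HMD, given two world-frame
    orientation quaternions in (w, x, y, z) order: q_rel = conj(q_hmd) * q_terminal.
    Illustrative only -- an IMU alone yields orientation, not position."""
    w1, x1, y1, z1 = q_hmd[0], -q_hmd[1], -q_hmd[2], -q_hmd[3]  # conjugate of q_hmd
    w2, x2, y2, z2 = q_terminal
    return np.array([                                            # Hamilton product
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])
```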
Of course, the above manners of acquiring the relative spatial position relationship are only examples, and the specific manner of acquiring the relative spatial position relationship between the terminal device and the head-mounted display device is not limited in the embodiment of the present application. For example, the relative spatial position relationship may also be obtained by recognizing a marker on the terminal device.
Step S120: acquiring, according to the relative spatial position relationship, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device, the projection area being the area onto which the virtual content observed by the human eye through the head-mounted display device projects on the screen.
In the embodiment of the present application, the terminal device may obtain, according to the above relative spatial position relationship, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device, so as to process the display content on the screen according to the projection area. The projection area of the virtual content on the screen of the terminal device may be understood as the overlap between the screen area of the terminal device and the region obtained by projecting the shape of the virtual content onto the plane of the screen, with the human eye observing the virtual content as the reference point. In some embodiments, the projection area may be represented by coordinates in the screen coordinate system of the terminal device, whose origin may be at one corner of the screen (e.g., the lower-left corner).
In some embodiments, the projection area of the virtual content on the screen of the terminal device may be obtained from the spatial positions of the virtual content and the terminal device in the virtual space. The virtual space may include a virtual camera that simulates the user's eyes; the position of the virtual camera in the virtual space can be regarded as the position of the head-mounted display device in the virtual space. In one embodiment, the spatial positions of the virtual content and the terminal device in the virtual space may be their positions in the world coordinate system of the virtual space. In that case, the extension lines of the lines connecting the virtual camera and each vertex of the virtual content can be obtained from the spatial positions of the virtual camera, the virtual content and the terminal device in the world coordinate system; the intersection point of each extension line with the screen of the terminal device in the virtual space is then calculated, giving the projection point of each vertex of the virtual content on the screen. Because the coordinates of a projection point are three-dimensional coordinates in the world coordinate system of the virtual space, they can be converted into two-dimensional coordinates in the screen coordinate system of the terminal device according to the conversion parameters between the world coordinate system of the virtual space and the screen coordinate system of the terminal device in the real environment, yielding the projection area of the virtual content on the screen of the terminal device.
In another embodiment, the spatial positions of the virtual content and the terminal device in the virtual space may be their positions in a first spatial coordinate system of the virtual space, the first spatial coordinate system being a spatial coordinate system whose origin is the virtual camera. In that case, the intersection points between the screen of the terminal device and the extension lines of the lines connecting the origin and each vertex of the virtual content can be obtained from the spatial positions of the virtual content and the terminal device in the first spatial coordinate system, giving the projection points of the vertices on the screen. Because the coordinates of a projection point are three-dimensional coordinates in the first spatial coordinate system, they can be converted into two-dimensional coordinates in the screen coordinate system of the terminal device according to the conversion parameters between the first spatial coordinate system and the screen coordinate system, yielding the projection area of the virtual content on the screen of the terminal device. For example, referring to fig. 4, the terminal device 100 is a tablet computer, and the area formed on the screen by the intersections of the screen with the extension lines connecting the human eye 400 (which may also be regarded as the head-mounted display device) and the virtual content 300 is the projection area 102.
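To make the geometry concrete, the following Python sketch (an illustration under assumed conventions, not code from the patent; numpy is assumed, and names such as eye and screen_origin are hypothetical) casts a ray from the viewpoint through each vertex of the virtual content and intersects it with the plane of the terminal screen:

```python
import numpy as np

def project_vertices_to_screen(eye, vertices, screen_origin, screen_x, screen_y):
    """Project virtual-content vertices onto the screen plane as seen from 'eye'.

    eye           -- viewpoint (virtual camera / human eye) in world coordinates
    vertices      -- (N, 3) array of virtual-content vertices in world coordinates
    screen_origin -- lower-left corner of the screen in world coordinates
    screen_x, screen_y -- unit vectors along the screen's width and height
    Returns an (M, 2) array of 2D points in the screen coordinate system.
    """
    normal = np.cross(screen_x, screen_y)            # normal of the screen plane
    points_2d = []
    for v in vertices:
        direction = v - eye                          # ray from the eye through the vertex
        denom = direction.dot(normal)
        if abs(denom) < 1e-9:                        # ray parallel to the screen plane
            continue
        t = (screen_origin - eye).dot(normal) / denom
        hit = eye + t * direction                    # 3D intersection with the plane
        rel = hit - screen_origin
        points_2d.append((rel.dot(screen_x), rel.dot(screen_y)))
    return np.array(points_2d)
```

Clipping the resulting polygon against the screen rectangle then gives the overlap that the embodiments above call the projection area.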
Of course, the above acquisition of the projection area is only an example, and the specific manner of acquiring the projection area is not limited in the embodiment of the present application. For example, the contour of the projection area of the virtual content on the screen of the terminal device may be obtained from only the contour vertices of the virtual content and the spatial position of the terminal device in the virtual space.
In some embodiments, the terminal device may acquire the projection area from the head-mounted display device. Specifically, when the head-mounted display device displays the virtual content, it may obtain the projection area in the manner described above, according to the spatial position of the virtual content in the virtual space and the relative spatial position information between the terminal device and the head-mounted display device, and then transmit the data of the projection area to the terminal device, so that the terminal device acquires the projection area.
In some embodiments, the terminal device may compute the projection area with its own processor. Specifically, when the head-mounted display device displays virtual content, it transmits the display data of the virtual content to the terminal device, where the display data may include the spatial position coordinates of the virtual content in the virtual space; the terminal device then obtains the projection area in the manner described above, according to the display data of the virtual content and the relative spatial position information between the terminal device and the head-mounted display device.
Of course, the above manners of acquiring the projection area are likewise only examples and are not limited in the embodiment of the present application.
Step S130: acquiring, from the screen content to be displayed on the screen, the image content corresponding to the projection area.
In the embodiment of the present application, after the terminal device obtains the projection area of the virtual content on its screen, it needs to acquire, from the screen content to be displayed on the screen, the image content corresponding to the projection area, so that this image content can be processed. For example, referring to fig. 4, the image 103 is the image content corresponding to the projection area in the screen content.
In some embodiments, after obtaining the projection area, the terminal device may obtain the position information of the projection area on its screen and, from the screen content to be displayed and this position information, obtain the image content to be displayed in the projection area. The screen content to be displayed may be stored in the terminal device, downloaded from a server, or acquired from another device such as the head-mounted display device, which is not limited herein.
In some embodiments, the position information of the projection area on the screen of the terminal device may be represented by the coordinates of the projection area in the screen coordinate system of the terminal device, whose origin may be at one corner of the screen (e.g., the lower-left corner). The terminal device can therefore convert the coordinates of the projection area in the screen coordinate system into coordinates in the image coordinate system of the screen content, according to the conversion parameters between the two coordinate systems, to obtain the image area corresponding to the projection area, and can then acquire the image content in that image area of the screen content.
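In the simplest case the conversion parameters reduce to a rescaling plus a vertical flip between a lower-left-origin screen coordinate system and a top-left-origin image coordinate system. A hedged Python sketch (the dimension names are assumptions):

```python
def screen_to_image_coords(points, screen_w, screen_h, img_w_px, img_h_px):
    """Map 2D points from the screen coordinate system (origin at the lower-left
    corner, physical units) to the image coordinate system of the screen content
    (origin at the top-left corner, pixels)."""
    converted = []
    for x, y in points:
        u = x / screen_w * img_w_px            # horizontal: plain rescaling
        v = (1.0 - y / screen_h) * img_h_px    # vertical: rescale and flip the axis
        converted.append((u, v))
    return converted
```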
Step S140: performing specified processing on the image content and displaying the screen content containing the processed image content, where the hue difference between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold.
After obtaining the image content corresponding to the projection area, the terminal device may perform specified processing on the image content and display the screen content containing the processed image content, so that the displayed image content interferes less with the virtual content. Thus, when the user sees the virtual content superimposed on the terminal device in the real world through the display lenses of the head-mounted display device, the interference of the screen content displayed by the terminal device with the virtual content is reduced, and the sense of reality and the display effect of the virtual content in augmented reality are improved.
In some embodiments, the specified processing is an image processing operation capable of reducing the interference of the image content with the virtual content; it makes the hue difference between the first hue of the processed image content and the second hue of the virtual content greater than a first threshold, so that the processed image content highlights the virtual content. The first threshold is the minimum hue difference that must hold between the processed image content and the virtual content for the virtual content superimposed on it to remain readable and recognizable. The first threshold may be set reasonably according to the viewing comfort of the user and is not limited herein.
Hue here refers to the tonal appearance of an image; in some embodiments it may cover color and transparency, so the specified processing may operate on the color of the image content or on its transparency. For example, the specified processing may adjust the color of the image to a solid color (e.g., black or gray), adjust the transparency of the image to 50%, overlay a solid-color picture on the image, and so on, so that when the virtual content in the head-mounted display device and the image content on the terminal device are displayed at the same time, the contrast formed between the first hue of the processed image content and the second hue of the virtual content lets the processed image content highlight the virtual content and reduces the interference of the displayed image content with the virtual content. Of course, the above manners of specified processing are only examples, and the specific manner of the specified processing is not limited in the embodiment of the present application.
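As one possible instance of the specified processing (the gray-out variant mentioned above; a sketch under assumptions, with the mask and blend factor illustrative and numpy assumed):

```python
import numpy as np

def apply_specified_processing(frame, mask, dim=0.5):
    """Desaturate and dim the projection area of the screen content.

    frame -- (H, W, 3) uint8 screen content to be displayed
    mask  -- (H, W) bool array, True inside the projection area
    dim   -- how strongly to blend the grayed region toward black
    Returns the screen content containing the processed image content.
    """
    out = frame.astype(np.float32)
    gray = out.mean(axis=2, keepdims=True)           # desaturate to gray, shape (H, W, 1)
    dimmed = (1.0 - dim) * gray                      # blend toward solid black
    out[mask] = np.broadcast_to(dimmed, out.shape)[mask]
    return out.astype(np.uint8)
```

A low-saturation gray region like this contrasts tonally with almost any colored virtual content, which is one way to keep the hue difference above the first threshold; enforcing the threshold explicitly would mean comparing, for example, the HSV hues of the processed region and of the virtual content.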
It can be understood that after the terminal device performs the specified processing on the image content corresponding to the projection area, the terminal device may display the screen content containing the processed image content while the head-mounted display device displays the virtual content. When the specified processing overlays or adds other content to the image content, the processed image content includes both the original image content and the overlaid or added content. Thus, when the user sees the virtual content superimposed on the terminal device in the real world through the display lenses of the head-mounted display device, the interference of the screen content displayed by the terminal device with the virtual content is reduced, the visual saliency of the virtual content is enhanced, and the sense of reality of the virtual content in augmented reality is improved.
For example, referring to fig. 5 in comparison with fig. 1, the terminal device 100 is connected to the head-mounted display device 200 through wireless communication. The head-mounted display device 200 scans the marker 101 on the terminal device 100 and displays the virtual animal 301, which the user sees superimposed on the terminal device 100 in real space through the worn head-mounted display device 200. The color of the image 103 (stones, grass, etc.) corresponding to the projection area of the virtual content in the screen content displayed by the terminal device 100 is adjusted to gray, so that the virtual animal is highlighted and the interference of the image 103 with it is reduced, reflecting the cooperation between the screen content displayed by the terminal device 100 and the virtual content and improving the display effect of the virtual content.
According to the image processing method provided by the embodiment of the present application, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device is acquired, the image content corresponding to the projection area is acquired from the screen content to be displayed on the screen, specified processing is then performed on the image content, and the screen content containing the processed image content is displayed. In this way, when the head-mounted display device displays the virtual content, the interference of the screen content displayed by the terminal device with the virtual content is reduced, the virtual content is highlighted, and the sense of reality and the display effect of the virtual content in augmented reality are improved. Moreover, since only the screen content corresponding to the projection area of the virtual content is processed, the virtual content is highlighted while the normal display of the rest of the screen content is preserved, facilitating interaction between the user and the mobile terminal.
Referring to fig. 6, another embodiment of the present application provides an image processing method applicable to a terminal device communicatively connected to a head-mounted display device such as the head-mounted display device described above. The image processing method may include:
Step S210: acquiring the relative spatial position relationship between the terminal device and the head-mounted display device.
In the embodiment of the present application, when the terminal device needs to acquire the projection area of the virtual content on its screen, it may acquire the relative spatial position relationship between the terminal device and the head-mounted display device so as to obtain the spatial position information of the terminal device.
In some embodiments, a marker is disposed on the terminal device, so that an image of the marker can be captured in real time by the head-mounted display device, and the relative position relationship between the terminal device and the head-mounted display device can be derived from changes in the marker image. The marker may be disposed on the housing of the terminal device, displayed as an image on its screen, or be an external marker that is plugged into the terminal device through a USB port or headphone jack when in use, thereby enabling the positioning and tracking of the terminal device. Of course, the above arrangements of the marker are only examples and are not limited herein.
Specifically, referring to fig. 7, acquiring the relative spatial position relationship between the terminal device and the head-mounted display device may include:
Step S211: receiving a marker image containing a marker sent by the head-mounted display device, where the marker image is obtained by the head-mounted display device capturing the marker.
In some embodiments, when the terminal device needs to acquire the relative position relationship between the terminal device and the head-mounted display device, it may receive the marker image sent by the head-mounted display device and obtain the relative position relationship by recognizing the marker image. In some embodiments, the head-mounted display device may scan the terminal device in real time through its camera to capture a marker image containing the marker on the terminal device, and then transmit the marker image to the terminal device, which receives it.
For the head-mounted display device to capture an image containing the marker on the terminal device, the spatial position of the terminal device in real space may be adjusted, or the spatial position of the head-mounted display device in real space may be adjusted, so that the marker on the terminal device falls within the visual field of the image acquisition device of the head-mounted display device and the marker image can be captured. The visual field of the image acquisition device may be determined by its field-of-view size.
In some embodiments, the marker on the terminal device may include at least one sub-marker, and a sub-marker may be a pattern with a certain shape. In one embodiment, each sub-marker may have one or more feature points, where the shape of a feature point is not limited; it may be a dot, a ring, a triangle or another shape. In addition, the distribution rules of the sub-markers differ between markers, so each marker can carry different identity information. The terminal device may acquire the identity information corresponding to a marker by recognizing the sub-markers it contains; the identity information may be information, such as a code, that uniquely identifies the marker, but is not limited thereto.
In one embodiment, the outline of the marker may be rectangular, though other shapes are possible; the shape and size of the marker are not limited herein. The rectangular region and the plurality of sub-markers within it constitute one marker. The marker may also be a light-emitting object composed of light spots; a light spot marker may emit light of different wavelength bands or colors, and the terminal device acquires the identity information corresponding to the marker by recognizing information such as the wavelength band or color of the emitted light. Of course, the shape, style, size, color, and the number and distribution of feature points of the specific marker are not limited in the embodiment of the present application; the marker only needs to be recognizable by the terminal device.
Step S212: recognizing the marker in the marker image, and acquiring the relative spatial position relationship between the terminal device and the head-mounted display device based on the recognition result.
In some embodiments, after obtaining the marker image, the terminal device may recognize the marker in the marker image to obtain the relative position relationship between the terminal device and the head-mounted display device from the recognition result.
It can be understood that after the terminal device recognizes the marker in the marker image, the recognition result includes spatial position information between the marker and the head-mounted display device; this may include position information and posture information, where the posture information may include the rotation direction and rotation angle of the marker relative to the head-mounted display device. Therefore, based on the position of the marker on the terminal device, that is, the positional relationship between the marker and the terminal device, the relative position relationship between the terminal device and the head-mounted display device can be obtained with the head-mounted display device as the reference.
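The patent does not prescribe a recognition algorithm. One common way to obtain the pose of a planar marker from its image is a perspective-n-point (PnP) solve over detected feature points; the sketch below assumes OpenCV and a calibrated HMD camera, and every name in it is illustrative:

```python
import cv2
import numpy as np

def marker_pose(object_pts, image_pts, camera_matrix, dist_coeffs):
    """Pose of the marker relative to the HMD camera from 2D-3D correspondences.

    object_pts    -- (N, 3) feature points in the marker's own coordinate frame
    image_pts     -- (N, 2) corresponding points detected in the marker image
    camera_matrix -- 3x3 intrinsic matrix of the HMD camera
    dist_coeffs   -- lens distortion coefficients
    Returns (R, t): rotation matrix and translation vector of the marker.
    """
    ok, rvec, tvec = cv2.solvePnP(object_pts.astype(np.float32),
                                  image_pts.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec
```

Composing this pose with the known placement of the marker on the terminal device then gives the relative spatial position relationship between the terminal device and the head-mounted display device.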
Step S220: acquiring, according to the relative spatial position relationship, the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device, the projection area being the area onto which the virtual content observed by the human eye through the head-mounted display device projects on the screen.
In the embodiment of the present application, when the head-mounted display device is in use, the left-eye image displayed by the image source may be projected into the user's left eye through an optical element, and the right-eye image into the user's right eye, to realize stereoscopic display. The image source may be the display screen or projection device of the head-mounted display device, used to display images. Therefore, when the terminal device needs to acquire the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device, it can acquire a projection area corresponding to the left eye and a projection area corresponding to the right eye. Specifically, referring to fig. 8, acquiring the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device according to the relative spatial position relationship may include:
Step S221: acquiring the left-eye display image and the right-eye display image used for displaying the virtual content in the head-mounted display device.
In the embodiment of the present application, the virtual content displayed by the head-mounted display device comprises a left-eye display image and a right-eye display image that, reflected by the optical elements, form three-dimensional virtual content in the human eyes. Therefore, when the terminal device needs to acquire the projection area of the virtual content on its screen, it can acquire the left-eye display image and the right-eye display image used for displaying the virtual content in the head-mounted display device. When the virtual content is stereoscopic, the left-eye and right-eye display images have parallax: when displayed, the left-eye display image is projected through the optical lenses into the user's left eye and the right-eye display image into the user's right eye, and after fusion by the user's brain, the parallax pair forms a stereoscopic image, so that the user perceives a stereoscopic display effect.
In the embodiment of the present application, when the head-mounted display device displays virtual content, the virtual content needs to be rendered according to its rendering coordinates. The rendering coordinates of the virtual content may be the spatial coordinates of each point of the virtual content in a virtual space whose origin is the virtual camera. The virtual camera is the camera used in a 3D software system to simulate the viewpoint of the human eye; it can track motion changes of the virtual content in the virtual space according to its own motion (that is, head motion), generate the corresponding left-eye and right-eye display images after rendering, and project them onto the optical lenses to realize stereoscopic display.
Specifically, the virtual camera includes a left virtual camera simulating the left eye and a right virtual camera simulating the right eye. The rendering coordinates of the virtual content therefore include left rendering coordinates in a second spatial coordinate system whose origin is the left virtual camera, and right rendering coordinates in a third spatial coordinate system whose origin is the right virtual camera. After the head-mounted display device renders the virtual content according to the left rendering coordinates, the left-eye display image of the virtual content is obtained; similarly, rendering according to the right rendering coordinates yields the right-eye display image.
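A minimal sketch of the two virtual cameras (translation only; a real renderer would use full view and projection matrices, and the IPD value here is an assumption, not from the patent):

```python
import numpy as np

def stereo_render_coords(points_world, cam_pos, cam_right, ipd=0.064):
    """Left and right rendering coordinates of virtual-content points, obtained
    by shifting the virtual camera half the interpupillary distance (IPD) to
    each side. Rotation is omitted for brevity."""
    half = 0.5 * ipd * np.asarray(cam_right)         # half-IPD offset along the camera's right axis
    left_coords = points_world - (cam_pos - half)    # second spatial coordinate system (left camera origin)
    right_coords = points_world - (cam_pos + half)   # third spatial coordinate system (right camera origin)
    return left_coords, right_coords
```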
Therefore, when the head-mounted display device displays the virtual content, it can transmit the left-eye display image and the right-eye display image used for displaying the virtual content to the terminal device, so that the terminal device acquires them.
Step S222: acquiring a first projection area of the left-eye display image on the screen of the terminal device according to the left-eye display image and the relative spatial position relationship.
After the terminal device has acquired the left-eye display image of the virtual content and the relative spatial position relationship between the terminal device and the head-mounted display device, it can acquire the first projection area of the left-eye display image on the screen of the terminal device from the left-eye display image and the relative spatial position relationship, so as to process the screen content corresponding to the first projection area. The first projection area may be understood as the overlap between the screen area of the terminal device and the region obtained by projecting the shape of the virtual content in the left-eye display image onto the plane of the screen. That is, the region obtained by projecting the shape of the virtual content in the left-eye display image onto the plane of the screen may overlap the screen area of the terminal device only partially, or may be contained entirely within it. For example, referring to fig. 9, the terminal device 100 is a tablet computer; the left-eye display image of the virtual content 300 is reflected by the optical lenses of the head-mounted display device into the user's left eye 401, and the corresponding first projection area of the left-eye display image on the screen of the tablet computer is the area 104.
In some embodiments, since the relative position relationship obtained by the terminal device includes information such as the position, orientation and rotation angle of the terminal device relative to the head-mounted display device, the spatial position coordinates of the screen of the terminal device in real space can be acquired and then converted into spatial coordinates in the virtual space, which may be coordinates in the world coordinate system of the virtual space or in its second spatial coordinate system. Therefore, when the terminal device needs to acquire the first projection area of the left-eye display image on its screen, it can obtain, from the spatial positions of the left virtual camera, the left-eye display image and the terminal device in the same spatial coordinate system of the virtual space (such as the world coordinate system), the intersection points between the screen of the terminal device and the extension lines of the lines connecting the left virtual camera with each vertex of the virtual content in the left-eye display image; each intersection point is the projection point of the corresponding vertex on the screen. Because the coordinates of a projection point are three-dimensional coordinates in that spatial coordinate system, they can be converted into two-dimensional coordinates in the screen coordinate system of the terminal device according to the conversion parameters between the spatial coordinate system of the virtual space and the screen coordinate system, yielding the first projection area of the left-eye display image on the screen of the terminal device.
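Under the same illustrative assumptions, the first projection area can be computed by reusing the project_vertices_to_screen sketch from the earlier embodiment, with the left virtual camera as the ray origin:

```python
# Hypothetical usage: rays now originate at the left virtual camera.
left_cam = cam_pos - 0.5 * ipd * cam_right
first_projection = project_vertices_to_screen(
    left_cam, left_image_vertices,   # vertices of the virtual content in the left-eye display image
    screen_origin, screen_x, screen_y)
```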
Further, the first projection area of the left-eye display image on the screen of the terminal device may be obtained from only the coordinates of the contour region of the virtual content in the left-eye display image in the virtual space, which simplifies the calculation and streamlines the processing on the terminal device. Therefore, in some embodiments, referring to fig. 10, obtaining the first projection area of the left-eye display image on the screen of the terminal device according to the left-eye display image and the relative spatial position relationship may include:
Step S2221: acquiring a first contour coordinate set of the virtual content in the left-eye display image in the virtual space.
In some embodiments, when the terminal device needs to acquire the first projection area of the left-eye display image on the screen of the terminal device, a first contour coordinate set of the virtual content in the left-eye display image in the virtual space may be acquired according to the left-eye display image, so as to acquire the first projection area according to the first contour coordinate set. The first contour coordinate set may be coordinates of each vertex of a contour region of the virtual content in the left-eye display image in a world coordinate system, or may be coordinates of each vertex of the contour region in a second spatial coordinate system.
The left-eye display image obtained by the terminal device may include the spatial coordinates of the contour region of the virtual content in the second spatial coordinate system, so the terminal device may directly use these spatial coordinates as the first contour coordinate set; alternatively, it may obtain the spatial coordinates of the contour region of the virtual content in the left-eye display image in the world coordinate system according to the conversion parameters between the second spatial coordinate system and the world coordinate system, and use those as the first contour coordinate set.
Step S2222: acquiring a screen coordinate set of the screen in the virtual space according to the relative spatial position relationship.
Since the relative position relationship obtained by the terminal device includes information such as the position, orientation, and rotation angle of the terminal device relative to the head-mounted display device, the spatial position coordinate of the screen of the terminal device in the real space can be obtained, and then the spatial position coordinate can be converted into the spatial coordinate in the virtual space, so as to obtain the screen coordinate set of the screen in the virtual space. The spatial coordinates in the virtual space may be coordinates in a world coordinate system of the virtual space or coordinates in a second spatial coordinate system of the virtual space.
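As a hedged illustration of step S2222, the sketch below transforms the screen's four corner points from the terminal device's local frame into the shared virtual-space coordinate system, assuming the relative spatial position relationship has been reduced to a rotation matrix R and translation vector t; in practice a denser sampling of the screen surface could be used to populate the screen coordinate set.

```python
import numpy as np

def screen_corners_in_virtual_space(R, t, width_m, height_m):
    """Map the four screen corners from the terminal device's local frame
    into the shared virtual-space coordinate system.

    R (3x3 rotation) and t (3-vector translation) encode the device's pose
    in virtual space; the screen is assumed to lie in the device's local
    x-y plane with one corner at the local origin.
    """
    corners_local = np.array([
        [0.0,     0.0,      0.0],
        [width_m, 0.0,      0.0],
        [width_m, height_m, 0.0],
        [0.0,     height_m, 0.0],
    ])
    return (R @ corners_local.T).T + t   # screen coordinate set in virtual space
```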
Step S2223: establishing a first connection line between the left virtual camera in the virtual space and each point in the first contour coordinate set, and acquiring the coordinates of the points on each established first connection line that fall within the screen coordinate set, to obtain a first coordinate set.
In some embodiments, the terminal device may establish a first connection line between the left virtual camera in the virtual space and each point in the first contour coordinate set, and obtain the coordinates of the points on each established first connection line that fall within the screen coordinate set, to obtain the first coordinate set. The coordinates of the left virtual camera, the first contour coordinate set, and the screen coordinate set are all expressed in the same spatial coordinate system, and the first coordinate set is a set of three-dimensional coordinates in that spatial coordinate system.
Since some of the first connection lines between the left virtual camera and the points in the first contour coordinate set may not intersect the screen of the terminal device, the terminal device needs to determine, for each established first connection line, whether any point on it has coordinates in the screen coordinate set. If such a point exists, the first connection line can be considered to intersect the screen; if not, the first connection line does not intersect the screen. The terminal device then assembles the coordinates of these points on the first connection lines into the first coordinate set, that is, the set of coordinates of the projection points, on the screen of the terminal device, of the vertices of the contour region of the virtual content in the left-eye display image.
Step S2224: acquiring the first projection area of the left-eye display image on the screen according to the first coordinate set.
After the terminal device obtains the first coordinate set, it may convert the first coordinate set into two-dimensional coordinates in the screen coordinate system of the terminal device according to the conversion parameters between the spatial coordinate system of the virtual space and the screen coordinate system, thereby obtaining the on-screen positions of the projection points of the vertices of the contour region of the virtual content in the left-eye display image. From these positions the terminal device can obtain the first projection area of the left-eye display image on the screen. In this way, the first projection area is obtained from only the coordinates of the contour region of the virtual content in the left-eye display image in the virtual space, which simplifies the calculation and streamlines the processing on the terminal device.
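The 3D-to-2D conversion in step S2224 can be sketched as follows, assuming the conversion parameters consist of the screen's origin corner, its two in-plane unit axes in virtual space, and a pixels-per-meter scale; these parameter names are assumptions for illustration.

```python
import numpy as np

def to_screen_coords(point_3d, screen_origin, x_axis, y_axis, px_per_m):
    """Convert a 3D projection point (in virtual space) into 2D screen
    coordinates of the terminal device.

    screen_origin: the screen's top-left corner in virtual space;
    x_axis, y_axis: unit vectors along the screen edges in virtual space;
    px_per_m: assumed pixel-density conversion parameter.
    """
    offset = point_3d - screen_origin
    u = np.dot(offset, x_axis) * px_per_m   # horizontal pixel coordinate
    v = np.dot(offset, y_axis) * px_per_m   # vertical pixel coordinate
    return u, v
```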
Similarly, the terminal device may obtain the second projection area of the right-eye display image on the screen by referring to the corresponding steps above for obtaining the first projection area of the left-eye display image according to the left-eye display image and the relative spatial position relationship. Specifically, with reference to fig. 8, obtaining the projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device may include:
step S223: and acquiring a second projection area of the right eye display image on the screen according to the right eye display image and the relative spatial position relation.
When the terminal device has acquired the right-eye display image of the virtual content and the relative spatial position relationship between the terminal device and the head-mounted display device, it can acquire a second projection area of the right-eye display image on the screen of the terminal device according to the right-eye display image and the relative spatial position relationship, so as to process the screen content corresponding to the second projection area. The second projection area may be understood as the overlap between the screen area of the terminal device and the area obtained by projecting the shape of the virtual content in the right-eye display image onto the plane in which the screen lies. That is, the projected area may overlap the screen area only partially, or may fall entirely within the screen area. For example, referring to fig. 9, the terminal device 100 is a tablet computer, and the right-eye display image of the virtual content 300 may enter the right eye 402 of the user after being reflected by an optical lens of the head-mounted display device; the right-eye display image corresponds to the second projection area 105 on the screen of the tablet computer.
When the terminal device needs to acquire the second projection area of the right-eye display image on the screen, it can likewise work in a single spatial coordinate system of the virtual space (such as the third spatial coordinate system) containing the right virtual camera, the right-eye display image, and the spatial position of the terminal device: for each vertex of the virtual content in the right-eye display image, the line from the right virtual camera through that vertex is extended until it intersects the screen of the terminal device, and the intersection point is the projection point of that vertex on the screen. Since the coordinates of each projection point are three-dimensional coordinates in that spatial coordinate system, they can be converted into two-dimensional coordinates in the screen coordinate system of the terminal device according to the conversion parameters between the spatial coordinate system of the virtual space and the screen coordinate system, so that the second projection area of the right-eye display image on the screen of the terminal device is obtained.
Further, the terminal device may also obtain the second projection area of the right-eye display image on the screen from only the coordinates of the contour region of the virtual content in the right-eye display image in the virtual space, which simplifies the calculation and streamlines the processing on the terminal device. Therefore, in some embodiments, referring to fig. 11, obtaining the second projection area of the right-eye display image on the screen according to the right-eye display image and the relative spatial position relationship may include:
step S2231: and acquiring a second contour coordinate set of the virtual content in the right-eye display image in the virtual space.
In some embodiments, when the terminal device needs to acquire the second projection area of the right-eye display image on the screen of the terminal device, the second contour coordinate set of the virtual content in the right-eye display image may be acquired according to the right-eye display image, so as to acquire the second projection area according to the second contour coordinate set. The second contour coordinate set may be coordinates of each vertex of a contour region of the virtual content in the right-eye display image in the world coordinate system, or may be coordinates of each vertex of the contour region in the third spatial coordinate system.
The right-eye display image obtained by the terminal device may include the spatial coordinates of the contour region of the virtual content in the third spatial coordinate system, so the terminal device may directly use these spatial coordinates as the second contour coordinate set; alternatively, it may obtain the spatial coordinates of the contour region of the virtual content in the right-eye display image in the world coordinate system according to the conversion parameters between the third spatial coordinate system and the world coordinate system, and use those as the second contour coordinate set.
Step S2232: acquiring a screen coordinate set of the screen in the virtual space according to the relative spatial position relationship.
The terminal device can acquire the spatial position coordinates of the screen of the terminal device in the real space according to the relative position relationship, and then can convert the spatial position coordinates into the spatial coordinates in the virtual space to obtain the screen coordinate set of the screen in the virtual space. The spatial coordinates in the virtual space may be coordinates in a world coordinate system of the virtual space, or may be coordinates in a third spatial coordinate system of the virtual space.
Step S2233: establishing a second connection line between the right virtual camera in the virtual space and each point in the second contour coordinate set, and acquiring the coordinates of the points on each established second connection line that fall within the screen coordinate set, to obtain a second coordinate set.
In some embodiments, the terminal device may establish a second connection line between the right virtual camera in the virtual space and each point in the second contour coordinate set, and obtain the coordinates of the points on each established second connection line that fall within the screen coordinate set, to obtain the second coordinate set. The coordinates of the right virtual camera, the second contour coordinate set, and the screen coordinate set are all expressed in the same spatial coordinate system, and the second coordinate set is a set of three-dimensional coordinates in that spatial coordinate system.
Since some of the second connection lines between the right virtual camera and the points in the second contour coordinate set may not intersect the screen of the terminal device, the terminal device needs to determine, for each established second connection line, whether any point on it has coordinates in the screen coordinate set. If such a point exists, the second connection line can be considered to intersect the screen; if not, the second connection line does not intersect the screen. The terminal device then assembles the coordinates of these points on the second connection lines into the second coordinate set, that is, the set of coordinates of the projection points, on the screen of the terminal device, of the vertices of the contour region of the virtual content in the right-eye display image.
Step S2234: acquiring the second projection area of the right-eye display image on the screen according to the second coordinate set.
After the terminal device obtains the second coordinate set, it may convert the second coordinate set into two-dimensional coordinates in the screen coordinate system of the terminal device according to the conversion parameters between the spatial coordinate system of the virtual space and the screen coordinate system, thereby obtaining the on-screen positions of the projection points of the vertices of the contour region of the virtual content in the right-eye display image. From these positions the terminal device can obtain the second projection area of the right-eye display image on the screen. In this way, the second projection area is obtained from only the coordinates of the contour region of the virtual content in the right-eye display image in the virtual space, which simplifies the calculation and streamlines the processing on the terminal device.
Step S224: acquiring a composite area of the first projection area and the second projection area, and taking the composite area as the projection area of the virtual content on the screen.
After the terminal device acquires the first projection area of the left-eye display image on the screen and the second projection area of the right-eye display image on the screen, it may acquire a composite area of the first projection area and the second projection area and use the composite area as the projection area of the virtual content on the screen, so that the terminal device can determine the area of the screen that is occluded by the displayed virtual content.
The composite area of the first projection area and the second projection area is the region formed by the total coordinate set obtained by merging all coordinates of the first coordinate set in the first projection area with all coordinates of the second coordinate set in the second projection area, that is, the union of the first projection area and the second projection area.
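As one possible illustration of forming the composite area, the sketch below uses the shapely library to take the geometric union of the two projection polygons; the patent only requires merging the two coordinate sets, so treating each projection area as a simple polygon is an implementation assumption.

```python
from shapely.geometry import Polygon

def composite_projection_area(first_pts, second_pts):
    """Merge the first (left-eye) and second (right-eye) projection areas
    into the composite projection area of the virtual content on the screen.

    first_pts / second_pts: ordered 2D screen-coordinate vertex lists,
    each assumed to describe a simple polygon.
    """
    union = Polygon(first_pts).union(Polygon(second_pts))
    if union.geom_type == "MultiPolygon":
        # Disjoint eye projections: return each part's outline separately.
        return [list(part.exterior.coords) for part in union.geoms]
    return [list(union.exterior.coords)]
```

The left- and right-eye projections of the same content normally overlap, so the single-polygon branch is the common case.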
In addition, in some embodiments, the processing of acquiring the projection area of the virtual content on the screen may also be performed in the head-mounted display device. That is, after the head-mounted display device acquires the projection area of the virtual content on the screen in the manner described above, it may transmit the data of the projection area to the terminal device, so that the terminal device obtains the projection area of the virtual content on the screen; this reduces the computational load on the terminal device and streamlines its processing.
Step S230: acquiring, according to the screen content to be displayed on the screen, the image content corresponding to the projection area in the screen content.
Step S240: the image content is subjected to designation processing, and screen content including the image content subjected to the designation processing is displayed, a hue difference value between a first hue of the image content subjected to the designation processing and a second hue of the virtual content being larger than a first threshold value.
In some embodiments, the content of step S220 and step S230 may refer to the foregoing embodiments and is not repeated here.
In some embodiments, the above specified processing of the image content includes any one of the following: overlaying overlay content of a specified color on the image content; adjusting the color of the image content to a specified color; or adjusting the transparency value of the image content to a specified transparency value; where the color difference between the specified color and the color of the virtual content is greater than a second threshold, and the difference between the specified transparency value and the transparency value of the virtual content is greater than a third threshold.
The specified color may be a single color (i.e., a solid color) not mixed with other hues, such as black or gray. In the embodiment of the present application, the color difference between the specified color and the color of the virtual content is greater than a second threshold, where the second threshold is the minimum color difference that must be satisfied for the image content of the specified color to make the virtual content stand out when the user sees the virtual content displayed superimposed on that image content through the head-mounted display device. The second threshold may be set appropriately according to the viewing comfort of the user and is not limited here. For example, when the color of the virtual content is blue, the specified color may be gray, so that when the user observes the virtual content superimposed on the image content through the head-mounted display device, the interference of the image content is reduced and the visual saliency of the virtual content is enhanced.
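The patent does not fix a particular color-difference metric. A minimal sketch of the threshold check, assuming circular hue distance in HSV space as the metric and 0-255 RGB inputs:

```python
import colorsys

def hue_difference(rgb_a, rgb_b):
    """Circular hue distance, in degrees (0-180), between two RGB colors."""
    h_a = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb_a])[0] * 360.0
    h_b = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb_b])[0] * 360.0
    d = abs(h_a - h_b)
    return min(d, 360.0 - d)

def specified_color_ok(specified_rgb, content_rgb, second_threshold):
    """True if the specified color differs enough from the virtual content."""
    return hue_difference(specified_rgb, content_rgb) > second_threshold
```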
The specified transparency value is a transparency value that weakens the display effect of the image content; the smaller the transparency value, the more transparent the content. For example, the specified transparency value may be set to 5, i.e., 50% transparent, or to 0, i.e., 100% transparent. In some embodiments, the difference between the specified transparency value and the transparency value of the virtual content is greater than a third threshold, where the third threshold is the minimum difference that must be satisfied for the image content of the specified transparency to make the virtual content stand out when the user sees the virtual content displayed superimposed on that image content through the head-mounted display device. The third threshold may be set appropriately according to the viewing comfort of the user and is not limited here. For example, if the third threshold is set to 5 and the transparency value of the virtual content is 10 (i.e., 0% transparent), the specified transparency value may be 1 (i.e., 90% transparent), so that when the user observes the virtual content superimposed on the image content through the head-mounted display device, the interference of the image content is reduced and the visual saliency of the virtual content is enhanced.
As one embodiment, the terminal device may also directly set the transparency value of the image content to be smaller than a preset threshold, so as to weaken the display effect of the image content. The preset threshold may be 1 or 2, which is not limited here.
In some embodiments, the specified processing of the image content may be adjusting the color of the image content to the specified color, for example to black or gray, or adjusting the transparency value of the image content to the specified transparency value, for example to 0 or 5, so that when the virtual content in the head-mounted display device and the image content on the terminal device are displayed simultaneously, the processed image content makes the virtual content stand out and the interference of the image content displayed by the terminal device with the virtual content is reduced.
In another embodiment, the specified processing of the image content may be overlaying overlay content of the specified color on the image content, where the overlay content may be a picture of the specified color or a newly created layer of the specified color. The form of the overlay content is not limited here; it only needs to cover the image content and present the specified color.
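Pulling the three options together, the following sketch applies the specified processing to the projection-area pixels of an RGBA screen buffer. The 0-10 transparency scale follows the convention used in the examples above; all other details (NumPy buffer layout, mode names) are illustrative assumptions.

```python
import numpy as np

def apply_specified_processing(frame, mask, mode,
                               color=(128, 128, 128), transparency=1):
    """Apply one of the three specified-processing options to the image
    content inside the projection area.

    frame: HxWx4 RGBA screen content; mask: HxW boolean projection area;
    transparency uses the document's 0-10 scale (10 = fully opaque).
    """
    out = frame.copy()
    if mode in ("overlay", "recolor"):
        # Overlaying an opaque layer of the specified color and recoloring
        # the pixels produce the same on-screen result in this simple model.
        out[mask, :3] = color
    elif mode == "transparency":
        out[mask, 3] = int(transparency / 10 * 255)   # scale to 8-bit alpha
    return out
```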
In this way, after the image content is processed, the terminal device can display the screen content containing the image content after the specified processing. Therefore, when the user sees, through the head-mounted display device, the virtual content superimposed on the screen of the terminal device in real space, the image content on the screen has the visual effect of making the virtual content stand out, enhancing the sense of reality and the display effect of the virtual content.
Of course, the above processing manners for the image content are only examples and are not limiting in the embodiments of the present application; it suffices that the processed image content reduces interference with the virtual content.
Further, in some embodiments, when the virtual content is changed, the terminal device may update the projection area according to the changed virtual content. As one embodiment, when the head-mounted display device only includes a display module, a communication module, and a camera, the displayed virtual content may be controlled by means of the processor, memory, and the like of the terminal device. Specifically, with continued reference to fig. 6, after performing the specified processing on the image content and displaying the screen content including the processed image content, the image processing method may further include:
step S250: and when the operation area detects the operation, generating a control instruction according to the operation.
In some embodiments, the terminal device includes a manipulation area through which changes to the virtual content displayed by the head-mounted display device can be controlled. The manipulation area includes at least one of a key, a touch area, and a pressure area. Specifically, when the manipulation area detects a manipulation operation, a control instruction is generated according to the manipulation operation, and the control instruction is used to control the virtual content displayed by the head-mounted display device to present a set display effect.
In some embodiments, the manipulation operation may include, but is not limited to, a single-finger slide, a click, a press, or a coordinated multi-finger slide acting on the manipulation area of the terminal device, and the control instruction may include, but is not limited to, a move instruction, an enlarge instruction, a reduce instruction, a rotate instruction, a select instruction, or a content-switching instruction, so as to control the movement, scaling, rotation, selection, and content switching of the virtual content. Of course, the above control instructions are merely examples and do not limit the control instructions in the embodiments of the present application.
In some embodiments, there is a correspondence between manipulation operations and control instructions; that is, when the manipulation area of the terminal device detects a manipulation operation, the control instruction corresponding to that operation can be generated according to the operation and the correspondence. The correspondence can be stored in the terminal device in advance and can be set appropriately according to the user's specific usage. That is, when any of the above manipulation operations performed by the user in the manipulation area of the terminal device is detected, a corresponding control instruction can be generated to control the virtual content displayed by the head-mounted display device to present the set display effect.
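A pre-stored correspondence of this kind can be as simple as a lookup table; the concrete operation/instruction pairs below are illustrative assumptions, since the patent leaves them to be set according to the user's usage.

```python
# Pre-stored correspondence between manipulation operations and control
# instructions; the pairs are illustrative and could be configured
# differently per user.
OPERATION_TO_INSTRUCTION = {
    "single_finger_slide": "move",
    "two_finger_pinch_out": "enlarge",
    "two_finger_pinch_in": "reduce",
    "two_finger_rotate": "rotate",
    "click": "select",
    "press": "switch_content",
}

def generate_control_instruction(operation):
    """Map a detected manipulation operation to its control instruction,
    or None if the operation has no stored correspondence."""
    return OPERATION_TO_INSTRUCTION.get(operation)
```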
Step S260: adjusting the virtual content displayed in the head-mounted display device according to the control instruction, and sending display data corresponding to the adjusted virtual content to the head-mounted display device.
In some embodiments, since the head-mounted display device only includes a display module, a communication module, a camera, and the like, the displayed virtual content must be controlled by means of the processor, memory, and the like of the terminal device. Therefore, the terminal device can adjust the virtual content displayed in the head-mounted display device according to the control instruction and send the display data corresponding to the adjusted virtual content to the head-mounted display device, so that the head-mounted display device displays the adjusted virtual content according to the display data.
For example, when the control instruction is a move instruction, the terminal device may adjust the coordinate data of the display position of the virtual content, update the coordinate data to the set coordinate data, thereby obtaining display data corresponding to the adjusted virtual content, and then the terminal device may send the display data to the head-mounted display device, and the head-mounted display device may move the displayed virtual content to the set position according to the display data.
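A sketch of handling such a move instruction on the terminal side, with the content representation, the delta format, and the connection object all assumed for illustration:

```python
def handle_move_instruction(virtual_content, delta, hmd_connection):
    """Adjust the display-position coordinate data of the virtual content
    and push the updated display data to the head-mounted display device.

    virtual_content: dict holding at least a 'position' 3-tuple (assumed);
    delta: displacement derived from the manipulation operation;
    hmd_connection: object with a send() method, e.g. over Wi-Fi/Bluetooth.
    """
    x, y, z = virtual_content["position"]
    dx, dy, dz = delta
    virtual_content["position"] = (x + dx, y + dy, z + dz)  # set coordinate data
    display_data = {"type": "update", "content": virtual_content}
    hmd_connection.send(display_data)  # the HMD re-renders at the new position
```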
Step S270: re-acquiring the projection area based on the adjusted virtual content, and performing the specified processing on the image content corresponding to the re-acquired projection area in the screen content.
After the virtual content changes, the projection area of the virtual content on the screen of the terminal device also changes, where the change of the virtual content may be a change in position or a change in content. Therefore, the terminal device may re-acquire the projection area, in the manner of acquiring the projection area described above, based on the adjusted virtual content, and perform the specified processing on the image content corresponding to the re-acquired projection area in the screen content of the terminal device. In this way, the terminal device can update the projection area in real time as the virtual content changes and perform the specified processing on the corresponding image content in real time, so that when the user views the virtual content superimposed on the screen of the terminal device through the worn head-mounted display device, the image content on the screen makes the virtual content stand out, improving the sense of reality and the display effect of the virtual content.
For example, referring to fig. 5 and fig. 12, when the user views the virtual animal 301 superimposed on the screen of the terminal device through the worn head-mounted display device and moves the virtual animal 301 to the right, the color of the image 103 (stone, grass, etc.) corresponding to the projection area of the virtual animal 301 in the screen content is adjusted to gray in real time, so that the virtual animal 301 stands out; the user can clearly observe the virtual animal 301 at all times, and the interference of the image 103 on the screen with the virtual animal 301 is reduced.
As another embodiment, when the head-mounted display device includes a processor and memory, the displayed virtual content may be controlled by the head-mounted display device itself. Specifically, referring to fig. 13, after the specified processing is performed on the image content and the screen content including the processed image content is displayed, the image processing method may further include:
step S280: when the control area detects the control operation, control parameters of the control operation are sent to the head-mounted display device, and the control parameters are used for indicating the head-mounted display device to adjust the displayed virtual content.
In some embodiments, when the manipulation area of the terminal device detects a manipulation operation, manipulation parameters of the manipulation operation may be sent to the head-mounted display device, so that the head-mounted display device can control the virtual content to present the set display effect according to the manipulation parameters. The manipulation parameters are used to instruct the head-mounted display device to adjust the displayed virtual content.
In some embodiments, the manipulation parameters may include, but are not limited to, movement parameters, enlargement parameters, reduction parameters, rotation parameters, switching parameters, and the like for adjusting the virtual content, so as to control the movement, scaling, rotation, and content switching of the virtual content.
In some embodiments, there is a correspondence between manipulation operations and manipulation parameters; that is, when the manipulation area of the terminal device detects a manipulation operation, the manipulation parameters corresponding to that operation can be generated according to the operation and the correspondence and sent to the head-mounted display device, so that the head-mounted display device can adjust the display data of the displayed virtual content according to the manipulation parameters and regenerate the virtual content, thereby achieving the set display effect. The correspondence can be stored in the terminal device in advance and can be set appropriately according to the user's specific usage. That is, when any of the above manipulation operations performed by the user in the manipulation area of the terminal device is detected, the manipulation parameters can be generated so that the head-mounted display device adjusts the displayed virtual content accordingly.
Step S290: re-acquiring the projection area based on the adjusted virtual content, and performing the specified processing on the image content corresponding to the re-acquired projection area in the screen content.
Similarly, the terminal device may re-acquire the projection area, in the manner of acquiring the projection area described above, based on the adjusted virtual content, and perform the specified processing on the image content corresponding to the re-acquired projection area in the screen content of the terminal device, thereby improving the sense of reality of the virtual content.
According to the image processing method provided by the embodiment of the application, the projection areas of the left-eye display image and the right-eye display image of the virtual content on the screen of the terminal device are obtained; the image content corresponding to the projection areas is obtained according to the screen content to be displayed on the screen; the image content is then subjected to the specified processing, and the screen content including the processed image content is displayed. In this way, when the head-mounted display device displays the virtual content, the interference of the screen content displayed by the terminal device with the virtual content is reduced, the virtual content stands out, and the sense of reality and the display effect of the virtual content in augmented reality are improved. Moreover, only the screen content corresponding to the projection area of the virtual content on the screen is processed, so the virtual content is highlighted while the normal display of other screen content is guaranteed, facilitating interaction between the user and the mobile terminal.
Referring to fig. 14, an embodiment of the present application provides an image processing method applied to a head-mounted display device, where the head-mounted display device is communicatively connected to a terminal device, and the image processing method may include:
Step S310: displaying the virtual content.
In the embodiment of the application, the head-mounted display device displays the virtual content, which may be that a relative position relationship between the terminal device and the head-mounted display device is obtained first, and then the virtual content is displayed according to the relative position relationship. The relative position relationship may include relative position information between the terminal device and the head-mounted display device, posture information, and the like, and the posture information may be an orientation and a rotation angle of the terminal device relative to the head-mounted display device.
In some embodiments, a marker may be disposed on the terminal device, and the head-mounted display device may obtain a relative positional relationship between the terminal device and the head-mounted display device by recognizing the marker on the terminal device. As an implementation manner, the head-mounted display device may scan the terminal device in real time through the camera to acquire the marker on the terminal device, so as to obtain the marker image, and the head-mounted display device may identify the marker in the marker image, so as to obtain the relative position relationship between the terminal device and the head-mounted display device.
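The patent does not specify the recognition algorithm. One common way to recover such a relative pose from a marker image is perspective-n-point estimation, sketched here with OpenCV's solvePnP under the assumption that the marker's physical corner layout and the camera intrinsics are known:

```python
import cv2
import numpy as np

def relative_pose_from_marker(marker_corners_2d, marker_corners_3d,
                              camera_matrix, dist_coeffs):
    """Estimate the terminal device's pose relative to the head-mounted
    display's camera from a recognized marker.

    marker_corners_2d: Nx2 pixel coordinates of the marker corners in the
    captured marker image; marker_corners_3d: Nx3 corner coordinates in
    the marker's own frame (known from the marker's physical size).
    Returns a rotation matrix and translation vector encoding the relative
    spatial position relationship, or None on failure.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_corners_3d, dtype=np.float32),
        np.asarray(marker_corners_2d, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # axis-angle to rotation matrix
    return R, tvec
```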
The head-mounted display device may generate the virtual content based on the data of the virtual content and the relative positional relationship. Specifically, it may construct the virtual content from the data of the virtual content, and acquire the rendering position of the virtual content according to the relative positional relationship between the terminal device and the head-mounted display device, so as to render the virtual content at that position. Rendering the virtual content yields the RGB value of each pixel in the virtual content, the corresponding pixel coordinates, and the like.
In some embodiments, after the head mounted display device generates the virtual content, the virtual content may be displayed. Specifically, after the head-mounted display device constructs and renders virtual content, display data of the rendered virtual content may be acquired, where the display data may include RGB values of each pixel point in a display image, a corresponding pixel point coordinate, and the like, and the head-mounted display device may generate the display image according to the display data and project the display image onto a display lens through a display screen or a projection module, so as to display the virtual content. The user can see the virtual content overlaid and displayed on the screen of the terminal device in the real world through the display lens of the head-mounted display device, and the effect of augmented reality is achieved.
Step S320: acquiring the relative spatial position relationship between the terminal device and the head-mounted display device.
In this embodiment of the application, when the head-mounted display device needs to acquire a projection area of virtual content on a screen of the terminal device, a relative spatial position relationship between the terminal device and the head-mounted display device may be acquired first, so as to obtain spatial position information of the terminal device. The relative spatial position relationship may be a relative position relationship that needs to be obtained when the head-mounted display device displays the virtual content.
Step S330: acquiring a projection area of the virtual content on the screen of the terminal device according to the relative spatial position relationship, where the projection area is the projection area, on the screen, of the virtual content observed by human eyes through the head-mounted display device.
The head-mounted display device can acquire the projection area of the displayed virtual content on the screen of the terminal device according to the relative spatial position relationship, so as to determine the area of the screen of the terminal device that is occluded by the virtual content. For the specific steps of obtaining the projection area of the virtual content on the screen of the terminal device, refer to the steps by which the terminal device obtains the projection area in the foregoing embodiments, which are not repeated here.
Step S340: sending the data of the projection area to the terminal device, where the data of the projection area is used to instruct the terminal device to perform the specified processing on the image content corresponding to the projection area in the screen content to be displayed and to display the screen content including the processed image content, a hue difference value between a first hue of the processed image content and a second hue of the virtual content being greater than a first threshold.
In the embodiment of the application, when the head-mounted display device acquires the projection area of the virtual content on the screen of the terminal device, the data of the projection area can be sent to the terminal device, and the data of the projection area is used for indicating the terminal device to perform the specified processing on the image content corresponding to the projection area in the screen content to be displayed, so that the interference on the virtual content is reduced when the image content after the specified processing is displayed. Therefore, when the user watches the virtual content which is superposed and displayed on the screen of the terminal equipment through the head-mounted display device, the specified processed image content can highlight the virtual content, the interference of the image content displayed by the terminal equipment to the virtual content is reduced, and the sense of reality of the virtual content is improved.
The image processing method provided by the embodiment of the application is applied to the head-mounted display device: the virtual content is displayed, the relative spatial position relationship between the terminal device and the head-mounted display device is obtained, the projection area of the virtual content on the screen of the terminal device is obtained according to the relative spatial position relationship, and the data of the projection area is then sent to the terminal device, where it is used to instruct the terminal device to perform the specified processing on the image content corresponding to the projection area in the screen content to be displayed. In this way, when the head-mounted display device displays the virtual content, the interference of the screen content displayed by the terminal device with the virtual content is reduced, the virtual content stands out, and the sense of reality and the display effect of the virtual content in augmented reality are improved. Moreover, only the screen content corresponding to the projection area of the virtual content on the screen is processed, so the virtual content is highlighted while the normal display of other screen content is guaranteed, facilitating interaction between the user and the mobile terminal.
Referring to fig. 15, a block diagram of an image processing apparatus 500 according to an embodiment of the present application is shown; the apparatus is applied to a terminal device and may include: a position obtaining module 510, an area obtaining module 520, a content obtaining module 530, and an image processing module 540. The position obtaining module 510 is configured to obtain the relative spatial position relationship between the terminal device and the head-mounted display device; the area obtaining module 520 is configured to obtain, according to the relative spatial position relationship, the projection area, on the screen of the terminal device, of the virtual content displayed by the head-mounted display device, where the projection area is the projection area, on the screen, of the virtual content observed by human eyes through the head-mounted display device; the content obtaining module 530 is configured to obtain, according to the screen content to be displayed on the screen, the image content corresponding to the projection area in the screen content; and the image processing module 540 is configured to perform the specified processing on the image content and display the screen content including the processed image content, where a hue difference value between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold.
In some embodiments, the image processing module 540 may be specifically configured to: overlay overlay content of a specified color on the image content; adjust the color of the image content to a specified color; or adjust the transparency value of the image content to a specified transparency value; where the color difference between the specified color and the color of the virtual content is greater than a second threshold, and the difference between the specified transparency value and the transparency value of the virtual content is greater than a third threshold.
In some embodiments, the area obtaining module 520 may include: an image acquisition unit, a first projection acquisition unit, a second projection acquisition unit, and a composite area acquisition unit. The image acquisition unit is used to acquire the left-eye display image and the right-eye display image used for displaying the virtual content in the head-mounted display device; the first projection acquisition unit is used to acquire the first projection area of the left-eye display image on the screen of the terminal device according to the left-eye display image and the relative spatial position relationship; the second projection acquisition unit is used to acquire the second projection area of the right-eye display image on the screen according to the right-eye display image and the relative spatial position relationship; and the composite area acquisition unit is used to acquire the composite area of the first projection area and the second projection area and take the composite area as the projection area of the virtual content on the screen.
In some embodiments, the first projection acquisition unit may be specifically configured to: acquire a first contour coordinate set of the virtual content in the left-eye display image in the virtual space; acquire a screen coordinate set of the screen in the virtual space according to the relative spatial position relationship; establish a first connection line between the left virtual camera in the virtual space and each point in the first contour coordinate set, and acquire the coordinates of the points on each established first connection line that fall within the screen coordinate set to obtain a first coordinate set; and acquire the first projection area of the left-eye display image on the screen according to the first coordinate set. The second projection acquisition unit may be specifically configured to: acquire a second contour coordinate set of the virtual content in the right-eye display image in the virtual space; acquire the screen coordinate set of the screen in the virtual space according to the relative spatial position relationship; establish a second connection line between the right virtual camera in the virtual space and each point in the second contour coordinate set, and acquire the coordinates of the points on each established second connection line that fall within the screen coordinate set to obtain a second coordinate set; and acquire the second projection area of the right-eye display image on the screen according to the second coordinate set.
In some embodiments, the position obtaining module 510 may be specifically configured to: receive a marker image containing a marker sent by the head-mounted display device, where the marker image is captured when the head-mounted display device collects the marker; and identify the marker in the marker image and acquire the relative spatial position relationship between the terminal device and the head-mounted display device based on the identification result.
In some embodiments, the terminal device includes a manipulation area, the manipulation area including at least one of a key, a touch area, and a pressure area, and the image processing apparatus 500 may further include an area updating module. The area updating module is used to: generate a control instruction according to a manipulation operation when the manipulation area detects the manipulation operation; adjust the virtual content displayed in the head-mounted display device according to the control instruction, and send display data corresponding to the adjusted virtual content to the head-mounted display device; and re-acquire the projection area based on the adjusted virtual content and perform the specified processing on the image content corresponding to the re-acquired projection area in the screen content.
Alternatively, when the manipulation area detects a manipulation operation, the area updating module sends manipulation parameters of the manipulation operation to the head-mounted display device, the manipulation parameters being used to instruct the head-mounted display device to adjust the displayed virtual content; the area updating module then re-acquires the projection area based on the adjusted virtual content and performs the specified processing on the image content corresponding to the re-acquired projection area in the screen content.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
To sum up, the image processing apparatus provided in the embodiment of the present application is applied to a terminal device: it obtains the projection area, on the screen of the terminal device, of the virtual content displayed by the head-mounted display device; obtains the image content corresponding to the projection area according to the screen content to be displayed on the screen; performs the specified processing on the image content; and displays the screen content including the processed image content. In this way, when the head-mounted display device displays the virtual content, the interference of the screen content displayed by the terminal device with the virtual content is reduced, the virtual content stands out, and the sense of reality and the display effect of the virtual content in augmented reality are improved. Moreover, only the screen content corresponding to the projection area of the virtual content on the screen is processed, so the virtual content is highlighted while the normal display of other screen content is guaranteed, facilitating interaction between the user and the mobile terminal.
Referring to fig. 16, which shows a schematic structural diagram of a display system provided in an embodiment of the present application, the display system 10 may include: terminal equipment 11 and with terminal equipment 11 communication connection's head mounted display device 12, wherein:
head mounted display device 12 is used to display virtual content.
The terminal device 11 is configured to: obtain the relative spatial position relationship between the terminal device and the head-mounted display device; obtain, according to the relative spatial position relationship, the projection area of the virtual content on the screen of the terminal device 11, where the projection area is the projection area, on the screen, of the virtual content observed by human eyes through the head-mounted display device; obtain, according to the screen content to be displayed on the screen, the image content corresponding to the projection area in the screen content; and perform the specified processing on the image content and display the screen content including the processed image content, where a hue difference value between a first hue of the processed image content and a second hue of the virtual content is greater than a first threshold.
Referring to fig. 17, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a terminal device capable of running an application, such as a smart phone, a tablet computer, an electronic book, or the like. The terminal device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire terminal device 100 using various interfaces and lines, and performs the functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 but instead be implemented by a separate communication chip.
The Memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the terminal device 100 in use, and the like.
The head mounted display device in the present application may include one or more of the following components: the device comprises a processor, a memory, an image acquisition device, a display device, an optical module, a communication module and a power supply. Wherein, image acquisition device can the electricity be connected in display device, and the optical module sets up adjacent display device, and communication module is connected with the treater.
The processor may comprise any suitable type of general or special purpose microprocessor, digital signal processor, or microcontroller. The processor may also process the data and/or signals to determine one or more operating conditions in the system. For example, the processor generates image data of the virtual world by rendering from image data stored in advance, and transmits the image data to the display device for display.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by operating the software programs and modules stored in the memory. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
The image capture device may be used to capture images of the marker and may also be used to obtain environmental information within its field of view. The image acquisition device may be an infrared camera or a color camera, and the specific type of the camera is not limited in the embodiment of the present application.
The display device may include a display control unit. The display control unit is used for receiving the display image of the virtual content rendered by the processor, and then displaying and projecting the display image onto the optical module, so that a user can view the virtual content through the optical module. The display device may be a display screen or a projection device, and may be used to display an image.
The optical module may be a transflective lens, so that the display image presented by the display device can be reflected by the optical module directly into the user's eyes. Because the user can also observe the real environment through the optical module while seeing the display image projected by the display device, the image received by the user's eyes is an augmented reality scene in which the display image of the virtual content is superimposed on the real environment.
The communication module can be a Bluetooth, WiFi (Wireless Fidelity), or ZigBee module, and the head-mounted display device can be communicatively connected to the terminal device through the communication module. The head-mounted display device in communication connection with the terminal device can exchange information and instructions with the terminal device. For example, the head-mounted display device may receive image data transmitted from the terminal device via the communication module, and generate and display virtual content of the virtual world from the received image data.
The power supply can supply power for the whole head-mounted display device, and the normal operation of each part of the head-mounted display device is ensured.
Referring to fig. 18, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, and some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application.

Claims (11)

1. An image processing method, applied to a terminal device, wherein the terminal device is in communication connection with a head-mounted display device, the method comprising the following steps:
acquiring a relative spatial position relationship between the terminal device and the head-mounted display device;
acquiring, according to the relative spatial position relationship, a projection area of the virtual content displayed by the head-mounted display device on a screen of the terminal device, wherein the projection area is a projection area, on the screen, of the virtual content as observed by human eyes through the head-mounted display device;
acquiring, from the screen content to be displayed on the screen, the image content corresponding to the projection area; and
performing specified processing on the image content, and displaying the screen content including the image content after the specified processing, wherein a hue difference value between a first hue of the image content after the specified processing and a second hue of the virtual content is greater than a first threshold value.
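For illustration, a minimal Python sketch of the hue constraint in claim 1. The claim fixes no hue representation, threshold value, or adjustment strategy; the HSV hue circle, the 90-degree first threshold, and the 180-degree shift below are assumptions chosen for the example:

```python
import colorsys

def hue_of(rgb):
    """Hue in degrees [0, 360) of an (r, g, b) triple with channels in [0, 1]."""
    h, _, _ = colorsys.rgb_to_hsv(*rgb)
    return h * 360.0

def hue_distance(h1, h2):
    """Circular hue difference in degrees, always in [0, 180]."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def shift_hue(rgb, degrees):
    """Rotate the hue of an (r, g, b) triple by the given angle."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

def process_pixel(pixel, virtual_rgb, first_threshold=90.0):
    """Ensure the pixel's hue differs from the virtual content's hue by more
    than the first threshold, rotating it to the opposite hue if it does not."""
    if hue_distance(hue_of(pixel), hue_of(virtual_rgb)) > first_threshold:
        return pixel
    return shift_hue(pixel, 180.0)

# Screen content that is red like the virtual content gets pushed toward cyan:
print(process_pixel((0.9, 0.2, 0.2), (1.0, 0.1, 0.1)))
```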
2. The method according to claim 1, wherein the performing of the specified processing on the image content comprises any one of the following:
overlaying overlay content of a specified color on the image content;
adjusting the color of the image content to a specified color; or
adjusting the transparency value of the image content to a specified transparency value;
wherein a color difference between the specified color and the color of the virtual content is greater than a second threshold, and a difference between the specified transparency value and the transparency value of the virtual content is greater than a third threshold.
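The three options in claim 2 correspond to simple per-region operations. A sketch, assuming float image arrays in [0, 1]; the Euclidean RGB distance and the 0.5 threshold values stand in for the claim's unspecified color-difference metric and second/third thresholds:

```python
import numpy as np

def overlay_specified_color(region, color, alpha=0.8):
    """Option 1: blend an overlay of the specified color over the region."""
    return (1.0 - alpha) * region + alpha * np.asarray(color, dtype=float)

def set_specified_color(region, color):
    """Option 2: replace the region's color with the specified color."""
    out = np.empty_like(region)
    out[...] = color
    return out

def set_specified_transparency(alpha_channel, alpha_value):
    """Option 3: set the region's transparency to the specified value."""
    return np.full_like(alpha_channel, alpha_value)

def color_distance(c1, c2):
    """Euclidean RGB distance, used here as claim 2's color difference."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))

# The claim only constrains how far the chosen values lie from the virtual content:
virtual_color, virtual_alpha = (0.1, 0.8, 0.9), 1.0
specified_color, specified_alpha = (0.0, 0.0, 0.0), 0.2
assert color_distance(specified_color, virtual_color) > 0.5  # illustrative second threshold
assert abs(specified_alpha - virtual_alpha) > 0.5            # illustrative third threshold
```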
3. The method according to claim 1, wherein the acquiring, according to the relative spatial position relationship, a projection area of the virtual content displayed by the head-mounted display device on the screen of the terminal device comprises:
acquiring a left-eye display image and a right-eye display image used for displaying the virtual content in the head-mounted display device;
acquiring a first projection area of the left-eye display image on the screen of the terminal device according to the left-eye display image and the relative spatial position relationship;
acquiring a second projection area of the right-eye display image on the screen according to the right-eye display image and the relative spatial position relationship; and
acquiring a composite area of the first projection area and the second projection area, and taking the composite area as the projection area of the virtual content on the screen.
4. The method according to claim 3, wherein the acquiring a first projection area of the left-eye display image on the screen of the terminal device according to the left-eye display image and the relative spatial position relationship comprises:
acquiring a first contour coordinate set, in a virtual space, of the virtual content in the left-eye display image;
acquiring a screen coordinate set of the screen in the virtual space according to the relative spatial position relationship;
establishing a first connecting line between a left virtual camera in the virtual space and each point in the first contour coordinate set, and acquiring, from the screen coordinate set, the coordinates of the points lying on each established first connecting line, to obtain a first coordinate set; and
acquiring the first projection area of the left-eye display image on the screen according to the first coordinate set;
and wherein the acquiring a second projection area of the right-eye display image on the screen according to the right-eye display image and the relative spatial position relationship comprises:
acquiring a second contour coordinate set, in the virtual space, of the virtual content in the right-eye display image;
acquiring the screen coordinate set of the screen in the virtual space according to the relative spatial position relationship;
establishing a second connecting line between a right virtual camera in the virtual space and each point in the second contour coordinate set, and acquiring, from the screen coordinate set, the coordinates of the points lying on each established second connecting line, to obtain a second coordinate set; and
acquiring the second projection area of the right-eye display image on the screen according to the second coordinate set.
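Geometrically, claims 3 and 4 project the contour of the virtual content from each virtual camera onto the screen plane and merge the two footprints. A sketch, assuming the screen is modeled in virtual space as a plane given by one point on it and its normal; the eye positions and coordinates are illustrative:

```python
import numpy as np

def project_contour_to_screen(camera_pos, contour_points, screen_point, screen_normal):
    """Intersect the line from the virtual camera through each contour point with
    the screen plane. Lines parallel to the plane are skipped."""
    hits = []
    for p in np.asarray(contour_points, dtype=float):
        direction = p - camera_pos
        denom = np.dot(screen_normal, direction)
        if abs(denom) < 1e-9:            # connecting line never meets the screen plane
            continue
        t = np.dot(screen_normal, screen_point - camera_pos) / denom
        hits.append(camera_pos + t * direction)
    return np.array(hits)

left_cam = np.array([-0.03, 0.0, 0.0])   # illustrative left/right virtual camera positions
right_cam = np.array([0.03, 0.0, 0.0])
contour = [[0.0, 0.1, 0.5], [0.1, 0.0, 0.5], [-0.1, 0.0, 0.5]]
screen_pt, screen_n = np.array([0.0, 0.0, 0.3]), np.array([0.0, 0.0, 1.0])

first = project_contour_to_screen(left_cam, contour, screen_pt, screen_n)
second = project_contour_to_screen(right_cam, contour, screen_pt, screen_n)
composite = np.vstack([first, second])   # the union of both point sets approximates
                                         # the claimed composite projection area
```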
5. The method according to claim 1, wherein a marker is provided on the terminal device, and the acquiring a relative spatial position relationship between the terminal device and the head-mounted display device comprises:
receiving a marker image containing the marker sent by the head-mounted display device, wherein the marker image is acquired by the head-mounted display device when it captures an image of the marker; and
identifying the marker in the marker image, and acquiring the relative spatial position relationship between the terminal device and the head-mounted display device based on the identification result.
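Claim 5 names neither the marker type nor the recognition algorithm. Purely as an assumption for illustration, the sketch below uses a square ArUco tag and the classic OpenCV aruco API; solvePnP recovers the marker's rotation and translation in the head-mounted camera frame, which gives the claimed relative spatial position relationship:

```python
import numpy as np
import cv2  # assumption: OpenCV with the aruco contrib module; the patent names no library

def relative_pose_from_marker(marker_image, marker_size_m, camera_matrix, dist_coeffs):
    """Estimate the marker's pose in the head-mounted camera frame from one image.
    The marker attached to the terminal device is assumed to be a square ArUco tag
    of known side length (in meters)."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(marker_image, dictionary)
    if ids is None:
        raise RuntimeError("no marker found in the marker image")
    half = marker_size_m / 2.0
    # 3D corners of the square marker in its own frame (top-left, clockwise)
    obj = np.array([[-half,  half, 0.0], [ half,  half, 0.0],
                    [ half, -half, 0.0], [-half, -half, 0.0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix
    return rotation, tvec               # terminal (marker) pose relative to the HMD camera
```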
6. The method according to any one of claims 1 to 5, wherein the terminal device comprises a manipulation area, the manipulation area comprises at least one of a key, a touch area, and a pressure area, and after the performing of the specified processing on the image content and the displaying of the screen content containing the image content after the specified processing, the method further comprises:
when the manipulation area detects a manipulation operation, generating a control instruction according to the manipulation operation;
adjusting the virtual content displayed in the head-mounted display device according to the control instruction, and sending display data corresponding to the adjusted virtual content to the head-mounted display device; and
reacquiring the projection area based on the adjusted virtual content, and performing the specified processing on the image content corresponding to the reacquired projection area in the screen content;
or
when the manipulation area detects a manipulation operation, sending a manipulation parameter of the manipulation operation to the head-mounted display device, wherein the manipulation parameter is used for instructing the head-mounted display device to adjust the displayed virtual content; and
reacquiring the projection area based on the adjusted virtual content, and performing the specified processing on the image content corresponding to the reacquired projection area in the screen content.
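The first branch of claim 6 keeps the adjustment logic on the terminal device. A control-flow sketch in which every helper (adjust_virtual_content, recompute_projection_area, and the hmd_link and screen objects) is hypothetical shorthand for the steps named in the claim:

```python
from dataclasses import dataclass

@dataclass
class ControlInstruction:
    """Hypothetical instruction derived from a key, touch, or pressure operation."""
    kind: str      # e.g. "rotate", "scale", "translate"
    value: float

def on_manipulation(operation, hmd_link, screen):
    # Generate a control instruction from the detected manipulation operation.
    instruction = ControlInstruction(kind=operation.kind, value=operation.value)
    # Adjust the virtual content and push the new display data to the HMD.
    display_data = adjust_virtual_content(instruction)   # hypothetical renderer hook
    hmd_link.send(display_data)
    # Reacquire the projection area for the adjusted content (per claims 3-4)
    # and reapply the specified processing to the newly covered screen region.
    region = recompute_projection_area()                 # hypothetical helper
    screen.apply_specified_processing(region)
```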
7. An image processing method applied to a head-mounted display device, wherein the head-mounted display device is in communication connection with a terminal device, the method comprising:
displaying the virtual content;
acquiring a relative spatial position relationship between the terminal device and the head-mounted display device;
acquiring a projection area of the virtual content on a screen of the terminal device according to the relative spatial position relationship, wherein the projection area is a projection area, on the screen, of the virtual content as observed by human eyes through the head-mounted display device; and
sending data of the projection area to the terminal device, wherein the data of the projection area is used for instructing the terminal device to perform specified processing on the image content corresponding to the projection area in the screen content to be displayed and to display the screen content containing the image content after the specified processing, and a hue difference value between a first hue of the image content after the specified processing and a second hue of the virtual content is greater than a first threshold value.
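Claim 7 says only that "data of the projection area" is sent to the terminal device; the wire format is left open. One plausible encoding, shown strictly as an assumption, is a JSON message carrying the region's screen-space polygon vertices over the communication link described earlier:

```python
import json

def projection_region_message(polygon_px):
    """Encode the projection region as a JSON payload of screen-space vertices.
    The message schema ("type", "vertices_px") is an assumption for illustration."""
    return json.dumps({
        "type": "projection_region",
        "vertices_px": [[float(x), float(y)] for x, y in polygon_px],
    }).encode("utf-8")

# Example: a rectangular projection region, sent over the Bluetooth/WiFi link
payload = projection_region_message([(100, 220), (340, 220), (340, 460), (100, 460)])
```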
8. An image processing apparatus, applied to a terminal device, wherein the terminal device is in communication connection with a head-mounted display device, the apparatus comprising:
a position acquisition module, configured to acquire a relative spatial position relationship between the terminal device and the head-mounted display device;
an area acquisition module, configured to acquire, according to the relative spatial position relationship, a projection area of the virtual content displayed by the head-mounted display device on a screen of the terminal device, wherein the projection area is a projection area, on the screen, of the virtual content as observed by human eyes through the head-mounted display device;
a content acquisition module, configured to acquire, from the screen content to be displayed on the screen, the image content corresponding to the projection area; and
an image processing module, configured to perform specified processing on the image content and display the screen content containing the image content after the specified processing, wherein a hue difference value between a first hue of the image content after the specified processing and a second hue of the virtual content is greater than a first threshold value.
9. A display system, comprising a terminal device and a head-mounted display device, the terminal device being in communication connection with the head-mounted display device, wherein:
the head-mounted display device is used for displaying virtual content;
the terminal device is configured to: acquire a relative spatial position relationship between the terminal device and the head-mounted display device; acquire a projection area of the virtual content on a screen of the terminal device according to the relative spatial position relationship, wherein the projection area is a projection area, on the screen, of the virtual content as observed by human eyes through the head-mounted display device; acquire, from the screen content to be displayed on the screen, the image content corresponding to the projection area; and perform specified processing on the image content and display the screen content including the image content after the specified processing, wherein a hue difference value between a first hue of the image content after the specified processing and a second hue of the virtual content is greater than a first threshold value.
10. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method according to any one of claims 1 to 6.
11. A computer-readable storage medium having program code stored thereon, the program code being invocable by a processor to perform the method according to any one of claims 1 to 6.
CN201910295517.0A 2019-01-03 2019-04-12 Image processing method, device, system, terminal device and storage medium Active CN111818326B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910295517.0A CN111818326B (en) 2019-04-12 2019-04-12 Image processing method, device, system, terminal device and storage medium
PCT/CN2019/130646 WO2020140905A1 (en) 2019-01-03 2019-12-31 Virtual content interaction system and method

Publications (2)

Publication Number Publication Date
CN111818326A (en) 2020-10-23
CN111818326B CN111818326B (en) 2022-01-28

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140266985A1 (en) * 2013-03-15 2014-09-18 Lockheed Martin Corporation System and method for chromatic aberration correction for an image projection system
CN107250891A (en) * 2015-02-13 2017-10-13 Otoy公司 Being in communication with each other between head mounted display and real-world objects
CN108780578A (en) * 2016-03-15 2018-11-09 奇跃公司 Direct light compensation technique for augmented reality system
CN106846438A (en) * 2016-12-30 2017-06-13 深圳市幻实科技有限公司 A kind of jewellery try-in method, apparatus and system based on augmented reality

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112583976A (en) * 2020-12-29 2021-03-30 咪咕文化科技有限公司 Graphic code display method, equipment and readable storage medium
CN112583976B (en) * 2020-12-29 2022-02-18 咪咕文化科技有限公司 Graphic code display method, equipment and readable storage medium
US20220414990A1 (en) * 2021-06-25 2022-12-29 Acer Incorporated Augmented reality system and operation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Image processing method, device, system, terminal device and storage medium
Effective date of registration: 20221223
Granted publication date: 20220128
Pledgee: Shanghai Pudong Development Bank Co., Ltd., Guangzhou Branch
Pledgor: GUANGDONG VIRTUAL REALITY TECHNOLOGY Co.,Ltd.
Registration number: Y2022980028733