WO2024022070A1 - Screen display method, apparatus, device and medium - Google Patents

Screen display method, apparatus, device and medium (画面显示方法、装置、设备及介质)

Info

Publication number
WO2024022070A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
head-mounted device
dimensional model
candidate
Prior art date
Application number
PCT/CN2023/105985
Other languages
English (en)
French (fr)
Inventor
秦瑞峰
陈丽莉
赵砚秋
韩鹏
张浩
何惠东
姜倩文
杜伟华
石娟娟
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Publication of WO2024022070A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454: Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the present invention relates to the field of computer technology, and in particular, to a screen display method, device, equipment and medium.
  • the historical relics of the grottoes are generally large in size.
  • tourists are generally not allowed to view them at close range, so as to avoid artificial damage to the relics.
  • as a result, tourists cannot appreciate the full picture of the grotto relics when visiting, nor can they appreciate their magnificence and majesty. Therefore, there is an urgent need for a screen display method to help tourists better visit historical cultural relics such as grottoes.
  • the present invention provides a screen display method, device, equipment and medium to solve the deficiencies in related technologies.
  • a screen display method is provided, applied to a first head-mounted device, and the method includes:
  • a target three-dimensional model matching the target object is obtained from the target database.
  • the target database is used to store at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model.
  • the candidate three-dimensional model is obtained by completing the incomplete parts and/or restoring the color of an initial three-dimensional model scanned by an image scanning device;
  • a target three-dimensional model matching the target object is obtained from the target database, including:
  • the candidate three-dimensional model corresponding to the feature matrix that matches the target feature matrix is determined as the target three-dimensional model.
  • the construction process of the target database includes:
  • the target database is also used to store media data corresponding to at least one candidate three-dimensional model.
  • media data is used to introduce the candidate three-dimensional model in the form of video or audio;
  • after the target three-dimensional model is displayed, the method further includes:
  • the first prompt information is displayed, and the first prompt information is used to inquire whether to play media data corresponding to the target three-dimensional model.
  • the method further includes:
  • in response to receiving first feedback information based on the first prompt information, the media data corresponding to the target three-dimensional model is obtained from the target database, where the first feedback information is used to indicate that the media data corresponding to the target three-dimensional model needs to be played;
  • the method further includes:
  • the displayed picture of the target three-dimensional model is adjusted according to the display angle and/or the picture magnification indicated by the picture adjustment operation.
  • the displayed picture of the target three-dimensional model is adjusted according to the display angle and/or picture magnification indicated by the picture adjustment operation, including at least one of the following:
  • the displayed screen of the target three-dimensional model is enlarged or reduced according to the screen magnification indicated by the screen adjustment operation.
  • the method further includes:
  • the second prompt information is displayed, and the second prompt information is used to indicate that the screen magnification indicated by the screen adjustment operation has reached the set magnification.
  • the method further includes any of the following:
  • ambient light brightness is obtained, and based on the ambient light brightness, the transparency of the dimming film of the first head-mounted device is adjusted.
  • the method further includes:
  • the screen displayed on the first head-mounted device is sent to the second head-mounted device.
  • the method further includes:
  • in response to a location acquisition operation on the first head-mounted device, location information of the second head-mounted device is obtained;
  • the method further includes:
  • a first road map is displayed, and the first road map is used to indicate a route from a location where the first head-mounted device is located to a location where the second head-mounted device is located.
  • the method further includes:
  • the method also includes:
  • in response to reaching the gathering time, gathering information is sent to the second head-mounted device, and the second head-mounted device is configured to, upon receiving the gathering information, display a second road map, where the second road map is used to indicate a route from the location of the second head-mounted device to the gathering location.
  • the target object is a historical artifact such as a grotto statue.
  • a screen display device is provided, applied to a first head-mounted device, and the device includes:
  • a collection module, used to collect an image containing a target object;
  • a determination module for determining the target feature matrix of the target object based on the image containing the target object
  • the acquisition module is used to obtain the target three-dimensional model matching the target object from the target database based on the target feature matrix of the target object.
  • the target database is used to store at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model.
  • the candidate three-dimensional model is obtained by completing the incomplete parts and/or restoring the color of an initial three-dimensional model scanned by an image scanning device;
  • the display module is used to display the target 3D model.
  • the acquisition module when used to acquire a target three-dimensional model matching the target object from the target database based on the target feature matrix of the target object, is used to:
  • the candidate three-dimensional model corresponding to the feature matrix that matches the target feature matrix is determined as the target three-dimensional model.
  • the construction process of the target database includes:
  • the target database is also used to store media data corresponding to at least one candidate three-dimensional model, and the media data is used to introduce the candidate three-dimensional model in the form of video or audio;
  • the display module is also used to display first prompt information, and the first prompt information is used to inquire whether to play media data corresponding to the target three-dimensional model.
  • the acquisition module is further configured to obtain media data corresponding to the target three-dimensional model from the target database in response to receiving first feedback information based on the first prompt information, where the first feedback information is used to indicate that the media data corresponding to the target three-dimensional model needs to be played.
  • the device also includes:
  • the playback module is used to play the obtained media data.
  • the device further includes:
  • the adjustment module is configured to adjust the displayed image of the target three-dimensional model according to the display angle and/or the image magnification indicated by the image adjustment operation in response to the image adjustment operation on the first head-mounted device.
  • when adjusting the displayed picture of the target three-dimensional model according to the display angle and/or the picture magnification indicated by the picture adjustment operation in response to the picture adjustment operation on the first head-mounted device, the adjustment module is used for at least one of the following:
  • the displayed screen of the target three-dimensional model is enlarged or reduced according to the screen magnification indicated by the screen adjustment operation.
  • the display module is also configured to, in response to the picture adjustment operation at the second set position of the first head-mounted device, display second prompt information when the picture magnification indicated by the picture adjustment operation reaches the set magnification; the second prompt information is used to indicate that the picture magnification indicated by the picture adjustment operation has reached the set magnification.
  • the adjustment module is also configured to set the dimming film of the first head-mounted device to an opaque state in response to a picture adjustment operation on the first head-mounted device;
  • the adjustment module is also used to obtain the ambient light brightness in response to the picture adjustment operation on the first head-mounted device, and adjust the transparency of the dimming film of the first head-mounted device based on the ambient light brightness.
  • the device further includes:
  • the first sending module is configured to send the screen displayed on the first head-mounted device to the second head-mounted device when the first head-mounted device has been paired with the second head-mounted device.
  • the acquisition module is also configured to, when the first head-mounted device has been paired with the second head-mounted device, acquire location information of the second head-mounted device in response to a location acquisition operation on the first head-mounted device;
  • the display module is also used to display the obtained location information.
  • the display module is also used to display a first road map, and the first road map is used to indicate a route from the location of the first head-mounted device to the location of the second head-mounted device.
  • the device further includes:
  • a setting module for setting the gathering time and gathering location through the first head-mounted device
  • the second sending module is configured to send the gathering information to the second head-mounted device in response to reaching the gathering time when the first head-mounted device has been paired with the second head-mounted device; the second head-mounted device is used to display a second road map when the gathering information is received, and the second road map is used to indicate a route from the location of the second head-mounted device to the gathering location.
  • the target object is a historical artifact such as a grotto statue.
  • a head-mounted device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the operations performed by the screen display method provided by the above-mentioned first aspect and any embodiment of the first aspect are implemented.
  • a computer-readable storage medium is provided, which stores a program; when the program is executed by a processor, the operations performed by the screen display method provided by the above-mentioned first aspect and any embodiment of the first aspect are implemented.
  • a computer program product is provided, including a computer program; when the computer program is executed by a processor, the operations performed by the screen display method provided by the above-mentioned first aspect and any embodiment of the first aspect are implemented.
  • the present invention creates a target database for storing at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model, where each candidate three-dimensional model is obtained by completing the incomplete parts and/or restoring the color of an initial three-dimensional model scanned by an image scanning device. Therefore, after collecting an image containing the target object and determining the target feature matrix of the target object based on that image, a target three-dimensional model matching the target object, with incomplete parts completed and/or color restored, can be obtained from the target database based on the target feature matrix and then displayed. This can improve the display effect of the first head-mounted device, thereby improving the user experience.
  • Figure 1 is a schematic diagram of an implementation environment of a screen display method according to an embodiment of the present invention.
  • Figure 2 is a flow chart of a screen display method according to an embodiment of the present invention.
  • Figure 3 is a schematic diagram of the creation process of a target database according to an embodiment of the present invention.
  • Figure 4 is a flow chart of a screen display method according to an embodiment of the present invention.
  • Figure 5 is a schematic diagram of a picture adjustment method according to an embodiment of the present invention.
  • Figure 6 is a schematic diagram showing the arrangement of a light-adjustable film according to an embodiment of the present invention.
  • Figure 7 is a schematic diagram of the adjustment process of the transparency of a light-adjustable film according to an embodiment of the present invention.
  • Figure 8 is a processing flow chart of a head-mounted device in a parent-child function mode according to an embodiment of the present invention.
  • Figure 9 is a processing flow chart of a head-mounted device in a tour guide function mode according to an embodiment of the present invention.
  • Figure 10 is a block diagram of a screen display device according to an embodiment of the present invention.
  • Figure 11 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention.
  • the present invention provides a screen display method for realizing three-dimensional model restoration and color restoration of target objects based on augmented reality technology, so that users can see what an object looked like before it was damaged, thereby improving the user experience.
  • the target object can be a historical cultural relic such as a grotto statue. That is, the screen display method provided by the present invention can be used to complete the incomplete parts and restore the color of the three-dimensional model of a statue, so that the user can see the original appearance of the statue and experience its true grandeur and majesty.
  • the above screen display method can be executed by a head-mounted device, which can be smart glasses, augmented reality (Augmented Reality, AR) glasses, etc.
  • the present invention does not limit the device type and number of devices of the head-mounted device.
  • Figure 1 is a schematic diagram of an implementation environment of a screen display method according to an embodiment of the present invention.
  • the implementation environment may include a head-mounted device 101 and a server 102.
  • the head-mounted device 101 can be smart glasses, AR glasses, etc.
  • the server 102 can be a server, multiple servers, a server cluster, a cloud computing platform, etc.
  • the head-mounted device 101 can communicate with the server 102 through wired or wireless communication, so that the head-mounted device 101 can complete the incomplete parts and restore the color of the target object through the screen display method provided by the present invention. Display of the target 3D model.
  • the screen display method provided by the present invention can also be applied in other implementation environments.
  • the implementation environment can also only include head-mounted devices 101, and each head-mounted device 101 can be connected through a wired or wireless connection. Communicate using a communication method to implement the screen display method provided by the present invention.
  • Figure 2 is a flow chart of a screen display method according to an embodiment of the present invention. As shown in Figure 2, the method is applied to a first head-mounted device.
  • the first head-mounted device can be any one of multiple head-mounted devices. For example, the first head-mounted device may be a head-mounted device used by a parent, a head-mounted device used by a child, a head-mounted device used by a tour guide, or a head-mounted device used by a tourist, etc., and the present invention is not limited thereto.
  • the screen display method may include:
  • Step 101 Collect images containing the target object.
  • the target object can be any object.
  • the target object can be a grotto-type sculpture of historical culture, such as a statue.
  • the target object can also be other objects.
  • the present invention does not limit the specific type of the target object.
  • the first head-mounted device may have a camera device internally or externally connected, and the present invention does not limit the manner in which the camera device is installed.
  • the frame of the first head-mounted device may be provided with a camera (that is, a camera device).
  • the first head-mounted device can collect images containing the target object through a camera device built in or external to the first head-mounted device.
  • Step 102 Determine the target feature matrix of the target object based on the image containing the target object.
  • Step 103 Based on the target feature matrix of the target object, obtain the target three-dimensional model matching the target object from the target database.
  • the target database is used to store at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model.
  • the candidate three-dimensional model It is obtained by completing incomplete parts and/or color restoration based on the initial three-dimensional model scanned by an image scanning device.
  • Step 104 Display the target three-dimensional model.
  • the first head-mounted device may be provided with a display device. Therefore, the first head-mounted device may display the target three-dimensional model through the display device.
  • the present invention creates a target database for storing at least one candidate three-dimensional model and a feature matrix corresponding to each candidate three-dimensional model, wherein the candidate three-dimensional model completes incomplete parts and/or based on an initial three-dimensional model scanned by an image scanning device. Color restoration is obtained. Therefore, after collecting an image containing the target object and determining the target feature matrix of the target object based on the image containing the target object, the target feature matrix can be obtained from the target database based on the target feature matrix of the target object. Displaying the target 3D model with an object-matched, partially completed and/or color restored target 3D model can improve the display effect of the first head-mounted device, thereby improving the user experience.
  • step 102 when determining the target feature matrix of the target object based on the image containing the target object, it can be implemented in the following manner:
  • the feature extraction model can be multiple types of neural network models.
  • the feature extraction model can be a convolutional neural network (CNN) model.
  • CNN convolutional neural network
  • the feature extraction model can also be other types of models.
  • the present invention does not limit the specific type of feature extraction model.
  • the feature extraction model can include a convolution layer and a pooling layer.
  • the input image can be convolved through the convolution layer to obtain the convolution features of the image, and the convolution features can then be pooled through the pooling layer to obtain the target feature matrix of the target object.
  • the above is only an exemplary way to obtain the target feature matrix of the target object; in more possible implementations, other ways can be used, and the present invention does not limit the specific method used.
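As an illustration of the convolution-plus-pooling extraction described above, the following is a minimal numpy sketch; the Sobel-style kernel, pooling size, and function names are illustrative assumptions, not part of the patent, and a real implementation would use a trained CNN:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN convolution layers."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(features, size=2):
    """Non-overlapping max pooling that shrinks the feature map."""
    h, w = features.shape
    h, w = h - h % size, w - w % size
    return features[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def extract_feature_matrix(image):
    """Toy stand-in for the patent's convolution + pooling feature extractor."""
    edge_kernel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)  # Sobel-like
    return max_pool(convolve2d(image, edge_kernel))

image = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a captured frame
features = extract_feature_matrix(image)
print(features.shape)  # 8x8 -> valid conv 6x6 -> 2x2 pool 3x3
```

The resulting small matrix plays the role of the "target feature matrix" that is later matched against the database.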
  • the above process is explained by taking as an example the case where, after an image containing the target object is acquired, the target feature matrix is determined directly based on the acquired image.
  • the image acquired through step 101 is located in the camera coordinate system, but processing generally needs to be performed on an image in the human eye coordinate system. Therefore, before determining the target feature matrix of the target object based on the image containing the target object, the image can be converted from the camera coordinate system to the human eye coordinate system, so as to ensure the accuracy of the subsequently extracted target feature matrix.
  • when converting the image from the camera coordinate system to the human eye coordinate system, the image can be rotated and/or translated.
  • when rotating and/or translating the image, the image can be rotated according to a set angle and translated according to a set distance.
  • the set angle and the set distance may be predetermined.
  • the set angle and set distance can be obtained as follows:
  • the user can see not only the actual scene through the head-mounted device, but also the screen picture displayed on the display device of the head-mounted device.
  • the user can adjust the screen picture on the display device through controls, for example by rotating or translating it, until it overlaps the actual scene.
  • the head-mounted device can then obtain the angle through which the screen picture was rotated and/or the distance through which it was moved from its initial position to the position where it overlaps the actual scene, and use the obtained angle as the set angle and the obtained distance as the set distance, so as to achieve the acquisition of the set angle and the set distance.
  • the controls involved in the above process can be in various forms such as keys, knobs, touch buttons, etc.
  • the present invention does not limit the specific types of controls.
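A minimal 2D sketch of the rotate-then-translate conversion from the camera coordinate system to the human eye coordinate system; the calibration values, function name, and 2D simplification are illustrative assumptions, and an actual device would apply a calibrated 3D extrinsic transform:

```python
import numpy as np

def camera_to_eye(points, angle_deg, offset):
    """Rotate point coordinates by the set angle, then translate by the set
    distance, mirroring the calibration procedure described above."""
    theta = np.radians(angle_deg)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    return points @ rotation.T + np.asarray(offset)

# hypothetical calibration values obtained from the user's overlap adjustment
SET_ANGLE_DEG = 90.0
SET_DISTANCE = (5.0, -2.0)

pts = np.array([[1.0, 0.0], [0.0, 1.0]])  # sample points in the camera frame
eye_pts = camera_to_eye(pts, SET_ANGLE_DEG, SET_DISTANCE)
print(eye_pts)
```

Once the set angle and set distance are stored, the same transform can be applied to every captured frame before feature extraction.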
  • the target three-dimensional model corresponding to the target object can be obtained from the target database based on the obtained target feature matrix.
  • the target database may be a database associated with the head-mounted device, and the target database may be pre-constructed.
  • the target database may be pre-created by relevant technical personnel through computer equipment or the head-mounted device.
  • the target database can store at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model. The construction process of the target database is introduced below.
  • the construction process of the target database may include:
  • Step 1 Obtain the initial three-dimensional model scanned by the image scanning device.
  • the image scanning device may be a three-dimensional scanner, a drone, etc.
  • the present invention does not limit the type of the image scanning device.
  • for example, for a relatively large target object, a three-dimensional scanner and a drone can both be used as image scanning devices, so that point cloud data of different parts of the target object can be collected through the three-dimensional scanner and the drone respectively, so as to obtain complete data of the target object.
  • a three-dimensional model can then be reconstructed based on the scanned data (such as point cloud data, image data, etc.) to obtain the initial three-dimensional model.
  • Step 2 Obtain a candidate three-dimensional model obtained by completing incomplete parts and/or color restoration based on the initial three-dimensional model.
  • the target object may be relatively old and may have been corroded by some substances in the natural environment, the target object may be damaged or the color of the target object may disappear.
  • the original style and color of the target object can be completed and/or the color restored based on the initial three-dimensional model.
  • historical relic experts can be invited to provide guidance, so that relevant technical personnel can complete the incomplete parts and/or color restoration of the initial three-dimensional model according to the guidance of the historical relic expert, so that the computer equipment (or head-mounted device) can obtain the candidate 3D model obtained through incomplete partial completion and/or color restoration.
  • Step 3 Extract the feature matrix of the candidate 3D model at different angles.
  • the angle of the obtained candidate three-dimensional model can be adjusted to obtain images of the candidate three-dimensional model at different angles, and then, based on the images of the candidate three-dimensional model at different angles, the feature matrix of the candidate three-dimensional model at each angle is determined.
  • the method used to determine these feature matrices is consistent with the method used in step 102 to determine the target feature matrix of the target object based on the image containing the target object, thereby ensuring that the subsequent matching process based on the feature matrices can proceed smoothly.
  • Step 4 Store the candidate three-dimensional model and the extracted feature matrix into the target database.
  • the candidate three-dimensional model can be stored in association with the corresponding feature matrices, so that the corresponding three-dimensional model can later be determined based on a feature matrix.
  • Figure 3 is a schematic diagram of the creation process of a target database according to an embodiment of the present invention.
  • as shown in Figure 3, an initial three-dimensional model of an object can be established by scanning the object; the incomplete parts of the initial three-dimensional model are then completed and its color restored to obtain a candidate three-dimensional model; the feature matrices of the candidate three-dimensional model at different angles are extracted to obtain the feature matrices of the same object at different angles; and the candidate three-dimensional model is stored in association with the corresponding feature matrices, whereby the target database is constructed.
  • the construction of the target database can be completed, so that after the target feature matrix is determined through step 102, the feature matrix can be matched through step 103 to achieve the acquisition of the target three-dimensional model.
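The four construction steps above can be sketched as follows; all helper names, the angle set, and the toy renderer/extractor are illustrative stand-ins, not the patent's actual pipeline:

```python
import numpy as np

def build_target_database(candidate_models, render_at_angle, extract_features,
                          angles=(0, 90, 180, 270)):
    """Steps 1-4: for each restored candidate model, render it at several
    angles, extract one feature matrix per angle, and store the model
    together with its feature matrices."""
    database = []
    for model in candidate_models:
        matrices = [extract_features(render_at_angle(model, a)) for a in angles]
        database.append({"model": model, "feature_matrices": matrices})
    return database

# toy stand-ins for the scan/restoration pipeline and the CNN extractor
render_at_angle = lambda model, a: np.full((4, 4), model["seed"] + a, float)
extract_features = lambda img: img[:2, :2]  # pretend convolution + pooling

models = [{"name": "statue_A", "seed": 1.0}, {"name": "statue_B", "seed": 7.0}]
db = build_target_database(models, render_at_angle, extract_features)
print(len(db), len(db[0]["feature_matrices"]))
```

Storing the model and its per-angle matrices in one record is what makes the later lookup (feature matrix to model) possible.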
  • step 103 when obtaining a target three-dimensional model matching the target object from the target database based on the target feature matrix of the target object, the following steps may be included:
  • Step 1031 Match the target feature matrix with the feature matrix stored in the target database.
  • the target feature matrix and the feature matrix stored in the target database can be compared one by one to achieve matching between the target feature matrix and the feature matrix stored in the target database.
  • Step 1032 Determine the candidate three-dimensional model corresponding to the feature matrix that matches the target feature matrix as the target three-dimensional model.
  • one candidate three-dimensional model can correspond to multiple feature matrices. After the feature matrix matching the target feature matrix is determined in step 1031, the candidate three-dimensional model corresponding to that feature matrix can be determined as the target three-dimensional model, thereby obtaining the target three-dimensional model from the target database.
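Steps 1031 and 1032 can be sketched as follows; the cosine-similarity criterion and the threshold are illustrative assumptions, since the patent does not specify a particular matching metric:

```python
import numpy as np

def match_target_model(target_matrix, database, threshold=0.9):
    """Compare the target feature matrix with every stored feature matrix
    one by one (step 1031) and return the candidate model that owns the
    best-matching matrix (step 1032). Returns None if nothing matches."""
    best_model, best_score = None, threshold
    t = target_matrix.ravel()
    for entry in database:
        for m in entry["feature_matrices"]:
            score = np.dot(t, m.ravel()) / (np.linalg.norm(t) * np.linalg.norm(m) + 1e-12)
            if score > best_score:
                best_model, best_score = entry["model"], score
    return best_model  # None means: keep capturing frames

database = [
    {"model": "statue_A", "feature_matrices": [np.array([[1.0, 0.0], [0.0, 1.0]])]},
    {"model": "statue_B", "feature_matrices": [np.array([[0.0, 1.0], [1.0, 0.0]])]},
]
target = np.array([[0.9, 0.1], [0.1, 0.9]])
matched = match_target_model(target, database)
print(matched)
```

Because each model is stored with matrices from several angles, a match at any single angle is enough to identify the model.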
  • the target three-dimensional model can be displayed through step 104, so that the user can see the displayed target three-dimensional model; since the target three-dimensional model has been obtained through incomplete-part completion and/or color restoration, it can better restore the original appearance and color of the target object, thereby improving the user experience.
  • the target database can also store media data corresponding to each candidate 3D model.
  • the media data can be used to introduce the candidate three-dimensional model in the form of video or audio, for example, introducing the construction time and construction history of the candidate three-dimensional model, that is, the construction time and construction history of the target object, so that users can better understand the target object through the media data.
  • the media data can be pre-recorded and stored in the target database, with the media data stored in association with the corresponding candidate three-dimensional model, so that after the target three-dimensional model is determined in step 103, the media data corresponding to the target three-dimensional model can be determined directly.
  • the first prompt information may also be displayed to ask whether to play the media data corresponding to the target three-dimensional model through the first prompt information.
  • the user only needs to provide feedback on the first prompt information, and the head-mounted device can then determine whether the media data corresponding to the target three-dimensional model needs to be played.
  • for example, the first head-mounted device may display a first feedback control and a second feedback control. The user can trigger the first feedback information by triggering the first feedback control; the first feedback information is used to indicate that the media data corresponding to the target three-dimensional model needs to be played, so that the first head-mounted device can determine, based on the received first feedback information, that the media data corresponding to the target three-dimensional model needs to be played.
  • alternatively, the user can trigger the second feedback information by triggering the second feedback control; the second feedback information is used to indicate that the media data corresponding to the target three-dimensional model does not need to be played, so that the first head-mounted device can determine, based on the received second feedback information, that the media data corresponding to the target three-dimensional model does not need to be played.
  • the first head-mounted device can, in response to receiving the first feedback information based on the first prompt information, obtain the media data corresponding to the target three-dimensional model from the target database and then play it, so that users can learn about the target object through the played media data.
  • FIG. 4 is a flow chart of a screen display method according to an embodiment of the present invention.
  • as shown in FIG. 4, the camera of the first head-mounted device can capture images in real time, and the first head-mounted device can convert each captured image from the camera coordinate system to the human-eye coordinate system, extract the target feature matrix of the target object from the image in the human-eye coordinate system, and then match the target feature matrix against the candidate 3D models in the database.
  • if a 3D model is matched, it can be output to the display end, and the user can also be prompted whether to play the media data corresponding to the matched 3D model, that is, the tour introduction; if no candidate 3D model is matched from the database, the camera simply continues to capture the scene in real time.
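As a rough illustration of the matching step described above, the following sketch compares the extracted target feature matrix against every feature matrix stored for the candidate models and returns the best match. The flat-list feature representation, the cosine-similarity criterion, the 0.9 threshold, and all names are illustrative assumptions, not details taken from the patent.

```python
import math

def cosine(a, b):
    """Cosine similarity between two flattened feature matrices (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_model(target_features, database, threshold=0.9):
    """Return the id of the candidate model whose stored feature matrix best
    matches `target_features`, or None if nothing clears `threshold`.

    `database` maps a model id to a list of feature matrices, one per viewing
    angle, as stored during database construction.
    """
    best_id, best_score = None, threshold
    for model_id, feature_list in database.items():
        for features in feature_list:
            score = cosine(target_features, features)
            if score > best_score:
                best_id, best_score = model_id, score
    return best_id
```

A `None` result corresponds to the "keep capturing the scene in real time" branch of the flow chart.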
  • the above process mainly introduces the process of obtaining the target 3D model and displaying the target 3D model.
  • the user can also adjust the displayed image of the target 3D model according to his own needs.
  • the user can perform a picture adjustment operation through the first head-mounted device, so that the first head-mounted device can, in response to the picture adjustment operation, adjust the displayed picture of the target 3D model according to the display viewing angle and/or picture magnification indicated by the operation.
  • the following describes how the picture is adjusted according to the display viewing angle indicated by the picture adjustment operation, and how it is adjusted according to the picture magnification indicated by the picture adjustment operation.
  • the user can trigger the picture adjustment operation at the first set position of the first head-mounted device, so that, in response to the picture adjustment operation at the first set position, the first head-mounted device displays the picture of the target three-dimensional model under the display viewing angle indicated by the operation.
  • the first set position may be an area with a Touch function on the left temple of the first head-mounted device, and the user can perform a sliding operation in that area to trigger the picture adjustment operation.
  • different sliding directions may correspond to different adjustment methods of the display viewing angle.
  • for example, when sliding toward the tail of the temple in the Touch-enabled area on the left temple, the displayed picture can be adjusted to the picture seen from a position higher than the user's actual height, thereby adjusting the display viewing angle.
  • correspondingly, when sliding away from the tail of the temple in the Touch-enabled area on the left temple, the displayed picture can be adjusted to the picture seen from a position lower than the user's actual height.
  • normally, the pictures people see are the effects seen while standing on the ground, and the effects seen at different heights are different.
  • taking a Buddha statue as an example, the proportions of its various parts look well coordinated when viewed from the ground; however, because the picture seen by the human eye renders near objects larger and far objects smaller, the true proportions of the statue's head differ from how it appears from below. Adjusting the display viewing angle therefore lets the user see the statue as it would appear from other heights.
  • the user can trigger the picture adjustment operation at the second set position of the first head-mounted device, so that, in response to the picture adjustment operation at the second set position, the first head-mounted device enlarges or reduces the displayed picture of the target three-dimensional model according to the picture magnification indicated by the operation.
  • the second set position may be an area with a Touch function on the right temple of the first head-mounted device, and the user can perform a sliding operation in that area to trigger the picture adjustment operation.
  • different sliding directions can correspond to different ways of adjusting the picture magnification. For example, when sliding toward the tail of the temple in the Touch-enabled area on the right temple, the displayed picture can be enlarged; correspondingly, when sliding away from the tail of the temple, the displayed picture can be reduced.
  • the second prompt information may be used to indicate that the screen magnification indicated by the screen adjustment operation has reached the set magnification.
  • the second prompt information may be voice prompt information, text prompt information, etc. The present invention does not limit the specific type of the second prompt information.
  • the set magnification factor may be preset. When a person observes an object carefully, the distance between the eyes and the object is generally fixed, and at that distance the observation effect is best. The set magnification can therefore be chosen to be equivalent to viewing the object from that distance, giving the viewer the feeling of viewing the object at close range, which can improve the user experience.
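The magnification limit described above can be sketched as a simple clamp that also reports when the preset ceiling is hit (the condition that triggers the second prompt information). The `SET_MAGNIFICATION` value and the function name are hypothetical.

```python
SET_MAGNIFICATION = 4.0  # assumed preset upper limit; the real value is device-specific

def apply_zoom(current, delta):
    """Apply one zoom step from a sliding gesture.

    Returns the new magnification and a flag saying whether the preset
    limit was reached, in which case the device would show the prompt.
    """
    new = max(0.1, current * delta)   # keep the zoom strictly positive
    limited = new >= SET_MAGNIFICATION
    if limited:
        new = SET_MAGNIFICATION       # clamp at the set magnification
    return new, limited
```

A caller would display the second prompt information whenever the returned flag is true.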
  • Figure 5 is a schematic diagram of a picture adjustment method according to an embodiment of the present invention.
  • the user can trigger the picture adjustment operation on the right temple, so that the first head-mounted device adjusts the picture according to the picture magnification indicated by the operation; the user can also trigger the picture adjustment operation on the left temple, so that the first head-mounted device adjusts the picture according to the display viewing angle indicated by the operation.
  • the lens of the head-mounted device can also be provided with a dimming film.
  • the opacity of the dimming film can be adjusted to provide the user with a better viewing effect.
  • FIG. 6 is a schematic diagram illustrating the arrangement of a light-adjustable film according to an embodiment of the present invention. As shown in FIG. 6 , the light-adjustable film can be arranged on the surface of a lens.
  • the dimming film can be a film made of polymer dispersed liquid crystal (Polymer Dispersed Liquid Crystal, PDLC) material, and the opacity of the film can be controlled by controlling the voltage.
  • the dimming film of the first head-mounted device may be set to an opaque state in response to a picture adjustment operation on the first head-mounted device.
  • in this way, the user sees only the screen picture, which prevents the screen picture from failing to coincide with the actual scene during picture adjustment operations and affecting the viewing experience, and improves the user's viewing immersion.
  • ambient light brightness may be obtained in response to a picture adjustment operation on the first head-mounted device, and the transparency of the dimming film of the first head-mounted device may be adjusted based on the ambient light brightness.
  • different ambient light brightness levels can correspond to different transparency levels of the dimming film, and the correspondence can be preset, so that after acquiring the ambient light brightness, the head-mounted device can directly determine from the preset correspondence to what extent the transparency of the dimming film needs to be adjusted, and then adjust it accordingly.
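A minimal sketch of such a preset correspondence follows; the lux break points and transparency values are purely illustrative assumptions, not values from the patent.

```python
# Illustrative preset mapping from ambient-light brightness (lux) to
# dimming-film transparency (0.0 = fully opaque, 1.0 = fully transparent).
BRIGHTNESS_TO_TRANSPARENCY = [
    (50, 0.9),    # dim indoor light: keep the film mostly transparent
    (500, 0.5),   # ordinary daylight shade: dim moderately
    (5000, 0.2),  # bright outdoor light: dim heavily
]

def film_transparency(ambient_lux):
    """Look up the film transparency for a measured ambient brightness."""
    for upper_bound, transparency in BRIGHTNESS_TO_TRANSPARENCY:
        if ambient_lux <= upper_bound:
            return transparency
    return 0.05   # very bright: set the film nearly opaque
```

The device would call this after reading the light sensor and drive the PDLC film voltage accordingly.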
  • Figure 7 is a schematic diagram of the adjustment process of the transparency of a dimming film according to an embodiment of the present invention.
  • when a picture adjustment operation is detected, the dimming film can be set directly to the opaque state (that is, completely opaque); alternatively, the ambient light brightness can be obtained and the transparency of the dimming film adjusted according to it.
  • parents often take their children on tours, and the pictures seen by parents and children are different. A parent can synchronize their own picture to the child so that the child can experience an adult's perspective.
  • in the case where the first head-mounted device is paired with the second head-mounted device, the first head-mounted device may send the picture displayed on it to the second head-mounted device.
  • the first head-mounted device may be the head-mounted device used by the parent, and the second head-mounted device may be the head-mounted device used by the child.
  • Bluetooth pairing can be used to pair the first head-mounted device with the second head-mounted device, and subsequently the picture of the first head-mounted device can be transmitted to the second head-mounted device through Bluetooth transmission, so as to realize picture synchronization between the two devices.
  • the picture synchronization of the first head-mounted device and the second head-mounted device can also be achieved through Wireless Fidelity (WiFi) transmission.
  • the first head-mounted device may, in response to a location acquisition operation on the first head-mounted device, obtain the location information of the second head-mounted device and display it.
  • the first head-mounted device can provide a location acquisition control, and the user can trigger the location acquisition operation by triggering that control, so that the first head-mounted device acquires the location information of the second head-mounted device in response.
  • each head-mounted device can be equipped with a Global Positioning System (GPS) module, so that the second head-mounted device can obtain its own location information and send it to the first head-mounted device, enabling the first head-mounted device to obtain the location of the second head-mounted device.
  • the first head-mounted device may also display a first road map based on the location of the first head-mounted device and the location of the second head-mounted device, the first road map being used to indicate the route from the location of the first head-mounted device to the location of the second head-mounted device, so that the parent can quickly reach the child's location by following it.
  • Figure 8 is a processing flow chart of a head-mounted device in a parent-child function mode according to an embodiment of the present invention.
  • as shown in Figure 8, the first head-mounted device used by the parent and the second head-mounted device used by the child can be paired so that parent and child share each other's location, allowing the parent to know the child's location at any time; in addition, when the parent's perspective needs to be synchronized, the picture of the first head-mounted device can be transmitted to the second head-mounted device, realizing picture transfer from the parent device to the child device.
  • in another scenario, a tour guide leads tourists on a tour and explains to them from time to time.
  • when explaining, the tour guide will point at or look toward the scenery being described, and at that moment tourists may not know what the guide is specifically referring to or where they should look.
  • therefore, the tour guide can synchronize their own picture to the tourists being led, so that tourists can clearly see the scenic spots the guide is explaining, enhancing the tour experience.
  • the first head-mounted device may send the picture displayed on it to the second head-mounted device.
  • the first head-mounted device can be a head-mounted device used by the tour guide
  • the second head-mounted device can be a head-mounted device used by tourists.
  • the first head-mounted device can maintain a device identification list, and the device identification list can be used to store the device identification of the second head-mounted device used by the tourists led by the tour guide. This is equivalent to realizing the pairing of the first head-mounted device and the second head-mounted device. Subsequently, the first head-mounted device can transmit the picture to the second head-mounted device located in the device identification list.
  • Bluetooth transmission, WiFi transmission, etc. may be used to achieve picture synchronization between the first head-mounted device and the second head-mounted device.
  • the present invention does not limit which method is used.
  • the tour guide can set the gathering information (including the gathering time and the gathering location) through the first head-mounted device, so that tourists can be reminded to gather once the gathering time is reached.
  • when the first head-mounted device has been paired with the second head-mounted device, the first head-mounted device may, in response to the gathering time being reached, send the gathering information to the second head-mounted device; upon receiving it, the second head-mounted device displays a second road map, which is used to indicate the route from the location of the second head-mounted device to the gathering location.
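The "send once the gathering time is reached" condition can be sketched as follows; the field names and the one-shot flag are illustrative assumptions about how a device firmware might represent it.

```python
import datetime

# Hypothetical gathering information as set by the tour guide; fields illustrative.
gather_info = {
    "time": datetime.datetime(2024, 5, 1, 15, 30),
    "location": "main gate",
}

def should_send_gather_info(now, info, already_sent):
    """Fire exactly once when the gathering time is reached, mirroring the
    'in response to reaching the gathering time' condition above."""
    return (not already_sent) and now >= info["time"]
```

When this returns true, the first head-mounted device would push `gather_info` to every paired second head-mounted device, which then renders its second road map.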
  • Figure 9 is a processing flow chart of a head-mounted device in a tour guide function mode according to an embodiment of the present invention.
  • as shown in Figure 9, the first head-mounted device (the tour guide's device) can add second head-mounted devices (the tourists' devices), so that when the tour guide's perspective needs to be synchronized, the picture of the first head-mounted device can be transmitted to the second head-mounted devices; in addition, the tour guide can set the gathering time and location through the first head-mounted device, so that tourists can navigate to the gathering location through the second head-mounted device when the gathering time is reached.
  • FIG. 10 is a block diagram of a screen display device according to an embodiment of the present invention, applied to a first head-mounted device.
  • the device may include:
  • a collection module 1001, used to collect images containing a target object;
  • the determination module 1002 is used to determine the target feature matrix of the target object based on the image containing the target object;
  • the acquisition module 1003 is used to obtain the target three-dimensional model matching the target object from the target database based on the target feature matrix of the target object.
  • the target database is used to store at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model,
  • the candidate 3D model is obtained by completing incomplete parts of and/or restoring colors on an initial three-dimensional model scanned by an image scanning device;
  • the display module 1004 is used to display the target three-dimensional model.
  • the acquisition module 1003 when used to acquire a target three-dimensional model matching the target object from the target database based on the target feature matrix of the target object, is used to:
  • the candidate three-dimensional model corresponding to the feature matrix that matches the target feature matrix is determined as the target three-dimensional model.
  • the construction process of the target database includes: obtaining the initial three-dimensional model scanned by the image scanning device; obtaining the candidate three-dimensional model resulting from completing incomplete parts of and/or restoring colors on the initial three-dimensional model; extracting feature matrices of the candidate three-dimensional model at different angles; and storing the candidate three-dimensional model and the extracted feature matrices in the target database.
  • the target database is also used to store media data corresponding to at least one candidate three-dimensional model, and the media data is used to introduce the candidate three-dimensional model in the form of video or audio;
  • the display module 1004 is also used to display the first prompt information, and the first prompt information is used to inquire whether to play the media data corresponding to the target three-dimensional model.
  • the acquisition module 1003 is also configured to acquire media data corresponding to the target three-dimensional model from the target database in response to receiving first feedback information based on the first prompt information, where the first feedback information is used to indicate that the media data corresponding to the target three-dimensional model needs to be played.
  • the device also includes:
  • the playback module is used to play the obtained media data.
  • the device further includes:
  • an adjustment module, configured to adjust, in response to a picture adjustment operation on the first head-mounted device, the displayed picture of the target three-dimensional model according to the display viewing angle and/or picture magnification indicated by the operation.
  • the adjustment module is configured for at least one of the following:
  • displaying, in response to the picture adjustment operation at the first set position of the first head-mounted device, the picture of the target three-dimensional model under the display viewing angle indicated by the operation;
  • enlarging or reducing the displayed picture of the target three-dimensional model according to the picture magnification indicated by the picture adjustment operation at the second set position.
  • the display module 1004 is also configured to display second prompt information in response to the picture adjustment operation at the second set position of the first head-mounted device when the picture magnification indicated by the operation reaches the set magnification; the second prompt information is used to indicate that the picture magnification indicated by the picture adjustment operation has reached the set magnification.
  • the adjustment module is also configured to set the dimming film of the first head-mounted device to an opaque state in response to a picture adjustment operation on the first head-mounted device;
  • the adjustment module is also used to obtain the ambient light brightness in response to the picture adjustment operation on the first head-mounted device, and adjust the transparency of the dimming film of the first head-mounted device based on the ambient light brightness.
  • the device further includes:
  • the first sending module is configured to send the screen displayed on the first head-mounted device to the second head-mounted device when the first head-mounted device has been paired with the second head-mounted device.
  • the acquisition module 1003 is also configured to, when the first head-mounted device has been paired with the second head-mounted device, acquire the location information of the second head-mounted device in response to a location acquisition operation on the first head-mounted device;
  • the display module is also used to display the obtained location information.
  • the display module is also used to display a first road map, and the first road map is used to indicate a route from the location of the first head-mounted device to the location of the second head-mounted device.
  • the device further includes:
  • a setting module, for setting the gathering time and gathering location through the first head-mounted device;
  • a second sending module, configured to send the gathering information to the second head-mounted device, in response to the gathering time being reached, when the first head-mounted device has been paired with the second head-mounted device; the second head-mounted device is used to display a second road map upon receiving the gathering information, the second road map indicating a route from the location of the second head-mounted device to the gathering location.
  • the target object is a historical artifact such as a grotto statue.
  • since the device embodiment basically corresponds to the method embodiment, please refer to the description of the method embodiment for relevant details.
  • the device embodiments described above are only illustrative.
  • the modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in this specification. Persons of ordinary skill in the art can understand and implement it without creative effort.
  • FIG. 11 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention.
  • the head-mounted device includes a processor 1101, a memory 1102, a network interface 1103, a first interrupt 1104, a second interrupt 1105, a light-sensing device 1106, a camera device 1107, a display device 1108, and a GPS and WiFi module 1109. The memory 1102 is used to store computer program code that can run on the processor 1101; the processor 1101 is used to implement the screen display method provided by any embodiment of the present invention when executing the computer program code; and the network interface 1103 is used to implement input and output functions.
  • the first interrupt 1104 can be a Touch interrupt on the right temple, through which the processor 1101 can determine the picture magnification coefficient;
  • the second interrupt 1105 can be a Touch interrupt on the left temple, through which the processor 1101 can determine the display viewing angle;
  • the light-sensing device 1106 can be used to obtain the ambient light brightness, the camera device 1107 can be used to capture real-time images, and the display device 1108 can be used to display pictures of the three-dimensional model;
  • in the GPS and WiFi module 1109, GPS can obtain the current location in real time and provide precise positioning for navigation.
  • WiFi can ensure data transmission between head-mounted devices.
  • the head-mounted device may also include other hardware, which is not limited by the present invention.
  • the invention also provides a computer-readable storage medium.
  • the computer-readable storage medium can be in various forms.
  • the computer-readable storage medium can be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), a solid state drive, any type of storage disk (such as an optical disk or DVD), similar storage media, or a combination thereof.
  • the computer-readable medium can also be paper or other suitable media capable of printing the program.
  • a computer program is stored on the computer-readable storage medium. When the computer program is executed by the processor, the screen display method provided by any embodiment of the present invention is implemented.
  • the present invention also provides a computer program product, which includes a computer program.
  • when the computer program is executed by a processor, the screen display method provided by any embodiment of the present invention is implemented.
  • the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
  • "plurality" refers to two or more, unless expressly limited otherwise.


Abstract

The present invention relates to a screen display method, apparatus, device, and medium. The present invention creates a target database for storing at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model, where each candidate three-dimensional model is obtained by completing incomplete parts of and/or restoring colors on an initial three-dimensional model scanned by an image scanning device. Therefore, after an image containing a target object is collected and the target feature matrix of the target object is determined based on that image, a target three-dimensional model that matches the target object and has undergone incomplete-part completion and/or color restoration can be obtained from the target database based on the target feature matrix and then displayed. This can improve the display effect of the first head-mounted device and thus the user experience.

Description

Screen display method, apparatus, device, and medium

Technical Field

The present invention relates to the field of computer technology, and in particular to a screen display method, apparatus, device, and medium.

Background

With continuous economic development, tourism has gradually become an important form of leisure and entertainment. Grotto-type historical artifacts, as an important part of the world's treasury of stone-carving art, have gradually become a major sightseeing choice for people when traveling.

However, grotto-type historical artifacts are generally very large, and to protect them, tourists are usually not allowed to view them at close range, so as to avoid man-made damage. As a result, tourists cannot appreciate the full appearance of grotto-type historical artifacts during their visit, nor experience their grandeur and magnificence. A screen display method is therefore urgently needed to help tourists tour grotto-type historical artifacts better.

Summary

The present invention provides a screen display method, apparatus, device, and medium to address the deficiencies in the related art.

According to a first aspect of embodiments of the present invention, a screen display method is provided, applied to a first wearable device, the method including:

collecting an image containing a target object;

determining a target feature matrix of the target object based on the image containing the target object;

obtaining, based on the target feature matrix of the target object, a target three-dimensional model matching the target object from a target database, the target database being used to store at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model, each candidate three-dimensional model being obtained by completing incomplete parts of and/or restoring colors on an initial three-dimensional model scanned by an image scanning device; and

displaying the target three-dimensional model.
In some embodiments, obtaining, based on the target feature matrix of the target object, a target three-dimensional model matching the target object from the target database includes:

matching the target feature matrix against the feature matrices stored in the target database; and

determining the candidate three-dimensional model corresponding to the feature matrix matching the target feature matrix as the target three-dimensional model.

In some embodiments, the construction process of the target database includes:

obtaining an initial three-dimensional model scanned by an image scanning device;

obtaining a candidate three-dimensional model resulting from completing incomplete parts of and/or restoring colors on the initial three-dimensional model;

extracting feature matrices of the candidate three-dimensional model at different angles; and

storing the candidate three-dimensional model and the extracted feature matrices in the target database.

In some embodiments, the target database is also used to store media data corresponding to at least one candidate three-dimensional model, the media data being used to introduce the candidate three-dimensional model in the form of video or audio;

after displaying the target three-dimensional model, the method further includes:

displaying first prompt information, the first prompt information being used to ask whether to play the media data corresponding to the target three-dimensional model.

In some embodiments, after displaying the first prompt information, the method further includes:

in response to receiving first feedback information based on the first prompt information, obtaining the media data corresponding to the target three-dimensional model from the target database, the first feedback information being used to indicate that the media data corresponding to the target three-dimensional model needs to be played; and

playing the obtained media data.

In some embodiments, after displaying the target three-dimensional model, the method further includes:

in response to a picture adjustment operation on the first head-mounted device, adjusting the displayed picture of the target three-dimensional model according to the display viewing angle and/or picture magnification indicated by the picture adjustment operation.

In some embodiments, adjusting the displayed picture of the target three-dimensional model, in response to a picture adjustment operation on the first head-mounted device, according to the display viewing angle and/or picture magnification indicated by the operation, includes at least one of the following:

in response to a picture adjustment operation at a first set position of the first head-mounted device, displaying the picture of the target three-dimensional model under the display viewing angle indicated by the operation; and

in response to a picture adjustment operation at a second set position of the first head-mounted device, enlarging or reducing the displayed picture of the target three-dimensional model according to the picture magnification indicated by the operation.

In some embodiments, the method further includes:

in response to a picture adjustment operation at the second set position of the first head-mounted device, when the picture magnification indicated by the operation reaches a set magnification, displaying second prompt information, the second prompt information being used to indicate that the picture magnification indicated by the picture adjustment operation has reached the set magnification.

In some embodiments, the method further includes any one of the following:

in response to a picture adjustment operation on the first head-mounted device, setting the dimming film of the first head-mounted device to an opaque state; or

in response to a picture adjustment operation on the first head-mounted device, obtaining the ambient light brightness and adjusting the transparency of the dimming film of the first head-mounted device based on the ambient light brightness.

In some embodiments, the method further includes:

when the first head-mounted device has been paired with a second head-mounted device, sending the picture displayed on the first head-mounted device to the second head-mounted device.

In some embodiments, the method further includes:

when the first head-mounted device has been paired with the second head-mounted device, in response to a location acquisition operation on the first head-mounted device, obtaining the location information of the second head-mounted device; and

displaying the obtained location information.

In some embodiments, the method further includes:

displaying a first road map, the first road map being used to indicate a route from the location of the first head-mounted device to the location of the second head-mounted device.

In some embodiments, the method further includes:

setting a gathering time and a gathering location through the first head-mounted device;

the method further includes:

when the first head-mounted device has been paired with the second head-mounted device, in response to the gathering time being reached, sending gathering information to the second head-mounted device, the second head-mounted device being used to display a second road map upon receiving the gathering information, the second road map being used to indicate a route from the location of the second head-mounted device to the gathering location.

In some embodiments, the target object is a grotto-statue-type historical artifact.
According to a second aspect of embodiments of the present invention, a screen display apparatus is provided, applied to a first head-mounted device, the apparatus including:

a collection module, used to collect an image containing a target object;

a determination module, used to determine a target feature matrix of the target object based on the image containing the target object;

an acquisition module, used to obtain, based on the target feature matrix of the target object, a target three-dimensional model matching the target object from a target database, the target database being used to store at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model, each candidate three-dimensional model being obtained by completing incomplete parts of and/or restoring colors on an initial three-dimensional model scanned by an image scanning device; and

a display module, used to display the target three-dimensional model.

In some embodiments, the acquisition module, when used to obtain the target three-dimensional model matching the target object from the target database based on the target feature matrix of the target object, is used to:

match the target feature matrix against the feature matrices stored in the target database; and

determine the candidate three-dimensional model corresponding to the feature matrix matching the target feature matrix as the target three-dimensional model.

In some embodiments, the construction process of the target database includes:

obtaining an initial three-dimensional model scanned by an image scanning device;

obtaining a candidate three-dimensional model resulting from completing incomplete parts of and/or restoring colors on the initial three-dimensional model;

extracting feature matrices of the candidate three-dimensional model at different angles; and

storing the candidate three-dimensional model and the extracted feature matrices in the target database.

In some embodiments, the target database is also used to store media data corresponding to at least one candidate three-dimensional model, the media data being used to introduce the candidate three-dimensional model in the form of video or audio;

the display module is also used to display first prompt information, the first prompt information being used to ask whether to play the media data corresponding to the target three-dimensional model.

In some embodiments, the acquisition module is also used to obtain the media data corresponding to the target three-dimensional model from the target database in response to receiving first feedback information based on the first prompt information, the first feedback information being used to indicate that the media data corresponding to the target three-dimensional model needs to be played;

the apparatus further includes:

a playback module, used to play the obtained media data.

In some embodiments, the apparatus further includes:

an adjustment module, used to adjust, in response to a picture adjustment operation on the first head-mounted device, the displayed picture of the target three-dimensional model according to the display viewing angle and/or picture magnification indicated by the operation.

In some embodiments, the adjustment module, when used to adjust the displayed picture of the target three-dimensional model in response to a picture adjustment operation on the first head-mounted device according to the display viewing angle and/or picture magnification indicated by the operation, is used for at least one of the following:

in response to a picture adjustment operation at a first set position of the first head-mounted device, displaying the picture of the target three-dimensional model under the display viewing angle indicated by the operation; and

in response to a picture adjustment operation at a second set position of the first head-mounted device, enlarging or reducing the displayed picture of the target three-dimensional model according to the picture magnification indicated by the operation.

In some embodiments, the display module is also used to display second prompt information in response to a picture adjustment operation at the second set position of the first head-mounted device when the picture magnification indicated by the operation reaches a set magnification, the second prompt information being used to indicate that the picture magnification indicated by the picture adjustment operation has reached the set magnification.

In some embodiments, the adjustment module is also used to set the dimming film of the first head-mounted device to an opaque state in response to a picture adjustment operation on the first head-mounted device;

the adjustment module is also used to obtain the ambient light brightness in response to a picture adjustment operation on the first head-mounted device, and to adjust the transparency of the dimming film of the first head-mounted device based on the ambient light brightness.

In some embodiments, the apparatus further includes:

a first sending module, used to send the picture displayed on the first head-mounted device to the second head-mounted device when the first head-mounted device has been paired with the second head-mounted device.

In some embodiments, the acquisition module is also used to obtain, when the first head-mounted device has been paired with the second head-mounted device, the location information of the second head-mounted device in response to a location acquisition operation on the first head-mounted device;

the display module is also used to display the obtained location information.

In some embodiments, the display module is also used to display a first road map, the first road map being used to indicate a route from the location of the first head-mounted device to the location of the second head-mounted device.

In some embodiments, the apparatus further includes:

a setting module, used to set a gathering time and a gathering location through the first head-mounted device; and

a second sending module, used to send gathering information to the second head-mounted device, when the first head-mounted device has been paired with the second head-mounted device, in response to the gathering time being reached, the second head-mounted device being used to display a second road map upon receiving the gathering information, the second road map being used to indicate a route from the location of the second head-mounted device to the gathering location.

In some embodiments, the target object is a grotto-statue-type historical artifact.

According to a third aspect of embodiments of the present invention, a head-mounted device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the operations performed by the screen display method provided by the first aspect or any embodiment of the first aspect.

According to a fourth aspect of embodiments of the present invention, a computer-readable storage medium is provided, on which a program is stored, where the program, when executed by a processor, implements the operations performed by the screen display method provided by the first aspect or any embodiment of the first aspect.

According to a fifth aspect of embodiments of the present invention, a computer program product is provided, including a computer program, where the computer program, when executed by a processor, implements the operations performed by the screen display method provided by the first aspect or any embodiment of the first aspect.

The present invention creates a target database for storing at least one candidate three-dimensional model and the feature matrix corresponding to each candidate three-dimensional model, where each candidate three-dimensional model is obtained by completing incomplete parts of and/or restoring colors on an initial three-dimensional model scanned by an image scanning device. Therefore, after an image containing a target object is collected and the target feature matrix of the target object is determined based on that image, a target three-dimensional model that matches the target object and has undergone incomplete-part completion and/or color restoration can be obtained from the target database based on the target feature matrix and then displayed. This can improve the display effect of the first head-mounted device and thus the user experience.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present invention.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.

FIG. 1 is a schematic diagram of an implementation environment of a screen display method according to an embodiment of the present invention.

FIG. 2 is a flow chart of a screen display method according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of the creation process of a target database according to an embodiment of the present invention.

FIG. 4 is a flow chart of a screen display method according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of a picture adjustment method according to an embodiment of the present invention.

FIG. 6 is a schematic diagram of an arrangement of a dimming film according to an embodiment of the present invention.

FIG. 7 is a schematic diagram of the adjustment process of the transparency of a dimming film according to an embodiment of the present invention.

FIG. 8 is a processing flow chart of a head-mounted device in a parent-child function mode according to an embodiment of the present invention.

FIG. 9 is a processing flow chart of a head-mounted device in a tour guide function mode according to an embodiment of the present invention.

FIG. 10 is a block diagram of a screen display apparatus according to an embodiment of the present invention.

FIG. 11 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention.

Detailed Description

Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention as detailed in the appended claims.
本发明提供了一种画面显示方法,用于基于增强现实技术,来实现目标物体的三维模型恢复和色彩复原,以便用户可以看到物体未受到损坏前的样子,从而可以提 高用户体验。
可选地,目标物体可以为石窟雕像类历史文物,也即是,可以通过本发明所提供的画面显示方法,来对雕像的三维模型进行残缺部分补全和色彩复原,以便用户可以看到雕像的原始风貌,体会雕像真正的宏伟与雄奇。
上述画面显示方法可以由头戴式设备执行,该头戴式设备可以为智能眼镜、增强现实(Augmented Reality,AR)眼镜等,本发明对头戴式设备的设备类型和设备数量不加以限定。
在介绍了本发明的应用场景之后,下面对本发明所提供的画面显示方法的实施环境进行说明。
参见图1,图1是根据本发明实施例示出的一种画面显示方法的实施环境示意图,如图1所示,该实施环境可以包括头戴式设备101和服务器102。
其中,头戴式设备101可以为智能眼镜、AR眼镜等,服务器102可以为一台服务器、多台服务器、服务器集群、云计算平台等。头戴式设备101可以通过有线或无线的通信方式与服务器102进行通信,以便头戴式设备101可以通过本发明所提供的画面显示方法,来实现对目标物体经过残缺部分补全和色彩复原的目标三维模型的显示。
可选地,本发明所提供的画面显示方法还可以应用在其他实施环境中,例如,该实施环境还可以仅包括头戴式设备101,各个头戴式设备101之间可以通过有线或无线的通信方式进行通信,以实现本发明所提供的画面显示方法。
在介绍了本发明的实施环境之后,下面对本发明所提供的画面显示方法的方案进行介绍。
参见图2,图2是根据本发明实施例示出的一种画面显示方法的流程图,如图2所示,应用于第一头戴式设备,第一头戴式设备可以为多个头戴式设备中的任意一个,例如,第一头戴式设备可以为家长所使用的头戴式设备,或者,第一头戴式设备可以为孩子所使用的头戴式设备,或者,第一头戴式设备可以为导游所使用的头戴式设备,或者,第一头戴式设备可以为游客所使用的头戴式设备,等等,本发明对此不加以限定。该画面显示方法可以包括:
步骤101、采集包含目标物体的图像。
其中,目标物体可以为任意物体,例如,目标物体可以为石窟类雕刻历史文化,如雕像,可选地,目标物体还可以为其他物体,本发明对目标物体的具体类型不加以限定。
需要说明的是，第一头戴式设备可以内置或外接有摄像设备，本发明对摄像设备采用何种方式设置不加以限定。以第一头戴式设备为智能眼镜为例，第一头戴式设备的镜框上可以设置有摄像头（也即是摄像设备）。
在一种可能的实现方式中,第一头戴式设备可以通过内置或外接于该第一头戴式设备的摄像设备,来采集包含目标物体的图像。
步骤102、基于包含目标物体的图像,确定目标物体的目标特征矩阵。
步骤103、基于目标物体的目标特征矩阵,从目标数据库中,获取与目标物体匹配的目标三维模型,目标数据库用于存储至少一个候选三维模型以及每个候选三维模型对应的特征矩阵,候选三维模型基于通过图像扫描设备扫描得到的初始三维模型进行残缺部分补全和/或色彩复原得到。
步骤104、对目标三维模型进行显示。
需要说明的是,第一头戴式设备中可以设置有显示设备,因而,第一头戴式设备可以通过显示设备,来对目标三维模型进行显示。
本发明通过创建用于存储至少一个候选三维模型以及每个候选三维模型对应的特征矩阵的目标数据库,其中,候选三维模型基于通过图像扫描设备扫描得到的初始三维模型进行残缺部分补全和/或色彩复原得到的,因而,在采集到包含目标物体的图像,并基于包含目标物体的图像,确定目标物体的目标特征矩阵后,可以基于目标物体的目标特征矩阵,从目标数据库中,获取与目标物体匹配的、经过残缺部分补全和/或色彩复原的目标三维模型,从而对目标三维模型进行显示,可以提高第一头戴式设备的显示效果,进而提高用户体验。
在介绍了本发明的基本实现过程之后,下面对本发明的各个可选实施例进行介绍。
在一些实施例中,对于步骤102,在基于包含目标物体的图像,确定目标物体的目标特征矩阵时,可以通过如下方式实现:
将包含目标物体的图像输入至特征提取模型,通过特征提取模型对输入其中的图像进行处理,以得到目标物体的目标特征矩阵。
其中,特征提取模型可以为多种类型的神经网络模型,例如,特征提取模型可以为卷积神经网络(Convolutional Neural Network,CNN)模型,可选地,特征提取模型还可以为其他类型的模型,本发明对特征提取模型的具体类型不加以限定。
以特征提取模型为CNN为例,该特征提取模型可以包括卷积层和池化层,在通过该特征提取模型获取目标物体的目标特征矩阵时,可以通过卷积层对输入的图像进行卷积处理,得到该图像的卷积特征,进而通过池化层对卷积特征进行池化处理,以得到目标物体的目标特征矩阵。
上述仅为一种获取目标物体的目标特征矩阵的示例性方式,在更多可能的实现方式中,还可以采用其他方式来进行目标特征矩阵的获取,本发明对具体采用哪种方式不加以限定。
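上述“卷积层得到卷积特征、池化层降维得到特征矩阵”的流程可以示意如下（此处为一个极简的单通道实现，卷积核与图像尺寸均为说明用的假设，并非本发明限定的网络结构）：

```python
import numpy as np

def conv2d(image, kernel):
    # 对单通道图像做 valid 互相关（卷积层），得到卷积特征
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(feat, size=2):
    # 按 size×size 窗口做最大池化（池化层），降低特征矩阵的尺寸
    h, w = feat.shape
    h2, w2 = h // size, w // size
    pooled = feat[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    return pooled.max(axis=(1, 3))

def extract_feature_matrix(image, kernel):
    # 卷积层 -> 池化层，输出即可作为目标特征矩阵的一个示意
    return max_pool2d(conv2d(image, kernel))
```

实际实现中通常会使用深度学习框架中训练好的多层CNN，此处仅用于说明特征矩阵的获取流程。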
上述过程是以获取到包含目标物体的图像后,直接基于获取到的图像来进行目标特征矩阵的确定为例来进行说明的,而通过步骤101获取到的图像是位于相机坐标系下的,但一般在进行处理时,需要基于位于人眼坐标系下的图像进行处理,因而,在基于包含目标物体的图像,确定目标物体的目标特征矩阵之前,可以将图像从相机坐标系转换到人眼坐标系下,以保证后续提取到的目标特征矩阵的准确性。
可选地,在将图像从相机坐标系转换到人眼坐标系下时,可以对图像进行旋转和/或平移,以便实现图像从相机坐标系到人眼坐标系的转换。
其中，在对图像进行旋转和/或平移时，可以按照设定角度对图像进行旋转，按照设定距离对图像进行平移。
需要说明的是,设定角度和设定距离可以是预先确定出来的。例如,可以通过如下方式来获取设定角度和设定距离:
在用户首次佩戴头戴式设备时，用户在通过头戴式设备看到实际画面的同时，还可以看到显示在头戴式设备的显示设备上的屏幕画面，用户可以通过头戴式设备上的控件，来对显示设备上的屏幕画面进行调整（如旋转、平移等），以便屏幕画面可以与实际画面重合，此时，头戴式设备即可获取到屏幕画面从初始位置移动到与实际画面重合的位置所旋转的角度和/或所移动的距离，从而将获取到的角度作为设定角度，将获取到的距离作为设定距离，以实现设定角度和设定距离的获取。
其中,上述过程中所涉及的控件可以为按键、旋钮、触摸式按钮等多种形式,本发明对控件的具体类型不加以限定。
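按设定角度旋转、按设定距离平移的坐标转换过程可以示意如下（此处以二维点坐标为例，旋转绕原点进行，角度与位移均为说明用的假设）：

```python
import numpy as np

def to_eye_coords(points, angle_rad, shift):
    # points: N×2 的相机坐标系坐标；angle_rad: 设定角度；shift: 设定距离 (dx, dy)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    # 绕原点逆时针旋转的旋转矩阵
    rot = np.array([[c, -s],
                    [s,  c]])
    # 先按设定角度旋转、再按设定距离平移，完成相机坐标系到人眼坐标系的转换
    return points @ rot.T + np.asarray(shift)
```

实际的坐标系转换通常是三维空间中的刚体变换（旋转矩阵加平移向量），此处的二维形式仅用于说明“旋转加平移”这一转换方式。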
在通过上述过程获取到目标物体的目标特征矩阵后,即可基于所获取到的目标特征矩阵,来从目标数据库中获取目标物体对应的目标三维模型。
其中,目标数据库可以为头戴式设备所关联的数据库,目标数据库可以是预先构建好的,例如,目标数据库可以由相关技术人员通过计算机设备或头戴式设备预先创建好的。目标数据库中可以存储有至少一个候选三维模型以及每个候选三维模型对应的特征矩阵,下面对目标数据库的构建过程进行介绍。
在一些实施例中，该目标数据库的构建过程可以包括：
步骤一、获取通过图像扫描设备扫描得到的初始三维模型。
其中,图像扫描设备可以为三维扫描仪、无人机等,本发明对图像扫描设备的设备类型不加以限定。
在一种可能的实现方式中,为保证图像扫描设备可以采集到目标物体完整的画面,可以采用三维扫描仪和无人机作为图像扫描设备,以便可以通过作为图像扫描设备的三维扫描仪获取到目标物体较低部位的点云数据,通过作为图像扫描设备的无人机获取到目标物体较高部位的图像,从而实现目标物体的完整数据的获取,进而可以基于扫描得到的数据(如点云数据、图像数据等)进行三维模型重建,以得到初始三维模型。
步骤二、获取基于初始三维模型进行残缺部分补全和/或色彩复原得到的候选三维模型。
需要说明的是,由于目标物体可能年代较为久远,且可能受到了自然环境中一些物质的侵蚀,因而可能会造成目标物体有残缺、目标物体的色彩消失殆尽等情况的出现,为尽可能还原目标物体的原始风貌与色彩,可以基于初始三维模型进行残缺部分补全和/或色彩复原。
在一种可能的实现方式中,可以邀请历史文物专家提供指导意见,以便相关技术人员可以根据历史文物专家的指导,来对初始三维模型进行残缺部分补全和/或色彩复原,从而使得计算机设备(或头戴式设备)可以获取到经过残缺部分补全和/或色彩复原得到的候选三维模型。
步骤三、提取候选三维模型在不同角度下的特征矩阵。
可选地,可以对所获取到的候选三维模型的角度进行调整,以获取到候选三维模型在不同角度下的图像,进而基于候选三维模型在不同角度下的图像,来确定候选三维模型在不同角度下的特征矩阵。
需要说明的是,对于基于候选三维模型在不同角度下的图像,来确定候选三维模型在不同角度下的特征矩阵的过程,可以参见步骤102中基于包含目标物体的图像,确定目标物体的目标特征矩阵的过程,仅需保证确定特征矩阵时所使用的方法与确定目标特征矩阵时所使用的方法一致即可,从而保证后续基于特征矩阵进行匹配的过程可以顺利进行。
步骤四、将候选三维模型和所提取到的特征矩阵存储至目标数据库。
在一种可能的实现方式中，可以将该候选三维模型与对应的特征矩阵关联存储在目标数据库中，以便后续可以基于特征矩阵实现对应的三维模型的确定。
上述过程仅以获取一个候选三维模型以及对应的特征矩阵的过程为例来进行说明,其他候选三维模型以及对应的特征矩阵的获取过程与之同理,此处不再赘述。
上述构建目标数据库的过程可以参见图3,图3是根据本发明实施例示出的一种目标数据库的创建过程示意图,如图3所示,可以通过扫描物体来建立物体的初始三维模型,再对初始三维模型进行残缺部分补全和色彩复原,以得到候选三维模型,从而提取通过候选三维模型在不同角度下的特征矩阵,以得到同一物体在不同角度下的特征矩阵,进而将候选三维模型与对应的特征矩阵对应存储,即可实现目标数据库的构建。
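上述“将候选三维模型与其在不同角度下的特征矩阵关联存储”的步骤可以示意如下（仅为内存中的极简示意，TargetDatabase 及其接口均为说明用的假设命名，并非本发明限定的数据库实现）：

```python
import numpy as np

class TargetDatabase:
    """示意性的目标数据库：关联存储候选三维模型标识及其在不同角度下的特征矩阵。"""

    def __init__(self):
        # 每条记录为 (特征矩阵, 候选三维模型标识)
        self._entries = []

    def add_model(self, model_id, feature_matrices):
        # 同一候选三维模型可以对应多个角度下的特征矩阵
        for feat in feature_matrices:
            self._entries.append((np.asarray(feat), model_id))

    def entries(self):
        # 返回全部 (特征矩阵, 模型标识) 记录，供后续匹配使用
        return list(self._entries)
```

实际的目标数据库通常是持久化存储，并同时保存三维模型数据与对应的媒体数据，此处仅示意特征矩阵与模型的关联关系。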
通过上述过程即可完成目标数据库的构建,以便在通过步骤102实现目标特征矩阵的确定后,可以通过步骤103来进行特征矩阵的匹配,以实现目标三维模型的获取。
在一些实施例中,对于步骤103,在基于目标物体的目标特征矩阵,从目标数据库中,获取与目标物体匹配的目标三维模型时,可以包括如下步骤:
步骤1031、对目标特征矩阵与目标数据库中所存储的特征矩阵进行匹配。
在一种可能的实现方式中,可以将目标特征矩阵与目标数据库中所存储的特征矩阵逐个进行比较,以实现目标特征矩阵与目标数据库中所存储的特征矩阵的匹配。
在确定出目标数据库中有一个特征矩阵与目标特征矩阵相同后,即可确定该特征矩阵与目标特征矩阵匹配。
步骤1032、将与目标特征矩阵匹配的特征矩阵对应的候选三维模型,确定为目标三维模型。
需要说明的是，在目标数据库中，一个候选三维模型可以对应有多个特征矩阵，在通过步骤1031确定出与目标特征矩阵匹配的特征矩阵后，即可确定出该特征矩阵对应的候选三维模型，将其确定为目标三维模型，从而从目标数据库中获取到目标三维模型。
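步骤1031至步骤1032的逐个比较与匹配过程可以示意如下（match_model 为说明用的假设命名，匹配条件以特征矩阵完全相同为例）：

```python
import numpy as np

def match_model(target_feat, stored):
    # stored: [(特征矩阵, 候选三维模型标识), ...]
    # 将目标特征矩阵与库中特征矩阵逐个比较，找到相同的即返回对应模型标识
    target_feat = np.asarray(target_feat)
    for feat, model_id in stored:
        feat = np.asarray(feat)
        if feat.shape == target_feat.shape and np.array_equal(feat, target_feat):
            return model_id
    # 未匹配到任何候选三维模型时返回 None，对应"继续实时拍摄画面"的分支
    return None
```

工程实践中也可以用特征向量间的距离或相似度阈值来判定匹配，此处的完全相同仅是最简单的一种判定方式。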
在通过步骤103获取到目标三维模型后,即可通过步骤104来对目标三维模型进行显示,以便用户可以看到所显示的目标三维模型,而目标三维模型是经过残缺部分补全和/或色彩复原的,因而可以更好地还原出目标物体的原始风貌与色彩,从而可以提高用户体验。
需要说明的是,目标数据库中还可以存储有各个候选三维模型对应的媒体数据,媒体数据可以用于以视频或音频的形式来介绍候选三维模型,例如,介绍候选三维模型的建造时间、建造历史等,也即是介绍目标物体的建造时间、建造历史等,以便用户可以通过媒体数据更好地了解目标物体。
其中,媒体数据可以为预先录制好并存储在目标数据库中的,媒体数据与对应的候选三维模型可以是关联存储的,以便在通过步骤103确定出目标三维模型后,可以直接确定出目标三维模型对应的媒体数据。
在一些实施例中,在通过步骤104对目标三维模型进行显示后,还可以显示第一提示信息,以便通过第一提示信息来询问是否播放目标三维模型对应的媒体数据,用户仅需对第一提示信息做出反馈,头戴式设备即可确定是否需要播放目标三维模型对应的媒体数据。
例如，第一头戴式设备可以显示第一反馈控件和第二反馈控件，用户可以通过触发第一反馈控件来触发第一反馈信息，第一反馈信息可以用于指示需要播放目标三维模型对应的媒体数据，以便第一头戴式设备可以基于接收到的第一反馈信息确定需要播放目标三维模型对应的媒体数据，或者，用户可以通过触发第二反馈控件来触发第二反馈信息，第二反馈信息可以用于指示不需要播放目标三维模型对应的媒体数据，以便第一头戴式设备可以基于接收到的第二反馈信息确定不需要播放目标三维模型对应的媒体数据。
在一种可能的实现方式中,第一头戴式设备可以响应于接收到基于第一提示信息的第一反馈信息,从目标数据库中获取目标三维模型对应的媒体数据,进而播放所获取到的媒体数据,以便用户可以通过所播放的媒体数据,来了解目标物体。
上述各个实施例所提供的画面显示方法的过程可以参见图4,图4是根据本发明实施例示出的一种画面显示方法的流程图,如图4所示,第一头戴式设备的摄像设备可以实时拍摄画面,第一头戴式设备即可将拍摄到的画面由相机坐标系转换为人眼坐标系,从而基于人眼坐标系下的画面来提取目标物体的目标特征矩阵,从而基于目标特征矩阵来匹配数据库中的候选三维模型,在从数据库中匹配到候选三维模型的情况下,即可将匹配到的三维模型输出到显示端,并且,还可以提示是否要播放匹配到的三维模型对应的媒体数据,也即是游览介绍;另外,在未从数据库中匹配到候选三维模型的情况下,继续通过摄像设备实时拍摄画面即可。
上述过程主要介绍了如何获取目标三维模型并对目标三维模型进行显示的过程,可选地,用户还可以根据自己的需要,来对所显示的目标三维模型的画面进行调整。
在一些实施例中,用户可以通过第一头戴式设备进行画面调整操作,以便第一头戴式设备可以响应于在第一头戴式设备上的画面调整操作,按照画面调整操作指示的显示视角和/或画面放大倍数,对所显示的目标三维模型的画面进行调整。
下面分别对按照画面调整操作指示的显示视角对画面进行调整、以及按照画面调整操作指示的画面放大倍数对画面进行调整的过程来进行说明。
在一种可能的实现方式中，用户可以在第一头戴式设备的第一设定位置处触发画面调整操作，以便第一头戴式设备可以响应于在第一头戴式设备的第一设定位置处的画面调整操作，按照画面调整操作所指示的显示视角，显示目标三维模型在画面调整操作所指示的显示视角下的画面。
其中,第一设定位置可以为位于第一头戴式设备的左侧镜腿上的具有触控(Touch)功能的区域,用户可以在左侧镜腿上的具有Touch功能的区域中进行滑动操作,以触发画面调整操作。
可选地，不同的滑动方向可以对应于显示视角的不同调整方式，例如，在左侧镜腿上具有Touch功能的区域中向镜腿尾部的方向滑动时，可以将所显示的画面调整为在比用户实际身高更高的位置所看到的画面，以实现对显示视角的调整。相应地，在左侧镜腿上具有Touch功能的区域中向与镜腿尾部相反的方向滑动时，可以将所显示的画面调整为在比用户实际身高更低的位置所看到的画面，以实现对显示视角的调整。
正常情况下观看到的画面均是人站在地面看到的效果，然而对于巨大的雕像，在不同的高度看到的效果是不同的。比如，对于巨大的佛像，在地面看到的画面中佛像各个部分的比例非常协调，然而，由于人眼看到的画面有近大远小的现象，佛像的头部实际上比离我们更近的身体部分更大。通过上述显示视角的调整过程，可以让游客感受不同视角下的宏伟，更切身地体会到古人在雕刻佛像时的智慧。
在另一种可能的实现方式中,用户可以在第一头戴式设备的第二设定位置处触发画面调整操作,以便第一头戴式设备可以响应于在第一头戴式设备的第二设定位置处的画面调整操作,按照画面调整操作所指示的画面放大倍数,对所显示的目标三维模型的画面进行放大或缩小。
其中,第二设定位置可以为位于第一头戴式设备的右侧镜腿上的具有Touch功能的区域,用户可以在右侧镜腿上的具有Touch功能的区域中进行滑动操作,以触发画面调整操作。
可选地，不同的滑动方向可以对应于画面放大倍数的不同调整方式，例如，在右侧镜腿上具有Touch功能的区域中向镜腿尾部的方向滑动时，可以对所显示的画面进行放大；相应地，在右侧镜腿上具有Touch功能的区域中向与镜腿尾部相反的方向滑动时，可以对所显示的画面进行缩小，以实现对画面放大倍数的调整。
由于大部分雕像都非常巨大，游客即使站在可供游览的最前面，依然距离较远，无法看清楚细节。通过上述过程，游客可以根据自己的观看需求来对画面进行放大或缩小，从而可以提高用户的观看体验。
此外,为了保证较好的观看体验,画面最好不要无限进行放大,因为无限放大可能会导致画面失真与模糊。
因而,在一些实施例中,响应于在第一头戴式设备的第二设定位置处的画面调整操作,在画面调整操作所指示的画面放大倍数达到设定放大倍数的情况下,可以显示第二提示信息,第二提示信息可以用于指示画面调整操作所指示的画面放大倍数已达到设定放大倍数。可选地,第二提示信息可以为语音提示信息、文字提示信息等,本发明对第二提示信息的具体类型不加以限定。
其中,设定放大倍数可以是预先设置好的。人仔细观察某物体时眼睛离物体的距离一般是固定的,此时对物体的观察效果较好,因而,可以将设定放大倍数设置为与此距离等效,以给观看者一种近距离观看物体的感觉,从而可以提高用户体验。
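达到设定放大倍数时停止放大并给出第二提示信息的逻辑可以示意如下（apply_zoom 为说明用的假设命名，max_zoom 的取值也仅为示例假设）：

```python
def apply_zoom(current_zoom, delta, max_zoom=4.0):
    """按滑动操作累计画面放大倍数，并在达到设定放大倍数时返回提示标志。

    max_zoom 为示例假设的设定放大倍数，下限 1.0 表示不缩小到原始画面以下。
    """
    # 将新的放大倍数限制在 [1.0, max_zoom] 范围内
    new_zoom = min(max(current_zoom + delta, 1.0), max_zoom)
    # 达到设定放大倍数时，应显示第二提示信息
    reached_limit = new_zoom >= max_zoom
    return new_zoom, reached_limit
```

这样即使用户继续滑动，画面也不会被无限放大而失真，同时通过返回的标志触发提示。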
参见图5,图5是根据本发明实施例示出的一种画面调整方式的示意图,如图5所示,用户可以在右侧镜腿上触发画面调整操作,以便第一头戴式设备可以按照画面调整操作所指示的画面放大倍数来对画面进行调整;此外,用户还可以在左侧镜腿上触发画面调整操作,以便第一头戴式设备可以按照画面调整操作所指示的显示视角来对画面进行调整。
上述过程仅为对画面进行调整的过程,可选地,头戴式设备的镜片上还可以设置有调光膜,可以通过对调光膜的不透明度进行调整,以为用户提供较佳的观看效果。例如,参见图6,图6是根据本发明实施例示出的一种调光膜的设置方式示意图,如图6所示,调光膜可以设置在镜片表面。
其中,调光膜可以为由聚合物分散液晶(Polymer Dispersed Liquid Crystal,PDLC)材料制成的膜,可以通过控制电压的大小,来控制膜的不透明度。
可选地,可以响应于在第一头戴式设备上的画面调整操作,将第一头戴式设备的调光膜设置为不透明状态。
通过将调光膜设置为不透明状态,可以使得用户仅能看到屏幕画面,避免因画面调整操作导致屏幕画面与实际画面不重合而影响用户的观看体验,提升用户的观看沉浸感。
或者,可以响应于在第一头戴式设备上的画面调整操作,获取环境光亮度,基于环境光亮度,对第一头戴式设备的调光膜的透明程度进行调整。
其中,不同的环境光亮度可以对应于调光膜不同的透明程度,该对应关系可以是预先设置好的,使得头戴式设备可以在获取到环境光亮度后,直接根据预先设置好的对应关系,确定出需要将调光膜的透明程度调整到哪个程度,从而对调光膜的透明程度进行调整。
通过根据环境光亮度值来对调光膜的透明程度进行调整,可以在保证人眼看到的实际画面与屏幕画面不重合的情况不会影响用户观看的情况下,避免镜片亮度突然变暗给人眼带来不适感。
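根据环境光亮度与预设对应关系调整调光膜透明程度的过程可以示意如下（其中的亮度阈值与透明度取值均为说明用的假设，并非本发明限定的对应关系）：

```python
def film_transparency(ambient_lux, table=((50, 0.2), (200, 0.5), (500, 0.8))):
    """根据环境光亮度查预设对应关系，返回调光膜透明程度。

    透明程度取值范围为 0（不透明）到 1（全透明）；
    table 中的 (亮度阈值, 透明程度) 均为示例假设。
    """
    # 按阈值从低到高查找，环境光越暗，调光膜越不透明
    for threshold, transparency in table:
        if ambient_lux <= threshold:
            return transparency
    # 环境光很亮时保持接近全透明，避免镜片亮度突变带来不适
    return 1.0
```

实际实现中该透明程度会进一步换算为施加在 PDLC 调光膜上的电压。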
参见图7,图7是根据本发明实施例示出的一种调光膜透明度的调整过程示意图,如图7所示,在检测到画面调整操作的情况下,可以直接将调光膜设置为不透明状态(也即是完全不透明),还可以获取环境光亮度,进而根据环境光来对调光膜的透明程度进行调整。
需要说明的是,在游览过程中,经常会出现父母带孩子游览的情况,然而,由于父母和孩子的身高不同,从而导致父母和孩子所看到的画面是不同的,为了让孩子可以体验到父母看到的画面,父母可以将自己的画面同步给孩子,让孩子可以体验到大人的视角。
在一些实施例中,在第一头戴式设备已与第二头戴式设备配对的情况下,第一头戴式设备可以向第二头戴式设备发送第一头戴式设备上所显示的画面,其中,第一头戴式设备可以为父母所使用的头戴式设备,第二头戴式设备可以为孩子所使用的头戴式设备。
其中，在对第一头戴式设备和第二头戴式设备进行配对时，可以采用蓝牙配对的方式，来实现第一头戴式设备和第二头戴式设备的配对，后续也可以通过蓝牙传输的方式来将第一头戴式设备的画面传输给第二头戴式设备，以实现第一头戴式设备和第二头戴式设备的画面同步。可选地，还可以通过无线保真（Wireless Fidelity，WiFi）传输的方式来实现第一头戴式设备和第二头戴式设备的画面同步，本发明对具体采用哪种方式不加以限定。
此外,在游览过程中,时常会出现孩子脱离父母视野而走丢的情况,因而,父母有时可能需要获得孩子的实时位置信息,以便父母可以及时找到孩子。
在一些实施例中,在第一头戴式设备已与第二头戴式设备配对的情况下,第一头戴式设备可以响应于在第一头戴式设备上的位置获取操作,获取第二头戴式设备的位置信息,以便第一头戴式设备可以对所获取到的位置信息进行显示。
其中,第一头戴式设备可以提供位置获取控件,用户可以通过触发位置获取控件,来触发位置获取操作,从而使得第一头戴式设备可以响应于位置获取操作,来获取第二头戴式设备的位置信息。
需要说明的是，头戴式设备可以设置有全球定位系统（Global Positioning System，GPS），以便第二头戴式设备可以获取到自身的位置信息，从而将获取到的位置信息发送给第一头戴式设备，使得第一头戴式设备可以获取到第二头戴式设备的位置信息。
可选地,第一头戴式设备还可以基于第一头戴式设备的位置以及第二头戴式设备的位置显示第一路线图,第一路线图用于指示从第一头戴式设备所处的位置达到第二头戴式设备所处的位置的路线,以便父母可以根据第一路线图的指示快速到达孩子所处的位置。
参见图8，图8是根据本发明实施例示出的一种亲子功能模式下头戴式设备的处理流程图，如图8所示，在亲子功能模式下，可以对父母所使用的第一头戴式设备和孩子所使用的第二头戴式设备进行配对，以便父母和孩子可以共享彼此位置，从而使得父母可以随时获知孩子的位置；此外，还可以在有同步父母视角的需求的情况下，将第一头戴式设备的画面传输给第二头戴式设备，以实现将父母设备画面传输给孩子设备。
另外,在游览过程中,还会经常出现导游带领游客游览的情况,而且,在游览过程中,导游会时不时地为游客进行讲解,在讲解时,导游会指向或者看向所讲解的景观,这时,游客可能不清楚导游具体指的是什么,自己应该看向哪里,此时导游可将自己的画面,同步给自己所带领的游客,以便游客可以清楚地知道导游所讲解的景点,增强游览体验。
在一些实施例中,在第一头戴式设备已与第二头戴式设备配对的情况下,第一头戴式设备可以向第二头戴式设备发送第一头戴式设备上所显示的画面,其中,第一头戴式设备可以为导游所使用的头戴式设备,第二头戴式设备可以为游客所使用的头戴式设备。
在一种可能的实现方式中,第一头戴式设备可以维护有一个设备标识列表,该设备标识列表可以用于存储该导游所带领的游客所使用的第二头戴式设备的设备标识,相当于实现第一头戴式设备和第二头戴式设备的配对,后续第一头戴式设备即可将画面传输给位于设备标识列表的第二头戴式设备。
可选地,可以采用蓝牙传输、WiFi传输等方式来实现第一头戴式设备和第二头戴式设备的画面同步,本发明对具体采用哪种方式不加以限定。
此外,在导游带领游客进行游览的过程中,在游览完一个景点时,经常会自由活动,然后再集合。此时,导游可以通过第一头戴式设备设置集合信息(包括集合时间和集合位置),以便在到达集合时间后,可以及时提醒游客集合。
在一种可能的实现方式中,在第一头戴式设备已与第二头戴式设备配对的情况下,响应于达到集合时间,第一头戴式设备可以向第二头戴式设备发送集合信息,第二头戴式设备用于在接收到集合信息的情况下,显示第二路线图,第二路线图可以用于指示从第二头戴式设备所处的位置达到集合位置的路线。
通过在接收到集合信息时显示第二路线图,以便游客可以根据导航路线,快速到达集合地点。
参见图9,图9是根据本发明实施例示出的一种导游功能模式下头戴式设备的处理流程图,如图9所示,在导游功能模式下,可以在第一头戴式设备中添加第二头戴式设备(也即是游客设备),以便可以在有同步导游视角的需求的情况下,将第一头戴式设备的画面传输给第二头戴式设备,以实现将导游设备画面传输给游客设备;此外,导游还可以通过第一头戴式设备设置集合时间和集合位置,以便游客可以在达到集合时间的情况下,通过第二头戴式设备导航至集合位置。
本发明的实施例还提出了一种画面显示装置,参见图10,图10是根据本发明实施例示出的一种画面显示装置的框图,应用于第一头戴式设备,该装置可以包括:
采集模块1001,用于采集包含目标物体的图像;
确定模块1002,用于基于包含目标物体的图像,确定目标物体的目标特征矩阵;
获取模块1003，用于基于目标物体的目标特征矩阵，从目标数据库中，获取与目标物体匹配的目标三维模型，目标数据库用于存储至少一个候选三维模型以及每个候选三维模型对应的特征矩阵，候选三维模型基于通过图像扫描设备扫描得到的初始三维模型进行残缺部分补全和/或色彩复原得到；
显示模块1004,用于对目标三维模型进行显示。
在一些实施例中,获取模块1003,在用于基于目标物体的目标特征矩阵,从目标数据库中,获取与目标物体匹配的目标三维模型时,用于:
对目标特征矩阵与目标数据库中所存储的特征矩阵进行匹配;
将与目标特征矩阵匹配的特征矩阵对应的候选三维模型,确定为目标三维模型。
在一些实施例中,目标数据库的构建过程包括:
获取通过图像扫描设备扫描得到的初始三维模型;
获取基于初始三维模型进行残缺部分补全和/或色彩复原得到的候选三维模型;
提取候选三维模型在不同角度下的特征矩阵;
将候选三维模型和所提取到的特征矩阵存储至目标数据库。
在一些实施例中,目标数据库还用于存储至少一个候选三维模型对应的媒体数据,媒体数据用于以视频或音频的形式介绍候选三维模型;
显示模块1004,还用于显示第一提示信息,第一提示信息用于询问是否播放目标三维模型对应的媒体数据。
在一些实施例中,获取模块1003,还用于响应于接收到基于第一提示信息的第一反馈信息,从目标数据库中获取目标三维模型对应的媒体数据,第一反馈信息用于指示需要播放目标三维模型对应的媒体数据;
该装置还包括:
播放模块,用于播放所获取到的媒体数据。
在一些实施例中,该装置还包括:
调整模块,用于响应于在第一头戴式设备上的画面调整操作,按照画面调整操作指示的显示视角和/或画面放大倍数,对所显示的目标三维模型的画面进行调整。
在一些实施例中,调整模块,在用于响应于在第一头戴式设备上的画面调整操作,按照画面调整操作指示的显示视角和/或画面放大倍数,对所显示的目标三维模型的画面进行调整时,用于下述至少一项:
响应于在第一头戴式设备的第一设定位置处的画面调整操作，按照画面调整操作所指示的显示视角，显示目标三维模型在画面调整操作所指示的显示视角下的画面；
响应于在第一头戴式设备的第二设定位置处的画面调整操作,按照画面调整操作所指示的画面放大倍数,对所显示的目标三维模型的画面进行放大或缩小。
在一些实施例中,显示模块1004,还用于响应于在第一头戴式设备的第二设定位置处的画面调整操作,在画面调整操作所指示的画面放大倍数达到设定放大倍数的情况下,显示第二提示信息,第二提示信息用于指示画面调整操作所指示的画面放大倍数已达到设定放大倍数。
在一些实施例中,调整模块,还用于响应于在第一头戴式设备上的画面调整操作,将第一头戴式设备的调光膜设置为不透明状态;
调整模块,还用于响应于在第一头戴式设备上的画面调整操作,获取环境光亮度,基于环境光亮度,对第一头戴式设备的调光膜的透明程度进行调整。
在一些实施例中,该装置还包括:
第一发送模块,用于在第一头戴式设备已与第二头戴式设备配对的情况下,向第二头戴式设备发送第一头戴式设备上所显示的画面。
在一些实施例中,获取模块1003,还用于在第一头戴式设备已与第二头戴式设备配对的情况下,响应于在第一头戴式设备上的位置获取操作,获取第二头戴式设备的位置信息;
显示模块,还用于对所获取到的位置信息进行显示。
在一些实施例中,显示模块,还用于显示第一路线图,第一路线图用于指示从第一头戴式设备所处的位置达到第二头戴式设备所处的位置的路线。
在一些实施例中,该装置还包括:
设置模块,用于通过第一头戴式设备设置集合时间和集合位置;
第二发送模块,用于在第一头戴式设备已与第二头戴式设备配对的情况下,响应于达到集合时间,向第二头戴式设备发送集合信息,第二头戴式设备用于在接收到集合信息的情况下,显示第二路线图,第二路线图用于指示从第二头戴式设备所处的位置达到集合位置的路线。
在一些实施例中,目标物体为石窟雕像类历史文物。
上述装置中各个模块的功能和作用的实现过程具体详见上述方法中对应步骤的实现过程,在此不再赘述。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本说明书方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
本发明还提供了一种头戴式设备,参见图11,图11是根据本发明实施例提供的一种头戴式设备的结构示意图。如图11所示,头戴式设备包括处理器1101、存储器1102、网络接口1103、第一中断1104、第二中断1105、光感设备1106、摄像设备1107、显示设备1108、GPS和WiFi模块1109,存储器1102用于存储可在处理器1101上运行的计算机程序代码,处理器1101用于在执行该计算机程序代码时实现本发明任一实施例所提供的画面显示方法,网络接口1103用于实现输入输出功能。
此外,第一中断1104可以为右侧镜腿Touch中断,收到第一中断后,处理器1101可以进行画面放大系数的确定;第二中断1105可以为左侧镜腿Touch中断,收到第二中断后,处理器1101可以进行显示视角的确定;光感设备1106可以用于获取环境光亮度,摄像设备1107可以用于拍摄实时画面,显示设备1108可以用于显示三维模型的画面;对于GPS和WiFi模块1109,GPS可以实时获取当前位置,并提供导航的精确定位,WiFi可以保证头戴式设备间的数据传输。
在更多可能的实现方式中,头戴式设备还可以包括其他硬件,本发明对此不做限定。
本发明还提供了一种计算机可读存储介质，计算机可读存储介质可以是多种形式，比如，在不同的例子中，计算机可读存储介质可以是：RAM（Random Access Memory，随机存取存储器）、易失存储器、非易失性存储器、闪存、存储驱动器（如硬盘驱动器）、固态硬盘、任何类型的存储盘（如光盘、DVD等），或者类似的存储介质，或者它们的组合。特殊地，计算机可读介质还可以是纸张或者其他合适的能够打印程序的介质。计算机可读存储介质上存储有计算机程序，计算机程序被处理器执行时实现本发明任一实施例所提供的画面显示方法。
本发明还提供了一种计算机程序产品,包括计算机程序,计算机程序被处理器执行时实现本发明任一实施例所提供的画面显示方法。
在本发明中,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性。术语“多个”指两个或两个以上,除非另有明确的限定。
本领域技术人员在考虑说明书及实践这里公开的公开后,将容易想到本发明的其它实施方案。本发明旨在涵盖本发明的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本发明的一般性原理并包括本发明未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本发明的真正范围和精神由权利要求指出。
应当理解的是,本发明并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本发明的范围仅由所附的权利要求来限制。

Claims (17)

  1. 一种画面显示方法,其特征在于,应用于第一头戴式设备,所述方法包括:
    采集包含目标物体的图像;
    基于包含目标物体的图像,确定所述目标物体的目标特征矩阵;
    基于所述目标物体的目标特征矩阵,从目标数据库中,获取与所述目标物体匹配的目标三维模型,所述目标数据库用于存储至少一个候选三维模型以及每个候选三维模型对应的特征矩阵,所述候选三维模型基于通过图像扫描设备扫描得到的初始三维模型进行残缺部分补全和/或色彩复原得到;
    对所述目标三维模型进行显示。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述目标物体的目标特征矩阵,从目标数据库中,获取与所述目标物体匹配的目标三维模型,包括:
    对所述目标特征矩阵与所述目标数据库中所存储的特征矩阵进行匹配;
    将与所述目标特征矩阵匹配的特征矩阵对应的候选三维模型,确定为所述目标三维模型。
  3. 根据权利要求1所述的方法,其特征在于,所述目标数据库的构建过程包括:
    获取通过图像扫描设备扫描得到的初始三维模型;
    获取基于所述初始三维模型进行残缺部分补全和/或色彩复原得到的候选三维模型;
    提取所述候选三维模型在不同角度下的特征矩阵;
    将所述候选三维模型和所提取到的特征矩阵存储至所述目标数据库。
  4. 根据权利要求1所述的方法,其特征在于,所述目标数据库还用于存储所述至少一个候选三维模型对应的媒体数据,所述媒体数据用于以视频或音频的形式介绍所述候选三维模型;
    所述对所述目标三维模型进行显示之后,所述方法还包括:
    显示第一提示信息,所述第一提示信息用于询问是否播放所述目标三维模型对应的媒体数据。
  5. 根据权利要求4所述的方法,其特征在于,所述显示第一提示信息之后,所述方法还包括:
    响应于接收到基于所述第一提示信息的第一反馈信息,从所述目标数据库中获取所述目标三维模型对应的媒体数据,所述第一反馈信息用于指示需要播放所述目标三维模型对应的媒体数据;
    播放所获取到的媒体数据。
  6. 根据权利要求1所述的方法,其特征在于,所述对所述目标三维模型进行显示之后,所述方法还包括:
    响应于在所述第一头戴式设备上的画面调整操作,按照所述画面调整操作指示的显示视角和/或画面放大倍数,对所显示的目标三维模型的画面进行调整。
  7. 根据权利要求6所述的方法,其特征在于,所述响应于在所述第一头戴式设备上的画面调整操作,按照所述画面调整操作指示的显示视角和/或画面放大倍数,对所显示的目标三维模型的画面进行调整,包括下述至少一项:
    响应于在所述第一头戴式设备的第一设定位置处的画面调整操作，按照所述画面调整操作所指示的显示视角，显示所述目标三维模型在所述画面调整操作所指示的显示视角下的画面；
    响应于在所述第一头戴式设备的第二设定位置处的画面调整操作,按照所述画面调整操作所指示的画面放大倍数,对所显示的目标三维模型的画面进行放大或缩小。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    响应于在所述第一头戴式设备的第二设定位置处的画面调整操作，在所述画面调整操作所指示的画面放大倍数达到设定放大倍数的情况下，显示第二提示信息，所述第二提示信息用于指示所述画面调整操作所指示的画面放大倍数已达到设定放大倍数。
  9. 根据权利要求6所述的方法,其特征在于,所述方法还包括下述任一项:
    响应于在所述第一头戴式设备上的画面调整操作,将所述第一头戴式设备的调光膜设置为不透明状态;
    响应于在所述第一头戴式设备上的画面调整操作,获取环境光亮度,基于所述环境光亮度,对所述第一头戴式设备的调光膜的透明程度进行调整。
  10. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在所述第一头戴式设备已与第二头戴式设备配对的情况下,向所述第二头戴式设备发送所述第一头戴式设备上所显示的画面。
  11. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在所述第一头戴式设备已与第二头戴式设备配对的情况下,响应于在所述第一头戴式设备上的位置获取操作,获取所述第二头戴式设备的位置信息;
    对所获取到的位置信息进行显示。
  12. 根据权利要求11所述的方法,其特征在于,所述方法还包括:
    显示第一路线图,所述第一路线图用于指示从所述第一头戴式设备所处的位置达到所述第二头戴式设备所处的位置的路线。
  13. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    通过所述第一头戴式设备设置集合时间和集合位置;
    所述方法还包括:
    在所述第一头戴式设备已与第二头戴式设备配对的情况下,响应于达到集合时间,向所述第二头戴式设备发送集合信息,所述第二头戴式设备用于在接收到集合信息的情况下,显示第二路线图,所述第二路线图用于指示从所述第二头戴式设备所处的位置达到所述集合位置的路线。
  14. 根据权利要求1所述的方法,其特征在于,所述目标物体为石窟雕像类历史文物。
  15. 一种画面显示装置,其特征在于,应用于第一头戴式设备,所述装置包括:
    采集模块,用于采集包含目标物体的图像;
    确定模块,用于基于包含目标物体的图像,确定所述目标物体的目标特征矩阵;
    获取模块,用于基于所述目标物体的目标特征矩阵,从目标数据库中,获取与所述目标物体匹配的目标三维模型,所述目标数据库用于存储至少一个候选三维模型以及每个候选三维模型对应的特征矩阵,所述候选三维模型基于通过图像扫描设备扫描得到的初始三维模型进行残缺部分补全和/或色彩复原得到;
    显示模块,用于对所述目标三维模型进行显示。
  16. 一种头戴式设备,其特征在于,所述头戴式设备包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述计算机程序时实现如权利要求1至14中任一项所述的画面显示方法所执行的操作。
  17. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有程序,所述程序被处理器执行时,实现如权利要求1至14中任一项所述的画面显示方法所执行的操作。
PCT/CN2023/105985 2022-07-26 2023-07-06 画面显示方法、装置、设备及介质 WO2024022070A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210885790.0A CN115237363A (zh) 2022-07-26 2022-07-26 画面显示方法、装置、设备及介质
CN202210885790.0 2022-07-26

Publications (1)

Publication Number Publication Date
WO2024022070A1 true WO2024022070A1 (zh) 2024-02-01

Family

ID=83675299

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105985 WO2024022070A1 (zh) 2022-07-26 2023-07-06 画面显示方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN115237363A (zh)
WO (1) WO2024022070A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115237363A (zh) * 2022-07-26 2022-10-25 京东方科技集团股份有限公司 画面显示方法、装置、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127633A (zh) * 2019-12-20 2020-05-08 支付宝(杭州)信息技术有限公司 三维重建方法、设备以及计算机可读介质
CN111414225A (zh) * 2020-04-10 2020-07-14 北京城市网邻信息技术有限公司 三维模型远程展示方法、第一终端、电子设备及存储介质
CN111681320A (zh) * 2020-06-12 2020-09-18 贝壳技术有限公司 三维房屋模型中的模型展示方法及装置
CN113762059A (zh) * 2021-05-24 2021-12-07 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及可读存储介质
CN115237363A (zh) * 2022-07-26 2022-10-25 京东方科技集团股份有限公司 画面显示方法、装置、设备及介质

Also Published As

Publication number Publication date
CN115237363A (zh) 2022-10-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23845283

Country of ref document: EP

Kind code of ref document: A1