WO2017092332A1 - Method and device for image rendering processing - Google Patents

Method and device for image rendering processing (一种渲染图像的处理方法及装置)

Info

Publication number
WO2017092332A1
WO2017092332A1 (PCT/CN2016/089266)
Authority
WO
WIPO (PCT)
Prior art keywords
state
target
scene
difference
sequence
Prior art date
Application number
PCT/CN2016/089266
Other languages
English (en)
French (fr)
Inventor
胡雪莲 (Hu Xuelian)
Original Assignee
乐视控股(北京)有限公司 (Le Holdings (Beijing) Co., Ltd.)
乐视致新电子科技(天津)有限公司 (Leshi Zhixin Electronic Technology (Tianjin) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Priority to US15/249,738 (published as US20170163958A1)
Publication of WO2017092332A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 - Processing image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012 - Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 - Image reproducers
    • H04N 13/366 - Image reproducers using viewer tracking
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 - Image reproducers
    • H04N 13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering

Definitions

  • The present invention relates to the field of virtual reality technology, and in particular to a processing method for rendering images and a processing device for rendering images.
  • Virtual reality (VR), also known as 灵境 ("spiritual realm") technology, is a multi-dimensional sensory environment, visual, auditory, tactile and so on, generated in whole or in part by computer.
  • Through auxiliary sensing devices such as head-mounted displays and data gloves, it provides a multi-dimensional human-machine interface for observing and interacting with the virtual environment, so that people can enter the virtual environment, directly observe the internal changes of things and interact with them, producing an "immersive" sense of reality.
  • With the rapid development of VR technology, VR theater systems based on mobile terminals have also developed rapidly.
  • In a mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, so that the user's visual system and motion-perception system are linked and the experience feels more realistic.
  • To obtain a good image display effect, the mobile-terminal-based VR theater system needs to continuously render images in real time, i.e., render the scene graph and the video frame images.
  • However, in implementing the present invention, the inventor found that rendering an image involves a large amount of computation, which means the rendered image cannot be generated quickly; that is, the frame rate of the images displayed by the mobile terminal is low.
  • The technical problem to be solved by the embodiments of the present invention is to provide a processing method for rendering images that improves image rendering efficiency and achieves real-time rendering, thereby increasing the frame rate of images displayed by the mobile terminal.
  • The embodiments of the present invention further provide a processing device for rendering images, to ensure the implementation and application of the above method.
  • Accordingly, an embodiment of the present invention discloses a processing method for rendering images, including:
  • performing state detection on a target head to generate a target state sequence;
  • determining the state of the target head according to the target state sequence;
  • when the target head is in a stable state, obtaining a pre-generated quasi-scene graph from the scene buffer, and using the obtained quasi-scene graph as the target scene graph;
  • rendering the video frame image based on the target scene graph to generate a rendered image.
  • An embodiment of the present invention further discloses a processing device for rendering images, comprising:
  • a target state sequence generation module, configured to perform state detection on the target head and generate a target state sequence;
  • a state determination module, configured to determine the state of the target head according to the target state sequence;
  • a scene graph acquisition module, configured to obtain a pre-generated quasi-scene graph from the scene buffer when the target head is in a stable state, and use the obtained quasi-scene graph as the target scene graph;
  • a rendered image generation module, configured to render the video frame image based on the target scene graph to generate a rendered image.
  • An embodiment of the present invention provides a computer program comprising computer-readable code which, when run on a mobile terminal, causes the mobile terminal to perform the processing method for rendering images described above.
  • Embodiments of the present invention provide a computer readable medium in which the above computer program is stored.
  • Compared with the prior art, the embodiments of the present invention include the following advantages:
  • The embodiments of the present invention detect the state of the target head and, when the target head is in a stable state, obtain a pre-generated quasi-scene graph from the scene buffer and use the obtained quasi-scene graph as the target scene graph for rendering the video frame image.
  • Rendering the video frame image to generate the rendered image in this way skips the scene-rendering step while the target head is stable, thereby saving image-rendering time, improving image rendering efficiency, achieving real-time rendering, and increasing the frame rate of images displayed by the mobile terminal.
  • FIG. 1 is a flowchart of the steps of an embodiment of a processing method for rendering images according to the present invention;
  • FIG. 2 is a flowchart of the steps of a preferred embodiment of a processing method for rendering images according to the present invention;
  • FIG. 3A is a structural block diagram of an embodiment of a processing device for rendering images according to the present invention;
  • FIG. 3B is a structural block diagram of a preferred embodiment of a processing device for rendering images according to the present invention;
  • FIG. 4 schematically shows a block diagram of a mobile terminal for carrying out the method according to the present invention; and
  • FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
  • In a mobile-terminal-based VR theater system, it is necessary to continuously render images in real time, i.e., to render the theater scene (the scene graph) and the video content (the video frame images).
  • One of the core ideas of the embodiments of the present invention is to monitor when the user's head is relatively stable and to cache the scene graph for that state space as a quasi-scene graph, so that the scene-rendering step can be skipped during image rendering.
  • The pre-generated quasi-scene graph is obtained directly from the scene buffer and used as the target scene graph, and the video frame image is rendered based on the target scene graph to generate the rendered image, which improves rendering efficiency, reduces the frame delay caused by rendering images, and increases the frame rate of images displayed by the mobile terminal.
  • Referring to FIG. 1, a flowchart of the steps of an embodiment of a processing method for rendering images according to the present invention is shown. Specifically, the method may include the following steps:
  • Step 101: Perform state detection on the target head to generate a target state sequence.
  • In the mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, so that the user's visual system and motion-perception system are linked and the experience feels more realistic.
  • Usually, the position of the user's head can be tracked by a position tracker to determine the motion state of the user's head.
  • A position tracker, also called a position locator, is a device for spatial tracking and positioning. It is generally used in combination with other VR devices, such as data helmets, stereo glasses and data gloves, so that participants can move and rotate freely in space rather than being confined to a fixed position.
  • The mobile-terminal-based VR system can determine the state of the user's head by detection, determine the field-of-view angle of the image based on the head state, and render the image according to the determined angle to obtain a better image display effect.
  • A mobile terminal is a computer device that can be used on the move, such as a smartphone, laptop or tablet computer; the embodiments of the present invention do not limit this.
  • The embodiments of the present invention are described in detail below taking a mobile phone as an example.
  • The phone-based VR theater system can monitor the movement of the user's head through auxiliary sensing devices such as a data helmet, stereo glasses or data gloves, with the monitored user head taken as the target head.
  • State detection is performed on the target head, so that the state information of the target head relative to the phone's display can be determined.
  • Based on that state information, the state data corresponding to the user's current state can be obtained by computation.
  • For example, after the user puts on a data helmet, by monitoring the rotation of the target head, the angle of the target head relative to the phone's display can be calculated, i.e., state data is generated.
  • Specifically, the computation may use any one or more of the head orientation, movement direction, speed and similar data corresponding to the user's current state to produce the angle of the target head relative to the phone's display.
  • The VR system can store the generated state data in the corresponding state sequence to generate the target state sequence corresponding to the target head, for example by storing the angles of target head A relative to the phone's display at successive times into the corresponding state sequence, forming the target state sequence LA corresponding to target head A.
  • The target state sequence LA can store n items of state data, where n is a positive integer such as 30, 10 or 15; the embodiments of the present invention do not limit this.
  • In a preferred embodiment of the present invention, step 101 may include the following sub-steps:
  • Sub-step 1010: Acquire the data collected by the sensors and generate the state data corresponding to the target head.
  • Sub-step 1012: Generate the target state sequence from the generated state data.
  • Step 103: Determine the state of the target head according to the target state sequence.
  • The VR system can judge whether the target head has entered a stable state according to the state data in the target state sequence corresponding to the target head. Specifically, based on all the state data saved in the target state sequence LA, the VR system judges the state of the target head by checking whether the range of variation of the state data saved in the sequence is within a preset stable range, i.e., it judges whether the target head is in a stable state or in a moving state.
  • In the VR theater system, whether the target head is in a stable state can be judged by checking whether the state difference corresponding to the target state sequence (equivalent to the range of variation of the state data) is within a preset steady-state range. When the state difference corresponding to the target state sequence is within the preset steady-state range, the target head can be judged to be in a stable state.
  • For example, it can be judged whether the range of variation of the angle of the target head relative to the phone's display (i.e., the state difference) is within the preset steady-state range; if so, the target head can be judged to be in a stable state, i.e., stationary relative to the phone's display; otherwise, it can be judged to be in a moving state, i.e., moving relative to the phone's display.
  • Optionally, step 103 may include: performing statistics on the state data of the target state sequence to determine a state difference; judging whether the state difference is within a preset steady-state range; and, when the state difference is within the steady-state range, judging that the target head is in a stable state.
  • Step 105: When the target head is in a stable state, obtain the pre-generated quasi-scene graph from the scene buffer, and use the obtained quasi-scene graph as the target scene graph.
  • During image rendering, the VR theater system can render the current scene through the scene model, generate the scene graph of the current scene, and save the generated scene graph.
  • After the user settles into a viewing posture and enters a relatively stable state, i.e., the target head enters the stable state, the scene graph of the current scene generated through the scene model can be used as the quasi-scene graph and stored in the scene buffer. Therefore, when the target head is in a stable state, the quasi-scene graph generated when the target head entered the stable state can be extracted directly from the scene buffer and used as the target scene graph.
  • In this way, when rendering the target image, the scene-rendering step is skipped and image rendering efficiency is improved.
  • Step 107: Render the video frame image based on the target scene graph to generate the rendered image.
  • In practice, during rendering, the VR theater system can take the image currently being rendered as the target image, and the scene in which the target image is located as the target scene.
  • After generating the scene graph of the target scene, i.e., the target scene graph, the VR theater system renders the video frame image corresponding to the target image based on the target scene graph, generating the rendered image corresponding to the target image and completing the rendering of the target image.
  • Specifically, once the target scene graph is generated, the VR theater system can display a rectangle at a fixed position on the screen and render the video frame image onto that rectangle, i.e., generate the rendered image and complete this round of image rendering.
  • In the embodiments of the present invention, the mobile-terminal-based VR theater system can generate a target state sequence by detecting the state of the target head, determine the state of the target head according to the target state sequence, and, when the target head is in a stable state, obtain the pre-generated quasi-scene graph from the scene buffer, use the obtained quasi-scene graph as the target scene graph, and render the video frame image to generate the rendered image, thereby skipping the scene-rendering step, improving image rendering efficiency and achieving real-time rendering.
  • In a preferred embodiment of the present invention, the processing method for rendering images further includes a step of generating the quasi-scene graph in advance.
  • The step of generating the quasi-scene graph may include: when the target head enters a moving state, rendering the current scene based on the scene model to generate the quasi-scene graph; and saving the generated quasi-scene graph in the scene buffer.
  • Specifically, when the target head is judged to have entered the stable state, the VR theater system can invoke the scene model, render the scene that needs to be rendered based on the scene model, generate the scene graph of the current scene, use that scene graph as the quasi-scene graph corresponding to the stable state, and store the quasi-scene graph in the scene buffer.
  • Therefore, while the target head is in the stable state, the VR theater system can extract the quasi-scene graph corresponding to the stable state directly from the scene buffer and use it as the target scene graph, so that the scene-rendering step is skipped while the target head is stable, saving scene-rendering time, roughly 50% or more.
  • By skipping the scene-rendering step, the embodiments of the present invention reduce the time needed to render an image, i.e., they reduce rendering latency and increase the frame rate of images displayed by the mobile terminal, thereby solving the problem of user dizziness caused by rendering latency; that is, a better image display effect is obtained and the user experience is improved.
  • Referring to FIG. 2, a flowchart of the steps of a preferred embodiment of a processing method for rendering images according to the present invention is shown. Specifically, the method may include the following steps:
  • Step 201: Acquire the data collected by the sensors and generate the state data corresponding to the target head.
  • VR devices such as data helmets, stereo glasses and data gloves monitor the target head, usually collecting data through sensors.
  • Specifically, the phone's attitude (i.e., the screen orientation) can be detected by the gyroscope, and the magnitude and direction of the acceleration applied to the phone can be detected by the accelerometer.
  • The screen orientation is equivalent to the head orientation.
  • For example, after the head orientation is determined, the phone-based VR system can compute the field-of-view angles of the left and right eyes from parameters such as the vertical and horizontal visual ranges of the two eyes, and the angle of the target head relative to the display can then be determined based on the field-of-view angles of the left and right eyes, i.e., the state data is generated.
  • Step 203: Generate the target state sequence from the generated state data.
  • The VR system can store the generated state data into the corresponding state sequence in order, generating the target state sequence corresponding to the target head; for example, the angles N1, N2, N3 ... Nn of target head A relative to the phone's display at successive instants are stored in order into the corresponding state sequence LA, i.e., the target state sequence LA corresponding to target head A is generated.
  • To keep image rendering efficient and the computed field-of-view angle of the target scene accurate, the target state sequence LA is preferably a sequence holding 15 items of state data N, i.e., the 15 most recently generated items of state data are stored in the target state sequence LA.
  • Specifically, within one second the sensors can collect multiple samples, so the phone-based VR system can generate multiple items of state data.
  • To improve the accuracy of the state data, the VR system can aggregate the state data generated within every X seconds, compute the average N of all state data in each X-second window, and save the average N, i.e., store the average N into the sequence.
  • X is an integer such as 1, 2, 3 or 4.
  • For example, storing the average N of the state data computed every 4 seconds into a sequence holding 15 items of state data generates the target state sequence LA.
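  • As a concrete illustration (not part of the patent text), the rolling, window-averaged state sequence described in this step can be sketched in Python as follows; the 15-item capacity, the X = 4 second window and the angle-valued state data follow the example above, while the class and method names are hypothetical:

```python
from collections import deque

class TargetStateSequence:
    """Rolling target state sequence LA of head-to-screen angles (degrees)."""

    def __init__(self, capacity=15, window_seconds=4):
        self.window_seconds = window_seconds    # X-second averaging window
        self.samples = []                       # raw angles in the current window
        self.sequence = deque(maxlen=capacity)  # oldest average N drops out first

    def add_sample(self, angle_deg):
        """Collect one raw sensor-derived angle within the current window."""
        self.samples.append(angle_deg)

    def close_window(self):
        """Every X seconds, store the average N of the window into the sequence."""
        if self.samples:
            self.sequence.append(sum(self.samples) / len(self.samples))
            self.samples.clear()
```

  • Averaging each window before storing it smooths sensor noise, so a single jittery sample does not by itself push the sequence out of the steady-state range.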
  • Step 205: Perform statistics on the state data of the target state sequence to determine the state difference.
  • In a preferred embodiment of the present invention, step 205 may include the following sub-steps:
  • Sub-step 2050: Compute over the state data of the target state sequence to determine the maximum, minimum and average of the target state sequence.
  • In practice, all state data in the target state sequence LA can be compared to determine the minimum S and maximum B of the sequence, and the average M of all its state data can be obtained by computation.
  • Sub-step 2052: Calculate the first difference between the average and the maximum, and the second difference between the average and the minimum.
  • Specifically, the difference between the maximum B in the target sequence LA and the average M can be obtained by computation and labeled the first difference; likewise, the difference between the minimum S in the sequence LA and the average M can be obtained and labeled the second difference.
  • Sub-step 2054: Determine the state difference based on the first difference and the second difference.
  • The VR system may use the first difference or the second difference as the state difference corresponding to the target head.
  • Preferably, the larger of the first difference and the second difference is selected as the state difference corresponding to the target head.
  • Specifically, the first difference is compared with the second difference: when the first difference is greater than the second difference, the first difference is used as the state difference; otherwise, the second difference is used as the state difference.
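  • Sub-steps 2050 to 2054 then reduce to simple statistics over the sequence. A minimal sketch, assuming the angle-valued sequence built above (the function name is hypothetical):

```python
def state_difference(sequence):
    """Sub-steps 2050-2054: derive the state difference from the sequence LA."""
    data = list(sequence)
    maximum = max(data)              # maximum value B
    minimum = min(data)              # minimum value S
    average = sum(data) / len(data)  # average value M
    first_diff = maximum - average   # first difference:  B - M
    second_diff = average - minimum  # second difference: M - S
    # Preferably the larger of the two differences is the state difference.
    return first_diff if first_diff > second_diff else second_diff
```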
  • Step 207: Judge whether the state difference is within the preset steady-state range.
  • When the state difference is within the steady-state range, it can be judged that the target head is in a stable state, and step 209 is executed; when the state difference is not within the steady-state range, it can be judged that the target head is in a moving state, and step 211 is executed.
  • In practice, the VR theater system can preset a steady-state range used to judge whether the target head has entered a stable state, i.e., whether the target head is in a stable state.
  • Specifically, the state of the target head can be determined by judging whether the state difference corresponding to the target head is within the preset steady-state range.
  • As in the example above, the state data is the angle of the target head relative to the phone's display, so the state difference is equivalent to the angle through which the target head has moved relative to the display.
  • The phone-based VR system can preset the stability threshold to 3 degrees, i.e., the preset steady-state range is 0 to 3 degrees; whether the target head has entered a relatively stable state is then determined by whether its state difference is less than 3 degrees.
  • When the state difference corresponding to the target head is less than 3 degrees, target head A is judged to be in a stable state and step 209 is executed; when the state difference is not less than 3 degrees, target head A is judged to be in a moving state, i.e., it exits the stable state, the system returns to normal rendering mode, and step 211 is executed.
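  • Step 207 is then a single threshold test. A sketch assuming the 3-degree example threshold above (names hypothetical):

```python
STEADY_STATE_THRESHOLD_DEG = 3.0  # preset steady-state range: 0 to 3 degrees

def is_stable(sequence):
    """Step 207: the head is stable while the state difference is under 3 degrees."""
    return state_difference(sequence) < STEADY_STATE_THRESHOLD_DEG
```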
  • Step 209: Obtain the pre-generated quasi-scene graph from the scene buffer, and use the obtained quasi-scene graph as the target scene graph.
  • When the target head is stable, the VR theater system can obtain the quasi-scene graph corresponding to the stable state directly from the scene buffer and use it as the target scene graph of the current theater scene; the target scene graph is thus produced without going through the scene model, skipping the scene-rendering step for the current scene, i.e., step 211 is not executed and execution jumps directly to step 213.
  • Step 211: Render the current scene based on the scene model to generate the target scene graph.
  • When the target head is in a moving state, the current theater scene (i.e., the current scene) needs to be rendered based on the scene model to generate the scene graph of the current scene.
  • Specifically, when rendering the current scene, the current scene can be taken as the target scene, and the target scene graph is generated by invoking the scene model to render the target scene.
  • Step 213: Render the video frame image based on the target scene graph to generate the rendered image.
  • Specifically, the VR system renders the video frame image corresponding to the target scene into the rectangle formed on the screen by the target scene graph, generating the rendered image corresponding to the target scene, i.e., displaying the rendered image on the display.
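  • Putting steps 207 to 213 together, the per-frame control flow could look like the following sketch; render_scene and render_video_frame are hypothetical stand-ins for the scene model and the compositor, not APIs named by the patent:

```python
scene_buffer = {}  # scene buffer holding the quasi-scene graph for the stable state

def render_scene():
    """Stand-in for step 211: the scene model renders the theater scene graph."""
    return {"rectangle": (0, 0, 1280, 720)}  # fixed-position rectangle on screen

def render_video_frame(scene_graph, frame):
    """Stand-in for step 213: draw the video frame onto the scene's rectangle."""
    return (scene_graph["rectangle"], frame)

def render_frame(sequence, video_frame):
    """One iteration of the FIG. 2 flow (steps 207 to 213)."""
    if is_stable(sequence) and "quasi" in scene_buffer:
        # Step 209: reuse the cached quasi-scene graph as the target scene graph.
        target_scene_graph = scene_buffer["quasi"]
    else:
        # Step 211: normal rendering mode; cache the result for the stable state.
        target_scene_graph = render_scene()
        scene_buffer["quasi"] = target_scene_graph
    # Step 213: render the video frame image based on the target scene graph.
    return render_video_frame(target_scene_graph, video_frame)
```

  • Skipping render_scene on stable frames is where the saving described above (roughly half the scene-rendering time or more) would come from.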
  • In the embodiments of the present invention, by monitoring the state of the target head, when the target head is in the stable state the quasi-scene graph corresponding to the stable state can be extracted directly from the scene buffer and used as the target scene graph.
  • This skips the scene-rendering step, improves image rendering efficiency and reduces rendering latency, thereby solving the problem of user dizziness caused by rendering latency; that is, a better image display effect is obtained and the user experience is improved.
  • Referring to FIG. 3A, a structural block diagram of an embodiment of a processing device for rendering images according to the present invention is shown, which may specifically include the following modules:
  • The state sequence generation module 301 can be configured to perform state detection on the target head and generate a target state sequence.
  • The state determination module 303 can be configured to determine the state of the target head according to the target state sequence.
  • The scene graph acquisition module 305 can be configured to obtain the pre-generated quasi-scene graph from the scene buffer when the target head is in a stable state, and use the obtained quasi-scene graph as the target scene graph.
  • The rendered image generation module 307 can be configured to render the video frame image based on the target scene graph to generate the rendered image.
  • Optionally, the processing device for rendering images may further include a scene graph generation module 309; see FIG. 3B.
  • The scene graph generation module 309 can be configured to generate the quasi-scene graph in advance.
  • Optionally, the scene graph generation module 309 can include the following sub-modules:
  • The scene graph generation sub-module 3090 is configured to render the current scene based on the scene model when the target head enters a moving state, generating the quasi-scene graph.
  • The scene graph saving sub-module 3092 is configured to save the generated quasi-scene graph in the scene buffer.
  • In a preferred embodiment of the present invention, the state sequence generation module 301 may include the following sub-modules:
  • The state data generation sub-module 3010 can be configured to acquire the data collected by the sensors and generate the state data corresponding to the target head.
  • The state sequence generation sub-module 3012 can be configured to generate the target state sequence from the generated state data.
  • the state determining module 303 may include the following submodules:
  • the state difference determining sub-module 3030 can be configured to perform statistics on the state data of the target state sequence to determine a state difference.
  • the state difference determination sub-module may comprise the following elements:
  • the sequence calculation unit 30301 is configured to calculate state data of the target state sequence, and determine a maximum value, a minimum value, and an average value of the target state sequence.
  • the difference calculation unit 30303 is configured to calculate a first difference between the average value and the maximum value, and a second difference between the average value and the minimum value.
  • the state difference determining unit 30305 is configured to determine the state difference value based on the first difference value and the second difference value.
  • the difference judging sub-module 3032 can be configured to determine whether the state difference value is within a preset steady state range.
  • the stability determination sub-module 3034 can be configured to determine that the target head is in a stable state when the state difference is within a steady state range.
  • the movement determining sub-module 3036 can be configured to determine that the target head is in a moving state when the state difference value is not within the steady state range.
  • the processing device for rendering the image may further include a target scene generation module 311.
  • the target scene generating module 311 can be configured to render the current scene based on the scene model when the target head is in a moving state, and generate a target scene graph.
  • Since the device embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
  • Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device, or a computer program product.
  • Therefore, the embodiments of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • Moreover, the embodiments of the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • For example, FIG. 4 shows a mobile terminal on which the method according to the present invention can be implemented.
  • the mobile terminal conventionally includes a processor 410 and a computer program product or computer readable medium in the form of a memory 420.
  • the memory 420 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 420 has a memory space 430 for program code 431 for performing any of the method steps described above.
  • storage space 430 for program code may include various program code 431 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such computer program products are typically portable or fixed storage units as described with reference to FIG. 5.
  • The storage unit may have storage segments, storage spaces, etc., arranged similarly to the memory 420 in the mobile terminal of FIG. 4.
  • The program code may, for example, be compressed in an appropriate form.
  • Typically, the storage unit includes computer-readable code 431', i.e., code readable by a processor such as 410, which, when run by the mobile terminal, causes the mobile terminal to perform the steps of the methods described above.
  • The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system) and computer program product according to the embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data-processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing terminal device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A method and device for image rendering processing. The method comprises: performing state detection on a target head to generate a target state sequence; determining the state of the target head according to the target state sequence; when the target head is in a stable state, obtaining a pre-generated quasi-scene graph from a scene buffer, and using the obtained quasi-scene graph as a target scene graph; and rendering a video frame image based on the target scene graph to generate a rendered image. By detecting the state of the target head, the scene-rendering step can be skipped while the target head is in a stable state, saving image-rendering time, improving image rendering efficiency, and achieving real-time rendering.

Description

Method and device for image rendering processing
This application claims priority to Chinese Patent Application No. 201510884372.X, filed with the Chinese Patent Office on December 4, 2015 and entitled "一种渲染图像的处理方法及装置" (Method and device for image rendering processing), the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of virtual reality technology, and in particular to a processing method for rendering images and a processing device for rendering images.
Background
Virtual reality (VR), also known as 灵境 ("spiritual realm") technology, is a multi-dimensional sensory environment, visual, auditory, tactile and so on, generated in whole or in part by computer. Through auxiliary sensing devices such as head-mounted displays and data gloves, it provides a multi-dimensional human-machine interface for observing and interacting with a virtual environment, allowing people to enter the virtual environment, directly observe the internal changes of things and interact with them, producing an "immersive" sense of reality.
With the rapid development of VR technology, VR theater systems based on mobile terminals have also developed quickly. In a mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, so that the user's visual system and motion-perception system are linked and the experience feels more realistic. To obtain a good image display effect, a mobile-terminal-based VR theater system needs to render images continuously in real time, i.e., render the scene graph and the video frame images. However, in implementing the present invention, the inventor found that rendering an image involves a large amount of computation, so the rendered image cannot be generated quickly; that is, the frame rate of the images displayed by the mobile terminal is low.
Summary
The technical problem to be solved by the embodiments of the present invention is to provide a processing method for rendering images that improves image rendering efficiency and achieves real-time rendering, thereby increasing the frame rate of images displayed by a mobile terminal.
Correspondingly, the embodiments of the present invention further provide a processing device for rendering images, to ensure the implementation and application of the above method.
To solve the above problem, an embodiment of the present invention discloses a processing method for rendering images, comprising:
performing state detection on a target head to generate a target state sequence;
determining the state of the target head according to the target state sequence;
when the target head is in a stable state, obtaining a pre-generated quasi-scene graph from a scene buffer, and using the obtained quasi-scene graph as a target scene graph;
rendering a video frame image based on the target scene graph to generate a rendered image.
Correspondingly, an embodiment of the present invention discloses a processing device for rendering images, comprising:
a target state sequence generation module, configured to perform state detection on a target head and generate a target state sequence;
a state determination module, configured to determine the state of the target head according to the target state sequence;
a scene graph acquisition module, configured to obtain a pre-generated quasi-scene graph from a scene buffer when the target head is in a stable state, and use the obtained quasi-scene graph as a target scene graph;
a rendered image generation module, configured to render a video frame image based on the target scene graph to generate a rendered image.
An embodiment of the present invention provides a computer program comprising computer-readable code which, when run on a mobile terminal, causes the mobile terminal to perform the processing method for rendering images described above.
An embodiment of the present invention provides a computer-readable medium storing the above computer program.
Compared with the prior art, the embodiments of the present invention have the following advantages:
The embodiments of the present invention detect the state of the target head and, when the target head is in a stable state, obtain a pre-generated quasi-scene graph from the scene buffer, use the obtained quasi-scene graph as the target scene graph, and render the video frame image to generate the rendered image. The scene-rendering step can thus be skipped while the target head is stable, which saves image-rendering time, improves image rendering efficiency, achieves real-time rendering, and increases the frame rate of images displayed by the mobile terminal.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Clearly, the drawings described below are some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of the steps of an embodiment of a processing method for rendering images according to the present invention;
FIG. 2 is a flowchart of the steps of a preferred embodiment of a processing method for rendering images according to the present invention;
FIG. 3A is a structural block diagram of an embodiment of a processing device for rendering images according to the present invention;
FIG. 3B is a structural block diagram of a preferred embodiment of a processing device for rendering images according to the present invention;
FIG. 4 schematically shows a block diagram of a mobile terminal for carrying out the method according to the present invention; and
FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In a mobile-terminal-based VR theater system, images need to be rendered continuously in real time, i.e., the theater scene (the scene graph) and the video content (the video frame images) must be rendered. However, rendering an image involves a large amount of computation, which affects the frame rate of images displayed by the mobile terminal.
In practice, within a short while after the user starts watching, having adjusted posture, the user has already entered a relatively stable state; even if the head occasionally drifts, it fluctuates within a small range.
Therefore, addressing the above problem, one of the core ideas of the embodiments of the present invention is to monitor when the user's head is relatively stable and to cache the scene graph in that state space as a quasi-scene graph, so that the scene-rendering step can be skipped during image rendering: the pre-generated quasi-scene graph is obtained directly from the scene buffer and used as the target scene graph, and the video frame image is rendered based on the target scene graph to generate the rendered image. This improves rendering efficiency, reduces the frame delay caused by rendering images, and increases the frame rate of images displayed by the mobile terminal.
Referring to FIG. 1, a flowchart of the steps of an embodiment of a processing method for rendering images according to the present invention is shown. Specifically, the method may include the following steps:
Step 101: Perform state detection on the target head to generate a target state sequence.
In a mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, so that the user's visual system and motion-perception system are linked and the experience feels more realistic. Usually, a position tracker can track the user's head to determine its motion state. A position tracker, also called a position locator, is a device for spatial tracking and positioning; it is generally used together with other VR devices, such as data helmets, stereo glasses and data gloves, so that participants can move and rotate freely in space rather than being confined to a fixed position. A mobile-terminal-based VR system can determine the state of the user's head by detection, determine the field-of-view angle of the image based on the head state, and render the image according to the determined angle to obtain a better image display effect. It should be noted that a mobile terminal is a computer device that can be used on the move, such as a smartphone, laptop or tablet computer; the embodiments of the present invention do not limit this. The embodiments of the present invention are described in detail below taking a mobile phone as an example.
As a specific example of an embodiment of the present invention, a phone-based VR theater system can monitor the movement of the user's head through auxiliary sensing devices such as a data helmet, stereo glasses or data gloves; the monitored user head is taken as the target head, and state detection is performed on it, so that the state information of the target head relative to the phone's display can be determined. Based on the state information corresponding to the target head, the state data corresponding to the user's current state can be obtained by computation. For example, after the user puts on a data helmet, by monitoring the rotation of the user's head (the target head), the angle of the target head relative to the phone's display can be calculated, i.e., state data is generated. Specifically, the computation may use any one or more of the head orientation, movement direction, speed and similar data corresponding to the user's current state to produce the angle of the target head relative to the phone's display.
The VR system can store the generated state data in the corresponding state sequence to generate the target state sequence corresponding to the target head; for example, the angles of target head A relative to the phone's display at successive times are stored in the corresponding state sequence, forming the target state sequence LA corresponding to target head A. The target state sequence LA can store n items of state data, where n is a positive integer such as 30, 10 or 15; the embodiments of the present invention do not limit this.
In a preferred embodiment of the present invention, step 101 may include the following sub-steps:
Sub-step 1010: Acquire the data collected by the sensors and generate the state data corresponding to the target head.
Sub-step 1012: Generate the target state sequence from the generated state data.
Step 103: Determine the state of the target head according to the target state sequence.
In practice, by monitoring the state of the target head in real time, it can be judged whether the target head has entered a relatively stable state, i.e., whether it is stationary relative to the phone's display. The VR system can judge whether the target head has entered a stable state based on the state data in the target state sequence corresponding to the target head. Specifically, based on all the state data saved in the target state sequence LA, the VR system can judge the state of the target head by checking whether the range of variation of the saved state data is within a preset stable range, i.e., judge whether the target head is in a stable state or in a moving state. In the VR theater system, whether the target head is in a stable state can be judged by checking whether the state difference corresponding to the target state sequence (equivalent to the range of variation of the state data) is within a preset steady-state range; when the state difference corresponding to the target state sequence is within the preset steady-state range, the target head can be judged to be in a stable state. For example, it can be judged whether the range of variation of the angle of the target head relative to the phone's display (i.e., the state difference) is within the preset steady-state range; if so, the target head can be judged to be in a stable state, i.e., stationary relative to the phone's display; otherwise, it can be judged to be in a moving state, i.e., moving relative to the phone's display.
Optionally, step 103 may specifically include: performing statistics on the state data of the target state sequence to determine a state difference; judging whether the state difference is within a preset steady-state range; and, when the state difference is within the steady-state range, judging that the target head is in a stable state.
Step 105: When the target head is in a stable state, obtain the pre-generated quasi-scene graph from the scene buffer, and use the obtained quasi-scene graph as the target scene graph.
Specifically, during image rendering, the VR theater system can render the current scene through the scene model, generate the scene graph of the current scene, and save the generated scene graph. After the user settles into a viewing posture, a relatively stable state is entered, i.e., the target head enters the stable state. At this point, the scene graph of the current scene generated through the scene model can be used as the quasi-scene graph and stored in the scene buffer. Therefore, when the target head is in a stable state, the quasi-scene graph generated when the target head entered the stable state can be extracted directly from the scene buffer and used as the target scene graph, so that the scene-rendering step is skipped when rendering the target image and image rendering efficiency is improved.
Step 107: Render the video frame image based on the target scene graph to generate the rendered image.
In practice, during rendering, the VR theater system can take the image currently being rendered as the target image and the scene in which the target image is located as the target scene. After the scene graph of the target scene, i.e., the target scene graph, is generated, the VR theater system renders the video frame image corresponding to the target image based on the target scene graph, generating the rendered image corresponding to the target image and completing the rendering of the target image. Specifically, once the target scene graph is generated, the VR theater system can display a rectangle at a fixed position on the screen and render the video frame image onto that rectangle, i.e., generate the rendered image and complete this round of image rendering.
In the embodiments of the present invention, a mobile-terminal-based VR theater system can detect the state of the target head to generate a target state sequence, determine the state of the target head according to the target state sequence, and, when the target head is in a stable state, obtain the pre-generated quasi-scene graph from the scene buffer, use the obtained quasi-scene graph as the target scene graph, and render the video frame image to generate the rendered image, thereby skipping the scene-rendering step, improving image rendering efficiency and achieving real-time rendering.
In a preferred embodiment of the present invention, the processing method for rendering images further includes a step of generating the quasi-scene graph in advance. The step of generating the quasi-scene graph may specifically include: when the target head enters a moving state, rendering the current scene based on the scene model to generate the quasi-scene graph; and saving the generated quasi-scene graph in the scene buffer.
Specifically, during rendering, when the target head is judged to have entered the stable state, the VR theater system can invoke the scene model, render the scene that currently needs to be rendered based on the scene model, generate the scene graph of the current scene, use that scene graph as the quasi-scene graph corresponding to the stable state, and store the quasi-scene graph in the scene buffer. Therefore, while the target head is in this stable state, the VR theater system can extract the quasi-scene graph corresponding to the stable state directly from the scene buffer and use it as the target scene graph, so that the scene-rendering step is skipped while the target head is stable; this saves scene-rendering time, roughly 50% or more.
Clearly, by skipping the scene-rendering step, the embodiments of the present invention reduce the time needed to render an image, i.e., they reduce rendering latency and increase the frame rate of images displayed by the mobile terminal, thereby solving the problem of user dizziness caused by rendering latency; that is, a better image display effect is obtained and the user experience is improved.
Referring to FIG. 2, a flowchart of the steps of a preferred embodiment of a processing method for rendering images according to the present invention is shown. Specifically, the method may include the following steps:
Step 201: Acquire the data collected by the sensors and generate the state data corresponding to the target head.
In practice, VR devices such as data helmets, stereo glasses and data gloves monitor the target head, usually collecting data through sensors. Specifically, the phone's attitude (i.e., the screen orientation) can be detected by the gyroscope, and the magnitude and direction of the acceleration applied to the phone can be detected by the accelerometer, where the screen orientation is equivalent to the head orientation. For example, after the head orientation is determined, the phone-based VR system can compute the field-of-view angles of the left and right eyes from parameters such as the vertical and horizontal visual ranges of the two eyes, and the angle of the target head relative to the display can then be determined based on the field-of-view angles of the left and right eyes, i.e., the state data is generated.
Step 203: Generate the target state sequence from the generated state data.
The VR system can store the generated state data into the corresponding state sequence in order, generating the target state sequence corresponding to the target head; for example, the angles N1, N2, N3 ... Nn of target head A relative to the phone's display at successive instants are stored in order into the corresponding state sequence LA, i.e., the target state sequence LA corresponding to target head A is generated. To keep image rendering efficient and the computed field-of-view angle of the target scene accurate, the target state sequence LA is preferably set as a sequence that can hold 15 items of state data N, i.e., the 15 most recently generated items of state data N are stored in the target state sequence LA.
Specifically, within one second the sensors can collect multiple samples, so the phone-based VR system can generate multiple items of state data. To improve the accuracy of the state data, the VR system can aggregate the state data generated within every X seconds, compute the average N of all state data within each X-second window, and save the average N, i.e., store the average N into the sequence, where X is an integer such as 1, 2, 3 or 4. For example, storing the average N of the state data computed every 4 seconds into a sequence holding 15 items of state data generates the target state sequence LA.
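A short usage sketch of the rolling sequence just described, continuing the hypothetical Python classes shown earlier in this document (all names are illustrative, not from the patent):

```python
la = TargetStateSequence(capacity=15, window_seconds=4)

# Suppose the sensor-derived head-to-screen angles within one 4-second window are:
for angle in [30.2, 30.5, 30.1, 29.8]:
    la.add_sample(angle)
la.close_window()          # stores their average N into the sequence LA

print(list(la.sequence))   # -> [30.15]
```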
Step 205: Perform statistics on the state data of the target state sequence to determine the state difference.
In a preferred embodiment of the present invention, step 205 may include the following sub-steps:
Sub-step 2050: Compute over the state data of the target state sequence to determine the maximum, minimum and average of the target state sequence.
In practice, all state data in the target state sequence LA can be compared to determine the minimum S and maximum B of all state data in the sequence, and the average M corresponding to all state data of the target state sequence LA can be obtained by computation.
Sub-step 2052: Compute the first difference between the average and the maximum, and the second difference between the average and the minimum.
Specifically, the difference between the maximum B in the target sequence LA and the average M can be obtained by computation and labeled the first difference; likewise, the difference between the minimum S in the sequence LA and the average M can be obtained and labeled the second difference.
Sub-step 2054: Determine the state difference based on the first difference and the second difference.
The VR system may use the first difference or the second difference as the state difference corresponding to the target head; preferably, the larger of the first difference and the second difference is selected as the state difference corresponding to the target head. Specifically, the first difference is compared with the second difference: when the first difference is greater than the second difference, the first difference is used as the state difference; otherwise, the second difference is used as the state difference.
Step 207: Judge whether the state difference is within the preset steady-state range.
When the state difference is within the steady-state range, it can be judged that the target head is in a stable state, and step 209 is executed; when the state difference is not within the steady-state range, it can be judged that the target head is in a moving state, and step 211 is executed.
In practice, the VR theater system can preset a steady-state range used to judge whether the target head has entered a stable state, i.e., whether the target head is in a stable state. Specifically, the state of the target head can be determined by judging whether the state difference corresponding to the target head is within the preset steady-state range.
As in the example above, the state data is the angle of the target head relative to the phone's display, and the state difference is equivalent to the angle through which the target head has moved relative to the display. The phone-based VR system can preset the stability threshold to 3 degrees, i.e., the preset steady-state range is 0 to 3 degrees; whether the target head has entered a relatively stable state can be determined by whether the state difference corresponding to the target head is less than 3 degrees. When the state difference corresponding to the target head is less than 3 degrees, target head A is judged to be in a stable state and step 209 is executed; when the state difference is not less than 3 degrees, target head A is judged to be in a moving state, i.e., it exits the stable state, the system enters normal rendering mode, and step 211 is executed.
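As a worked illustration of this example (numbers invented for clarity, not taken from the patent): if the sequence LA holds averages whose maximum is B = 31.0 degrees, minimum is S = 29.5 degrees and mean is M = 30.0 degrees, then the first difference is B - M = 1.0 degree and the second difference is M - S = 0.5 degrees, so the state difference is 1.0 degree; since 1.0 is less than the 3-degree threshold, the target head is judged stable and step 209 is executed.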
Step 209: Obtain the pre-generated quasi-scene graph from the scene buffer, and use the obtained quasi-scene graph as the target scene graph.
When the target head is in a stable state, the VR theater system can obtain the quasi-scene graph corresponding to the stable state directly from the scene buffer and use the obtained quasi-scene graph as the target scene graph of the current theater scene; the target scene graph of the target scene is thus produced without going through the scene model, skipping the scene-rendering step for the current scene, i.e., step 211 is not executed and execution jumps directly to step 213.
Step 211: Render the current scene based on the scene model to generate the target scene graph.
To obtain a better image display effect and improve the user experience, when the target head is in a moving state, the current theater scene (i.e., the current scene) needs to be rendered based on the scene model to generate the scene graph of the current scene. Specifically, when rendering the current scene, the VR system can take the current scene as the target scene and generate the target scene graph by invoking the scene model to render the target scene.
Step 213: Render the video frame image based on the target scene graph to generate the rendered image.
Specifically, the VR system renders the video frame image corresponding to the target scene into the rectangle formed on the screen by the target scene graph, generating the rendered image corresponding to the target scene, i.e., displaying the rendered image on the display.
In the embodiments of the present invention, by monitoring the state of the target head, when the target head is in the stable state the quasi-scene graph corresponding to the stable state can be extracted directly from the scene buffer and used as the target scene graph; this skips the scene-rendering step, improves image rendering efficiency and reduces rendering latency, thereby solving the problem of user dizziness caused by rendering latency, i.e., a better image display effect is obtained and the user experience is improved.
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of action combinations, but those skilled in the art should know that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to FIG. 3A, a structural block diagram of an embodiment of a processing device for rendering images according to the present invention is shown, which may specifically include the following modules:
a state sequence generation module 301, which can be configured to perform state detection on the target head and generate a target state sequence;
a state determination module 303, which can be configured to determine the state of the target head according to the target state sequence;
a scene graph acquisition module 305, which can be configured to obtain the pre-generated quasi-scene graph from the scene buffer when the target head is in a stable state, and use the obtained quasi-scene graph as the target scene graph;
a rendered image generation module 307, which can be configured to render the video frame image based on the target scene graph to generate the rendered image.
On the basis of FIG. 3A, optionally, the processing device for rendering images may further include a scene graph generation module 309; see FIG. 3B.
The scene graph generation module 309 can be configured to generate the quasi-scene graph in advance. Optionally, the scene graph generation module 309 may include the following sub-modules:
a scene graph generation sub-module 3090, configured to render the current scene based on the scene model when the target head enters a moving state, generating the quasi-scene graph;
a scene graph saving sub-module 3092, configured to save the generated quasi-scene graph in the scene buffer.
In a preferred embodiment of the present invention, the state sequence generation module 301 may include the following sub-modules:
a state data generation sub-module 3010, which can be configured to acquire the data collected by the sensors and generate the state data corresponding to the target head;
a state sequence generation sub-module 3012, which can be configured to generate the target state sequence from the generated state data.
Optionally, the state determination module 303 may include the following sub-modules:
a state difference determination sub-module 3030, which can be configured to perform statistics on the state data of the target state sequence to determine the state difference.
In a preferred embodiment of the present invention, the state difference determination sub-module may include the following units:
a sequence computation unit 30301, configured to compute over the state data of the target state sequence and determine the maximum, minimum and average of the target state sequence;
a difference computation unit 30303, configured to compute the first difference between the average and the maximum, and the second difference between the average and the minimum;
a state difference determination unit 30305, configured to determine the state difference based on the first difference and the second difference.
A difference judgment sub-module 3032 can be configured to judge whether the state difference is within the preset steady-state range.
A stability judgment sub-module 3034 can be configured to judge that the target head is in a stable state when the state difference is within the steady-state range.
A movement judgment sub-module 3036 can be configured to judge that the target head is in a moving state when the state difference is not within the steady-state range.
The processing device for rendering images may further include a target scene generation module 311, which can be configured to render the current scene based on the scene model when the target head is in a moving state, generating the target scene graph.
Since the device embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may refer to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
For example, FIG. 4 shows a mobile terminal on which the method according to the present invention can be implemented. The mobile terminal conventionally includes a processor 410 and a computer program product or computer-readable medium in the form of a memory 420. The memory 420 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk or a ROM. The memory 420 has a storage space 430 for program code 431 for performing any of the method steps described above. For example, the storage space 430 for program code may include individual program codes 431 for implementing the various steps of the above methods. These program codes can be read from, or written into, one or more computer program products. Such computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards or floppy disks, and are typically portable or fixed storage units as described with reference to FIG. 5. The storage unit may have storage segments or storage spaces arranged similarly to the memory 420 in the mobile terminal of FIG. 4. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code 431', i.e., code that can be read by a processor such as 410; when run by the mobile terminal, the code causes the mobile terminal to perform the steps of the methods described above.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data-processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing terminal device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing terminal device, so that a series of operational steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or terminal device that includes the element.
The processing method for rendering images and the processing device for rendering images provided by the present invention have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (14)

  1. A processing method for rendering images, characterized by comprising:
    performing state detection on a target head to generate a target state sequence;
    determining the state of the target head according to the target state sequence;
    when the target head is in a stable state, obtaining a pre-generated quasi-scene graph from a scene buffer, and using the obtained quasi-scene graph as a target scene graph;
    rendering a video frame image based on the target scene graph to generate a rendered image.
  2. The method according to claim 1, characterized in that performing state detection on the target head to generate a target state sequence comprises:
    acquiring data collected by sensors to generate state data corresponding to the target head;
    generating the target state sequence from the generated state data.
  3. The method according to claim 2, characterized in that determining the state of the target head according to the target state sequence comprises:
    performing statistics on the state data of the target state sequence to determine a state difference;
    judging whether the state difference is within a preset steady-state range;
    when the state difference is within the steady-state range, judging that the target head is in a stable state.
  4. The method according to claim 3, characterized in that performing statistics on the state data of the target state sequence to determine a state difference comprises:
    computing the state data of the target state sequence to determine the maximum, minimum and average of the target state sequence;
    computing a first difference between the average and the maximum, and a second difference between the average and the minimum;
    determining the state difference based on the first difference and the second difference.
  5. The method according to claim 3, characterized in that determining the state of the target head according to the target state sequence further comprises: when the state difference is not within the steady-state range, judging that the target head is in a moving state;
    the method further comprising: when the target head is in a moving state, rendering the current scene based on a scene model to generate a target scene graph.
  6. The method according to any one of claims 1 to 5, characterized in that the method further comprises a step of generating the quasi-scene graph in advance, the step comprising:
    when the target head enters a moving state, rendering the current scene based on a scene model to generate the quasi-scene graph;
    saving the generated quasi-scene graph in the scene buffer.
  7. A processing device for rendering images, characterized by comprising:
    a target state sequence generation module, configured to perform state detection on a target head and generate a target state sequence;
    a state determination module, configured to determine the state of the target head according to the target state sequence;
    a scene graph acquisition module, configured to obtain a pre-generated quasi-scene graph from a scene buffer when the target head is in a stable state, and use the obtained quasi-scene graph as a target scene graph;
    a rendered image generation module, configured to render a video frame image based on the target scene graph to generate a rendered image.
  8. The device according to claim 7, characterized in that the state sequence generation module comprises:
    a state data generation sub-module, configured to acquire data collected by sensors and generate state data corresponding to the target head;
    a state sequence generation sub-module, configured to generate the target state sequence from the generated state data.
  9. The device according to claim 8, characterized in that the state determination module comprises:
    a state difference determination sub-module, configured to perform statistics on the state data of the target state sequence to determine a state difference;
    a difference judgment sub-module, configured to judge whether the state difference is within a preset steady-state range;
    a stability judgment sub-module, configured to judge that the target head is in a stable state when the state difference is within the steady-state range.
  10. The device according to claim 9, characterized in that the state difference determination sub-module comprises:
    a sequence computation unit, configured to compute the state data of the target state sequence and determine the maximum, minimum and average of the target state sequence;
    a difference computation unit, configured to compute a first difference between the average and the maximum, and a second difference between the average and the minimum;
    a state difference determination unit, configured to determine the state difference based on the first difference and the second difference.
  11. The device according to claim 9, characterized in that the state determination module further comprises a movement judgment sub-module, configured to judge that the target head is in a moving state when the state difference is not within the steady-state range;
    the device further comprising a target scene generation module, configured to render the current scene based on a scene model when the target head is in a moving state, generating a target scene graph.
  12. The device according to any one of claims 7 to 11, characterized in that the device further comprises a scene graph generation module, configured to generate the quasi-scene graph in advance, the scene graph generation module comprising:
    a scene graph generation sub-module, configured to render the current scene based on the scene model when the target head enters a moving state, generating the quasi-scene graph;
    a scene graph saving sub-module, configured to save the generated quasi-scene graph in the scene buffer.
  13. A computer program comprising computer-readable code which, when run on a mobile terminal, causes the mobile terminal to perform the processing method for rendering images according to any one of claims 1 to 6.
  14. A computer-readable medium storing the computer program according to claim 13.
PCT/CN2016/089266 2015-12-04 2016-07-07 Method and device for image rendering processing WO2017092332A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/249,738 US20170163958A1 (en) 2015-12-04 2016-08-29 Method and device for image rendering processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510884372.X 2015-12-04
CN201510884372.XA CN105979360A (zh) Method and device for image rendering processing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/249,738 Continuation US20170163958A1 (en) 2015-12-04 2016-08-29 Method and device for image rendering processing

Publications (1)

Publication Number Publication Date
WO2017092332A1 (zh) 2017-06-08

Family

ID=56988262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089266 WO2017092332A1 (zh) 2015-12-04 2016-07-07 Method and device for image rendering processing

Country Status (3)

Country Link
US (1) US20170163958A1 (zh)
CN (1) CN105979360A (zh)
WO (1) WO2017092332A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106385625A (zh) * 2016-09-29 2017-02-08 宇龙计算机通信科技(深圳)有限公司 Image intermediate frame generation method and device
CN106990838B (zh) * 2017-03-16 2020-11-13 惠州TCL移动通信有限公司 Method and system for locking displayed content in virtual reality mode
CN107018336B (zh) * 2017-04-11 2018-11-09 腾讯科技(深圳)有限公司 Image processing method and device, and video processing method and device
CN109377503A (zh) * 2018-10-19 2019-02-22 珠海金山网络游戏科技有限公司 Image updating method and device, computing device and storage medium
CN109725729B (zh) * 2019-01-02 2021-02-09 京东方科技集团股份有限公司 Image processing method, image control device, display control device and display device
CN109727305B (zh) * 2019-01-02 2024-01-12 京东方科技集团股份有限公司 Virtual reality system picture processing method, device and storage medium
CN112711519B (zh) * 2019-10-25 2023-03-14 腾讯科技(深圳)有限公司 Picture smoothness detection method and device, storage medium and computer equipment
CN110930307B (zh) * 2019-10-31 2022-07-08 江苏视博云信息技术有限公司 Image processing method and device
TWI715474B (zh) * 2020-03-25 2021-01-01 宏碁股份有限公司 Method for dynamically adjusting camera configuration, head-mounted display and computer device
CN111643901B (zh) * 2020-06-02 2023-07-21 三星电子(中国)研发中心 Method and device for intelligent rendering of cloud game interfaces
CN112099712B (zh) * 2020-09-17 2022-06-07 北京字节跳动网络技术有限公司 Face image display method and device, electronic device and storage medium
CN113852841A (zh) * 2020-12-23 2021-12-28 上海飞机制造有限公司 Visual scene construction method, device, equipment, medium and system
CN113205079B (zh) * 2021-06-04 2023-09-05 北京奇艺世纪科技有限公司 Face detection method and device, electronic device and storage medium
CN114286163B (zh) * 2021-12-24 2024-02-13 苏州亿歌网络科技有限公司 Sequence image recording method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020113791A1 (en) * 2001-01-02 2002-08-22 Jiang Li Image-based virtual reality player with integrated 3D graphics objects
WO2013173728A1 (en) * 2012-05-17 2013-11-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display
CN104599243A (zh) * 2014-12-11 2015-05-06 北京航空航天大学 Virtual-real fusion method for multiple video streams and a three-dimensional scene
CN104740873A (zh) * 2015-04-13 2015-07-01 四川天上友嘉网络科技有限公司 Image rendering method in a game
CN105117111A (zh) * 2015-09-23 2015-12-02 小米科技有限责任公司 Rendering method and device for virtual reality interaction frames

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080030429A1 (en) * 2006-08-07 2008-02-07 International Business Machines Corporation System and method of enhanced virtual reality
CN103606182B (zh) * 2013-11-19 2017-04-26 华为技术有限公司 Image rendering method and device

Also Published As

Publication number Publication date
US20170163958A1 (en) 2017-06-08
CN105979360A (zh) 2016-09-28

Similar Documents

Publication Publication Date Title
WO2017092332A1 (zh) Method and device for image rendering processing
WO2017092334A1 (zh) Image rendering processing method and device
KR101950641B1 (ko) Scene analysis for improved eye tracking
WO2017092339A1 (zh) Processing method and device for collecting sensor data
US9696859B1 (en) Detecting tap-based user input on a mobile device based on motion sensor data
US9928655B1 (en) Predictive rendering of augmented reality content to overlay physical structures
EP3195595B1 (en) Technologies for adjusting a perspective of a captured image for display
CN109741463B (zh) Rendering method, device and equipment for virtual reality scenes
TWI687901B (zh) Security monitoring method and device for virtual reality equipment, and virtual reality equipment
JP7008730B2 (ja) Shadow generation for image content inserted into an image
US8803800B2 (en) User interface control based on head orientation
JP2017516250A (ja) World-locked display quality feedback
US11490217B2 (en) Audio rendering for augmented reality
US20160140731A1 (en) Motion analysis method and apparatus
US11720996B2 (en) Camera-based transparent display
TW201443700A (zh) Orientation detection techniques for automated device displays
US20170154467A1 (en) Processing method and device for playing video
KR20180013892A (ko) Reactive animation for virtual reality
KR20150038877A (ko) User interfacing apparatus and method using events corresponding to user input
EP3757878A1 (en) Head pose estimation
Kowalski et al. Holoface: Augmenting human-to-human interactions on hololens
US20220335638A1 (en) Depth estimation using a neural network
WO2021112839A1 (en) Snapping range for augmented reality objects
US20200057493A1 (en) Rendering content
WO2018000610A1 (zh) Automatic playback method based on image type judgment, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869645

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16869645

Country of ref document: EP

Kind code of ref document: A1