WO2017046956A1 - Video system - Google Patents

Video system

Info

Publication number
WO2017046956A1
WO2017046956A1 (application PCT/JP2015/076765)
Authority
WO
WIPO (PCT)
Prior art keywords
video
unit
gazing point
user
communication
Prior art date
Application number
PCT/JP2015/076765
Other languages
French (fr)
Japanese (ja)
Inventor
ロックラン ウィルソン
圭一 瀬古
由香 小島
大和 金子
Original Assignee
フォーブ インコーポレーテッド
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by フォーブ インコーポレーテッド filed Critical フォーブ インコーポレーテッド
Priority to CN201580083269.3A priority Critical patent/CN108141559B/en
Priority to PCT/JP2015/076765 priority patent/WO2017046956A1/en
Priority to KR1020187008945A priority patent/KR101971479B1/en
Priority to US15/267,917 priority patent/US9978183B2/en
Publication of WO2017046956A1 publication Critical patent/WO2017046956A1/en
Priority to US15/963,476 priority patent/US20180247458A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/66: Transforming electric information into light information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/366: Image reproducers using viewer tracking
    • H04N13/383: Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81: Monomedia components thereof
    • H04N21/816: Monomedia components thereof involving special video data, e.g. 3D video

Definitions

  • the present invention relates to a video system, and more particularly, to a video system including a head mounted display and a video generation device.
  • Patent Document 1 discloses an image generation apparatus and an image generation method capable of detecting a user's movement and displaying an image corresponding to the user's movement on a head-mounted display.
  • the head mounted display can display an image corresponding to the user's line-of-sight direction on the screen.
  • the video displayed by the head mounted display is a moving image.
  • the amount of data is large, and if the video is transmitted as it is from the video generation device to the head-mounted display, the update of the image may be delayed and the video may be interrupted.
  • the number of high-quality monitors has increased, and it is desired to process a large amount of video data.
  • the video generation device and the head mounted display may be integrated.
  • because the head mounted display is worn by the user, it needs to be compact, and it is difficult to build the video generation device into its housing. In practice, therefore, the video generation device and the head mounted display are connected wirelessly or the like.
  • because the amount of video data is large, the provision of video to the user may stall.
  • the present invention has been made in view of such problems, and an object thereof is to provide a technique for a video system that can suppress stagnation of communication between a head mounted display and a video generation device.
  • the video generation device includes a second communication unit that receives an image captured by the imaging unit from the head mounted display and transmits video to the head mounted display, a gazing point acquisition unit that acquires the user's gazing point on the video based on the image captured by the imaging unit, and a calculation unit that sets a predetermined area based on the gazing point acquired by the gazing point acquisition unit and generates, for the area outside the predetermined area, a video with a smaller data amount per unit pixel than the video calculated for the predetermined area.
  • the video generation device may further include a communication determination unit that determines the communication environment between the first communication unit and the second communication unit, and the calculation unit may reduce the data amount of the video when the communication environment is bad compared with when it is good.
  • the communication determination unit may determine the communication environment based on information including the latest data of communication parameters, the parameters including at least one of radio wave intensity, communication speed, data loss rate, throughput, noise status, or physical distance from the router.
  • the video generation device may further include a gazing point movement acquisition unit that acquires the movement of the user's gazing point based on the gazing point acquired by the gazing point acquisition unit, and the calculation unit may change at least one of the size or the shape of the predetermined area according to the movement of the gazing point.
  • the calculation unit may set the shape of the predetermined area to a shape having a long axis and a short axis, or a shape having a long side and a short side, and may set the long-axis or long-side direction of the predetermined area according to the direction of movement of the gazing point.
  • the calculation unit may generate an image in which the data amount per unit pixel number is continuously reduced outside the predetermined area as the distance from the gazing point increases.
  • any combination of the above constituent elements, and any conversion of the expression of the present invention between a method, an apparatus, a system, a recording medium, a computer program, and the like, are also effective as aspects of the present invention.
  • FIG. 7A is a schematic diagram illustrating another example of the relationship between the X coordinate of the video display area and the data amount per unit pixel.
  • FIG. 7B is a schematic diagram showing another example of the relationship between the X coordinate of the video display area and the data amount per unit pixel. FIG. 8 is a sequence diagram showing an example of processing in the video system according to the embodiment. FIG. 9 is a flowchart showing an example of processing related to the communication determination according to the embodiment.
  • FIG. 1 is a diagram schematically showing an overview of a video system 1 according to an embodiment.
  • the video system 1 according to the embodiment includes a head mounted display 100 and a video generation device 200. As shown in FIG. 1, the head mounted display 100 is used by being mounted on the head of a user 300.
  • the video generation device 200 generates a video that the head mounted display 100 presents to the user.
  • the video generation device 200 is a device capable of reproducing video, such as a stationary game machine, a portable game machine, a PC (Personal Computer), a tablet, a smartphone, a phablet, a video player, or a television.
  • the video generation device 200 is connected to the head mounted display 100 wirelessly or by wire. In the example illustrated in FIG. 1, the video generation device 200 is connected to the head mounted display 100 wirelessly.
  • the wireless connection between the video generation apparatus 200 and the head mounted display 100 can be realized by using a wireless communication technology such as known Wi-Fi (registered trademark) or Bluetooth (registered trademark).
  • transmission of video between the head mounted display 100 and the video generation device 200 is performed according to standards such as Miracast (trademark), WiGig (trademark), and WHDI (trademark).
  • the head mounted display 100 includes a housing 150, a wearing tool 160, and headphones 170.
  • the housing 150 accommodates an image display system such as an image display element for presenting video to the user 300, and a wireless transmission module such as a Wi-Fi module or a Bluetooth (registered trademark) module (not shown).
  • the wearing tool 160 wears the head mounted display 100 on the user's 300 head.
  • the wearing tool 160 can be realized by, for example, a belt or a stretchable band.
  • the housing 150 is arranged at a position that covers the eyes of the user 300. For this reason, when the user 300 wears the head mounted display 100, the field of view of the user 300 is blocked by the housing 150.
  • the headphone 170 outputs the audio of the video reproduced by the video generation device 200.
  • the headphones 170 may not be fixed to the head mounted display 100.
  • the user 300 can freely attach and detach the headphones 170 even when the head mounted display 100 is worn using the wearing tool 160.
  • FIG. 2 is a block diagram illustrating an example of a functional configuration of the video system 1 according to the embodiment.
  • the head mounted display 100 includes a video presentation unit 110, an imaging unit 120, and a first communication unit 130.
  • the video generation device 200 includes a second communication unit 210, a communication determination unit 220, a gazing point acquisition unit 230, a gazing point movement acquisition unit 240, a calculation unit 250, and a storage unit 260.
  • the second communication unit 210 is connected to the head mounted display 100 wirelessly or by wire.
  • the second communication unit 210 receives an image captured by the imaging unit 120 from the head mounted display 100 and transmits video to the head mounted display 100.
  • “video” refers to video generated by the calculation unit 250 described later.
  • the gaze point acquisition unit 230 acquires the user's gaze point P on the video based on the image captured by the imaging unit 120.
  • the position of the gazing point P can be acquired using, for example, a known gaze detection technique.
  • the gazing point acquisition unit 230 acquires the relationship between the image display position and the reference point and moving point of the user's eyes as calibration information in advance.
  • the imaging unit 120 captures an image of the eye of the user 300 as in the calibration, and the gazing point acquisition unit 230 acquires positional information of the reference point and the moving point based on the image.
  • the gazing point acquisition unit 230 estimates the user's gazing point P on the video.
  • the “reference point” indicates, for example, a point that moves little relative to the head mounted display, such as the inner corner of the eye, and the “moving point” indicates the iris or pupil, which moves depending on where the user 300 is looking.
  • gaze point P indicates a user's gaze point estimated by the gaze point acquisition unit 230.
  • FIGS. 4A and 4B are diagrams illustrating examples of the predetermined area A set by the calculation unit 250.
  • a case where the calculation unit 250 sets the region whose distance from the gazing point P is a or less as the predetermined area A will be described with reference to FIG. 4A.
  • the predetermined area A may be any closed region; FIG. 4A shows a circular example and FIG. 4B a rectangular example.
  • when the predetermined area A has a simple shape, the computation required for the calculation unit 250 to set the predetermined area A as the gazing point P moves can be reduced.
  • the visual acuity of the human eye is highest in the central vision region including the fovea and decreases rapidly outside the fovea; it is known that the human eye can see fine detail only within a range of about 5° around the fovea. Therefore, the calculation unit 250 may approximate the distance between the display pixels of the head mounted display 100 and the fovea of the user's 300 eye, and set as the predetermined area A the range on the video display area corresponding to the central 5° region, with the gazing point P of the user 300 as a reference.
  • the specific size of the predetermined area A as seen by the user 300 may be determined by experiment, in view of the optical system employed by the liquid crystal monitor of the head mounted display 100 and the human visual characteristics described above (for example, central vision, age, viewing angle, etc.).
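As a sketch of this approximation, the pixel radius subtended by the central 5° can be estimated from the optical eye-to-display distance and the display's pixel density. All concrete numbers here are illustrative assumptions; the patent leaves the actual size of the predetermined area A to experiment.

```python
import math

def foveal_radius_px(eye_to_display_mm: float, pixels_per_mm: float,
                     half_angle_deg: float = 5.0) -> int:
    """Approximate radius (in pixels) of the region covered by the central
    +/- half_angle_deg of vision, given the optical distance from the eye
    (fovea) to the display surface and the display's pixel density."""
    radius_mm = eye_to_display_mm * math.tan(math.radians(half_angle_deg))
    return round(radius_mm * pixels_per_mm)

# e.g. a display whose optics place it 40 mm from the eye, at 20 px/mm:
r = foveal_radius_px(40.0, 20.0)
```

In practice this computed radius would only be a starting point for the experimental tuning the passage describes.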
  • FIG. 5 is a diagram illustrating a graph showing the relationship between the X coordinate of the video display area and the data amount D per unit pixel number.
  • the horizontal axis of the graph corresponds to the X coordinate of the video display area
  • the vertical axis of the graph represents the data amount D per unit pixel number on a line parallel to the X axis including the gazing point P.
  • FIG. 5 shows an example in which the calculation unit 250 sets the range of the distance a from the gazing point P as the predetermined area A.
  • the calculation unit 250 extracts video data of a video to be presented to the user next from video data stored in the storage unit 260.
  • the calculation unit 250 may also acquire video data from outside the video generation device 200.
  • the calculation unit 250 processes the video data so that the data amount D per unit pixel is reduced where the X coordinate is less than (x - a) or greater than (x + a).
  • as a method for reducing the data amount, a known method such as compression that drops the high-frequency components of the video may be used. As a result, a video whose overall data amount during communication is reduced can be obtained.
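The stepwise profile of FIG. 5 can be sketched as follows. This is a minimal illustration, not the patent's actual pipeline: it uses block averaging as a crude low-pass filter (removing high-frequency components), assumes a grayscale frame whose sides are divisible by the block size, and uses a circular predetermined area A of radius a.

```python
import numpy as np

def foveate(frame: np.ndarray, gaze_xy: tuple, a: int, block: int = 4) -> np.ndarray:
    """Keep full detail within distance `a` of the gazing point P; elsewhere
    substitute block-averaged pixels, a crude low-pass filter that removes
    high-frequency components and so shrinks the compressible data amount."""
    h, w = frame.shape
    # Block-average the whole frame (assumes h and w divisible by `block`).
    coarse = frame.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    # Circular predetermined area A of radius `a` around the gazing point.
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= a * a
    return np.where(inside, frame, coarse)
```

A real implementation would instead vary the quantization or compression rate per region inside the codec, but the effect on per-pixel data amount is analogous.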
  • the communication determination unit 220 determines a communication environment between the first communication unit 130 and the second communication unit 210.
  • the calculation unit 250 may reduce the data amount of the video when the communication environment is bad as compared with when the communication environment is good.
  • the calculation unit 250 may reduce the data amount D per unit pixel number in the external area B according to the determination result of the communication environment. For example, the communication environment is divided into three stages of C1, C2, and C3 from the better one, and the storage unit 260 stores the values of the data compression ratio used in each as E1, E2, and E3.
  • the communication determination unit 220 determines which of C1 to C3 corresponds to the communication environment.
  • the calculation unit 250 acquires the value of the data compression rate corresponding to the determination result from the storage unit 260 and compresses the video data in the external area B with the acquired data compression rate to generate a video.
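A minimal sketch of this lookup follows, with stand-in values for the stored ratios E1 to E3; the patent names the grades C1 to C3 and ratios E1 to E3 but gives no concrete numbers.

```python
# Illustrative stand-ins for the compression ratios E1-E3 held by the
# storage unit; expressed here as the fraction of data kept.
RATIO_BY_GRADE = {"C1": 0.8, "C2": 0.5, "C3": 0.2}

def external_area_budget(raw_bytes: int, grade: str) -> int:
    """Data budget for the external area B after applying the compression
    ratio stored for the communication-environment grade (C1 = best)."""
    return int(raw_bytes * RATIO_BY_GRADE[grade])
```

The worse the judged environment, the smaller the budget the calculation unit allots to the external area B, while the predetermined area A is left untouched.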
  • since the data amount of the video transmitted from the video generation device 200 to the head mounted display 100 is adjusted according to the communication environment, stagnation of the video due to transfer delays and the like can be avoided.
  • since the image quality does not change in the vicinity of the gazing point P of the user 300, the discomfort given to the user 300 can be suppressed even when the data amount is reduced.
  • the video reflecting the information of the gazing point P of the user 300 captured by the imaging unit 120 can be provided to the user without delay.
  • the communication determination unit 220 may monitor communication parameters and determine whether the communication environment is good or bad based on those parameters.
  • the communication determination unit 220 transmits a message for inquiring the communication status to the head mounted display 100.
  • the first communication unit 130 receives this message, acquires the communication parameters on the head mounted display 100 side, and transmits the acquired communication parameters to the video generation device 200.
  • the second communication unit 210 acquires communication parameters on the video generation device 200 side. Thereby, the communication determination unit 220 may determine whether the communication environment is good or bad based on the communication parameter received from the head mounted display 100 and the communication parameter acquired by the second communication unit 210.
  • the information including the latest data may be, for example, a value calculated by the communication determination unit 220 as a moving average over a certain number of past observed values.
  • the calculation unit 250 can generate an image with a data amount suitable for the communication environment at that time. Therefore, even in a place where the communication environment is bad or easily changed, the frame rate of the video presented to the user can be maintained, and a video that does not feel strange when viewed by the user can be provided.
  • the gazing point movement acquisition unit 240 may acquire the movement of the gazing point P of the user 300 based on the gazing point P acquired by the gazing point acquisition unit 230.
  • the calculation unit 250 changes at least one of the size or the shape of the predetermined area A according to the movement of the gazing point P acquired by the gazing point movement acquisition unit 240.
  • FIG. 6 is a diagram illustrating an example of the movement of the gazing point P acquired by the gazing point movement acquisition unit 240 according to the embodiment.
  • FIG. 6 shows a state where the user's gazing point P has moved from P1 to P2.
  • the calculation unit 250 sets a predetermined region A with reference to the gazing point P acquired by the gazing point acquisition unit 230 and the movement of the gazing point P acquired by the gazing point movement acquisition unit 240.
  • the gazing point P is at the position P2, and the direction of movement of the gazing point is indicated by an arrow.
  • the predetermined area A does not need to be centered on the gazing point P. For example, as shown in FIG. 6, instead of setting the boundary equidistant from the gazing point P2, the calculation unit 250 may set the boundary of the predetermined area A so that the area extends widely in the direction in which the gazing point P is moving.
  • the head mounted display 100 can provide the user 300 with an image that maintains the image quality in a wide range including the direction in which the user 300 turns the line of sight.
  • the predetermined area A may be circular or rectangular as shown in FIGS. 4 (a) and 4 (b).
  • the calculation unit 250 may set the shape of the predetermined area A to a shape having a long axis and a short axis, or a long side and a short side, and may set the long-axis or long-side direction of the predetermined area according to the direction of movement of the gazing point P.
  • for example, the calculation unit 250 sets the shape of the predetermined area A to an ellipse based on the movement of the gazing point P acquired by the gazing point movement acquisition unit 240, and may set the major-axis direction of the ellipse to the direction of movement of the gazing point P.
  • the gazing point P does not need to be the center of the ellipse; the positional relationship between the gazing point P and the ellipse may be set so that the ellipse extends widely on the side toward which the gazing point P is moving.
  • as a result, the video presentation unit 110 can display video whose image quality is maintained over a wider range in the direction in which the gazing point P moves than in directions in which it moves little.
  • the shape of the predetermined region A set by the calculation unit 250 is not limited to the above-described ellipse as long as it has a long axis and a short axis, or a long side and a short side.
  • if the calculation unit 250 sets the shape of the predetermined area A to a rectangle, then when a compression method that compresses a plurality of pixels as one block is used, the computation for blocks that straddle the boundary of the predetermined area A is simpler than when the predetermined area A is an ellipse.
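The motion-elongated elliptical area described above can be sketched as a simple membership test. The half-axis lengths a and b and the forward shift `lead` are illustrative assumptions, not values from the patent.

```python
import math

def in_predetermined_area(px, py, gaze, motion,
                          a=100.0, b=60.0, lead=0.5):
    """True if pixel (px, py) lies inside an elliptical predetermined area A
    whose major axis (half-length `a`) follows the direction of gazing-point
    movement, and whose centre is shifted forward by `lead * a` so the area
    extends further on the side toward which the gazing point is moving."""
    mx, my = motion
    norm = math.hypot(mx, my) or 1.0
    ux, uy = mx / norm, my / norm            # unit vector of gaze motion
    cx, cy = gaze[0] + lead * a * ux, gaze[1] + lead * a * uy
    dx, dy = px - cx, py - cy
    along = dx * ux + dy * uy                # coordinate along the major axis
    across = -dx * uy + dy * ux              # coordinate along the minor axis
    return (along / a) ** 2 + (across / b) ** 2 <= 1.0
```

With gaze at the origin and motion to the right, a point 140 px ahead of the gazing point is inside the area while a point 60 px behind it is not, matching the asymmetry described for FIG. 6.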
  • the calculation unit 250 may generate an image in which the data amount D per unit pixel number is changed according to the distance from the gazing point P outside the predetermined area A.
  • FIG. 7A is a schematic diagram when the relationship between the X coordinate of the video display area and the data amount D per unit pixel number is changed in a plurality of stages.
  • the lower graph in FIG. 7A shows the change in the data amount D per unit pixel number on the dotted line in the video display area shown above.
  • the calculation unit 250 sets the predetermined area A based on the gazing point P. In addition to the boundary of the predetermined area A, a first external area B1 is provided so as to surround A, a boundary defining a second external area B2 is provided so as to surround B1, and the outside of the boundary of the second external area B2 is defined as B3.
  • the video system 1 shown in FIG. 7A can provide the user 300 with a video in which the amount of data is reduced in accordance with human visual recognition as compared with the case where the external area B is not divided into a plurality of areas.
  • the calculation unit 250 may generate an image in which the data amount D per unit pixel number is continuously reduced outside the predetermined area A as the distance from the gazing point P increases.
  • FIG. 7B is a schematic diagram when the relationship between the X coordinate of the video display area and the data amount D per unit pixel number is continuously changed.
  • the calculation unit 250 gives the data amount D per unit pixel a continuous gradient with respect to the distance from the gazing point P, as in the graph of FIG. 7B. This reduces the difference in image quality at the region boundaries, and a smooth video is obtained.
  • the calculation unit 250 may generate the video so that the data amount D per unit pixel does not fall below a lower limit DL.
  • depending on the image processing method, artifact-like motion may occur, particularly near object boundaries in the video. The human eye has reduced acuity in the peripheral visual field but is sensitive to movement. Therefore, the calculation unit 250 generates the video with reference to the lower limit DL so that such motion does not occur.
  • the video system 1 can thus provide the user 300 with a video with little sense of incongruity in the peripheral visual field region.
  • the specific value of the lower limit DL may be determined by experiment in view of the image display system of the head mounted display 100, the image processing applied by the video generation device 200, and the like.
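The continuous decrease of FIG. 7B together with the lower limit DL can be summarized in one illustrative profile. The linear slope and all concrete values are assumptions; as the passage notes, the patent leaves the actual values to experiment.

```python
def data_amount(dist: float, a: float = 100.0, d_max: float = 1.0,
                slope: float = 0.004, d_lower: float = 0.3) -> float:
    """Data amount D per unit pixel as a function of distance from the
    gazing point P: full amount inside the predetermined area (dist <= a),
    then a continuous linear decrease, clamped at the lower limit DL
    (`d_lower`) so peripheral artifacts stay unobtrusive."""
    if dist <= a:
        return d_max
    return max(d_lower, d_max - slope * (dist - a))
```

A stepwise profile as in FIG. 7A would replace the linear term with a lookup over the areas B1, B2, B3.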
  • FIG. 8 is a sequence diagram illustrating the flow of main processing for the head mounted display 100 and the video generation device 200 according to the embodiment.
  • the user 300 wears the head mounted display 100 and views the video presented by the video presentation unit 110.
  • the imaging unit 120 acquires an image including the eyes of the user 300 (S101), and the first communication unit 130 transmits the image to the video generation device 200 (S102).
  • the second communication unit 210 of the video generation device 200 receives an image including eyes from the head mounted display 100 (S201).
  • the gazing point acquisition unit 230 acquires the gazing point P of the user 300 based on the image (S202). Further, the communication determination unit 220 determines the communication environment based on the communication parameter (S203). Details of the communication determination will be described later.
  • the calculation unit 250 sets the data compression rate based on the result determined by the communication determination unit 220 (S204).
  • the calculation unit 250 acquires the video data of the video to be displayed to the user from the storage unit 260 (S205).
  • the calculation unit 250 acquires information on the gazing point P from the gazing point acquisition unit 230, and sets a predetermined area A based on the gazing point P (S206).
  • for the external area B, the calculation unit 250 generates a video with a smaller data amount D per unit pixel than the video calculated for the predetermined area A (S207). When generating the video with the reduced data amount D, the calculation unit 250 determines the data amount D in the external area B with reference to the compression rate set based on the communication determination result. Next, the second communication unit 210 transmits the video generated by the calculation unit 250 to the head mounted display 100 (S208). The first communication unit 130 of the head mounted display 100 receives the generated video (S103), and the video presentation unit 110 presents this video to the user 300 (S104).
  • FIG. 9 is a flowchart illustrating an example of processing relating to communication determination according to the embodiment.
  • the communication determination unit 220 acquires the latest data of communication parameters including at least one of radio wave intensity, communication speed, data loss rate, throughput, noise status, or physical distance from the router (S211).
  • the communication determination unit 220 calculates an average value of communication parameters based on the acquired latest data and past communication information for a predetermined period (S212).
  • the communication determination unit 220 determines the communication environment based on the calculated average value (S213).
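Steps S211 to S213 can be sketched as follows, assuming throughput as the monitored communication parameter and illustrative thresholds for the grades C1 to C3 (the patent specifies neither the window size nor the thresholds).

```python
from collections import deque

class CommunicationDeterminer:
    """Sketch of S211-S213: keep the last `window` observations of a
    communication parameter (here, throughput in Mbps) and judge the
    communication environment from their moving average."""

    def __init__(self, window: int = 5, good: float = 100.0, fair: float = 30.0):
        self.samples = deque(maxlen=window)   # old samples drop out (S211)
        self.good, self.fair = good, fair

    def observe(self, value: float) -> None:
        self.samples.append(value)

    def judge(self) -> str:
        avg = sum(self.samples) / len(self.samples)   # moving average (S212)
        if avg >= self.good:                          # grade decision (S213)
            return "C1"
        if avg >= self.fair:
            return "C2"
        return "C3"
```

The moving average damps momentary spikes, so the compression rate chosen in S204 does not oscillate with every fluctuation of the link.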
  • the video system 1 repeats the processes described in FIGS. 8 and 9 while reproducing the video.
  • the communication determination may be performed based on the latest data of the communication parameters on the head mounted display 100 side and the video generation device 200 side.
  • the video generation device 200 maintains the image quality of the video near the gazing point P that the user is viewing while reducing the image quality far from the gazing point P, so the amount of data transmitted to the head mounted display 100 is reduced and a video with little discomfort can be provided to the user. In addition, since the data amount during communication is reduced, even when the communication environment deteriorates, the influence of data transfer delays caused by the communication environment can be reduced. Therefore, the video system 1 of the present invention is suitable for apparatuses used while interacting with the user 300, such as applications and games on game machines, computers, and portable terminals.
  • the gaze point acquisition unit 230 is not limited to the case where it is mounted on the video generation device 200.
  • the gazing point acquisition unit 230 may be mounted on the head mounted display 100.
  • the head mounted display 100 may be provided with a control function, and a program function for realizing processing performed by the gazing point acquisition unit 230 may be provided by the control function of the head mounted display 100.
  • 1 video system 100 head-mounted display, 110 video presentation unit, 120 imaging unit, 130 first communication unit, 150 housing, 160 wearing tool, 170 headphones, 200 video generation device, 210 second communication unit, 220 communication determination unit 230 gaze point acquisition unit, 240 gaze point movement acquisition unit, 250 calculation unit, 260 storage unit.
  • the present invention can be used for a video system including a head mounted display and a video generation device.


Abstract

This video system includes a head mounted display that is used by being mounted on the head of a user, and a video generating device that generates a video to be presented to the user by the head mounted display. In the head mounted display, a video presenting unit presents a video to a user. An image capture unit captures an image including the eyes of the user. A first communication unit transmits the captured image to the video generating device and receives a video from the video generating device. In the video generating device, a second communication unit receives the captured image from the head mounted display and transmits a video to the head mounted display. A gazing point acquiring unit acquires a user's gazing point on the video on the basis of the captured image. A calculating unit sets, on the basis of the acquired gazing point, a predetermined region in which the gazing point is set as a reference, and generates, for the outside of the predetermined region, a video having a data amount, per unit number of pixels, smaller than that of a video calculated for the predetermined region.

Description

Video system
The present invention relates to a video system, and more particularly to a video system including a head mounted display and a video generation device.
A head mounted display is worn on the user's head and displays video on a screen placed very close to the user's eyes. Because a user wearing a head mounted display sees nothing but the displayed video, the user can enjoy a sense of immersion in the virtual space. As a related technique, Patent Document 1 discloses an image generation apparatus and an image generation method that can detect a user's movement and display an image corresponding to that movement on a head mounted display.
JP 2013-258614 A
With the above technique, a head mounted display can display on its screen video that follows the user's gaze direction. In most cases, however, the video displayed by a head mounted display is a moving image. Its data volume is therefore large, and if the video is transmitted as-is from the video generation device to the head mounted display, image updates may lag and the video may be interrupted. Moreover, high-resolution monitors have become common, and processing large volumes of video data is increasingly required. From the standpoint of video data transfer, the video generation device could be integrated with the head mounted display; however, because a head mounted display is worn by the user, compactness is desired, and it is difficult to build the video generation device into the housing. In practice, therefore, the video generation device and the head mounted display are connected wirelessly or the like, and the large volume of video data may cause the delivery of video to the user to stall.
The present invention has been made in view of such problems, and an object thereof is to provide a technique for a video system that can suppress stalling of communication between a head mounted display and a video generation device.
To solve the above problems, one aspect of the present invention is a video system including a head mounted display worn on a user's head and a video generation device that generates the video the head mounted display presents to the user. In this video system, the head mounted display includes a video presentation unit that presents video to the user, an imaging unit that captures an image including the user's eyes, and a first communication unit that transmits the captured image to the video generation device and receives from the video generation device the video to be presented by the video presentation unit. The video generation device includes a second communication unit that receives the captured image from the head mounted display and transmits video to the head mounted display, a gazing point acquisition unit that acquires the user's gazing point on the video based on the captured image, and a calculation unit that sets a predetermined region referenced to the acquired gazing point and generates, outside the predetermined region, video with a smaller data amount per unit number of pixels than the video calculated for the predetermined region.
The video generation device may further include a communication determination unit that determines the communication environment between the first communication unit and the second communication unit, and the calculation unit may make the data amount of the video smaller when the communication environment is poor than when it is good.
The communication determination unit may determine the communication environment based on information including the latest data of communication parameters, including at least one of radio wave intensity, communication speed, data loss rate, throughput, noise conditions, and physical distance from the router.
The video generation device may further include a gazing point movement acquisition unit that acquires the movement of the user's gazing point based on the gazing points acquired by the gazing point acquisition unit, and the calculation unit may change at least one of the size and the shape of the predetermined region according to the movement of the gazing point.
The calculation unit may set the shape of the predetermined region to one having a major axis and a minor axis, or a long side and a short side, and may set the direction of the major axis or the long side of the predetermined region according to the direction of movement of the gazing point.
Outside the predetermined region, the calculation unit may generate video in which the data amount per unit number of pixels varies according to the distance from the gazing point.
Outside the predetermined region, the calculation unit may generate video in which the data amount per unit number of pixels decreases continuously as the distance from the gazing point increases.
The calculation unit may generate the video such that the data amount per unit number of pixels does not fall below a lower limit.
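For illustration, the distance-dependent decrease with a lower limit described in the two preceding aspects can be sketched as a per-pixel quality factor. The linear falloff, region radius, and floor value below are assumptions for the sketch; the claims do not specify a formula.

```python
def quality_factor(px, py, gaze_x, gaze_y, radius_a, falloff=0.002, floor=0.2):
    """Per-pixel quality weight: 1.0 inside the predetermined region
    around the gazing point, decreasing continuously with distance
    outside it, and never dropping below the lower limit `floor`.
    (Illustrative only; the formula is an assumption.)"""
    dist = ((px - gaze_x) ** 2 + (py - gaze_y) ** 2) ** 0.5
    if dist <= radius_a:
        return 1.0
    return max(floor, 1.0 - falloff * (dist - radius_a))
```

A renderer could multiply the sampling rate or bit budget of each pixel by this factor so that quality degrades smoothly toward the periphery without ever reaching zero.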
Any combination of the above components, and any conversion of the expression of the present invention between a method, an apparatus, a system, a recording medium, a computer program, and the like, are also effective as aspects of the present invention.
According to the present invention, a video system including a head mounted display can appropriately reduce the amount of communication data and thereby provide the user, without stalling, with video that causes little sense of incongruity.
[Correction based on Rule 91 27.10.2015]
FIG. 1 is a diagram schematically showing an overview of a video system according to an embodiment.
FIG. 2 is a block diagram showing an example of the functional configuration of the video system according to the embodiment.
FIG. 3 is a diagram showing an example of the user's gazing point acquired by the gazing point acquisition unit according to the embodiment.
FIGS. 4(a)-(b) are diagrams showing examples of the predetermined region set by the calculation unit.
FIG. 5 is a diagram schematically showing the relationship between the X coordinate of the video display area and the data amount per unit number of pixels.
FIG. 6 is a diagram showing an example of the movement of the gazing point acquired by the gazing point movement acquisition unit according to the embodiment.
FIGS. 7(a) and 7(b) are schematic diagrams showing other examples of the relationship between the X coordinate of the video display area and the data amount per unit number of pixels.
FIG. 8 is a sequence diagram showing an example of processing in the video system according to the embodiment.
FIG. 9 is a flowchart showing an example of processing relating to communication determination according to the embodiment.
An overview of an embodiment of the present invention will now be described. FIG. 1 is a diagram schematically showing an overview of a video system 1 according to the embodiment. The video system 1 according to the embodiment includes a head mounted display 100 and a video generation device 200. As shown in FIG. 1, the head mounted display 100 is worn on the head of a user 300.
The video generation device 200 generates the video that the head mounted display 100 presents to the user. As non-limiting examples, the video generation device 200 is a device capable of playing back video, such as a stationary game console, a portable game console, a PC (Personal Computer), a tablet, a smartphone, a phablet, a video player, or a television. The video generation device 200 connects to the head mounted display 100 wirelessly or by wire. In the example shown in FIG. 1, the video generation device 200 is connected to the head mounted display 100 wirelessly. The wireless connection between the video generation device 200 and the head mounted display 100 can be realized using a known wireless communication technology such as Wi-Fi (registered trademark) or Bluetooth (registered trademark). As a non-limiting example, video transmission between the head mounted display 100 and the video generation device 200 is performed in accordance with standards such as Miracast (trademark), WiGig (trademark), or WHDI (trademark).
The head mounted display 100 includes a housing 150, a wearing tool 160, and headphones 170. The housing 150 houses an image display system, including an image display element, for presenting video to the user 300, as well as a wireless transmission module (not shown) such as a Wi-Fi module or a Bluetooth (registered trademark) module. The wearing tool 160 fastens the head mounted display 100 to the head of the user 300 and can be realized by, for example, a belt or an elastic band. When the user 300 puts on the head mounted display 100 with the wearing tool 160, the housing 150 is positioned so as to cover the eyes of the user 300; the field of view of the user 300 is therefore blocked by the housing 150.
The headphones 170 output the audio of the video played back by the video generation device 200. The headphones 170 need not be fixed to the head mounted display 100; the user 300 can freely attach and detach them even while wearing the head mounted display 100 with the wearing tool 160.
FIG. 2 is a block diagram showing an example of the functional configuration of the video system 1 according to the embodiment. The head mounted display 100 includes a video presentation unit 110, an imaging unit 120, and a first communication unit 130.
The video presentation unit 110 presents video to the user 300 and is realized by, for example, a liquid crystal monitor or an organic EL (electroluminescence) display. The imaging unit 120 captures an image including the user's eyes and is realized by, for example, an image sensor housed in the housing 150, such as a CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) sensor. The first communication unit 130 is connected wirelessly or by wire to the video generation device 200 and carries information between the head mounted display 100 and the video generation device 200. Specifically, the first communication unit 130 transmits images captured by the imaging unit 120 to the video generation device 200 and receives from the video generation device 200 the video to be presented by the video presentation unit 110. The first communication unit 130 is realized by a wireless transmission module such as a Wi-Fi module or a Bluetooth module.
Next, the video generation device 200 in FIG. 2 will be described. The video generation device 200 includes a second communication unit 210, a communication determination unit 220, a gazing point acquisition unit 230, a gazing point movement acquisition unit 240, a calculation unit 250, and a storage unit 260. The second communication unit 210 is connected wirelessly or by wire to the head mounted display 100; it receives images captured by the imaging unit 120 from the head mounted display 100 and transmits video to the head mounted display 100. In this specification, "video" refers to the video generated by the calculation unit 250 described later. The gazing point acquisition unit 230 acquires the user's gazing point P on the video based on the image captured by the imaging unit 120. The position of the gazing point P can be acquired using, for example, a known gaze detection technique. For example, the gazing point acquisition unit 230 acquires in advance, as calibration information, the relationship between image display positions and a reference point and moving point of the user's eye. During video playback, the imaging unit 120 captures an image of the eye of the user 300 as during calibration, and the gazing point acquisition unit 230 acquires the position information of the reference point and the moving point from the image. Based on the acquired position information and the calibration information acquired in advance, the gazing point acquisition unit 230 estimates the user's gazing point P on the video. Here, the "reference point" is a point that moves little relative to the head mounted display, such as the inner corner of the eye, and the "moving point" is a feature that moves with where the user 300 is looking, such as the iris or pupil. Hereinafter, "gazing point P" denotes the user's gazing point as estimated by the gazing point acquisition unit 230.
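As an illustration of the calibration step described above, one common approach (an assumption here, not something this document prescribes) fits a per-axis linear mapping from the reference-point-to-moving-point offset in the eye image to display coordinates, using a few calibration targets:

```python
def fit_axis(eye, screen):
    """Least-squares line screen = a * eye + b for one axis.
    `eye` are moving-point offsets from the reference point in the
    captured eye image; `screen` are the matching display coordinates
    of the calibration targets. (Illustrative; the actual mapping used
    by the gazing point acquisition unit 230 is not specified.)"""
    n = len(eye)
    me, ms = sum(eye) / n, sum(screen) / n
    cov = sum((e - me) * (s - ms) for e, s in zip(eye, screen))
    var = sum((e - me) ** 2 for e in eye)
    a = cov / var
    return a, ms - a * me

# Calibration with four targets at the display corners (hypothetical numbers).
ax, bx = fit_axis([0.0, 10.0, 0.0, 10.0], [0.0, 640.0, 0.0, 640.0])
ay, by = fit_axis([0.0, 0.0, 8.0, 8.0], [0.0, 0.0, 480.0, 480.0])

def estimate_gaze(dx, dy):
    """Estimate the gazing point P = (x, y) on the video from a new
    reference-point-to-moving-point offset (dx, dy)."""
    return ax * dx + bx, ay * dy + by
```

During playback, each new eye image yields an offset (dx, dy), and `estimate_gaze` plays the role of the estimation step performed by the gazing point acquisition unit 230.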
FIG. 3 is a diagram showing an example of the gazing point P of the user 300 acquired by the gazing point acquisition unit 230 according to the embodiment. Although the video contains three-dimensional objects, the video presentation unit 110 actually displays them in two-dimensional orthogonal coordinates using the display pixels of the video display area. In FIG. 3, the horizontal and vertical directions of the video display area of the head mounted display 100 are taken as the X axis and Y axis, respectively, and the coordinates of the gazing point P are written (x, y). As shown in FIG. 3, the position of the gazing point P may be displayed on the video the user is watching.
Returning to FIG. 2, the calculation unit 250 sets a predetermined region A referenced to the gazing point P acquired by the gazing point acquisition unit 230. For the external region B outside the predetermined region A, the calculation unit 250 generates video with a smaller data amount D per unit number of pixels than the video calculated for the predetermined region A. As detailed later, the "data amount D per unit number of pixels" is an index for comparing how differently the calculation unit 250 processes the predetermined region A and the external region B of the video that the video generation device 200 generates and transmits to the head mounted display 100; it represents, for example, the data amount D per pixel.
Next, the processing executed by the calculation unit 250 will be described with reference to FIGS. 4 and 5. FIGS. 4(a)-(b) are diagrams showing examples of the predetermined region A set by the calculation unit 250. FIG. 4(a) illustrates the case where the calculation unit 250 sets the region whose distance from the gazing point P is at most a as the predetermined region A. The predetermined region A may be any closed region; FIG. 4(a) shows a circular example and FIG. 4(b) a rectangular one. If the predetermined region A has a simple shape like these, the computation the calculation unit 250 needs in order to set the predetermined region A as the gazing point P moves can be reduced.
In general, human visual acuity is highest in the central vision region containing the fovea and drops sharply away from it. It is known that the human eye can resolve fine detail only within about 5° of the center of the fovea at best. The calculation unit 250 may therefore estimate the distance between the display pixels of the head mounted display 100 and the fovea of the eye of the user 300, and set as the predetermined region A the range of the video display area corresponding to a 5° foveal region referenced to the gazing point P of the user 300. The concrete size of the predetermined region A as seen by the user 300 may be determined by experiment in view of the optical system adopted by the liquid crystal monitor of the head mounted display 100 and the human visual characteristics mentioned above (for example, central visual acuity, age, and viewing angle).
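The geometry above can be sketched with a back-of-the-envelope calculation. The effective viewing distance and pixel pitch below are placeholder assumptions; real values depend on the headset's optics and would, as the text says, be tuned by experiment.

```python
import math

def foveal_radius_px(view_dist_mm, px_pitch_mm, half_angle_deg=2.5):
    """Radius, in display pixels, of the region subtending the central
    5 degrees of foveal vision (half-angle 2.5 degrees) at the given
    effective viewing distance. Illustrative numbers only."""
    radius_mm = view_dist_mm * math.tan(math.radians(half_angle_deg))
    return radius_mm / px_pitch_mm

# Example: 50 mm effective optical distance, 0.05 mm pixel pitch.
r = foveal_radius_px(50.0, 0.05)
```

The result `r` would serve as the distance a used when the calculation unit 250 defines the circular predetermined region A of FIG. 4(a).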
FIG. 5 is a graph illustrating the relationship between the X coordinate of the video display area and the data amount D per unit number of pixels. The horizontal axis corresponds to the X coordinate of the video display area, and the vertical axis represents the data amount D per unit number of pixels along the line through the gazing point P parallel to the X axis. FIG. 5 shows an example in which the calculation unit 250 sets the range within distance a of the gazing point P as the predetermined region A. First, the calculation unit 250 extracts, from the video data stored in the storage unit 260, the video data of the video to be presented to the user next; the calculation unit 250 may also acquire video data from outside the video generation device 200. The calculation unit 250 then processes this video data so that the data amount D per unit number of pixels is reduced where the X coordinate is less than (x−a) or greater than (x+a). The data amount can be reduced by a known method, for example by dropping high-frequency components of the video and compressing it. This yields video with a smaller overall data amount for communication.
One way to drop the high-frequency components of the video is as follows. The calculation unit 250 varies, between the inside and the outside of the predetermined region A, the sampling rate used when rendering a two-dimensional image from the video data of the three-dimensional model. Outside the predetermined region A, the calculation unit 250 lowers the sampling rate compared to the inside, and fills in the unsampled locations by interpolation, for example the well-known bilinear or spline interpolation. Compared with rendering the entire area at the high sampling rate, the image is blurred; its high-frequency content falls, so the data amount after compression shrinks. Moreover, since the rendering sampling rate is lowered, rendering itself can be sped up.
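The sampling-rate idea above can be sketched on a single scanline of pixel values. This is a minimal one-dimensional stand-in for the two-dimensional rendering pipeline the text describes: full sampling inside the region around the gazing point, sparse sampling plus linear interpolation outside it. The sampling step is an assumed parameter.

```python
def foveate_scanline(pixels, gaze_x, a, step=4):
    """Inside [gaze_x - a, gaze_x + a] every pixel is kept (full
    sampling rate); outside, only every `step`-th pixel is sampled
    and the rest are filled in by linear interpolation, which blurs
    the periphery and lowers its high-frequency content."""
    n = len(pixels)
    sampled = [i for i in range(n)
               if abs(i - gaze_x) <= a or i % step == 0 or i == n - 1]
    out = [0.0] * n
    for s0, s1 in zip(sampled, sampled[1:]):
        for i in range(s0, s1 + 1):
            t = (i - s0) / (s1 - s0) if s1 > s0 else 0.0
            out[i] = (1 - t) * pixels[s0] + t * pixels[s1]
    return out
```

Pixels within distance a of the gazing point come back unchanged, while peripheral pixels are replaced by interpolated values, mirroring how the calculation unit 250 trades peripheral detail for a smaller compressed data amount.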
Returning to FIG. 2, the communication determination unit 220 determines the communication environment between the first communication unit 130 and the second communication unit 210. When the communication environment is poor, the calculation unit 250 may make the data amount of the video smaller than when it is good.
The calculation unit 250 may reduce the data amount D per unit number of pixels in the external region B according to the determination result for the communication environment. For example, the communication environment is divided into three grades C1, C2, and C3 from best to worst, and the storage unit 260 stores the data compression rates to be used for each as E1, E2, and E3. The communication determination unit 220 determines which of C1 to C3 the current communication environment corresponds to; the calculation unit 250 obtains the corresponding data compression rate from the storage unit 260 and compresses the video data of the external region B at that rate to generate the video.
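A minimal sketch of the grade-to-rate lookup just described. The thresholds (here on throughput alone) and the concrete rate values standing in for E1-E3 are illustrative assumptions; the document leaves both unspecified.

```python
# Hypothetical table held by the storage unit 260: grades C1 (best)
# through C3 (worst) mapped to compression rates E1 through E3.
COMPRESSION_RATE = {"C1": 0.8, "C2": 0.5, "C3": 0.3}

def grade_environment(throughput_mbps):
    """Classify the communication environment from measured throughput
    (assumed thresholds), as the communication determination unit would."""
    if throughput_mbps >= 100.0:
        return "C1"
    if throughput_mbps >= 30.0:
        return "C2"
    return "C3"

def region_b_rate(throughput_mbps):
    """Compression rate the calculation unit applies to external region B."""
    return COMPRESSION_RATE[grade_environment(throughput_mbps)]
```

Keeping the grade-to-rate table in storage rather than in code matches the text: the rates can be retuned without touching the determination logic.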
With this arrangement, the data amount of the video transmitted from the video generation device 200 to the head mounted display 100 is adjusted to the communication environment, so stalling of the video due to transfer delays and the like can be avoided. Because the image quality near the gazing point P of the user 300 does not change, the sense of incongruity given to the user 300 is kept small even when the data amount is reduced. In addition, video reflecting the information on the gazing point P of the user 300 captured by the imaging unit 120 can be delivered to the user without delay.
The communication determination unit 220 may determine the communication environment based on information including the latest data of communication parameters, including at least one of radio wave intensity, communication speed, data loss rate, throughput, noise conditions, and physical distance from the router.
The communication determination unit 220 may monitor the communication parameters and judge from them whether the communication environment is good or bad. The communication determination unit 220 transmits a message inquiring about the communication status to the head mounted display 100; the first communication unit 130, for example, receives this message, acquires the communication parameters on the head mounted display 100 side, and transmits them to the video generation device 200, while the second communication unit 210 acquires the communication parameters on the video generation device 200 side. The communication determination unit 220 may then judge whether the communication environment is good or bad based on the communication parameters received from the head mounted display 100 and those acquired by the second communication unit 210. Here, the information including the latest data may be, for example, a value that the communication determination unit 220 computes as a moving average over a fixed number of past observations. Furthermore, as in the configuration described above, by using data compression rates set in association with communication environments, the calculation unit 250 can generate video whose data amount suits the communication environment at that moment. Even in a place where the communication environment is poor or changeable, therefore, the frame rate of the video presented to the user can be maintained, and video that does not look odd to the user can be provided.
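The moving-average smoothing mentioned above can be sketched as follows; the window size is an illustrative assumption, and any of the listed parameters (throughput, radio wave intensity, and so on) could be the monitored value.

```python
from collections import deque

class ParameterMonitor:
    """Keep a fixed number of past observations of one communication
    parameter and report their mean as the 'latest data' the
    communication determination unit bases its judgment on."""
    def __init__(self, window=8):
        self.samples = deque(maxlen=window)  # old samples fall off

    def observe(self, value):
        self.samples.append(value)

    def latest(self):
        return sum(self.samples) / len(self.samples)
```

Smoothing over a window keeps a single noisy reading from flipping the environment grade back and forth between, say, C1 and C2 on every frame.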
The gazing point movement acquisition unit 240 may acquire the movement of the gazing point P of the user 300 based on the gazing points P acquired by the gazing point acquisition unit 230. The calculation unit 250 changes at least one of the size and the shape of the predetermined region A according to the movement of the gazing point P acquired by the gazing point movement acquisition unit 240.
FIG. 6 is a diagram showing an example of the movement of the gazing point P acquired by the gazing point movement acquisition unit 240 according to the embodiment. FIG. 6 shows the user's gazing point P moving from P1 to P2. The calculation unit 250 sets the predetermined region A with reference to the gazing point P acquired by the gazing point acquisition unit 230 and the movement of the gazing point P acquired by the gazing point movement acquisition unit 240. In the example shown in FIG. 6, the gazing point P is at position P2 and the direction of its movement is indicated by an arrow. The predetermined region A need not be centered on the gazing point P: as shown in FIG. 6, the calculation unit 250 may set the boundary of the predetermined region A not equidistant from the gazing point P2 but extended so that more of the direction in which the gazing point P is moving falls within the predetermined region A. The head mounted display 100 can thereby provide the user 300 with quality-preserving video over a wide range that includes the direction toward which the user 300 is turning the gaze. As described above, the predetermined region A may also be circular or rectangular as shown in FIGS. 4(a) and (b).
The calculation unit 250 may also set the shape of the predetermined region A to one having a major axis and a minor axis, or a long side and a short side, and set the direction of the major axis or the long side of the predetermined region according to the direction of movement of the gazing point P.
 In FIG. 6, the calculation unit 250 sets the shape of the predetermined region A to an ellipse, based on the movement of the gazing point P acquired by the gazing point movement acquisition unit 240. For example, when placing the predetermined region A relative to the gazing point P, the calculation unit 250 may align the major axis of the ellipse with the direction of movement of the gazing point P. The gazing point P need not be the center of the ellipse; the positional relationship between the gazing point P and the ellipse may be set so that the ellipse extends farther on the side toward which the gazing point P is moving. The video presentation unit 110 can thereby maintain image quality over a wider extent in the direction in which the gazing point P moves most, rather than in directions in which it moves little. The shape of the predetermined region A set by the calculation unit 250 is not limited to the ellipse described above, as long as it has a major axis and a minor axis, or a long side and a short side. For example, if the calculation unit 250 sets the shape of the predetermined region A to a rectangle, then, when a compression method that compresses pixels block by block (with a plurality of pixels per block) is adopted, the computation of the overlap between the predetermined region A and the blocks lying on its boundary can be simplified compared with the case where the predetermined region A is an ellipse.
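The placement described above, in which the elliptical region A extends farther in the direction of gaze movement, can be sketched as a membership test. This is an illustration only: the axis lengths, the shift fraction, and the function name are assumptions, not values or names from the specification.

```python
import math

def in_region(px, py, gaze, motion, a=200.0, b=120.0, shift=0.4):
    """Return True if pixel (px, py) lies inside the elliptical region A.

    gaze:   (x, y) gazing point P
    motion: (dx, dy) recent gaze-movement vector
    a, b:   semi-major / semi-minor axes in pixels (illustrative values)
    shift:  fraction of the major axis by which the ellipse centre is
            advanced along the motion direction, so the region extends
            farther ahead of P than behind it
    """
    mx, my = motion
    norm = math.hypot(mx, my)
    if norm == 0:                      # no movement: centre the ellipse on P
        ux, uy = 1.0, 0.0
        cx, cy = gaze
    else:
        ux, uy = mx / norm, my / norm  # unit vector of gaze motion
        cx = gaze[0] + shift * a * ux  # centre advanced ahead of P
        cy = gaze[1] + shift * a * uy
    # express the pixel in the ellipse frame (major axis along the motion)
    rx, ry = px - cx, py - cy
    u = rx * ux + ry * uy              # coordinate along the major axis
    v = -rx * uy + ry * ux             # coordinate along the minor axis
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0
```

With a rightward gaze motion, a pixel well ahead of P tests inside the region while a pixel the same distance behind it does not, matching the asymmetric boundary of FIG. 6.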
 Outside the predetermined region A, the calculation unit 250 may generate video in which the data amount D per unit number of pixels varies according to the distance from the gazing point P.
 FIG. 7(a) is a schematic diagram of a case where the relationship between the X coordinate of the video display region and the data amount D per unit number of pixels is varied in a plurality of steps. The lower graph in FIG. 7(a) shows the change in the data amount D per unit number of pixels along the dash-dot line in the video display region shown above it. In the example of FIG. 7(a), the calculation unit 250 sets the predetermined region A with reference to the gazing point P. In addition to the boundary of the predetermined region A, it provides a boundary defining a first outer region B1 surrounding A and a boundary defining a second outer region B2 surrounding B1; the area outside the boundary of the second outer region B2 is defined as B3. Dividing the outer region B into a plurality of regions in this way makes the difference in image quality arising at the boundary between the predetermined region A and the outer region B smaller than when it is not divided. The video system 1 shown in FIG. 7(a) can thus provide the user 300 with video whose data amount is reduced in a manner better matched to human visual perception than when the outer region B is not divided into a plurality of regions.
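The stepped zones A, B1, B2, and B3 described above can be sketched as a simple mapping from distance-to-gazing-point to data amount D. The zone radii and per-zone ratios below are illustrative assumptions, not values from the specification.

```python
def data_amount(dist, d_full=24.0):
    """Data amount D per unit number of pixels, stepped over zones.

    Illustrative zones (radii in pixels from the gazing point P):
      A  (dist < 100): full quality
      B1 (dist < 200): 3/4 of full
      B2 (dist < 300): 1/2 of full
      B3 (otherwise) : 1/4 of full
    """
    if dist < 100:
        return d_full
    if dist < 200:
        return 0.75 * d_full
    if dist < 300:
        return 0.5 * d_full
    return 0.25 * d_full
```

Each additional step shrinks the quality jump a viewer could notice at any single boundary, which is the point of splitting B into B1, B2, and B3.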
 Outside the predetermined region A, the calculation unit 250 may generate video in which the data amount D per unit number of pixels decreases continuously as the distance from the gazing point P increases.
 FIG. 7(b) is a schematic diagram of a case where the relationship between the X coordinate of the video display region and the data amount D per unit number of pixels is varied continuously. The calculation unit 250 varies the quantity on the vertical axis of FIG. 7(b) continuously with respect to the horizontal axis, producing a gradation. This reduces the difference in image quality at the boundaries where the data amount D per unit number of pixels changes, yielding a smoother image.
 The calculation unit 250 may generate the video so that the data amount D per unit number of pixels does not fall below a lower limit DL.
 The vertical axes of FIGS. 7(a) and 7(b) show the lower limit DL of the data amount D per unit number of pixels. In general, when processing that reduces the data amount of a moving image is applied, some image processing methods produce characteristic motion artifacts, particularly near object boundaries in the video. It is also generally known that although human visual acuity decreases in the peripheral visual field, the eye becomes more sensitive to motion there. The calculation unit 250 therefore generates the video with reference to the lower limit DL so as not to produce such artifacts. The video system 1 can thereby provide the user 300 with video that suppresses visual discomfort even in the peripheral visual field. The specific value of the lower limit DL may be determined by experiment, taking into account the image display system of the head mounted display 100 and the image processing applied by the video generation device 200.
 A usage example of the present embodiment will now be described with reference to FIGS. 8 and 9. FIG. 8 is a sequence diagram illustrating the flow of the main processing of the head mounted display 100 and the video generation device 200 according to the embodiment. First, the user 300 wears the head mounted display 100 and views the video presented by the video presentation unit 110. The imaging unit 120 captures an image including the eyes of the user 300 (S101), and the first communication unit 130 transmits the image to the video generation device 200 (S102).
 The second communication unit 210 of the video generation device 200 receives the image including the eyes from the head mounted display 100 (S201). The gazing point acquisition unit 230 acquires the gazing point P of the user 300 from the image (S202). The communication determination unit 220 determines the communication environment on the basis of the communication parameters (S203); the details of this determination are described later. Next, the calculation unit 250 sets the data compression ratio on the basis of the result determined by the communication determination unit 220 (S204), and acquires from the storage unit 260 the video data of the video to be displayed to the user (S205). The calculation unit 250 then acquires the information on the gazing point P from the gazing point acquisition unit 230 and sets the predetermined region A with the gazing point P as reference (S206). For the outer region B, the calculation unit 250 generates video whose data amount D per unit number of pixels is smaller than that of the video calculated for the predetermined region A (S207); in doing so, it determines the data amount D in the outer region B with reference to the compression ratio set from the result of the communication determination. Next, the second communication unit 210 transmits the video generated by the calculation unit 250 to the head mounted display 100 (S208). The first communication unit 130 of the head mounted display 100 receives the generated video (S103), and the video presentation unit 110 presents it to the user 300 (S104).
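Steps S204 and S207, in which the compression ratio chosen from the communication determination scales the outer-region data amount, might be sketched as follows. The environment labels and the ratio values are assumptions for illustration, not figures from the specification.

```python
def outer_data_amount(env, d_full=24.0):
    """S204/S207 sketch: scale the outer-region data amount D by a
    compression ratio chosen from the judged communication environment.
    Labels and ratios are illustrative assumptions."""
    ratios = {"good": 0.5, "fair": 0.25, "poor": 0.1}
    return d_full * ratios[env]
```

A poorer communication environment thus directly yields a smaller D for region B, while region A keeps its full data amount.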
 FIG. 9 is a flowchart illustrating an example of the processing relating to the communication determination according to the embodiment. The communication determination unit 220 acquires the latest data of communication parameters including, for example, at least one of radio wave intensity, communication speed, data loss rate, throughput, noise conditions, and physical distance from the router (S211). Next, the communication determination unit 220 calculates the average value of the communication parameters from the acquired latest data and the past communication information for a predetermined period (S212), and determines the communication environment on the basis of the calculated average (S213). The video system 1 repeats the processing described in FIGS. 8 and 9 while the video is being reproduced. In S213, as described above, the communication determination may be performed on the basis of the latest communication parameter data on both the head mounted display 100 side and the video generation device 200 side.
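The S211 to S213 loop can be sketched as a sliding-window average over one communication parameter. Throughput is used here purely for illustration; the class name, window size, and grading thresholds are assumptions, not part of the specification.

```python
from collections import deque

class CommDeterminer:
    """Sketch of the communication determination unit 220 (S211-S213).

    Keeps a sliding window of recent throughput samples (Mbps),
    averages them together with the newest sample, and grades the
    communication environment from the average."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # past info for the period

    def judge(self, latest_mbps):
        self.history.append(latest_mbps)             # S211: latest data
        avg = sum(self.history) / len(self.history)  # S212: average
        # S213: grade the environment from the average (thresholds assumed)
        if avg >= 50.0:
            return "good"
        if avg >= 10.0:
            return "fair"
        return "poor"
```

Averaging over a window rather than reacting to a single sample keeps the compression ratio from oscillating on every momentary dip in throughput.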
 As described above, according to the embodiment, the image quality of the video near the gazing point P at which the user is looking is maintained while the quality is lowered away from the gazing point P, reducing the amount of data that the video generation device 200 transmits to the head mounted display 100; the user can therefore be provided with video that causes little sense of incongruity. Because the amount of data communicated is small, even when the communication environment deteriorates, the resulting effects such as data transfer delays can be mitigated. The video system 1 of the present invention is therefore well suited to devices that communicate interactively with the user 300, such as applications and games used on game machines, computers, and portable terminals.
 The present invention has been described above on the basis of the embodiment. The embodiment is an example, and it will be understood by those skilled in the art that various modifications are possible in the combinations of its constituent elements and processing steps, and that such modifications are also within the scope of the present invention.
 In the above description, the gazing point acquisition unit 230 is implemented in the video generation device 200, but it is not limited to this arrangement. For example, the gazing point acquisition unit 230 may be implemented in the head mounted display 100. In that case, the head mounted display 100 may be given a control function, and a program function for realizing the processing performed by the gazing point acquisition unit 230 may be provided through that control function. This makes it unnecessary to transmit the image including the eyes of the user 300 from the head mounted display 100 to the video generation device 200, so the video system 1 can reduce the communication bandwidth used and contribute to faster processing.
 1 video system, 100 head mounted display, 110 video presentation unit, 120 imaging unit, 130 first communication unit, 150 housing, 160 wearing fixture, 170 headphones, 200 video generation device, 210 second communication unit, 220 communication determination unit, 230 gazing point acquisition unit, 240 gazing point movement acquisition unit, 250 calculation unit, 260 storage unit.
 The present invention is applicable to a video system including a head mounted display and a video generation device.

Claims (8)

  1.  A video system comprising a head mounted display worn on a user's head and a video generation device that generates video to be presented to the user by the head mounted display, wherein
     the head mounted display comprises:
     a video presentation unit that presents the video to the user;
     an imaging unit that captures an image including the user's eyes; and
     a first communication unit that transmits the image captured by the imaging unit to the video generation device and receives from the video generation device the video to be presented by the video presentation unit, and
     the video generation device comprises:
     a second communication unit that receives from the head mounted display the image captured by the imaging unit and transmits the video to the head mounted display;
     a gazing point acquisition unit that acquires, on the basis of the image captured by the imaging unit, the user's gazing point on the video; and
     a calculation unit that sets, on the basis of the gazing point acquired by the gazing point acquisition unit, a predetermined region referenced to the gazing point, and generates, outside the predetermined region, video having a smaller data amount per unit number of pixels than the video calculated for the predetermined region.
  2.  The video system according to claim 1, wherein the video generation device further comprises a communication determination unit that determines a communication environment between the first communication unit and the second communication unit, and
     the calculation unit reduces the data amount of the video when the communication environment is poor, compared with when it is good.
  3.  The video system according to claim 2, wherein the communication determination unit determines the communication environment on the basis of information including the latest data of communication parameters including at least one of radio wave intensity, communication speed, data loss rate, throughput, noise conditions, and physical distance from a router.
  4.  The video system according to any one of claims 1 to 3, wherein the video generation device further comprises a gazing point movement acquisition unit that acquires a movement of the user's gazing point on the basis of the gazing point acquired by the gazing point acquisition unit, and
     the calculation unit changes at least one of the size and the shape of the predetermined region according to the movement of the gazing point.
  5.  The video system according to claim 4, wherein the calculation unit sets the shape of the predetermined region to one having a major axis and a minor axis, or a long side and a short side, and sets the direction of the major axis or long side of the predetermined region according to the direction of movement of the gazing point.
  6.  The video system according to any one of claims 1 to 5, wherein, outside the predetermined region, the calculation unit generates video in which the data amount per unit number of pixels varies according to the distance from the gazing point.
  7.  The video system according to any one of claims 1 to 6, wherein, outside the predetermined region, the calculation unit generates video in which the data amount per unit number of pixels decreases continuously as the distance from the gazing point increases.
  8.  The video system according to any one of claims 1 to 7, wherein the calculation unit generates the video so that the data amount per unit number of pixels does not fall below a lower limit.
PCT/JP2015/076765 2015-09-18 2015-09-18 Video system WO2017046956A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201580083269.3A CN108141559B (en) 2015-09-18 2015-09-18 Image system, image generation method and computer readable medium
PCT/JP2015/076765 WO2017046956A1 (en) 2015-09-18 2015-09-18 Video system
KR1020187008945A KR101971479B1 (en) 2015-09-18 2015-09-18 Video system
US15/267,917 US9978183B2 (en) 2015-09-18 2016-09-16 Video system, video generating method, video distribution method, video generating program, and video distribution program
US15/963,476 US20180247458A1 (en) 2015-09-18 2018-04-26 Video system, video generating method, video distribution method, video generating program, and video distribution program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/076765 WO2017046956A1 (en) 2015-09-18 2015-09-18 Video system

Publications (1)

Publication Number Publication Date
WO2017046956A1 true WO2017046956A1 (en) 2017-03-23

Family

ID=58288481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/076765 WO2017046956A1 (en) 2015-09-18 2015-09-18 Video system

Country Status (3)

Country Link
KR (1) KR101971479B1 (en)
CN (1) CN108141559B (en)
WO (1) WO2017046956A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102149732B1 (en) * 2019-04-17 2020-08-31 라쿠텐 인코포레이티드 Display control device, display control method, program, and non-transitory computer-readable information recording medium
WO2021066210A1 (en) * 2019-09-30 2021-04-08 엘지전자 주식회사 Display device and display system

Citations (4)

Publication number Priority date Publication date Assignee Title
JPH0713552A (en) * 1993-06-14 1995-01-17 Atr Tsushin Syst Kenkyusho:Kk Picture display device
JPH099253A (en) * 1995-06-19 1997-01-10 Toshiba Corp Picture compression communication equipment
JP2004056335A (en) * 2002-07-18 2004-02-19 Sony Corp Information processing apparatus and method, display apparatus and method, and program
JP2008131321A (en) * 2006-11-21 2008-06-05 Nippon Telegr & Teleph Corp <Ntt> Video transmission method, video transmission program and computer readable recording medium with the program recorded thereon

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US5886735A (en) * 1997-01-14 1999-03-23 Bullister; Edward T Video telephone headset
US9344612B2 (en) 2006-02-15 2016-05-17 Kenneth Ira Ritchey Non-interference field-of-view support apparatus for a panoramic facial sensor
US8611015B2 (en) * 2011-11-22 2013-12-17 Google Inc. User interface
CN204442580U (en) * 2015-02-13 2015-07-01 北京维阿时代科技有限公司 A kind of wear-type virtual reality device and comprise the virtual reality system of this equipment
CN104767992A (en) * 2015-04-13 2015-07-08 北京集创北方科技有限公司 Head-wearing type display system and image low-bandwidth transmission method


Cited By (2)

Publication number Priority date Publication date Assignee Title
JP2022511838A (en) * 2018-12-14 2022-02-01 アドバンスト・マイクロ・ディバイシズ・インコーポレイテッド Forbidden coding slice size map control
JP7311600B2 (en) 2018-12-14 2023-07-19 アドバンスト・マイクロ・ディバイシズ・インコーポレイテッド Slice size map control for foveated coding

Also Published As

Publication number Publication date
CN108141559A (en) 2018-06-08
KR20180037299A (en) 2018-04-11
CN108141559B (en) 2020-11-06
KR101971479B1 (en) 2019-04-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15904142; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20187008945; Country of ref document: KR; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 15904142; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)