WO2019142817A1 - Image display system and recording medium on which program for image display system is recorded - Google Patents


Info

Publication number
WO2019142817A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
character
control unit
virtual space
image
Prior art date
Application number
PCT/JP2019/001071
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuhiro Ogawa (小川 和宏)
Original Assignee
Konami Digital Entertainment Co., Ltd. (株式会社コナミデジタルエンタテインメント)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018007913A external-priority patent/JP6587364B2/en
Priority claimed from JP2018007914A external-priority patent/JP6628331B2/en
Application filed by Konami Digital Entertainment Co., Ltd.
Publication of WO2019142817A1 publication Critical patent/WO2019142817A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25 Output arrangements for video game devices
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577 Simulating properties, behaviour or motion of objects using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a technique for displaying an image obtained by imaging a virtual space.
  • BACKGROUND ART: Content, such as a game in which the user communicates with an object such as a character in a virtual space, has conventionally been proposed. Content of this kind is typically displayed on a head mounted display.
  • Patent Document 1 discloses a configuration for displaying a menu image in a virtual space in accordance with the tilt of a head mounted display.
  • If an operation device separate from the head mounted display is prepared, restrictions on operation by the user are alleviated, but there is a problem that the device configuration becomes complicated.
  • In view of the above circumstances, the present invention aims to easily realize an operation on an object in a virtual space.
  • A recording medium according to one aspect of the present invention records a program that causes a computer to function as: a display control unit that causes a display device of a head mounted display to display a stereoscopic image, using binocular parallax, of a virtual space captured by a virtual camera whose direction is controlled according to the direction of the head mounted display; and a motion control unit that controls the motion of an object in the virtual space according to the rotation of the head mounted display in the roll direction and the positional relationship between the object and a reference line whose direction in the virtual space changes according to the direction of the head mounted display.
  • An image display system according to another aspect includes a head mounted display and an information processing apparatus. The information processing apparatus includes: a display control unit that causes a display device of the head mounted display to display an image, using binocular parallax, of the virtual space captured by a virtual camera whose direction is controlled according to the direction of the head mounted display; and a motion control unit that controls the motion of an object according to the rotation of the head mounted display in the roll direction and the positional relationship between a reference line whose direction in the virtual space changes in accordance with the direction of the head mounted display and the object in the virtual space.
  • BRIEF DESCRIPTION OF DRAWINGS: The drawings include a block diagram illustrating the configuration of the image display system, a block diagram illustrating the functional configuration of the image display system, an explanatory diagram of the virtual space, explanatory diagrams of images displayed by the display device, a flowchart illustrating the contents of processing executed by the control device, an explanatory diagram of the rotation of the HMD in the roll direction, an explanatory diagram of the movement of the virtual camera in modification A4, and an explanatory diagram of a change of the imaging range in modification A5.
  • FIG. 1 is a perspective view illustrating the appearance of an image display system 1A according to a first embodiment of the present invention.
  • The image display system 1A is a video apparatus for displaying images, and includes a terminal device 10 and a mounting tool 20.
  • The terminal device 10 is a portable information terminal such as a smartphone or a tablet terminal, for example.
  • The terminal device 10 is detachably installed in the mounting tool 20.
  • The mounting tool 20 is a tool for mounting the terminal device 10 on the head of a user U.
  • A goggle-type mounting tool having a belt wound around the head of the user U is suitable as the mounting tool 20.
  • FIG. 2 is a block diagram illustrating the configuration of the terminal device 10.
  • The terminal device 10 is a computer system including a control device 11, a storage device 12, a display device 13, and a detection device 14.
  • The control device 11 is one or more processors, such as a CPU (Central Processing Unit), and centrally controls each element of the image display system 1A.
  • The storage device 12 stores the program executed by the control device 11 and various data used by the control device 11.
  • The storage device 12 is configured by a known recording medium, such as a magnetic recording medium or a semiconductor recording medium, or by a combination of a plurality of types of recording media.
  • The display device 13 displays images under the control of the control device 11.
  • A flat panel display, such as a liquid crystal display panel or an organic EL (electroluminescence) display panel, is used as the display device 13.
  • The display device 13 is arranged in front of both eyes of the user U, with its display surface facing the head of the user U.
  • The display device 13 according to the first embodiment displays a stereoscopic image from which the user U can perceive a stereoscopic effect.
  • The stereoscopic image is composed of a right-eye image and a left-eye image to which binocular parallax is given.
  • The user U perceives a stereoscopic effect when the right-eye image is viewed by the right eye of the user U and the left-eye image is viewed by the left eye of the user U.
  • The display device 13, the detection device 14, and the mounting tool 20 constitute a head mounted display (hereinafter, "HMD") 30, and the control device 11 and the storage device 12 constitute an information processing device 40 that causes the HMD 30 to display images. That is, the image display system 1A includes the HMD 30 and the information processing device 40. As described above, in the first embodiment, the information processing device 40, the display device 13, and the detection device 14 are realized by the single terminal device 10. Note that the information processing device 40 may also be regarded as an element of the HMD 30.
  • The detection device 14 in FIG. 2 is a sensor that outputs a detection signal according to the orientation of the HMD 30.
  • The detection device 14 is configured by one or more sensors selected from a plurality of sensor types, such as a gyro sensor that detects angular velocity, an acceleration sensor that detects acceleration, an inclination sensor that detects an inclination angle, and a geomagnetic sensor that detects direction from geomagnetism.
  • The detection device 14 can also be described as a sensor that outputs a detection signal according to the attitude of the display device 13 or the terminal device 10. Note that an amplifier for amplifying the detection signal and an A/D converter for converting the detection signal from analog to digital are omitted from the drawing for convenience.
  • The attitude of the HMD 30 is defined by three axes (an X axis, a Y axis, and a Z axis) orthogonal to one another at an origin O.
  • The X axis is perpendicular to the display surface of the display device 13 and corresponds to the front-rear direction of the user U.
  • The Y axis and the Z axis are parallel to the display surface of the display device 13: the Y axis corresponds to the left-right direction of the user U, and the Z axis corresponds to the up-down direction of the user U.
  • Equivalently, the X axis corresponds to the depth direction of the display surface, the Y axis to the horizontal direction of the display surface, and the Z axis to the vertical direction of the display surface.
  • The circumferential direction centered on the X axis (that is, the direction along the arc of a virtual circle centered on the X axis) is the roll direction, the circumferential direction centered on the Y axis is the pitch direction, and the circumferential direction centered on the Z axis is the yaw direction.
  • The direction of the X axis is determined by the angle in the pitch direction and the angle in the yaw direction.
  • Rotation in the roll direction corresponds to the motion of tilting the head to the left or right while the user U faces forward. That is, for example, when the user U tilts his or her head, the HMD 30 rotates in the roll direction.
  • One of the clockwise and counterclockwise directions around the X axis is referred to as the first side in the roll direction, and the other as the second side.
  • The control device 11 of FIG. 2 functions as a posture analysis unit 41, a display control unit 42A, and a motion control unit 43A by executing the program stored in the storage device 12, as illustrated in FIG. 3. Note that part or all of the functions of the control device 11 may be realized by dedicated electronic circuitry.
  • The posture analysis unit 41 specifies the posture of the HMD 30 by analyzing the detection signal output from the detection device 14. Specifically, the posture analysis unit 41 sequentially generates, at a predetermined cycle, posture data representing the posture of the HMD 30 or changes in that posture.
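As a concrete illustration of this kind of posture analysis, the following is a minimal sketch that integrates gyro angular velocities into roll, pitch, and yaw posture data once per detection cycle. The sensor interface, the plain Euler integration, and the cycle length are illustrative assumptions rather than details taken from the disclosure; a production system would typically fuse gyro data with accelerometer or magnetometer readings.

```python
from dataclasses import dataclass

@dataclass
class Posture:
    roll: float   # rotation about the X axis, in radians
    pitch: float  # rotation about the Y axis, in radians
    yaw: float    # rotation about the Z axis, in radians

def integrate_gyro(posture: Posture, angular_velocity, dt: float) -> Posture:
    """Advance the posture data by one predetermined cycle.

    angular_velocity: (wx, wy, wz) in rad/s from a hypothetical gyro sensor.
    dt: length of the cycle in seconds.
    """
    wx, wy, wz = angular_velocity
    return Posture(
        roll=posture.roll + wx * dt,
        pitch=posture.pitch + wy * dt,
        yaw=posture.yaw + wz * dt,
    )
```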
  • The display control unit 42A causes the display device 13 to display an image captured by a virtual camera E in a virtual space V.
  • FIG. 4 is an explanatory diagram of the virtual space V.
  • The virtual camera E is installed in the virtual space V and captures a specific range (hereinafter, "imaging range") R of the virtual space V.
  • The imaging range R spans predetermined angles in the vertical and horizontal directions with respect to the optical axis of the virtual camera E.
  • The optical axis of the virtual camera E corresponds to the virtual line of sight of the user U in the virtual space V.
  • The display control unit 42A of the first embodiment causes the display device 13 to display a stereoscopic image (a right-eye image and a left-eye image) using binocular parallax. The virtual camera E is therefore configured to include a first virtual camera that captures the image for the left eye and a second virtual camera that captures the image for the right eye. Image data representing the stereoscopic image captured by the virtual camera E is sequentially supplied from the display control unit 42A to the display device 13, whereby the stereoscopic image is displayed on the display device 13.
  • The display control unit 42A controls the direction of the virtual camera E in the virtual space V according to the direction of the HMD 30 specified by the posture analysis unit 41. Specifically, the direction of the optical axis of the virtual camera E (and hence the imaging range R) changes in conjunction with the rotation of the X axis about the origin O.
  • When the HMD 30 rotates in the yaw direction, the optical axis of the virtual camera E rotates left and right about the origin O; when the HMD 30 rotates in the pitch direction, the optical axis of the virtual camera E rotates up and down about the origin O.
  • In the first embodiment, the attitude of the virtual camera E does not change with rotation of the HMD 30 in the roll direction, although a configuration in which the inclination of the virtual camera E about the optical axis is changed is also possible.
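To make the camera-direction mapping concrete, here is a small sketch that converts the yaw and pitch angles of the HMD into a unit vector for the virtual camera's optical axis while deliberately ignoring roll, matching the behavior just described. The axis conventions follow the X/Y/Z definitions above; the function itself is an illustrative assumption.

```python
import math

def optical_axis(yaw: float, pitch: float):
    """Unit vector of the virtual camera's optical axis.

    The direction depends only on yaw (left-right) and pitch (up-down);
    roll is ignored, so rolling the HMD leaves the camera attitude unchanged.
    """
    cos_p = math.cos(pitch)
    return (
        cos_p * math.cos(yaw),  # X component: depth (forward)
        cos_p * math.sin(yaw),  # Y component: left-right
        math.sin(pitch),        # Z component: up-down
    )
```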
  • A character C is disposed in the virtual space V in which the virtual camera E is installed.
  • The character C according to the first embodiment is an object representing a virtual creature (for example, a human, an animal, or a monster) that acts in the virtual space V, and includes a head and a body.
  • The image display system 1A provides the user U with a game in which the user interacts with the character C while touching the character C at any time in the virtual space V. That is, the image display system 1A is used as a game device.
  • A reference line Q is set in the virtual space V.
  • The reference line Q is a straight line whose direction in the virtual space V changes according to the direction of the HMD 30 (the direction of the X axis).
  • In the first embodiment, the optical axis of the virtual camera E serves as the reference line Q.
  • For example, when the HMD 30 rotates to the right in the yaw direction, the reference line Q rotates to the right about the origin O; when the HMD 30 rotates to the left, the reference line Q rotates to the left about the origin O.
  • In other words, the reference line Q corresponds to the virtual line of sight of the user U in the virtual space V.
  • FIG. 5 is a schematic view of an image displayed on the display device 13.
  • The display control unit 42A causes the display device 13 to display the stereoscopic image captured by the virtual camera E in the virtual space V.
  • The character C is displayed on the display device 13.
  • The display control unit 42A also causes the display device 13 to display an image G (hereinafter, "instruction image") representing the direction of the reference line Q.
  • The instruction image G is disposed at a predetermined position on the reference line Q in the virtual space V. Since the reference line Q in the first embodiment is the optical axis of the virtual camera E, the instruction image G is displayed at a predetermined position on the display surface of the display device 13.
  • The motion control unit 43A of FIG. 3 controls the motion of the character C in the virtual space V.
  • The motion control unit 43A controls the motion of the character C in accordance with the progress of the game, and also causes the character C to perform a motion that reacts to contact by the user U in the virtual space V (hereinafter, "reaction motion").
  • Specifically, the motion control unit 43A determines whether or not the user U has touched the character C in the virtual space V, and causes the character C to perform a reaction motion in response to the touch by the user U.
  • Various motions, such as a change in posture, a change in expression, or the utterance of a specific line, are typical examples of the reaction motion.
  • The motion control unit 43A determines whether the user U has touched the character C in the virtual space V in accordance with the change in the posture of the HMD 30 specified by the posture analysis unit 41.
  • The motion control unit 43A according to the first embodiment determines that the user U has contacted the character C in the virtual space V when the HMD 30 rotates in the roll direction. That is, the rotation of the HMD 30 in the roll direction corresponds to an instruction (operation instruction) by which the user U contacts the character C in the virtual space V.
  • The motion control unit 43A determines that the user U contacts the character C at a point P (hereinafter, "contact point") at which the reference line Q intersects the surface of the character C.
  • The contact point P is located on the surface of the character C. When the reference line Q intersects the surface of the character C at a plurality of points, the point closest to the virtual camera E (that is, to the user U) among the plurality of points is selected as the contact point P.
  • As described above, the motion of the character C in the virtual space V is controlled according to the rotation of the HMD 30 in the roll direction. Therefore, an instruction from the user U can be reflected in the motion of the character C without requiring an operation device, separate from the HMD 30, for the user U to input instructions for controlling the motion of the character C.
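The selection of the contact point P can be illustrated with a short sketch. Purely as an assumption for illustration, the character's surface is approximated here by a sphere, and the nearest intersection of the reference line (treated as a ray from the virtual camera) with that surface is returned.

```python
import math

def contact_point(origin, direction, center, radius):
    """Nearest intersection of the reference line Q with a spherical surface.

    origin:    position of the virtual camera E (start of the ray).
    direction: unit vector along the reference line Q.
    center, radius: sphere approximating the character C's surface
                    (an illustrative stand-in for the real character mesh).
    Returns the contact point P, or None if Q misses the character.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c            # quadratic discriminant (a = 1 for a unit ray)
    if disc < 0:
        return None                   # the reference line does not intersect C
    t = (-b - math.sqrt(disc)) / 2.0  # smaller root: point closest to the camera
    if t < 0:
        return None                   # intersection lies behind the camera
    return tuple(origin[i] + direction[i] * t for i in range(3))
```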
  • FIG. 6 is a flowchart illustrating the specific contents of processing (hereinafter, "first control processing") executed by the motion control unit 43A.
  • The motion control unit 43A repeatedly executes the first control processing of FIG. 6 at a predetermined cycle.
  • The storage device 12 stores contact determination data indicating whether or not the user U is in contact with the character C in the virtual space V (hereinafter, this state is referred to as the "contact state").
  • The contact determination data is, for example, a flag indicating either the contact state or a non-contact state.
  • The motion control unit 43A determines whether the contact determination data indicates the contact state (Sa1). If the contact determination data indicates the non-contact state (Sa1: NO), the motion control unit 43A determines whether the HMD 30 has rotated in the roll direction (Sa2). Specifically, the motion control unit 43A determines whether the HMD 30 has rotated to the first side (for example, clockwise) in the roll direction by referring to the posture data sequentially supplied from the posture analysis unit 41.
  • FIG. 7 is an explanatory view of the process of determining whether the HMD 30 has rotated in the roll direction.
  • The motion control unit 43A determines that the HMD 30 has rotated in the roll direction when the angle θ (hereinafter, "rotation angle") by which the HMD 30 has rotated to the first side in the roll direction exceeds a threshold θt.
  • When the rotation angle θ does not exceed the threshold θt, the motion control unit 43A does not determine that the HMD 30 has rotated in the roll direction.
  • This reduces the possibility that the HMD 30 is determined to have rotated even though the user U did not intend to rotate it.
  • In the following description, it is likewise determined that the HMD 30 has rotated in the roll direction whenever the rotation angle in the roll direction exceeds the threshold θt.
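A minimal sketch of this threshold test follows. The numerical threshold and the sign convention (positive angles for the first side) are assumptions made for illustration; the patent does not fix a value for θt.

```python
ROLL_THRESHOLD_DEG = 15.0  # hypothetical θt; no concrete value is disclosed

def detect_roll(roll_angle_deg: float):
    """Classify a signed roll angle against the threshold θt.

    Returns "first" when the HMD has rotated past θt toward the first side,
    "second" when past θt toward the second side, and None otherwise
    (below θt the rotation is treated as unintentional).
    """
    if roll_angle_deg > ROLL_THRESHOLD_DEG:
        return "first"
    if roll_angle_deg < -ROLL_THRESHOLD_DEG:
        return "second"
    return None
```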
  • When the HMD 30 has rotated to the first side in the roll direction (Sa2: YES), the motion control unit 43A determines whether the reference line Q intersects the character C in the virtual space V (Sa3). That is, it is determined whether the character C exists on the reference line Q. As in the example of FIG. 8, when the reference line Q intersects the character C in the virtual space V (Sa3: YES), the motion control unit 43A determines that the contact state, in which the user U contacts the contact point P of the character C, has been established (Sa4).
  • Specifically, the motion control unit 43A changes the contact determination data stored in the storage device 12 to a value indicating the contact state.
  • In short, the motion control unit 43A determines that the user U has touched the contact point P of the character C in the virtual space V when the HMD 30 rotates in the roll direction. That is, the user U can instruct contact with the contact point P of the character C by rotating the HMD 30 in the roll direction.
  • When transitioning to the contact state, the motion control unit 43A causes the character C to execute a reaction motion (Sa5). Specifically, the motion control unit 43A causes the character C to execute a reaction motion according to the position of the contact point P at which the reference line Q intersects the character C. That is, the reaction motion performed by the character C changes in accordance with the position of the contact point P. For example, a table that defines the type of reaction motion for each of a plurality of regions into which the surface of the character C is divided is stored in the storage device 12. The motion control unit 43A causes the character C to execute the reaction motion that the table defines for the region, among the plurality of regions on the surface of the character C, containing the contact point P.
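The region-to-reaction table can be sketched as a simple lookup. The region names and reaction labels below are invented for illustration; the patent only specifies that the table maps surface regions of the character to types of reaction motion.

```python
# Hypothetical reaction table: surface region of the character C -> reaction motion.
REACTION_TABLE = {
    "head": "reject_contact",  # e.g. an avoiding or displeased motion
    "body": "accept_contact",  # e.g. a pleased or approaching motion
}

def reaction_for(region: str) -> str:
    """Look up the reaction motion defined for the region containing P."""
    return REACTION_TABLE.get(region, "no_reaction")
```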
  • For example, when the contact point P is located at a part of the character C that welcomes contact, the motion control unit 43A causes the character C to execute a positive reaction motion toward the user U (for example, a favorable motion such as showing joy, laughing, or approaching).
  • Conversely, when the contact point P is located at a part that dislikes contact, the motion control unit 43A causes the character C to execute a negative reaction motion toward the user U (for example, becoming angry, looking sad, or moving away).
  • According to the above configuration, the reaction motion of the character C can be varied in accordance with the position of the contact point P.
  • On the other hand, when the reference line Q does not intersect the character C (Sa3: NO), the motion control unit 43A maintains the contact determination data stored in the storage device 12 at the value indicating the non-contact state. That is, even when the HMD 30 rotates to the first side in the roll direction, it is determined that the user U is not in contact with the character C if, as illustrated in the figure, the reference line Q does not cross the character C.
  • When the contact determination data indicates the contact state (Sa1: YES), the motion control unit 43A refers to the posture data sequentially supplied from the posture analysis unit 41 and determines whether the HMD 30 has rotated to the second side in the roll direction (for example, counterclockwise) (Sa6). Specifically, the motion control unit 43A determines that the HMD 30 has rotated to the second side in the roll direction when the rotation angle θ toward the second side in the roll direction exceeds the predetermined threshold θt.
  • When the HMD 30 has rotated to the second side in the roll direction (Sa6: YES), the motion control unit 43A determines that the contact with the character C has been released (Sa7). Specifically, the motion control unit 43A changes the contact determination data stored in the storage device 12 to a value indicating the non-contact state. On the other hand, when the HMD 30 has not rotated to the second side in the roll direction (Sa6: NO), the motion control unit 43A maintains the contact determination data at the value indicating the contact state.
  • The motion control unit 43A may cause the character C to execute a motion that reacts to the release of the contact when the contact with the character C is released; a configuration in which no such motion is executed is also possible.
  • As described above, when the HMD 30 rotates to the first side in the roll direction, the motion control unit 43A determines that the user U has touched the character C (Sa4), and when the HMD 30 rotates to the second side in the roll direction (Sa6: YES), it determines that the contact with the character C has been released (Sa7).
  • Thus, the user U can instruct both the making and the releasing of contact with the character C in the virtual space V through the direction of rotation of the HMD 30 in the roll direction. Since mutually opposite rotations in the roll direction correspond to the mutually opposite operations of making and releasing contact with the character C, there is also the advantage that the user U can intuitively grasp how to make and release contact with the character C.
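Putting steps Sa1 through Sa7 together, the first control processing can be sketched as a small state machine over the contact flag. Everything beyond the step comments (the class shape, the inputs, the reaction hook) is an illustrative assumption; roll_state is the kind of value produced by the detect_roll sketch above.

```python
class FirstControlProcess:
    """Sketch of the Sa1-Sa7 loop: roll rotation makes and releases contact."""

    def __init__(self):
        self.in_contact = False  # the contact determination data (flag)

    def step(self, roll_state, line_hits_character: bool, contact_point):
        """One iteration of the predetermined cycle."""
        # Sa1: branch on the stored contact determination data.
        if not self.in_contact:
            # Sa2: has the HMD rotated past θt toward the first side?
            if roll_state == "first":
                # Sa3: does the reference line Q intersect the character C?
                if line_hits_character:
                    self.in_contact = True                # Sa4: enter contact state
                    self.trigger_reaction(contact_point)  # Sa5: reaction motion
        else:
            # Sa6: has the HMD rotated past θt toward the second side?
            if roll_state == "second":
                self.in_contact = False                   # Sa7: release contact

    def trigger_reaction(self, contact_point):
        # Placeholder: select a reaction from a region table (see earlier sketch).
        pass
```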
  • Modification A1: The motion control unit 43A in modification A1 causes the character C to perform a reaction motion according to the length of time for which the contact with the character C is maintained in the virtual space V. Specifically, the motion control unit 43A temporally changes the reaction motion within the period in which the contact state continues (hereinafter, "contact period").
  • The contact period is the period from when the HMD 30 rotates to the first side in the roll direction to when it rotates to the second side. Alternatively, the period from when the contact determination data is set to the value indicating the contact state to when it is changed to the value indicating the non-contact state may be treated as the contact period.
  • For example, the motion control unit 43A temporally changes the reaction motion of the character C from the start point to the end point of the contact period. The reaction motion of the character C therefore changes according to the time length of the contact period. According to the above configuration, the reaction motion of the character C can be varied according to the length of time for which the user U touches the character C.
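One way to realize modification A1 is to derive the reaction from the elapsed contact time. The stages and time boundaries below are invented purely for illustration.

```python
def reaction_by_duration(contact_seconds: float) -> str:
    """Map the length of the contact period to a reaction stage."""
    if contact_seconds < 1.0:
        return "notice"  # brief touch: the character merely notices
    if contact_seconds < 3.0:
        return "smile"   # sustained touch: a favorable reaction
    return "nuzzle"      # long touch: a stronger favorable reaction
```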
  • Modification A2: In modification A2, parameters related to the character C are stored in the storage device 12. Specifically, the favorability of the character C toward the user U is stored in the storage device 12 as a parameter. The favorability is a parameter indicating the degree of favorable emotion that the character C holds toward the user U.
  • The motion control unit 43A changes the favorability of the character C toward the user U in response to contact with the character C, in addition to changing it with the progress of the game. Specifically, as the degree of contact by the user U with the character C (for example, the number of contacts or the contact time) increases, the favorability of the character C toward the user U is set to a larger value.
  • Note that the favorability stored in the storage device 12 may instead be the favorability of the character C toward a character (player character) used by the user U.
  • The user U may possess a plurality of player characters in the virtual space V.
  • The motion control unit 43A changes the reaction motion in accordance with the stored favorability of the character C. That is, when the user U contacts the character C (Sa2: YES, Sa3: YES), the type of reaction motion performed by the character C (Sa5) changes in accordance with the favorability of the character C. For example, in a state where the number or duration of contacts with the character C is small and the favorability is therefore low, the character C executes a reaction motion rejecting the contact when the user U touches its head.
  • In a state where the favorability is high, on the other hand, the character C executes a reaction motion accepting the contact.
  • According to the above configuration, the reaction motion of the character C can be varied in accordance with a parameter related to the character C.
  • The parameters that affect the reaction motion of the character C are not limited to the favorability exemplified above.
  • For example, various parameters, such as the degree of growth (level) of the character C or the intimacy between the user U and the character C, may change according to the contact with the character C by the user U, and the reaction of the character C may be controlled according to such parameters.
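A sketch of modification A2's favorability parameter follows; the increment, the threshold, and the method names are assumptions for illustration.

```python
class CharacterState:
    """Per-character parameters stored in the storage device (modification A2)."""

    def __init__(self):
        self.favorability = 0  # degree of favorable emotion toward the user U

    def register_contact(self):
        """More contact (by count or by time) raises favorability."""
        self.favorability += 1

    def head_touch_reaction(self) -> str:
        """Low favorability: reject a touch on the head; high: accept it."""
        return "accept_contact" if self.favorability >= 5 else "reject_contact"
```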
  • Modification A3: The motion control unit 43A in modification A3 causes the character C to execute a reaction motion according to the rotation angle θ by which the HMD 30 has rotated in the roll direction.
  • The rotation angle θ of the HMD 30 corresponds to a virtual pressure with which the user U contacts the character C in the virtual space V.
  • For example, when the rotation angle θ is small, the motion control unit 43A causes the character C to execute a reaction motion that accepts the contact by the user U; when the rotation angle θ is large, it causes the character C to execute a reaction motion that rejects the contact by the user U.
  • According to the above configuration, the reaction motion of the character C can be varied in accordance with the rotation angle θ of the HMD 30 (that is, the virtual pressure with which the user U contacts the character C).
  • Modification A4: The display control unit 42A in modification A4 moves the virtual camera E (that is, the virtual viewpoint in the virtual space V) within the virtual space V according to the rotation of the HMD 30 in the roll direction.
  • That is, the display control unit 42A changes the positional relationship between the virtual camera E and the character C in the virtual space V in conjunction with the rotation of the HMD 30.
  • Specifically, the distance between the virtual camera E and the character C is controlled according to the rotation angle θ of the HMD 30; the virtual camera E moves toward or away from the object in conjunction with the rotation of the HMD 30 in the roll direction.
  • For example, when the HMD 30 rotates to the first side in the roll direction, the display control unit 42A moves the virtual camera E toward the character C in the virtual space V by a movement amount according to the rotation angle θ (θ > 0), as indicated by arrow a1 in the corresponding drawing.
  • Conversely, when the HMD 30 rotates to the second side, the display control unit 42A moves the virtual camera E away from the character C in the virtual space V by a movement amount according to the rotation angle θ (θ < 0), as indicated by arrow a2.
  • According to the above configuration, the user U can move the virtual camera E in the virtual space V by the simple operation of rotating the HMD 30 in the roll direction.
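Modification A4 can be sketched as a dolly of the camera along its optical axis, with travel proportional to the signed roll angle. The gain constant is an assumption; the patent only states that the movement amount depends on θ.

```python
DOLLY_GAIN = 0.5  # illustrative travel (scene units) per radian of roll

def dolly_camera(camera_pos, axis, roll_angle: float):
    """Move the virtual camera E along its optical axis.

    Positive roll (first side) approaches the character; negative roll
    (second side) retreats, as in modification A4.
    """
    move = DOLLY_GAIN * roll_angle
    return tuple(camera_pos[i] + axis[i] * move for i in range(3))
```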
  • Modification A5: The display control unit 42A in modification A5 changes the imaging range R of the virtual camera E (that is, its angle of view) according to the rotation of the HMD 30 in the roll direction. That is, the range of the virtual space V displayed on the display device 13 changes according to the rotation of the HMD 30 in the roll direction.
  • For example, when the HMD 30 rotates to the first side in the roll direction, the display control unit 42A reduces the imaging range R at a ratio according to the rotation angle θ of the HMD 30, as indicated by arrow b1 in the corresponding drawing. As a result, the character C in the virtual space V is zoomed in on (enlarged).
  • Conversely, when the HMD 30 rotates to the second side, the display control unit 42A enlarges the imaging range R at a ratio according to the rotation angle θ of the HMD 30, as indicated by arrow b2. As a result, the character C in the virtual space V is zoomed out from (reduced).
  • According to the above configuration, the user U can change the imaging range R of the virtual camera E in the virtual space V by the simple operation of rotating the HMD 30 in the roll direction.
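Similarly, modification A5's zoom can be sketched by scaling the field of view with the signed roll angle; the base angle of view, the gain, and the clamping range are all assumptions.

```python
BASE_FOV_DEG = 60.0   # illustrative default angle of view
ZOOM_GAIN_DEG = 30.0  # assumed FOV change (degrees) per radian of roll

def field_of_view(roll_angle: float) -> float:
    """Shrink the imaging range R (zoom in) for positive roll and enlarge
    it (zoom out) for negative roll, clamped to a sensible range."""
    fov = BASE_FOV_DEG - ZOOM_GAIN_DEG * roll_angle
    return max(20.0, min(100.0, fov))
```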
  • In a modified form of step Sa2, the motion control unit 43A may determine that the user U has touched the character C when the rotation angle θ changes, regardless of the direction (first side / second side) of the rotation of the HMD 30. As understood from the above description, by monitoring the rotation angle θ of the HMD 30, the making and the releasing of contact with the character C can be determined without determining the direction of rotation (first side / second side) of the HMD 30. That is, in determining the touch on the character C, the motion control unit 43A does not necessarily have to determine the direction of rotation of the HMD 30.
  • In each of the above examples, the character C is made to perform a reaction motion to the touch by the user U on condition that the reference line Q intersects the character C.
  • However, the condition regarding the positional relationship between the reference line Q and the character C is not limited to this example.
  • For example, the reaction motion may be performed by the character C on condition that the distance between the reference line Q and the character C falls below a predetermined threshold.
  • The state in which the distance between the reference line Q and the character C is below the threshold includes, in addition to the case where the reference line Q intersects the character C, the case where the reference line Q is separated from the character C within the range below the threshold.
  • Alternatively, the reaction motion may be performed by the character C on condition that the reference line Q intersects a specific part of the character C; when the reference line Q intersects other parts, the motion control unit 43A does not cause the character C to execute the reaction motion.
  • As understood from the above examples, the motion control unit 43A is comprehensively expressed as an element that controls the motion of the character C in accordance with the positional relationship between the reference line Q and the character C (for example, the relationship in which the reference line Q intersects the character C).
  • Second Embodiment: The image display system in the second embodiment has the same configuration as that of the first embodiment illustrated in FIGS. 1 and 2. That is, the HMD 30 of the second embodiment includes the display device 13 and the detection device 14 of the terminal device 10 together with the mounting tool 20, and the information processing device 40 includes the control device 11 and the storage device 12 of the terminal device 10.
  • FIG. 11 is a block diagram illustrating a functional configuration of the terminal device 10 according to the second embodiment.
  • In the second embodiment, the control device 11 functions as the posture analysis unit 41, a display control unit 42B, and a motion control unit 43B by executing the program stored in the storage device 12. Note that part or all of the functions of the control device 11 may be realized by dedicated electronic circuitry.
  • The posture analysis unit 41 sequentially generates posture data on the posture of the HMD 30 by analyzing the detection signal output from the detection device 14, as in the first embodiment.
  • The display control unit 42B causes the display device 13 to display an image of the imaging range R captured by the virtual camera E in the virtual space V, together with an instruction image G representing the direction of the reference line Q.
  • The reference line Q is a virtual straight line whose direction in the virtual space V changes according to the direction of the HMD 30.
  • The display control unit 42B of the second embodiment changes the display mode of the instruction image G according to the positional relationship between the reference line Q and the character C (an example of an object) in the virtual space V. Specifically, the display control unit 42B makes the display mode of the instruction image G different between when the reference line Q intersects the character C and when it does not.
  • The display mode means a property of the image that the user U can distinguish visually. For example, in addition to the three attributes of color (hue, saturation, and lightness), size and image content (for example, pattern or shape) are included in the concept of the display mode.
  • When the reference line Q does not intersect the character C, the display control unit 42B causes the display device 13 to display an image G1 as the instruction image G, as illustrated in the corresponding drawing.
  • The image G1 is a circular or dot-like object arranged in the virtual space V on the line of the reference line Q.
  • When the reference line Q intersects the character C, on the other hand, the display control unit 42B causes the display device 13 to display an image G2, different from the image G1, as the instruction image G.
  • The image G2 is a planar object that schematically represents a hand of the user U.
  • The display control unit 42B arranges the image G2 at the point on the surface of the character C at which the reference line Q intersects it (that is, at the contact point P) in the virtual space V. That is, the image G2 touches the contact point P on the surface of the character C in the virtual space V.
  • The contact point P can therefore also be described as the point on the surface of the character C at which the image G2 touches it.
  • The change of the display mode of the instruction image G includes switching between display and non-display. That is, the display control unit 42B may display the image G2 as the instruction image G when the reference line Q intersects the character C, and may hide the instruction image G when the reference line Q does not intersect the character C.
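The switching of the instruction image can be sketched as below. The image identifiers G1 and G2 and the display/non-display option come from the text; the function shape is an assumption.

```python
def instruction_image(line_hits_character: bool, hide_when_missed: bool = False):
    """Choose the instruction image G for the current frame.

    G2 (the hand-shaped image placed at the contact point P) is shown while
    the reference line Q intersects the character C; otherwise G1 (a dot on
    the reference line) is shown, or nothing if non-display is chosen.
    """
    if line_hits_character:
        return "G2"
    return None if hide_when_missed else "G1"
```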
  • As described above, in the second embodiment, the display mode of the instruction image G representing the direction of the reference line Q in the virtual space V changes according to the positional relationship between the reference line Q and the character C. Therefore, the user U can easily grasp the positional relationship between the reference line Q and the character C in the virtual space V.
  • In particular, since the instruction image G (the image G2) is displayed in contact with the surface of the character C, there is the advantage that the user U can easily perceive the state of touching the surface of the character C in the virtual space V.
  • Since the instruction image G is arranged on the surface of the character C, there is also the advantage that the user U can accurately and visually grasp the contact point P.
  • The motion control unit 43B of FIG. 11 controls the motion of the character C in the virtual space V, similarly to the motion control unit 43A of the first embodiment.
  • Specifically, the motion control unit 43B causes the character C to execute a motion that reacts to the contact by the user U in the virtual space V (that is, a reaction motion).
  • A typical example of the reaction motion is a change in posture, a change in expression, or the utterance of a specific line.
  • The motion control unit 43B causes the character C to execute a motion according to the position of the contact point P at which the reference line Q intersects the character C.
  • For example, when the contact point P is located at one part of the character C (for example, its body), the motion control unit 43B causes the character C to execute a motion that accepts the contact by the user U; when the contact point P is located at another part (for example, its head), the motion control unit 43B causes the character C to execute a motion that rejects the contact by the user U. According to the above configuration, the motion of the character C can be varied in accordance with the position of the contact point P.
  • FIG. 15 is a flowchart illustrating the specific contents of processing (hereinafter, "second control processing") executed by the display control unit 42B and the motion control unit 43B of the second embodiment.
  • The second control processing of FIG. 15 is repeatedly executed at a predetermined cycle.
  • The display control unit 42B determines whether the reference line Q intersects the character C in the virtual space V (Sb1). When the reference line Q intersects the character C (Sb1: YES), the display control unit 42B causes the display device 13 to display, as the instruction image G, the image G2 arranged at the contact point P where the reference line Q and the character C intersect (Sb2). On the other hand, when the reference line Q does not intersect the character C (Sb1: NO), the display control unit 42B causes the display device 13 to display, as the instruction image G, the image G1 arranged on the line of the reference line Q (Sb3).
  • The motion control unit 43B determines whether an instruction for an operation on the character C (hereinafter, "operation instruction") has been received from the user U (Sb4).
  • The operation instruction is an instruction for producing an action on the character C in the virtual space V.
  • Specifically, the operation instruction in the second embodiment means an instruction by which the user U touches the character C in the virtual space V.
  • The motion control unit 43B of the second embodiment accepts a specific change in the attitude of the HMD 30 as the operation instruction from the user U. Specifically, when the change in the attitude of the HMD 30 satisfies a predetermined condition (hereinafter, "instruction determination condition"), the motion control unit 43B determines that the operation instruction has been received. For example, as in the first embodiment, when the HMD 30 rotates in the roll direction, the motion control unit 43B determines that an operation instruction has been given by the user U; that is, rotation of the HMD 30 in the roll direction is the instruction determination condition. On the other hand, if the change in the attitude of the HMD 30 does not satisfy the instruction determination condition, the motion control unit 43B determines that the operation instruction has not been received. Note that, as described above with reference to FIG. 7, it is preferable to determine that the HMD 30 has rotated when the rotation angle θ of the HMD 30 exceeds the threshold θt.
  • When the operation instruction is received (Sb4: YES), the motion control unit 43B determines that the user U has touched the character C in the virtual space V, and causes the character C to execute a motion that reacts to the contact by the user U (Sb5).
  • Specifically, the motion control unit 43B causes the character C to execute a motion according to the position of the contact point P at which the reference line Q intersects the character C.
  • That is, by moving the reference line Q so as to intersect a desired position on the character C and then giving the operation instruction, the user U can touch the desired position on the character C in the virtual space V.
  • As described above, since the display mode of the instruction image G changes according to the positional relationship between the reference line Q and the character C (specifically, according to whether the reference line Q intersects the character C), there is the advantage, compared with a configuration in which the instruction image G is displayed in a fixed manner, that the user U can more easily feel the sensation of having touched the character C in the virtual space V.
  • Modification B1: In the second embodiment, a reaction motion corresponding to the position of the contact point P is performed by the character C; in modification B1, the character C may execute a reaction motion corresponding to the amount of change in the attitude of the HMD 30.
  • The amount of change in the attitude of the HMD 30 can be regarded as a virtual pressure with which the user U contacts the character C in the virtual space V.
  • The motion control unit 43B of modification B1 causes the character C to execute a reaction motion that accepts the contact by the user U when the amount of change in the attitude of the HMD 30 is small, and causes the character C to execute a reaction motion that rejects the contact by the user U when the amount of change in the attitude is large. According to the above aspect, the character C can be made to execute various reaction motions in accordance with the attitude of the HMD 30.
  • Modification B2: The motion control unit 43B of modification B2 accepts the operation instruction from the user U only within a specific period on the time axis (hereinafter, "instruction acceptance period"). That is, an operation instruction given by the user U during the instruction acceptance period is effectively reflected in the motion of the character C, whereas an operation instruction given by the user U outside the instruction acceptance period is ignored.
  • The instruction acceptance period is, for example, a period of predetermined length (for example, several seconds) starting from the time when the reference line Q begins to intersect the character C.
  • However, the start point and the end point of the instruction acceptance period are not limited to this example.
  • For instance, the time at which a specific event starts in the game may be set as the start point of the instruction acceptance period.
  • The display control unit 42B temporally changes the display mode of the instruction image G (the image G2) during the instruction acceptance period. For example, as illustrated in FIG. 16, assume that the image G2 displayed as the instruction image G when the reference line Q intersects the character C includes an image G21 and an image G22.
  • The image G21 is an image representing a hand of the user U, like the image G2 described above, and the image G22 is a circular image arranged behind the image G21.
  • The display control unit 42B reduces the size of the image G22 over time from the start point to the end point of the instruction acceptance period, while the size of the image G21 does not change.
  • According to the above configuration, the user U can visually grasp, from the display mode of the instruction image G, the instruction acceptance period (for example, the remaining time) during which acceptance of the operation instruction is permitted.
  • Note that a display attribute of the image G22 other than its size (for example, hue or lightness) may be changed over time instead.
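Modification B2's shrinking indicator can be sketched as a linear interpolation of the G22 radius over the acceptance period; the radii and the period length are illustrative assumptions.

```python
ACCEPT_PERIOD_S = 3.0       # hypothetical instruction acceptance period (seconds)
R_START, R_END = 40.0, 0.0  # illustrative radii of the image G22 (pixels)

def g22_radius(elapsed_s: float):
    """Radius of the circular image G22, shrinking as the period runs out.

    Returns None once the acceptance period has expired (operation
    instructions are then ignored); the hand image G21 keeps its size.
    """
    if elapsed_s >= ACCEPT_PERIOD_S:
        return None
    frac = elapsed_s / ACCEPT_PERIOD_S
    return R_START + (R_END - R_START) * frac
```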
  • Modification B3: The display control unit 42B of modification B3 changes the display mode of the instruction image G according to the position, on the character C, of the contact point P at which the reference line Q intersects the character C in the virtual space V.
  • Specifically, as illustrated in FIG. 17, the display mode of the instruction image G differs between when the contact point P is at a first position on the surface of the character C and when it is at a second position on that surface.
  • The first position is, for example, a position at which the character C accepts a touch by the user U (for example, the body of the character C), and the second position is, for example, a position at which the character C rejects a touch by the user U (for example, the head of the character C).
  • Although FIG. 17 illustrates the cases where the contact point P is at the first position or at the second position, the display control unit 42B may also change the display mode of the instruction image G over time in parallel with changes in the position of the contact point P on the character C. That is, when the user U moves the contact point P on the surface of the character C according to the attitude of the HMD 30, the display control unit 42B temporally changes the display mode of the instruction image G in conjunction with the movement of the contact point P. For example, during the period in which the contact point P moves from the first position to the second position of FIG. 17, the display control unit 42B changes the display mode of the instruction image G continuously or stepwise from the display mode at the first position to the display mode at the second position.
  • According to the above configuration, the display mode of the instruction image G changes in accordance with the position of the contact point P. Therefore, by observing the display mode of the instruction image G, the user U can anticipate the motion that the character C will perform when touched. That is, the user U can gradually move the contact point P while inferring, from the display mode of the instruction image G, the motion the character C would make if touched, and can keep the contact point P at the position at which the desired motion is expected; this provides the user U with the enjoyment of giving operation instructions.
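The continuous transition of the display mode in modification B3 can be sketched as a color interpolation between the two positions; the colors and the blend parameter are assumptions made for illustration.

```python
ACCEPT_COLOR = (0.2, 0.8, 0.2)  # assumed tint where a touch would be accepted
REJECT_COLOR = (0.9, 0.2, 0.2)  # assumed tint where a touch would be rejected

def indicator_color(blend: float):
    """Display mode of the instruction image G between the two positions.

    blend: 0.0 at the first position (accepting part, e.g. the body),
           1.0 at the second position (rejecting part, e.g. the head);
    intermediate values give a continuous, stepless transition.
    """
    return tuple(a + (b - a) * blend for a, b in zip(ACCEPT_COLOR, REJECT_COLOR))
```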
  • Modification B4: The display control unit 42B of modification B4 controls the character C such that the line of sight of the character C follows the reference line Q.
  • Specifically, the character C is controlled such that its line of sight looks at the contact point P at which the reference line Q intersects the character C; for example, the eyes (pupils) or the head of the character C rotate so as to track the reference line Q.
  • According to the above configuration, the user U can readily get the sense that he or she is affecting the character C.
  • Modification B5: In the second embodiment, the display mode of the instruction image G is changed according to whether or not the reference line Q intersects the character C; however, the condition for changing the display mode of the instruction image G is not limited to this.
  • For example, the display control unit 42B may change the display mode of the instruction image G continuously or stepwise according to the distance between the reference line Q and the character C. That is, intersection between the reference line Q and the character C is not an essential condition for changing the display mode of the instruction image G. The display mode of the instruction image G may also be changed depending on whether the reference line Q intersects a specific part of the character C.
  • In that case, when the reference line Q intersects a part other than the specific part, the instruction image G is displayed in the same display mode as when the reference line Q does not intersect the character C.
  • As understood from the above examples, the display control unit 42B of the second embodiment is comprehensively expressed as an element that changes the display mode of the instruction image G according to the positional relationship between the reference line Q and the character C.
  • Modification B6: The configuration of the second embodiment, in which the display mode of the instruction image G is changed according to the positional relationship between the reference line Q and the character C, can be applied in the same way when the user U holds the terminal device 10 (for example, a smartphone) in the hand.
  • In this case, the imaging range R of the virtual space V displayed on the display device 13 changes in accordance with the attitude of the terminal device 10.
  • The terminal device 10 of modification B6 includes a touch panel that detects contact by the user U with the display surface of the display device 13.
  • Whereas in the second embodiment the position of the contact point P in the virtual space V is changed according to the attitude of the HMD 30, in modification B6 the instruction image G is displayed with the point at which the user U touches the display surface of the display device 13 serving as the contact point P.
  • Specifically, a reference line Q passing through the contact point P touched by the user U is set in the virtual space V. That is, a virtual straight line in the direction designated by the user U through contact with the display surface of the display device 13 (that is, a touch operation) is set in the virtual space V as the reference line Q.
  • The display control unit 42B causes the display device 13 to display the image G2 as the instruction image G when the reference line Q intersects the character C, and causes the display device 13 to display the image G1 as the instruction image G when the reference line Q does not intersect the character C. Even with this configuration, there is the advantage that the user U can easily grasp the positional relationship between the reference line Q and the character C in the virtual space V (for example, the state in which the user U is in contact with the character C in the virtual space V).
  • As understood from the description of modification B6, the configuration for mounting the display device 13 on the head of the user U may be omitted.
  • The display control unit 42B in a configuration that does not assume mounting on the head of the user U is expressed as an element that causes the display device 13 to display an image of the virtual space V captured by the virtual camera E whose attitude is controlled according to the attitude of the display device 13.
  • In such a configuration, a virtual straight line in the direction designated by an operation (for example, a touch operation) on the display device 13 corresponds to the "reference line".
  • The reference line Q is not limited to the optical axis of the virtual camera E exemplified in each of the above embodiments.
  • For example, a straight line perpendicular to the display surface of the display device 13 may be used as the reference line Q.
  • A straight line that forms a predetermined angle with the optical axis of the virtual camera E, or with a straight line perpendicular to the display surface, may also be used as the reference line Q.
  • In a configuration provided with a function of estimating the line of sight of the user U, the line of sight estimated by that function may be used as the reference line Q.
  • As understood from the above examples, the reference line Q is comprehensively expressed as a virtual straight line set in the virtual space V.
  • In each of the above embodiments, the character C is exemplified as the object arranged in the virtual space V; however, the object that the user U contacts in the virtual space V is not limited to the character C.
  • Inanimate elements, such as structures or natural objects in the virtual space V, are also included in the concept of the object.
  • The "motion" of an object is, for example, the behavior, action, appearance, or attitude of the object.
  • The motion of an inanimate object is, for example, a change in the form of that element; an operation of opening the door of a building, or an operation of deforming or moving a natural object such as a rock, are examples of motions of objects.
  • The operation instruction is not limited to the examples given above.
  • For example, the reaction motion may be performed by the character C with rotation of the HMD 30 in the pitch direction or the yaw direction treated as a touch by the user U.
  • Alternatively, the character C may perform a reaction motion on condition that the state in which the reference line Q intersects a specific range of the character C continues for a predetermined time (that is, when the user U gazes at that range for a long time).
  • The reaction motion may also be performed by the character C on condition that the user U operates an operation device (not shown) connected to the HMD 30.
  • That is, the configuration in which a change in the attitude of the HMD 30 (in particular, rotation in the roll direction) triggers the reaction motion of the character C may be omitted.
  • In each of the above embodiments, rotation of the HMD 30 in the roll direction is accepted as the operation instruction (an instruction of contact with the character C); however, the change in the attitude of the HMD 30 determined to be the operation instruction is not limited to rotation in the roll direction.
  • For example, rotation of the HMD 30 in the pitch direction or the yaw direction may be accepted as the operation instruction.
  • Modification C5: The modifications exemplified for the first embodiment (modifications A1 to A7) are similarly applicable to the second embodiment, and the modifications illustrated for the second embodiment (modifications B1 to B6) are similarly applicable to the first embodiment.
  • the HMD 30 includes the display device 13, the detection device 14, and the mounting tool 20, and is mounted on the head of the user U.
  • the information processing apparatus 40 is a control device that causes the HMD 30 to display various images by communicating with the HMD 30, and includes the control device 11 and the storage device 12.
  • various information terminals such as a mobile phone, a smartphone, a tablet terminal, a personal computer, or a home game device are used as the information processing device 40.
  • The control device 11 executes the program stored in the storage device 12, thereby functioning as the posture analysis unit 41, the display control unit 42 (42A or 42B), and the operation control unit 43 (43A or 43B), as in the first embodiment or the second embodiment. Therefore, the same effect as that of the first embodiment or the second embodiment is realized also in the configuration of FIG.
  • the preferred embodiments of the present invention are realized by the cooperation of a computer (specifically, the control device 11) and a program, as illustrated in the above-described embodiments.
  • the program according to each of the above embodiments is provided in a form stored in a computer readable recording medium and installed in the computer.
  • The recording medium is, for example, a non-transitory recording medium, and is preferably an optical recording medium (optical disc) such as a CD-ROM, but may be any known recording medium such as a semiconductor recording medium or a magnetic recording medium.
  • Note that non-transitory recording media include any recording media except transient propagation signals, and do not exclude volatile recording media. It is also possible to provide the program to a computer in the form of distribution via a communication network.
  • A program according to a preferred aspect (aspect A1) of the present invention causes a computer (11) to function as: a display control unit (42A) that causes a display device (13) of a head mounted display (30) to display an image obtained by imaging a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30), the image being a stereoscopic image using binocular parallax; and an operation control unit (43A) that controls the operation of an object (C) in the virtual space (V) according to rotation of the head mounted display (30) in the roll direction and the positional relationship between a reference line (Q), whose direction in the virtual space (V) changes according to the direction of the head mounted display (30), and the object (C).
  • the “head mounted display” is an image display device attachable to the head of the user (U).
  • the head mounted display (30) in the preferred embodiment comprises a display (13) for displaying an image, and a mounting tool (20) for mounting the display (13) on the head of the user (U).
  • A dedicated display device (13) (in other words, a dedicated item for a head mounted display) may be fixedly attached to the mounting tool (20) for mounting the display device (13) on the head, or a general-purpose display device (13) that can also be used apart from the mounting tool (20) may be attachable to and detachable from the mounting tool (20).
  • That is, the display device (13) of the head mounted display (30) includes not only a device used exclusively while mounted on the head but also a device that can be used both in a state mounted on the head and in a state not mounted on the head.
  • a portable terminal device (10) such as a smartphone or a tablet terminal is used as the display device (13).
  • The “roll direction” means a circumferential direction around an axis in the front-rear direction of the user (U) (for example, the direction perpendicular to the display surface on which the head mounted display (30) displays an image).
  • It may be determined that the head mounted display (30) has rotated in the roll direction not only when rotation in the roll direction alone occurs on the head mounted display (30) but also when rotation in the pitch direction or the yaw direction occurs together with the rotation in the roll direction.
  • An “object” is a virtual object installed in a virtual space (V).
  • a typical example of the object (C) is a character (for example, a human, an animal or a monster), but an inanimate element such as a structure or a natural thing in the virtual space (V) may be included in the concept of the object.
  • the "action of an object” is, for example, the behavior, action, appearance or attitude of the character.
  • The “motion of an object” for an inanimate element is, for example, a change in the form of the element (for example, a door or a window of a building opens, or a natural object such as a rock deforms or moves).
  • the “reference line” is a virtual straight line set in the virtual space (V).
  • the optical axis of the virtual camera (E), the center line of the display screen by the display device (13), or a straight line forming a predetermined angle with these straight lines is a preferred example of the reference line (Q).
  • the line of sight of the user (U) may be used as the reference line (Q) in addition to the straight lines exemplified above.
  • The positional relationship is a relationship in which the reference line (Q) and the object (C) intersect. According to the above aspect, an instruction corresponding to the intersection of the reference line (Q) and the object (C) can be reflected in the operation of the object (C).
  • the operation control unit (43A) causes the object (C) to perform an action according to the length of time the contact with the object (C) is maintained.
  • the operation of the object (C) can be variously changed according to the length of time in which the head mounted display (30) is kept rotating in the roll direction.
  • the operation control unit (43A) changes a parameter related to the object (C) according to a touch on the object (C).
  • the parameter related to the object (C) in the virtual space (V) can be changed by a simple operation of rotating the head mounted display (30) in the roll direction.
  • The configuration of “changing the parameter according to the touch on the object (C)” includes, for example, a configuration in which the parameter depends on the presence or absence of a touch on the object (C) and a configuration in which the parameter depends on the degree of the touch on the object (C).
  • The operation control unit (43A) causes the object (C) to execute an operation according to the position at which the reference line (Q) intersects the object (C).
  • According to the above aspect, the operation of the object (C) can be variously changed according to the position of the point (P) at which the reference line (Q) intersects the object (C) in the virtual space (V).
  • The operation control unit (43A) determines that the user (U) has touched the object (C) when the head mounted display (30) rotates to one side in the roll direction, and determines that the contact with the object (C) is released when, while it is determined that the user (U) has touched the object (C), the head mounted display (30) rotates to the other side in the roll direction or returns from the rotation to the one side.
  • the user (U) can instruct the generation and release of the contact with the object (C) in the virtual space (V) according to the direction of rotation in the roll direction.
  • The operation control unit (43A) causes the object (C) to execute an operation according to the rotation angle (θ) of the head mounted display (30) in the roll direction. According to the above aspect, it is possible to cause the object (C) to execute various operations according to the rotation angle (θ) of the head mounted display (30).
  • The display control unit (42A) moves the virtual camera (E) in the virtual space (V) according to the rotation of the head mounted display (30) in the roll direction.
  • According to the above aspect, the virtual camera (E) (a virtual viewpoint) can be moved toward or away from the object (C) in the virtual space (V) by a simple operation of rotating the head mounted display (30) in the roll direction.
  • The direction of movement of the virtual camera (E) is arbitrary; for example, a configuration in which the virtual camera (E) is moved in the direction of the object (C) is preferable.
  • For example, a configuration in which the virtual camera (E) approaches the object (C) when the head mounted display (30) rotates to one side in the roll direction, and moves away from the object (C) when the head mounted display (30) rotates to the other side in the roll direction, is preferable.
  • The display control unit (42A) changes the imaging range (R) of the virtual camera (E) according to the rotation of the head mounted display (30) in the roll direction.
  • According to the above aspect, the range captured by the virtual camera (E) in the virtual space (V) (that is, the angle of view) can be changed by a simple operation of rotating the head mounted display (30) in the roll direction.
  • The change in the imaging range (R) of the virtual camera (E) means enlargement (zooming in) or reduction (zooming out) of an element (for example, the object (C)) imaged in the virtual space (V).
  • The changed configurations include a configuration in which the imaging range (R) of the virtual camera (E) depends on the presence or absence of rotation of the head mounted display (30) in the roll direction, and a configuration in which the imaging range (R) depends on the angle of rotation in the roll direction.
  • The operation control unit (43A) determines that the head mounted display (30) has been rotated in the roll direction when the rotation angle (θ) of the head mounted display (30) in the roll direction exceeds a threshold (θt).
  • An image display system (1A, 1B) is an image display system (1A, 1B) including a head mounted display (30) and an information processing apparatus (40), wherein the information processing apparatus (40) includes: a display control unit (42A) that causes a display device (13) of the head mounted display (30) to display an image obtained by imaging a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30), the image being a stereoscopic image using binocular parallax; and an operation control unit (43A) that controls the operation of an object (C) in the virtual space (V) according to rotation of the head mounted display (30) in the roll direction and the positional relationship between a reference line (Q), whose direction in the virtual space (V) changes according to the direction of the head mounted display (30), and the object (C).
  • A program according to a preferred aspect (aspect B1) of the present invention causes a computer (11) to function as a display control unit (42B) that causes a display device (13) of a head mounted display (30) to display an image obtained by imaging a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30), the image being a stereoscopic image using binocular parallax, wherein the display control unit (42B) displays an instruction image (G) representing the direction of a reference line (Q) whose direction in the virtual space (V) changes according to the direction of the head mounted display (30), and changes the display mode of the instruction image (G) according to the positional relationship between the reference line (Q) and an object (C) in the virtual space (V).
  • the display mode of the instruction image (G) representing the direction of the reference line (Q) in the virtual space (V) changes according to the positional relationship between the reference line (Q) and the object (C) Therefore, the user (U) can easily grasp the positional relationship between the reference line (Q) and the object (C) in the virtual space (V).
  • the “reference line” is a virtual straight line set in the virtual space (V).
  • the optical axis of the virtual camera (E), the center line of the display screen by the display device (13), or a straight line forming a predetermined angle with these straight lines is a preferred example of the reference line (Q).
  • the line of sight of the user (U) may be used as the reference line (Q) in addition to the straight lines exemplified above.
  • An "object” is a virtual object installed in a virtual space (V).
  • a typical example of the object (C) is a character (for example, a human, an animal or a monster), but an inanimate element such as a structure or a natural thing in the virtual space (V) may be included in the concept of the object.
  • The “display mode” means a property of an image that can be visually distinguished. For example, in addition to the three attributes of color (hue, saturation and lightness), size and image content (e.g. pattern or shape) are also included in the concept of the display mode. The change of the display mode also includes switching between display and non-display.
  • The display control unit (42B) displays the instruction image (G) at the point (P) at which the reference line (Q) intersects the surface of the object (C).
  • According to the above aspect, since the instruction image (G) is disposed on the surface of the object (C) in the virtual space (V), the user (U) can more easily perceive from the instruction image (G) that the user (U) is in contact with the surface of the object (C).
  • A program according to a preferred example of the aspect B1 or the aspect B2 further causes the computer (11) to function as an operation control unit (43B) that controls the operation of the object (C) according to the positional relationship between the reference line (Q) and the object (C). According to the above aspect, it is possible to cause the object (C) to execute various operations reflecting an instruction from the user (U).
  • the action of an object is, for example, the behavior, action, appearance or attitude of a character which is an example of the object (C).
  • the "inanimate object” refers to, for example, a change in the form of the element (e.g. opening of a building door or window, deformation or movement of a natural object such as a rock).
  • the “operation instruction” is an instruction for generating an action (for example, contact) on an object (C) in the virtual space (V).
  • The operation control unit (43B) determines that an operation instruction has been received when a change in the attitude of the head mounted display (30) satisfies a predetermined condition, and causes the object (C) to execute an operation according to the amount of change in the attitude; the display control unit (42B) changes the display mode of the instruction image (G) according to the change in the attitude of the head mounted display (30) before the operation instruction is accepted. According to the above aspect, it is possible to cause the object (C) to execute various operations according to the attitude of the head mounted display (30).
  • the “predetermined condition” is a condition of posture change determined to be an operation instruction by the user (U), and is, for example, rotation in the roll direction in the embodiment.
  • The display control unit (42B) changes the display mode of the instruction image (G) over time within an instruction reception period in which reception of the operation instruction from the user (U) is permitted.
  • the user (U) can visually grasp the instruction reception period (for example, the remaining time of the instruction reception period) in which the reception of the operation instruction is permitted from the display mode of the instruction image (G).
  • the “instruction acceptance period” is, for example, the remaining time in which one touch on the character is possible, or the remaining time in the operation mode in which the touch on the character is permitted.
  • The display control unit (42B) changes the display mode of the instruction image (G) according to the position of the point (P) at which the reference line (Q) intersects the object (C).
  • According to the above aspect, since the display mode of the instruction image (G) changes according to the position of the point (P) at which the reference line (Q) intersects the object (C), the user (U) can infer the resulting motion of the object (C) from the display mode of the instruction image (G).
  • The display control unit (42B) changes the display mode of the instruction image (G) over time in parallel with the change in the position of the point (P) at which the reference line (Q) intersects the object (C).
  • According to the above aspect, the user (U) can be provided with the interest of gradually changing the reference line (Q) while confirming the display mode of the instruction image (G).
  • “temporal change in display mode” means that the display mode does not change only in a binary manner at a specific point in time, but gradually changes with the passage of time. In addition, it does not matter whether the display mode changes continuously or stepwise.
  • The operation control unit (43B) controls the object (C) so that the line of sight of the object (C) tracks the reference line (Q).
  • According to the above aspect, the user (U) can easily grasp the feeling that the user (U) is affecting the object (C) (for example, the feeling of touching it).
  • “The line of sight of the object follows the reference line” covers not only the case where the eyes (for example, the pupils) of the object (C) rotate so as to follow the reference line (Q), but also the case where the head of the object (C) rotates in accordance with the reference line (Q).
  • An image display system (1A, 1B) according to a preferred aspect (aspect B10) of the present invention is an image display system (1A, 1B) including a head mounted display (30) and an information processing device (40), wherein the information processing apparatus (40) includes a display control unit (42B) that causes a display device (13) of the head mounted display (30) to display an image obtained by imaging a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30), the image being a stereoscopic image using binocular parallax, and the display control unit (42B) displays an instruction image (G) representing the direction of a reference line (Q) whose direction in the virtual space (V) changes according to the direction of the head mounted display (30), and changes the display mode of the instruction image (G) according to the positional relationship between the reference line (Q) and the object (C) in the virtual space (V).
  • the display mode of the instruction image (G) representing the direction of the reference line (Q) in the virtual space (V) changes according to the positional relationship between the reference line (Q) and the object (C) Therefore, the user (U) can easily grasp the positional relationship between the reference line (Q) and the object (C) in the virtual space (V).
  • the head mounted display (30) and the information processing apparatus (40) may be integrated or separated.
  • Reference signs: 1A, 1B: image display system; 10: terminal device; 11: control device; 12: storage device; 13: display device; 14: detection device; 20: mounting tool; 30: HMD; 40: information processing apparatus; 41: posture analysis unit; 42A, 42B: display control unit; 43A, 43B: operation control unit; V: virtual space; E: virtual camera; C: character; R: imaging range; Q: reference line; G: instruction image; P: contact point.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This program causes a computer to function as: a display control unit that causes a 3D image using binocular parallax to be displayed on a display device of a head-mounted display, the image capturing a virtual space using a virtual camera whose direction is controlled according to the direction of the head-mounted display; and an operation control unit that controls the operation of an object in accordance with the rotation of the head-mounted display in the roll direction and the positional relationship between the object in the virtual space and a reference line whose direction in the virtual space changes according to the direction of the head-mounted display.

Description

IMAGE DISPLAY SYSTEM AND RECORDING MEDIUM CONTAINING THE PROGRAM
 The present invention relates to a technique for displaying an image obtained by imaging a virtual space.
 Techniques have conventionally been proposed for displaying, on a display device, an image captured by a virtual camera (hereinafter "virtual camera") installed in a virtual space. For example, Patent Document 1 discloses a configuration for displaying a menu image in the virtual space in accordance with the tilt of a head mounted display.
JP 2016-115122 A
 Content such as games in which a user communicates with an object such as a character in a virtual space has conventionally been proposed. When this kind of content is displayed on a head mounted display, there is a problem in that operations by the user are restricted. If an operation device separate from the head mounted display is prepared, the restriction on the user's operations is alleviated, but there is a problem in that the device configuration becomes complicated. In view of the above circumstances, an object of the present invention is to simply realize an operation on an object in a virtual space.
 In order to solve the above problems, a recording medium according to a preferred aspect of the present invention is a non-transitory recording medium storing a program that causes a computer to function as: a display control unit that causes a display device of a head mounted display to display an image obtained by imaging a virtual space with a virtual camera whose direction is controlled according to the direction of the head mounted display, the image being a stereoscopic image using binocular parallax; and an operation control unit that controls the operation of an object in the virtual space according to rotation of the head mounted display in the roll direction and the positional relationship between a reference line, whose direction in the virtual space changes according to the direction of the head mounted display, and the object.
 An image display system according to a preferred aspect of the present invention is an image display system including a head mounted display and an information processing apparatus, wherein the information processing apparatus includes: a display control unit that causes a display device of the head mounted display to display an image obtained by imaging a virtual space with a virtual camera whose direction is controlled according to the direction of the head mounted display, the image being a stereoscopic image using binocular parallax; and an operation control unit that controls the operation of an object in the virtual space according to rotation of the head mounted display in the roll direction and the positional relationship between a reference line, whose direction in the virtual space changes according to the direction of the head mounted display, and the object.
FIG. 1 is a perspective view illustrating the appearance of an image display system according to a first embodiment of the present invention.
FIG. 2 is a block diagram illustrating the configuration of the image display system.
FIG. 3 is a block diagram illustrating the functional configuration of the image display system.
FIG. 4 is an explanatory diagram of a virtual space.
FIG. 5 is an explanatory diagram of an image displayed by the display device.
FIG. 6 is a flowchart illustrating the contents of processing executed by the control device.
FIG. 7 is an explanatory diagram of rotation of the HMD in the roll direction.
FIG. 8 is an explanatory diagram of an image displayed by the display device.
FIG. 9 is an explanatory diagram of movement of the virtual camera in Modification A4.
FIG. 10 is an explanatory diagram of a change in the imaging range in Modification A5.
FIG. 11 is a block diagram illustrating the functional configuration of the image display system in a second embodiment.
FIG. 12 is an explanatory diagram of an image displayed when the reference line does not intersect the character in the second embodiment.
FIG. 13 is an explanatory diagram of an image displayed when the reference line intersects the character in the second embodiment.
FIG. 14 is an explanatory diagram of the relationship between the character and the instruction image in the second embodiment.
FIG. 15 is a flowchart illustrating the contents of processing executed by the control device of the second embodiment.
FIG. 16 is an explanatory diagram of a temporal change of the instruction image in Modification B2.
FIG. 17 is an explanatory diagram of the instruction image in Modification B3.
FIG. 18 is an explanatory diagram of the instruction image in Modification B4.
FIG. 19 is a perspective view illustrating the appearance of the image display system in Modification C5.
[A: First Embodiment]
 FIG. 1 is a perspective view illustrating the appearance of an image display system 1A according to a first embodiment of the present invention. The image display system 1A is a video apparatus for displaying images, and includes a terminal device 10 and a mounting tool 20. The terminal device 10 is a portable information terminal such as a smartphone or a tablet terminal. The terminal device 10 is detachably installed in the mounting tool 20. The mounting tool 20 is an instrument for mounting the terminal device 10 on the head of a user U. For example, as illustrated in FIG. 1, a goggle-type attachment having a belt wound around the head of the user U is suitable as the mounting tool 20.
 FIG. 2 is a block diagram illustrating the configuration of the terminal device 10. As illustrated in FIG. 2, the terminal device 10 is a computer system including a control device 11, a storage device 12, a display device 13, and a detection device 14. The control device 11 is one or more processors such as a CPU (Central Processing Unit), and centrally controls each element of the image display system 1A. The storage device 12 stores a program executed by the control device 11 and various data used by the control device 11. For example, the storage device 12 is configured by a known recording medium such as a magnetic recording medium or a semiconductor recording medium, or by a combination of a plurality of types of recording media.
 The display device 13 displays images under the control of the control device 11. For example, a flat display such as a liquid crystal display panel or an organic EL (Electroluminescence) display panel is used as the display device 13. As understood from FIG. 1, in a state where the mounting tool 20 is attached to the head of the user U, the display device 13 is positioned in front of both eyes of the user U with its display surface facing the user's head. The display device 13 of the first embodiment displays a stereoscopic image from which the user U can perceive a stereoscopic effect. The stereoscopic image is composed of a right-eye image and a left-eye image to which binocular parallax is given. By causing the right-eye image to be viewed by the right eye of the user U and the left-eye image to be viewed by the left eye, the user U perceives a stereoscopic effect.
 As illustrated in FIG. 2, the display device 13, the detection device 14, and the mounting tool 20 constitute a head mounted display (hereinafter "HMD") 30, and the control device 11 and the storage device 12 constitute an information processing apparatus 40 that causes the HMD 30 to display images. That is, the image display system 1A includes the HMD 30 and the information processing apparatus 40. As described above, in the first embodiment, the information processing apparatus 40, the display device 13, and the detection device 14 are realized by the single terminal device 10. Note that the information processing apparatus 40 may be regarded as an element of the HMD 30.
 The detection device 14 in FIG. 2 is a sensor that outputs a detection signal according to the orientation of the HMD 30. Specifically, the detection device 14 is configured by one or more types of sensors selected from a plurality of types of sensors, such as a gyro sensor that detects angular velocity, an acceleration sensor that detects acceleration, an inclination sensor that detects an inclination angle, and a geomagnetic sensor that detects direction from geomagnetism. By analyzing the detection signal output from the detection device 14, it is possible to identify the posture of the HMD 30 or a change in that posture. The detection device 14 can also be described as a sensor that outputs a detection signal according to the posture of the display device 13 or the terminal device 10. Note that an amplifier that amplifies the detection signal and an A/D converter that converts the detection signal from analog to digital are omitted from the figure for convenience.
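 As a concrete illustration of how such detection signals can be turned into a posture estimate, the sketch below shows a complementary filter, one common sensor-fusion technique. The patent does not prescribe any particular analysis method, so the filter, its coefficient alpha, and the function name are assumptions introduced only for illustration.

```python
def complementary_filter(prev_angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Estimate a tilt angle (radians) by fusing two detection signals:
    the gyro sensor's angular velocity (accurate over short intervals
    but drifting) and an angle derived from the acceleration sensor
    (noisy but drift-free). `alpha` weights the two sources."""
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```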
 As illustrated in FIG. 1, the posture of the HMD 30 is defined by three axes (X axis, Y axis, and Z axis) that are mutually orthogonal at an origin O. The X axis is an axis perpendicular to the display surface of the display device 13 and corresponds to the front-rear direction of the user U. The Y axis and the Z axis are axes parallel to the display surface of the display device 13. The Y axis corresponds to the left-right direction of the user U, and the Z axis corresponds to the up-down direction of the user U. Focusing on the display surface of the display device 13, the X axis corresponds to the depth direction of the display surface, the Y axis to its horizontal direction, and the Z axis to its vertical direction. The circumferential direction around the X axis (that is, the direction along the arc of a virtual circle centered on the X axis) is the roll direction, the circumferential direction around the Y axis is the pitch direction, and the circumferential direction around the Z axis is the yaw direction. The direction of the X axis is determined by the angle in the pitch direction and the angle in the yaw direction. Rotation in the roll direction corresponds to an action of tilting the head to the left or right while the user U faces forward; that is, the HMD 30 rotates in the roll direction, for example, when the user U tilts his or her head. In the following description, as illustrated in FIG. 1, one side in the roll direction is referred to as the "first side" and the other side as the "second side". One of the clockwise and counterclockwise directions around the X axis is the first side, and the other is the second side.
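 The roll angle about the X axis can be recovered, for example, from where the world "up" direction lands in the HMD's local frame. The following is a minimal sketch under that assumption; the inputs and the sign convention for the first and second sides are hypothetical, not taken from the patent.

```python
import math

def roll_angle(up_y: float, up_z: float) -> float:
    """Roll angle theta (radians) of the HMD about its X axis.

    up_y, up_z: Y and Z components of the world "up" direction
    expressed in the HMD's local frame. With the head upright, "up"
    coincides with the local Z axis, so theta = 0; tilting the head
    sideways rotates "up" into the Y component.
    """
    return math.atan2(up_y, up_z)

# By one possible convention, theta > 0 is rotation to the first side
# and theta < 0 is rotation to the second side.
```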
 The control device 11 in FIG. 2 functions as a posture analysis unit 41, a display control unit 42A, and an operation control unit 43A, as illustrated in FIG. 3, by executing the program stored in the storage device 12. Note that part or all of the functions of the control device 11 may be realized by dedicated electronic circuits.
 The posture analysis unit 41 specifies the posture of the HMD 30 by analyzing the detection signal output from the detection device 14. Specifically, the posture analysis unit 41 sequentially generates posture data on the posture of the HMD 30 at a predetermined cycle. The posture data is data representing the posture of the HMD 30 or a change in that posture.
 The display control unit 42A causes the display device 13 to display an image captured by a virtual camera E in a virtual space V. FIG. 4 is an explanatory diagram of the virtual space V. As illustrated in FIG. 4, the virtual camera E is installed in the virtual space V and images a specific range (hereinafter "imaging range") R in the virtual space V. The imaging range R is a range spanning predetermined angles in the vertical and horizontal directions with respect to the optical axis of the virtual camera E. The optical axis of the virtual camera E corresponds to the virtual line of sight of the user U in the virtual space V. The display control unit 42A of the first embodiment causes the display device 13 to display a stereoscopic image (a right-eye image and a left-eye image) using binocular parallax. Therefore, the virtual camera E includes a first virtual camera that captures the left-eye image and a second virtual camera that captures the right-eye image. Image data representing the stereoscopic image captured by the virtual camera E is sequentially supplied from the display control unit 42A to the display device 13, whereby the stereoscopic image is displayed on the display device 13.
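 The paragraph above describes the virtual camera E as a pair of cameras producing a parallax pair. A minimal sketch of that arrangement follows; the VirtualCamera class, the vector handling, and the interpupillary distance value are assumptions introduced for illustration, not details given in the patent.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class VirtualCamera:
    position: Vec3  # position in the virtual space V (orientation omitted)

def stereo_pair(center: Vec3, right_dir: Vec3,
                ipd: float = 0.064) -> Tuple[VirtualCamera, VirtualCamera]:
    """Build the first (left-eye) and second (right-eye) virtual cameras
    by offsetting a shared viewpoint along the camera's right axis by
    half the interpupillary distance `ipd` (an assumed value)."""
    half = ipd / 2.0
    left = VirtualCamera(tuple(c - half * r for c, r in zip(center, right_dir)))
    right = VirtualCamera(tuple(c + half * r for c, r in zip(center, right_dir)))
    return left, right
```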
 The display control unit 42A controls the direction of the virtual camera E in the virtual space V according to the direction of the HMD 30 specified by the posture analysis unit 41. Specifically, the direction of the optical axis of the virtual camera E (and hence the imaging range R) changes in conjunction with the rotation of the X axis about the origin O. For example, when the HMD 30 rotates in the yaw direction, the optical axis of the virtual camera E rotates left and right about the origin O, and when the HMD 30 rotates in the pitch direction, the optical axis of the virtual camera E rotates up and down about the origin O. In the first embodiment, even when the HMD 30 rotates in the roll direction, the attitude of the virtual camera E (its angle in the circumferential direction around the optical axis) does not change. However, the inclination of the virtual camera E about the optical axis may be changed according to the rotation of the HMD 30 in the roll direction.
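 As a sketch of the control just described, the function below derives the optical-axis direction of the virtual camera E from the HMD's yaw and pitch angles while deliberately discarding roll, matching the first embodiment's behavior. The angle conventions are assumptions for illustration.

```python
import math

def camera_axis(yaw: float, pitch: float) -> tuple:
    """Unit vector along the optical axis of the virtual camera E,
    computed from the HMD's yaw and pitch angles (radians). The roll
    angle is intentionally unused: in the first embodiment, rotation of
    the HMD 30 in the roll direction leaves the camera's attitude about
    its optical axis unchanged."""
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))
```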
 As illustrated in FIG. 4, a character C is arranged in the virtual space V in which the virtual camera E is installed. The character C of the first embodiment is an object representing a virtual creature (for example, a human, an animal, or a monster) active in the virtual space V, and includes a head and a torso. The image display system 1A provides the user U with a game in which the user interacts with the character C while touching the character C at any time in the virtual space V. That is, the image display system 1A is used as a game device.
 A reference line Q is set in the virtual space V. The reference line Q is a straight line whose direction in the virtual space V changes according to the direction of the HMD 30 (the direction of the X axis). In the first embodiment, as illustrated in FIG. 4, the optical axis of the virtual camera E is used as the reference line Q. For example, if the user U turns his or her head to the right, the reference line Q rotates to the right about the origin O, and if the user U turns the head to the left, the reference line Q rotates to the left about the origin O. As understood from the above description, the reference line Q is the virtual line of sight of the user U in the virtual space V.
 FIG. 5 is a schematic view of an image displayed on the display device 13. As illustrated in FIG. 5, the display control unit 42A causes the display device 13 to display the stereoscopic image captured by the virtual camera E in the virtual space V. When the character C is positioned within the imaging range R, the character C is displayed on the display device 13. The display control unit 42A also causes the display device 13 to display an image G representing the direction of the reference line Q (hereinafter "instruction image"). The instruction image G is arranged at a predetermined position on the reference line Q in the virtual space V. Since the reference line Q in the first embodiment is the optical axis of the virtual camera E, the instruction image G is displayed at a predetermined position within the display surface of the display device 13.
 The operation control unit 43A in FIG. 3 controls the operation of the character C in the virtual space V. In addition to controlling the operation of the character C in accordance with the progress of the game, the operation control unit 43A causes the character C to execute an operation corresponding to contact by the user U in the virtual space V (hereinafter "reaction operation"). Specifically, the operation control unit 43A determines whether the user U has touched the character C in the virtual space V, and causes the character C to execute a reaction operation in response to the touch by the user U. Typical examples of the reaction operation include various operations such as a change in posture, a change in facial expression, or the utterance of a specific line.
 The operation control unit 43A determines whether the user U has touched the character C in the virtual space V in accordance with the change in the posture of the HMD 30 specified by the posture analysis unit 41. The operation control unit 43A of the first embodiment determines that the user U has touched the character C in the virtual space V when the HMD 30 rotates in the roll direction. That is, rotation of the HMD 30 in the roll direction corresponds to an instruction (operation instruction) for the user U to touch the character C in the virtual space V. Specifically, when the HMD 30 rotates in the roll direction, the operation control unit 43A determines that the user U has touched the point P at which the reference line Q and the surface of the character C intersect (hereinafter "contact point"). The contact point P is located on the surface of the character C. When the reference line Q intersects the surface of the character C at a plurality of points, the point closest to the virtual camera E (that is, to the user U) among the plurality of points is selected as the contact point P.
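 The selection of the contact point P described above can be sketched as follows. The ray/surface intersection routine is assumed to exist and to return the distances along the reference line Q at which it crosses the character's surface; the patent does not specify how the intersections themselves are computed.

```python
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

def contact_point(origin: Vec3, direction: Vec3,
                  hit_distances: List[float]) -> Optional[Vec3]:
    """Contact point P: among the points where the reference line Q
    (a ray from the virtual camera E) crosses the character's surface,
    return the one closest to the camera, i.e. to the user U.

    hit_distances: distances t >= 0 of the intersections, as returned
    by some ray/mesh intersection routine (assumed, not specified)."""
    if not hit_distances:
        return None  # the reference line Q does not intersect the character C
    t = min(hit_distances)  # the nearest intersection wins
    return tuple(o + t * d for o, d in zip(origin, direction))
```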
 As described above, in the first embodiment, the operation of the character C in the virtual space V is controlled according to the rotation of the HMD 30 in the roll direction. Therefore, an instruction from the user U can be reflected in the operation of the character C without needing to prepare, separately from the HMD 30, an operation device with which the user U inputs instructions for controlling the operation of the character C. In the first embodiment in particular, the user U is determined to have touched the character C when the reference line Q and the character C intersect, so the user U can operate the character C while keeping the character C within his or her field of view. That is, there is an advantage that the user U can operate the character C more intuitively.
 FIG. 6 is a flowchart illustrating the specific contents of the processing executed by the operation control unit 43A (hereinafter "first control process"). The operation control unit 43A repeatedly executes the first control process of FIG. 6 at a predetermined cycle. The storage device 12 stores contact determination data indicating whether the user U is in a state of contact with the character C in the virtual space V (hereinafter the "contact state"). The contact determination data is, for example, a flag indicating either the contact state or a non-contact state.
 When the first control process starts, the operation control unit 43A determines whether the contact determination data indicates the contact state (Sa1). If the contact determination data indicates the non-contact state (Sa1: NO), the operation control unit 43A determines whether the HMD 30 has rotated in the roll direction (Sa2). Specifically, the operation control unit 43A determines whether the HMD 30 has rotated to the first side in the roll direction (for example, clockwise) by referring to the posture data sequentially supplied from the posture analysis unit 41.
 FIG. 7 is an explanatory diagram of the process of determining whether the HMD 30 has rotated in the roll direction. As illustrated in FIG. 7, the operation control unit 43A determines that the HMD 30 has rotated in the roll direction when the angle θ by which the HMD 30 has rotated to the first side in the roll direction (hereinafter "rotation angle") exceeds a threshold θt. The rotation angle θ is the angle by which the Z axis or the Y axis has rotated about the X axis, and is defined, for example, with the vertical direction as a reference (θ = 0). On the other hand, when the rotation angle θ of the HMD 30 is below the threshold θt, the operation control unit 43A does not determine that the HMD 30 has rotated in the roll direction. As described above, in the first embodiment, a rotation smaller than the threshold θt is not determined to be a rotation of the HMD 30 in the roll direction. Therefore, the possibility of determining with excessive frequency that the HMD 30 has rotated can be reduced. There is also an advantage that the possibility of determining that the HMD 30 has rotated even though the user U did not intend to rotate it (that is, the possibility of an erroneous operation) can be reduced.
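 The threshold comparison can be written as below. The numeric value of θt is not disclosed in the patent, so the 20-degree figure is purely an assumed placeholder.

```python
import math

THETA_T = math.radians(20.0)  # threshold θt; the actual value is an assumption

def rotated_in_roll_direction(theta: float) -> bool:
    """True when the rotation angle θ toward one side in the roll
    direction exceeds the threshold θt, i.e. when the HMD 30 is judged
    to have rotated in the roll direction. Rotations below θt are
    ignored, which suppresses unintended (erroneous) operations."""
    return theta > THETA_T
```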
 Note that the HMD 30 is determined to have rotated in the roll direction whenever the angle of rotation in the roll direction exceeds the threshold θt, not only when rotation in the roll direction alone occurs in the HMD 30 but also when rotation in the pitch direction or the yaw direction occurs together with the rotation in the roll direction.
 When it is determined in the first control process of FIG. 6 that the HMD 30 has rotated to the first side in the roll direction (Sa2: YES), the operation control unit 43A determines whether the reference line Q intersects the character C in the virtual space V (Sa3). That is, it is determined whether the character C exists on the reference line Q. As illustrated in FIG. 8, when the reference line Q intersects the character C in the virtual space V (Sa3: YES), the operation control unit 43A determines that the user U is in the contact state of touching the contact point P of the character C (Sa4). Specifically, the operation control unit 43A changes the contact determination data stored in the storage device 12 to a value indicating the contact state. As understood from the above description, the operation control unit 43A determines that the user U has touched the contact point P of the character C in the virtual space V when the HMD 30 rotates in the roll direction. That is, the user U can instruct contact with the contact point P of the character C by rotating the HMD 30 in the roll direction.
 When the state transitions to the contact state, the operation control unit 43A causes the character C to execute a reaction operation (Sa5). Specifically, the operation control unit 43A causes the character C to execute a reaction operation according to the position of the contact point P at which the reference line Q intersects the character C. That is, the reaction operation executed by the character C changes according to the position of the contact point P. For example, a table that defines the type of reaction operation for each of a plurality of regions into which the surface of the character C is divided is stored in the storage device 12. The operation control unit 43A causes the character C to execute the reaction operation defined in the table for the region, among the plurality of regions on the surface of the character C, that contains the contact point P. For example, when the contact point P is located on the head of the character C, the operation control unit 43A causes the character C to execute a reaction operation that accepts the touch by the user U (for example, a favorable action such as rejoicing, laughing, or approaching). On the other hand, when the contact point P is located on the torso of the character C, the operation control unit 43A causes the character C to execute a reaction operation that rejects the touch by the user U (for example, a negative action such as getting angry, becoming sad, or moving away). According to the above configuration, the reaction operation of the character C can be varied in accordance with the position of the contact point P.
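 A minimal sketch of such a table and its lookup follows; the region names and reaction labels are illustrative stand-ins for whatever the table held in the storage device 12 actually contains.

```python
# Hypothetical table associating each surface region of the character C
# with a type of reaction operation (contents are illustrative).
REACTION_TABLE = {
    "head": "accept",   # favorable: rejoice, laugh, approach, ...
    "torso": "reject",  # negative: get angry, become sad, move away, ...
}

def reaction_for(region_of_contact_point: str) -> str:
    """Look up the reaction operation for the region that contains
    the contact point P."""
    return REACTION_TABLE.get(region_of_contact_point, "none")
```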
 On the other hand, when the HMD 30 does not rotate to the first side in the roll direction (Sa2: NO), or when the reference line Q does not intersect the character C (Sa3: NO), the operation control unit 43A maintains the contact determination data stored in the storage device 12 at the value indicating the non-contact state. That is, even when the HMD 30 rotates to the first side in the roll direction, it is determined that the user U has not touched the character C when the reference line Q does not intersect the character C, as illustrated in FIG. 5.
 When the user U has touched the character C in the virtual space V, it is determined in step Sa1 of the subsequent first control processes that the contact determination data indicates the contact state. When the contact determination data indicates the contact state (Sa1: YES), the operation control unit 43A determines, by referring to the posture data sequentially supplied from the posture analysis unit 41, whether the HMD 30 has rotated to the second side in the roll direction (for example, counterclockwise). Specifically, the operation control unit 43A determines that the HMD 30 has rotated to the second side in the roll direction when the rotation angle θ toward the second side exceeds the predetermined threshold θt. When the HMD 30 has rotated to the second side in the roll direction (Sa6: YES), the operation control unit 43A determines that the contact with the character C has been released (Sa7). Specifically, the operation control unit 43A changes the contact determination data stored in the storage device 12 to the value indicating the non-contact state. On the other hand, when the HMD 30 does not rotate to the second side in the roll direction (Sa6: NO), the operation control unit 43A maintains the contact determination data at the value indicating the contact state.
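 Putting steps Sa1 to Sa7 together, the first control process behaves as a small two-state machine. The sketch below is one way to express it, assuming helper predicates for the roll judgments and the intersection test, and a callback for the reaction; it is an illustration, not the patent's implementation.

```python
class FirstControlProcess:
    """Sketch of the first control process (steps Sa1-Sa7)."""

    def __init__(self):
        self.in_contact = False  # contact determination data (the flag)

    def step(self, rolled_to_first_side: bool, rolled_to_second_side: bool,
             line_intersects_character: bool, perform_reaction) -> None:
        """One iteration, executed at a predetermined cycle.
        `perform_reaction` is an assumed callback that makes the
        character C execute its reaction operation."""
        if not self.in_contact:                                     # Sa1: NO
            if rolled_to_first_side and line_intersects_character:  # Sa2, Sa3
                self.in_contact = True                              # Sa4
                perform_reaction()                                  # Sa5
        elif rolled_to_second_side:                                 # Sa1: YES, Sa6
            self.in_contact = False                                 # Sa7: released
```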
 Note that, when the contact with the character C is released, a configuration in which the operation control unit 43A causes the character C to execute an operation responding to the release of the contact is preferable, but a configuration in which the character C is not caused to execute such an operation is also conceivable.
 As understood from the above description, the operation control unit 43A determines that the user U has touched the character C (Sa4) when the HMD 30 rotates to the first side in the roll direction (Sa2: YES). On the other hand, in the state where the user U is touching the character C (Sa1: YES), the operation control unit 43A determines that the contact with the character C has been released when the HMD 30 rotates to the second side in the roll direction (Sa6: YES). According to the above configuration, the user U can instruct the generation and release of contact with the character C in the virtual space V according to the direction of rotation of the HMD 30 in the roll direction. Since mutually opposite rotations in the roll direction correspond to the opposite operations of generating and releasing contact with the character C, there is also an advantage that the user U can intuitively grasp the generation and release of contact with the character C.
 In the first embodiment, for example, the configurations exemplified below may be adopted.
[Modification A1]
 The motion control unit 43A in Modification A1 causes the character C to perform a reaction action that depends on the length of time for which the contact with the character C is maintained in the virtual space V. Specifically, the motion control unit 43A changes the reaction action over time within the period during which the contact state continues (hereinafter the "contact period"). The contact period is the period from when the HMD 30 rotates to the first side in the roll direction until it rotates to the second side. Alternatively, the period from when the contact determination data is set to the value indicating the contact state until it is changed to the value indicating the non-contact state may be used as the contact period. The motion control unit 43A changes the reaction action of the character C over time from the start to the end of the contact period. The reaction action of the character C therefore varies with the length of the contact period. With this configuration, the reaction action of the character C can be varied in many ways according to the length of time for which the user U contacts the character C.
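 As one hedged illustration of such a time-dependent reaction, the elapsed contact time could select an animation stage, as sketched below; the stage names and time thresholds are purely illustrative and not taken from the embodiment.

```python
import time

REACTION_STAGES = [  # (minimum elapsed seconds, animation name), illustrative
    (0.0, "notice"),
    (1.0, "smile"),
    (3.0, "laugh"),
]

def reaction_for(contact_start: float, now: float | None = None) -> str:
    """Pick the reaction animation for the current length of the contact period."""
    now = time.monotonic() if now is None else now
    elapsed = now - contact_start
    current = REACTION_STAGES[0][1]
    for threshold, name in REACTION_STAGES:
        if elapsed >= threshold:
            current = name
    return current
```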
[Modification A2]
 The storage device 12 in Modification A2 stores a parameter relating to the character C. Specifically, the affinity of the character C toward the user U is stored in the storage device 12 as the parameter. The affinity is a parameter indicating the degree of favorable feeling from the character C toward the user U. The motion control unit 43A changes the affinity as the game progresses, and also changes the affinity of the character C toward the user U in response to contact with the character C. Specifically, the greater the degree of contact (for example, the number of times or the duration) of the user U with the character C, the larger the value to which the affinity of the character C toward the user U is set. Although the above example describes the affinity of the character C toward the user U, the affinity stored in the storage device 12 may instead be the affinity of the character C toward a character (player character) used by the user U. The user U may possess a plurality of player characters in the virtual space V.
 The motion control unit 43A also changes the reaction action according to the affinity stored for the character C. That is, the type of reaction action that the character C performs (Sa4) when the user U contacts the character C (Sa2: YES, Sa3: YES) varies with the affinity of the character C. For example, in a state where the number or duration of contacts with the character C is insufficient and the affinity is low, the character C performs a reaction action of rejecting the contact when the user U touches its head. On the other hand, in a state where contacts with the character C have accumulated sufficiently and the affinity is high, the character C performs a reaction action of accepting the contact when the user U touches its head. With this configuration, the reaction action of the character C can be varied in many ways according to the parameter relating to the character C.
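 A minimal sketch of this parameter-driven selection might look as follows; the threshold `AFFINITY_HIGH` and the per-contact increment are assumptions, since the embodiment prescribes no concrete values.

```python
AFFINITY_HIGH = 50        # illustrative threshold for a "high" affinity
AFFINITY_PER_CONTACT = 5  # illustrative increment per contact

class CharacterState:
    """Sketch of the affinity parameter stored in the storage device 12."""

    def __init__(self) -> None:
        self.affinity = 0

    def on_contact(self) -> str:
        """Raise the affinity on contact and pick the reaction action."""
        self.affinity += AFFINITY_PER_CONTACT
        if self.affinity >= AFFINITY_HIGH:
            return "accept_touch"  # contact is accepted
        return "reject_touch"      # contact is rejected
```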
 Note that the parameter that affects the reaction action of the character C is not limited to the affinity exemplified above. For example, various parameters such as the growth level of the character C or the intimacy between the user U and the character C may change in response to the user U contacting the character C, and the reaction action of the character C may be controlled according to such a parameter.
[Modification A3]
 The motion control unit 43A in Modification A3 causes the character C to perform a reaction action according to the rotation angle θ by which the HMD 30 has rotated in the roll direction. The rotation angle θ of the HMD 30 corresponds, for example, to the virtual pressure with which the user U contacts the character C in the virtual space V. For example, a small rotation angle θ means that the user U is touching the character C lightly, while a large rotation angle θ means that the user U is pressing the character C strongly. The motion control unit 43A, for example, causes the character C to perform a reaction action of accepting the contact by the user U when the rotation angle θ is small, and causes the character C to perform a reaction action of rejecting the contact when the rotation angle θ is large. With this configuration, the reaction action of the character C can be varied in many ways according to the rotation angle θ of the HMD 30 (that is, the virtual pressure with which the user U contacts the character C).
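 Read as a threshold comparison, the mapping from rotation angle to reaction could be sketched as follows; the boundary `PRESSURE_LIMIT` is an assumption rather than a value from the embodiment.

```python
PRESSURE_LIMIT = 30.0  # illustrative boundary between light and strong touch, degrees

def reaction_for_angle(theta: float) -> str:
    """Treat the roll rotation angle θ as the virtual contact pressure."""
    if abs(theta) < PRESSURE_LIMIT:
        return "accept_touch"  # a light touch is accepted
    return "reject_touch"      # a strong press is rejected
```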
[Modification A4]
 The display control unit 42A in Modification A4 moves the virtual camera E (that is, the virtual viewpoint in the virtual space V) within the virtual space V according to the rotation of the HMD 30 in the roll direction. Specifically, the display control unit 42A changes the positional relationship between the virtual camera E and the character C in the virtual space V in conjunction with the rotation of the HMD 30. For example, the distance between the virtual camera E and the character C is controlled according to the rotation angle θ of the HMD 30. That is, the virtual camera E moves toward the object in conjunction with the rotation of the HMD 30 in the roll direction.
 For example, when the HMD 30 rotates to the first side in the roll direction, the display control unit 42A moves the virtual camera E closer to the character C in the virtual space V by a movement amount corresponding to the rotation angle θ (θ > 0) of the HMD 30, as indicated by the arrow a1 in FIG. 9. On the other hand, when the HMD 30 rotates to the second side in the roll direction, the display control unit 42A moves the virtual camera E away from the character C in the virtual space V by a movement amount corresponding to the rotation angle θ (θ < 0) of the HMD 30, as indicated by the arrow a2 in FIG. 9. With this configuration, the user U can move the virtual camera E in the virtual space V by the simple operation of rotating the HMD 30 in the roll direction.
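 One possible reading of this camera movement, sketched with a hypothetical `Vec3` type and gain `MOVE_GAIN` (both assumptions):

```python
from dataclasses import dataclass

MOVE_GAIN = 0.02  # illustrative movement in scene units per degree of roll

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def move_camera(cam: Vec3, target: Vec3, theta: float) -> Vec3:
    """Move the virtual camera E along the line toward the character C by θ."""
    d = Vec3(target.x - cam.x, target.y - cam.y, target.z - cam.z)
    norm = (d.x**2 + d.y**2 + d.z**2) ** 0.5 or 1.0
    step = MOVE_GAIN * theta  # θ > 0 approaches (arrow a1), θ < 0 recedes (arrow a2)
    return Vec3(cam.x + d.x / norm * step,
                cam.y + d.y / norm * step,
                cam.z + d.z / norm * step)
```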
[Modification A5]
 The display control unit 42A in Modification A5 changes the imaging range R of the virtual camera E (that is, the angle of view of the virtual camera E) according to the rotation of the HMD 30 in the roll direction. That is, the range of the virtual space V displayed on the display device 13 changes according to the rotation of the HMD 30 in the roll direction. For example, when the HMD 30 rotates to the first side in the roll direction, the display control unit 42A reduces the imaging range R at a ratio corresponding to the rotation angle θ of the HMD 30, as indicated by the arrow b1 in FIG. 10. Consequently, for example, the character C in the virtual space V is zoomed in (enlarged). On the other hand, when the HMD 30 rotates to the second side in the roll direction, the display control unit 42A enlarges the imaging range R at a ratio corresponding to the rotation angle θ of the HMD 30, as indicated by the arrow b2 in FIG. 10. Consequently, for example, the character C in the virtual space V is zoomed out (reduced). With this configuration, the user U can change the imaging range R of the virtual camera E in the virtual space V by the simple operation of rotating the HMD 30 in the roll direction.
[Modification A6]
 In the first embodiment, the contact with the character C is determined to be released when the HMD 30 rotates to the second side in the roll direction (Sa6: YES), but the condition for determining that the contact with the character C has been released is not limited to this example. For example, the motion control unit 43A may determine that the contact with the character C has been released when the HMD 30 returns from the rotated state in the roll direction (Sa2: YES) to its original orientation. Specifically, in step Sa6 of FIG. 6, the motion control unit 43A determines that the contact with the character C has been released (Sa7) when the rotation angle θ of the HMD 30 returns to the initial angle (for example, θ = 0) at the time the rotation for contacting the character C was started. In step Sa2, on the other hand, the motion control unit 43A determines that the user U has contacted the character C when the rotation angle θ changes, regardless of the direction of rotation of the HMD 30 (first side or second side). As understood from this description, by monitoring the rotation angle θ of the HMD 30, the start and release of contact with the character C can be determined without identifying the direction of rotation (first side or second side) of the HMD 30. That is, in determining contact with the character C, the motion control unit 43A does not necessarily have to identify the direction of rotation of the HMD 30.
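 A direction-agnostic variant of the earlier sketch would track only the magnitude of θ, for example (thresholds again illustrative):

```python
THETA_T = 15.0    # illustrative contact threshold, degrees
THETA_HOME = 2.0  # illustrative band counted as "returned to the initial angle"

class DirectionlessContact:
    """Sketch of Modification A6: contact toggled by |θ| alone."""

    def __init__(self) -> None:
        self.in_contact = False

    def update(self, theta: float) -> None:
        if not self.in_contact and abs(theta) > THETA_T:
            self.in_contact = True    # Sa2/Sa4: rotation in either direction starts contact
        elif self.in_contact and abs(theta) < THETA_HOME:
            self.in_contact = False   # Sa6/Sa7: returning to the initial angle releases it
```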
[Modification A7]
 In the first embodiment, the character C is caused to perform a reaction action to the contact by the user U when the reference line Q intersects the character C in the virtual space V (Sa3: YES), but the condition concerning the positional relationship between the reference line Q and the character C is not limited to this example. For example, the character C may be caused to perform the reaction action on the condition that the distance between the reference line Q and the character C falls below a predetermined threshold. The case where the distance between the reference line Q and the character C falls below the predetermined threshold includes not only the case where the reference line Q intersects the character C but also the case where the reference line Q is separated from the character C within the range below the threshold. Alternatively, the character C may be caused to perform the reaction action on the condition that the reference line Q intersects a specific part of the character C. When the reference line Q intersects a part other than the specific part of the character C, the motion control unit 43A does not cause the character C to perform the reaction action. As understood from the above description, the motion control unit 43A of the first embodiment is comprehensively expressed as an element that controls the motion of the character C according to the positional relationship between the reference line Q and the character C (for example, a relationship in which the reference line Q intersects the character C).
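 The distance condition can be evaluated with ordinary point-to-line geometry; the following brief sketch, with an assumed threshold, is one way to do so.

```python
import math

DIST_THRESHOLD = 0.3  # illustrative distance threshold, in scene units

def distance_to_ray(origin, direction, point):
    """Shortest distance from `point` to the reference line Q = origin + t * direction."""
    dir_len = math.sqrt(sum(c * c for c in direction)) or 1.0
    u = [c / dir_len for c in direction]
    d = [p - o for p, o in zip(point, origin)]
    t = max(0.0, sum(a * b for a, b in zip(d, u)))  # ignore points behind the origin
    closest = [o + t * c for o, c in zip(origin, u)]
    return math.dist(point, closest)

def triggers_reaction(origin, direction, character_pos) -> bool:
    return distance_to_ray(origin, direction, character_pos) < DIST_THRESHOLD
```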
[B: Second Embodiment]
 A second embodiment of the present invention will be described. In each of the following examples, elements whose functions are the same as in the first embodiment are denoted by the reference numerals used in the description of the first embodiment, and their detailed descriptions are omitted as appropriate.
 The image display system in the second embodiment has the same configuration as the first embodiment illustrated in FIGS. 1 and 2. That is, the HMD 30 of the second embodiment includes the display device 13 and the detection device 14 of the terminal device 10 together with the attachment 20, and the information processing device 40 includes the control device 11 and the storage device 12 of the terminal device 10.
 FIG. 11 is a block diagram illustrating a functional configuration of the terminal device 10 in the second embodiment. As illustrated in FIG. 11, the control device 11 of the second embodiment functions as a posture analysis unit 41, a display control unit 42B, and a motion control unit 43B by executing a program stored in the storage device 12. Part or all of the control device 11 may be realized by a dedicated electronic circuit. As in the first embodiment, the posture analysis unit 41 sequentially generates posture data on the posture of the HMD 30 by analyzing the detection signal output from the detection device 14.
 Like the display control unit 42A of the first embodiment, the display control unit 42B causes the display device 13 to display an image of the portion of the virtual space V within the imaging range R of the virtual camera E, together with an instruction image G representing the direction of the reference line Q. As described for the first embodiment, the reference line Q is a virtual straight line whose direction in the virtual space V changes according to the direction of the HMD 30.
 The display control unit 42B of the second embodiment changes the display mode of the instruction image G according to the positional relationship between the reference line Q and the character C (an example of an object) in the virtual space V. Specifically, the display control unit 42B makes the display mode of the instruction image G differ between the case where the reference line Q intersects the character C and the case where it does not. The display mode means a property of the image that the user U can visually distinguish. For example, in addition to the three attributes of color, namely hue, saturation, and lightness (gradation), size and image content (for example, pattern or shape) are also encompassed by the concept of the display mode.
 When the reference line Q does not intersect the character C, the display control unit 42B causes the display device 13 to display an image G1 as the instruction image G, as illustrated in FIG. 12. The image G1 is a circular or point-like object placed in the virtual space V on the reference line Q. On the other hand, when the reference line Q intersects the character C, the display control unit 42B causes the display device 13 to display an image G2 different from the image G1 as the instruction image G, as illustrated in FIG. 13. The image G2 is a planar object schematically representing the hand of the user U. As illustrated in FIG. 14, the display control unit 42B places the image G2 at the point on the surface of the character C in the virtual space V where the reference line Q intersects (that is, the contact point P). That is, the image G2 touches the contact point P on the surface of the character C in the virtual space V. In other words, the contact point P is the point on the surface of the character C at which the image G2 makes contact.
 Although the above description exemplifies switching between the image G1 and the image G2, the change in the display mode of the instruction image G also includes switching between display and non-display. That is, the display control unit 42B may display the image G2 as the instruction image G when the reference line Q intersects the character C, and may hide the instruction image G when the reference line Q does not intersect the character C.
 As understood from the above description, in the second embodiment the display mode of the instruction image G representing the direction of the reference line Q in the virtual space V changes according to the positional relationship between the reference line Q and the character C. The user U can therefore easily grasp the positional relationship between the reference line Q and the character C in the virtual space V. Moreover, since the instruction image G (image G2) is placed on the surface of the character C in the virtual space V, there is an advantage that the user U can readily perceive the state of touching the surface of the character C in the virtual space V. Placing the instruction image G on the surface of the character C also has the advantage that the user U can visually and accurately grasp the contact point P.
 The motion control unit 43B of FIG. 11 controls the motion of the character C in the virtual space V, like the motion control unit 43A of the first embodiment. Specifically, the motion control unit 43B causes the character C to perform an action in reaction to the contact by the user U in the virtual space V (that is, a reaction action). Typical examples of the reaction action are a change in posture, a change in facial expression, or the utterance of a specific line.
 Specifically, the motion control unit 43B causes the character C to perform an action according to the position of the contact point P at which the reference line Q intersects the character C. For example, the motion control unit 43B causes the character C to perform a reaction action of accepting the contact by the user U when the contact point P is located on the head of the character C, and causes the character C to perform an action of rejecting the contact by the user U when the contact point P is located on the torso of the character C. With this configuration, the motion of the character C can be varied in many ways according to the position of the contact point P.
 FIG. 15 is a flowchart illustrating the specific contents of the process executed by the display control unit 42B and the motion control unit 43B of the second embodiment (hereinafter the "second control process"). The second control process of FIG. 15 is executed repeatedly at a predetermined period.
 When the second control process starts, the display control unit 42B determines whether the reference line Q intersects the character C in the virtual space V (Sb1). When the reference line Q intersects the character C (Sb1: YES), the display control unit 42B causes the display device 13 to display, as the instruction image G, the image G2 placed at the contact point P where the reference line Q and the character C intersect (Sb2). On the other hand, when the reference line Q does not intersect the character C (Sb1: NO), the display control unit 42B causes the display device 13 to display, as the instruction image G, the image G1 placed on the reference line Q (Sb3).
 With the image G2 overlapping the character C displayed, the motion control unit 43B determines whether an instruction for an action on the character C (hereinafter an "action instruction") has been received from the user U (Sb4). The action instruction is an instruction for producing an effect on the character C in the virtual space V. In the second embodiment, the action instruction means an instruction for the user U to contact the character C in the virtual space V.
 The motion control unit 43B of the second embodiment accepts a specific change in the posture of the HMD 30 as an action instruction from the user U. Specifically, the motion control unit 43B determines that an action instruction has been received when the change in the posture of the HMD 30 satisfies a predetermined condition (hereinafter the "instruction determination condition"). For example, as in the first embodiment, the motion control unit 43B determines that an action instruction has been given by the user U when the HMD 30 rotates in the roll direction. That is, rotation of the HMD 30 in the roll direction is the instruction determination condition. On the other hand, when the change in the posture of the HMD 30 does not satisfy the instruction determination condition, the motion control unit 43B determines that no action instruction has been received. As described above with reference to FIG. 7, a configuration that determines that the HMD 30 has rotated when the rotation angle θ of the HMD 30 exceeds the threshold θt is preferable.
 When an action instruction is received from the user U (Sb4: YES), the motion control unit 43B determines that the user U has contacted the character C in the virtual space V, and causes the character C to perform an action in reaction to the contact by the user U (Sb5). Specifically, as described above, the motion control unit 43B causes the character C to perform an action according to the position of the contact point P at which the reference line Q intersects the character C. As understood from the above description, in the second embodiment the user U can touch a desired position of the character C in the virtual space V by moving the reference line Q so that it intersects the desired position of the character C and then giving an action instruction.
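 Putting steps Sb1 to Sb5 together, one hedged reading of the second control process is the following sketch; the region names and the threshold are hypothetical stand-ins for the units described above.

```python
THETA_T = 15.0  # illustrative instruction determination threshold, degrees

def second_control_step(hit_region, roll_angle):
    """One iteration of the second control process (Sb1 to Sb5).

    hit_region: body region containing the contact point P where the
    reference line Q crosses the character C, or None (Sb1).
    Returns (indicator_image, reaction); reaction is None when no action
    instruction has been received.
    """
    if hit_region is None:
        return "G1", None                 # Sb3: dot placed on the reference line
    reaction = None
    if abs(roll_angle) > THETA_T:         # Sb4: action instruction received?
        # Sb5: the reaction depends on where P lies on the character
        reaction = "accept" if hit_region == "head" else "reject"
    return "G2", reaction                 # Sb2: hand image placed at P

# usage: second_control_step("head", 20.0) returns ("G2", "accept")
```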
 In a configuration that recognizes a change in the posture of the HMD 30 as an action instruction, no direct operation such as a touch operation on the display surface is required, so there is a problem that the user U has difficulty feeling the sensation of contact. In the second embodiment, the display mode of the instruction image G changes according to the positional relationship between the reference line Q and the character C (specifically, whether the reference line Q intersects the character C). Compared with a configuration in which the instruction image G is displayed in a fixed manner, there is therefore an advantage that the user U can more readily feel the sensation of having touched the character C in the virtual space V.
 In the second embodiment, for example, the configurations exemplified below may be adopted.
[Modification B1]
 In the second embodiment, the character C is caused to perform a reaction action according to the position of the contact point P, but, as in the first embodiment, the character C may be caused to perform a reaction action according to the amount of change in the posture of the HMD 30 (for example, the rotation angle θ in the roll direction). For example, assume that the amount of change in the posture of the HMD 30 represents the virtual pressure with which the user U contacts the character C in the virtual space V. The motion control unit 43B of Modification B1 causes the character C to perform a reaction action of accepting the contact by the user U when the amount of change in the posture of the HMD 30 is small, and causes the character C to perform a reaction action of rejecting the contact when the amount of change is large. With this aspect, the character C can be caused to perform a variety of reaction actions according to the posture of the HMD 30.
[Modification B2]
 The motion control unit 43B of Modification B2 accepts an action instruction from the user U within a specific period on the time axis (hereinafter the "instruction acceptance period"). That is, an action instruction given by the user U within the instruction acceptance period is validly reflected in the action of the character C, whereas an action instruction given outside the instruction acceptance period is ignored. The instruction acceptance period is, for example, a period of a predetermined length (for example, several seconds) starting from the moment the reference line Q begins to intersect the character C. The start or end point of the instruction acceptance period is not limited to this example. For example, the moment a specific event starts in the game may be used as the start point of the instruction acceptance period. Restricting the acceptance of action instructions to the instruction acceptance period in this way gives the user U a moderate sense of tension, and as a result can keep the game from becoming stale.
 The display control unit 42B changes the display mode of the instruction image G (image G2) over time within the instruction acceptance period. For example, as illustrated in FIG. 16, assume that the image G2 displayed as the instruction image G when the reference line Q intersects the character C includes an image G21 and an image G22. The image G21 is an image representing the hand of the user U, like the image G2 described above, and the image G22 is a circular image placed behind the image G21. The display control unit 42B shrinks the size of the image G22 over time from the start to the end of the instruction acceptance period. The size of the image G21 does not change. With this configuration, there is an advantage that the user U can visually grasp, from the display mode of the instruction image G, the instruction acceptance period during which action instructions are accepted (for example, the remaining time). A display attribute of the image G22 other than its size (for example, hue or lightness) may instead be changed over time.
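 The shrinking indicator could be driven by the normalized remaining time, for example; the duration and size below are illustrative assumptions.

```python
ACCEPT_PERIOD = 3.0    # illustrative instruction acceptance period, seconds
G22_MAX_RADIUS = 64.0  # illustrative initial radius of the image G22, pixels

def g22_radius(elapsed: float) -> float | None:
    """Radius of the circular image G22, or None once the period has ended."""
    remaining = ACCEPT_PERIOD - elapsed
    if remaining <= 0.0:
        return None  # action instructions are no longer accepted
    return G22_MAX_RADIUS * (remaining / ACCEPT_PERIOD)
```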
[Modification B3]
 The display control unit 42B of Modification B3 changes the display mode of the instruction image G according to the position of the contact point P at which the reference line Q intersects the character C in the virtual space V. Specifically, as illustrated in FIG. 17, the display mode of the instruction image G differs between the case where the contact point P is at a first position on the surface of the character C and the case where it is at a second position on the surface of the character C in the virtual space V. The first position is, for example, a position at which the character C accepts contact by the user U (for example, the torso of the character C), and the second position is, for example, a position at which the character C rejects contact by the user U (for example, the head of the character C).
 Although FIG. 17 illustrates the case where the contact point P is at the first position or the second position, the display control unit 42B may change the display mode of the instruction image G over time in parallel with the change in the position of the contact point P on the character C. That is, when the user U moves the contact point P over the surface of the character C according to the posture of the HMD 30, the display control unit 42B changes the display mode of the instruction image G over time in conjunction with the movement of the contact point P. For example, within the period in which the contact point P moves from the first position to the second position in FIG. 17, the display control unit 42B changes the display mode of the instruction image G continuously or in steps from the display mode at the first position to the display mode at the second position.
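 The continuous variant amounts to interpolating a display attribute along the path of the contact point P; the following small sketch, with assumed colors and a linear blend, shows the idea.

```python
COLOR_FIRST = (0.2, 0.8, 0.2)   # illustrative display attribute at the first position
COLOR_SECOND = (0.8, 0.2, 0.2)  # illustrative display attribute at the second position

def indicator_color(progress: float) -> tuple:
    """Blend the indicator color as P moves from the first position
    (progress = 0.0) to the second position (progress = 1.0)."""
    t = max(0.0, min(1.0, progress))
    return tuple(a + (b - a) * t for a, b in zip(COLOR_FIRST, COLOR_SECOND))
```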
 In Modification B3, the display mode of the instruction image G changes according to the position of the contact point P. By observing the display mode of the instruction image G, the user U can therefore infer how the character C will act when touched. That is, the user U can be offered the enjoyment of gradually moving the contact point P while inferring the character C's reaction from the display mode of the instruction image G, and then giving an action instruction with the contact point P held at the position where the desired action is expected.
[Modification B4]
 The display control unit 42B of Modification B4 controls the character C so that the line of sight of the character C tracks the reference line Q. For example, the character C is controlled so that its line of sight is directed at the contact point P where the reference line Q intersects the character C. Specifically, as illustrated in FIG. 18, the eyeballs (for example, the pupils) and the head of the character C rotate so as to track the reference line Q. Only one of the eyeballs and the head of the character C may track the reference line Q. According to Modification B4, there is an advantage that the user U can readily feel that the user U is influencing the character C.
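 The tracking itself can be a standard look-at computation from the eye position toward the contact point P; the following minimal sketch assumes a particular yaw/pitch convention.

```python
import math

def gaze_angles(eye, target):
    """Yaw and pitch (radians) that aim the eyes at the contact point P."""
    dx, dy, dz = (t - e for t, e in zip(target, eye))
    yaw = math.atan2(dx, dz)                    # rotation about the vertical axis
    pitch = math.atan2(dy, math.hypot(dx, dz))  # elevation toward the target
    return yaw, pitch

# usage: gaze_angles((0.0, 1.5, 0.0), (0.5, 1.2, 2.0))
```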
[Modification B5]
 In the second embodiment, the display mode of the instruction image G is changed according to whether the reference line Q intersects the character C, but the condition for changing the display mode of the instruction image G is not limited to this example. For example, the display control unit 42B may change the display mode of the instruction image G continuously or in steps according to the distance between the reference line Q and the character C. That is, intersection between the reference line Q and the character C is not an essential condition for changing the display mode of the instruction image G. The display mode of the instruction image G may also be changed according to whether the reference line Q intersects a specific part of the character C. In the state where the reference line Q intersects a part other than the specific part of the character C, the instruction image G is displayed in the same display mode as when the reference line Q does not intersect the character C. As understood from the above description, the display control unit 42B of the second embodiment is comprehensively expressed as an element that changes the display mode of the instruction image G according to the positional relationship between the reference line Q and the character C.
[Modification B6]
 The configuration of the second embodiment, in which the display mode of the instruction image G is changed according to the positional relationship between the reference line Q and the character C, applies equally when the user U uses the terminal device 10 (for example, a smartphone) held in the hand. As in the second embodiment, the imaging range R of the virtual space V displayed on the display device 13 changes according to the posture of the terminal device 10.
 The terminal device 10 of Modification B6 includes a touch panel that detects contact of the user U with the display surface of the display device 13. In the second embodiment, the position of the contact point P in the virtual space V is changed according to the posture of the HMD 30, but in Modification B6 the instruction image G is displayed with the point at which the user U touches the display surface of the display device 13 as the contact point P. For example, a reference line Q passing through the contact point P touched by the user U is set in the virtual space V. That is, a virtual straight line in the direction indicated by the user U through contact with the display surface of the display device 13 (that is, a touch operation) is set in the virtual space V as the reference line Q. The display control unit 42B causes the display device 13 to display the image G2 as the instruction image G when the reference line Q intersects the character C, and causes the display device 13 to display the image G1 as the instruction image G when the reference line Q does not intersect the character C. With this configuration as well, there is an advantage that the user U can easily grasp the positional relationship between the reference line Q and the character C in the virtual space V (for example, the state in which the user U is in contact with the character C in the virtual space V).
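 Deriving the reference line Q from a touch point is an ordinary screen-to-world unprojection; the simplified pinhole-camera sketch below assumes an illustrative field of view and resolution and returns the ray direction in camera space.

```python
import math

FOV_Y = math.radians(60.0)  # illustrative vertical field of view
WIDTH, HEIGHT = 1080, 1920  # illustrative screen resolution, pixels

def touch_to_ray(px: float, py: float):
    """Direction of the reference line Q for a touch at pixel (px, py);
    the ray origin is the position of the virtual camera E."""
    aspect = WIDTH / HEIGHT
    tan_half = math.tan(FOV_Y / 2.0)
    ndc_x = (2.0 * px / WIDTH) - 1.0   # normalized device coordinates in [-1, 1]
    ndc_y = 1.0 - (2.0 * py / HEIGHT)
    d = (ndc_x * aspect * tan_half, ndc_y * tan_half, 1.0)
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)
```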
 As understood from the example of Modification B6, in the second embodiment the configuration in which the display device 13 is worn on the head of the user U may be omitted. In a configuration that does not presuppose wearing on the head of the user U, the display control unit 42B is expressed as an element that causes the display device 13 to display an image of the virtual space V captured by the virtual camera E, whose posture is controlled according to the posture of the display device 13. For a display device 13 used while held in the hand of the user U, the "reference line" is a virtual straight line in the direction indicated by an operation (for example, a touch operation) on the display device 13.
[C: Other Modifications]
 Specific modifications of the first or second embodiment are exemplified below. Two or more aspects arbitrarily selected from the following examples may be combined as appropriate to the extent that they do not contradict each other.
[Modification C1]
 The reference line Q is not limited to the optical axis of the virtual camera E exemplified in the embodiments described above. For example, a straight line perpendicular to the display surface of the display device 13 may be used as the reference line Q. A straight line forming a predetermined angle with the optical axis of the virtual camera E or with a line perpendicular to the display surface may also be used as the reference line Q. In an HMD 30 equipped with an eye tracking (gaze measurement) function for estimating the direction of the line of sight of the user U, the line of sight estimated by that function may be used as the reference line Q. As understood from the above examples, the reference line Q is comprehensively expressed as a virtual straight line set in the virtual space V.
[Modification C2]
 In the embodiments described above, the character C is exemplified as the object placed in the virtual space V, but the object that the user U contacts in the virtual space V is not limited to the character C. For example, inanimate elements such as structures or natural objects in the virtual space V are also encompassed by the concept of the object. For an animate object (the character C), the "action" is, for example, the behavior, conduct, manner, or attitude of the object. For an inanimate object, the "action" is, for example, a change in the form of the element. For example, the opening of a door of a building, or the deformation or movement of a natural object such as a rock, is exemplified as the action of an object.
[Modification C3]
 In the embodiments described above, when the HMD 30 rotates in the roll direction, it is determined that the user U has contacted the character C in the virtual space V and the character C is caused to perform the reaction action, but the trigger for the reaction action (that is, the action instruction) is not limited to this example. For example, rotation of the HMD 30 in the pitch direction or the yaw direction may be treated as contact by the user U to cause the character C to perform the reaction action. The character C may also be caused to perform the reaction action on the condition that, for example, a state in which the reference line Q intersects a specific area of the character C (that is, a state in which the user U gazes at that area for a long time) has continued for a predetermined time. The character C may also be caused to perform the reaction action on the condition that the user U operates an operation device (not shown) connected to the HMD 30. As understood from the above examples, the configuration in which a change in the posture of the HMD 30 (in particular, rotation in the roll direction) triggers the reaction action of the character C may be omitted.
[Modification C4]
 In the embodiments described above, rotation of the HMD 30 in the roll direction is accepted as the action instruction (the instruction of contact with the character C), but the change in the posture of the HMD 30 that is determined to be an action instruction is not limited to rotation in the roll direction. For example, rotation of the HMD 30 in the pitch direction or the yaw direction may be accepted as the action instruction.
[Modification C5]
 The modifications exemplified for the first embodiment (Modifications A1 to A7) apply equally to the second embodiment. Likewise, the modifications exemplified for the second embodiment (Modifications B1 to B6) apply equally to the first embodiment.
[Modification C6]
 In the embodiments described above, the configuration in which the display device 13 and the detection device 14 of the HMD 30 and the information processing device 40 that controls the HMD 30 are realized by a single terminal device 10 has been exemplified, but the HMD 30 and the information processing device 40 may instead be realized as separate devices, as in the image display system 1B (game device) of FIG. 19. The HMD 30 and the information processing device 40 can communicate with each other by wire or wirelessly. Any communication scheme may be used between the HMD 30 and the information processing device 40; for example, short-range wireless communication such as Bluetooth (registered trademark) is preferable.
 As in the first embodiment, the HMD 30 includes the display device 13, the detection device 14, and the attachment 20, and is worn on the head of the user U. The information processing device 40 is a control device that causes the HMD 30 to display various images by communicating with the HMD 30, and includes the control device 11 and the storage device 12. For example, various information terminals such as a mobile phone, a smartphone, a tablet terminal, a personal computer, or a home game device may be used as the information processing device 40. It does not matter whether the information processing device 40 is portable or stationary. By executing the program stored in the storage device 12, the control device 11 functions as the posture analysis unit 41, the display control unit 42 (42A or 42B), and the motion control unit 43 (43A or 43B), as in the first or second embodiment. The configuration of FIG. 19 therefore achieves the same effects as the first or second embodiment.
[Modification C7]
 As exemplified in the embodiments described above, a preferred aspect of the present invention is realized by the cooperation of a computer (specifically, the control device 11) and a program. The program according to each of the embodiments described above is provided in a form stored in a computer-readable recording medium and installed in the computer. The recording medium is, for example, a non-transitory recording medium, a good example being an optical recording medium (optical disc) such as a CD-ROM, but it encompasses any known form of recording medium such as a semiconductor recording medium or a magnetic recording medium. A non-transitory recording medium includes any recording medium other than a transitory, propagating signal, and does not exclude volatile recording media. The program may also be provided to the computer in the form of distribution via a communication network.
[D: Supplementary Notes]
 From the above description, preferred aspects of the present invention are grasped, for example, as follows. To facilitate understanding of each aspect, the reference numerals of the drawings are appended in parentheses below for convenience; this is not intended to limit the present invention to the illustrated aspects.
[Aspect A1]
 A program according to a preferred aspect of the present invention (Aspect A1) causes a computer (11) to function as: a display control unit (42A) that causes a display device (13) of a head mounted display (30) to display a stereoscopic image using binocular parallax, the image being obtained by capturing a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30); and a motion control unit (43A) that controls the motion of an object (C) in the virtual space (V) according to rotation of the head mounted display (30) in the roll direction and the positional relationship between the object (C) and a reference line (Q) whose direction in the virtual space (V) changes according to the direction of the head mounted display (30). In this aspect, the motion of the object (C) in the virtual space (V) is controlled according to rotation of the head mounted display (30) in the roll direction, so an instruction from the user (U) can be reflected in the motion of the object (C) without requiring an operation device, separate from the head mounted display (30), with which the user (U) would input instructions for controlling the motion of the object (C).
 The "head mounted display" is an image display device that can be worn on the head of the user (U). The head mounted display (30) in a preferred aspect includes a display device (13) that displays images and an attachment (20) for mounting the display device (13) on the head of the user (U). This includes not only a configuration in which a dedicated display device (13) is fixedly installed in the attachment (20) for mounting the display device (13) on the head (that is, a dedicated head mounted display product), but also a configuration in which a general-purpose display device (13), usable even when not worn on the head, is detachable from the attachment (20). That is, the display device (13) of the head mounted display (30) includes not only devices used exclusively while worn on the head but also devices usable both when worn on the head and when not. In the configuration in which the display device (13) is detachable from the attachment (20), a portable terminal device (10) such as a smartphone or a tablet terminal is used as the display device (13).
 The "roll direction" means the circumferential direction about an axis in the front-rear direction of the user (U) (for example, the direction perpendicular to the display screen on which images are displayed on the head mounted display (30)). For example, when the user (U) wearing the head mounted display (30) on the head tilts the head to the left or right while facing forward, the head mounted display (30) rotates in the roll direction. The head mounted display (30) can be determined to have rotated in the roll direction not only when rotation in the roll direction alone occurs but also when rotation in the pitch direction or the yaw direction occurs together with the rotation in the roll direction. Whether the posture of the virtual camera (E) is changed according to the rotation in the roll direction is immaterial. That is, both a configuration in which the virtual camera (E) is rotated about its optical axis in conjunction with the rotation in the roll direction and a configuration in which the virtual camera (E) is not rotated even when rotation in the roll direction occurs fall within the scope of Aspect A1.
 An "object" is a virtual body placed in the virtual space (V). A typical example of the object (C) is a character (for example, a human, an animal, or a monster), but inanimate elements such as structures or natural objects in the virtual space (V) may also be included in the concept of the object. For a character, the "motion of the object" is, for example, the behavior, conduct, manner, or attitude of the character. For an inanimate element, the "motion of the object" is, for example, a change in the form of the element (for example, a door or window of a building opening, or a natural object such as a rock deforming or moving).
 The "reference line" is a virtual straight line set in the virtual space (V). For example, the optical axis of the virtual camera (E), the center line of the display screen of the display device (13), or a straight line forming a predetermined angle with either of these is a preferred example of the reference line (Q). In a configuration in which an eye-tracking function that estimates the line of sight of the user (U) is available, the line of sight of the user (U) may be used as the reference line (Q) in addition to the straight lines exemplified above.
[Aspect A2]
 In a preferred example of aspect A1 (aspect A2), the positional relationship is a relationship in which the reference line (Q) and the object (C) intersect. According to this aspect, an instruction corresponding to the intersection of the reference line (Q) and the object (C) can be reflected in the operation of the object (C).
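 As an illustration of the intersecting relationship, the reference line (Q) can be treated as a ray and the object (C) approximated by a spherical collider. The following is a minimal sketch, assuming a unit direction vector; the Ray type and the collider shape are assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class Ray:
    origin: tuple     # (x, y, z), e.g. the virtual camera position
    direction: tuple  # unit vector along the reference line Q

def intersects_sphere(ray: Ray, center: tuple, radius: float):
    """Distance along the reference line to the nearest hit with a spherical
    collider approximating the object, or None if the line misses it."""
    oc = tuple(o - c for o, c in zip(ray.origin, center))
    b = sum(d * e for d, e in zip(ray.direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                 # no intersecting relationship
    t = -b - math.sqrt(disc)        # nearest root of the ray equation
    return t if t >= 0.0 else None  # ignore hits behind the viewpoint
```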
[Aspect A3]
 In a preferred example of aspect A1 or aspect A2 (aspect A3), when the head mounted display (30) rotates in the roll direction, the operation control unit (43A) determines that the user (U) has touched the point (P) in the virtual space (V) at which the reference line (Q) intersects the object (C), and causes the object (C) to execute an operation corresponding to that contact. According to this aspect, by rotating the head mounted display (30) in the roll direction, the user (U) can instruct contact with the point (P) of the object (C) at which the reference line (Q) intersects it.
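 Combining the two sketches above gives one possible form of this determination; all names are carried over from the illustrative code above.

```python
def update_contact(orientation, ray, target_center, target_radius):
    """One frame of the aspect A3 judgment: the user is deemed to touch the
    object only while the reference line hits it AND the HMD has rotated
    in the roll direction (uses the roll/intersection sketches above)."""
    hit = intersects_sphere(ray, target_center, target_radius)
    if hit is not None and has_rolled(*orientation):
        # Contact point P on the object's surface.
        return tuple(o + hit * d for o, d in zip(ray.origin, ray.direction))
    return None  # no contact this frame
```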
[Aspect A4]
 In a preferred example of aspect A3 (aspect A4), the operation control unit (43A) causes the object (C) to execute an operation corresponding to the length of time for which contact with the object (C) is maintained. According to this aspect, the operation of the object (C) can be varied in many ways according to how long the head mounted display (30) is kept rotated in the roll direction.
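 A minimal sketch of a duration-dependent action table follows; the durations and action names are illustrative assumptions.

```python
import time

_contact_started_at = None  # module-level state for this sketch

def action_for_contact(now_touching: bool):
    """Select an action from how long contact has been maintained."""
    global _contact_started_at
    if not now_touching:
        _contact_started_at = None
        return None
    if _contact_started_at is None:
        _contact_started_at = time.monotonic()
    held = time.monotonic() - _contact_started_at
    if held < 1.0:
        return "glance"  # brief touch: the character merely looks over
    if held < 3.0:
        return "smile"   # sustained touch: friendlier reaction
    return "laugh"       # long touch: strongest reaction
```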
[Aspect A5]
 In a preferred example of aspect A3 or aspect A4 (aspect A5), the operation control unit (43A) changes a parameter relating to the object (C) in response to contact with the object (C). According to this aspect, a parameter relating to the object (C) in the virtual space (V) can be changed by the simple operation of rotating the head mounted display (30) in the roll direction. Note that the configuration of "changing a parameter in response to contact with the object (C)" encompasses, for example, both a configuration in which the parameter depends on the presence or absence of contact with the object (C) and a configuration in which the parameter depends on the degree of contact with the object (C).
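 For example, a degree-of-contact variant might accumulate an affinity parameter while contact is maintained; the gain and cap below are assumed values.

```python
AFFINITY_PER_SECOND = 2.0  # assumed gain while contact is maintained
AFFINITY_MAX = 100.0       # assumed upper limit of the parameter

def update_affinity(affinity: float, touching: bool, dt: float) -> float:
    """Degree-of-contact variant: the parameter grows with contact time.
    A presence/absence variant would add a fixed amount per touch instead."""
    if touching:
        affinity = min(AFFINITY_MAX, affinity + AFFINITY_PER_SECOND * dt)
    return affinity
```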
[Aspect A6]
 In a preferred example of any one of aspects A3 to A5 (aspect A6), the operation control unit (43A) causes the object (C) to execute an operation corresponding to the position on the object (C) at which the reference line (Q) intersects it. According to this aspect, the operation of the object (C) can be varied in many ways according to the position of the point (P) at which the reference line (Q) intersects the object (C) in the virtual space (V).
[Aspect A7]
 In a preferred example of any one of aspects A3 to A6 (aspect A7), the operation control unit (43A) determines that the user (U) has touched the object (C) when the head mounted display (30) rotates to one side in the roll direction, and determines that the contact with the object (C) has been released when, in the state of contact with the object (C), the head mounted display (30) rotates to the other side in the roll direction or returns from the rotation to the one side. According to this aspect, the user (U) can instruct both the occurrence and the release of contact with the object (C) in the virtual space (V) according to the direction of rotation in the roll direction.
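 One possible state machine for this aspect is sketched below, reusing ROLL_THRESHOLD_RAD from the earlier sketch; the sign convention and the hysteresis factor are assumptions.

```python
def next_contact_state(touching: bool, roll_rad: float) -> bool:
    """Aspect A7 sketch: rolling past the threshold on one (here: positive)
    side starts a touch; rolling back toward level, or to the other side,
    releases it. The 0.5 factor is an assumed hysteresis margin."""
    if not touching:
        return roll_rad > ROLL_THRESHOLD_RAD  # rotation to one side: touch
    if roll_rad < ROLL_THRESHOLD_RAD * 0.5:   # returned or reversed
        return False                          # contact released
    return True                               # contact maintained
```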
[Aspect A8]
 In a preferred example of any one of aspects A1 to A7 (aspect A8), the operation control unit (43A) causes the object (C) to execute an operation corresponding to the rotation angle (θ) of the head mounted display (30) in the roll direction. According to this aspect, the object (C) can be caused to execute various operations according to the rotation angle (θ) of the head mounted display (30).
[Aspect A9]
 In a preferred example of any one of aspects A1 to A8 (aspect A9), the display control unit (42A) moves the virtual camera (E) within the virtual space (V) in response to the rotation of the head mounted display (30) in the roll direction. According to this aspect, the simple operation of rotating the head mounted display (30) in the roll direction can move the virtual camera (E) (the virtual viewpoint) toward or away from the object (C) within the virtual space (V). The direction of movement of the virtual camera (E) is arbitrary, but a configuration in which the virtual camera (E) is moved in the direction of the object (C) is preferred: for example, a configuration in which the virtual camera (E) approaches the object (C) when the head mounted display (30) rotates to one side in the roll direction and moves away from the object (C) when it rotates to the other side.
 Note that the configuration of "moving the virtual camera (E) within the virtual space (V) in response to the rotation of the head mounted display (30) in the roll direction" encompasses, for example, both a configuration in which the movement of the virtual camera (E) depends on the presence or absence of rotation in the roll direction and a configuration in which it depends on the angle of that rotation.
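 An angle-dependent variant of this movement might scale a dolly step by the roll angle; the gain below is an assumed value.

```python
import math

DOLLY_SPEED = 0.5  # metres per second per radian of roll (assumed gain)

def dolly_camera(cam_pos, target_pos, roll_rad: float, dt: float):
    """Aspect A9 sketch: positive roll moves the virtual camera toward the
    object, negative roll moves it away, along the line joining them."""
    to_target = tuple(t - c for t, c in zip(target_pos, cam_pos))
    dist = math.sqrt(sum(v * v for v in to_target)) or 1.0
    unit = tuple(v / dist for v in to_target)
    step = DOLLY_SPEED * roll_rad * dt
    return tuple(c + step * u for c, u in zip(cam_pos, unit))
```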
[Aspect A10]
 In a preferred example of any one of aspects A1 to A9 (aspect A10), the display control unit (42A) changes the imaging range (R) of the virtual camera (E) in response to the rotation of the head mounted display (30) in the roll direction. According to this aspect, the simple operation of rotating the head mounted display (30) in the roll direction can change the range imaged by the virtual camera (E) in the virtual space (V) (that is, the angle of view). A change in the imaging range (R) of the virtual camera (E) means enlarging (zooming in on) or reducing (zooming out from) an element imaged in the virtual space (V) (for example, the object (C)).
 Note that the configuration of "changing the imaging range (R) of the virtual camera (E) in response to the rotation of the head mounted display (30) in the roll direction" encompasses, for example, both a configuration in which the imaging range (R) depends on the presence or absence of rotation in the roll direction and a configuration in which it depends on the angle of that rotation.
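 An angle-dependent variant of the change in imaging range can be expressed as a field-of-view adjustment; the gain and limits are assumed values.

```python
FOV_MIN_DEG, FOV_MAX_DEG = 20.0, 90.0  # assumed zoom limits
ZOOM_GAIN = 40.0  # degrees of field of view per radian of roll (assumed)

def zoomed_fov(base_fov_deg: float, roll_rad: float) -> float:
    """Aspect A10 sketch: rolling one way narrows the angle of view
    (zoom in); rolling the other way widens it (zoom out)."""
    fov = base_fov_deg - ZOOM_GAIN * roll_rad
    return max(FOV_MIN_DEG, min(FOV_MAX_DEG, fov))
```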
[Aspect A11]
 In a preferred example of any one of aspects A1 to A10 (aspect A11), the operation control unit (43A) determines that the head mounted display (30) has rotated in the roll direction when the rotation angle (θ) of the head mounted display (30) in the roll direction exceeds a threshold (θt). In this aspect, the head mounted display (30) is determined to have rotated in the roll direction when the rotation angle (θ) exceeds the threshold (θt); that is, a rotation that falls short of the threshold (θt) is not determined to be a rotation in the roll direction. The possibility of determining, with excessive frequency, that the head mounted display (30) has rotated can therefore be reduced.
[Aspect A12]
 An image display system (1A, 1B) according to a preferred aspect of the present invention (aspect A12) is an image display system (1A, 1B) comprising a head mounted display (30) and an information processing apparatus (40), wherein the information processing apparatus (40) comprises: a display control unit (42A) that causes the display device (13) of the head mounted display (30) to display a stereoscopic image using binocular parallax, the image being obtained by imaging a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30); and an operation control unit (43A) that controls the operation of an object (C) in the virtual space (V) according to the rotation of the head mounted display (30) in the roll direction and the positional relationship between the object (C) and a reference line (Q) whose direction in the virtual space (V) changes according to the direction of the head mounted display (30). In this aspect, the operation of the object (C) in the virtual space (V) is controlled according to the rotation of the head mounted display (30) in the roll direction, so an instruction from the user (U) can be reflected in the operation of the object (C) without preparing, separately from the head mounted display (30), an operation device with which the user (U) inputs instructions for controlling the operation of the object (C).
[Aspect B]
 Content such as games in which the user communicates with an object such as a character in a virtual space has conventionally been proposed. When this type of content is displayed on a head mounted display, there is a problem in that it is difficult for the user to grasp the positional relationship between a virtual straight line (hereinafter referred to as the "reference line"), such as the optical axis of the virtual camera or the user's line of sight, and an object such as a character in the virtual space. In view of the above circumstances, aspect B of the present invention aims to make it easy for the user to grasp the positional relationship between an object in the virtual space and the reference line.
[Aspect B1]
 A program according to a preferred aspect of the present invention (aspect B1) causes a computer (11) to function as a display control unit (42B) that causes a display device (13) of a head mounted display (30) to display a stereoscopic image using binocular parallax, the image being obtained by imaging a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30), wherein the display control unit (42B) displays an indication image (G) in the direction of a reference line (Q) whose direction in the virtual space (V) changes according to the direction of the head mounted display (30), and changes the display mode of the indication image (G) according to the positional relationship between the reference line (Q) and an object (C) in the virtual space (V). In this configuration, the display mode of the indication image (G), which represents the direction of the reference line (Q) in the virtual space (V), changes according to the positional relationship between the reference line (Q) and the object (C); this has the advantage that the user (U) can easily grasp the positional relationship between the reference line (Q) and the object (C) in the virtual space (V).
 The "reference line" is a virtual straight line set in the virtual space (V). For example, the optical axis of the virtual camera (E), the center line of the display screen of the display device (13), or a straight line forming a predetermined angle with either of these is a preferred example of the reference line (Q). In a configuration in which an eye-tracking function that estimates the line of sight of the user (U) is available, the line of sight of the user (U) may be used as the reference line (Q) in addition to the straight lines exemplified above.
 An "object" is a virtual body placed in the virtual space (V). A typical example of the object (C) is a character (for example, a human, an animal, or a monster), but inanimate elements such as buildings or natural features in the virtual space (V) may also fall under the concept of an object.
 The "display mode" means a visually distinguishable property of an image. For example, in addition to the three attributes of color, namely hue (tone), saturation, and lightness (gradation), size and image content (for example, a pattern or shape) are also included in the concept of the display mode. A change in the display mode also includes switching between display and non-display.
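 For illustration, the display-mode change of aspect B1 might switch the hue and size of a reticle-like indication image depending on whether the reference line intersects the object; the colours and sizes below are assumptions.

```python
HIT_COLOR = (255, 64, 64)     # assumed hue while the line hits the object
MISS_COLOR = (255, 255, 255)  # assumed hue otherwise

def indication_style(hit_distance):
    """Choose the display mode of the indication image G from the positional
    relationship: enlarged and red on a hit, small and white on a miss —
    hue and size being two of the 'display mode' properties listed above."""
    if hit_distance is not None:
        return {"color": HIT_COLOR, "radius_px": 12, "visible": True}
    return {"color": MISS_COLOR, "radius_px": 6, "visible": True}
```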
[Aspect B2]
 In a preferred example of aspect B1 (aspect B2), the display control unit (42B) displays the indication image (G) at the point (P) on the surface of the object (C) at which the reference line (Q) intersects it. According to this aspect, the indication image (G) is placed on the surface of the object (C) in the virtual space (V), which has the advantage that the user (U) can easily perceive, from the indication image (G), the state of being in contact with the surface of the object (C).
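 A minimal placement rule for this aspect is sketched below, reusing the Ray sketch above; the default depth used when the line misses the object is an assumed value.

```python
def indication_position(ray, hit_distance, default_depth: float = 2.0):
    """Aspect B2 sketch: place the indication image G at the point P on the
    object's surface when the reference line hits it; otherwise float it
    at an assumed default depth along the reference line."""
    t = hit_distance if hit_distance is not None else default_depth
    return tuple(o + t * d for o, d in zip(ray.origin, ray.direction))
```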
[Aspect B3]
 A program according to a preferred example of aspect B1 or aspect B2 (aspect B3) causes the computer (11) to function as an operation control unit (43B) that controls the operation of the object (C) according to the positional relationship between the reference line (Q) and the object (C). According to this aspect, the object (C) can be caused to execute various operations reflecting instructions from the user (U).
 Note that the "operation of the object" is, for example, the behavior, action, appearance, or attitude of a character, which is one example of the object (C). For an inanimate element, the "operation of the object" is, for example, a change in the form of that element (for example, a door or window of a building opening, or a natural feature such as a rock deforming or moving).
[Aspect B4]
 In a preferred example of aspect B3 (aspect B4), when an operation instruction is accepted from the user (U), the operation control unit (43B) controls the operation of the object (C) according to the positional relationship between the reference line (Q) and the object (C). According to this aspect, by giving an operation instruction with the reference line (Q) moved to a desired position, the user (U) can cause the object (C) to execute an operation corresponding to that desired position on the object (C).
 The "operation instruction" is an instruction for producing an action (for example, contact) on the object (C) in the virtual space (V); in other words, it is an instruction for fixing the provisional direction represented by the indication image (G) to the direction desired by the user (U). For example, it is determined that an operation instruction has been accepted when a specific change (for example, rotation in the roll direction) occurs in the direction of the head mounted display (30). Alternatively, it may be determined that an operation instruction has been accepted when, for example, the reference line (Q) has remained stationary in the virtual space (V) for a predetermined time (for example, two seconds).
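 The dwell variant mentioned above (the reference line remaining stationary for a predetermined time) might be detected as follows; the stillness tolerance is an assumed value.

```python
import time

DWELL_SECONDS = 2.0   # the predetermined time exemplified in the text
STILL_EPSILON = 0.01  # assumed per-axis tolerance for "stationary"

class DwellDetector:
    """Accept an operation instruction once the reference line's direction
    has stayed (nearly) still for DWELL_SECONDS."""
    def __init__(self):
        self.last_dir = None
        self.still_since = None

    def update(self, direction) -> bool:
        now = time.monotonic()
        moved = (self.last_dir is None or
                 max(abs(a - b) for a, b in zip(direction, self.last_dir))
                 > STILL_EPSILON)
        if moved:
            self.still_since = now
        self.last_dir = direction
        return now - self.still_since >= DWELL_SECONDS
```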
[Aspect B5]
 In a preferred example of aspect B4 (aspect B5), the operation control unit (43B) determines that the operation instruction has been accepted when a change in the posture of the head mounted display (30) satisfies a predetermined condition, and causes the object (C) to execute an operation corresponding to the amount of that posture change, and the display control unit (42B) changes the display mode of the indication image (G) according to the change in the posture of the head mounted display (30) before the operation instruction is accepted. According to this aspect, the object (C) can be caused to execute various operations according to the posture of the head mounted display (30). Moreover, since the display mode of the indication image (G) changes according to the posture of the head mounted display (30), there is the advantage that the user (U) can easily grasp the sense of affecting the object (C) (for example, the sense of touching it). The "predetermined condition" is a condition on the posture change that is determined to be an operation instruction by the user (U); in the embodiments it is, for example, rotation in the roll direction.
[Aspect B6]
 In a preferred example of aspect B4 or aspect B5 (aspect B6), the display control unit (42B) changes the display mode of the indication image (G) over time within an instruction acceptance period during which acceptance of the operation instruction from the user (U) is permitted. According to this aspect, the user (U) can visually grasp, from the display mode of the indication image (G), the instruction acceptance period during which acceptance of the operation instruction is permitted (for example, the remaining time of the instruction acceptance period). The "instruction acceptance period" is, for example, the remaining time during which one contact with the character is possible, or the remaining time of an operation mode in which contact with the character is permitted.
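 For example, the remaining time of the instruction acceptance period might be mapped onto the opacity of the indication image; the period length is an assumed value.

```python
ACCEPT_PERIOD = 5.0  # assumed length of the instruction acceptance period

def indication_alpha(elapsed: float) -> float:
    """Aspect B6 sketch: the indication image fades as the acceptance
    period runs out, so its opacity doubles as a visual countdown."""
    remaining = max(0.0, ACCEPT_PERIOD - elapsed)
    return remaining / ACCEPT_PERIOD  # 1.0 at the start, 0.0 when expired
```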
[Aspect B7]
 In a preferred example of any one of aspects B3 to B6 (aspect B7), the display control unit (42B) changes the display mode of the indication image (G) according to the position of the point (P) on the object (C) at which the reference line (Q) intersects it. According to this aspect, the display mode of the indication image (G) changes according to the position of the point (P) at which the reference line (Q) intersects the object (C), so the user (U) can infer the operation of the object (C) from the display mode of the indication image (G).
[Aspect B8]
 In a preferred example of aspect B7 (aspect B8), the display control unit (42B) changes the display mode of the indication image (G) over time in parallel with the change in the position of the point (P) on the object (C) at which the reference line (Q) intersects it. According to this aspect, the display mode of the indication image (G) changes over time in parallel with the change in the position of the point (P) at which the reference line (Q) intersects the object (C), so the user (U) can be offered the enjoyment of gradually shifting the reference line (Q) while checking the display mode of the indication image (G). Note that a "change of the display mode over time" means that the display mode changes gradually with the passage of time rather than only switching between two states at a specific moment; whether the display mode changes continuously or in steps is immaterial.
[Aspect B9]
 In a preferred example of any one of aspects B1 to B8 (aspect B9), the operation control unit (43B) controls the object (C) so that the line of sight of the object (C) tracks the reference line (Q). According to this aspect, the user (U) can easily grasp the sense of affecting the object (C) (for example, the sense of touching it). Note that "the line of sight of the object tracks the reference line" covers not only the case in which the eyeball (for example, the pupil) of the object (C) rotates so as to track the reference line (Q), but also the case in which the head of the object (C) rotates according to the reference line (Q).
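 A minimal way to make the character's gaze track the reference line is to aim the eye (or head) at a point on the line, such as the contact point P. The sketch below assumes a right-handed frame with +z forward and +y up.

```python
import math

def look_at_yaw_pitch(eye_pos, target_pos):
    """Yaw/pitch (radians) that aim the character's eyes or head at a point
    on the reference line, such as the contact point P."""
    dx, dy, dz = (t - e for t, e in zip(target_pos, eye_pos))
    yaw = math.atan2(dx, dz)                    # rotation about vertical
    pitch = math.atan2(dy, math.hypot(dx, dz))  # up/down rotation
    return yaw, pitch
```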
[Aspect B10]
 An image display system (1A, 1B) according to a preferred aspect of the present invention (aspect B10) is an image display system (1A, 1B) comprising a head mounted display (30) and an information processing apparatus (40), wherein the information processing apparatus (40) comprises a display control unit (42B) that causes the display device (13) of the head mounted display (30) to display a stereoscopic image using binocular parallax, the image being obtained by imaging a virtual space (V) with a virtual camera (E) whose direction is controlled according to the direction of the head mounted display (30), and the display control unit (42B) displays an indication image (G) in the direction of a reference line (Q) whose direction in the virtual space (V) changes according to the direction of the head mounted display (30), and changes the display mode of the indication image (G) according to the positional relationship between the reference line (Q) and an object (C) in the virtual space (V). In this configuration, the display mode of the indication image (G), which represents the direction of the reference line (Q) in the virtual space (V), changes according to the positional relationship between the reference line (Q) and the object (C), so the user (U) can easily grasp the positional relationship between the reference line (Q) and the object (C) in the virtual space (V). The head mounted display (30) and the information processing apparatus (40) may be either integrated or separate.
DESCRIPTION OF REFERENCE SIGNS: 1A, 1B: image display system; 10: terminal device; 11: control device; 12: storage device; 13: display device; 14: detection device; 20: mounting tool; 30: HMD; 40: information processing apparatus; 41: posture analysis unit; 42A, 42B: display control unit; 43A, 43B: operation control unit; V: virtual space; E: virtual camera; C: character; R: imaging range; Q: reference line; G: indication image; P: contact point.

Claims (12)

  1.  A non-transitory recording medium having recorded thereon a program that causes a computer to function as:
     a display control unit that causes a display device of a head mounted display to display a stereoscopic image using binocular parallax, the image being obtained by imaging a virtual space with a virtual camera whose direction is controlled according to the direction of the head mounted display; and
     an operation control unit that controls an operation of an object in the virtual space according to rotation of the head mounted display in a roll direction and a positional relationship between the object and a reference line whose direction in the virtual space changes according to the direction of the head mounted display.
  2.  The recording medium according to claim 1, wherein the positional relationship is a relationship in which the reference line and the object intersect.
  3.  The recording medium according to claim 1 or claim 2, wherein, when the head mounted display rotates in the roll direction, the operation control unit determines that a user has touched a point in the virtual space at which the reference line intersects the object, and causes the object to execute an operation corresponding to that contact.
  4.  The recording medium according to claim 3, wherein the operation control unit causes the object to execute an operation corresponding to a length of time for which contact with the object is maintained.
  5.  The recording medium according to claim 3 or claim 4, wherein the operation control unit changes a parameter relating to the object in response to contact with the object.
  6.  The recording medium according to any one of claims 3 to 5, wherein the operation control unit causes the object to execute an operation corresponding to a position on the object at which the reference line intersects it.
  7.  The recording medium according to any one of claims 3 to 6, wherein the operation control unit determines that a user has touched the object when the head mounted display rotates to one side in the roll direction, and determines that the contact with the object has been released when, in the state of contact with the object, the head mounted display rotates to the other side in the roll direction or returns from the rotation to the one side.
  8.  The recording medium according to any one of claims 1 to 7, wherein the operation control unit causes the object to execute an operation corresponding to a rotation angle of the head mounted display in the roll direction.
  9.  The recording medium according to any one of claims 1 to 8, wherein the display control unit moves the virtual camera within the virtual space in response to rotation of the head mounted display in the roll direction.
  10.  The recording medium according to any one of claims 1 to 9, wherein the display control unit changes an imaging range of the virtual camera in response to rotation of the head mounted display in the roll direction.
  11.  The recording medium according to any one of claims 1 to 10, wherein the operation control unit determines that the head mounted display has rotated in the roll direction when a rotation angle of the head mounted display in the roll direction exceeds a threshold.
  12.  An image display system comprising a head mounted display and an information processing apparatus, wherein the information processing apparatus comprises:
      a display control unit that causes a display device of the head mounted display to display a stereoscopic image using binocular parallax, the image being obtained by imaging a virtual space with a virtual camera whose direction is controlled according to the direction of the head mounted display; and
      an operation control unit that controls an operation of an object in the virtual space according to rotation of the head mounted display in a roll direction and a positional relationship between the object and a reference line whose direction in the virtual space changes according to the direction of the head mounted display.
PCT/JP2019/001071 2018-01-22 2019-01-16 Image display system and recording medium on which program for image display system is recorded WO2019142817A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018-007913 2018-01-22
JP2018-007914 2018-01-22
JP2018007913A JP6587364B2 (en) 2018-01-22 2018-01-22 Program and image display system
JP2018007914A JP6628331B2 (en) 2018-01-22 2018-01-22 Program and image display system

Publications (1)

Publication Number Publication Date
WO2019142817A1 true WO2019142817A1 (en) 2019-07-25

Family

ID=67301492

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/001071 WO2019142817A1 (en) 2018-01-22 2019-01-16 Image display system and recording medium on which program for image display system is recorded

Country Status (1)

Country Link
WO (1) WO2019142817A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016530600A (en) * 2013-06-18 2016-09-29 マイクロソフト テクノロジー ライセンシング,エルエルシー Multi-step virtual object selection
JP2017040970A (en) * 2015-08-17 2017-02-23 株式会社コロプラ Method and program for controlling head-mounted display system

Similar Documents

Publication Publication Date Title
CN108499105B (en) Method, device and storage medium for adjusting visual angle in virtual environment
JP6057396B2 (en) 3D user interface device and 3D operation processing method
CN107710105B (en) Operation input device and operation input method
WO2014016987A1 (en) Three-dimensional user-interface device, and three-dimensional operation method
JP4413203B2 (en) Image presentation device
EP3321777A1 (en) Dragging virtual elements of an augmented and/or virtual reality environment
WO2020241189A1 (en) Information processing device, information processing method, and program
US10649616B2 (en) Volumetric multi-selection interface for selecting multiple objects in 3D space
WO2019187862A1 (en) Information processing device, information processing method, and recording medium
US20220291744A1 (en) Display processing device, display processing method, and recording medium
CN108369451B (en) Information processing apparatus, information processing method, and computer-readable storage medium
US10771707B2 (en) Information processing device and information processing method
JP6587364B2 (en) Program and image display system
JP7040521B2 (en) Information processing equipment, information processing methods, and programs
WO2019142817A1 (en) Image display system and recording medium on which program for image display system is recorded
CN108369477B (en) Information processing apparatus, information processing method, and program
JP6730753B2 (en) Program and image display system
JP6628331B2 (en) Program and image display system
JP7387198B2 (en) Program and image display system
JP6518028B1 (en) Display device, display method, program, and non-transitory computer readable information recording medium
WO2019142621A1 (en) Information processing device, information processing method, and program
JP7492497B2 (en) PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS
WO2024057783A1 (en) Information processing device provided with 360-degree image viewpoint position identification unit
JP2023166053A (en) Information processing program, information processing system, and information processing method
CN113617023A (en) Program, information processing method, information processing apparatus, and information processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19741723

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19741723

Country of ref document: EP

Kind code of ref document: A1