US20230121976A1 - Animation production system - Google Patents
Animation production system
- Publication number: US20230121976A1 (application Ser. No. 18/063,870)
- Authority: US (United States)
- Prior art keywords: character, user, controller, track, virtual space
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A63F 13/5258: Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
- A63F 13/212: Input arrangements for video game devices using sensors worn by the player, e.g. for measuring heart beat or leg activity
- A63F 13/25: Output arrangements for video game devices
- A63F 13/428: Processing input control signals by mapping them into game commands, involving motion or position input signals, e.g. the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
- A63F 13/5255: Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
- A63F 13/63: Generating or modifying game content before or while executing the game program, by the player, e.g. authoring using a level editor
- A63F 2300/6018: Importing or creating game content where the content is authored by the player, e.g. a level editor, or created by the game device at runtime
- A63F 2300/609: Unlocking hidden game elements, e.g. features, items, levels
- A63F 2300/64: Computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
- A63F 2300/65: Computing the condition of a game character
- A63F 2300/6607: Rendering three-dimensional images for animating game characters, e.g. skeleton kinematics
- A63F 2300/6661: Rendering three-dimensional images for changing the position of the virtual camera

All of these codes fall under class A63F (A: Human necessities; A63: Sports, games, amusements; A63F: Card, board, or roulette games; indoor games using small moving playing bodies; video games; games not otherwise provided for).
Definitions
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) worn by a user in the animation production system of the present embodiment;
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention;
- FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as an HMD) 110 according to the present embodiment;
- FIG. 4 shows a schematic view of the appearance of the controller 210 according to the present embodiment;
- FIG. 5 shows a functional configuration diagram of the HMD 110 according to the present embodiment;
- FIG. 6 shows a functional configuration diagram of the controller 210 according to the present embodiment;
- FIG. 7 shows a functional configuration diagram of an image producing device 310 according to the present embodiment;
- FIG. 8 is a flowchart illustrating an example of a track generation process according to an embodiment of the present invention;
- FIG. 9 is a flowchart illustrating an example of a track editing process according to an embodiment of the present invention;
- FIG. 10(a) is a diagram illustrating an example of a user interface for editing a track according to an embodiment of the present invention;
- FIG. 10(b) is a diagram illustrating an example of a user interface for editing a track according to an embodiment of the present invention;
- FIG. 11 is a diagram illustrating an operation of purchasing or renting action data as an asset.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) worn by a user in the animation production system of the present embodiment. In the animation production system of the present embodiment, a character 4 and a camera 3 are disposed in the virtual space 1, and the character 4 is shot using the camera 3. In the virtual space 1, a photographer 2 is also disposed, and the camera 3 is virtually operated by the photographer 2. As shown in FIG. 1, a user creates an animation by placing the character 4 and the camera 3 while viewing the virtual space 1 from a bird's-eye perspective in TPV (third-person view), by shooting the character 4 in FPV (first-person view) as the photographer 2, and by performing as the character 4 in FPV. In the virtual space 1, a plurality of characters (in the example shown in FIG. 1, a character 4 and a character 5) can be disposed, and the user can perform while possessing the character 4 and the character 5, respectively. That is, in the animation production system of the present embodiment, one user can play a number of roles. In addition, since the camera 3 can be virtually operated by the user acting as the photographer 2, natural camera work can be realized and the expression of the movie being shot can be enriched.
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention. The animation production system 300 may comprise, for example, an HMD 110, a controller 210, and an image producing device 310 that functions as a host computer. An infrared camera (not shown) or the like can also be added to the animation production system 300 for detecting the position, orientation, and slope of the HMD 110 or the controller 210. These devices may be connected to each other by wired or wireless means. For example, each device may be equipped with a USB port to establish communication by cable connection, or communication may be established over a wired or wireless connection such as HDMI, wired LAN, infrared, Bluetooth (TM), or WiFi (TM). The image producing device 310 may be a PC, a game machine, a portable communication terminal, or any other device having a calculation processing function.
- FIG. 3 shows a schematic view of the appearance of the head mount display (hereinafter referred to as the HMD) 110 according to the present embodiment, and FIG. 5 shows a functional configuration diagram of the HMD 110. The HMD 110 is mounted on the user's head and includes a display panel 120 placed in front of the user's left and right eyes. Although both optically transmissive and non-transmissive displays are contemplated as the display panel, this embodiment illustrates a non-transmissive display panel, which can provide a greater sense of immersion. The display panel 120 displays a left-eye image and a right-eye image, which can present the user with a three-dimensional image by exploiting the parallax between the two eyes. As long as left-eye and right-eye images can be displayed, either separate left-eye and right-eye displays or a single integrated display for both eyes may be provided.
- The housing portion 130 of the HMD 110 includes a sensor 140. The sensor 140 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof, to detect movements such as the orientation or tilt of the user's head. When the vertical direction of the user's head is taken as the Y-axis, the axis connecting the center of the display panel 120 with the user and corresponding to the user's anteroposterior direction as the Z-axis, and the axis corresponding to the user's left-right direction as the X-axis, the sensor 140 can detect the rotation angle around the X-axis (the pitch angle), the rotation angle around the Y-axis (the yaw angle), and the rotation angle around the Z-axis (the roll angle).
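- As a rough illustration of this axis convention, the sketch below converts a unit quaternion reported by such a sensor into pitch, yaw, and roll angles. This is a minimal example assuming a Tait-Bryan decomposition in the Y-up, Z-forward frame described above; it is not code from the patent itself.

```python
import math

def quaternion_to_euler(w: float, x: float, y: float, z: float):
    """Decompose a unit quaternion into the head angles described above:
    pitch around X (left-right axis), yaw around Y (vertical axis),
    roll around Z (anteroposterior axis). Angles are in radians."""
    # Pitch: rotation angle around the X-axis.
    pitch = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Yaw: rotation angle around the Y-axis (clamped to avoid NaN at the poles).
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    yaw = math.asin(s)
    # Roll: rotation angle around the Z-axis.
    roll = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return pitch, yaw, roll
```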
- In place of or in addition to the sensor 140, the housing portion 130 of the HMD 110 may also include a plurality of light sources 150 (e.g., infrared LEDs or visible-light LEDs). A camera (e.g., an infrared camera or a visible-light camera) installed outside the HMD 110 (e.g., in the room) can detect the position, orientation, and tilt of the HMD 110 in a particular space by detecting these light sources. Alternatively, for the same purpose, the HMD 110 may be provided with a camera for detecting a light source installed in the housing portion 130 of the HMD 110.
- The housing portion 130 of the HMD 110 may also include an eye tracking sensor. The eye tracking sensor is used to detect the gaze direction and gaze point of the user's left and right eyes. Various types of eye tracking sensors exist; for example, weak infrared light is shone onto the left and right eyes, the position of the light reflected on the cornea is used as a reference point, the gaze direction of each eye is detected from the position of the pupil relative to that reference point, and the intersection of the left and right gaze directions is taken as the point of focus.
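- Because two gaze rays rarely intersect exactly in 3D, the point of focus is commonly approximated by the midpoint of the shortest segment between them. A minimal sketch of that calculation follows; the function name and the NumPy dependency are assumptions, not part of the patent.

```python
import numpy as np

def focus_point(origin_l, dir_l, origin_r, dir_r):
    """Approximate the focus point as the midpoint of the shortest segment
    between the left-eye and right-eye gaze rays (NumPy arrays of shape (3,))."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:        # gaze rays are (nearly) parallel
        return origin_l + d_l    # fall back to a point straight ahead
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    return (origin_l + t_l * d_l + origin_r + t_r * d_r) / 2.0
```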
- FIG. 4 shows a schematic view of the appearance of the controller 210 according to the present embodiment, and FIG. 6 shows a functional configuration diagram of the controller 210. The controller 210 allows the user to make predetermined inputs in the virtual space. The controller 210 may be configured as a set of a left-hand controller 220 and a right-hand controller 230. The left hand controller 220 and the right hand controller 230 may each have an operation trigger button 240, infrared LEDs 250, a sensor 260, a joystick 270, and a menu button 280.
- The operation trigger buttons 240 are positioned as 240a and 240b where the middle finger and index finger pull a trigger when the grip 235 of the controller 210 is gripped. A plurality of infrared LEDs 250 are provided on the frame 245, which is formed in a ring extending downward from both sides of the controller 210, and a camera (not shown) provided outside the controller can detect the position, orientation, and slope of the controller 210 in a particular space from the positions of these infrared LEDs.
- The controller 210 may also incorporate a sensor 260 to detect movements such as the orientation or tilt of the controller 210. The sensor 260 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof. Additionally, the top surface of the controller 210 may include a joystick 270 and a menu button 280. The joystick 270 is envisioned to be movable through 360 degrees around a reference point and to be operated with the thumb while the grip 235 of the controller 210 is gripped; the menu button 280 is likewise assumed to be operated with the thumb. The controller 210 may further include a vibrator (not shown) for providing vibration to the hand of the user operating the controller 210. The controller 210 also includes an input/output unit and a communication unit for outputting information such as the position, orientation, and slope of the controller 210 and the state of the buttons and joystick, and for receiving information from the host computer. Whether or not the user is grasping the controller 210 and manipulating its buttons and joysticks, the system can use the information detected by the infrared LEDs and sensors to determine the movement and attitude of the user's hand, and can pseudo-display and operate the user's hand in the virtual space.
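- A minimal sketch of how tracked controller state could drive the pseudo-displayed hand; the `hand_view` interface and the input field names are hypothetical, introduced only for illustration.

```python
def update_virtual_hand(hand_view, tracking, inputs):
    """Mirror a tracked controller as a hand model in the virtual space."""
    hand_view.set_transform(tracking.position, tracking.rotation)
    # Curl the displayed fingers to match the physical inputs (range 0.0-1.0).
    hand_view.set_finger_curl("index", inputs.trigger_value)
    hand_view.set_finger_curl("middle", inputs.grip_value)
    hand_view.set_finger_curl("thumb", 1.0 if inputs.thumb_touching else 0.0)
```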
- FIG. 7 shows a functional configuration diagram of the image producing device 310 according to this embodiment. The image producing device 310 may be a device such as a PC, a game machine, or a portable communication terminal that has functions for storing the user input information transmitted from the HMD 110 or the controller 210 (the user's head movement and the movement and operation of the controller, acquired by the sensors), for performing predetermined computational processing, and for generating an image. The image producing device 310 may include an input/output unit 320 for establishing a wired connection with a peripheral device such as the HMD 110 or the controller 210, and a communication unit 330 for establishing a wireless connection such as infrared, Bluetooth, or WiFi (registered trademark). The information received via the input/output unit 320 and/or the communication unit 330 regarding the movement of the user's head and the movement and operation of the controller is detected in the control unit 340 as input contents, including the user's position, line of sight, attitude, speech, utterance, and operation, and a control program stored in the storage unit 350 is executed according to these input contents to perform processes such as controlling the character and generating an image. The control unit 340 may be composed of a CPU, but by further providing a GPU specialized for image processing, information processing and image processing can be distributed and overall processing efficiency can be improved. The image producing device 310 may also communicate with other computing devices and have them share in the information processing and image processing.
- The control unit 340 includes a user input detecting unit 410 that detects information received from the HMD 110 and/or the controller 210 regarding the movement of the user's head, the user's speech and utterances, and the movement and operation of the controller; a character control unit 420 that executes a control program stored in the control program storage unit 520 for a character stored in advance in the character data storage unit 510 of the storage unit 350; and an image producing unit 430 that generates an image based on the character control. Here, control of the character's movement is realized by converting the information detected through the HMD 110 or the controller 210, such as the direction and inclination of the user's head or the movement of the hands, into the movement of each part of a bone structure created in accordance with the movements and constraints of the joints of the human body, and applying the bone structure's movement to the previously stored character data by associating the character data with the bone structure. Further, the control unit 340 includes a recording and playback executing unit 440 for recording and playing back the image-generated character on a track, and an editing executing unit 450 for editing each track and generating the final content.
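- As a rough sketch of that bone mapping (the rig interface and bone names below are assumptions for illustration, not the patent's API), the tracked device transforms drive the head and hand bones while the remaining joints are solved under human joint constraints:

```python
from dataclasses import dataclass

@dataclass
class Transform:
    position: tuple[float, float, float]
    rotation: tuple[float, float, float, float]  # quaternion (w, x, y, z)

def apply_tracking_to_rig(rig, hmd: Transform, left: Transform, right: Transform):
    """Drive a character's bone structure from tracked devices."""
    rig.bone("head").set_world_transform(hmd)      # head follows the HMD
    rig.bone("hand_l").set_world_transform(left)   # hands follow the controllers
    rig.bone("hand_r").set_world_transform(right)
    rig.solve_ik()  # fill in elbows/spine within human joint limits
```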
- The storage unit 350 includes a character data storage unit 510 for storing not only the image data of a character but also information related to the character, such as its attributes. The control program storage unit 520 stores programs for controlling the movement and expression of a character in the virtual space. The storage unit 350 also includes a track storage unit 530 for storing action data, composed of parameters for controlling the movement of a character in the moving image generated by the image producing unit 430.
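- A minimal sketch of what such a track might look like as data; the field names are assumptions, since the patent only states that a track stores parameters controlling the character's movement:

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    time: float                       # seconds from the start of the track
    bone_rotations: dict[str, tuple]  # bone name -> quaternion (w, x, y, z)
    root_position: tuple[float, float, float]

@dataclass
class Track:
    character_id: str                 # the character this action data drives
    keyframes: list[Keyframe] = field(default_factory=list)

    def record(self, kf: Keyframe) -> None:
        self.keyframes.append(kf)
```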
- FIG. 8 is a flowchart illustrating an example of a track generation process according to an embodiment of the present invention. First, the recording and playback executing unit 440 of the control unit 340 of the image producing device 310 starts recording, in the first track of the track storage unit 530, the action data of the moving image related to the movement of the first character in the virtual space (S101). Here, the position of the camera that shoots the character and the viewpoint of the camera (e.g., FPV or TPV) can also be set. For example, in the virtual space 1 shown in FIG. 1, the position where the photographer 2 is disposed and the angle of the camera 3 can be set with respect to the character 4 corresponding to the first character. The recording start operation may be indicated by a remote controller, such as the controller 210, or by another terminal. The operation may be performed by the user who wears the HMD 110, manipulates the controller 210, and plays the character, or by a user other than the one performing the character. The recording process may also be started automatically upon detecting a movement or operation by the user performing the character, as described below.
- Next, the user input detecting unit 410 of the control unit 340 detects information received from the HMD 110 and/or the controller 210 relating to the movement of the user's head, the user's speech and utterances, and the movement and operation of the controller (S102). For example, when the user wearing the HMD 110 tilts the head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about it to the image producing device 310. Similarly, when the user moves the controller 210 or presses a button on it, the sensor 260 provided in the controller detects the movement and/or operation and transmits that information to the image producing device 310. The image producing device 310 receives this information through the communication unit 330, and the user input detecting unit 410 detects the movement of the user's head and the user's controller movement and operation based on the received information.
- Next, the character control unit 420 of the control unit 340 controls the movement of the first character in the virtual space based on the detected user movement (S103). For example, based on detecting that the user tilts the head, the character control unit 420 tilts the head of the first character; based on detecting that the user lifts the controller and presses a predetermined button on it, the character control unit 420 makes the first character perform a corresponding action while extending its arm upward. In this manner, the character control unit 420 controls the first character to perform the corresponding movement each time the user input detecting unit 410 detects a user movement transmitted from the HMD 110 or the controller 210. The parameters related to the movements and operations detected by the user input detecting unit 410 are stored in the first track of the track storage unit 530. Alternatively, the character may be controlled to perform a predetermined performance without user input and the action data for that predetermined performance stored in the first track, or both the user's movements and the action data for the predetermined performance may be stored.
- The recording and playback executing unit 440 then checks whether an instruction to end the recording has been received from the user (S104), and on receiving it, completes the recording of the first track related to the first character (S105). The recording and playback executing unit 440 continues the recording process as long as no end instruction is received from the user. It may also complete the recording automatically when movement by the user acting as the character is no longer detected, or end the recording at a predetermined time by means of a timer rather than waiting for a user instruction.
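- Expressed as a loop over steps S101 to S105, the recording phase might look like the following sketch, reusing the hypothetical Track class above; none of these function names come from the patent:

```python
def record_track(track, detector, character_ctrl, stop_requested):
    """Record one character's performance into a track (cf. S101-S105)."""
    while not stop_requested():                  # S104: end-of-recording check
        user_input = detector.poll()             # S102: HMD/controller input
        character_ctrl.apply(user_input)         # S103: move the character
        track.record(character_ctrl.snapshot())  # store action-data parameters
    track.finalize()                             # S105: complete the track
```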
- Next, the recording and playback executing unit 440 starts recording, in the second track of the track storage unit 530, the action data of the moving image related to the movement of the second character in the virtual space (S106). Here too, the position of the camera that shoots the character and the viewpoint of the camera (e.g., FPV or TPV) can be set. For example, in the virtual space 1 shown in FIG. 1, the position where the photographer 2 is disposed and the angle of the camera 3 can be set with respect to the character 5 corresponding to the second character. For example, the user may set the camera viewpoint to the FPV of the character 5, play back the first track, and perform the desired movements as the character 5 while checking the movement of the character 4. The recording start operation may be indicated by a remote controller, such as the controller 210, or by another terminal, and may be performed by the user who wears the HMD 110, manipulates the controller 210, and plays the character, or by a user other than the one performing the character. The recording process may also be started automatically upon detecting a movement and/or operation by the user performing the character.
- Next, the user input detecting unit 410 of the control unit 340 detects information received from the HMD 110 and/or the controller 210 relating to the movement of the user's head, the user's speech and utterances, and the movement and operation of the controller (S107). Here, the user may be the same user who performed the first character or a different user. For example, when the user wearing the HMD 110 tilts the head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about it to the image producing device 310; when the user moves the controller 210 or presses a button on it, the sensor 260 provided in the controller detects the movement and/or operation and transmits that information to the image producing device 310. The image producing device 310 receives this information through the communication unit 330, and the user input detecting unit 410 detects the movement of the user's head and the user's controller movement and operation based on the received information.
- Next, the character control unit 420 of the control unit 340 controls the movement of the second character in the virtual space based on the detected user movement (S108). For example, based on detecting that the user tilts the head, the character control unit 420 tilts the head of the second character. Also, based on the user swinging the controller left and right or pressing a predetermined button on the controller, the character control unit 420 makes the second character wave its arm with its fingers spread (i.e., wave goodbye). In this manner, the character control unit 420 controls the second character to perform the corresponding movement each time the user input detecting unit 410 detects a user movement transmitted from the HMD 110 or the controller 210. The parameters related to the movements and/or operations detected by the user input detecting unit 410 are stored in the second track of the track storage unit 530. Alternatively, the character may be controlled to perform a predetermined performance without user input and the action data for that predetermined performance stored in the second track, or both the user's movements and the action data for the predetermined performance may be stored.
- The recording and playback executing unit 440 then checks whether an instruction to end the recording has been received from the user (S109), and on receiving it, completes the recording of the second track related to the second character (S110). The recording and playback executing unit 440 continues the recording process as long as no end instruction is received from the user, and may complete the recording automatically when movement by the user acting as the character is no longer detected.
- FIG. 9 is a flowchart illustrating an example of a track editing process according to an embodiment of the present invention. First, the editing executing unit 450 of the control unit 340 of the image producing device 310 performs a process of editing the first track stored in the track storage unit 530 (S201). For example, the user edits the first track (T1) associated with the first character via a user interface for track editing, as shown in FIG. 10(a). The user interface displays, along a time series, the area in which the first track is stored. When the user selects a desired bar, the 3D model of the character is rendered based on the stored parameters and character data, and the moving image of the character (e.g., the character 4) disposed in the virtual space, as shown in FIG. 1, is played back. As the user interface for editing tracks, it is also possible to display, for example, the track name and title (e.g., "first character") in a list format in addition to the display described above. The user interface may also allow editing while the character's moving image is played directly, or may display keyframes of the moving image from which the user selects the one to edit, without showing the tracks or lists of FIG. 10. During editing, the display may, for example, change the color of the character selected as the editing target for the selected time.
- As the editing, for example, the placement position of the character 4 corresponding to the first character in the virtual space 1 can be changed, the shooting position and angle of the character 4 can be changed, and the shooting viewpoint (FPV or TPV) can be changed; the time scale can also be changed or deleted. Further, a part of the movement of the character 4 may be changed as an editing process. For example, the user may designate a body part to be edited, and only the movement of the designated part is then manipulated by operating the controller 210 or the HMD 110. For example, the user may designate an arm of the first character and, while the recorded movement of the first character is played back, manipulate only that arm with the controller 210 to modify the arm's movement.
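- A sketch of such partial re-recording, i.e. an overdub restricted to one body part while the rest of the take plays back; all names here are illustrative assumptions built on the hypothetical Track class above:

```python
def overdub_body_part(track, part, detector, character_ctrl):
    """Replace the motion of one body part (e.g. 'arm_l') in a recorded
    track while the rest of the performance plays back unchanged."""
    for kf in track.keyframes:            # play back the original take
        character_ctrl.apply_keyframe(kf)
        live = detector.poll()            # live controller/HMD input
        # Overwrite only the bones belonging to the designated part.
        for bone in character_ctrl.bones_of(part):
            kf.bone_rotations[bone] = live.bone_rotation(bone)
```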
- Next, the editing executing unit 450 performs the process of editing the second track stored in the track storage unit 530 (S202). For example, the user selects an option (not shown) for editing the second track (T2) associated with the second character via the user interface for track editing, and a bar indicating the area in which the second track is stored is placed along the time series via the user interface, as shown in FIG. 10(b). The user can edit the second track independently of the first track, and can also adjust the second track relative to the first track, for example by synchronizing the playback timing of the two. Similarly, the user may edit other tracks (e.g., a third track (T3)). As the editing, for example, the placement position of the character 5 corresponding to the second character in the virtual space 1 can be changed, the shooting position and angle of the character 5 can be changed, and the shooting viewpoint (FPV or TPV) can be changed; the time scale can also be changed or deleted. For example, the user may change the placement position of the second character 5 in the virtual space 1 so that it faces the character 4 corresponding to the first character.
- By playing back, in the virtual space, the moving images containing the character movements corresponding to each track, an animated moving image realized in the virtual space as a whole can be created. In addition to reproducing the characters' movements, the editing can also change the background set around a character, add or remove objects including characters, change the lighting settings, change a character's clothing or attach accessories, change the amount and direction of wind blowing on a character, and replace the character itself while keeping the action data, so that the animated moving image realized in the virtual space can be edited in many ways.
- Subsequently, the editing executing unit 450 stores the edited contents, either in response to a user request or automatically (S203). The editing processes of S201 and S202 can be performed in any order and can be returned to repeatedly, and the storage processing of S203 can be performed each time the editing of S201 or S202 is carried out. Editing may also end after only the editing of S201.
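- In code terms, producing the final cut could reduce to playing all tracks back on a shared timeline. The sketch below, built on the hypothetical Track class above with nearest-keyframe sampling as a simplification, illustrates the idea:

```python
def compose(tracks, renderer, fps=30.0):
    """Play every track on a shared timeline and shoot the final frames.
    Assumes every track has at least one keyframe."""
    end = max(kf.time for t in tracks for kf in t.keyframes)
    frames = []
    t = 0.0
    while t <= end:
        for track in tracks:  # one character per track
            kf = min(track.keyframes, key=lambda k: abs(k.time - t))
            renderer.pose(track.character_id, kf)   # nearest-keyframe pose
        frames.append(renderer.capture())           # shoot with the camera 3
        t += 1.0 / fps
    return frames
```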
- As described above, by applying the present embodiment, character movements linked to the user's own movements can be stored track by track, and animation production can be realized simply and efficiently by performing editing work within and between the tracks.
- The method disclosed in this embodiment may be applied not only to characters but also to objects that have movements (vehicles, structures, articles, etc.). The HMD 110 may also include all or part of the configuration and functions provided by the image producing device 310.
- In the embodiment described above, the action data representing a character's movement is managed in association with the character that performed the movement; however, the action data may instead be stored in the storage unit 350 as the movement of joints and bones independent of any character. In this case, the editing executing unit 450 may accept a designation of action data stored in the storage unit 350 and a designation of the characters 4 and 5, apply the designated action to the designated characters 4 and 5, and record the result in a track.
- Also, in the embodiment described above, the action data representing a character's movement is created by the user performing while possessing the characters 4 and 5; however, action data created by other users can also be used. For example, the editing executing unit 450 may acquire action data by accessing another computer and register the acquired data in the character data storage unit 510 as action data for the characters 4 and 5. The action data can be configured as parameters controlling the movement of the character in the moving image, as described above, and can represent, for example, the rotation and movement of bones and joints. The recording and playback executing unit 440 can operate the characters 4 and 5 based on the action data even when the action data has been obtained from another computer. The editing executing unit 450 may also edit the movement of only a part of the body of a character to which the acquired action data has been applied; for example, as described above, it may accept the designation of a body part to be edited, and only the movement of the designated part is then manipulated by operating the controller 210 or the HMD 110.
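- A sketch of applying such externally acquired, character-independent action data to a designated character and recording it to a track; the helper names are hypothetical:

```python
def apply_external_action(action, character, track):
    """Retarget character-independent action data onto a character and
    record the result, as the editing executing unit 450 might."""
    for kf in action.keyframes:
        posed = character.pose_from_bones(kf.bone_rotations)  # map bones
        track.record(posed)                   # store the posed keyframe
```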
- FIG. 11 is a diagram illustrating an operation of purchasing or renting action data as an asset. For example, the user opens the asset list 6 in the virtual space 1 and selects a character to be placed, for example as a character 4-3. The asset list 6 includes asset type tabs 61, asset items 62, a scroll bar 63, and buttons 64. The asset type tabs 61 may cover, for example, 3DCG models of characters 4 (ACTOR), action data for characters 4 (MOVE), voice qualities or specific lines used when a character 4 is possessed (VOICE), objects that can be placed in the virtual space 1 (OBJECT), and backgrounds that can be placed in the virtual space 1 (BACKGROUND). A 3DCG model of a character 4 may come with particular movements prepared in advance.
- The asset items 62 change according to the selected asset type tab 61 to display a list corresponding to each asset type. The items displayed in this list may be data obtained from a server connected via a network, such as a cloud server. In FIG. 11, a list of purchasable 3DCG models of characters 4 is illustrated; as an example, a facial image and a name are shown for each item, but the display is not limited to this. For example, the price, the character's age in its setting, and the like may also be shown, or the 3DCG model itself may be shown, as in FIG. 11. The scroll bar 63 may be slid to browse all items, the items may be sortable (for example, alphabetically or by price), and the user's preferences may be learned from the sales and rental history so that assets matching those preferences are displayed preferentially. Each item may be sold not only as a single asset but also as a set of assets of the same or different asset types.
- The buttons 64 can be arranged for a variety of purposes; in the example of FIG. 11 they are an asset purchase button (SHOP), an asset rental button (RENTAL), a list of held assets (MY LIST), an asset purchase and/or rental history (MY HISTORY), and a button (BACK) for returning to the previous window display.
- For action data, for example, the user can apply action data to a character by grasping an asset item 62 with the virtual right hand 21R in the virtual space 1 and releasing it on the character 4 or 5 to which the action data should be applied. The purchase procedure may be performed by pressing a purchase decision button (BUY) on the item data 7 or the button of the controller 210 to which the decision operation has been assigned; rental may be substantially similar, except that the period of use after payment is limited. Assets created by a user can also be sold and/or rented, for a fee or free of charge, to an unspecified number of other users or to designated users, and may be displayed in other users' asset lists.
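- As a sketch, the grab-and-release application of action data described above might be handled like this; the event and scene interfaces are assumptions for illustration:

```python
def on_release(hand, scene):
    """Apply a held MOVE asset to the character it is dropped onto."""
    asset = hand.held_asset
    if asset is None or asset.type != "MOVE":
        return
    target = scene.character_under(hand.position)   # hit-test the drop point
    if target is not None and asset.is_licensed():  # purchased or rented
        target.set_action_data(asset.action)        # drive the character by it
    hand.held_asset = None
```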
Abstract
To enable animation to be shot in a virtual space, an animation production method comprises: a step of placing a character in a virtual space; a step of placing a virtual camera for shooting the character in the virtual space; a step of acquiring action data defining an action of the character from an external source; a step of operating the character based on the action data; and a step of shooting the action of the character by the camera.
Description
- The present application is a continuation of U.S. patent application Ser. No. 17/008,142, filed Aug. 31, 2020, now U.S. Pat. No. 11,524,235, issued Dec. 13, 2022, which claims priority to Japanese Patent Application No. 2020-128301, filed on Jul. 29, 2020, the disclosures of which are incorporated herein by reference in their entirety.
- The present invention relates to an animation production system.
- Virtual cameras are arranged in a virtual space (see Patent Document 1).
- [PTL 1] Patent Application Publication No. 2017-146651
- However, no attempt has been made to shoot an animation in the virtual space.
- The present invention has been made in view of such a background, and is intended to provide a technology capable of capturing animations in a virtual space.
- The principal invention for solving the above-described problem is an animation production method comprising: a step of placing a character in a virtual space; a step of placing a virtual camera for shooting the character in the virtual space; a step of acquiring action data defining an action of the character from an external source; a step of operating the character based on the action data; and a step of shooting the action of the character by the camera.
- The other problems disclosed in the present application and the method for solving them are clarified in the sections and drawings of the embodiments of the invention.
- According to the present invention, animations can be captured in a virtual space.
- The contents of embodiments of the present invention will be described with reference to the drawings. An animation production method according to an embodiment of the present invention has the following configuration.
- Item 1: An animation production method comprising: a step of placing a character in a virtual space; a step of placing a virtual camera for shooting the character in the virtual space; a step of acquiring action data defining an action of the character from an external source; a step of operating the character based on the action data; and a step of shooting the action of the character by the camera.
- Item 2: The animation production method according to item 1, further comprising a step of placing, in the virtual space, an asset store that presents the action data available to the character, wherein the step of acquiring the action data from the external source includes receiving from the user a designation of the action data in the asset store, and the step of operating the character includes receiving a designation of the character from the user and operating the designated character based on the designated action data.
- Item 3: The animation production method according to item 1 or 2, further comprising a step of editing the action data.
- Item 4: The animation production method according to item 3, wherein the step of editing the action data includes receiving from the user a designation of a body part of the character, moving the body part in response to an operation from the user, and updating the action data with the movement of the body part that has been moved.
- A specific example of an animation production system according to an embodiment of the present invention is described with reference to the drawings. It should be noted that the present invention is not limited to these examples and is intended to include all modifications within the meaning and scope of equivalence of the appended claims. In the following description, the same elements are denoted by the same reference numerals in the drawings, and overlapping descriptions are omitted.
-
FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) mounted by a user in an animation production system of the present embodiment. In the animation production system of the present embodiment, acharacter 4 and a camera 3 are disposed in thevirtual space 1, and acharacter 4 is shot using the camera 3. In thevirtual space 1, thephotographer 2 is disposed, and the camera 3 is virtually operated by thephotographer 2. In the animation production system of the present embodiment, as shown inFIG. 1 , a user makes an animation by placing acharacter 4 and a camera 3 while viewing thevirtual space 1 from a bird’s perspective with a TPV (Third Person’s View), taking acharacter 4 with an FPV (First Person View; first person support) as aphotographer 2, and performing acharacter 4 with an FPV. In thevirtual space 1, a plurality of characters (in the example shown inFIG. 1 , acharacter 4 and a character 5) can be disposed, and the user can perform the performance while possessing acharacter 4 and a character 5, respectively. That is, in the animation production system of the present embodiment, one can play a number of roles (roles). In addition, since thecamera 2 can be virtually operated as thephotographer 2, natural camera work can be realized and the representation of the movie to be shot can be enriched. -
FIG. 2 is a diagram illustrating an example of the overall configuration of ananimation production system 300 according to an embodiment of the present invention. Theanimation production system 300 may comprise, for example, an HMD 110, acontroller 210, and animage generating device 310 that functions as a host computer. An infrared camera (not shown) or the like can also be added to theanimation production system 300 for detecting the position, orientation and slope of theHMD 110 orcontroller 210. These devices may be connected to each other by wired or wireless means. For example, each device may be equipped with a USB port to establish communication by cable connection, or communication may be established by wired or wireless, such as HDMI, wired LAN, infrared, Bluetooth (TM), WiFi (TM). Theimage generating device 310 may be a PC, a game machine, a portable communication terminal, or any other device having a calculation processing function. -
FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as HMD) 110 according to the present embodiment.FIG. 5 shows a functional configuration diagram of theHMD 110 according to the present embodiment. TheHMD 110 is mounted on the user’s head and includes adisplay panel 120 for placement in front of the user’s left and right eyes. Although an optically transmissive and non-transmissive display is contemplated as the display panel, this embodiment illustrates a non-transmissive display panel that can provide more immersion. Thedisplay panel 120 displays a left-eye image and a right-eye image, which can provide the user with a three-dimensional image by utilizing the visual difference of both eyes. If left- and right-eye images can be displayed, a left-eye display and a right-eye display can be provided separately, and an integrated display for left-eye and right-eye can be provided. - The
housing portion 130 of theHMD 110 includes asensor 140. Thesensor 140 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof, to detect actions such as the orientation or tilt of the user’s head. When the vertical direction of the user’s head is Y-axis, the axis corresponding to the user’s anteroposterior direction is Z-axis, which connects the center of thedisplay panel 120 with the user, and the axis corresponding to the user’s left and right direction is X-axis, thesensor 140 can detect the rotation angle around the X-axis (so-called pitch angle), rotation angle around the Y-axis (so-called yaw angle), and rotation angle around the Z-axis (so-called roll angle). - In place of or in addition to the
sensor 140, thehousing portion 130 of theHMD 110 may also include a plurality of light sources 150 (e.g., infrared light LEDs, visible light LEDs). A camera (e.g., an infrared light camera, a visible light camera) installed outside the HMD 110 (e.g., indoor, etc.) can detect the position, orientation, and tilt of theHMD 110 in a particular space by detecting these light sources. Alternatively, for the same purpose, theHMD 110 may be provided with a camera for detecting a light source installed in thehousing portion 130 of theHMD 110. - The
housing portion 130 of theHMD 110 may also include an eye tracking sensor. The eye tracking sensor is used to detect the user’s left and right eye gaze directions and gaze. There are various types of eye tracking sensors. For example, the position of reflected light on the cornea, which can be irradiated with infrared light that is weak in the left eye and right eye, is used as a reference point, the position of the pupil relative to the position of reflected light is used to detect the direction of the eye line, and the intersection point in the direction of the eye line in the left eye and right eye is used as a focus point. -
FIG. 4 shows a schematic view of the appearance of thecontroller 210 according to the present embodiment.FIG. 6 shows a functional configuration diagram of thecontroller 210 according to the present embodiment. Thecontroller 210 can support the user to make predetermined inputs in the virtual space. Thecontroller 210 may be configured as a set of left-hand 220 and right-hand 230 controllers. Theleft hand controller 220 and theright hand controller 230 may each have an operational trigger button 240, aninfrared LED 250, asensor 260, ajoystick 270, and amenu button 280. - The operation trigger button 240 is positioned as 240 a, 240 b in a position that is intended to perform an operation to pull the trigger with the middle finger and index finger when gripping the grip 235 of the
controller 210. Theframe 245 formed in a ring-like fashion downward from both sides of thecontroller 210 is provided with a plurality ofinfrared LEDs 250, and a camera (not shown) provided outside the controller can detect the position, orientation and slope of thecontroller 210 in a particular space by detecting the position of these infrared LEDs. - The
controller 210 may also incorporate asensor 260 to detect operations such as the orientation or tilt of thecontroller 210. Assensor 260, it may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof. Additionally, the top surface of thecontroller 210 may include ajoystick 270 and amenu button 280. It is envisioned that thejoystick 270 may be moved in a 360 degree direction centered on the reference point and operated with a thumb when gripping the grip 235 of thecontroller 210.Menu buttons 280 are also assumed to be operated with the thumb. In addition, thecontroller 210 may include a vibrator (not shown) for providing vibration to the hand of the user operating thecontroller 210. Thecontroller 210 includes an input/output unit and a communication unit for outputting information such as the position, orientation, and slope of thecontroller 210 via a button or a joystick, and for receiving information from the host computer. - With or without the user grasping the
- Through the user's gripping of the controller 210 and manipulation of its various buttons and joysticks, together with the information detected by the infrared LEDs and the sensors, the system can determine the motion and posture of the user's hands, and can display and operate a simulated representation of the user's hands in the virtual space.
- FIG. 7 shows a functional configuration diagram of the image producing device 310 according to the present embodiment. The image producing device 310 may be a device such as a PC, a game machine, or a portable communication terminal that has functions for storing the user input information transmitted from the HMD 110 or the controller 210 (information acquired by the sensors concerning the movement of the user's head and the movement and operation of the controller), performing predetermined computational processing, and generating an image. The image producing device 310 may include an input/output unit 320 for establishing a wired connection with peripheral devices such as the HMD 110 and the controller 210, and a communication unit 330 for establishing a wireless connection such as infrared, Bluetooth, or WiFi (registered trademark). The information received via the input/output unit 320 and/or the communication unit 330 regarding the movement of the user's head and the movement and operation of the controller is detected by the control unit 340 as the user's input contents, including the user's position, line of sight, posture, speech, and operations, and a control program stored in the storage unit 350 is executed according to those input contents to perform processing such as controlling the character and generating an image. The control unit 340 may be composed of a CPU; by further providing a GPU specialized for image processing, information processing and image processing can be distributed and the overall processing efficiency can be improved. The image producing device 310 may also communicate with other computational processing devices so that the information processing and image processing are shared among them.
- The control unit 340 includes a user input detecting unit 410 that detects, from the information received from the HMD 110 and/or the controller 210, the movement of the user's head, the user's speech, and the movement and operation of the controller; a character control unit 420 that executes a control program stored in the control program storage unit 520 on a character stored in advance in the character data storage unit 510 of the storage unit 350; and an image producing unit 430 that generates an image based on the character control. Here, the control of the character's movement is realized by converting information such as the orientation and tilt of the user's head or the hand movements detected through the HMD 110 or the controller 210 into movements of the individual parts of a bone structure created in accordance with the movements and restrictions of the joints of the human body, and then applying the bone structure's movement to the previously stored character data by associating the bone structure with that data. The control unit 340 further includes a recording and playback executing unit 440 for recording and playing back, on a track, the character for which the image is generated, and an editing executing unit 450 for editing each track and generating the final content.
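- A minimal sketch of this conversion from device input to the movement of a bone structure follows. The rig layout, the bone names, and the head pitch limit are hypothetical placeholders for illustration, not the disclosed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0)   # pitch, yaw, roll in degrees
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class CharacterRig:
    """A tiny stand-in for stored character data with a bone structure."""
    bones: dict = field(default_factory=lambda: {
        n: Bone(n) for n in ("head", "left_hand", "right_hand")})

def apply_user_input(rig, hmd_euler, left_pose, right_pose):
    """Map the HMD orientation onto the head bone and the controller poses
    onto the hand bones, clamping head pitch as a crude joint restriction."""
    pitch, yaw, roll = hmd_euler
    pitch = max(-60.0, min(60.0, pitch))
    rig.bones["head"].rotation = (pitch, yaw, roll)
    rig.bones["left_hand"].position, rig.bones["left_hand"].rotation = left_pose
    rig.bones["right_hand"].position, rig.bones["right_hand"].rotation = right_pose

rig = CharacterRig()
apply_user_input(rig, (75.0, 10.0, 0.0),
                 ((0.2, 1.1, 0.3), (0.0, 0.0, 0.0)),
                 ((-0.2, 1.1, 0.3), (0.0, 0.0, 0.0)))
print(rig.bones["head"].rotation)   # (60.0, 10.0, 0.0) after clamping
```

In a full implementation the bone structure would cover the whole body and the joint restrictions would be defined per joint; the sketch only shows the association step.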
- The storage unit 350 includes a character data storage unit 510 for storing not only the image data of a character but also character-related information such as the character's attributes, a control program storage unit 520 that stores programs for controlling the movement and expressions of a character in the virtual space, and a track storage unit 530 for storing action data composed of the parameters that control the movement of a character in the moving image generated by the image producing unit 430.
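- One plausible shape for such action data and tracks is sketched below, assuming time-stamped frames of bone-rotation parameters and zero-order-hold sampling; the field names are assumptions for illustration rather than the patented format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ActionFrame:
    """One sample of the parameters that control character movement."""
    time: float                                     # seconds from track start
    bone_rotations: Dict[str, Tuple[float, float, float]]

@dataclass
class Track:
    """A recorded sequence of action data for one character."""
    character_id: str
    frames: List[ActionFrame] = field(default_factory=list)

    def record(self, frame: ActionFrame) -> None:
        self.frames.append(frame)

    def sample(self, t: float) -> ActionFrame:
        """Return the last frame at or before time t (zero-order hold)."""
        earlier = [f for f in self.frames if f.time <= t]
        return earlier[-1] if earlier else self.frames[0]

track = Track("character_4")
track.record(ActionFrame(0.0, {"head": (0.0, 0.0, 0.0)}))
track.record(ActionFrame(0.5, {"head": (10.0, 5.0, 0.0)}))
print(track.sample(0.6).bone_rotations["head"])     # (10.0, 5.0, 0.0)
```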
- FIG. 8 is a flowchart illustrating an example of the track generation process according to an embodiment of the present invention.
- First, the recording and playback executing unit 440 of the control unit 340 of the image producing device 310 starts recording to store, in the first track of the track storage unit 530, the action data of the moving image related to the movement of the first character in the virtual space (S101). Here, the position of the camera that shoots the character and the viewpoint of the camera (e.g., FPV, TPV) can be set. For example, in the virtual space 1 illustrated in FIG. 1, the position where the cameraman 2 is disposed and the angle of the camera 3 can be set with respect to the character 4 corresponding to the first character. The recording start instruction may be given by a remote controller such as the controller 210, or by another terminal. The operation may be performed by the user who wears the HMD 110 and manipulates the controller 210 to play the character, or by a user other than the one performing the character. The recording process may also be started automatically based on detecting an operation by the user who performs the character, as described below.
- Subsequently, the user input detecting unit 410 of the control unit 340 detects, from the information received from the HMD 110 and/or the controller 210, the movement of the user's head, the user's speech, and the movement and operation of the controller (S102). For example, when the user wearing the HMD 110 tilts the head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about it to the image producing device 310. The image producing device 310 receives this information through the communication unit 330, and the user input detecting unit 410 detects the movement of the user's head based on the received information. Likewise, when the user performs a predetermined movement or operation with the controller 210, such as lifting it or pressing a button, the sensor 260 provided in the controller detects that movement and/or operation and transmits information about it to the image producing device 310. The image producing device 310 receives the information related to the user's controller movement and operation through the communication unit 330, and the user input detecting unit 410 detects them based on the received information.
- Subsequently, the character control unit 420 of the control unit 340 controls the movement of the first character in the virtual space based on the detected movement of the user (S103). For example, based on detecting that the user tilts the head, the character control unit 420 tilts the head of the first character. Likewise, based on detecting that the user lifts the controller and presses a predetermined button on it, the character control unit 420 causes the first character to extend its arm upward and grasp something. In this manner, the character control unit 420 controls the first character to perform the corresponding movement each time the user input detecting unit 410 detects an operation by the user transmitted from the HMD 110 or the controller 210, and the parameters relating to the detected movement and/or operation are stored in the first track of the track storage unit 530. Alternatively, the character may be controlled to perform a predetermined performance action without user input; in that case, the action data relating to the predetermined performance action may be stored in the first track, or both the user-driven movement and the action data relating to the predetermined action may be stored.
- Subsequently, the recording and playback executing unit 440 confirms whether an instruction to end the recording has been received from the user (S104), and upon receiving such an instruction, completes the recording of the first track related to the first character (S105). Unless an instruction to end the recording is received, the recording and playback executing unit 440 continues the recording process. Here, the recording and playback executing unit 440 may automatically complete the recording when operations by the user performing the character are no longer detected. It is also possible to end the recording at a predetermined time by running a timer instead of accepting an instruction from the user.
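- Steps S101 through S105 can be pictured as a simple capture loop, sketched below with hypothetical callbacks for reading user input and checking for a stop instruction; the idle timeout and frame rate values are likewise assumptions.

```python
import time

def record_track(track, read_user_input, stop_requested,
                 idle_timeout=2.0, fps=60.0):
    """Capture user-driven action data into a track until the user asks to
    stop, or until no input has been detected for idle_timeout seconds."""
    start = time.monotonic()
    last_input = start
    while not stop_requested():
        now = time.monotonic()
        frame = read_user_input()        # None when no movement is detected
        if frame is not None:
            frame.time = now - start     # timestamp relative to track start
            track.record(frame)
            last_input = now
        elif now - last_input > idle_timeout:
            break                        # auto-complete, as in S104/S105
        time.sleep(1.0 / fps)
    return track
```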
- Subsequently, the recording and playback executing unit 440 starts recording to store, in the second track of the track storage unit 530, the action data of the moving image related to the movement of the second character in the virtual space (S106). Here too, the position of the camera that shoots the character and the viewpoint of the camera (e.g., FPV, TPV) can be set. For example, in the virtual space 1 illustrated in FIG. 1, the position where the cameraman 2 is disposed and the angle of the camera 3 can be set with respect to the character 5 corresponding to the second character. For example, the user may set the camera viewpoint to the FPV of the character 5, perform the playback of the first track, and carry out the desired performance for the character 5 while checking the movement of the character 4. The recording start instruction may be given by a remote controller such as the controller 210, or by another terminal. The operation may be performed by the user who wears the HMD 110 and manipulates the controller 210 to play the character, or by a user other than the one performing the character. The recording process may also be started automatically based on detecting a movement and/or operation by the user performing the character, as described below.
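- Playing the first track back while the second character is being performed is the multitrack-style overdubbing at the heart of this flow. A minimal sketch, reusing the Track sketch above and again assuming hypothetical callbacks, could look like this:

```python
import time

def overdub(first_track, second_track, apply_frame,
            read_user_input, stop_requested, fps=60.0):
    """Replay the recorded first track so the performer of the second
    character can act against it, while the new performance is captured
    into the second track on the same clock."""
    start = time.monotonic()
    while not stop_requested():
        t = time.monotonic() - start
        apply_frame(first_track.sample(t))   # drive character 4 from track 1
        frame = read_user_input()            # live performance of character 5
        if frame is not None:
            frame.time = t                   # keep both tracks aligned
            second_track.record(frame)
        time.sleep(1.0 / fps)
    return second_track
```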
- Subsequently, the user input detecting unit 410 of the control unit 340 detects, from the information received from the HMD 110 and/or the controller 210, the movement of the user's head, the user's speech, and the movement and operation of the controller (S107). Here, the user may be the same user who performed the first character, or a different user. For example, when the user wearing the HMD 110 tilts the head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about it to the image producing device 310, which receives it through the communication unit 330 so that the user input detecting unit 410 can detect the movement of the user's head. Likewise, when the user performs a predetermined movement or operation with the controller 210, such as lifting it, shaking it left and right, or pressing a button, the sensor 260 provided in the controller detects that movement and/or operation and transmits information about it to the image producing device 310. The image producing device 310 receives the information related to the user's controller movement and operation through the communication unit 330, and the user input detecting unit 410 detects them based on the received information.
- Subsequently, the character control unit 420 of the control unit 340 controls the movement of the second character in the virtual space based on the detected movement of the user (S108). For example, based on detecting that the user tilts the head, the character control unit 420 tilts the head of the second character. Likewise, based on detecting that the user shakes the controller left and right or presses a predetermined button on it, the character control unit 420 causes the second character to wave its arm with the fingers open (a "bye-bye" gesture). In this manner, the character control unit 420 controls the second character to perform the corresponding movement each time the user input detecting unit 410 detects an operation by the user transmitted from the HMD 110 or the controller 210, and the parameters relating to the detected movement and/or operation are stored in the second track of the track storage unit 530. Alternatively, the character may be controlled to perform a predetermined performance action without user input; in that case, the action data relating to the predetermined performance action may be stored in the second track, or both the user-driven movement and the action data relating to the predetermined action may be stored.
- Subsequently, the recording and playback executing unit 440 confirms whether an instruction to end the recording has been received from the user (S109), and upon receiving such an instruction, completes the recording of the second track related to the second character (S110). Unless an instruction to end the recording is received, the recording and playback executing unit 440 continues the recording process. Here, the recording and playback executing unit 440 may automatically complete the recording when operations by the user performing the character are no longer detected.
- FIG. 9 is a flowchart illustrating an example of the track editing process according to an embodiment of the present invention.
- First, the editing execution unit 450 of the control unit 340 of the image producing device 310 performs the process of editing the first track stored in the track storage unit 530 (S201). For example, the user edits the first track (T1) associated with the first character via a user interface for track editing, as shown in FIG. 10 a. The user interface displays, along a time series, the area in which the first track is stored. The user selects a desired bar, whereupon the 3D model of the character is rendered based on the stored parameters of the moving image and the character data, and the moving image of the character (e.g., the character 4) disposed in the virtual space as shown in FIG. 1 is played back. As a user interface for editing tracks, it is also possible, in addition to the display described above, to display track names and titles (e.g., "first character") in a list format. Alternatively, the user interface may allow editing to be performed while the moving image of the character is played back directly, or keyframes of the moving image may be displayed so that the user selects the keyframe to be edited, without showing the tracks or lists of FIGS. 10 a and 10 b. In addition, among the 3D characters displayed in the moving image, processing such as changing, for the selected time, the color of the character selected as the editing target is also possible.
- As an editing process, it is contemplated that, for example in FIG. 1, the placement position of the character 4 corresponding to the first character is changed in the virtual space 1, the shooting position and angle for the character 4 are changed, and the shooting viewpoint (FPV, TPV) is changed. It is also possible to change or delete the time scale.
- Alternatively, a portion of the movement of the character 4 may be changed as an editing process. For example, for the first character played back in the track (T1), the user may specify a body part to be edited and operate only that specified part using the controller 210 or the HMD 110. For example, the user may specify an arm of the first character and, while the recorded movement of the first character is played back, manipulate only that arm through the operation of the controller 210, thereby modifying the movement of the arm. A sketch of this partial re-recording follows.
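- A minimal illustration of such partial editing, overwriting only one designated body part while the rest of the recorded motion is preserved (the part name and the input callback are hypothetical):

```python
def edit_body_part(track, part, read_part_input):
    """Overwrite the motion of one specified body part (e.g. an arm) while
    every other bone keeps its originally recorded motion."""
    for frame in track.frames:
        new_rotation = read_part_input(frame.time)  # e.g. from controller 210
        if new_rotation is not None:
            frame.bone_rotations[part] = new_rotation
    return track
```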
- Next, the editing execution unit 450 performs the process of editing the second track stored in the track storage unit 530 (S202). For example, the user selects an option (not shown) to edit the second track (T2) associated with the second character via the user interface for track editing, and a bar indicating the area in which the second track is stored is placed along the time series, as shown in FIG. 10 b. As shown in FIG. 10 b, the user can edit the second track independently of the first track while also adjusting the second track relative to it, such as by synchronizing the playback timing of the tracks. Similarly, the user may edit further tracks (e.g., a third track (T3)).
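- The relative adjustment of tracks against one shared clock can be pictured as below; the offset semantics and the function names are illustrative assumptions.

```python
def shift_track(track, offset):
    """Shift a whole track in time so its playback can be synchronized
    (or deliberately offset) relative to the other tracks."""
    for frame in track.frames:
        frame.time += offset
    return track

def play_tracks(tracks, t, apply_frame):
    """Drive every character from its own track against one shared clock."""
    for track in tracks:
        apply_frame(track.character_id, track.sample(t))
```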
- As an editing process, it is contemplated that, for example in FIG. 1, the placement position of the character 5 corresponding to the second character is changed in the virtual space 1, the shooting position and angle for the character 5 are changed, and the shooting viewpoint (FPV, TPV) is changed. It is also possible to change or delete the time scale. Here, as an editing process, the user may change the placement position of the second character 5 in the virtual space 1 so that it faces the character 4 corresponding to the first character. Further, by playing back each track as an editing process, that is, by playing in the virtual space a moving image including the movement of the character corresponding to each track, an animated moving image realized in the virtual space as a whole can be created. In addition, while playing back each track, the animated moving image can be edited not only by reproducing the movement of the characters but also by, for example, changing the background set around a character, adding or removing objects around a character, changing the lighting settings, changing a character's clothing or attaching accessories, changing the amount or direction of wind applied to a character, and replacing a character while keeping the action data as it is.
- After the editing process is completed, the editing execution unit 450 stores the edited contents in response to a user request or automatically (S203).
- The editing processes of S201 and S202 may be performed in any order, and it is possible to move back and forth between them. The storage process of S203 may also be performed each time the editing of S201 or S202 is performed. It is also possible to finish after performing only the editing of S201.
- As described above, by applying the method of multitrack recording (MTR) to animation production according to the present embodiment, character movements linked to user operations can be stored in separate tracks, and animation production can be realized easily and efficiently by performing editing work within each track and between tracks.
- Although the present embodiment has been described above, it is intended to facilitate understanding of the present invention and not to be interpreted as limiting it. The present invention may be modified and improved without departing from its spirit, and equivalents thereof are also included in the present invention.
- For example, while this embodiment has described a character as the example for the track generation and editing methods, the methods disclosed herein may be applied not only to characters but also to objects that involve motion (vehicles, structures, articles, etc.).
- For example, although the
image producing device 310 has been described in this embodiment as separate from the HMD 110, the HMD 110 may include all or part of the configuration and functions provided by the image producing device 310.
- In the present exemplary embodiment, the action data representing the movement of a character is managed in association with the character that performed the movement. However, the action data may instead be stored in the storage unit 350 as movements, such as of joints and bones, that are independent of any character. In this case, the editing execution unit 450 may accept the designation of action data stored in the storage unit 350 together with the designation of the character 4 or 5, and apply the specified action to the specified character so as to record it in a track.
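- Applying such character-independent action data to a designated character might look like the following sketch, which reuses the Track and ActionFrame sketch above and assumes that bones absent from the target character's rig are simply ignored:

```python
def apply_action_to_character(action_frames, rig_bone_names, track):
    """Apply action data stored independently of any character (joint and
    bone movements) to a designated character's track, keeping only the
    bones that the target rig actually has."""
    for frame in action_frames:
        usable = {bone: rot for bone, rot in frame.bone_rotations.items()
                  if bone in rig_bone_names}
        track.record(ActionFrame(frame.time, usable))
    return track
```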
- In the present embodiment, the action data representing the movement of a character is created by the user performing while possessing the character 4 or 5. However, action data created by other users can also be used. For example, the editing execution unit 450 may acquire action data by accessing another computer and register it in the character data storage unit 510 as action data of the characters 4 and 5. The action data can be configured with parameters that control the movement of the character in the moving image, as described above, and can represent, for example, the rotation and movement of bones and joints. The recording and playback executing unit 440 can operate the characters 4 and 5 based on the action data even when it has been obtained from another computer. In addition, the editing execution unit 450 may edit the movement of only a part of the body of a character to which the acquired action data has been applied. For example, as described above, the editing execution unit 450 may accept the designation of the body part to be edited, and only the movement of the specified part may be manipulated through operation of the controller 210 or the HMD 110.
- As an asset in the virtual space 1, action data can also be sold or rented. FIG. 11 is a diagram illustrating the operation of purchasing or renting action data as an asset. The user opens the asset list 6 in the virtual space 1 and selects a character to be placed, for example, as the character 4-3. The asset list 6 includes asset type tabs 61, asset items 62, a scroll bar 63, and buttons 64.
- The asset type tabs 61 may cover, for example, 3DCG models (ACTOR) of the character 4, action data (MOVE) for the character 4, voice qualities or specific lines of dialogue (VOICE) to be used when possessing the character 4, objects (OBJECT) that can be placed in the virtual space 1, and backgrounds (BACKGROUND) that can be placed in the virtual space 1. In addition, a 3DCG model of the character 4 may come with particular movements prepared in advance.
- The asset items 62 change according to the selected asset type tab 61 so as to display a list corresponding to each type of asset. The items displayed in this list may be data obtained from a server connected via a network, such as a cloud server. In FIG. 8, a list of purchasable 3DCG models of the character 4 is illustrated in which, as an example, a facial image and a name are shown for each item; the display is not limited thereto, however, and, for example, the price or the character's age setting may also be shown, or the 3DCG model itself may be shown as is, as shown in FIG. 11. In addition, when there are many items to display, the scroll bar 63 may be slid so that all items can be viewed, the items may be sortable, for example in syllabary order or by price, and the user's preferences may be learned from the sales and rental history so that assets matching those preferences are displayed preferentially. Each item may be sold not only as a single asset but also as a set of assets of the same or different asset types.
- The buttons 64 can be arranged for a variety of purposes; FIG. 8 shows, as an example, an asset purchase button (SHOP), an asset rental button (RENTAL), a list of held assets (MY LIST), an asset purchase and/or rental history (MY HISTORY), and a button (BACK) for returning to the previous window display.
- The user can apply action data to a character by grasping the action data 62 with the virtual right hand 21R in the virtual space 1 and releasing the action data 62 onto the character 4 or 5 to which it is to be applied.
- For example, after confirming the content of the item data 7, the user may carry out the purchase procedure by pressing the purchase decision button (BUY) on the item data 7, or by pressing the button of the controller 210 to which the decision operation has been assigned. Rental may be handled in substantially the same way, except that the period of use after payment is limited.
- It is also possible to register assets created by the producer in the asset list (MY LIST). The created assets can then be sold and/or rented, for a fee or free of charge, to an unspecified number of other users or to designated users, and may be displayed in the asset lists of other users.
Reference Signs List
- 1 virtual space
- 2 cameraman
- 3 camera
- 4 character
- 5 character
- 110 HMD
- 210 controller
- 310 image producing device
Claims (1)
1. An animation production method comprising:
a step of placing a character in a virtual space;
a step of placing a virtual camera for shooting the character in the virtual space;
a step of acquiring action data defining an action of the character from an external source;
a step of operating the character based on the action data; and
a step of shooting the action of the character by the camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/063,870 US20230121976A1 (en) | 2020-07-29 | 2022-12-09 | Animation production system |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-128301 | 2020-07-29 | ||
JP2020128301A JP2022025468A (en) | 2020-07-29 | 2020-07-29 | Animation creation system |
US17/008,142 US11524235B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
US18/063,870 US20230121976A1 (en) | 2020-07-29 | 2022-12-09 | Animation production system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/008,142 Continuation US11524235B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230121976A1 true US20230121976A1 (en) | 2023-04-20 |
Family
ID=80002595
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/008,142 Active US11524235B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
US18/063,870 Abandoned US20230121976A1 (en) | 2020-07-29 | 2022-12-09 | Animation production system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/008,142 Active US11524235B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
Country Status (2)
Country | Link |
---|---|
US (2) | US11524235B2 (en) |
JP (1) | JP2022025468A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7101735B2 (en) * | 2020-10-20 | 2022-07-15 | 株式会社スクウェア・エニックス | Image generation program and image generation system |
Family Cites Families (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3143006B2 (en) * | 1993-12-28 | 2001-03-07 | 松下電器産業株式会社 | Multi-dimensional image manipulation device |
JP4374560B2 (en) * | 2000-03-31 | 2009-12-02 | 株式会社セガ | Image processing apparatus, game apparatus, and image processing method |
US6999083B2 (en) * | 2001-08-22 | 2006-02-14 | Microsoft Corporation | System and method to provide a spectator experience for networked gaming |
US7349008B2 (en) * | 2002-11-30 | 2008-03-25 | Microsoft Corporation | Automated camera management system and method for capturing presentations using videography rules |
JP4262011B2 (en) * | 2003-07-30 | 2009-05-13 | キヤノン株式会社 | Image presentation method and apparatus |
GB2409417A (en) * | 2003-12-22 | 2005-06-29 | Nokia Corp | Online gaming with spectator interaction |
US8085302B2 (en) * | 2005-11-21 | 2011-12-27 | Microsoft Corporation | Combined digital and mechanical tracking of a person or object using a single video camera |
US20070162854A1 (en) * | 2006-01-12 | 2007-07-12 | Dan Kikinis | System and Method for Interactive Creation of and Collaboration on Video Stories |
JP4989605B2 (en) * | 2008-10-10 | 2012-08-01 | 株式会社スクウェア・エニックス | Simple animation creation device |
US20120086630A1 (en) * | 2010-10-12 | 2012-04-12 | Sony Computer Entertainment Inc. | Using a portable gaming device to record or modify a game or application in real-time running on a home gaming system |
US8854298B2 (en) * | 2010-10-12 | 2014-10-07 | Sony Computer Entertainment Inc. | System for enabling a handheld device to capture video of an interactive application |
JP5901891B2 (en) * | 2011-05-23 | 2016-04-13 | 任天堂株式会社 | GAME SYSTEM, GAME PROCESSING METHOD, GAME DEVICE, AND GAME PROGRAM |
US20130178257A1 (en) * | 2012-01-06 | 2013-07-11 | Augaroo, Inc. | System and method for interacting with virtual objects in augmented realities |
CN107078996A (en) * | 2014-09-24 | 2017-08-18 | 瑞典爱立信有限公司 | Method, system and node for handling the Media Stream related to game on line |
US9814987B1 (en) * | 2014-12-22 | 2017-11-14 | Amazon Technologies, Inc. | Spectator feedback and adaptation |
US9205336B1 (en) * | 2015-03-02 | 2015-12-08 | Jumo, Inc. | System and method for providing secured wireless communication with an action figure or action figure accessory |
US10722802B2 (en) * | 2015-07-24 | 2020-07-28 | Silver Curve Games, Inc. | Augmented reality rhythm game |
US10213688B2 (en) * | 2015-08-26 | 2019-02-26 | Warner Bros. Entertainment, Inc. | Social and procedural effects for computer-generated environments |
WO2017033777A1 (en) * | 2015-08-27 | 2017-03-02 | 株式会社コロプラ | Program for controlling head-mounted display system |
US10471355B2 (en) * | 2015-10-21 | 2019-11-12 | Sharp Kabushiki Kaisha | Display system, method of controlling display system, image generation control program, and computer-readable storage medium |
US9782678B2 (en) * | 2015-12-06 | 2017-10-10 | Sliver VR Technologies, Inc. | Methods and systems for computer video game streaming, highlight, and replay |
JP6798106B2 (en) * | 2015-12-28 | 2020-12-09 | ソニー株式会社 | Information processing equipment, information processing methods, and programs |
JP2017146651A (en) * | 2016-02-15 | 2017-08-24 | 株式会社コロプラ | Image processing method and image processing program |
EP3499897B1 (en) * | 2016-08-10 | 2021-05-19 | Panasonic Intellectual Property Corporation of America | Camerawork generating method and video processing device |
JP6201028B1 (en) * | 2016-12-06 | 2017-09-20 | 株式会社コロプラ | Information processing method, apparatus, and program for causing computer to execute information processing method |
JP6580624B2 (en) * | 2017-05-11 | 2019-09-25 | 株式会社コロプラ | Method for providing virtual space, program for causing computer to execute the method, and information processing apparatus for executing the program |
JP6276882B1 (en) * | 2017-05-19 | 2018-02-07 | 株式会社コロプラ | Information processing method, apparatus, and program for causing computer to execute information processing method |
GB2571306A (en) * | 2018-02-23 | 2019-08-28 | Sony Interactive Entertainment Europe Ltd | Video recording and playback systems and methods |
US10728430B2 (en) * | 2018-03-07 | 2020-07-28 | Disney Enterprises, Inc. | Systems and methods for displaying object features via an AR device |
CN110413104A (en) * | 2018-04-27 | 2019-11-05 | Colopl株式会社 | Program, information processing unit and method |
JP6526898B1 (en) * | 2018-11-20 | 2019-06-05 | グリー株式会社 | Video distribution system, video distribution method, and video distribution program |
JP2022516074A (en) * | 2018-12-27 | 2022-02-24 | マジック リープ, インコーポレイテッド | Systems and methods for virtual and augmented reality |
US11058950B2 (en) * | 2019-03-15 | 2021-07-13 | Sony Interactive Entertainment Inc. | Methods and systems for spectating characters in virtual reality views |
- 2020
- 2020-07-29 JP JP2020128301A patent/JP2022025468A/en active Pending
- 2020-08-31 US US17/008,142 patent/US11524235B2/en active Active
- 2022
- 2022-12-09 US US18/063,870 patent/US20230121976A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US20220032190A1 (en) | 2022-02-03 |
JP2022025468A (en) | 2022-02-10 |
US11524235B2 (en) | 2022-12-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220036618A1 (en) | Animation production system | |
US20230121976A1 (en) | Animation production system | |
US20220301249A1 (en) | Animation production system for objects in a virtual space | |
JP7470344B2 (en) | Animation Production System | |
JP7470345B2 (en) | Animation Production System | |
US11321898B2 (en) | Animation production system | |
JP7470346B2 (en) | Animation Production System | |
US20220036620A1 (en) | Animation production system | |
US20220351445A1 (en) | Animation production system | |
US20230152884A1 (en) | Animation production system | |
US20220351438A1 (en) | Animation production system | |
US20220351437A1 (en) | Animation production system | |
US11443471B2 (en) | Animation production method | |
JP7517893B2 (en) | Animation Production System | |
US11475619B2 (en) | Animation production method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |