CN107067457B - Image generation system and image processing method


Info

Publication number
CN107067457B
Authority
CN
China
Prior art keywords
image
subject
information
character
color information
Prior art date
Legal status
Active
Application number
CN201710064323.0A
Other languages
Chinese (zh)
Other versions
CN107067457A
Inventor
江头规雄
长嶋谦一
永谷真之
高屋敷哲雄
Current Assignee
Bandai Namco Entertainment Inc
Original Assignee
Bandai Namco Entertainment Inc
Priority date
Filing date
Publication date
Application filed by Bandai Namco Entertainment Inc
Publication of CN107067457A
Application granted
Publication of CN107067457B

Classifications

    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/02: Non-photorealistic rendering
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/65: Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F 13/655: Generating or modifying game content automatically by importing photos, e.g. of the player
    • G06T 15/08: Volume rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 15/30: Clipping
    • G06T 15/50: Lighting effects
    • G06T 15/80: Shading
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

Provided are an image generation system, an image processing method, and the like that are capable of generating an image of a character in which a part image of a subject is combined and in which the skin color information of the subject is also reflected. The image generation system includes: an input processing unit for obtaining a captured image of a subject; an extraction processing unit for performing extraction processing of color information; and an image generation unit that generates a composite image in which a part image of a specified part of the subject included in the captured image of the subject is combined with an image of the character. The extraction processing unit performs extraction processing of skin color information of the subject based on the part image of the specified part of the subject. The image generation unit sets color information of a part other than the specified part of the character based on the extracted skin color information, and generates the image of the character.

Description

Image generation system and image processing method
Technical Field
The present invention relates to an image generation system, an image processing method, an information storage medium, and the like.
Background
Conventionally, an image generation system is known in which a subject such as a player is photographed and the captured image is reflected in a character or the like of a game. As a conventional technique of such an image generation system, for example, the technique disclosed in Patent Document 1 is known. In this conventional technique, a character is formed by combining a real image from an imaging unit with a drawn image from a drawing storage unit, and the formed character is displayed on the screen of a display unit.
Prior art literature
Patent literature
Patent document 1: japanese patent application laid-open No. 7-192143.
Disclosure of Invention
Problems to be solved by the invention
According to this conventional technique, a character with which the captured image of the player is combined can be displayed on the screen to play the game. However, while the face of the character uses a face image of the player, the parts other than the face use a drawn image, that is, a CG image. Thus, the player cannot readily be given the sense that the character is truly himself or herself.
According to several aspects of the present invention, an image generation system, an image processing method, an information storage medium, and the like can be provided that are capable of generating an image of a character in which a part image of a subject is combined and in which the skin color information of the subject is also reflected.
Means for solving the problems
One aspect of the present invention relates to an image generation system, including: an input processing unit for obtaining an image of a subject; an extraction processing unit for performing extraction processing of color information; and an image generation unit configured to generate a composite image in which a part image of a specific part of the subject included in the captured image of the subject is synthesized with an image of a character, wherein the extraction processing unit performs extraction processing of skin color information of the subject based on the part image of the specific part of the subject, and wherein the image generation unit sets color information of a part other than the specific part of the character based on the extracted skin color information, and generates the image of the character. The present invention also relates to a program for causing a computer to function as each of the above-described units, or a computer-readable information storage medium storing the program.
According to one aspect of the present invention, a part image of a specified part of a subject included in a captured image of the subject is combined with an image of a character. In this case, in one aspect of the present invention, skin color information is extracted from the part image of the subject. Color information of a part other than the specified part of the character is set based on the extracted skin color information, and an image of the character is generated. In this way, the skin color information obtained from the part image of the specified part of the subject can be reflected as the color information of the part other than the specified part of the character. Therefore, it is possible to generate an image of a character with which the part image of the subject is combined and in which the skin color information of the subject is also reflected.
In one aspect of the present invention, the region image is a face image of the subject, the extraction processing unit performs the extraction processing of the skin tone information based on the face image, and the image generating unit may set the color information of the region other than the face of the character based on the skin tone information extracted from the face image.
In this way, the face image of the subject can be synthesized with the character, and skin color information obtained based on the face image of the subject can be reflected as color information of a portion other than the face of the character.
In one aspect of the present invention, the character is a model object composed of a plurality of objects, and the image generating unit may set color information of an object in a portion other than the specified portion among the plurality of objects of the model object based on the skin color information of the subject, and generate the image of the model object by performing perspective processing of the model object.
In this way, the skin color information extracted from the part image of the specified part of the subject is set as the color information of the object constituting the part other than the specified part of the object of the model object, and the perspective processing is performed, so that the image of the character of the model object can be generated.
In one aspect of the present invention, the image generating unit may perform correction processing on a boundary region between the image of the character and the region image.
In this way, an image in the boundary region between the image of the character and the part image that would otherwise look unnatural can be corrected by the correction processing.
In one aspect of the present invention, the image generating unit may perform, as the correction processing, a semitransparent synthesis processing of color information of the image of the character and color information of the part image in the boundary region.
In this way, the boundary region can be corrected to an image obtained by translucently combining the color information of the image of the character with the color information of the part image.
In one aspect of the present invention, the image generating unit may perform a decoration process on the boundary region as the correction process.
In this way, the boundary between the image of the character and the part image can be made inconspicuous by the decoration processing applied to the boundary region.
In one aspect of the present invention, the extraction processing unit extracts pixels matching skin color conditions from the part image of the subject, and obtains the skin color information of the subject from the color information of the extracted pixels.
In this way, pixels corresponding to the skin color are selected from among the pixels of the part image of the subject, and the skin color information can be extracted from them.
In one aspect of the present invention, the image generation system further includes a subject information obtaining unit that obtains subject information for specifying the motion of the subject based on sensor information from a sensor (or causes a computer to function as the subject information obtaining unit), and the image generating unit may generate an image of the character that moves in accordance with the motion of the subject specified by the subject information.
In this way, the motion of the subject is specified based on the subject information obtained from the sensor information, and an image of the character that moves according to the motion of the subject can be generated. In addition, the part image of the subject can be combined with the character that moves according to the motion of the subject, and the skin color information of the subject can be reflected in that character.
In one aspect of the present invention, the image generating unit may determine the specific region of the subject based on the subject information, cut out the region image from the captured image of the subject, and synthesize the cut-out region image with the image of the character, and the extraction processing unit may perform the extraction processing of the skin color information of the subject based on the region image of the specific region determined based on the subject information.
In this way, the specified part of the subject is determined using the subject information that specifies the motion of the subject, the part image of the specified part is combined with the image of the character, and the skin color information of the subject can be extracted from the part image of the specified part.
In one aspect of the present invention, the subject information obtaining unit obtains skeleton information of the subject as the subject information based on the sensor information, and the extraction processing unit may perform the extraction processing of the skin color information of the subject based on the part image of the specified part determined based on the skeleton information.
In this way, the specific region of the subject is specified using the skeleton information for specifying the motion of the subject, and the skin color information of the subject can be extracted from the region image of the specified region.
Another aspect of the present invention relates to an image processing method comprising: an input process of obtaining a captured image of a subject; an extraction process of extracting color information; and an image generation process of generating a composite image that combines a part image of a specified part of the subject included in the captured image of the subject with an image of a character, wherein in the extraction process, extraction processing of skin color information of the subject is performed based on the part image of the specified part of the subject, and in the image generation process, color information of a part other than the specified part of the character is set based on the extracted skin color information, and the image of the character is generated.
Drawings
Fig. 1 is a configuration example of an image generation system according to the present embodiment.
Fig. 2 shows an example of application of the image generation system of the present embodiment to a commercial game device.
Fig. 3 (a) and 3 (B) show examples of application of the image generation system according to the present embodiment to a home game device and a personal computer.
Fig. 4 (a) and fig. 4 (B) are examples of depth information and body index information.
Fig. 5 is an explanatory diagram of skeleton information.
Fig. 6 is an explanatory diagram of an image combining method of an image of a character and a face image of a player.
Fig. 7 (A) is an example of a captured image of the player, and fig. 7 (B) is an example of a composite image of the character and the face image.
Fig. 8 is an example of a game image generated by the present embodiment.
Fig. 9 is an explanatory diagram of the method of the present embodiment.
Fig. 10 is an explanatory diagram of the method of the present embodiment.
Fig. 11 is a flowchart showing an example of the extraction processing of skin tone information.
Fig. 12 (a) and 12 (B) show examples of model object information and object information.
Fig. 13 is an explanatory diagram of a method of operating a character according to the action of a player.
Fig. 14 is an explanatory diagram of a method of operating a character using skeleton information.
Fig. 15 is an explanatory diagram of perspective processing of a model object.
Fig. 16 is an explanatory diagram of a method of determining the position of a face image from skeleton information.
Fig. 17 is an explanatory diagram of the correction processing in the boundary region of the semitransparent synthesis processing.
Fig. 18 is an explanatory diagram of a method of performing a semitransparent synthesis process in the image synthesis process of an image of a character and a face image.
Fig. 19 is an explanatory diagram of the correction processing in the boundary region of the decoration processing.
Fig. 20 is a flowchart showing a detailed processing example of the present embodiment.
Symbol description
PL: player (subject), CHP: character
IMF: face image (part image), IM: captured image
NC: chest, AR, AL: hands
LR, LL: feet, BD: boundary region
AC: display, SC: skeleton
100: processing unit, 102: input processing unit
110: arithmetic processing unit, 111: game processing unit
112: object space setting unit, 113: character processing unit
114: subject information obtaining unit, 115: extraction processing unit
118: game score calculation unit, 119: virtual camera control unit
120: image generation unit, 130: sound generation unit
132: print processing unit, 140: output processing unit
150: coin input unit, 152: dispensing port
160: operation unit, 162: sensor
164: color sensor, 166: depth sensor
170: storage unit, 172: object information storage unit
178: drawing buffer, 180: information storage medium
190: display unit, 192: sound output unit
194: I/F unit, 195: portable information storage medium
196: communication unit, 198: printing unit
Detailed Description
Next, this embodiment will be described. The present embodiment described below does not unduly limit the scope of the invention described in the claims. In addition, all the components described in the present embodiment are not necessarily essential components of the present invention.
1. Image generation system
Fig. 1 shows a configuration example of an image generation system (image generation apparatus, game system, game apparatus) of the present embodiment. The image generation system includes a processing unit 100, an operation unit 160, a sensor 162, a storage unit 170, a display unit 190, a sound output unit 192, an I/F unit 194, a communication unit 196, and a printing unit 198. The configuration of the image generation system according to the present embodiment is not limited to fig. 1, and various modifications such as omitting a part of the constituent elements (each part) or adding other constituent elements may be performed.
The processing unit 100 performs various processes such as input processing, arithmetic processing, and output processing based on the operation information from the operation unit 160, the sensor information from the sensor 162, and a program.
The processes (functions) of the present embodiment performed by the respective sections of the processing unit 100 can be realized by a processor (a processor including hardware). For example, each process of the present embodiment can be implemented by a processor that operates based on information such as a program, and a memory that stores information such as the program. The functions of the respective sections may be implemented by separate pieces of hardware, or may be implemented by integrated hardware. For example, the processor includes hardware, and the hardware can include at least one of a circuit that processes digital signals and a circuit that processes analog signals. For example, the processor can be configured by one or more circuit devices (e.g., ICs) or one or more circuit elements (e.g., resistors, capacitors) mounted on a circuit board. The processor may be, for example, a CPU (Central Processing Unit). However, the processor is not limited to a CPU, and various processors such as a GPU (Graphics Processing Unit) and a DSP (Digital Signal Processor) may be used. Alternatively, the processor may be a hardware circuit based on an ASIC. The processor may also include an amplifier circuit, a filter circuit, or the like that processes analog signals. The memory (storage unit 170) may be a semiconductor memory such as an SRAM or a DRAM, may be a register, may be a magnetic storage device such as a hard disk drive (HDD), or may be an optical storage device such as an optical disk device. For example, the memory stores instructions readable by a computer, and the processing (functions) of the respective sections of the processing unit 100 is realized by the processor executing the instructions. The instructions may be an instruction set constituting a program, or may be instructions that direct operations of the hardware circuits of the processor.
The processing unit 100 includes an input processing unit 102, an arithmetic processing unit 110, and an output processing unit 140. The input processing unit 102 performs input processing of various information. For example, the input processing unit 102 performs processing for receiving operation information input by the player through the operation unit 160 as input processing. For example, processing is performed to obtain operation information detected by the operation unit 160. The input processing unit 102 performs processing for obtaining sensor information from the sensor 162 as input processing. The input processing unit 102 performs processing for reading information from the storage unit 170 as input processing. For example, a process of reading information designated in the read instruction from the storage unit 170 is performed. The input processing unit 102 performs processing for receiving information via the communication unit 196 as input processing. For example, processing of receiving information from an external device (other image generation system, server system, or the like) of the image generation system via a network is performed. The reception process is a process of instructing the communication section 196 to receive information, or obtaining information received by the communication section 196 and writing it into the storage section 170, or the like.
The arithmetic processing unit 110 performs various arithmetic processing. For example, the arithmetic processing unit 110 performs arithmetic processing such as game processing, object space setting processing, character processing, game result arithmetic processing, virtual camera control processing, image generation processing, and sound generation processing. The arithmetic processing unit 110 includes a game processing unit 111, a target space setting unit 112, a character processing unit 113, an object information obtaining unit 114, an extraction processing unit 115, a game result calculating unit 118, a virtual camera control unit 119, an image generating unit 120, a sound generating unit 130, and a printing processing unit 132.
The game process is a process of starting a game when a game start condition is satisfied, a process of causing a game to progress, a process of ending a game when a game end condition is satisfied, or the like. The game process is executed by the game process section 111 (program module of the game process).
The object space setting process is a process of arranging and setting a plurality of objects in the object space. The object space setting process is executed by the object space setting unit 112 (program module for the object space setting process). For example, the object space setting unit 112 arranges in the object space various display objects such as a character appearing in the game (a person, a robot, an animal, a monster, an airplane, a ship, a fighter plane, a tank, a warship, a car, or the like), a map (terrain), a building, a course (road), a tree, a wall, and a water surface, each of which is an object formed by primitive surfaces such as polygons, free-form surfaces, or subdivision surfaces. That is, the position and rotation angle (synonymous with orientation or direction) of an object are determined in the world coordinate system, and the object is arranged at that position (X, Y, Z) at that rotation angle (rotation angles about the X, Y, Z axes). Specifically, the object information storage unit 172 of the storage unit 170 stores object information, which is information such as the position, rotation angle, moving speed, and moving direction of an object (part object), in association with an object number. The object space setting unit 112 performs, for example, a process of updating this object information for each frame.
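As a purely illustrative sketch (not part of the disclosure), the per-object information and its per-frame update described above could be organized as follows in Python; the field and function names are assumptions:

    from dataclasses import dataclass

    # Hypothetical record for the object information described above:
    # position, rotation angle, moving speed and moving direction per object.
    @dataclass
    class ObjectInfo:
        position: tuple   # (X, Y, Z) in world coordinates
        rotation: tuple   # rotation angles about the X, Y, Z axes
        speed: float      # moving speed
        direction: tuple  # normalized moving direction

    # Object information storage unit: object number -> object information.
    object_info_storage: dict = {}

    def update_object_info(dt: float) -> None:
        # Per-frame update of each object's position from its speed and direction.
        for info in object_info_storage.values():
            x, y, z = info.position
            dx, dy, dz = info.direction
            s = info.speed * dt
            info.position = (x + dx * s, y + dy * s, z + dz * s)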
The character process (moving body computation process) is various computation processes performed with respect to the character (moving body). For example, the character process is a process of moving a character (a display object appearing in the game) in the object space (virtual three-dimensional space, three-dimensional game space), or a process of causing the character to perform motions. This character process is executed by the character processing unit 113 (program module for character processing). For example, the character processing unit 113 performs control processing of moving the character (simulation object) in the object space and causing the character to perform motions (animation), based on the operation information input by the player through the operation unit 160, the sensor information from the sensor 162, a program (movement and motion algorithm), various data (motion data), and the like. Specifically, simulation processing is performed in which the movement information (position, rotation angle, speed, or acceleration) or the motion information (position or rotation angle of a part object) of the character is sequentially obtained for each frame (for example, 1/60 second). A frame is the unit of time in which the movement and motion processing (simulation processing) of the character and the image generation processing are performed.
The game score calculation process is a process of calculating the score of a player in a game. For example, the game score calculation process is a process of calculating a point or score obtained by a player in a game, or a process of calculating a game result such as money, medals, tickets, or the like in a game. The game score calculation process is executed by the game score calculation unit 118 (program module of the game score calculation process).
The virtual camera control process is a process of controlling a virtual camera (viewpoint, reference virtual camera) for generating an image observed from a given (arbitrary) viewpoint in the object space. The virtual camera control process is executed by the virtual camera control unit 119 (program module for the virtual camera control process). Specifically, the virtual camera control unit 119 performs a process of controlling the position (X, Y, Z) or the rotation angle (rotation angles about the X, Y, Z axes) of the virtual camera (a process of controlling the viewpoint position, the line-of-sight direction, or the angle of view).
The image generation process is a process for generating an image (game image) displayed on the display unit 190 or an image printed by the printing unit 198, and may include various image synthesis processes, image effect processes, and the like. The sound generation process is a process for generating sound (game sound) such as BGM, effect sound, or sound output by the sound output unit 192, and can include various sound synthesis processes, sound effect processes, and the like. These image generation processing and sound generation processing are executed by the image generation unit 120 and sound generation unit 130 (program modules of the image generation processing and sound generation processing).
For example, the image generating unit 120 performs drawing processing based on the results of the various processes (game processing and simulation processing) performed by the processing unit 100, thereby generating an image, and outputs the image to the display unit 190. Specifically, geometry processing such as coordinate transformation (world coordinate transformation, camera coordinate transformation), clipping processing, perspective transformation, and light source processing is performed, and drawing data (position coordinates of the vertices of primitive surfaces, texture coordinates, color data, normal vectors, alpha values, and the like) is created based on the processing results. Then, based on this drawing data (primitive surface data), the object (one or more primitive surfaces) after perspective transformation (after geometry processing) is drawn in the drawing buffer 178 (a buffer such as a frame buffer or a work buffer that can store image information in pixel units). Thereby, an image observed from a given viewpoint (virtual camera) in the object space is generated. The drawing processing performed by the image generating unit 120 may be realized by vertex shader processing, pixel shader processing, or the like.
The output processing unit 140 performs output processing of various information. For example, the output processing unit 140 performs processing of writing information in the storage unit 170 as output processing. For example, a process of writing information specified in the write instruction into the storage unit 170 is performed. The output processing unit 140 outputs the information of the generated image to the display unit 190 or outputs the information of the generated sound to the sound output unit 192 as output processing. The output processing unit 140 performs processing for transmitting information via the communication unit 196 as output processing. For example, processing is performed to transmit information to an external device (other image generation system, server system, or the like) of the image generation system via a network. The transmission process is a process of instructing the communication section 196 to transmit information, or the like. The output processing unit 140 performs processing of transferring the image printed on the print medium to the printing unit 198 as output processing.
The operation unit 160 (operation device) is an operation device for inputting operation information by a player (user), and its function can be realized by a direction instruction key, an operation button, a stick, a lever, various sensors (an angular velocity sensor, an acceleration sensor, etc.), a microphone, a touch panel type display, or the like.
The sensor 162 detects sensor information for obtaining subject information. The sensor 162 can include, for example, a color sensor 164 (color camera), a depth sensor 166 (depth camera).
The storage unit 170 (memory) is a work area of the processing unit 100, the communication unit 196, or the like, and functions thereof can be realized by RAM, SSD, HDD or the like. The game program or game data necessary for executing the game program is held in the storage unit 170. The storage unit 170 includes an object information storage unit 172 and a drawing buffer 178.
The information storage medium 180 (a medium readable by a computer) stores a program, data, or the like, and its function can be realized by an optical disk (DVD, CD, or the like), HDD (hard disk drive), memory (ROM, or the like), or the like. The processing unit 100 performs various processes according to the present embodiment based on a program (data) stored in the information storage medium 180. The information storage medium 180 can store a program (a program for causing a computer to execute processing of each section) for causing a computer (a device including an operation section, a processing section, a storage section, and an output section) to function as each section of the present embodiment.
The display unit 190 outputs an image generated by the present embodiment, and its function can be realized by an LCD, an organic EL display, a CRT, an HMD, or the like. The sound output unit 192 outputs the sound generated by the present embodiment, and the function thereof can be realized by a loudspeaker, an earphone, or the like.
The I/F (interface) section 194 performs interface processing with the portable information storage medium 195, and the function thereof can be realized by an ASIC or the like for I/F processing. The portable information storage medium 195 is a storage medium for a user to store various information, and is a storage device that stores such information even when no power is supplied. The portable information storage medium 195 can be realized by an IC card (memory card), a USB memory, a magnetic card, or the like.
The communication unit 196 communicates with external devices (other image generation systems, server systems, etc.) via a network, and its function can be realized by hardware such as a communication ASIC or a communication processor, or by communication firmware.
The printing unit 198 prints an image on a printing medium such as printing paper or seal paper. The printing unit 198 can be realized by, for example, a printing head, a transport mechanism for a printing medium, or the like. Specifically, the print processing unit 132 (program module of print processing) of the processing unit 100 performs processing for selecting a print target image, and instructs the printing unit 198 to print the print target image selected. As a result, the printed matter on which the print target image is printed is dispensed from the dispensing port 152 of fig. 2 described later. The print target image is, for example, an image of a character of a player in a game, or an image of a commemorative photograph taken by the player's character together with other characters.
Further, a program (data) for causing a computer to function as each part of the present embodiment is distributed from an information storage medium included in a server system (host apparatus) to the information storage medium 180 (or the storage unit 170) via the network and the communication unit 196. The use of such information storage media of a server system is also included within the scope of the present invention.
Fig. 2, 3 (a), and 3 (B) are diagrams showing examples of hardware devices to which the image generation system of the present embodiment is applied.
Fig. 2 is an example of a commercial game device to which the image generation system of the present embodiment is applied. The commercial game device includes an operation unit 160 implemented by operation buttons and direction instruction buttons, a sensor 162 having a color sensor 164 and a depth sensor 166, a display unit 190 implemented by an LCD, CRT, or the like, a sound output unit 192 implemented by a speaker, a coin input unit 150, and a dispensing unit 152 for printed matter such as photographs. The player PL views the game image displayed on the display section 190 and performs various actions for playing the game. The sensor 162 detects the movement of the player PL (movement of the position or the like of the hands and feet). Then, a game process based on the detection result is performed, and a game image based on the game process is displayed on the display unit 190.
The hardware device to which the image generation system of the present embodiment is applied is not limited to the commercial game device shown in fig. 2, but may be applied to various hardware devices such as a home game device shown in fig. 3 (a) and a personal computer (information processing device) shown in fig. 3 (B). In fig. 3 a, a sensor 162 (sensor device) having a color sensor 164 or a depth sensor 166 is connected to a main body device of a home game device, and the sensor 162 is disposed near a television, which is a display unit, for example. Further, by detecting the action of the player by the sensor 162, the main body device of the home game device executes various game processes, and a game image is displayed on the screen of the television. In fig. 3 (B), a sensor 162 is connected to a main body device of a personal computer, and the action of a player is detected by the sensor 162, and a game image based on the result of game processing is displayed on a liquid crystal display as a display unit.
As shown in fig. 1, the image generation system (image generation apparatus, game system) of the present embodiment includes: an input processing unit 102 for obtaining an image of a subject; an extraction processing unit 115 that performs extraction processing of color information; and an image generation unit 120 that generates a composite image in which a part image of a specified part of the subject included in the captured image of the subject is combined with the character image.
The subject is, for example, the player PL shown in fig. 2. The subject may be an animal other than a human, or an object other than an animal. The captured image is, for example, an image (a color image such as an RGB image) captured by the color sensor 164 (color camera) provided in the sensor 162. The color sensor 164 and the depth sensor 166 may be provided separately. For example, a part image (for example, a face image) of a specified part of the subject is cut out from the captured image photographed by the color sensor 164, and a composite image is generated in which the cut-out part image is combined with the image of the character. For example, the part image of the subject is combined with the part of the character corresponding to the specified part.
The extraction processing unit 115 performs extraction processing of skin color information (broadly, designated color information) of the subject based on the region image of the designated region of the subject. For example, extraction processing is performed to extract skin color information of the subject from color information of pixels of the part image. Specifically, skin color information is obtained from color information matching skin color conditions in pixels of the part image.
The image generating unit 120 sets color information of a part other than the designated part of the character based on the extracted skin color information of the subject, and generates an image of the character. For example, the character has the first to nth parts as parts other than the designated part (for example, parts other than the face), and the ith to jth parts among the first to nth parts are parts (for example, parts of the hand, foot, chest, etc.) for which skin colors can be set as basic colors. At this time, the extracted skin color information of the subject is set as the information of the basic colors of the i-th to j-th positions.
In this way, not only the designated region corresponding to the region image synthesized for the character, but also the regions (i-th to j-th regions) other than the designated region can reflect the skin color information of the subject. Therefore, it is possible to generate an image of a character that synthesizes the part image of the subject and also reflects the skin color information of the subject.
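A minimal sketch (in Python, with hypothetical part names; not taken from this disclosure) of how the extracted skin color could be set as the basic color of the parts other than the specified part:

    # Hypothetical set of parts whose basic color is a skin color
    # (the i-th to j-th parts described above).
    SKIN_PARTS = {"chest", "right_hand", "left_hand", "right_foot", "left_foot"}

    def apply_skin_color(model_object, skin_color):
        # Set the extracted skin color of the subject as the basic color of
        # every part other than the specified part (here, the face).
        for part in model_object.parts:
            if part.name != "face" and part.name in SKIN_PARTS:
                part.base_color = skin_color  # e.g. an (R, G, B) tuple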
In the present embodiment, the region image is, for example, a face image of the subject, and the extraction processing unit 115 performs extraction processing of skin color information based on the face image. Then, the image generating unit 120 sets color information of a portion other than the face of the character based on the skin color information extracted from the face image, and generates an image of the character. Then, a composite image is generated in which the face image is combined with the image of the generated character.
For example, a face image, which is the part image of the face serving as the specified part, is obtained by cutting out the face portion from the captured image of the whole subject. The position of the face in the captured image can be determined from the subject information (skeleton information or the like) obtained by the subject information obtaining unit 114 as described later. The extraction processing unit 115 extracts skin color information corresponding to the skin color of the subject's face from the face image obtained from the captured image of the subject. Then, color information (basic color information) of a part other than the face of the character, for example, a hand, foot, or chest, is set based on the extracted skin color information. This makes it possible to reflect the skin color information extracted from the face of the subject in parts other than the face, such as the hands, feet, or chest. The part of the subject from which skin color information is extracted is not limited to the face, and skin color information may be extracted from a part other than the face (for example, a hand, a foot, or the chest). As a modification, color information other than skin color information may be extracted from the part image of the specified part of the subject, and the extracted color information may be set as the color information of the part other than the specified part.
The character (subject character) is a model object composed of a plurality of objects, for example. For example, a three-dimensional model object constituted by a plurality of three-dimensional objects (component objects). Each object is constituted by, for example, a plurality of polygons (broadly, base planes).
At this time, the image generating unit 120 sets color information (basic color information) of an object other than the specified portion among a plurality of objects of the model object based on the skin color information of the subject, and generates an image of the model object as a character by performing perspective (rendering) processing of the model object.
Specifically, the image generating unit 120 performs perspective processing on a model object disposed at a position corresponding to the position of the subject (player or the like) in the object space, and generates an image of the character. For example, the position of the subject is specified based on the subject information obtained by the subject information obtaining unit 114, and the model object serving as the character is arranged in the object space at a position corresponding to the specified position of the subject. Then, the perspective processing of the model object is performed, and an image of the character observed from the virtual camera is generated. At this time, the image generating unit 120 performs the perspective processing of the model object based on an illumination model, and generates the image of the character. In this way, an image of the character that is shaded by the light source based on the illumination model can be generated. A modification in which a two-dimensional image (animation image) is used as the image of the character is also possible.
The image generation unit 120 performs correction processing in the boundary region between the image of the character and the region image. For example, a first region (for example, a face) corresponding to a designated region of a character and a second region (for example, a chest) for setting skin color information extracted from a region image of a subject are adjacent regions. At this time, the correction processing is performed in the boundary region between the first portion and the second portion. That is, correction processing is performed in a boundary region between a part image (face image) synthesized at the position of a first part (face) and an image (image generated from extracted skin color information) of a second part (chest) of the character. For example, correction processing is performed based on the color information of the synthesized part image and the color information of the image of the second part of the character.
For example, the image generation unit 120 performs semitransparent synthesis processing (alpha blending processing) of the color information of the image of the character and the color information of the part image in the boundary region. That is, correction processing that blends the color of the image of the character with the color of the part image is performed. For example, the boundary region is a boundary region between the first part (face) and the second part (chest). In this case, the semitransparent synthesis processing is performed in the boundary region such that the closer a position is to the first part, the higher the blend ratio of the color information of the part image, and the closer a position is to the second part, the higher the blend ratio of the color information of the image of the character.
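A minimal sketch of such position-dependent semitransparent synthesis (alpha blending), assuming the boundary region is given as two RGB arrays of equal size whose rows run from the face side to the chest side; the linear blend is an assumption for illustration:

    import numpy as np

    def blend_boundary(part_img: np.ndarray, char_img: np.ndarray) -> np.ndarray:
        # Alpha-blend the part image (face side) with the image of the character
        # (chest side) inside the boundary region: rows near the face use mostly
        # the color of the part image, rows near the chest mostly the character.
        h = part_img.shape[0]
        out = np.empty_like(part_img, dtype=np.float32)
        for y in range(h):
            t = y / max(h - 1, 1)  # 0.0 at the face side, 1.0 at the chest side
            out[y] = (1.0 - t) * part_img[y] + t * char_img[y]
        return out.astype(part_img.dtype)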
The image generation unit 120 may perform decoration processing on the boundary region as the correction processing. In this way, the boundary region between the image of the character and the part image can be made inconspicuous by the decoration processing. As the decoration processing, for example, a decorative display object is arranged in the boundary region, or various image effect processes are applied to the boundary region.
The extraction processing unit 115 extracts pixels matching a skin color condition from the part image of the subject, and obtains the skin color information of the subject from the color information of the extracted pixels. For example, pixels matching the skin color condition are extracted from among the plurality of pixels constituting the part image. Specifically, for example, the color information of the pixels of the specified part is converted from RGB values to HSV values (hue, saturation, value), and it is determined whether or not the HSV values of each pixel match the skin color condition. Then, specific arithmetic processing such as averaging is performed on the color information (for example, RGB values) of the pixels matching the skin color condition, and the skin color information is obtained from the result. In this way, even when a portion other than skin color (for example, the eyes or hair) exists in the specified part of the subject, the influence of the color information of that portion can be removed and the skin color information can be extracted.
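A minimal sketch of this extraction step, assuming OpenCV-style HSV ranges (H in 0-179, S and V in 0-255); the skin color thresholds below are assumptions for illustration, not values taken from this disclosure:

    import numpy as np
    import cv2  # used only for the RGB -> HSV conversion

    def extract_skin_color(part_img_rgb: np.ndarray):
        # Return the average RGB color of the pixels of the part image that
        # match a (hypothetical) skin color condition expressed in HSV.
        hsv = cv2.cvtColor(part_img_rgb, cv2.COLOR_RGB2HSV)
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        mask = (h < 25) & (s > 40) & (s < 200) & (v > 60)  # assumed thresholds
        if not mask.any():
            return None  # no pixel matched the skin color condition
        # Averaging the color information of the matching pixels gives the
        # skin color information; eyes, hair, etc. are excluded by the mask.
        return tuple(part_img_rgb[mask].mean(axis=0).astype(np.uint8))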
Further, the subject information obtaining section 114 (program module of subject information obtaining process) obtains subject information for determining the motion of the subject based on the sensor information from the sensor 162. The image generating unit 120 generates an image of a character that operates in accordance with the operation of the subject (such as the operation of the region or the movement of the position) specified by the subject information. For example, the subject information obtaining unit 114 obtains skeleton information (broadly, motion information) as subject information based on sensor information from the sensor 162. Based on the skeleton information, the motion of the subject is specified, and the image generation unit 120 generates an image of a character that moves in response to the motion of the subject. For example, a character is operated by operation reproduction based on skeleton information, and an image of the character is generated by performing perspective processing on the character.
For example, the image generating unit 120 specifies a specified portion of the subject from the subject information, cuts out a portion image of the specified portion from the captured image of the subject, and synthesizes the cut-out portion image with the image of the character. For example, a region of a given size with reference to the position of a specified region is cut out from the captured image, a region image of the specified region is obtained, and the cut region image is synthesized with the image of the character. At this time, the extraction processing unit 115 performs extraction processing of skin color information of the subject based on the part image of the specified part determined based on the subject information. That is, the position of the specified portion of the subject is specified based on the subject information, and the extraction processing unit 115 extracts skin tone information of the subject from the portion image that is an image of the specified portion. Then, color information of a portion other than the designated portion of the character is set based on the extracted skin color information.
Specifically, the subject information obtaining unit 114 obtains skeleton information of the subject as subject information based on the sensor information. Then, the extraction processing unit 115 performs extraction processing of skin color information of the subject based on the part image of the specified part specified based on the skeleton information. For example, the position of a specified portion of the subject is specified based on the skeleton information of the subject detected by the sensor 162, a portion image that is an image of the specified position is cut out from the captured image of the subject, and skin color information of the subject is extracted based on the color information of the portion image. In this way, the skeleton information of the subject detected by the sensor 162 can be effectively used to extract skin color information of the subject.
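As an illustrative sketch, the cut-out of the part image from the captured image could look as follows, assuming the head joint position from the skeleton information has already been projected into pixel coordinates of the captured image (the region size is an assumption):

    import numpy as np

    def cut_out_face_image(captured_img: np.ndarray, head_px: int, head_py: int,
                           size: int = 96) -> np.ndarray:
        # Cut a square region of a given size centered on the head joint
        # position (head_px, head_py) determined from the skeleton information.
        h, w = captured_img.shape[:2]
        half = size // 2
        x0, y0 = max(head_px - half, 0), max(head_py - half, 0)
        x1, y1 = min(head_px + half, w), min(head_py + half, h)
        return captured_img[y0:y1, x0:x1]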
2. The method of the present embodiment
Next, the method of the present embodiment will be described in detail. In the following, the method of the present embodiment is described as applied to a game in which a character is operated according to the action of a player, but the method of the present embodiment is not limited to this, and can be applied to various games (RPG game, music game, combat game, communication game, robot game, card game, sports game, action game, and the like).
2.1 description of the Game
First, an example of a game implemented by the image generation system of the present embodiment will be described. In the present embodiment, the sensor 162 of fig. 2 detects the motion of the player PL (in a broad sense, the subject), and performs game processing reflecting the detected motion, thereby generating a game image.
As shown in fig. 2, the sensor 162 (image sensor) is provided such that, for example, its imaging direction (optical axis direction) is directed toward the player PL (user, operator). For example, the imaging direction of the sensor 162 is set at a depression angle with respect to the horizontal plane so that the whole body of even a small child can be captured. In fig. 2, the sensor 162 is provided beside the display unit 190, but the installation position is not limited to this, and it may be provided at any position (for example, a lower portion or an upper portion of the display unit 190).
The sensor 162 includes the color sensor 164 and the depth sensor 166. The color sensor 164 captures a color image (RGB image) and can be implemented by a CMOS sensor, a CCD, or the like. The depth sensor 166 projects light such as infrared light and detects the reflection intensity of the projected light or the time until the projected light returns, thereby obtaining depth information. For example, the depth sensor 166 can be constituted by an infrared projector that projects infrared light and an infrared camera. The depth information is obtained, for example, by a TOF (Time Of Flight) method. The depth information is information in which a depth value is set at each pixel position. In the depth information, the depth values of the player and of the scenery around the player are set, for example, as gray-scale values.
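As a rough illustration of the TOF principle mentioned above (a sketch of the principle only, not of the sensor's actual implementation), the depth value at a pixel follows from the round-trip time of the projected light:

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def tof_depth(round_trip_time_s: float) -> float:
        # The projected infrared light travels to the subject and back,
        # so the distance (depth value) is half the round-trip distance.
        return SPEED_OF_LIGHT * round_trip_time_s / 2.0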
The sensor 162 may be a sensor provided with a color sensor 164 and a depth sensor 166, respectively, or may be a sensor in which the color sensor 164 and the depth sensor 166 are combined in a composite manner. The depth sensor 166 may be a sensor that is not the TOF system (for example, optical encoding). As a method for obtaining depth information, various modifications are possible, and for example, depth information can be obtained by a distance measuring sensor or the like using ultrasonic waves or the like.
Fig. 4 (a) and 4 (B) are examples of depth information and body index information obtained based on the sensor 162, respectively. Fig. 4 (a) schematically illustrates depth information, which is information indicating a distance from the sensor 162 to the subject such as a player. The body index information of fig. 4 (B) is information indicating a person region. The body index information enables the position or shape of the player's body to be determined.
Fig. 5 is an example of skeleton information obtained based on the sensor information from the sensor 162. In fig. 5, as the skeleton information, positional information (three-dimensional coordinates) of the bones constituting the skeleton is obtained as positional information of joints (parts) C0 to C20. C0, C1, C2, C3, C4, C5, and C6 correspond to the waist, spine, mid-shoulder, right shoulder, left shoulder, neck, and head, respectively. C7, C8, C9, C10, C11, and C12 correspond to the right elbow, right wrist, right hand, left elbow, left wrist, and left hand, respectively. C13, C14, C15, C16, C17, C18, C19, and C20 correspond to the right hip, left hip, right knee, right heel, right foot, left knee, left heel, and left foot, respectively. Each bone constituting the skeleton corresponds to a part of the player captured by the sensor 162. For example, the subject information obtaining unit 114 determines the three-dimensional shape of the player based on the sensor information (color image information, depth information) from the sensor 162. Then, each part of the player is estimated using the information on the three-dimensional shape, motion vectors (optical flow) of the image, and the like, and the joint position of each part is estimated. Then, three-dimensional coordinate information of the joint positions of the skeleton is obtained from the two-dimensional coordinates of the pixel position in the depth information corresponding to the determined joint position and the depth value set at that pixel position, and the skeleton information shown in fig. 5 is obtained. By using the obtained skeleton information, the motion of the player (the movement of the position, the movement of the hands and feet, and the like) can be specified. This allows the character to be operated according to the motion of the player. For example, when the player PL in fig. 2 moves his or her hands and feet, the hands and feet of the character (player character) displayed on the display unit 190 also move in conjunction with this. In addition, when the player PL moves back and forth or left and right, the character moves back and forth or left and right in the object space in conjunction with this.
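A minimal sketch of the last step described above, back-projecting a joint's two-dimensional pixel position and its depth value into three-dimensional coordinates, assuming a simple pinhole camera model with hypothetical intrinsic parameters fx, fy, cx, cy:

    def joint_to_3d(u: float, v: float, depth: float,
                    fx: float, fy: float, cx: float, cy: float):
        # Back-project the depth-information pixel (u, v) with depth value
        # `depth` (e.g. in meters) into 3D camera-space coordinates.
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return (x, y, depth)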
In the present embodiment, a composite image of the captured image obtained by photographing the player PL by the color sensor 164 of the sensor 162 and the image of the character (player character) is generated. Specifically, a composite image of a face image (a subject image and a part image in a broad sense) cut out from the captured image and an image of the character is generated.
For example, in fig. 6, a model object of a character CHP (in costume) is prepared, and the face image IMF of the player PL cut out from the captured image is displayed on a polygon SCI (billboard polygon, screen). The model object of the character CHP is constituted by objects of a plurality of parts other than the face (head), and is a character wearing a beautiful costume.
In fig. 6, the polygon SCI displaying the face image IMF is placed on the extension of the line connecting the viewpoint position of the virtual camera VC and the face position of the character CHP. By arranging the virtual camera VC, the character CHP, and the polygon SCI displaying the face image IMF in this positional relationship, an image in which the face image IMF of the player is synthesized at the face portion of the character CHP can be generated.
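A minimal sketch of this arrangement follows, assuming a simple vector computation: the billboard polygon SCI is placed on the extension of the line from the virtual camera viewpoint through the character's face position, oriented back toward the camera. The offset distance and the example coordinates are assumptions for illustration.

```python
# Minimal sketch of the arrangement in fig. 6: place the polygon SCI that
# displays the face image IMF on the extension of the line from the virtual
# camera VC through the face position of the character CHP, facing the camera.
# The offset distance is an assumed parameter.
import numpy as np

def place_face_billboard(cam_pos, face_pos, offset=0.05):
    """Return the position and normal for the billboard polygon SCI."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    face_pos = np.asarray(face_pos, dtype=float)
    view_dir = face_pos - cam_pos
    view_dir /= np.linalg.norm(view_dir)
    sci_pos = face_pos + view_dir * offset   # slightly beyond the face, on the extension line
    normal = -view_dir                       # polygon faces back toward the camera
    return sci_pos, normal

if __name__ == "__main__":
    pos, n = place_face_billboard(cam_pos=[0.0, 1.6, 0.0], face_pos=[0.0, 1.5, 2.0])
    print("SCI position:", pos, "normal:", n)
```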
The method of synthesizing the face image IMF with the character CHP is not limited to the method shown in fig. 6. For example, various image synthesizing methods can be employed, such as mapping the face image IMF (subject image, part image) onto the object constituting the face (specified part) of the character CHP.
For example, fig. 7 (A) shows an example of a captured image of the player PL taken by the color sensor 164 of the sensor 162. From this captured image, the face image IMF of the player PL is cut out (trimmed) and synthesized with the image of the character CHP by the image synthesizing method of fig. 6, whereby the composite image shown in fig. 7 (B) can be generated. In fig. 7 (B), the face portion is the face image IMF of the player PL, and the portions other than the face are the CG image of the character CHP. Thus, the player PL can play the game feeling as if he or she were the character CHP wearing a costume of a fairy tale or cartoon world, which improves the interest of the game and the enthusiasm of the player.
Fig. 8 is an example of a game image generated by the image generation system of the present embodiment and displayed on the display unit 190 of fig. 2. The character CHP, in which the face image IMF of the player PL is synthesized, holds, for example, a magic wand ST in its hand. When the player PL of fig. 2 moves his or her wrist, the motion of the wrist is detected by the sensor 162, and skeleton information reflecting the wrist motion is obtained as the skeleton information of fig. 5. The wrist of the character CHP in the game image of fig. 8 then moves accordingly, so that the enemy characters EN1, EN2, and EN3 can be attacked by striking them with the magic wand ST. The game score of the player is then calculated based on the number of defeated enemies, the bonus points acquired, and the like.
According to the method of this embodiment, the player PL can, through the character CHP representing himself or herself, put on the costume of the hero of a fairy tale or cartoon, feel as if he or she were that hero, and play the game of fig. 8. Therefore, a game that a player PL such as a child can enjoy with enthusiasm can be realized.
2.2 Extraction and setting of skin color information
As shown in fig. 6 to fig. 8, in the present embodiment, a composite character image in which the face image IMF of the player is synthesized with the character CHP is generated, and the character CHP is moved in accordance with the motion of the player PL, whereby the player plays the game. In this composite character image (CHP, IMF), the face portion is replaced with the face image IMF of the player, so that the player can play with the character CHP as a character corresponding to himself or herself.
However, in the composite character image, only the face portion is replaced with the face image IMF of the player, and the other portions are generated as so-called CG (computer graphics) images. Accordingly, there is the problem that the player may not be given the sense that the character is truly himself or herself.
For example, when a player with a fair skin color plays the game, synthesizing the face image IMF makes the color of the face of the character CHP a fair skin color, but the parts other than the face, such as the hands, feet, and chest, retain the colors of the CG image (rendered image) of the character CHP. Therefore, the color of the face of the character CHP may differ from the colors of the hands, feet, chest, and the like, resulting in an image that feels unnatural. Similarly, when a player with a somewhat darker skin color plays the game, synthesizing the face image IMF makes the color of the face of the character CHP a darker skin color, while the parts other than the face, such as the hands, feet, and chest, retain the colors of the CG image of the character CHP. Here too, the color of the face may differ from the colors of the hands, feet, chest, and the like, resulting in an incongruous image. Thus, with a method that merely synthesizes the face image IMF at the face portion of the character CHP, the player cannot be given the sense that the character CHP is truly himself or herself, and an unnatural image may result from the difference in the shades of skin color.
Therefore, in the present embodiment, the following method is adopted: a part image of the player is synthesized with the character CHP, and color information of the other parts of the character CHP is set based on skin color information extracted from that part image.
Specifically, as shown in fig. 9, the face image IMF (broadly, a part image) included in the captured image IM of the player PL is synthesized at the face (broadly, a specified part) of the character CHP, and skin color information is extracted from the face image IMF. For example, skin color information is extracted from the color information of the pixels of the skin-colored portions of the face image IMF. Color information of the parts other than the face is then set based on the extracted skin color information. For example, as shown in fig. 9, the extracted skin color information is set as the color information of the parts (broadly, parts other than the specified part) such as the chest NC (neck, chest), hands AR, AL, and feet LR, LL. That is, the skin color information is used as the basic color information, and the objects of the parts such as the chest NC, hands AR, AL, and feet LR, LL are rendered by the perspective processing.
More specifically, as shown in fig. 10, the face image IMF is cut out from the captured image IM of the entire player PL captured by the color sensor 164 of the sensor 162. This can be achieved by specifying the position of the face of the player PL based on the skeleton information of the player PL described in fig. 5, and cutting (trimming) an image of a region of a specific size centered on the position of the face as a face image IMF. Then, the face image IMF thus cut out is synthesized with the face portion of the character CHP, and the skin color information extracted from the face image IMF is set as basic color information of the parts of the chest NC, hands AR, AL, feet LR, LL, and the like.
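The following is a minimal sketch of this trimming step, assuming the head position obtained from the skeleton information has already been projected to a pixel position in the color captured image. The region size, image resolution, and example pixel coordinates are assumptions, not values from the embodiment.

```python
# Minimal sketch of the trimming in fig. 10: take the head position obtained
# from the skeleton information, treat it as a pixel position in the color
# captured image, and cut out a square region of a given size centered on it
# as the face image IMF. The region size and resolution are assumed values.
import numpy as np

def crop_face_image(captured_image, head_pixel, size=128):
    """captured_image: HxWx3 array, head_pixel: (u, v) center of the face region."""
    h, w = captured_image.shape[:2]
    u, v = head_pixel
    half = size // 2
    left = int(np.clip(u - half, 0, w - size))
    top = int(np.clip(v - half, 0, h - size))
    return captured_image[top:top + size, left:left + size].copy()

if __name__ == "__main__":
    im = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for the captured image IM
    face_img = crop_face_image(im, head_pixel=(330, 120))
    print(face_img.shape)   # (128, 128, 3)
```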
According to the method of this embodiment, for example, when the skin color of the player PL is fair, not only the skin color of the face image IMF synthesized with the character CHP but also the skin colors of the other parts such as the chest, hands, and feet are set to a fair skin color. Similarly, when the skin color of the player PL is somewhat darker, not only the skin color of the face image IMF synthesized with the character CHP but also the skin colors of the other parts such as the chest, hands, and feet are set to a darker skin color. Therefore, according to the present embodiment, not only the face portion but also the other portions can reflect the skin color of the player PL. This gives the player the sense that the character CHP is truly himself or herself. Further, the skin color of the face and the skin colors of the other parts such as the chest, hands, and feet are unified, and the occurrence of an incongruous image caused by a difference in the shades of skin color can be suppressed.
Fig. 11 is a flowchart showing an example of the skin color information extraction processing. First, a face image is cut out from the captured image of the player (step S1). For example, as shown in fig. 10, a face image is cut out from the captured image of the entire player using the skeleton information of fig. 5 or the like. Then, image processing such as resolution reduction processing or blurring processing is performed on the cut-out face image (step S2). For example, the number of pixels of the cut-out face image is reduced in order to reduce the processing load, and blurring filter processing or the like is applied to the face image in order to extract rough color information.
Next, the RGB values of the face image after the image processing are converted into HSV values, and pixels that satisfy the skin color condition are extracted (step S3). For example, for each pixel of the face image, it is determined whether the hue, saturation, and value (brightness) of its HSV value fall within specific ranges corresponding to skin color, thereby determining whether the pixel matches the skin color condition, and the matching pixels are extracted. Then, skin color information serving as the basic skin color is obtained from the color information (RGB values) of the extracted pixels (step S4). For example, when M pixels are extracted as pixels matching the skin color condition, the average of the color information (RGB values) of the M pixels is computed, and this average value is taken as the skin color information of the face image. In this way, the average skin color of the player's face can be extracted as skin color information while excluding the portions of the face that are not skin colored (for example, the eyes and hair). The extracted skin color information is then set as the basic color of the parts such as the chest NC, hands AR, AL, and feet LR, LL in fig. 9 and fig. 10, and perspective processing is performed on the character CHP as the model object, thereby generating an image of the character CHP. In this way, in this embodiment, pixels matching the skin color condition are extracted from the face image IMF (part image) of the player (subject), and the skin color information of the player is obtained from the color information of the extracted pixels.
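The sketch below illustrates steps S2 to S4 in Python: downscale the face image, convert each pixel from RGB to HSV, keep the pixels that satisfy an assumed skin color condition, and average the RGB values of the matching pixels. The HSV thresholds are illustrative assumptions, not values from the embodiment.

```python
# Minimal sketch of steps S2-S4 in fig. 11: reduce the face image, convert
# each pixel from RGB to HSV, keep pixels that satisfy an assumed skin color
# condition, and average the RGB values of the matching pixels.
import colorsys
import numpy as np

def extract_skin_color(face_image, step=4):
    """face_image: HxWx3 uint8 RGB array. Returns (r, g, b) skin color in [0, 1] or None."""
    small = face_image[::step, ::step]          # crude resolution reduction (step S2)
    matches = []
    for row in small.reshape(-1, 3):
        r, g, b = [c / 255.0 for c in row]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)   # step S3: RGB -> HSV
        # assumed skin color condition: hue near red-orange, moderate saturation,
        # sufficient brightness
        if (h <= 0.14 or h >= 0.95) and 0.15 <= s <= 0.75 and v >= 0.35:
            matches.append((r, g, b))
    if not matches:
        return None
    mean = np.mean(matches, axis=0)              # step S4: average color of matching pixels
    return tuple(float(c) for c in mean)

if __name__ == "__main__":
    img = np.full((64, 64, 3), (220, 180, 150), dtype=np.uint8)  # stand-in face image
    print(extract_skin_color(img))
```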
For example, fig. 12 (A) shows an example of the model object information of the character CHP. The model object information includes information on the position and direction of the model object within the object space, as well as information on the plurality of objects OB1, OB2, OB3, ... constituting the model object.
Fig. 12 (B) is an example of the object information of the objects OB1, OB2, OB3, ... constituting the model object of the character CHP. The object information associates each of the objects OB1, OB2, OB3, ... with its basic color and with the data of the polygons constituting the object. In the present embodiment, the skin color information extracted by the method of fig. 11 is set as the basic color information of fig. 12 (B). In other words, perspective processing is performed on each object of the model object with the extracted skin color information as the basic color, and an image of the character CHP is generated.
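A minimal sketch of such a data layout follows, assuming simple Python dataclasses; the field names and part names are hypothetical, not identifiers from the embodiment. The extracted skin color is written into the basic color of the non-face skin parts.

```python
# Minimal sketch of the data of fig. 12 (A) and 12 (B): a model object holds
# its position/direction and a list of part objects; each part object holds
# polygon data and a basic color. The extracted skin color is set as the
# basic color of the objects other than the face. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class PartObject:
    name: str                      # e.g. "chest", "right hand"
    polygons: list                 # vertex/index data of the object (placeholder)
    basic_color: tuple = (1.0, 1.0, 1.0)

@dataclass
class ModelObject:
    position: tuple
    direction: tuple
    parts: list = field(default_factory=list)   # OB1, OB2, OB3, ...

def apply_skin_color(model, skin_color,
                     skin_parts=("chest", "right hand", "left hand",
                                 "right foot", "left foot")):
    """Set the extracted skin color as the basic color of the non-face skin parts."""
    for part in model.parts:
        if part.name in skin_parts:
            part.basic_color = skin_color

if __name__ == "__main__":
    chp = ModelObject(position=(0, 0, 2), direction=(0, 0, -1),
                      parts=[PartObject("chest", []), PartObject("right hand", [])])
    apply_skin_color(chp, skin_color=(0.86, 0.71, 0.59))
    print([(p.name, p.basic_color) for p in chp.parts])
```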
In the present embodiment, the character CHP displayed on the display unit 190 in fig. 13 is a model object composed of a plurality of objects. For example, the image of the character CHP is generated from three-dimensional CG objects constituted by primitive surfaces such as polygons. As shown in fig. 13, an image in which the face image IMF of the player PL (subject) is displayed at the face position of the model object is generated as the image of the character CHP. Such a composite character image of the character CHP and the face image IMF can be generated by the image compositing method described with reference to fig. 6, for example.
Then, as shown in fig. 13, when the player PL (subject) moves, a composite image in which the composite position of the face image IMF (part image) changes within the composite image is generated in accordance with the movement of the player PL and displayed on the display unit 190. That is, when the player PL moves to the right and the face moves to the right, the face image IMF displayed on the display section 190 moves to the right accordingly. Similarly, when the player PL moves to the left and the face moves to the left, the face image IMF moves to the left accordingly.
Specifically, in the present embodiment, the motion of the character CHP is controlled using the skeleton information described in fig. 5. For example, the character CHP is moved by performing motion playback of the character CHP based on motion data given by the skeleton information.
Fig. 14 is a diagram schematically illustrating the skeleton. The skeleton for moving the character CHP is a three-dimensional skeleton arranged at the position of the character CHP, and SK denotes the projection of this skeleton onto the screen SCS. The skeleton is constituted by a plurality of joints as illustrated in fig. 5, and the positions of the joints are represented by three-dimensional coordinates. The bones of the skeleton move in conjunction with the motion of the player PL of fig. 13. When a bone of the skeleton moves, the part of the character CHP corresponding to that bone moves in association with it. For example, when the player PL moves a wrist, the bone of the wrist of the skeleton moves, and the wrist of the character CHP also moves. When the player PL moves a foot, the corresponding bones of the skeleton move, and the foot of the character CHP also moves. When the player PL moves the head, the bone of the head of the skeleton moves, and the head of the character CHP also moves. In fig. 14, for example, the sensor 162 is disposed at the viewpoint position of the virtual camera VC, and the imaging direction of the sensor 162 is set to be directed along the line of sight of the virtual camera VC.
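As a simplified illustration of this linkage, the sketch below copies, each frame, the joint positions obtained from the sensor-side skeleton onto the corresponding bones of the character's skeleton, so the character's parts follow the player. The Bone class and the one-to-one joint mapping are simplifying assumptions, not structures defined in the embodiment.

```python
# Minimal sketch of the linkage in fig. 14: each frame, joint positions from
# the sensor-side skeleton are copied to the corresponding bones of the
# character's skeleton so that the character CHP moves with the player PL.

class Bone:
    def __init__(self, name):
        self.name = name
        self.position = (0.0, 0.0, 0.0)

def update_character_skeleton(character_bones, sensor_joints):
    """character_bones: dict name -> Bone; sensor_joints: dict name -> (x, y, z)."""
    for name, pos in sensor_joints.items():
        bone = character_bones.get(name)
        if bone is not None:
            bone.position = pos            # the bone, and thus the part, follows the player

if __name__ == "__main__":
    bones = {"head": Bone("head"), "right wrist": Bone("right wrist")}
    joints = {"head": (0.0, 1.6, 2.0), "right wrist": (0.3, 1.1, 1.8)}
    update_character_skeleton(bones, joints)
    print(bones["right wrist"].position)
```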
Thus, in the present embodiment, the character CHP moves in conjunction with the motion of the player PL. Therefore, when the player PL moves the head, the face image IMF displayed on the display section 190 also moves in conjunction with this. In the present embodiment, the skeleton information described in fig. 5 and fig. 14 is used to specify the position of the face (head) of the player PL; based on the specified face position, the face image IMF is cut out from the captured image IM as shown in fig. 9 and fig. 10, synthesized with the character CHP, and the skin color information is extracted from the face image IMF.
In the present embodiment, a model object composed of a plurality of objects is used as the character CHP. In the object space (three-dimensional virtual space), an image of the character CHP is generated by performing perspective processing on the model object arranged at a position corresponding to the position of the player PL (subject). For example, in fig. 15, the character CHP is disposed in the object space at a position corresponding to the position of the player PL in fig. 13. When the player PL moves back and forth and left and right, the character CHP can also move back and forth and left and right in the object space.
Then, as shown in fig. 15, the perspective processing of the character CHP as the model object is performed according to an illumination model using the light source LS. Specifically, illumination processing (shading processing) is performed based on the illumination model. The illumination processing is performed using the information of the light source LS (light source vector, light source color, luminance, light source type, etc.), the line-of-sight vector of the virtual camera VC, the normal vectors of the objects constituting the character CHP, the materials (color, material properties) of the objects, and the like. As the illumination model, there are, for example, the Lambert diffuse illumination model, which considers only ambient light and diffuse light, the Phong illumination model, which considers specular reflection light in addition to ambient light and diffuse light, and the Blinn-Phong illumination model.
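The following is a minimal sketch of such a shading computation: an ambient term plus a Lambert diffuse term, with an optional Phong specular term, applied to the basic color set for an object. The light direction, intensities, and shininess exponent are assumed example parameters, not values from the embodiment.

```python
# Minimal sketch of illumination processing with a Lambert diffuse term and an
# optional Phong specular term. The basic color is the skin color set for the
# object; light and material parameters are assumed example values.
import numpy as np

def shade(basic_color, normal, light_dir, view_dir,
          light_color=(1.0, 1.0, 1.0), ambient=0.2, specular=0.3, shininess=16):
    n = np.asarray(normal, float); n /= np.linalg.norm(n)
    l = np.asarray(light_dir, float); l /= np.linalg.norm(l)
    v = np.asarray(view_dir, float); v /= np.linalg.norm(v)
    base = np.asarray(basic_color, float)
    lc = np.asarray(light_color, float)

    diff = max(0.0, float(np.dot(n, l)))               # Lambert diffuse factor
    r = 2.0 * diff * n - l                             # reflection vector for Phong
    spec = max(0.0, float(np.dot(r, v))) ** shininess if diff > 0.0 else 0.0

    color = base * (ambient + diff) * lc + specular * spec * lc
    return np.clip(color, 0.0, 1.0)

if __name__ == "__main__":
    skin = (0.86, 0.71, 0.59)   # extracted skin color used as the basic color
    print(shade(skin, normal=(0, 0, 1), light_dir=(0, 0.5, 1), view_dir=(0, 0, 1)))
```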
By performing the illumination processing based on such an illumination model, a vivid, high-quality image in which shading is appropriately applied to the character CHP by the light source LS can be generated. For example, by performing illumination processing using a spotlight light source, an image of the character CHP (costume) illuminated by the spotlight can be generated, and the player's sense of virtual reality can be improved.
In the present embodiment, the skin color information extracted from the face image IMF is set as the basic color of the objects other than the face, such as the chest, hands, and feet (fig. 12 (B)), and the perspective processing shown in fig. 15 is performed. By using the set skin color information as the basic color in this way, a vivid image of the character CHP, shaded by the illumination processing using the light source LS of fig. 15 and the like, can be generated.
As described above, in the present embodiment, a composite image is generated in which the face image IMF (part image of the specified part) included in the captured image IM of the player (subject) is synthesized with the image of the character CHP. At this time, as described with reference to fig. 9 to fig. 11, skin color information is extracted from the face image IMF (the part image of the specified part of the subject). Color information of the parts of the character CHP other than the face (parts other than the specified part) is then set based on the extracted skin color information, an image of the character CHP is generated, and a composite image in which the face image IMF is synthesized with the image of the character CHP is generated.
At this time, as described with reference to fig. 12 (A) and 12 (B), the character CHP is a model object composed of a plurality of objects. Among the plurality of objects of the model object, the color information of the objects of the parts other than the face (parts other than the specified part) is set based on the skin color information of the player. That is, the color information of the objects of the chest, hands, feet, and the like is set based on the extracted skin color information. Then, as shown in fig. 15, the perspective processing of the model object is performed, and an image of the character CHP as the model object is generated.
In the present embodiment, subject information for specifying the motion of the player PL (subject) is obtained from the sensor information from the sensor 162. Then, as described with reference to fig. 13, an image of the character CHP that moves in accordance with the motion of the player PL specified by the subject information is generated.
For example, the face of the player (the specified part of the subject) is specified based on the subject information, and, as shown in fig. 10, the face image IMF (part image) is cut out from the captured image IM of the player PL and synthesized with the image of the character CHP. At this time, the extraction processing of the skin color information of the player PL (subject) is performed based on the face image IMF (the part image of the specified part) specified by the subject information.
More specifically, the skeleton information of the player PL described in fig. 5 and fig. 14 is obtained as the subject information based on the sensor information from the sensor 162. Then, as shown in fig. 16, the extraction processing of the skin color information is performed based on the face image IMF specified based on the skeleton information (SK). For example, in fig. 16, the position of the head (C6) of the player PL can be specified based on the skeleton information (SK). That is, as shown in fig. 13, even when the head position moves because the player PL moves, the sensor 162 detects the motion of the player PL to obtain skeleton information, so the moving head position can be tracked. Accordingly, the position of the head of the player PL is specified using the skeleton information, the face image IMF is cut out from the captured image IM captured by the color sensor 164 using the specified position, and the skin color information can be extracted. That is, in a game in which the character CHP moves in conjunction with the motion of the player PL, the face image IMF of the player PL moving forward, backward, left, right, and so on is tracked, skin color information is extracted from the face image IMF, and the extracted skin color information can be set as the basic skin color of the other parts of the moving character CHP.
2.3 Correction processing
In the present embodiment, as shown in fig. 9 and fig. 10, skin color information is extracted from the face image IMF of the player PL and is set as the basic color of the other parts, such as the chest, hands, and feet, of the character CHP with which the face image IMF is synthesized. At this time, a color difference may arise between the color of the image of the character CHP and the color of the face image IMF. For example, in fig. 17, the color may change discontinuously in the boundary region BD between the face image IMF and the chest (chest, neck) of the character CHP.
For example, in fig. 17, let the skin color of the face image IMF be CLF and the skin color of the chest of the character CHP be CLC. These colors CLF and CLC are not exactly the same. The color CLC is set based on the skin color information extracted from the face image IMF, but the extracted skin color and the color CLC are not strictly the same color. As described with reference to fig. 15, the color CLC of the chest of the character CHP is a color to which shading has been applied by the illumination processing based on the illumination model. Therefore, the color CLF of the face image and the color CLC of the chest are not the same color, resulting in a difference in color.
Therefore, in the present embodiment, correction processing is performed in the boundary region BD between the image of the character CHP and the face image IMF (part image) so as to make such a color difference inconspicuous. For example, correction processing (gradation processing) that gradually changes the color from the color CLF of the face image IMF to the color CLC of the chest is performed.
Specifically, as this correction processing, semitransparent synthesis processing (alpha blending) of the color CLC of the image of the character CHP and the color CLF of the face image IMF (part image) is performed in the boundary region BD. For example, let α be the blending ratio of the color CLC, β be the blending ratio of the color CLF, and CL be the resulting color. Then, in fig. 17, the semitransparent synthesis processing expressed by CL = α × CLC + β × CLF is performed as the correction processing. Here, the blending ratio α is a value that increases from the position of the face image IMF toward the position of the chest, and the blending ratio β is a value that increases from the position of the chest toward the position of the face image IMF; for example, the relation β = 1 - α holds.
In this way, the color CL of the boundary region BD gradually changes from the color CLF to the color CLC as the position moves from the face image IMF toward the chest of the character CHP. Therefore, an unnatural image caused by a discontinuous color change in the boundary region BD can be effectively suppressed.
Further, this semitransparent synthesis processing is preferably performed at the time of the image combining processing of the image of the character CHP and the face image IMF illustrated in fig. 6. That is, as shown in fig. 18, in the image combining processing of the image of the character CHP and the face image IMF, the blending ratio α is set for the character CHP and the blending ratio β is set for the face image IMF. In the region of the character CHP other than the boundary region BD of fig. 17, α = 1 and β = 0 are set. Similarly, in the region of the face image IMF other than the boundary region BD, β = 1 and α = 0 are set. In the boundary region BD, 0 < α < 1 and 0 < β < 1 hold, and the semitransparent synthesis processing expressed by CL = α × CLC + β × CLF is performed. In this way, the semitransparent synthesis processing in the boundary region BD can be executed as part of the image combining processing of the image of the character CHP and the face image IMF, improving the efficiency of the processing.
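A minimal sketch of this blending follows: across the boundary region BD, the blend ratio α rises from 0 on the face image side to 1 on the character side, β = 1 - α, and the blended color is CL = α × CLC + β × CLF. The width of the boundary region and the example colors are assumptions.

```python
# Minimal sketch of the correction processing of fig. 17 / fig. 18: per row of
# the boundary region BD, alpha rises from 0 (face image side) to 1 (character
# side), beta = 1 - alpha, and the blended color is CL = alpha*CLC + beta*CLF.
import numpy as np

def blend_boundary(clf, clc, rows=16):
    """clf: face image color CLF, clc: character (chest) color CLC.
    Returns one blended color per row of the boundary region."""
    clf = np.asarray(clf, float)
    clc = np.asarray(clc, float)
    alphas = np.linspace(0.0, 1.0, rows)        # 0 at the face image, 1 at the chest
    return [tuple(a * clc + (1.0 - a) * clf) for a in alphas]

if __name__ == "__main__":
    CLF = (0.88, 0.72, 0.60)    # skin color of the face image IMF
    CLC = (0.80, 0.66, 0.55)    # shaded skin color of the chest of the character CHP
    for color in blend_boundary(CLF, CLC, rows=5):
        print(tuple(round(c, 3) for c in color))
```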
In the present embodiment, the boundary region BD may be subjected to a decoration process as the correction process.
For example, in fig. 19, decoration processing is performed on the boundary region BD between the face image IMF and the chest of the character CHP; for example, a decorative display object AC (object, accessory) such as a scarf or necklace is arranged so as to cover the boundary region BD. In this way, the decorative display object AC can hide the boundary region BD. That is, the decorative display object AC hides the discontinuous color change in the boundary region BD, so that an unnatural image can be effectively suppressed.
The display object used in the decoration processing may be a display object of an item possessed by the player, or a display object associated with the character CHP. Further, as the decoration processing, image effect processing having a decorative effect may be performed on the boundary region BD.
Various methods are conceivable as the correction processing for the boundary region BD. For example, various modifications are possible, such as applying blurring filter processing to the image of the boundary region BD, or performing averaging processing (smoothing processing) of color information so as to match the brightness of the face image IMF and that of the chest image.
In the present embodiment, the extracted skin color information may be corrected based on environmental information such as the brightness of the environment at the time of photographing by the sensor 162 (color sensor 164). For example, when the photographing environment is dark, the skin color information extracted from a part image such as the face image may be darker than the actual skin color of the player. In this case, the skin color information extracted from the part image is corrected toward a brighter skin color. Conversely, when the photographing environment is bright, the skin color information extracted from the part image such as the face image may be brighter than the actual skin color of the player. In this case, the skin color information extracted from the part image is corrected toward a darker skin color. In this way, the skin color set for the hands, feet, chest, and the like of the character CHP can be made closer to the actual skin color of the player, and a more appropriate character image can be generated.
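One possible way to realize such a correction is sketched below: the extracted skin color is scaled by the ratio of a reference brightness to the measured environment brightness, with the gain clamped to a limited range. The reference brightness, clamp range, and example values are assumptions, not parameters specified in the embodiment.

```python
# Minimal sketch of correcting the extracted skin color based on the brightness
# of the photographing environment: a dark environment brightens the extracted
# color, a bright environment darkens it. Reference and clamp values are assumed.
import numpy as np

def correct_for_environment(skin_color, env_brightness, reference=0.5,
                            min_gain=0.7, max_gain=1.4):
    """skin_color: (r, g, b) in [0, 1]; env_brightness: measured scene brightness in [0, 1]."""
    gain = reference / max(env_brightness, 1e-3)        # dark scene -> gain > 1 (brighten)
    gain = float(np.clip(gain, min_gain, max_gain))
    corrected = np.clip(np.asarray(skin_color, float) * gain, 0.0, 1.0)
    return tuple(corrected)

if __name__ == "__main__":
    extracted = (0.62, 0.48, 0.40)          # skin color extracted in a dim room
    print(correct_for_environment(extracted, env_brightness=0.3))
```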
In the present embodiment, correction processing may also be performed that changes, according to the game situation, the skin color information of a part image such as the face image, or the skin color information set for the other parts based on it. For example, depending on the game situation, the skin of the player's character CHP may be given a healthy flush when the player is doing well, or made pale when the game is going badly. In this way, the skin color of the parts of the character CHP (hands, feet, chest, face, etc.) is not fixed to a single color but can be changed according to the game situation.
3. Detailed processing
Next, a detailed processing example of the present embodiment will be described with reference to the flowchart of fig. 20.
First, the position of the face of the player is specified based on the skeleton information of the player, and a face image is cut out from the captured image of the player based on the specified position (step S11). For example, using the skeleton information described in fig. 5, fig. 14, and fig. 16, the position of the face image IMF of the player in the captured image (color image) is specified, and the face image IMF is cut out by trimming a region of a given size centered on that position.
Next, skin color information is extracted from the face image (step S12). For example, skin color information is extracted from the face image by the method described in fig. 11 or the like. Then, the basic skin color of the parts of the character other than the face is set based on the extracted skin color information (step S13). For example, as described in fig. 9 and fig. 10, the basic skin color of the parts other than the face, such as the chest NC, hands AR, AL, and feet LR, LL, is set. The basic skin color is set, for example, for each object in association with the object information of fig. 12 (B).
Then, perspective processing of the character is performed according to the illumination model, and an image of the character is generated (step S14). For example, as described with reference to fig. 15, the illumination processing using the light source LS is performed together with the perspective processing, so that a rendered (shaded) image of the character is generated. Then, processing for combining the image of the character and the face image is performed, and at this time correction processing is performed in the boundary region between the image of the character and the face image (step S15). For example, as shown in fig. 18, the image synthesis processing of the image of the character CHP and the face image IMF is performed using the blend ratios α, β, and, as shown in fig. 17, the correction processing (semitransparent synthesis processing) is performed using the blend ratios α, β in the boundary region BD between the image of the character CHP and the face image IMF.
Although the present embodiment has been described in detail, it will be readily apparent to those skilled in the art that various modifications are possible without substantially departing from the novel aspects and effects of the present invention. Such modifications are therefore all intended to be included within the scope of the present invention. For example, a term (player, face image, etc.) that is described at least once in the specification or drawings together with a broader or synonymous different term (subject, specified part, part image, etc.) may be replaced with that different term at any position in the specification or drawings. The extraction processing of skin color information, the image synthesis processing, the setting processing of color information, the correction processing, the perspective processing, the processing for specifying the motion of the subject, and the like are not limited to those described in the present embodiment, and methods equivalent thereto are also included in the scope of the present invention.

Claims (8)

1. An image generation system, comprising:
an input processing unit for obtaining an image of a subject;
an extraction processing unit for performing extraction processing of color information;
an image generation unit that generates a composite image by combining a part image of a specified part of the subject, which is included in the captured image of the subject, with an image of a character, the composite image being an image in which the specified part of the subject is the part image and the parts other than the specified part are a computer-drawn image of the character; and
A subject information obtaining unit that obtains subject information for specifying the motion of the subject based on sensor information from a sensor,
the image generation unit specifies the specified part of the subject based on the subject information and cuts out the part image from the captured image of the subject,
the extraction processing unit performs extraction processing of skin color information of the subject based on the part image of the specified part of the subject cut out based on the subject information,
the image generation unit sets color information of the parts of the character other than the specified part based on the extracted skin color information, generates an image of the character, and, by combining the image of the character with the part image, generates the composite image in which the composite position of the part image changes in accordance with the motion of the subject,
the character is a model object made up of a plurality of objects,
the image generation unit sets color information of the objects other than the specified part among the plurality of objects of the model object based on the skin color information of the subject, and generates an image of the model object by performing perspective processing of the model object,
the composite image is an image in which the character as the model object moves in accordance with the motion of the subject specified by the subject information.
2. The image generation system according to claim 1, wherein
the subject information obtaining section obtains skeleton information of the subject as the subject information based on the sensor information,
the extraction processing unit performs extraction processing of the skin color information of the subject based on the part image of the specified part specified based on the skeleton information.
3. The image generation system according to claim 1 or 2, characterized in that,
the image generation unit performs correction processing on a boundary region between the image of the character and the part image.
4. The image generation system of claim 3, wherein,
the image generating unit performs, as the correction processing, a semitransparent synthesis processing of color information of the image of the character and color information of the part image in the boundary area.
5. The image generation system of claim 3, wherein,
the image generation unit performs a decoration process on the boundary region as the correction process.
6. The image generation system according to claim 1 or 2, characterized in that,
the extraction processing unit extracts pixels matching skin color conditions from the part image of the subject, and obtains the skin color information of the subject based on the color information of the extracted pixels.
7. An image processing method, characterized by:
input processing to obtain an image of a subject;
extraction processing of extracting color information;
image generation processing of generating a composite image by combining a part image of a specified part of the subject, which is included in the captured image of the subject, with an image of a character, the composite image being an image in which the specified part of the subject is the part image and the parts other than the specified part are a computer-drawn image of the character; and
a subject information obtaining process of obtaining subject information for specifying an operation of the subject based on sensor information from a sensor,
in the image generation processing, the specified part of the subject is specified based on the subject information, and the part image is cut out from the captured image of the subject,
in the extraction processing, extraction processing of skin color information of the subject is performed based on the part image of the specified part of the subject cut out based on the subject information,
in the image generation processing, color information of the parts of the character other than the specified part is set based on the extracted skin color information, an image of the character is generated, and, by combining the image of the character with the part image, the composite image in which the composite position of the part image changes in accordance with the motion of the subject is generated,
the character is a model object made up of a plurality of objects,
in the image generation processing, color information of the objects other than the specified part among the plurality of objects of the model object is set based on the skin color information of the subject, and perspective processing of the model object is performed to generate an image of the model object,
the composite image is an image in which the character as the model object moves in accordance with the motion of the subject specified by the subject information.
8. A computer-readable information storage medium storing a program for causing a computer to function as each of the following sections,
an input processing unit for obtaining an image of a subject;
an extraction processing unit for performing extraction processing of color information;
an image generation unit that generates a composite image by combining a part image of a specified part of the subject, which is included in the captured image of the subject, with an image of a character, the composite image being an image in which the specified part of the subject is the part image and the parts other than the specified part are a computer-drawn image of the character; and
a subject information obtaining unit that obtains subject information for specifying the motion of the subject based on sensor information from a sensor,
the image generation unit specifies the specified part of the subject based on the subject information and cuts out the part image from the captured image of the subject,
the extraction processing unit performs extraction processing of skin color information of the subject based on the part image of the specified part of the subject cut out based on the subject information,
the image generation unit sets color information of the parts of the character other than the specified part based on the extracted skin color information, the character being a model object composed of a plurality of objects, generates an image of the character, and, by combining the image of the character with the part image, generates the composite image in which the composite position of the part image changes in accordance with the motion of the subject,
the image generation unit sets color information of the objects other than the specified part among the plurality of objects of the model object based on the skin color information of the subject, and generates an image of the model object by performing perspective processing of the model object,
the composite image is an image in which the character as the model object moves in accordance with the motion of the subject specified by the subject information.
CN201710064323.0A 2016-02-05 2017-02-04 Image generation system and image processing method Active CN107067457B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016021039A JP6370820B2 (en) 2016-02-05 2016-02-05 Image generation system, game device, and program.
JP2016-021039 2016-02-05

Publications (2)

Publication Number Publication Date
CN107067457A CN107067457A (en) 2017-08-18
CN107067457B true CN107067457B (en) 2024-04-02

Family

ID=59565021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710064323.0A Active CN107067457B (en) 2016-02-05 2017-02-04 Image generation system and image processing method

Country Status (2)

Country Link
JP (1) JP6370820B2 (en)
CN (1) CN107067457B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108479070A (en) * 2018-03-30 2018-09-04 百度在线网络技术(北京)有限公司 Dummy model generation method and device
CN110579222B (en) * 2018-06-07 2022-03-15 百度在线网络技术(北京)有限公司 Navigation route processing method, device and equipment
US11831854B2 (en) 2018-12-17 2023-11-28 Sony Interactive Entertainment Inc. Information processing system, information processing method, and computer program
JP2020149174A (en) * 2019-03-12 2020-09-17 ソニー株式会社 Image processing apparatus, image processing method, and program
CN110286975B (en) * 2019-05-23 2021-02-23 华为技术有限公司 Display method of foreground elements and electronic equipment
CN111210490B (en) * 2020-01-06 2023-09-19 北京百度网讯科技有限公司 Electronic map construction method, device, equipment and medium
CN113413594A (en) * 2021-06-24 2021-09-21 网易(杭州)网络有限公司 Virtual photographing method and device for virtual character, storage medium and computer equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001096062A (en) * 2000-06-28 2001-04-10 Kce Japan:Kk Game system
JP2001292305A (en) * 2000-02-02 2001-10-19 Casio Comput Co Ltd Image data synthesizer, image data synthesis system, image data synthesis method and recording medium
JP2011203835A (en) * 2010-03-24 2011-10-13 Konami Digital Entertainment Co Ltd Image generating device, image processing method, and program
CN103127717A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Method and system for control and operation of game
JP2014016886A (en) * 2012-07-10 2014-01-30 Furyu Kk Image processor and image processing method
CN103731601A (en) * 2012-10-12 2014-04-16 卡西欧计算机株式会社 Image processing apparatus and image processing method
JP2015093009A (en) * 2013-11-11 2015-05-18 株式会社バンダイナムコゲームス Program, game device, and game system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09106419A (en) * 1995-08-04 1997-04-22 Sanyo Electric Co Ltd Clothes fitting simulation method
JP4521101B2 (en) * 2000-07-19 2010-08-11 デジタルファッション株式会社 Display control apparatus and method, and computer-readable recording medium recording display control program
JP2003342820A (en) * 2002-05-22 2003-12-03 B's Japan:Kk Coordinate system, method, program recording medium and program
GB201102794D0 (en) * 2011-02-17 2011-03-30 Metail Ltd Online retail system
JP2013219544A (en) * 2012-04-09 2013-10-24 Ricoh Co Ltd Image processing apparatus, image processing method, and image processing program
JP6018707B2 (en) * 2012-06-21 2016-11-02 マイクロソフト コーポレーション Building an avatar using a depth camera

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001292305A (en) * 2000-02-02 2001-10-19 Casio Comput Co Ltd Image data synthesizer, image data synthesis system, image data synthesis method and recording medium
JP2001096062A (en) * 2000-06-28 2001-04-10 Kce Japan:Kk Game system
JP2011203835A (en) * 2010-03-24 2011-10-13 Konami Digital Entertainment Co Ltd Image generating device, image processing method, and program
CN103127717A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Method and system for control and operation of game
JP2014016886A (en) * 2012-07-10 2014-01-30 Furyu Kk Image processor and image processing method
CN103731601A (en) * 2012-10-12 2014-04-16 卡西欧计算机株式会社 Image processing apparatus and image processing method
JP2015093009A (en) * 2013-11-11 2015-05-18 株式会社バンダイナムコゲームス Program, game device, and game system

Also Published As

Publication number Publication date
CN107067457A (en) 2017-08-18
JP6370820B2 (en) 2018-08-08
JP2017138913A (en) 2017-08-10

Similar Documents

Publication Publication Date Title
CN107067457B (en) Image generation system and image processing method
CN107045711B (en) Image generation system and image processing method
JP5128276B2 (en) GAME DEVICE, GAME PROGRAM, COMPUTER-READABLE INFORMATION STORAGE MEDIUM, GAME SYSTEM, AND GAME PROCESSING METHOD
US9495800B2 (en) Storage medium having stored thereon image processing program, image processing apparatus, image processing system, and image processing method
JP5145444B2 (en) Image processing apparatus, image processing apparatus control method, and program
JP2019510297A (en) Virtual try-on to the user&#39;s true human body model
US20090244064A1 (en) Program, information storage medium, and image generation system
JP2017138915A (en) Image generation system and program
JP5469516B2 (en) Image display program, image display system, image display method, and image display apparatus
US10896322B2 (en) Information processing device, information processing system, facial image output method, and program
JP2011258159A (en) Program, information storage medium and image generation system
JP2010033298A (en) Program, information storage medium, and image generation system
JP2007226576A (en) Program, information storage medium and image generation system
JP4804122B2 (en) Program, texture data structure, information storage medium, and image generation system
JP6732463B2 (en) Image generation system and program
CN112104857A (en) Image generation system, image generation method, and information storage medium
JP2020107251A (en) Image generation system and program
JP2007148567A (en) Image processing method and image processing device
JP2010029397A (en) Program, information storage medium and image generation system
JP3413383B2 (en) GAME SYSTEM AND INFORMATION STORAGE MEDIUM
JP7104539B2 (en) Simulation system and program
JP2006252426A (en) Program, information storage medium, and image generation system
KR20220025048A (en) Game system, processing method and information storage medium
TW200938270A (en) Image generating device, method for generating image, and information recording medium
JP4786389B2 (en) Program, information storage medium, and image generation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant