WO2020145021A1 - Program, information processing device, and game system - Google Patents

Program, information processing device, and game system Download PDF

Info

Publication number
WO2020145021A1
Authority
WO
WIPO (PCT)
Prior art keywords
character
partial
dimensional shape
information
elements
Prior art date
Application number
PCT/JP2019/048719
Other languages
French (fr)
Japanese (ja)
Inventor
剛 廣瀬
孝 名倉
Original Assignee
株式会社バンダイ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社バンダイ
Publication of WO2020145021A1

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/90: Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F13/95: Storage media specially adapted for storing game information, e.g. video game cartridges
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

Definitions

  • The present invention relates to a program, an information processing device, and a game system, and more particularly to a technique for generating a three-dimensional shape based on a two-dimensional image.
  • In the so-called VR space, there is a technique that makes it possible to generate a three-dimensional shape as if sculpting it (Patent Document 1).
  • However, the generation method of Patent Document 1 basically requires large-scale equipment to introduce, and the user must repeat the model-forming operations while changing the viewing position to confirm whether the desired shape has been obtained, which can give the impression that the method is cumbersome.
  • In contrast, if the user draws a character on a paper medium or the like, the two-dimensional image drawn on the medium is read in, and a three-dimensional shape is generated based on that image, the burden placed on the user can be reduced. However, even if a three-dimensional shape having a uniform size in the depth direction (the direction orthogonal to the drawing surface of the paper medium) is formed for the entire character image contained in the two-dimensional image, the result may not have the appearance the user desired.
  • An object of the present invention is to provide a program, an information processing device, and a game system that easily generate a three-dimensional shape having a suitable appearance.
  • A program of the present invention is a program for generating a three-dimensional shape of a character, the shape being composed of a set of elements that serve as its constituent units. The program causes a computer to execute: a process of recognizing, for a drawing area in which a two-dimensional image of the character is drawn, information of the drawn character for each partial area provided in the drawing area; a process of determining, for each partial area, the number of depth-direction elements used to construct the three-dimensional shape of the character corresponding to that partial area, based on the recognized character information; and a process of generating the three-dimensional shape of the character by combining the sets of elements constructed for each of the partial areas based on the determined number of depth-direction elements.
  • FIG. 1 is a block diagram showing the functional configuration of the smartphone 100 according to an embodiment and a modification of the present invention. FIG. 2 is a diagram showing a configuration example of the game card according to the embodiment and modification. FIG. 3 is a diagram illustrating a three-dimensional shape formed based on the information acquired from the game card according to the embodiment and modification. FIG. 4 is a diagram for explaining the conversion of one part of the character into a three-dimensional part according to the embodiment and modification. FIG. 5 is a diagram for explaining the conversion of another part of the character into a three-dimensional part according to the embodiment and modification. FIG. 6 is a flowchart illustrating the generation processing executed in the smartphone 100 according to the embodiment and modification.
  • The embodiment described below is an example in which the present invention is applied to a smartphone, as an example of an information processing device, that can acquire information from a two-dimensional image drawn on a game card and register a corresponding character. However, the present invention can be applied to any device capable of generating a corresponding three-dimensional shape based on information recognized from a drawing area in which a two-dimensional image of a character is drawn.
  • In the present embodiment, information is described as being acquired from a game card, which is a physical article (game article), in a game application executed on the smartphone 100. However, the article is not limited to a game card as long as it is an article on which a two-dimensional image of a character can be drawn. For example, the game article may be any article on which the user can draw the information necessary for generating the three-dimensional shape of the character. Here, "drawing" on a game article is not limited to the user adding a two-dimensional image of a character to a drawing area provided on the article with a predetermined writing instrument; any method that adds colored information to the drawing area may be used, such as stamping, printing with a predetermined printing device, or attaching a sticker.
  • The control unit 101 is, for example, a CPU, and controls the operation of each block included in the smartphone 100. Specifically, the control unit 101 controls the operation of each block by reading out the operation program of each block or the program related to the game application recorded in the recording medium 102, loading it into the memory 103, and executing it.
  • In the present embodiment, the game application is assumed to register a character based on the information acquired from a game card and to be able to provide a play experience of a game in which that character appears.
  • The recording medium 102 is a recording device such as a non-volatile memory or an HDD that can hold data permanently.
  • The recording medium 102 stores, in addition to the operation program of each block included in the smartphone 100, information such as parameters necessary for the operation of each block and various graphics data used in the game executed by the smartphone 100.
  • The memory 103 is a storage device used for temporary data storage, such as a volatile memory.
  • The memory 103 is used not only as a loading area for the operation programs of the blocks, but also as a storage area for temporarily storing data output during the operation of each block.
  • The image pickup unit 104 is an image pickup device unit having an image sensor such as a CCD or a CMOS sensor, and in the present embodiment is used to acquire the information attached to the game card.
  • During execution of the game application, the image pickup unit 104 captures an image of a predetermined surface of the game card and outputs a captured image containing the information attached to the card. The image capturing is performed intermittently until the analysis unit 105, described later, determines that the entire article presumed to be a game card falls within the imaging range and that an appropriate captured image has been obtained.
  • In the game application of the present embodiment, information acquisition from the game card is described as being performed by imaging, but the two-dimensional image in which the character is drawn does not necessarily have to be acquired from a physical article such as a game card. It goes without saying that, for example, digital data (a two-dimensional image) in which the character is drawn may be acquired using a pen-type pointing device or the like.
  • The analysis unit 105 applies predetermined image processing to the captured image output from the image pickup unit 104 and analyzes whether the captured card is a proper game card.
  • As will be described in detail later, a game card usable in the game application of the present embodiment has the configuration shown in FIG. 2(a). The analysis unit 105 first detects the code 201 in the captured image and, based on the position and size of the code in the captured image, determines whether the entire game card is included in the imaging range. If the entire game card is included in the imaging range, it is judged that an appropriate captured image for information acquisition from the game card has been obtained, and control is performed to complete the imaging by the image pickup unit 104.
  • The analysis unit 105 also extracts, from the image information of the game card contained in the captured image, the image information of the code 201 and the image information of the drawing area 203 inside the frame 202 described later, and determines, based on the former, whether the game card is one that can be used in the game application of the present embodiment, that is, whether it is a proper game card. The latter image information is output to the recognition unit 106.
  • In the case of a proper game card, the code 201 contains identification information unique to the game application that uniquely identifies the game card, so that at least its usability in the game application can be determined. In the present embodiment, the code 201 is a two-dimensional code, and the analysis unit 105 can decode the encoded information and acquire the identification information by applying a predetermined conversion process to the image information of the code 201. The analysis unit 105 determines whether the captured game card is a proper game card depending on whether the code 201 contains identification information related to the game application.
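  • As an illustration only, a minimal Python sketch of this validation step is given below. It assumes that the code 201 is realized as an ordinary QR code and that OpenCV is used for detection and decoding; the identifier prefix GAME_APP_ID is a hypothetical value, since the publication does not specify the code format or the structure of the identification information.

      import cv2

      GAME_APP_ID = "EXAMPLE-GAME-APP"   # hypothetical identification prefix, not from the publication

      def analyze_card(captured_image):
          """Detect code 201 in the captured image and check whether the card is a proper game card."""
          detector = cv2.QRCodeDetector()
          data, points, _ = detector.detectAndDecode(captured_image)
          if points is None:
              # Code not found: the card is presumably not yet fully within the imaging range.
              return None
          if not data.startswith(GAME_APP_ID):
              # The decoded identification information does not belong to this game application.
              return None
          return data    # card-specific identification information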
  • The recognition unit 106 performs processing for recognizing the information necessary for generating the three-dimensional shape of the character, based on the two-dimensional image (image information) of the character drawn in the drawing area 203 of the game card. As will be described in detail later, in the game application of the present embodiment this recognition of character information is performed for each partial area provided in advance in the drawing area 203 inside the frame 202 of the game card.
  • For the character information recognized by the recognition unit 106, the determination unit 107 determines the depth-direction information needed to generate the three-dimensional shape of the character, that is, the size (thickness) in the direction orthogonal to the two-dimensional plane, which serves as the reference for extending the two-dimensional information shown in the drawing area 203 of the game card into three dimensions. As will be described in detail later, the determination unit 107 determines the depth-direction information based on the two-dimensional image of the character recognized for each partial area.
  • The model generation unit 108 performs processing for generating the three-dimensional shape of the character based on the character information recognized by the recognition unit 106 and the depth-direction information determined by the determination unit 107.
  • As described above, in the game application of the present embodiment the character information is recognized and the depth-direction information is determined for each partial area; therefore, the three-dimensional shape of the character is generated by constructing a three-dimensional shape for each partial area and combining them.
  • The presentation control unit 109 controls the presentation of various kinds of information to the user on the smartphone 100.
  • The smartphone 100 of the present embodiment is described as having a display unit 120 that displays screens (game screens, OS menu screens, and so on) as the means of presenting various information to the user, but it goes without saying that the means of presentation is not limited to this and may be replaced or supplemented.
  • The presentation control unit 109 includes a drawing device such as a GPU, and performs predetermined rendering processing when generating a game screen to be displayed on the display unit 120. Specifically, during execution of the game application, the presentation control unit 109 executes appropriate arithmetic processing on the three-dimensional shape of the character based on the processing and commands performed by the control unit 101 and on the operation input made via the operation input unit 110, and draws and generates the various game screens related to the game. The generated game screen is output to and displayed on the display unit 120 provided in the smartphone 100, and is thereby presented to the user.
  • The display unit 120 is a display device included in the smartphone 100, such as an LCD.
  • In the present embodiment, the display unit 120 is described as being built into and integrated with the smartphone 100, but the implementation of the present invention is not limited to this; for example, the display device may be one detachably connected to the outside of the smartphone 100, whether by wire or wirelessly.
  • The operation input unit 110 is a user interface included in the smartphone 100, such as a touch panel and buttons. When the operation input unit 110 detects an operation input made by the user, it outputs a control signal corresponding to that operation input to the control unit 101.
  • The communication unit 111 is a communication interface included in the smartphone 100 for communicating with other devices.
  • The communication unit 111 connects to an external device, for example via a network, by a predetermined wired or wireless communication method, and transmits and receives data.
  • The program of the game application may be configured so that it can be received from an external device via the communication unit 111.
  • For simplicity, the game application of the present embodiment is described as being executable and able to provide a game play experience even when the smartphone 100 is offline, but the present invention is not limited to this.
  • For example, the character information generated based on a game card may be managed by an external server in association with a user ID that identifies the user.
  • Similarly, when playing a battle game against another user who plays the game application, it goes without saying that communication with an external device may be performed via the communication unit 111 during game play.
  • Furthermore, the present invention only needs to generate the three-dimensional shape of a character based on the two-dimensional image of the drawn character; providing a game play experience that uses the three-dimensional shape of the character on the smartphone 100 is not essential.
  • As described above, the game of the present embodiment may be configured so that a character is registered based on the information acquired from a game card and a play experience including a predetermined game in which the registered character appears can be provided.
  • The generation of the three-dimensional shape (appearance) of a character registered as appearing in the game, and the determination of the character's parameters used to control the progress of the game, are performed based on the image information attached to the game card from which information was acquired (imaged) at the time of registration.
  • <Game Card> Next, the configuration of a game card that can be used in the game application of the present embodiment will be described with reference to FIG. 2.
  • Usable game cards are sold, for example, by a predetermined dealer or distributor, and the user needs to purchase and prepare a game card separately from the game application. Alternatively, the game card may be provided to the user from an article vending machine in exchange for payment.
  • The game card according to the present embodiment is provided with, or can be provided with, two types of information, and after the information of the entire game card has been acquired by imaging, each type is processed separately. As shown in FIG. 2(a), the two types of information are placed in separate areas (the code 201 and the drawing area 203) on one surface of the game card. In the game application of the present embodiment, these pieces of information are first acquired from the game card by capturing an image of it, and each piece can then be handled individually by separating and processing the image information of the corresponding area in the captured image.
  • The configuration of the game card is described below with a focus on the drawing area 203 inside the frame 202, which is used for generating the three-dimensional shape of the character according to the present invention.
  • Unlike the code 201, the drawing area 203 of the game card of the present embodiment is configured so that the user can draw two-dimensional information in it.
  • As shown in FIG. 2(a), the drawing area 203 is a blank field containing no information when the game card is provided. In this state, even if information is acquired from the game card, the image information is judged not to contain significant information, and even if the card is determined to be proper based on the code 201, it is treated as not yet usable in the game.
  • The drawing area 203 is configured so that the user can freely draw the appearance of the character associated with the game card as a picture (two-dimensional image).
  • By doing so, a three-dimensional shape of a character having the corresponding appearance can be generated and made to appear in the game. In other words, the game card provided for the game application of the present embodiment comes to hold the information that determines the character's appearance through the user drawing that information in the drawing area 203.
  • In the game card of the present embodiment, the inside of the drawing area 203 is divided into a grid by printed lines.
  • More specifically, rectangles of the same shape and size are arranged without gaps in each of the horizontal (width) direction and the vertical (height) direction defined for the game card.
  • The user can therefore easily form the appearance of the character as a so-called dot picture, for example as shown in FIG. 2, by filling in at least some of the rectangles included in the drawing area 203, and thereby have the card hold the information that determines the character's appearance. Since the two-dimensional image can be recognized as a dot picture composed of a limited number of dots, the amount of computation required for pattern identification and shape estimation on the image information of the drawing area 203 can be reduced.
  • In order to generate the three-dimensional shape of the character, it is necessary to specify which part of the character each portion of the two-dimensional image drawn in the drawing area 203 represents. Therefore, in the game card of the present embodiment, a plurality of partial areas, each associated with a different part of the character, are defined in the drawing area 203.
  • The partial areas may be configured to divide up the drawing area 203, for example as shown in FIG. 2(c).
  • In the present embodiment, the character corresponding to the game card is assumed to be a humanoid character having a head, a torso, and four limbs. Accordingly, as shown in FIG. 2(c), the drawing area 203 is described as including the partial area 211 recognized as the character's head, the partial area 212 recognized as the shoulders, the partial area 213 recognized as the torso, the partial area 214 recognized as the right arm, the partial area 215 recognized as the right hand, the partial area 216 recognized as the left arm, the partial area 217 recognized as the left hand, and the partial area 218 recognized as the legs.
  • However, the implementation of the present invention is not limited to this, and the parts making up the character may be added to or removed as appropriate.
  • It is sufficient that partial areas whose two-dimensional images are recognized as at least the head and the torso are included.
  • The head is important as an identifying element that characterizes the character, and the user may want to express it in detail, so the partial area 211 is configured to be wider than the partial area 213; in other words, the width of the partial area 213 is smaller than that of the partial area 211. This is also reflected when the three-dimensional shape of the character is generated: the head of the character can be made larger and more distinctive than the three-dimensional shapes of the other parts, which effectively increases the distinguishability of the character drawn by the user.
  • In the present embodiment, the partial area 211 recognized as the head is indirectly adjacent to the partial area 213 recognized as the torso via the partial area 212 recognized as the shoulders.
  • However, the partial area 212 recognized as the shoulders need not be included; that is, the partial area 211 recognized as the head and the partial area 213 recognized as the torso may be directly adjacent to each other.
  • Since the partial areas 214 and 216 recognized as the arms and the partial area 218 recognized as the legs correspond to the character's limbs, they are assumed to be directly adjacent to the partial area 213 recognized as the torso.
  • These partial areas may be explicitly indicated in the drawing area 203 of the game card, for example by color-coding or by enclosing the corresponding areas with lines of different thicknesses or colors. In addition, the outline (guide) of a recommended character may be indicated using lines 221 of a different thickness or color.
  • Since the drawing area 203 is thus provided with partial areas associated with different parts of the character, when the image information of the game card is acquired, the image information of the drawing area 203 is processed separately for each partial area. More specifically, the recognition unit 106 first recognizes, for each partial area corresponding to a part of the character (head, shoulders, torso, right arm, right hand, left arm, left hand, legs), the significant information about that part, that is, the information of the group of drawn rectangles contained in the partial area, based on the information of the two-dimensional image drawn in that partial area.
  • The character information for a part may be configured to include, for example, at least the distribution of the drawn rectangles in the corresponding partial area and the information of the colors drawn in those rectangles.
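  • A minimal sketch of this per-partial-area recognition in Python follows. The grid size, the partial-area layout, and the blank-detection threshold are hypothetical values chosen for illustration; the sketch assumes the image of the drawing area 203 has already been extracted and rectified to a known pixel size.

      import numpy as np

      GRID_W, GRID_H = 16, 24    # assumed number of rectangles in the drawing area 203
      # Hypothetical partial-area layout: name -> (col0, row0, col1, row1), end-exclusive
      PARTIAL_AREAS = {
          "head":  (4, 0, 12, 8),
          "body":  (5, 10, 11, 18),
          "r_arm": (1, 10, 5, 16),
          # ...the shoulder, left arm, hand and leg areas would be defined in the same way
      }

      def recognize_cells(drawing_area_img):
          """Sample the centre colour of every grid cell of the drawing area image."""
          h, w, _ = drawing_area_img.shape
          cells = np.zeros((GRID_H, GRID_W, 3), dtype=int)
          for r in range(GRID_H):
              for c in range(GRID_W):
                  y = int((r + 0.5) * h / GRID_H)
                  x = int((c + 0.5) * w / GRID_W)
                  cells[r, c] = drawing_area_img[y, x]
          return cells

      def recognize_parts(cells, blank_threshold=240):
          """Split the cell grid into the information recognized for each partial area."""
          parts = {}
          for name, (c0, r0, c1, r1) in PARTIAL_AREAS.items():
              block = cells[r0:r1, c0:c1]
              drawn = block.mean(axis=2) < blank_threshold   # filled (non-blank) rectangles
              parts[name] = {"mask": drawn, "colors": block}
          return parts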
  • In the present embodiment, the same color distribution and outline as the two-dimensional image drawn in the drawing area 203 are reproduced in the three-dimensional shape generated based on that image.
  • That is, when the recognition unit 106 performs recognition, the appearance of the generated character's three-dimensional shape is fixed so that, in the reference state, the color distribution presented as a two-dimensional image when the shape is rendered by parallel projection from the front toward the depth direction matches the drawn image.
  • Here, the front surface is the surface of the generated three-dimensional shape on which the color distribution and pattern shown in the two-dimensional image drawn in the drawing area 203 are reproduced.
  • The drawing area 203 does not define information such as the side surfaces or the back surface of the generated three-dimensional shape; the user draws only the front surface of an arbitrary character. That is, in the present specification, the front surface of the generated character's three-dimensional shape is the surface on which the pattern shown in the drawing area 203 of the game card appears; the depth direction is the direction orthogonal to the front surface, with the direction away from a viewer facing the front surface being positive (deeper); and the back surface is the surface of the three-dimensional shape that appears when the character is observed from the negative side of the depth direction.
  • The determination unit 107 determines the maximum number of blocks in the depth direction to be used when converting the region corresponding to each partial area into a three-dimensional part.
  • In the present embodiment, the image information of the drawing area 203 is configured so that the appearance of the character can be formed as an image (dot picture) whose number of pixels equals the number of rectangles in the drawing area 203. Accordingly, as shown in FIG. 3, the three-dimensional shape of the drawn character according to the present invention is constructed by arranging cubes corresponding to the dots of the dot picture so that they are connected along each of the x-axis (horizontal direction), the y-axis (vertical direction), and the z-axis (depth direction) of a three-dimensional space whose z-axis lies in the depth direction.
  • The determination unit 107 determines, for each part of the character, the maximum number of cubes connected in the z-axis direction, which is not represented in the drawing area 203. In other words, the determination unit 107 determines, for each partial area, the maximum thickness, expressed as a number of cubes, to be given to the three-dimensional part obtained by converting the drawn two-dimensional image into three dimensions.
  • The maximum cube number for each partial area may be determined at a predetermined ratio based on the horizontal width of the two-dimensional image drawn in the partial area, that is, the horizontal extent (length) over which dots carrying significant information are arranged; for example, if there are 12 dots from the leftmost drawn dot to the rightmost drawn dot in the partial area, the maximum cube number may be set to 12.
  • For example, the determination unit 107 may determine the maximum thickness cube number of the character's head to be larger than that of the torso. Further, for parts that exist symmetrically, such as the left arm and the right arm, three-dimensional parts of different thicknesses can give an unnatural impression, so the determination unit 107 may determine the same maximum thickness cube number for such parts.
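  • A minimal sketch of this thickness determination is shown below. The ratio table, the symmetric pairs, and the head/torso rule are illustrative assumptions; the publication only states that the maximum cube number is derived at a predetermined ratio from the horizontal extent of the drawn dots and gives the head/torso and left/right examples.

      # Hypothetical ratios of depth cubes to the horizontal extent (in dots) of the drawn image
      DEPTH_RATIO = {"head": 1.0, "body": 0.7, "shoulder": 0.7, "legs": 0.7,
                     "r_arm": 0.5, "l_arm": 0.5, "r_hand": 0.5, "l_hand": 0.5}
      SYMMETRIC_PAIRS = [("r_arm", "l_arm"), ("r_hand", "l_hand")]

      def horizontal_extent(mask):
          """Number of columns from the leftmost to the rightmost drawn dot, inclusive."""
          cols = [c for c in range(mask.shape[1]) if mask[:, c].any()]
          return (cols[-1] - cols[0] + 1) if cols else 0

      def determine_max_depth(parts):
          """Decide the maximum number of depth-direction cubes for every part."""
          depth = {name: max(1, round(horizontal_extent(p["mask"]) * DEPTH_RATIO.get(name, 0.7)))
                   for name, p in parts.items()}
          for a, b in SYMMETRIC_PAIRS:               # give mirrored limbs the same thickness
              if a in depth and b in depth:
                  depth[a] = depth[b] = max(depth[a], depth[b])
          if "head" in depth and "body" in depth:    # keep the head at least as thick as the torso
              depth["head"] = max(depth["head"], depth["body"])
          return depth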
  • The model generation unit 108 constructs the three-dimensional part for each part based on the maximum thickness cube number thus determined for that part.
  • Basically, the model generation unit 108 forms the three-dimensional part of each part by connecting, in the depth direction, as many cubes as the maximum thickness cube number determined for the part by the determination unit 107.
  • For example, suppose the two-dimensional image of the character drawn in the partial area 214 corresponding to the right arm and the partial area 215 corresponding to the right hand is as shown in FIG. 4A, and that the determination unit 107 has determined the maximum thickness cube number for each of these parts.
  • In this case, as shown in FIG. 4B, the model generation unit 108 first connects cubes in the x-y plane according to the distribution of the dots shown in each partial area to form a cube group.
  • The model generation unit 108 then further connects cubes in the z-axis direction of this cube group so that, as illustrated in FIG. 4C, it has a thickness corresponding to the maximum cube number determined by the determination unit 107.
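  • A minimal sketch of this extrusion step, continuing the hypothetical helpers above (each cube is identified by integer (x, y, z) grid coordinates and carries the colour of its source dot):

      def extrude_part(mask, colors, depth):
          """Extend the drawn dots of one partial area 'depth' cubes into the z direction."""
          voxels = {}                              # (x, y, z) -> colour of the cube
          rows, cols = mask.shape
          for y in range(rows):
              for x in range(cols):
                  if mask[y, x]:
                      for z in range(depth):       # connect cubes in the depth direction
                          voxels[(x, y, z)] = tuple(colors[y, x])
          return voxels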
  • However, particularly for parts such as the head and the torso, whose allocated partial areas are relatively wide, if the whole part is simply given the maximum thickness cube number determined according to that width, the result is a columnar three-dimensional part, and the three-dimensional shape of the character generated from it may take a form the user does not desire.
  • For example, suppose the two-dimensional image of the character drawn in the partial area 211 corresponding to the head and the partial area 213 corresponding to the torso is as shown in FIG. 5A, and that the maximum thickness cube number is determined to be "9" for the head and "6" for the torso. If the model generation unit 108 configures each three-dimensional part to have its maximum cube number throughout, the columnar shape extending in the depth direction becomes conspicuous, as shown in FIG. 5B. That is, with such a shape, the drawn two-dimensional image of the character is merely stretched in the depth direction to produce a three-dimensional representation, which does not give the user a favorable impression.
  • Therefore, the model generation unit 108 controls each three-dimensional part so that the cubes located at the outer edges of the front surface and the back surface, shown by hatching in FIG. 5C, are removed, resulting in the shape shown in FIG. 5D. In other words, the model generation unit 108 applies a so-called "chamfering" process to the joints between the front and side surfaces and between the back and side surfaces so that the three-dimensional part of each part has a suitable shape.
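  • A minimal sketch of this chamfering step follows, under the simplifying assumption that "outer edge" means any drawn dot with at least one empty 4-neighbour in the x-y plane, and that only the frontmost (z = 0) and backmost (z = depth - 1) layers are trimmed; the publication does not specify the exact rule.

      def chamfer(voxels, mask, depth):
          """Remove the cubes at the outer edges of the front (z = 0) and back (z = depth - 1) faces."""
          rows, cols = mask.shape

          def on_edge(x, y):
              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nx, ny = x + dx, y + dy
                  if nx < 0 or ny < 0 or nx >= cols or ny >= rows or not mask[ny, nx]:
                      return True                  # a neighbouring cell is outside the area or blank
              return False

          for y in range(rows):
              for x in range(cols):
                  if mask[y, x] and on_edge(x, y):
                      voxels.pop((x, y, 0), None)           # front-face outer edge
                      voxels.pop((x, y, depth - 1), None)   # back-face outer edge
          return voxels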
  • The model generation unit 108 then generates the three-dimensional shape of the character drawn in the drawing area 203 of the game card by combining the three-dimensional parts of the respective parts formed in this way.
  • Next, the generation processing for generating the three-dimensional shape of a character based on a game card, which is performed during execution of the game application on the smartphone 100 of the present embodiment having the above configuration, will be described with reference to the flowchart of FIG. 6.
  • The processing corresponding to this flowchart can be realized by the control unit 101 reading out the corresponding processing program stored in the recording medium 102, loading it into the memory 103, and executing it.
  • The present generation processing is described as being started when, for example, the processing for registering a character to be used in game play of the game application is started based on a game card and the image information of the drawing area 203 of the game card has been acquired.
  • In step S601, under the control of the control unit 101, the recognition unit 106 separates the acquired image information of the drawing area 203 into the predetermined partial areas and, based on the information of the two-dimensional image drawn in each partial area, recognizes the character information drawn for the part corresponding to that partial area. When the recognition unit 106 has completed the recognition of character information for all the partial areas, the processing proceeds to S602.
  • In step S602, the control unit 101 determines whether there is any partial area for which no character information has been recognized.
  • In the game application of the present embodiment, it is required that every partial area provided in the drawing area 203 contain significant information, that is, that a two-dimensional image of the part corresponding to each partial area be drawn in it.
  • Therefore, when the control unit 101 determines that there is a partial area for which no character information has been recognized, it causes the presentation control unit 109 to notify the user that the drawing in the drawing area 203 is insufficient, and this generation processing ends.
  • When the control unit 101 determines that there is no partial area for which character information has not been recognized, that is, that character information has been recognized in all the partial areas, it moves the processing to S603.
  • In step S603, the determination unit 107 determines the maximum thickness cube number for each partial area based on the character information recognized for that partial area. As described above, the determination of the maximum thickness cube number is based on the distribution of significant information (dots) in the partial area and on the predetermined determination rule for the maximum thickness cube number used in generating the three-dimensional part of the part corresponding to that partial area.
  • In step S604, under the control of the control unit 101, the model generation unit 108 constructs the three-dimensional part corresponding to each part of the character based on the character information recognized for each partial area and the determined maximum thickness cube number, and generates the three-dimensional shape of the character corresponding to the two-dimensional image drawn in the drawing area 203 by combining these parts.
  • The generated three-dimensional shape may be passed to the character registration processing and stored as character information together with other information.
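  • Tying steps S601 to S604 together, a hedged end-to-end sketch reusing the hypothetical helpers from the earlier sketches; the per-part placement offsets are assumptions introduced only so that the parts can be combined in a common coordinate space.

      # Hypothetical x/y offsets that place each part's cubes in a common model space
      PART_OFFSETS = {"head": (4, 0), "body": (5, 10), "r_arm": (1, 10)}   # etc.

      def generate_character_model(drawing_area_img):
          cells = recognize_cells(drawing_area_img)              # S601: recognize each partial area
          parts = recognize_parts(cells)
          if any(not p["mask"].any() for p in parts.values()):   # S602: any part left blank?
              raise ValueError("drawing area 203 is insufficiently drawn")
          depths = determine_max_depth(parts)                    # S603: depth cubes per part
          model = {}
          for name, p in parts.items():                          # S604: build and combine the parts
              voxels = extrude_part(p["mask"], p["colors"], depths[name])
              voxels = chamfer(voxels, p["mask"], depths[name])
              ox, oy = PART_OFFSETS.get(name, (0, 0))
              model.update({(x + ox, y + oy, z): col for (x, y, z), col in voxels.items()})
          return model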
  • As described above, the information processing apparatus of the present embodiment can easily generate a three-dimensional shape having a suitable appearance. More specifically, since the two-dimensional image of the character drawn by the user is recognized separately for each partial area, and the thickness determination and the conversion into three-dimensional parts are performed differently depending on the part associated with each partial area, a three-dimensional shape that better reflects the user's intention can be generated.
  • The present invention is not limited to the above embodiment, and various modifications and changes can be made without departing from the spirit and scope of the invention.
  • The information processing apparatus according to the present invention can also be realized by a program that causes one or more computers to function as that information processing apparatus.
  • The program can be provided or distributed by being recorded on a computer-readable recording medium or via an electric telecommunication line.
  • 100: smartphone, 101: control unit, 102: recording medium, 103: memory, 104: image pickup unit, 105: analysis unit, 106: recognition unit, 107: determination unit, 108: model generation unit, 109: presentation control unit, 110: operation input unit, 111: communication unit, 120: display unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To generate a three-dimensional shape having a suitable external appearance in a simple manner. [Solution] Provided is a program for generating a three-dimensional shape of a character, the shape being formed by a set of elements serving as constituent units. The program causes a computer to execute: a process of recognizing, for a drawing area in which a two-dimensional image of a character is drawn, information of the drawn character for each partial area provided in the drawing area; a process of determining, for each partial area, the number of elements in the depth direction to be used in forming the three-dimensional shape of the character corresponding to that partial area, based on the recognized character information; and a process of generating the three-dimensional shape of the character by combining the sets of elements formed for each of the partial areas based on the determined number of elements in the depth direction.

Description

Program, information processing device, and game system
The present invention relates to a program, an information processing device, and a game system, and more particularly to a technique for generating a three-dimensional shape based on a two-dimensional image.
In the so-called VR space, there is a technique that makes it possible to generate a three-dimensional shape as if sculpting it (Patent Document 1).
Patent Document 1: JP 2017-182241 A
The three-dimensional shape generation method of Patent Document 1 basically requires large-scale equipment to introduce, and the user must repeat the model-forming operations while changing the viewing position to confirm whether the desired shape has been obtained, which can give the impression that the method is cumbersome.
In contrast, if the user draws a character on a paper medium or the like, the two-dimensional image drawn on the medium is read in, and a three-dimensional shape is generated based on that image, the burden placed on the user can be reduced. However, even if a three-dimensional shape having a uniform size in the depth direction (the direction orthogonal to the drawing surface of the paper medium) is formed for the entire character image contained in the two-dimensional image, the result may not have the appearance the user desired.
An object of the present invention is to provide a program, an information processing device, and a game system that easily generate a three-dimensional shape having a suitable appearance.
A program of the present invention is a program for generating a three-dimensional shape of a character, the shape being composed of a set of elements that serve as its constituent units. The program causes a computer to execute: a process of recognizing, for a drawing area in which a two-dimensional image of the character is drawn, information of the drawn character for each partial area provided in the drawing area; a process of determining, for each partial area, the number of depth-direction elements used to construct the three-dimensional shape of the character corresponding to that partial area, based on the recognized character information; and a process of generating the three-dimensional shape of the character by combining the sets of elements constructed for each of the partial areas based on the determined number of depth-direction elements.
With such a configuration, according to the present invention, it becomes possible to easily generate a three-dimensional shape having a suitable appearance.
FIG. 1 is a block diagram showing the functional configuration of the smartphone 100 according to an embodiment and a modification of the present invention. FIG. 2 is a diagram showing a configuration example of the game card according to the embodiment and modification. FIG. 3 is a diagram illustrating a three-dimensional shape formed based on the information acquired from the game card according to the embodiment and modification. FIG. 4 is a diagram for explaining the conversion of one part of the character into a three-dimensional part according to the embodiment and modification. FIG. 5 is a diagram for explaining the conversion of another part of the character into a three-dimensional part according to the embodiment and modification. FIG. 6 is a flowchart illustrating the generation processing executed in the smartphone 100 according to the embodiment and modification.
[Embodiment] Hereinafter, an embodiment will be described in detail with reference to the accompanying drawings. The following embodiment does not limit the invention according to the claims, and not all combinations of the features described in the embodiment are necessarily essential to the invention. Two or more of the features described in the embodiment may be combined arbitrarily. The same or similar configurations are denoted by the same reference numerals, and duplicate description is omitted.
The embodiment described below is an example in which the present invention is applied to a smartphone, as an example of an information processing device, that can acquire information from a two-dimensional image drawn on a game card and register a corresponding character. However, the present invention can be applied to any device capable of generating a corresponding three-dimensional shape based on information recognized from a drawing area in which a two-dimensional image of a character is drawn.
In the present embodiment, information is described as being acquired from a game card, which is a physical article (game article), in a game application executed on the smartphone 100. However, the article is not limited to a game card as long as it is an article on which a two-dimensional image of a character can be drawn. For example, the game article may be any article on which the user can draw the information necessary for generating the three-dimensional shape of the character. Here, "drawing" on a game article is not limited to the user adding a two-dimensional image of a character to a drawing area provided on the article with a predetermined writing instrument; any method that adds colored information to the drawing area may be used, such as stamping, printing with a predetermined printing device, or attaching a sticker.
<<Configuration of the Smartphone>> First, the functional configuration of the smartphone 100 according to the embodiment of the present invention will be described with reference to the block diagram of FIG. 1.
The control unit 101 is, for example, a CPU, and controls the operation of each block included in the smartphone 100. Specifically, the control unit 101 controls the operation of each block by reading out the operation program of each block or the program related to the game application recorded in the recording medium 102, loading it into the memory 103, and executing it. In the present embodiment, the game application is assumed to register a character based on the information acquired from a game card and to be able to provide a play experience of a game in which that character appears.
The recording medium 102 is a recording device such as a non-volatile memory or an HDD that can hold data permanently. The recording medium 102 stores, in addition to the operation program of each block included in the smartphone 100, information such as parameters necessary for the operation of each block and various graphics data used in the game executed by the smartphone 100. The memory 103 is a storage device used for temporary data storage, such as a volatile memory. The memory 103 is used not only as a loading area for the operation programs of the blocks, but also as a storage area for temporarily storing data output during the operation of each block.
The image pickup unit 104 is an image pickup device unit having an image sensor such as a CCD or a CMOS sensor, and in the present embodiment is used to acquire the information attached to the game card. During execution of the game application, the image pickup unit 104 captures an image of a predetermined surface of the game card and outputs a captured image containing the information attached to the card. The image capturing is performed intermittently until the analysis unit 105, described later, determines that the entire article presumed to be a game card falls within the imaging range and that an appropriate captured image has been obtained.
In the game application of the present embodiment, information acquisition from the game card is described as being performed by imaging, but the two-dimensional image in which the character is drawn does not necessarily have to be acquired from a physical article such as a game card. It goes without saying that, for example, digital data (a two-dimensional image) in which the character is drawn may be acquired using a pen-type pointing device or the like.
The analysis unit 105 applies predetermined image processing to the captured image output from the image pickup unit 104 and analyzes whether the captured card is a proper game card.
Although details will be described later, a game card usable in the game application of the present embodiment has the configuration shown in FIG. 2(a). The analysis unit 105 first detects the code 201 in the captured image and, based on the position and size of the code in the captured image, determines whether the entire game card is included in the imaging range. If the entire game card is included in the imaging range, it is judged that an appropriate captured image for information acquisition from the game card has been obtained, and control is performed to complete the imaging by the image pickup unit 104.
The analysis unit 105 also extracts, from the image information of the game card contained in the captured image, the image information of the code 201 and the image information of the drawing area 203 inside the frame 202 described later, and determines, based on the former, whether the game card is one that can be used in the game application of the present embodiment, that is, whether it is a proper game card. The latter image information is output to the recognition unit 106.
In the case of a proper game card, the code 201 contains identification information unique to the game application that uniquely identifies the game card, so that at least its usability in the game application can be determined. In the present embodiment, the code 201 is a two-dimensional code, and the analysis unit 105 can decode the encoded information and acquire the identification information by applying a predetermined conversion process to the image information of the code 201. The analysis unit 105 determines whether the captured game card is a proper game card depending on whether the code 201 contains identification information related to the game application.
The recognition unit 106 performs processing for recognizing the information necessary for generating the three-dimensional shape of the character, based on the two-dimensional image (image information) of the character drawn in the drawing area 203 of the game card. Although details will be described later, in the game application of the present embodiment this recognition of character information is performed for each partial area provided in advance in the drawing area 203 inside the frame 202 of the game card, as described above.
For the character information recognized by the recognition unit 106, the determination unit 107 determines the depth-direction information needed to generate the three-dimensional shape of the character, that is, the size (thickness) in the direction orthogonal to the two-dimensional plane, which serves as the reference for extending the two-dimensional information shown in the drawing area 203 of the game card into three dimensions. Although details will be described later, the determination unit 107 determines the depth-direction information based on the two-dimensional image of the character recognized for each partial area.
The model generation unit 108 performs processing for generating the three-dimensional shape of the character based on the character information recognized by the recognition unit 106 and the depth-direction information determined by the determination unit 107. As described above, in the game application of the present embodiment the character information is recognized and the depth-direction information is determined for each partial area; therefore, the three-dimensional shape of the character is generated by constructing a three-dimensional shape for each partial area and combining them.
The presentation control unit 109 controls the presentation of various kinds of information to the user on the smartphone 100. The smartphone 100 of the present embodiment is described as having a display unit 120 that displays screens (game screens, OS menu screens, and so on) as the means of presenting various information to the user, but it goes without saying that the means of presentation is not limited to this and may be replaced or supplemented.
The presentation control unit 109 includes a drawing device such as a GPU, and performs predetermined rendering processing when generating a game screen to be displayed on the display unit 120. Specifically, during execution of the game application, the presentation control unit 109 executes appropriate arithmetic processing on the three-dimensional shape of the character based on the processing and commands performed by the control unit 101 and on the operation input made via the operation input unit 110, and draws and generates the various game screens related to the game. The generated game screen is output to and displayed on the display unit 120 provided in the smartphone 100, and is thereby presented to the user. The display unit 120 is a display device included in the smartphone 100, such as an LCD. In the present embodiment, the display unit 120 is described as being built into and integrated with the smartphone 100, but the implementation of the present invention is not limited to this; for example, the display device may be one detachably connected to the outside of the smartphone 100, whether by wire or wirelessly.
The operation input unit 110 is a user interface of the smartphone 100, such as a touch panel or buttons. When the operation input unit 110 detects an operation input made by the user, it outputs a control signal corresponding to that operation input to the control unit 101.
The communication unit 111 is a communication interface of the smartphone 100 for communicating with other devices. The communication unit 111 connects to an external device, for example via a network, using a predetermined communication method, whether wired or wireless, and transmits and receives data. The game application program may be configured to be receivable from an external device via the communication unit 111.
For simplicity, the present embodiment is described on the assumption that the game application can be executed, and a game play experience provided, even while the smartphone 100 is offline, but the present invention is not limited to this. For example, the character information generated based on a game card may be managed on an external server in association with a user ID identifying the user, and it goes without saying that, when playing a versus game against another user who likewise plays the game application, communication with an external device (a game server or another smartphone) may be performed via the communication unit 111 during game play. Further, the present invention only needs to generate a three-dimensional shape of a character based on a two-dimensional image of the drawn character; providing a game play experience using the character's three-dimensional shape on the smartphone 100 is not essential.
<<Game Provided by the Game Application>> As described above, in the game whose play experience is provided by executing the game application on the smartphone 100 of the present embodiment, a character is registered based on information acquired from a game card, and the game may be configured to provide a play experience that includes predetermined play featuring the registered character. Generation of the three-dimensional shape (appearance) of a character registered to appear in the game, and determination of the character's parameters used to control the progress of the game, are performed based on the image information carried by the game card from which information was acquired (by imaging) at the time of registration.
<Game Card> The configuration of a game card usable with the game application of the present embodiment will now be described with reference to FIG. 2. Usable game cards are sold, for example, at designated retailers or by distributors, and the user needs to purchase and prepare them separately from the game application. Alternatively, the game card may be provided to the user by a vending machine in exchange for payment.
As described above for the analysis unit 105 and the recognition unit 106, the game card of the present embodiment carries, or is configured so that it can carry, two types of information; after information on the entire game card has been acquired by imaging, the two are separated and processed individually. As shown in FIG. 2(a), the two types of information are placed in separate areas on one face of the game card (the code 201 and the drawing area 203). In the game application of the present embodiment, both pieces of information are acquired from the game card at once by imaging the card, and the image information of the corresponding areas in the captured image is separated and processed, so that each piece of information can be handled individually.
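As a concrete illustration of this separation step, the following is a minimal sketch in Python, assuming the captured card image has already been detected and rectified to a front-on view; the fractional region boxes CODE_BOX and DRAWING_BOX are hypothetical placeholders, not values taken from this publication.

```python
from dataclasses import dataclass

import numpy as np

# Hypothetical layout constants: fractional (left, top, right, bottom) boxes
# for the code 201 and the drawing area 203 on a rectified card image.
CODE_BOX = (0.05, 0.05, 0.95, 0.25)
DRAWING_BOX = (0.05, 0.30, 0.95, 0.95)


@dataclass
class CardRegions:
    code_image: np.ndarray     # pixels covering the code 201
    drawing_image: np.ndarray  # pixels covering the drawing area 203


def crop_fraction(image: np.ndarray, box) -> np.ndarray:
    """Crop a fractional (l, t, r, b) box out of an H x W x 3 image."""
    h, w = image.shape[:2]
    l, t, r, b = box
    return image[int(t * h):int(b * h), int(l * w):int(r * w)]


def split_card_image(card_image: np.ndarray) -> CardRegions:
    """Separate the single captured card image into its two information areas."""
    return CardRegions(
        code_image=crop_fraction(card_image, CODE_BOX),
        drawing_image=crop_fraction(card_image, DRAWING_BOX),
    )
```

Once split this way, the code image and the drawing image can be handed to their respective decoding and recognition paths independently.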
The configuration of the game card is described below with a focus on the drawing area 203 within the frame 202, which is used for generating the three-dimensional shape of the character according to the present invention.
In the game card of the present embodiment, unlike the code 201, the drawing area 203 is configured so that the user can draw two-dimensional information into it; at the time the game card is sold, it is a blank containing no information, as shown in FIG. 2(a). In this state, even if information is acquired from the game card, the image information is judged not to carry any significant information, and even if the card is determined to be a valid game card based on the code 201, it is treated as a card that cannot yet be used in the game.
In the present embodiment, as shown in FIG. 2(b), the drawing area 203 is configured so that the user can freely draw the appearance of the character associated with the game card as a picture (two-dimensional image). By having information acquired from the game card after drawing, a three-dimensional shape of a character having the corresponding appearance can be generated and made to appear in the game. In other words, a game card provided for the game application of the present embodiment enters a state in which it holds the information for determining the character's appearance once the user has drawn information into the drawing area 203.
More specifically, as shown in FIGS. 2(a) and 2(b), the interior of the drawing area 203 is divided by printing into a grid. In other words, squares of identical shape and size are arranged without gaps in both the horizontal direction and the vertical direction defined for the game card, and these squares are distributed over the entire drawing area 203. With the game card of the present embodiment, the user can easily form the character's appearance as a so-called dot picture by filling in at least some of the squares in the drawing area 203 with desired colors, as shown for example in FIG. 2(b), so that the card holds the information for determining the character's appearance. In this way, the two-dimensional image of the character drawn in the drawing area 203 can be recognized as a dot picture composed of a limited number of dots, which reduces the amount of computation required for pattern identification and shape estimation of the image information of the drawing area 203.
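To make the dot-picture reading concrete, here is a minimal sketch of quantizing a rectified image of the drawing area 203 into a grid of cell colors. The grid dimensions, the near-white "blank paper" test, and the use of a plain per-cell mean color are assumptions for illustration only, not details from this publication.

```python
import numpy as np

GRID_COLS, GRID_ROWS = 24, 32        # assumed number of squares in the drawing area
BLANK_SPREAD_THRESHOLD = 30.0        # assumed: bright, low-spread cells count as undrawn


def read_dot_grid(drawing_image: np.ndarray):
    """Return a GRID_ROWS x GRID_COLS grid of RGB colors, with None for blank cells."""
    h, w = drawing_image.shape[:2]
    grid = [[None] * GRID_COLS for _ in range(GRID_ROWS)]
    for row in range(GRID_ROWS):
        for col in range(GRID_COLS):
            cell = drawing_image[row * h // GRID_ROWS:(row + 1) * h // GRID_ROWS,
                                 col * w // GRID_COLS:(col + 1) * w // GRID_COLS]
            mean = cell.reshape(-1, 3).mean(axis=0)
            # Treat bright cells whose channels are all close together as blank paper.
            if mean.min() > 200 and (mean.max() - mean.min()) < BLANK_SPREAD_THRESHOLD:
                continue
            grid[row][col] = tuple(int(c) for c in mean)
    return grid
```

Because the result is a fixed, small grid, the later pattern-identification and shape-construction steps operate on at most GRID_COLS × GRID_ROWS cells regardless of the camera resolution.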
When generating the character's three-dimensional shape, in order to form an appearance that suitably reflects the user's intention, it is necessary to identify which part of the character each portion of the two-dimensional image drawn in the drawing area 203 represents. For this reason, in the game card of the present embodiment, a plurality of partial areas, each associated with a different character part, are defined within the drawing area 203. The partial areas may be configured to partition the drawing area 203, as shown for example in FIG. 2(c).
In the present embodiment, the character corresponding to the game card is assumed to be a humanoid character composed of a head, a torso, and limbs. Accordingly, as shown in FIG. 2(c), the description assumes that the drawing area 203 includes a partial area 211 recognized as the character's head, a partial area 212 recognized as the shoulders, a partial area 213 recognized as the torso, a partial area 214 recognized as the right arm, a partial area 215 recognized as the right hand, a partial area 216 recognized as the left arm, a partial area 217 recognized as the left hand, and a partial area 218 recognized as the legs.
However, the implementation of the present invention is not limited to this, and the parts that make up the character may be added or removed as appropriate. In particular, from the viewpoint of reflecting the user's intention in the elements that characterize the character, it suffices that the drawing area include at least partial areas whose two-dimensional images are recognized as the head and the torso.
Since the head is important as an identifying element that characterizes the character, and is a part for which the user may want detailed expression, the partial area 211 is configured as a larger rectangle than the partial area 213; in other words, the partial area 213 is narrower in width than the partial area 211. This is also reflected when the character's three-dimensional shape is generated: the character's head can be made larger and more distinctive than the three-dimensional shapes of the other parts, which effectively enhances the distinctiveness of the character the user has drawn.
In the example of FIG. 2(c), the partial area 211 recognized as the head is described as being indirectly adjacent to the partial area 213 recognized as the torso via the partial area 212 recognized as the shoulders, but the shoulder partial area 212 need not be included. That is, the partial area 211 recognized as the head and the partial area 213 recognized as the torso may be directly adjacent to each other. Further, since the partial areas 214 and 216 recognized as arms and the partial area 218 recognized as legs constitute the character's limbs, they are assumed to be directly adjacent to the partial area 213 recognized as the torso.
These partial areas may be indicated explicitly in the drawing area 203 of the game card, for example by color-coding or by enclosing each area with lines of different thickness or color. Alternatively, as shown in FIG. 2(d), they may be indicated explicitly by showing a recommended character outline (guide) using lines 221 of different thickness or color. By defining and showing in advance the partial areas associated with the parts within the drawing area 203 in this way, the user can more easily anticipate how the character image drawn in the drawing area 203 will be recognized. That is, the user knows in advance which part the two-dimensional image drawn in each partial area will be recognized as when the three-dimensional shape is generated from the drawn character's two-dimensional image, and as a result it becomes possible to register a character having a three-dimensional shape that reflects the appearance the user likely intends.
<Generation of the Three-Dimensional Shape> Next, an overview of the method of generating the three-dimensional shape corresponding to the drawn character, performed based on the image information obtained from the drawing area 203 of the game card configured as described above, is given with reference to the figures.
As described above, the drawing area 203 is provided with partial areas each associated with a different character part, so when image information of the game card is acquired, the image information of the drawing area 203 is separated and processed for each partial area. More specifically, the recognition unit 106 first recognizes, based on the two-dimensional information drawn in the partial area corresponding to each part of the character (head, shoulders, torso, right arm, right hand, left arm, left hand, legs), the significant information for that part, that is, information on the group of filled-in squares contained in that partial area. The character information for a part may be configured to include at least the distribution of filled-in squares in the corresponding partial area and information on the colors drawn into those squares.
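A minimal sketch of this per-part recognition step follows: given the dot grid for the whole drawing area 203 and a table of partial-area bounds, it collects, for each body part, the positions and colors of the filled squares. The region bounds below are illustrative placeholders, not the actual layout of areas 211 to 218 on the card.

```python
# Hypothetical partial-area bounds within the dot grid, as (col0, row0, col1, row1),
# end-exclusive; the real layout of areas 211-218 is fixed by the card design.
PARTIAL_AREAS = {
    "head":       (6, 0, 18, 10),
    "shoulders":  (4, 10, 20, 12),
    "torso":      (7, 12, 17, 22),
    "right_arm":  (0, 12, 7, 20),
    "right_hand": (0, 20, 7, 24),
    "left_arm":   (17, 12, 24, 20),
    "left_hand":  (17, 20, 24, 24),
    "legs":       (7, 22, 17, 32),
}


def recognize_parts(grid):
    """For each part, return {(x, y): color} of filled squares in part-local coordinates."""
    parts = {}
    for part, (c0, r0, c1, r1) in PARTIAL_AREAS.items():
        dots = {}
        for row in range(r0, r1):
            for col in range(c0, c1):
                color = grid[row][col]
                if color is not None:
                    dots[(col - c0, row - r0)] = color
        parts[part] = dots
    return parts
```

The output per part is exactly the two pieces of information named above: where the filled squares sit, and what color each one carries.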
In the game application of the present embodiment, the three-dimensional shape generated from the two-dimensional image drawn in the drawing area 203 is configured to express the same color distribution and outline as that two-dimensional image. Therefore, once recognition by the recognition unit 106 has been performed, the color distribution and appearance that are presented as a two-dimensional image when the generated three-dimensional shape of the character, in its reference state, is rendered from the front along the depth direction by parallel projection are fixed. Here, the front is the face of the generated three-dimensional shape on which the color distribution and pattern shown in the two-dimensional image drawn in the drawing area 203 are reproduced. In other words, the drawing area 203 defines no information about the sides, back, and so on of the generated three-dimensional shape; the user can draw only the front of an arbitrary character. That is, in this specification, the front of the generated character's three-dimensional shape is the face on which the pattern shown in the drawing area 203 of the game card appears; the depth direction is the direction orthogonal to the front when facing it squarely, taken as positive basically in the receding direction (depth); and the back of the three-dimensional shape is the face that appears when the character is observed along the negative depth direction.
When the recognition unit 106 has recognized the character information drawn in each partial area, the determination unit 107 determines the maximum number of blocks in the depth direction to be used when constructing the three-dimensional part of the body part corresponding to each partial area.
As described above, in the game application of the present embodiment, a group of squares of a predetermined size is laid out in a grid in the drawing area 203 of the corresponding game card, and the user can easily form the character's appearance as a dot picture. That is, the image information of the drawing area 203 is configured so that the character's appearance can be formed as an image (dot picture) whose pixel count equals the number of squares in the drawing area 203. In order to render the character without losing the feel of such a dot-picture representation, the game application of the present embodiment forms the three-dimensional shape of the drawn character, as shown in FIG. 3, by connecting cube models (cubes) of identical shape and size, one corresponding to each dot, as the elements serving as the structural units according to the present invention. More specifically, the three-dimensional shape of the character generated from the two-dimensional image drawn in the drawing area 203 is constructed so as to correspond to the dot picture by connecting and arranging cubes along each of the x-axis (horizontal), y-axis (vertical), and z-axis in a three-dimensional space whose z-axis is the depth direction.
For this reason, the determination unit 107 determines, for each character part, the maximum number of cubes to be connected in the z-axis direction, which is not represented in the drawing area 203. That is, for each partial area, the determination unit 107 determines, as a number of cubes, the maximum thickness to be given to the three-dimensional part obtained by turning the drawn two-dimensional image into three dimensions.
The maximum thickness in cubes for each partial area may be determined at a predetermined ratio based on the horizontal width of the two-dimensional image drawn in that partial area, that is, the horizontal extent (length) over which dots carrying significant information are placed; for example, if the span from the leftmost dot to the rightmost dot placed in the partial area is 12 dots, the maximum thickness may be set to 12 cubes.
In this case, depending on the distribution of dots carrying significant information in the partial area, the result may not be a well-balanced three-dimensional part; therefore, a lower limit and an upper limit, for example, may be set on the maximum thickness in cubes. Further, as described above, in order to enhance the character's distinctiveness, the determination unit 107 may determine the maximum thickness in cubes for the character's head to be larger than that for the torso. Also, for parts that exist symmetrically, such as the left arm and the right arm, differing thicknesses of the three-dimensional parts could give an unnatural impression, so the determination unit 107 may determine the same maximum thickness in cubes for these parts.
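The following sketch pulls these rules together under stated assumptions: the 1:1 width-to-thickness ratio follows the 12-dot example above, while the specific lower and upper bounds and the exact head-versus-torso adjustment are placeholders chosen for illustration.

```python
DEPTH_RATIO = 1.0             # 12 dots wide -> 12 cubes deep, per the example above
MIN_DEPTH, MAX_DEPTH = 2, 12  # assumed lower/upper bounds on the thickness


def horizontal_extent(dots) -> int:
    """Width in dots between the leftmost and rightmost filled squares of a part."""
    if not dots:
        return 0
    cols = [col for col, _ in dots]
    return max(cols) - min(cols) + 1


def determine_max_thickness(parts):
    """Decide the maximum depth-direction cube count for each part."""
    thickness = {}
    for part, dots in parts.items():
        t = round(horizontal_extent(dots) * DEPTH_RATIO)
        thickness[part] = max(MIN_DEPTH, min(MAX_DEPTH, t))
    # Symmetric limbs share one thickness so the pair does not look lopsided.
    for left, right in (("left_arm", "right_arm"), ("left_hand", "right_hand")):
        shared = max(thickness[left], thickness[right])
        thickness[left] = thickness[right] = shared
    # Keep the head thicker than the torso (within the cap) to emphasize it.
    thickness["head"] = min(MAX_DEPTH, max(thickness["head"], thickness["torso"] + 1))
    return thickness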
The model generation unit 108 then constructs the three-dimensional part for each body part based on the maximum thickness in cubes determined for that part in this way. Basically, for each part, the model generation unit 108 constructs the part's three-dimensional shape by connecting, in the depth direction, a number of cubes equal to the maximum thickness determined for that part by the determination unit 107.
For example, consider a case in which the two-dimensional image of the character drawn in the partial area 214 corresponding to the right arm and the partial area 215 corresponding to the right hand is as shown in FIG. 4(a), and the determination unit 107 determines a maximum thickness of 2 cubes for the right arm and 3 cubes for the right hand. In this case, the model generation unit 108 first connects cubes in the xy plane according to the distribution of dots shown in each partial area, as shown in FIG. 4(b), forming a group of cubes one cube thick. The model generation unit 108 then connects further cubes along the z-axis of this cube group to construct a three-dimensional part having a thickness equal to the maximum number of cubes determined by the determination unit 107, as shown in FIG. 4(c). The arrangement of the parts in the depth direction when they are joined may follow rules defined in advance.
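The sketch below turns one part's recognized dots into a block of cubes in just this way: cubes are laid out one-deep in the xy plane according to the dot distribution, then replicated along z up to the part's maximum thickness. Representing the result as a dict keyed by integer (x, y, z) is an implementation choice for illustration, not something specified in this publication.

```python
def build_part_voxels(dots, max_thickness):
    """dots: {(x, y): color} in part-local grid coordinates; returns {(x, y, z): color}."""
    voxels = {}
    for (x, y), color in dots.items():
        for z in range(max_thickness):   # extrude the one-cube-thick layer along +z
            voxels[(x, y, z)] = color
    return voxels
```

For the example above, the right-arm dots would be extruded to a depth of 2 cubes and the right-hand dots to a depth of 3.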
Incidentally, as shown in FIG. 2(c), for parts such as the head and torso whose assigned partial areas are relatively wide, constructing the part with a maximum thickness in cubes determined from its width would produce a columnar three-dimensional part, and the resulting three-dimensional shape of the character could turn out in a way the user does not want.
For example, consider a case in which the two-dimensional image of the character drawn in the partial area 211 corresponding to the head and the partial area 213 corresponding to the torso is as shown in FIG. 5(a), and the determination unit 107 determines a maximum thickness of 9 cubes for the head and 6 cubes for the torso. If the model generation unit 108 constructs the three-dimensional parts with these maximum thicknesses, the result is a shape whose columnar extension in the depth direction is conspicuous, as shown in FIG. 5(b). That is, if the three-dimensional parts take such a shape, the character's three-dimensional representation amounts to nothing more than the drawn two-dimensional image stretched in the depth direction, which does not give the user a favorable impression.
Therefore, in the game application of the present embodiment, so that such an appearance does not show up in the three-dimensional shape, at least for the three-dimensional parts of the head and torso, the model generation unit 108 controls the construction of each part so that, from the columnar shape formed by applying the maximum thickness in cubes, the cubes located at the outer edges of the front and back faces, shown hatched in FIG. 5(c), are removed, yielding a shape like that shown in FIG. 5(d). In other words, the model generation unit 108 applies what amounts to a "chamfering" process to the joints between the front and side faces and between the back and side faces, so that the three-dimensional part of each body part has a suitable shape.
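A minimal sketch of this chamfering step: cubes that lie both on the silhouette edge of the part in the xy plane and on the frontmost or backmost layer are removed. Defining the outer edge as any column with fewer than four in-plane neighbours is an assumption made for illustration.

```python
def chamfer_part(voxels, max_thickness):
    """Remove front- and back-layer cubes that sit on the xy outline of the part."""
    columns = {(x, y) for (x, y, _) in voxels}

    def on_outline(x, y):
        neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return any(n not in columns for n in neighbours)

    chamfered = dict(voxels)
    for (x, y, z) in list(voxels):
        if z in (0, max_thickness - 1) and on_outline(x, y):
            del chamfered[(x, y, z)]
    return chamfered
```

Applied to the FIG. 5 example, the 9-deep head and 6-deep torso blocks lose their front and back rim cubes, softening the columnar silhouette while keeping the drawn front pattern intact in the layers behind.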
The model generation unit 108 then generates the three-dimensional shape of the character drawn in the drawing area 203 of the game card by combining the three-dimensional parts of the body parts formed in this way.
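To combine the parts into one character, and to check the property described earlier that the front of the shape reproduces the drawn picture, a sketch along the following lines could be used; the per-part offsets that place each part back into whole-character coordinates are hypothetical.

```python
# Hypothetical offsets (x, y, z) placing each part's local voxels into character space.
PART_OFFSETS = {
    "head": (6, 0, 0), "shoulders": (4, 10, 1), "torso": (7, 12, 1),
    "right_arm": (0, 12, 2), "right_hand": (0, 20, 2),
    "left_arm": (17, 12, 2), "left_hand": (17, 20, 2), "legs": (7, 22, 1),
}


def assemble_character(part_voxels):
    """Merge per-part voxel dicts into one {(x, y, z): color} model."""
    model = {}
    for part, voxels in part_voxels.items():
        ox, oy, oz = PART_OFFSETS[part]
        for (x, y, z), color in voxels.items():
            model[(x + ox, y + oy, z + oz)] = color
    return model


def front_projection(model):
    """Parallel-project along +z: for each column, keep the color of the nearest cube."""
    front = {}
    for (x, y, z), color in sorted(model.items(), key=lambda item: item[0][2], reverse=True):
        front[(x, y)] = color  # nearer cubes (smaller z) are written last and win
    return front
```

Since every cube in a column carries its source dot's color in this sketch, front_projection of the assembled model returns the same color map that was read off the drawing area.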
<<Generation Processing>> The generation processing that occurs during execution of the game application on the smartphone 100 of the present embodiment configured as described above, in which the three-dimensional shape of a character is generated based on a game card, is now described concretely using the flowchart of FIG. 6. The processing corresponding to this flowchart can be realized by the control unit 101 reading the corresponding processing program stored, for example, in the recording medium 102, loading it into the memory 103, and executing it. This generation processing is described as starting when, for example, processing to register a character for use in game play of the game application has begun based on a game card and the image information of the drawing area 203 of that game card has been acquired.
In S601, under the control of the control unit 101, the recognition unit 106 separates the acquired image information of the drawing area 203 into the predetermined partial areas and, based on the two-dimensional image drawn in each partial area, recognizes the character information drawn for the body part corresponding to that partial area. When recognition of the character information has been completed for all partial areas, the recognition unit 106 advances the processing to S602.
In S602, the control unit 101 determines whether there is any partial area for which no character information was recognized. In the game application of the present embodiment, generation of the character's three-dimensional shape is conditional on every partial area provided in the drawing area 203 containing significant information, that is, on a two-dimensional image of the corresponding body part having been drawn in each partial area. Accordingly, if the control unit 101 determines that there is a partial area for which no character information was recognized, it causes the presentation control unit 109 to present a notification, for example, that the drawing in the drawing area 203 is insufficient, and ends this generation processing. If the control unit 101 determines that no such partial area exists, that is, that character information was recognized for all partial areas, it advances the processing to S603.
In S603, under the control of the control unit 101, the determination unit 107 determines the maximum thickness in cubes for each partial area based on the character information recognized for that partial area. As described above, the maximum thickness in cubes is determined based on the distribution of significant information (dots) in the partial area and on the predetermined rules for determining the maximum thickness in cubes when generating the three-dimensional part of the body part corresponding to that partial area.
In S604, under the control of the control unit 101, the model generation unit 108 constructs the three-dimensional part corresponding to each body part of the character based on the character information recognized for each partial area and the determined maximum thickness in cubes, and combines them to generate the three-dimensional shape of the character corresponding to the two-dimensional image of the character drawn in the drawing area 203. The generated three-dimensional shape may be passed to the character registration processing and stored as character information together with other information.
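Putting S601 to S604 together, a sketch of the whole generation flow might look like the following; it reuses the helper functions sketched earlier, all of which are illustrative stand-ins rather than the actual implementation.

```python
class DrawingIncompleteError(Exception):
    """Raised when some partial area contains no recognizable character information (S602)."""


def generate_character_shape(drawing_image):
    grid = read_dot_grid(drawing_image)             # S601: quantize the drawing area
    parts = recognize_parts(grid)                   # S601: recognize each partial area
    empty = [name for name, dots in parts.items() if not dots]
    if empty:                                       # S602: require every area to be drawn
        raise DrawingIncompleteError(f"no drawing recognized for: {', '.join(empty)}")
    thickness = determine_max_thickness(parts)      # S603: depth per part
    part_voxels = {}
    for name, dots in parts.items():                # S604: build, chamfer, assemble
        voxels = build_part_voxels(dots, thickness[name])
        if name in ("head", "torso"):
            voxels = chamfer_part(voxels, thickness[name])
        part_voxels[name] = voxels
    return assemble_character(part_voxels)
```

The caller corresponding to the registration processing would catch DrawingIncompleteError and have the presentation control unit notify the user that the drawing area is insufficiently filled in.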
In this way, the information processing device of the present embodiment can easily generate a three-dimensional shape having a suitable appearance. More specifically, the two-dimensional image of the character drawn by the user is recognized separately for each partial area, and the thickness determination and the conversion into three-dimensional parts are handled differently according to the body part associated with each partial area, so a three-dimensional shape that better reflects the user's intention can be generated.
[Modification 1] In the generation processing of the embodiment described above, the three-dimensional shape is generated on the condition that a two-dimensional image of the corresponding body part has been drawn in every partial area provided in the drawing area 203, but the implementation of the present invention is not limited to this.
That is, it is difficult, particularly for young children, to grasp how a three-dimensional shape with a suitable appearance will be generated and then draw two-dimensional images into all of these partial areas. Therefore, for some partial areas, even if no two-dimensional image has been drawn, or even if a two-dimensional image has been drawn but its information would break down when constructing the three-dimensional shape, control may be performed so that the character's three-dimensional shape is still generated, with the recognition unit 106 recognizing the character information by complementing it, or with the model generation unit 108 substituting default three-dimensional parts to fill in the gap.
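As a sketch of this relaxed behaviour, missing or unusable partial areas could simply fall back to stock geometry; DEFAULT_PART_VOXELS is a hypothetical table of pre-made parts, not something defined in this publication, and build_part_voxels is the helper sketched earlier.

```python
# Hypothetical library of pre-made parts: {part name: {(x, y, z): color}}.
DEFAULT_PART_VOXELS = {
    "head": {(x, y, z): (200, 180, 160)
             for x in range(8) for y in range(8) for z in range(6)},
    # one default block per remaining part would be defined the same way
}


def build_part_or_default(name, dots, thickness):
    """Use the drawn dots when usable, otherwise substitute the default part."""
    if dots:
        return build_part_voxels(dots, thickness)
    return dict(DEFAULT_PART_VOXELS.get(name, {}))
```

With this fallback in place, the S602 check could be relaxed to require drawings only in the areas deemed essential, such as the head and torso.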
[Modification 2] In the embodiment described above, considering that processing to estimate the solid shape of a freehand two-dimensional drawing can require more computation and impose a heavier processing load, which is undesirable from the viewpoint of providing the entertainment experience of the game application on many devices of differing performance, the game card was described as being pre-printed with a group of squares arranged in a grid so that the drawing is recognized as a dot picture composed of a limited number of dots. However, the implementation of the present invention is not limited to this, and the drawing area 203 may be configured to allow freehand drawing so that the user can make the character appear in the game with the appearance they desire. Even in such a mode, the present invention can be realized as long as the partial areas of the drawing area 203 are marked in a manner that makes explicit which body part each corresponds to.
[Other Embodiments] The present invention is not limited to the above embodiments, and various modifications and changes are possible within the scope of the gist of the invention without departing from its spirit and scope. The information processing device according to the present invention can also be realized by a program that causes one or more computers to function as that information processing device. The program can be provided and distributed by being recorded on a computer-readable recording medium or via a telecommunication line.
100: smartphone, 101: control unit, 102: recording medium, 103: memory, 104: imaging unit, 105: analysis unit, 106: recognition unit, 107: determination unit, 108: model generation unit, 109: presentation control unit, 110: operation input unit, 111: communication unit, 120: display unit

Claims (18)

  1. A program for generating a three-dimensional shape of a character, the three-dimensional shape being composed of a set of elements serving as structural units, the program causing a computer to execute: processing to recognize, for a drawing area in which a two-dimensional image of the character is drawn, information of the drawn character for each partial area provided in the drawing area; processing to determine, for each partial area, based on the recognized information of the character, the number of the elements in a depth direction to be used in constructing the three-dimensional shape of the character corresponding to that partial area; and processing to generate the three-dimensional shape of the character by combining the sets of elements constructed for the respective partial areas based on the determined numbers of elements in the depth direction.
  2. The program according to claim 1, wherein, in the recognition processing, the information of the drawn character is recognized for the part of the character associated with each of the partial areas.
  3. The program according to claim 2, wherein the drawing area is provided with at least a first partial area in which the information of the drawn character is recognized as the head of the character, and a second partial area in which the information of the drawn character is recognized as the torso of the character.
  4. The program according to claim 3, wherein the second partial area is set narrower in width than the first partial area in the drawing area.
  5. The program according to claim 3 or 4, wherein the drawing area is further provided with a third partial area, different from the first partial area and the second partial area, in which the information of the drawn character is recognized as an arm of the character.
  6. The program according to claim 5, wherein the second partial area is provided adjacent to the first partial area either directly or indirectly via another partial area, and the third partial area is provided directly adjacent to the second partial area.
  7. The program according to any one of claims 2 to 6, wherein the three-dimensional shape of the character is constructed by connecting the elements in the directions of each of three mutually orthogonal axes defined in a three-dimensional space, one of which is the depth direction.
  8. The program according to claim 7, wherein the elements each have the same shape.
  9. The program according to claim 8, wherein, in the determination processing, a number of elements in the depth direction that differs for each of the partial areas is determined.
  10. The program according to claim 8 or 9, wherein, in the determination processing, the number of elements in the depth direction used in constructing the three-dimensional shape of the head of the character is determined to be larger than the number of elements in the depth direction used in constructing the three-dimensional shape of the torso of the character.
  11. The program according to claim 8 or 9, wherein, in the determination processing, the number of elements in the depth direction used in constructing the three-dimensional shape of the arms of the character is determined to be the same for the left arm and the right arm.
  12. The program according to any one of claims 7 to 11, further causing the computer to execute processing to, in constructing the three-dimensional shape of the head of the character, uniformly apply the number of elements in the depth direction determined for the head to the two-dimensional image of the character drawn in the corresponding partial area to construct a three-dimensional shape, and then remove the elements located at the outer edges of the front and back faces of that three-dimensional shape.
  13. The program according to any one of claims 7 to 12, further causing the computer to execute processing to, in constructing the three-dimensional shape of the torso of the character, uniformly apply the number of elements in the depth direction determined for the torso to the two-dimensional image of the character drawn in the corresponding partial area to construct a three-dimensional shape, and then remove the elements located at the outer edges of the front and back faces of that three-dimensional shape.
  14. The program according to any one of claims 1 to 13, wherein, in the generation processing, the three-dimensional shape of the character is generated on condition that a two-dimensional image is drawn in all of the partial areas provided in the drawing area.
  15. The program according to any one of claims 1 to 13, wherein, in the generation processing, when a two-dimensional image is not drawn in some of the partial areas provided in the drawing area, the information of the character relating to those partial areas is generated by complementation, and the three-dimensional shape of the character is generated.
  16. The program according to any one of claims 1 to 15, further causing the computer to execute processing to acquire the two-dimensional image of the character from a physical article, wherein the drawing area is provided on the physical article in a form into which drawing is possible.
  17. An information processing device for generating a three-dimensional shape of a character, the three-dimensional shape being composed of a set of elements serving as structural units, the device comprising: recognition means for recognizing, for a drawing area in which a two-dimensional image of the character is drawn, information of the drawn character for each partial area provided in the drawing area; determination means for determining, for each partial area, based on the information of the character recognized by the recognition means, the number of the elements in a depth direction to be used in constructing the three-dimensional shape of the character corresponding to that partial area; and generation means for generating the three-dimensional shape of the character by combining the sets of elements constructed for the respective partial areas based on the numbers of elements in the depth direction determined by the determination means.
  18. A game system including an information processing device for generating a three-dimensional shape of a character, the three-dimensional shape being composed of a set of elements serving as structural units, and a game article provided with a drawing area into which a two-dimensional image of the character can be drawn, wherein the information processing device comprises: recognition means for recognizing, for the drawing area in which the two-dimensional image of the character is drawn, information of the drawn character for each partial area provided in the drawing area; determination means for determining, for each partial area, based on the information of the character recognized by the recognition means, the number of the elements in a depth direction to be used in constructing the three-dimensional shape of the character corresponding to that partial area; and generation means for generating the three-dimensional shape of the character by combining the sets of elements constructed for the respective partial areas based on the numbers of elements in the depth direction determined by the determination means.
PCT/JP2019/048719 2019-01-11 2019-12-12 Program, information processing device, and game system WO2020145021A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-003883 2019-01-11
JP2019003883A JP6734411B2 (en) 2019-01-11 2019-01-11 Program, information processing device and game system

Publications (1)

Publication Number Publication Date
WO2020145021A1 true WO2020145021A1 (en) 2020-07-16

Family

ID=71520356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/048719 WO2020145021A1 (en) 2019-01-11 2019-12-12 Program, information processing device, and game system

Country Status (2)

Country Link
JP (1) JP6734411B2 (en)
WO (1) WO2020145021A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0256303B2 (en) * 1985-04-03 1990-11-29 Shinnippon Seitetsu Kk
JPH0981775A (en) * 1995-09-19 1997-03-28 Hitachi Ltd Method for generating three-dimensional articulated structure shape
JP2002306840A (en) * 2001-04-19 2002-10-22 Taito Corp Character item generation game machine capable of preparing parameter by two dimensional shape
JP2018195300A (en) * 2017-05-19 2018-12-06 株式会社リコー Display control apparatus, display system, display control method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0696164A (en) * 1992-09-14 1994-04-08 A T R Tsushin Syst Kenkyusho:Kk Three-dimensional image data base generating system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0256303B2 (en) * 1985-04-03 1990-11-29 Shinnippon Seitetsu Kk
JPH0981775A (en) * 1995-09-19 1997-03-28 Hitachi Ltd Method for generating three-dimensional articulated structure shape
JP2002306840A (en) * 2001-04-19 2002-10-22 Taito Corp Character item generation game machine capable of preparing parameter by two dimensional shape
JP2018195300A (en) * 2017-05-19 2018-12-06 株式会社リコー Display control apparatus, display system, display control method, and program

Also Published As

Publication number Publication date
JP6734411B2 (en) 2020-08-05
JP2020113087A (en) 2020-07-27

Similar Documents

Publication Publication Date Title
US9652895B2 (en) Augmented reality image transformation
US10186084B2 (en) Image processing to enhance variety of displayable augmented reality objects
JP6224327B2 (en) Information processing system, information processing apparatus, information processing method, and information processing program
JP4851504B2 (en) How to generate assets for interactive entertainment using digital image capture
US10363486B2 (en) Smart video game board system and methods
JP6202980B2 (en) Information processing program, information processing apparatus, information processing system, and information processing method
US11681910B2 (en) Training apparatus, recognition apparatus, training method, recognition method, and program
JPWO2011155068A1 (en) Character generation system, character generation method and program
JP5812550B1 (en) Image display device, image display method, and program
CN107469355A (en) Game image creation method and device, terminal device
JP2011243019A (en) Image display system
JP2014155564A (en) Game system and program
JP6734411B2 (en) Program, information processing device and game system
JP2021016547A (en) Program, recording medium, object detection device, object detection method, and object detection system
US20140192045A1 (en) Method and apparatus for generating three-dimensional caricature using shape and texture of face
KR101685505B1 (en) Method for generating 3d image by user's participation and system and method for edutainments service using the same
WO2020031542A1 (en) Program, game device, and game system
KR101643569B1 (en) Method of displaying video file and experience learning using this
JP2017123103A (en) Terminal device, information processing method, and program
JP2011215709A (en) Apparatus, method and program for assisting cartoon creation
JP3819911B2 (en) Entertainment equipment
CN114219888A (en) Method and device for generating dynamic silhouette effect of three-dimensional character and storage medium
JP7073311B2 (en) Programs, game machines and game systems
KR20120097589A (en) Computing device, method and system for embodying augmented reality
JP3866602B2 (en) 3D object generation apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19908467

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19908467

Country of ref document: EP

Kind code of ref document: A1