WO2019150778A1 - Information processing device, information processing method, and program

Info

Publication number
WO2019150778A1
WO2019150778A1 (PCT/JP2018/045758)
Authority
WO
WIPO (PCT)
Prior art keywords
real
information processing
processing apparatus
information
physical quantity
Prior art date
Application number
PCT/JP2018/045758
Other languages
French (fr)
Japanese (ja)
Inventor
Tetsuo Ikeda
Tomohiro Ishii
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2019150778A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume

Definitions

  • This disclosure relates to an information processing apparatus, an information processing method, and a program.
  • Patent Document 1 discloses a technique for recognizing a real object and a work action performed on the real object by analyzing a captured image, and managing them as history information.
  • However, with such techniques, the physical quantity of at least one real object could not be estimated appropriately based on the behavior of a plurality of real objects.
  • Therefore, the present disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of more appropriately estimating the physical quantity of at least one real object based on the behavior of a plurality of real objects.
  • According to the present disclosure, there is provided an information processing apparatus including: an identification unit that identifies a plurality of real objects existing in a real space in an input image; a detection unit that detects an event corresponding to a physical action between the plurality of real objects based on the input image; and an estimation unit that estimates a physical quantity of at least one real object among the plurality of real objects based on the event.
  • According to the present disclosure, there is also provided a computer-executed information processing method including: identifying a plurality of real objects existing in the real space in an input image; detecting an event corresponding to a physical action between the plurality of real objects based on the input image; and estimating a physical quantity of at least one real object among the plurality of real objects based on the event.
  • According to the present disclosure, there is further provided a program for causing a computer to realize functions of: identifying a plurality of real objects existing in the real space in an input image; detecting an event corresponding to a physical action between the plurality of real objects based on the input image; and estimating a physical quantity of at least one real object among the plurality of real objects based on the event.
  • A block diagram illustrating a functional configuration example of the information processing apparatus 100.
  • A diagram showing an example of the contents stored in the object information primary storage unit.
  • A diagram showing an example of the contents stored in the object information storage unit.
  • A flowchart showing an example of the flow of processing from input to graphics display.
  • Flowcharts showing an example of the flow of the object tracking process.
  • A flowchart showing an example of the flow of the object information estimation process.
  • Diagrams illustrating examples in which a real-object simulation is performed using the information processing apparatus 100.
  • Diagrams illustrating examples in which cooking assistance is performed using the information processing apparatus 100.
  • A block diagram illustrating a hardware configuration example of the information processing apparatus 100.
  • 1. Embodiment (1.1. Overview) First, an overview of an embodiment of the present disclosure will be described.
  • the information processing system includes an information processing apparatus 100 and a projection plane 200 on which projection is performed by the information processing apparatus 100.
  • the information processing apparatus 100 is an apparatus having a function of estimating a physical quantity of a real object existing in a real space. More specifically, as shown in FIG. 1, the information processing apparatus 100 has an imaging function for imaging the entire projection plane 200 by being installed above the projection plane 200. The information processing apparatus 100 identifies a plurality of real objects in the input image by analyzing the captured input image.
  • the information processing apparatus 100 detects an event corresponding to a physical action between the plurality of real objects by analyzing the input image. Furthermore, the information processing apparatus 100 can estimate the physical quantity of at least one real object among a plurality of real objects based on the detected event. As a result, the user can obtain an estimated value of the physical quantity of the real object without using a dedicated measuring instrument or the like.
  • the above real object may be any tangible object that exists in real space.
  • For example, the real object may be a solid having a fixed shape, or a liquid or gas having no fixed shape.
  • the above event may be any event as long as it corresponds to a physical action between real objects.
  • the event includes an event in which some force acts on the real object such as a collision or contact between the real objects.
  • The event may also include an event related to the behavior of another real object with respect to the real object that provides the projection plane 200.
  • the contents of the physical quantity that is the estimation target are not particularly limited.
  • the physical quantity to be estimated may include mass, density, friction coefficient, speed, acceleration, temperature, current, voltage, or the like. Note that the physical quantity to be estimated is not limited to these.
  • The information processing apparatus 100 can generate a virtual object corresponding to a real object that is a physical quantity estimation target. Then, the information processing apparatus 100 can reflect the estimated physical quantity of the real object in the physical quantity of the virtual object corresponding to the real object. For example, when the information processing apparatus 100 estimates the mass of an apple that is a real object, the information processing apparatus 100 generates a virtual object of the apple and can reflect the estimated mass in the mass of the virtual apple.
  • the information processing apparatus 100 can project an arbitrary image on the projection plane 200.
  • the information processing apparatus 100 can project the generated virtual object on the projection plane 200.
  • the information processing apparatus 100 can approximate the behavior of the virtual object to the behavior of the real object by reflecting the estimated physical quantity of the real object on the physical quantity of the virtual object.
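  • As a hedged illustration of this reflection step, the following Python sketch mirrors an estimated mass into a virtual counterpart so that a physics simulation of the virtual object can approximate the real object's behavior; the class and function names are illustrative assumptions, not the implementation claimed by the patent.

```python
# Hedged sketch: a virtual object carries the physical quantities estimated for
# its corresponding real object. All names and the sample value are illustrative.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    mass_g: float  # reflected from the estimate for the corresponding real object

def generate_virtual_object(name: str, estimated_mass_g: float) -> VirtualObject:
    """Create a virtual counterpart whose mass mirrors the real-object estimate."""
    return VirtualObject(name=name, mass_g=estimated_mass_g)

# A virtual apple inheriting a (hypothetical) estimated mass of 100 g.
virtual_apple = generate_virtual_object("apple", 100.0)
print(virtual_apple)  # -> VirtualObject(name='apple', mass_g=100.0)
```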
  • the information processing apparatus 100 can project an image that causes the virtual objects to interact with each other in the virtual space.
  • the information processing apparatus 100 can project an image that causes a plurality of apples that are virtual objects to collide with each other in a virtual space.
  • Further, the information processing apparatus 100 may project a video that causes a real object and a virtual object to interact with each other. For example, when dominoes that are real objects and dominoes that are virtual objects are arranged in a row and the user topples the real dominoes, the information processing apparatus 100 may topple the virtual dominoes in the virtual space after the real dominoes have fallen. Thereby, the information processing apparatus 100 can give the user an impression that the boundary between the real space and the virtual space has become ambiguous.
  • the functions of the information processing apparatus 100 described above can be changed as appropriate.
  • the information processing apparatus 100 may include various sensors to realize identification of a real object and estimation of a physical quantity of the real object based on sensor information other than the input image.
  • Further, the functions of the information processing apparatus 100 described above may be provided in different apparatuses. Details of the real-object physical-quantity estimation process and the virtual-object projection process performed by the information processing apparatus 100 will be described later.
  • the projection surface 200 may be any surface as long as the image can be projected by the information processing apparatus 100.
  • the projection surface 200 may be an uneven surface, a curved surface, a spherical surface, or the like.
  • the material of the projection surface 200 is not particularly limited.
  • the material of the projection surface 200 may be wood, rubber material, metal material, plastic material, or the like.
  • As described above, the projection plane 200 may be any surface and may be formed of any material. Therefore, the user can use the information processing system anywhere the information processing apparatus 100 can be installed over an arbitrary surface.
  • an image may be projected from below the projection plane 200a by installing the information processing apparatus 100a below the projection plane 200a that is a transmissive screen.
  • In this case, the information processing apparatus 100a captures, from below the projection plane 200a, a real object positioned above the projection plane 200a (not necessarily on the projection plane 200a), and realizes real-object identification and physical-quantity estimation by analyzing the captured image.
  • Alternatively, the information processing apparatus 100b may include a touch panel with sensors. More specifically, the information processing apparatus 100b includes a plurality of image sensors on the surface of the touch panel so as to photograph a real object positioned above the touch panel (not necessarily on the touch panel), and realizes real-object identification and physical-quantity estimation by analyzing the captured images. In this case, since the video is displayed by the information processing apparatus 100b itself, the projection plane 200 does not exist. Note that the modes 2A and 2B in FIG. 2 can be flexibly modified in accordance with specifications and operations.
  • the information processing apparatus 100 holds the information shown in FIG. 3A.
  • As shown in 3A-1, the information processing apparatus 100 holds, for a block (real object), its size, texture, emitted sound (indicated as "sound" in the drawing), mass, a flag indicating whether the mass is variable (hereinafter referred to as the "mass variable flag"), and information on the material.
  • the block is a plastic real object having a size of 1 cm in length, 3 cm in width, 1 cm in height, and a mass of 9 g, and the mass is not variable.
  • sound data related to the sound generated when the block collides with another real object is registered. Note that the information shown in 3A-1 is stored in the information processing apparatus 100 as known object information. Details will be described later.
  • The information processing apparatus 100 also holds a reference density of 1 g/cm3, as indicated by 3A-2.
  • The reference density is information used when the information processing apparatus 100 estimates the mass of a real object of unknown mass identified in the input image. More specifically, the information processing apparatus 100 outputs the volume of the real object by analyzing the input image, and can estimate the value obtained by multiplying the volume by the reference density as the mass of the real object.
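  • As a minimal sketch of this estimate (assuming a simple bounding-box volume; the function name and units are illustrative):

```python
# Hedged sketch: mass estimated as (bounding-box volume) x (reference density),
# as described above. The 1 g/cm3 reference density matches 3A-2.

REFERENCE_DENSITY_G_PER_CM3 = 1.0  # provisional density for unknown objects

def estimate_mass_g(length_cm: float, width_cm: float, height_cm: float,
                    density_g_per_cm3: float = REFERENCE_DENSITY_G_PER_CM3) -> float:
    """Approximate mass from the bounding-box volume and a reference density."""
    volume_cm3 = length_cm * width_cm * height_cm
    return volume_cm3 * density_g_per_cm3

# A 3 cm x 3 cm x 3 cm object yields a provisional estimate of 27 g.
print(estimate_mass_g(3, 3, 3))  # -> 27.0
```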
  • Further, the information processing apparatus 100 registers a wood surface as the reference projection plane.
  • The reference projection plane is information regarding the material of the projection plane 200 that is provisionally registered when the information processing apparatus 100 cannot identify the material of the projection plane 200 by analyzing the input image or the like.
  • Further, the information processing apparatus 100 holds the friction coefficients of plastic against a wood surface and against a rubber surface as 0.4 and 0.8, respectively.
  • the information processing apparatus 100 measures or estimates object information including the size, texture, emitted sound, mass, mass variable flag, friction coefficient, and accuracy shown in 3B-2.
  • the accuracy is information used as an index value of the accuracy of object information. It can be said that the higher the accuracy, the more reliable the object information. Note that the above object information is merely an example, and the content of the object information is not particularly limited.
  • At this point, the information processing apparatus 100 has no object information regarding the apple 10, so each piece of information has a null value in 3B-2.
  • The information processing apparatus 100 cannot identify the material of the projection plane 200 by analyzing the input image, and therefore provisionally treats the projection plane 200 as a wood surface using the information on the reference projection plane.
  • the information shown in 3B-2 is accumulated in the information processing apparatus 100 as undetermined object information. Details will be described later.
  • The information processing apparatus 100 identifies the apple 10 by analyzing the input image and, as shown in 3C-2 in FIG. 3C, outputs a size of 3 cm in length, 3 cm in width, and 3 cm in height. The information processing apparatus 100 then estimates the mass of the apple 10 to be 27 g by multiplying the volume (the product of the length, width, and height values) based on the size information by the reference density. Further, the information processing apparatus 100 registers true (as a provisional value) for the mass variable flag and unknown for the friction coefficient, whose values are not known, and registers 0.0 for the accuracy.
  • Note that the information processing apparatus 100 can recognize the detailed shape of a real object, including the apple 10, and can appropriately change the content of the size information according to the shape of the real object. More specifically, when the real object is a thread, the information processing apparatus 100 may hold information on the length of the thread as the size information. Further, the information processing apparatus 100 may hold information on the volume of the real object instead of (or in addition to) the size information.
  • The information processing apparatus 100 identifies the block 20 by analyzing the captured input image, and determines, based on its feature amount, that the block 20 is substantially the same as (or similar to) the known block indicated by 3A-1 in FIG. 3A. Then, the information processing apparatus 100 reflects the information of 3A-1 in the object information of the block 20, as shown in 3D-1.
  • the friction coefficient of the block 20 in 3D-1 is the friction coefficient for the wood surface shown in 3A-4. Since the object information is known, 1.0 is registered as the accuracy.
  • the information processing apparatus 100 predicts the behavior of the apple 10 and the block 20 (for example, the moving distance due to the collision) based on the various object information of the apple 10 and the block 20 registered in the above 3D-2.
  • the method for predicting the behavior of the apple 10 and the block 20 is not particularly limited, and for example, a known technique such as physical simulation can be used.
  • In this case, the information processing apparatus 100 determines that the assumed material of the projection plane 200 is incorrect and, based on the moving distance of the block 20, estimates that the projection plane 200 is a rubber surface. Then, as indicated by 3E-2, the information processing apparatus 100 registers 0.8, the friction coefficient of plastic against a rubber surface indicated by 3A-5, as the friction coefficient of the block 20.
  • Further, the information processing apparatus 100 calculates the energy applied to the block 20 on the assumption that the projection plane 200 is a rubber surface. Then, if analysis of the input image shows that the same energy was applied to the block 20 and the apple 10, the information processing apparatus 100 predicts the behavior of the apple 10 on the rubber surface (for example, the moving distance due to the collision) when that energy is applied to the apple 10.
  • If the actual behavior differs from the prediction, the information processing apparatus 100 determines that the assumed mass of the apple 10 is incorrect, and estimates and updates the mass of the apple 10 based on the actual moving distance, as indicated by 3F-2 in FIG. 3F (in the example of 3F-2, the mass of the apple 10 is estimated to be 108 g). Note that the information processing apparatus 100 may instead determine that other object information, such as the reference density or the friction coefficient, is incorrect and re-estimate that information.
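  • A hedged sketch of this contradiction resolution follows. It assumes Coulomb friction, under which an object given kinetic energy E slides a distance d = E / (mu * m * g); for fixed E and mu the distance is inversely proportional to mass, so observing a shorter slide than predicted implies a proportionally larger mass. The model and the numbers are illustrative, not taken from the patent.

```python
# Hedged sketch: rescale an assumed mass so that the predicted sliding distance
# matches the observed one (d is inversely proportional to m under Coulomb
# friction for a fixed input energy and friction coefficient).

def resolve_mass_contradiction(assumed_mass_g: float,
                               predicted_distance_cm: float,
                               observed_distance_cm: float) -> float:
    """Update the mass estimate from the predicted-vs-observed slide ratio."""
    return assumed_mass_g * (predicted_distance_cm / observed_distance_cm)

# If a 27 g assumption predicts a 20 cm slide but only 5 cm is observed, the
# estimate becomes 108 g, matching the 27 g -> 108 g revision in FIG. 3F.
print(resolve_mass_contradiction(27.0, 20.0, 5.0))  # -> 108.0
```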
  • The information processing apparatus 100 performs the estimation process for object information (including physical quantities such as mass) each time an event as described above (in the above example, a collision) occurs. As described above, the information processing apparatus 100 can improve the estimation accuracy of the object information by using the estimation results from past events in the estimation process for a new event (in this example, as shown in FIG. 3G, the information processing apparatus 100 finally estimates the mass of the apple 10 to be 106.5 g).
  • Note that the information processing apparatus 100 may estimate the physical quantity of the apple 10 based on its behavior when the apple 10 is rolled on the projection plane 200 or dropped onto the projection plane 200, without using the block 20.
  • In the above description, the behavior of the real object used for the physical quantity estimation process is mainly the moving distance, but the behavior used is not limited to this.
  • For example, the information processing apparatus 100 may predict various physical behaviors of a real object, such as rolling, jumping, and collapsing, and estimate a physical quantity of the real object by detecting a difference from the actual behavior.
  • The behavior is predicted by inputting the shape or properties of the real object (for example, ease of rolling, ease of jumping, or ease of collapsing) into a physical simulation program.
  • the information processing apparatus 100 includes a control unit 110, a processing unit 120, an input unit 130, a graphics display processing unit 140, and an output unit 150.
  • the control unit 110 has a functional configuration that comprehensively controls the overall processing performed by the information processing apparatus 100.
  • For example, the control unit 110 can control the activation or stopping of each functional configuration, including the processing unit 120, based on input from the input unit 130, and can control the output unit 150, such as a display or a speaker.
  • Note that the control content of the control unit 110 is not limited to these.
  • the control unit 110 may realize processing (for example, OS (Operating System) processing or the like) generally performed in a general-purpose computer, a PC (Personal Computer), a tablet PC, a smartphone, or the like.
  • the processing unit 120 has a functional configuration that estimates physical quantities of real objects and manages object information of real objects. As illustrated in FIG. 4, the processing unit 120 includes an object tracking unit 121, an object information estimation unit 122, an object information primary storage unit 123, and an object information storage unit 124.
  • the object tracking unit 121 has a functional configuration that performs object tracking processing for tracking a real object. More specifically, the object tracking unit 121 compares the feature amount of the real object in the input image with the feature amount of the real object registered in the object information primary storage unit 123 or the object information storage unit 124. Thus, it functions as an identification unit for identifying a real object in the input image.
  • The object tracking unit 121 also functions as a detection unit that detects an event between a plurality of real objects by analyzing the input image. More specifically, the object tracking unit 121 monitors whether each real object is stationary or moving by analyzing the input image, and detects an event when a real object has moved. When an event occurs, the object tracking unit 121 predicts the behavior of the real object in the event and detects any difference from the actual behavior.
  • Hereinafter, the occurrence of a difference between the predicted behavior and the actual behavior is also referred to as a "contradiction", and the process of detecting the difference is also referred to as the "contradiction detection process".
  • The object tracking unit 121 also functions as an estimation unit that estimates the physical quantity of a real object. More specifically, when a contradiction arises from an event as described above, the object tracking unit 121 calculates a value that eliminates the difference between the predicted behavior and the actual behavior (or reduces the difference to a predetermined value or less). Eliminating the difference in this way is also referred to as the "contradiction resolution process". Details of the processing by the object tracking unit 121 will be described later.
  • the object information estimation unit 122 has a functional configuration for performing object information estimation processing for estimating object information of a real object. More specifically, the object information estimation unit 122 functions as an identification unit that identifies a real object in the input image in the same manner as the object tracking unit 121.
  • The object information estimation unit 122 also functions as an estimation unit that estimates the physical quantity of a real object. More specifically, the object information estimation unit 122 determines the identity of the target real object and a real object registered in the object information storage unit 124 based on their feature amounts (for example, size or texture). When the identity of these real objects can be confirmed, the object information estimation unit 122 updates the physical quantity of the target real object using the physical quantity of the real object registered in the object information storage unit 124.
  • Note that the object information estimation unit 122 may treat these real objects as substantially the same if the difference between the feature amount of the target real object and the feature amount of the real object registered in the object information storage unit 124 is equal to or less than a predetermined value.
  • In other words, even when the feature amounts do not match exactly, the object information estimation unit 122 may treat the target real object and the real object registered in the object information storage unit 124 as substantially the same. This eliminates the trouble of registering every real object that differs only slightly in size, texture, and the like. Details of the processing by the object information estimation unit 122 will be described later.
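  • A hedged sketch of this identity check follows; the feature representation (a size triple and a scalar texture score) and the thresholds are illustrative assumptions.

```python
# Hedged sketch: two real objects are treated as substantially the same when
# their size and texture feature differences fall below predetermined thresholds.

SIZE_THRESHOLD_CM = 0.5   # illustrative tolerance per dimension
TEXTURE_THRESHOLD = 0.1   # illustrative tolerance on a texture score

def is_substantially_same(target_size_cm, known_size_cm,
                          target_texture: float, known_texture: float) -> bool:
    size_ok = all(abs(t - k) <= SIZE_THRESHOLD_CM
                  for t, k in zip(target_size_cm, known_size_cm))
    texture_ok = abs(target_texture - known_texture) <= TEXTURE_THRESHOLD
    return size_ok and texture_ok

# A 3.1 x 3.0 x 2.9 cm object with a near-identical texture score matches a
# registered 3 x 3 x 3 cm object, so no separate registration is needed.
print(is_substantially_same((3.1, 3.0, 2.9), (3, 3, 3), 0.42, 0.40))  # -> True
```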
  • The object information primary storage unit 123 has a functional configuration for storing object information about undetermined real objects. More specifically, as shown in FIG. 5, the object information primary storage unit 123 stores object information including an object ID, size, texture, emitted sound, mass, mass variable flag, friction coefficient, accuracy, and the like.
  • the object ID is identification information assigned to the real object identified by the information processing apparatus 100. The contents of the other object information are as described above.
  • As the estimation process is repeated, the accuracy of the object information in the object information primary storage unit 123 increases. Then, for a real object whose accuracy is equal to or greater than a predetermined value, the object information is transferred from the object information primary storage unit 123 to the object information storage unit 124.
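  • The following sketch illustrates this accuracy-based promotion under assumed record layouts and an assumed threshold; none of these names come from the patent.

```python
# Hedged sketch: records whose accuracy reaches the threshold migrate from the
# primary storage (undetermined objects) to the object information storage.

ACCURACY_PROMOTION_THRESHOLD = 1.0  # illustrative

primary_storage = {10: {"mass_g": 106.5, "accuracy": 1.0},
                   11: {"mass_g": 27.0, "accuracy": 0.3}}
object_storage = {}

for object_id in list(primary_storage):
    if primary_storage[object_id]["accuracy"] >= ACCURACY_PROMOTION_THRESHOLD:
        object_storage[object_id] = primary_storage.pop(object_id)

print(sorted(object_storage))   # -> [10]
print(sorted(primary_storage))  # -> [11]
```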
  • the object information stored in the object information primary storage unit 123 is not limited to the example of FIG.
  • the object information storage unit 124 is a functional configuration that stores object information about a real object that is known (or the accuracy is a predetermined value or more). More specifically, as shown in FIG. 6, the object information storage unit 124 stores object information including a size, a texture, a sound to be emitted, a mass, a mass variable flag, a material, and the like for a known real object.
  • These pieces of object information may be registered through the physical quantity estimation process performed by the information processing apparatus 100, may be input by a user, or may be acquired from an arbitrary network such as the Internet. They may also be registered by an image-recognition-based object recognition engine, which allows more types of real objects to be recognized.
  • the dictionary data of the object recognition engine may be updated by inputting highly accurate object information to the object recognition engine.
  • the object information stored in the object information storage unit 124 is not limited to the example of FIG.
  • the input unit 130 has a functional configuration that receives input from a user or the like.
  • the input unit 130 includes an image sensor and can generate an input image by photographing the entire projection surface 200.
  • For example, the input unit 130 may generate an input image in the visible light band, or may generate an input image in a specific wavelength band (for example, the infrared band) through a multispectral filter that transmits only specific wavelengths. Further, the input unit 130 may be able to generate an input image from which polarized light has been removed through a polarization filter.
  • the input unit 130 may be able to generate 3D information by including a depth sensor.
  • the type of the depth sensor and the sensing method are not particularly limited, and for example, a stereo camera may be used, or a TOF (Time of Flight) method or a Structured Light method may be used.
  • the input unit 130 may include a touch sensor that can detect an operation of the user touching the projection surface 200. As a result, the user can make a desired input by touching the projection surface 200.
  • the type of touch sensor and the sensing method are not particularly limited. For example, a touch may be detected by providing the projection surface 200 with a touch panel, or the touch may be detected by analyzing an input image generated by the image sensor.
  • the sensor provided in the input unit 130 is not limited to these.
  • the input unit 130 may include an arbitrary sensor such as a sound sensor, a temperature sensor, an illuminance sensor, a position sensor (for example, a GNSS (Global Navigation Satellite System) sensor) or an atmospheric pressure sensor.
  • the input unit 130 provides the input information to the control unit 110, the processing unit 120, and the graphics display processing unit 140.
  • The graphics display processing unit 140 has a functional configuration that performs processing related to graphics display. More specifically, the graphics display processing unit 140 inputs the object information of real objects provided from the processing unit 120 and the input provided from the input unit 130 into arbitrary software (for example, a graphics application), and outputs the graphics to be projected on the projection plane 200. The graphics display processing unit 140 provides information about the output graphics to the output unit 150.
  • the graphics display processing unit 140 functions as a generation unit that generates a virtual object corresponding to a real object that is a physical quantity estimation target. Then, the graphics display processing unit 140 can reflect the estimated physical quantity of the real object on the physical quantity of the virtual object corresponding to the real object.
  • the graphics display processing unit 140 can also function as a display control unit that controls the behavior of the generated virtual object.
  • the graphics display processing unit 140 can control the behavior of another virtual object based on the behavior of the virtual object.
  • the graphics display processing unit 140 can control the behavior of the virtual object based on the behavior of the real object.
  • the graphics display processing unit 140 can give the user an impression that the boundary between the real space and the virtual space has become ambiguous. A specific example of display control by the graphics display processing unit 140 will be described later.
  • the output unit 150 has a functional configuration that outputs various types of information.
  • the output unit 150 includes projection means such as a projector, and can project the graphics output by the graphics display processing unit 140 onto the projection plane 200.
  • the output unit 150 may include display means such as various displays or sound output means such as a speaker or an amplifier.
  • the output means is not limited to the above.
  • the functional configuration example of the information processing apparatus 100 has been described above.
  • the above-described functional configuration described with reference to FIG. 4 is merely an example, and the functional configuration of the information processing apparatus 100 is not limited to the example.
  • the information processing apparatus 100 does not necessarily include all the configurations illustrated in FIG.
  • the functional configuration of the information processing apparatus 100 can be flexibly modified according to specifications and operations.
  • In step S1000, the input unit 130 acquires information about the real objects, including the input image, and the processing unit 120 receives the information from the input unit 130.
  • the information acquired by the input unit 130 may include various sensor information as well as the input image.
  • In step S1004, the object tracking unit 121 performs the object tracking process for tracking real objects using the information received from the input unit 130. Details of the object tracking process will be described later.
  • In step S1008, the object information estimation unit 122 performs the object information estimation process for estimating the object information of the real objects. Details of the object information estimation process will also be described later.
  • In step S1012, the graphics display processing unit 140 performs graphics display processing as appropriate according to the results of the preceding processes. For example, the graphics display processing unit 140 generates virtual objects corresponding to real objects and projects a video that causes the virtual objects to interact with each other, or a video that causes a real object and a virtual object to interact with each other.
  • The above processes are repeated until an end operation is performed by the user in step S1016.
  • When the end operation is performed by the user (step S1016/Yes), the series of processes ends.
  • the object tracking unit 121 repeats the processing from step S1100 to step S1128 for each real object included in the input image.
  • In step S1100, the object tracking unit 121 compares the feature amount of a real object in the input image with the feature amounts of the real objects registered in the object information primary storage unit 123, thereby confirming whether the real object included in the input image exists in the object information primary storage unit 123.
  • In step S1104, the object tracking unit 121 analyzes the input image to determine whether the real object was stationary in the previous frame.
  • When the real object included in the input image does not exist in the object information primary storage unit 123 (step S1100/No), the object tracking unit 121 performs the processing from step S1100 to step S1128 for the other real objects included in the input image.
  • When the object tracking unit 121 determines in step S1104 that the real object was not stationary in the previous frame (in other words, that the real object was moving) (step S1104/No), in step S1108 the object tracking unit 121 determines whether the real object is stationary in the current frame. If the object tracking unit 121 determines that the real object is stationary in the current frame (step S1108/Yes), in step S1112 the object tracking unit 121 acquires the coordinate change during the movement of the real object and information on the sound emitted by the real object.
  • When the real object was stationary in the previous frame (in other words, when the real object remains stationary) (step S1104/Yes), the object tracking unit 121 performs the processing from step S1100 to step S1128 for the other real objects included in the input image.
  • When the object tracking unit 121 determines in step S1108 that the real object is not stationary in the current frame (in other words, when the real object continues to move) (step S1108/No), the object tracking unit 121 performs the processing from step S1100 to step S1128 for the other real objects included in the input image.
  • In step S1116, the object tracking unit 121 determines whether there is a contradiction regarding the coordinate change of the moving real object. In other words, the object tracking unit 121 predicts the behavior of the real object and determines whether there is a difference from the actual behavior (or whether a difference greater than a predetermined value has occurred).
  • When the object tracking unit 121 determines that there is no contradiction regarding the coordinate change of the real object (step S1116/Yes), the object tracking unit 121 performs the processing from step S1100 to step S1128 for the other real objects included in the input image.
  • In step S1120, the object tracking unit 121 determines whether the mass variable flag of the real object is false (in other words, whether the mass is invariable) in the object information primary storage unit 123. When the mass variable flag of the real object is false (step S1120/Yes), the object tracking unit 121 increments the confirmed contradiction count by 1 in step S1124.
  • The confirmed contradiction count is a value used in subsequent processing to determine which estimated values to update. Details will be described later.
  • In step S1128, the object tracking unit 121 adds the real object to the contradiction table.
  • the contradiction table is a table in which real objects that are object information update targets are registered.
  • After repeating the processing from step S1100 to step S1128 for each real object included in the input image, the object tracking unit 121 determines in step S1132 whether the confirmed contradiction count is zero. When the confirmed contradiction count is zero (step S1132/Yes), the series of processing ends. If the confirmed contradiction count is not zero (step S1132/No), in step S1136 the object tracking unit 121 determines whether the confirmed contradiction count is equal to or greater than a predetermined threshold.
  • If the confirmed contradiction count is less than the predetermined threshold (step S1136/No), more real objects than the predetermined number show no contradiction in their coordinate changes, so the estimated physical quantities of the individual real objects are more suspect than the estimated value for the projection plane 200 (for example, the material of the projection plane 200).
  • Therefore, the object tracking unit 121 attempts to update the object information of each real object by repeating the processing of steps S1140 to S1148 for each real object registered in the contradiction table. More specifically, in step S1140, the object tracking unit 121 reduces the accuracy of the real object registered in the contradiction table by a constant value.
  • In step S1144, the object tracking unit 121 determines whether the accuracy of the other real object involved in the collision is 1.0 (or whether the accuracy of the other real object is higher than a predetermined value).
  • In step S1148, the object tracking unit 121 estimates the mass of the real object registered in the contradiction table based on the energy generated by the collision, the movements of the real objects before and after the collision, the sizes of the real objects, and the like, and updates the object information (this is equivalent to estimating a less accurate physical quantity based on a more accurate one).
  • At that time, the object tracking unit 121 increases the accuracy of the real object by 0.1.
  • When the confirmed contradiction count is greater than or equal to the predetermined threshold (step S1136/Yes), more real objects than the predetermined number show contradictions in their coordinate changes, so the estimated value for the projection plane 200 (for example, the material of the projection plane 200) is more suspect than the estimated physical quantities of the individual real objects.
  • Therefore, in step S1152, the object tracking unit 121 updates the estimated value of the projection plane 200. For example, the object tracking unit 121 updates the estimated material of the projection plane 200 to a rubber surface or the like.
  • In step S1156, when the friction coefficient of the real object against the updated projection plane 200 is known, the object tracking unit 121 updates the friction coefficient of the real object.
  • In step S1160, the object tracking unit 121 estimates the mass of the real object based on the energy generated by the collision, the movements of the real objects before and after the collision, the sizes of the real objects, and the like, and updates the object information. At that time, the object tracking unit 121 increases the accuracy of the real object by 0.1. The series of object tracking processes is thus completed.
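  • The branch between steps S1140-S1148 and steps S1152-S1160 can be summarized by the following hedged sketch; the threshold, penalty, and data structures are illustrative assumptions.

```python
# Hedged sketch: few confirmed contradictions -> suspect the per-object
# estimates in the contradiction table; many -> suspect (and update) the
# estimate for the projection plane itself.

CONTRADICTION_THRESHOLD = 3  # illustrative
ACCURACY_PENALTY = 0.2       # illustrative constant reduction (step S1140)

def handle_contradictions(count: int, contradiction_table: list, plane: dict) -> None:
    if count == 0:
        return  # no contradiction: nothing to update (step S1132/Yes)
    if count < CONTRADICTION_THRESHOLD:
        for obj in contradiction_table:       # steps S1140-S1148
            obj["accuracy"] = max(0.0, obj["accuracy"] - ACCURACY_PENALTY)
    else:                                     # steps S1152-S1160
        plane["material"] = "rubber"          # re-estimated from observed behavior

plane = {"material": "wood"}
handle_contradictions(4, [], plane)
print(plane)  # -> {'material': 'rubber'}
```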
  • the object information estimation unit 122 repeats the process shown in FIGS. 9A and 9B for each real object included in the input image.
  • the object information estimation unit 122 analyzes the information such as the input image provided from the input unit 130, thereby acquiring the size and texture of the real object.
  • the object information estimation unit 122 confirms whether or not the detected real object exists in the object information primary storage unit 123 based on the acquired size and texture information.
  • In step S1208, the object information estimation unit 122 determines whether the values indicating the size and texture of the real object have changed by more than a predetermined threshold compared with those stored in the object information primary storage unit 123. In this way, the object information estimation unit 122 can grasp that the appearance (texture) of the real object in the input image has changed because the real object has rolled or fallen over.
  • In step S1212, the object information estimation unit 122 updates the object information of the real object using the newly acquired size and texture information.
  • the object information estimation unit 122 can generate a 3D model of the real object through, for example, the real object rolling or falling.
  • When the values indicating the size and texture of the real object have not changed by the predetermined threshold or more compared with those in the object information primary storage unit 123 (step S1208/No), there is no object information to update, so the process of step S1212 is not performed.
  • When the detected real object does not exist in the object information primary storage unit 123 (step S1204/No), in step S1216 the object information estimation unit 122 confirms whether the object information storage unit 124 contains a real object whose difference from the target real object in the values indicating the size is equal to or less than a predetermined threshold.
  • If such a real object exists (step S1216/Yes), in step S1220 the object information estimation unit 122 confirms whether the object information storage unit 124 contains a real object whose difference from the target real object in the values indicating the texture is equal to or less than a predetermined threshold.
  • If such a real object exists (step S1220/Yes), the object information estimation unit 122 determines that the target real object is substantially the same as (or similar to) the real object existing in the object information storage unit 124. Therefore, in step S1224, the object information estimation unit 122 registers the object information of the target real object in the object information primary storage unit 123 using the physical information of the real object in the object information storage unit 124. At this time, since the physical information registered in the object information storage unit 124 is reliable, the object information estimation unit 122 sets the accuracy of the physical information to 1.0.
  • In step S1228, the object information estimation unit 122 calculates the friction coefficient of the real object against the projection plane 200 based on the material of the real object and the material of the projection plane 200, and updates the object information in the object information primary storage unit 123. Thereafter, the process transitions to the point immediately after step S1212.
  • If the object information storage unit 124 contains no real object whose difference from the target real object in the values indicating the size is equal to or less than the predetermined threshold (step S1216/No), or no real object whose difference in the values indicating the texture is equal to or less than the predetermined threshold (step S1220/No), the process transitions to step S1232.
  • In step S1232, the object information estimation unit 122 registers the object information of the target real object in the object information primary storage unit 123 using the acquired size and texture information. At that time, the object information estimation unit 122 sets the accuracy of the physical information to 0.0.
  • In step S1236, the object information estimation unit 122 calculates the mass of the real object based on the acquired size and the reference density, and updates the object information in the object information primary storage unit 123. In other words, even if the target real object does not exist in the object information storage unit 124, the object information estimation unit 122 can estimate the mass using the size of the real object. In addition, since information regarding the mass variable flag has not been acquired, the object information estimation unit 122 provisionally sets the mass variable flag to true. Thereafter, the process transitions to the point immediately after step S1212.
  • Note that the steps in the flowcharts shown in FIGS. 7, 8A, 8B, 9A, and 9B do not necessarily have to be processed in time series in the order described. That is, each step in the flowcharts may be processed in an order different from the order described, or may be processed in parallel.
  • 2. Examples: In the above, the flow of processing by the information processing apparatus 100 has been described. Next, various embodiments using the information processing apparatus 100 will be described.
  • a virtual object corresponding to a real object is generated by the information processing apparatus 100, and various graphics are provided using the virtual object.
  • the graphics display processing unit 140 acquires the object information of the real object registered in the object information primary storage unit 123 or the object information storage unit 124. Then, the graphics display processing unit 140 generates a virtual object corresponding to the real object based on the object information, and reflects the estimated physical quantity of the real object in the physical quantity of the virtual object.
  • As described above, the graphics display processing unit 140 can project graphics that cause the generated virtual objects to interact with each other, or graphics that cause a real object and a virtual object to interact with each other.
  • For example, the user arranges a plurality of boxes that are real objects, and the graphics display processing unit 140 projects a plurality of boxes that are virtual objects on the projection plane 200, thereby realizing a domino game in which real objects and virtual objects are mixed.
  • the graphics display processing unit 140 may adjust the projection position of the virtual object based on, for example, a touch operation on the projection plane 200 by the user.
  • When the graphics display processing unit 140 recognizes that the real object in front of a virtual object has fallen (or is falling), the graphics display processing unit 140 performs projection so that the boxes that are virtual objects fall down in sequence.
  • At that time, the graphics display processing unit 140 inputs the arrangement of the virtual objects, their physical information (for example, shape and mass), and the like into a physical simulation program, thereby bringing the behavior of the virtual objects closer to that of real objects. For example, a virtual object having a relatively heavy mass is projected so as to fall more easily than a virtual object having a relatively light mass.
  • the graphics display processing unit 140 can increase the sense of reality by causing the output unit 150 to output the sound at the timing when the virtual object falls.
  • As described above, the texture of a real object photographed from each surface (or each direction) is acquired (see, for example, steps S1208 and S1212 in FIG. 9A).
  • For example, as shown in 11B, when the user performs some work (including play) using the box, the object information estimation unit 122 acquires the six faces of the box as textures and registers them as object information.
  • the graphics display processing unit 140 can generate a 3D model of a virtual object that faithfully reproduces a box that is a real object by using this object information.
  • As described above, a real object is identified when the feature amount of the real object in the input image is substantially the same as the feature amount of a real object registered in the object information primary storage unit 123 or the object information storage unit 124 (or when the difference is equal to or less than a predetermined value).
  • For example, consider a box 30a and a box 30b whose package designs differ only because of the nature (taste, color, etc.) of their contents.
  • In this case, various pieces of object information, such as the size, mass, emitted sound, mass variable flag, and friction coefficient of the box 30a and the box 30b, are substantially the same.
  • Therefore, the object information estimation unit 122 may treat the box 30a and the box 30b as substantially the same if the difference between the values indicating the size or the values indicating the texture is equal to or less than a predetermined value. This eliminates the trouble of registering every real object that differs only slightly in size, texture, and the like.
  • More specifically, the object information estimation unit 122 converts the textures of the box 30a and the box 30b into grayscale representations, shown as the box 31a and the box 31b in 12B.
  • 12C shows a histogram 32 comparing the box 31a and the box 31b after grayscale conversion (the vertical axis indicates the number of pixels, and the horizontal axis indicates the brightness).
  • If the histograms are sufficiently similar, the object information estimation unit 122 treats the box 30a and the box 30b as substantially the same.
  • the object information estimation unit 122 can more appropriately determine whether or not the real objects can be handled as substantially the same object by performing the comparison process after converting the input image into the grayscale image.
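  • A hedged sketch of such a grayscale histogram comparison follows; histogram intersection is used here as one common similarity measure, and the bin count and threshold are illustrative assumptions.

```python
# Hedged sketch: build normalized brightness histograms of two grayscale
# textures and treat the objects as substantially the same when the histogram
# intersection is high enough.
import numpy as np

def brightness_histogram(gray: np.ndarray, bins: int = 32) -> np.ndarray:
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalize so differently sized images compare

def textures_match(gray_a: np.ndarray, gray_b: np.ndarray,
                   threshold: float = 0.9) -> bool:
    ha, hb = brightness_histogram(gray_a), brightness_histogram(gray_b)
    return float(np.minimum(ha, hb).sum()) >= threshold  # 1.0 if identical

box_a = np.random.default_rng(0).integers(0, 256, size=(64, 64))
box_b = box_a.copy()
print(textures_match(box_a, box_b))  # -> True
```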
  • For example, as shown in 13A of FIG. 13, a specular reflection area 40 may occur in the input image.
  • the input unit 130 may perform imaging processing via a polarizing filter.
  • the input unit 130 can generate an input image from which the specular reflection area 40 is removed, so that the identification accuracy of the real object can be improved.
  • In the object tracking process and the object information estimation process, when a real object is checked against the object information primary storage unit 123 or the object information storage unit 124, the size of the real object is also taken into consideration.
  • For example, as shown in 14A to 14C of FIG. 14, when there are real objects having mutually different sizes and substantially the same texture (in the example of FIG. 14, 14A is a key chain, 14B is a coaster, and 14C is a storefront sign), the object tracking unit 121 and the object information estimation unit 122 can distinguish these real objects based on their sizes. The technique of the present disclosure is therefore useful compared with known image recognition techniques, which regard real objects having different sizes but substantially the same texture as substantially the same object.
  • the information processing apparatus 100 performs weighing (for example, measurement of mass) of materials used for cooking.
  • the user places a carrot on a cutting board placed on the projection plane 200.
  • the object information estimation unit 122 of the information processing apparatus 100 analyzes the input image, thereby identifying carrots and estimating object information (mass etc.).
  • the graphics display processing unit 140 projects graphics on the projection plane 200 based on the result of the processing (in the example of 15A, the text “This is carrot 100g” is projected). Thereby, the user can know the estimated value of the mass of the carrot which is a real object, without using a mass measuring device.
  • When the carrot is divided, the object tracking unit 121 detects the occurrence of the division by analyzing the input image, and changes the object ID of each lump after the division to a branch number of the original carrot object ID (in the example of FIG. 15, the carrot object ID is 10, and the object IDs of the lumps after the division are 10-A to 10-C).
  • Then, the object tracking unit 121 estimates the object information (mass etc.) of each lump.
  • the object tracking unit 121 can estimate the object information of each lump after the division by appropriately using the carrot object information before the division.
  • the object tracking unit 121 can estimate the mass of each lump after division using the mass and size of the carrot before division and the size of each lump after division.
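  • A hedged sketch of this division handling follows, assuming the pre-division mass is apportioned in proportion to the estimated volume of each lump; the numbers are illustrative.

```python
# Hedged sketch: after division, each lump's mass is the pre-division mass
# weighted by the lump's share of the total estimated volume.

def split_mass(total_mass_g: float, lump_volumes_cm3: list) -> list:
    total_volume = sum(lump_volumes_cm3)
    return [total_mass_g * v / total_volume for v in lump_volumes_cm3]

# A 100 g carrot cut into lumps whose volumes are in the ratio 5:3:2.
print(split_mass(100.0, [50.0, 30.0, 20.0]))  # -> [50.0, 30.0, 20.0]
```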
  • the processing content is not limited to this.
  • For example, the object tracking unit 121 may carry over various pieces of object information of the carrot before the division, such as the emitted sound, the mass variable flag, and the friction coefficient, into the object information of each lump after the division. As a result, even when cooking progresses and the real object is divided (or integrated), the object tracking unit 121 can effectively use the object information of the real object before the division (or before the integration).
  • the graphics display processing unit 140 projects graphics on the projection plane 200 based on the result of the above processing (in the example of 15B, text indicating the mass of each block is projected).
  • the user can know the estimated value of the object information (mass etc.) of the real object.
  • the graphics display processing unit 140 may control the projection content according to the accuracy of the object information of the real object.
  • the graphics display processing unit 140 may project object information whose accuracy is higher than a predetermined value and object information whose accuracy is a predetermined value or less in different colors.
  • the user can intuitively recognize the accuracy of the estimated value of the object information (mass etc.).
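  • As a minimal sketch of such accuracy-dependent display control (the colors and threshold are illustrative assumptions):

```python
# Hedged sketch: choose a projection color depending on whether the accuracy of
# the object information exceeds a predetermined value.

ACCURACY_DISPLAY_THRESHOLD = 0.5  # illustrative

def label_color(accuracy: float) -> str:
    return "white" if accuracy > ACCURACY_DISPLAY_THRESHOLD else "gray"

print(label_color(0.9), label_color(0.3))  # -> white gray
```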
  • When a butter box is placed on the projection plane 200, the object information estimation unit 122 analyzes the input image and estimates the object information (mass etc.) of the butter box.
  • As shown in 16B, it is assumed that the butter box has moved because the user's hand hit it.
  • If the box moves farther than predicted, the object tracking unit 121 resolves the contradiction by updating the estimated mass of the butter box to a lower value, and the graphics display processing unit 140 projects graphics on the projection plane 200 based on the updated mass (in the example of 16B, the text "I will soon disappear" is projected). Thereby, the user can know the estimated value of the mass without using a measuring device, even for a box whose mass cannot be determined from its texture.
  • deterioration of the material used for cooking may not be properly determined from the texture of the material (texture in the input image in the visible light band).
  • For example, the difference between fresh raw meat 50a and raw meat 50b whose freshness has decreased may not be properly determined from the texture in an input image in the visible light band.
  • the input unit 130 may perform imaging processing through a multispectral filter that transmits only a specific wavelength.
  • In this case, the input unit 130 can generate an input image in a specific wavelength band, so that the object tracking unit 121 and the object information estimation unit 122 can properly distinguish the fresh raw meat 51a from the raw meat 51b whose freshness has decreased, based on the texture in that image.
Next, an example in which the information processing apparatus 100 performs measurement of samples used for a chemical experiment (for example, measurement of mass) will be described. First, the user places a liquid sample 60 and a liquid sample 61 having different densities on the projection plane 200. The object information estimation unit 122 of the information processing apparatus 100 analyzes the input image to identify the liquid sample 60 and the liquid sample 61 and to estimate their masses. Here, the object ID of the liquid sample 60 is 1, and the object ID of the liquid sample 61 is 2. The graphics display processing unit 140 projects graphics on the projection plane 200 based on the estimation result (in the example of 18A, text indicating the masses of the liquid sample 60 and the liquid sample 61 is projected).

By analyzing the input image, the object tracking unit 121 then determines that the volume of the liquid sample 60 has increased, and estimates the added mass based on the density of the liquid sample 60 and the increase in volume (since the density of the liquid sample 60 is 1 g/cm³, the added mass is estimated to be 30 g in the example of 18B). However, by analyzing the input image, the object tracking unit 121 recognizes, from the fact that the container that held the liquid sample 61 is now empty, that the liquid sample 61 has been added to the liquid sample 60 rather than the liquid sample 60 simply having increased on its own. The object tracking unit 121 therefore updates the mass of the integrated liquid sample 62 by estimating the added amount using the density of the liquid sample 61, and also updates the density; a simplified sketch of this update follows. As described above, the information processing apparatus 100 can appropriately estimate the physical quantity of a liquid or a gas, that is, of a tangible object other than a solid.
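A minimal sketch of this mass and density update, assuming the two volumes are simply additive when the samples are mixed (function and variable names are hypothetical):

```python
def integrate_liquids(mass1_g, density1, mass2_g, density2):
    """Combine two liquid samples: mass is additive, and the density of the
    mixture is total mass over total volume (volumes assumed additive)."""
    volume1 = mass1_g / density1            # cm^3
    volume2 = mass2_g / density2            # cm^3
    total_mass = mass1_g + mass2_g
    total_density = total_mass / (volume1 + volume2)
    return total_mass, total_density

# Illustrative figures only: sample 60 (100 g at 1.0 g/cm^3) receives 30 cm^3
# of sample 61 (assumed density 1.2 g/cm^3), forming the integrated sample 62.
mass_62, density_62 = integrate_liquids(100.0, 1.0, 30.0 * 1.2, 1.2)
print(f"sample 62: {mass_62:.1f} g, {density_62:.2f} g/cm^3")   # 136.0 g, 1.05 g/cm^3
```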
Next, an example in which the information processing apparatus 100 assists the user (for example, gives instructions regarding work) will be described. Suppose that the object information estimation unit 122 of the information processing apparatus 100 recognizes that a can 70 is a new real object. In this case, the object information estimation unit 122 functions as an instruction unit that gives instructions to the user and, in order to efficiently acquire physical information of the can 70, causes the graphics display processing unit 140 to project an instruction as illustrated in FIG. 19 (in the example of FIG. 19, the text "Please make three rotations in this direction" and an arrow indicating the rolling direction are projected). When the user rolls the can as instructed, the object information estimation unit 122 can efficiently acquire the texture of the can 70 and estimate physical quantities such as mass with high accuracy.

Similarly, suppose that the object information estimation unit 122 recognizes that a box 80 is a new real object. The object information estimation unit 122 may then cause the graphics display processing unit 140 to project an instruction as illustrated in FIG. 20 in order to efficiently acquire the physical information of the box 80 (in the example of FIG. 20, the text "Please place it with each of the six sides facing up" and areas 81a to 81f, in which the box 80 is to be placed while changing the top face, are projected). As a result, the object information estimation unit 122 can efficiently acquire the texture of the box 80.
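One conceivable way to drive such instructions is to track which views of a new object have already been captured and to keep prompting until coverage is complete. The following sketch is speculative (the face names and message format are illustrative, not from this disclosure):

```python
from typing import Optional

BOX_FACES = {"top", "bottom", "front", "back", "left", "right"}

class TextureCoverage:
    """Track which faces of a new real object have been imaged, and generate
    a user instruction until every face has been captured."""

    def __init__(self) -> None:
        self.captured = set()

    def record(self, face: str) -> None:
        self.captured.add(face)

    def next_instruction(self) -> Optional[str]:
        missing = BOX_FACES - self.captured
        if not missing:
            return None   # texture acquisition is complete
        return f"Please place the object with its {sorted(missing)[0]} face up."

coverage = TextureCoverage()
coverage.record("top")
print(coverage.next_instruction())   # "Please place the object with its back face up."
```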
FIG. 21 is a diagram illustrating a hardware configuration example of the information processing apparatus 100. As shown in FIG. 21, the information processing apparatus 100 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, a host bus 904, a bridge 905, an external bus 906, an interface 907, an input device 908, an output device 909, a storage device (HDD) 910, a drive 911, and a communication device 912.
The CPU 901 functions as an arithmetic processing unit and a control unit, and controls the overall operation of the information processing apparatus 100 according to various programs. The CPU 901 may also be a microprocessor. The ROM 902 stores programs used by the CPU 901, calculation parameters, and the like. The RAM 903 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like. These are connected to each other by the host bus 904, which includes a CPU bus. The functions of the control unit 110, the processing unit 120, and the graphics display processing unit 140 of the information processing apparatus 100 are realized by the cooperation of the CPU 901, the ROM 902, and the RAM 903.
The host bus 904 is connected to the external bus 906, such as a PCI (Peripheral Component Interconnect/Interface) bus, via the bridge 905.
Note that the host bus 904, the bridge 905, and the external bus 906 do not necessarily have to be configured separately; their functions may be implemented on a single bus.
The input device 908 includes input means for the user to input information, such as a mouse, a keyboard, a touch panel, buttons, a microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 901. By operating the input device 908, the user of the information processing apparatus 100 can input various information and instruct processing operations to each device. The function of the input unit 130 is realized by the input device 908.
The output device 909 includes display devices such as a CRT (Cathode Ray Tube) display device, a liquid crystal display (LCD) device, an OLED (Organic Light Emitting Diode) device, and lamps. The output device 909 further includes audio output devices such as speakers and headphones. The output device 909 outputs reproduced content, for example: the display device displays various information such as reproduced video data as text or images, while the audio output device converts reproduced audio data and the like into sound and outputs it. The function of the output unit 150 is realized by the output device 909.
The storage device 910 is a device for storing data. The storage device 910 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like. The storage device 910 is composed of, for example, an HDD (Hard Disk Drive). The storage device 910 drives a hard disk and stores programs executed by the CPU 901 and various data. The functions of the object information primary storage unit 123 and the object information storage unit 124 can be realized by the storage device 910.
The drive 911 is a reader/writer for storage media, and is built into or externally attached to the information processing apparatus 100. The drive 911 reads information recorded on a mounted removable storage medium 913, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs it to the RAM 903. The drive 911 can also write information to the removable storage medium 913. The communication device 912 is, for example, a communication interface configured by a communication device for connecting to a communication network 914.
As described above, the information processing apparatus 100 analyzes the input image, identifies a plurality of real objects in the input image, and detects an event corresponding to a physical action between the plurality of real objects. The information processing apparatus 100 then estimates a physical quantity of at least one real object among the plurality of real objects based on the detected event. As a result, the user can obtain an estimated value of the physical quantity of a real object without using a dedicated measuring instrument or the like.
(1) An information processing apparatus including: an identification unit that identifies a plurality of real objects existing in a real space in an input image; a detection unit that detects an event corresponding to a physical action between the plurality of real objects based on the input image; and an estimation unit that estimates a physical quantity of at least one real object among the plurality of real objects based on the event.
(2) The information processing apparatus according to (1), in which the estimation unit estimates the physical quantity for each event.
(3) The information processing apparatus according to (2), in which the estimation unit estimates a physical quantity at a new event using a physical quantity estimated at a past event.
(4) The information processing apparatus in which the estimation unit estimates the physical quantity based on a difference between the behavior expected in the new event, on the basis of the physical quantity estimated in the past event, and the actual behavior.
(5) The information processing apparatus in which the estimation unit estimates a physical quantity with lower accuracy based on a physical quantity estimation result with higher accuracy.
(6) The information processing apparatus according to any one of (3) to (5), in which the estimation unit estimates a physical quantity of the real object after division or integration based on the physical quantity estimated in a past event.
(7) The information processing apparatus in which the estimation unit estimates the physical quantity based on a size or a texture of the real object.
(8) The information processing apparatus according to (7), in which the estimation unit determines, based on the size or the texture, the identity between a real object whose physical quantity is known and a real object whose physical quantity is to be estimated, and estimates the physical quantity of the real object to be estimated.
(9) The information processing apparatus according to any one of (1) to (8), further including a generation unit that generates a virtual object corresponding to the real object.
(10) The information processing apparatus according to (9), in which the generation unit reflects the physical quantity estimated by the estimation unit in a physical quantity of the virtual object.
(11) The information processing apparatus according to (9) or (10), further including a display control unit that controls the behavior of the virtual object displayed by a predetermined method.
(12) The information processing apparatus according to (11), in which the display control unit controls the behavior of another virtual object based on the behavior of the virtual object.
(13) The information processing apparatus according to (11) or (12), in which the display control unit controls the behavior of the virtual object based on the behavior of the real object.
(14) The information processing apparatus according to any one of (1) to (13), in which the event includes a collision or contact between the plurality of real objects, or the behavior of another real object on a real object.
(15) The information processing apparatus according to any one of (1) to (14), in which the event occurs along with work performed by a user using the real object.
(16) The information processing apparatus according to (15), further including an instruction unit that instructs the user about the work.
(17) The information processing apparatus according to any one of (1) to (16), in which the physical quantity includes mass or a coefficient of friction.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

[Problem] To enable a physical quantity of at least one real object to be more appropriately estimated on the basis of the behaviors of a plurality of real objects. [Solution] Provided is an information processing device comprising: an identification unit for identifying, in an inputted image, a plurality of real objects present in a real space; a detection unit for detecting, on the basis of the inputted image, an event corresponding to physical interaction among the plurality of real objects; and an estimation unit for estimating, on the basis of the event, a physical quantity of at least one of the plurality of real objects.

Description

Information processing apparatus, information processing method, and program
This disclosure relates to an information processing apparatus, an information processing method, and a program.
In recent years, techniques that provide various functions by analyzing captured images and recognizing real objects (objects existing in a real space) appearing as subjects have been actively developed. For example, Patent Document 1 below discloses a technique for recognizing a real object and a work action performed on the real object by analyzing a captured image, and managing them as history information.
JP 2013-114315 A
However, with techniques such as that of Patent Document 1, the physical quantity of at least one real object could not be estimated appropriately based on the behavior of a plurality of real objects. For example, with such techniques, the mass or the like of at least one real object could not be estimated appropriately based on the behavior of a plurality of real objects when they collide with each other.
Therefore, the present disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of more appropriately estimating the physical quantity of at least one real object based on the behavior of a plurality of real objects.
According to the present disclosure, there is provided an information processing apparatus including: an identification unit that identifies, in an input image, a plurality of real objects existing in a real space; a detection unit that detects, based on the input image, an event corresponding to a physical action between the plurality of real objects; and an estimation unit that estimates, based on the event, a physical quantity of at least one real object among the plurality of real objects.
Further, according to the present disclosure, there is provided a computer-executed information processing method including: identifying, in an input image, a plurality of real objects existing in a real space; detecting, based on the input image, an event corresponding to a physical action between the plurality of real objects; and estimating, based on the event, a physical quantity of at least one real object among the plurality of real objects.
Further, according to the present disclosure, there is provided a program for causing a computer to realize: identifying, in an input image, a plurality of real objects existing in a real space; detecting, based on the input image, an event corresponding to a physical action between the plurality of real objects; and estimating, based on the event, a physical quantity of at least one real object among the plurality of real objects.
As described above, according to the present disclosure, it becomes possible to more appropriately estimate the physical quantity of at least one real object based on the behavior of a plurality of real objects.
Note that the above effects are not necessarily limiting; together with or in place of the above effects, any of the effects described in this specification, or other effects that can be understood from this specification, may be achieved.
A diagram illustrating a configuration example of the information processing system according to the present embodiment.
A diagram illustrating variations of the configuration example of the information processing system according to the present embodiment.
A diagram for describing an overview of the physical quantity estimation processing by the information processing apparatus 100.
A diagram for describing an overview of the physical quantity estimation processing by the information processing apparatus 100.
A diagram for describing an overview of the physical quantity estimation processing by the information processing apparatus 100.
A diagram for describing an overview of the physical quantity estimation processing by the information processing apparatus 100.
A diagram for describing an overview of the physical quantity estimation processing by the information processing apparatus 100.
A diagram for describing an overview of the physical quantity estimation processing by the information processing apparatus 100.
A diagram for describing an overview of the physical quantity estimation processing by the information processing apparatus 100.
A block diagram illustrating a functional configuration example of the information processing apparatus 100.
A diagram illustrating an example of the contents stored in the object information primary storage unit 123.
A diagram illustrating an example of the contents stored in the object information storage unit 124.
A flowchart illustrating an example of the flow of processing from input to graphics display.
A flowchart illustrating an example of the flow of the object tracking processing.
A flowchart illustrating an example of the flow of the object tracking processing.
A flowchart illustrating an example of the flow of the object information estimation processing.
A flowchart illustrating an example of the flow of the object information estimation processing.
A diagram for describing an example in which real object simulation is performed using the information processing apparatus 100.
A diagram for describing an example in which real object simulation is performed using the information processing apparatus 100.
A diagram for describing an example in which real object simulation is performed using the information processing apparatus 100.
A diagram for describing an example in which real object simulation is performed using the information processing apparatus 100.
A diagram for describing an example in which real object simulation is performed using the information processing apparatus 100.
A diagram for describing an example in which cooking assistance is performed using the information processing apparatus 100.
A diagram for describing an example in which cooking assistance is performed using the information processing apparatus 100.
A diagram for describing an example in which cooking assistance is performed using the information processing apparatus 100.
A diagram for describing an example in which a chemical experiment is performed using the information processing apparatus 100.
A diagram for describing an example in which user assistance is performed when the information processing apparatus 100 is used.
A diagram for describing an example in which user assistance is performed when the information processing apparatus 100 is used.
A block diagram illustrating a hardware configuration example of the information processing apparatus 100.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description thereof is omitted.
The description will be given in the following order.
1. Embodiment
 1.1. Overview
 1.2. Overview of physical quantity estimation processing
 1.3. Functional configuration example
 1.4. Flow of processing
2. Examples
 2.1. Real object simulation
 2.2. Cooking assistance
 2.3. Chemical experiment
 2.4. User assistance
3. Hardware configuration example
4. Summary
<1. Embodiment>
(1.1. Overview)
First, an overview of an embodiment of the present disclosure will be described.
As shown in FIG. 1, the information processing system according to the present embodiment includes an information processing apparatus 100 and a projection plane 200 onto which projection is performed by the information processing apparatus 100.
The information processing apparatus 100 is an apparatus having a function of estimating the physical quantities of real objects existing in a real space. More specifically, as shown in FIG. 1, the information processing apparatus 100 is installed above the projection plane 200 and has an imaging function for capturing the entire projection plane 200. The information processing apparatus 100 then identifies a plurality of real objects in the captured input image by analyzing it.
The information processing apparatus 100 also detects an event corresponding to a physical action between the plurality of real objects by analyzing the input image. Furthermore, the information processing apparatus 100 can estimate the physical quantity of at least one real object among the plurality of real objects based on the detected event. As a result, the user can obtain an estimated value of the physical quantity of a real object without using a dedicated measuring instrument or the like.
Note that the above real object may be any tangible object existing in the real space. For example, the real object may be a solid, which has a fixed shape, or a liquid or a gas, which does not.
The above event may also be anything as long as it corresponds to a physical action between real objects. For example, the events include those in which some force acts on a real object, such as a collision or contact between real objects. The events may also include those related to the behavior of another real object on the real object having the projection plane 200.
The content of the physical quantity to be estimated is not particularly limited either. For example, the physical quantities to be estimated may include mass, density, friction coefficient, velocity, acceleration, temperature, current, voltage, and the like. Note that the physical quantities to be estimated are not limited to these.
The information processing apparatus 100 can also generate a virtual object corresponding to a real object whose physical quantity is to be estimated. The information processing apparatus 100 can then reflect the estimated physical quantity of the real object in the physical quantity of the virtual object corresponding to that real object. For example, when the information processing apparatus 100 estimates the mass of an apple that is a real object, the information processing apparatus 100 can generate a virtual object of the apple and reflect the estimated mass of the apple in the mass of the apple that is the virtual object.
The information processing apparatus 100 can also project an arbitrary image on the projection plane 200. For example, the information processing apparatus 100 can project the generated virtual object on the projection plane 200. At this time, by reflecting the estimated physical quantity of the real object in the physical quantity of the virtual object, the information processing apparatus 100 can bring the behavior of the virtual object close to the behavior of the real object.
This allows the information processing apparatus 100 to project images in which virtual objects interact with one another in a virtual space. For example, the information processing apparatus 100 can project an image in which a plurality of apples that are virtual objects collide with one another in the virtual space.
The information processing apparatus 100 may also project images in which a real object and a virtual object interact with each other. For example, when dominoes that are real objects and dominoes that are virtual objects are lined up in sequence and the user topples the real dominoes, the information processing apparatus 100 may topple the virtual dominoes in the virtual space after the real dominoes have fallen. This allows the information processing apparatus 100 to give the user the impression that the boundary between the real space and the virtual space has become blurred.
Note that the functions of the information processing apparatus 100 described above can be changed as appropriate. For example, the information processing apparatus 100 may include various sensors and realize the identification of real objects and the estimation of their physical quantities based on sensor information other than the input image. The functions of the information processing apparatus 100 described above may also be provided in different apparatuses. Details of the physical quantity estimation processing for real objects by the information processing apparatus 100, the projection processing of generated virtual objects, and the like will be described later.
The projection plane 200 may be any surface onto which the information processing apparatus 100 can project images. For example, the projection plane 200 may be an uneven surface, a curved surface, a spherical surface, or the like. The material of the projection plane 200 is not particularly limited either. For example, the material of the projection plane 200 may be wood, rubber, metal, plastic, or the like. Since the projection plane 200 may thus be any surface made of any material, the user can use this information processing system as long as the information processing apparatus 100 can simply be installed above an arbitrary surface.
Note that the form of the information processing system according to the present embodiment is not limited to that of FIG. 1. For example, as shown in 2A of FIG. 2, an information processing apparatus 100a may be installed below a projection plane 200a, which is a transmissive screen, so that images are projected from below the projection plane 200a. In this case, the information processing apparatus 100a captures, from below the projection plane 200a, real objects positioned above the projection plane 200a (they do not necessarily have to be on the projection plane 200a), and realizes the identification of the real objects and the estimation of their physical quantities by analyzing the captured images.
Also, for example, as shown in 2B of FIG. 2, an information processing apparatus 100b may include a touch panel with sensors. More specifically, the information processing apparatus 100b includes a plurality of image sensors on the surface of the touch panel, captures real objects positioned above the touch panel (they do not necessarily have to be on the touch panel), and realizes the identification of the real objects and the estimation of their physical quantities by analyzing the captured images. In this case, since images are displayed by the information processing apparatus 100b itself, there is no projection plane 200. Note that the forms of 2A and 2B in FIG. 2 can also be flexibly modified according to specifications and operation.
(1.2. Overview of physical quantity estimation processing)
The overview of the embodiment of the present disclosure has been described above. Next, an overview of the physical quantity estimation processing by the information processing apparatus 100 will be described with reference to FIGS. 3A to 3G.
First, as a premise, the information processing apparatus 100 holds the information shown in FIG. 3A. For example, the information processing apparatus 100 holds, for the block (real object) shown in 3A-1, its size, texture, emitted sound (indicated as "sound" in the drawing), mass, information as to whether the mass is variable (hereinafter referred to as the "variable mass flag"), and information about its material. More specifically, the block is a plastic real object that is 1 cm long, 3 cm wide, and 1 cm high, has a mass of 9 g, and has a non-variable mass. As for the information on the sound emitted by the block, sound data on the sound emitted when the block collides with another real object is registered. Note that the information shown in 3A-1 is accumulated in the information processing apparatus 100 as known object information. Details will be described later.
In addition, as indicated by 3A-2, the information processing apparatus 100 uses a reference density of 1 g/cm³. Here, the reference density is information used when the information processing apparatus 100 estimates the mass of a real object of unknown mass identified in the input image. More specifically, the information processing apparatus 100 derives the volume of the real object by analyzing the input image, and can estimate the value obtained by multiplying that volume by the reference density as the mass of the real object.
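A minimal sketch of this first-pass estimate, using a bounding-box volume as a stand-in for the volume derived from image analysis (the function and parameter names are hypothetical):

```python
REFERENCE_DENSITY_G_PER_CM3 = 1.0   # provisional density for objects of unknown mass

def estimate_mass_from_size(width_cm, depth_cm, height_cm,
                            density=REFERENCE_DENSITY_G_PER_CM3):
    """First-pass mass estimate: bounding-box volume times the reference density.
    The value is refined later, when events such as collisions reveal contradictions."""
    volume = width_cm * depth_cm * height_cm
    return volume * density

# The apple of FIG. 3C: a 3 cm x 3 cm x 3 cm bounding box gives 27 g at 1 g/cm^3.
print(estimate_mass_from_size(3.0, 3.0, 3.0))   # -> 27.0
```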
Also, as indicated by 3A-3, the information processing apparatus 100 uses a wood surface as the reference projection plane. Here, the reference projection plane is information on the material and the like of the projection plane 200 that is provisionally registered when the information processing apparatus 100 cannot identify the material and the like of the projection plane 200 by analyzing the input image or by other means.
Also, as indicated by 3A-4 and 3A-5, the information processing apparatus 100 sets the friction coefficients of plastic against a wood surface and against a rubber surface to 0.4 and 0.8, respectively.
Now, suppose that, as indicated by 3B-1 in FIG. 3B, the user places an apple 10, which is a real object, on the projection plane 200. Then, consider a case where the information processing apparatus 100 measures or estimates object information including the size, texture, emitted sound, mass, variable mass flag, friction coefficient, and accuracy shown in 3B-2. Here, the accuracy is information used as an index of how reliable the object information is; the higher the accuracy, the more reliable the object information. Note that the above object information is merely an example, and the content of the object information is not particularly limited.
Note that at the time when the apple 10 is placed (the point in time of FIG. 3B), the information processing apparatus 100 has not yet grasped any object information about the apple 10, so each piece of information in 3B-2 is a null value. In this example, it is also assumed that the information processing apparatus 100 cannot identify the material of the projection plane 200 by analyzing the input image, and the projection plane 200 is therefore provisionally estimated to be a wood surface using the information on the reference projection plane. The information shown in 3B-2 is accumulated in the information processing apparatus 100 as unconfirmed object information. Details will be described later.
The information processing apparatus 100 identifies the apple 10 by analyzing the input image and, as shown in 3C-2 of FIG. 3C, outputs the size of the apple 10 as 3 cm long, 3 cm wide, and 3 cm high based on the analysis of the input image. The information processing apparatus 100 then estimates the mass of the apple 10 to be 27 g by multiplying the volume obtained from the size information (the product of the length, width, and height) by the reference density. In addition, the information processing apparatus 100 registers true (as a provisional value) for the variable mass flag and "unknown" for the friction coefficient, whose values are unknown, and registers 0.0 for the accuracy.
Note that the above information on size is a simplified value and can be changed as appropriate. For example, the information processing apparatus 100 can recognize the detailed shape of a real object such as the apple 10, and can appropriately change the content of the size information according to the shape of the real object. More specifically, when the real object is a thread, the information processing apparatus 100 may hold information on the length of the thread as the size information. The information processing apparatus 100 may also hold information on the volume of the real object instead of (or in addition to) the size information.
Subsequently, suppose that, as indicated by 3D-1 in FIG. 3D, the user places a block 20, which is a real object, next to the apple 10 on the projection plane 200. In this case, the information processing apparatus 100 identifies the block 20 by analyzing the captured input image and, based on its feature amount, determines that the block 20 is substantially the same as (or similar to) the known block indicated by 3A-1 in FIG. 3A. Then, as indicated by 3D-1, the information processing apparatus 100 reflects the information of 3A-1 in the object information about the block 20. Here, since the material of the known block indicated by 3A-1 is plastic, 0.4, the friction coefficient against a wood surface indicated by 3A-4, is registered as the friction coefficient of the block 20 in 3D-1, and since the object information is known, 1.0 is registered as the accuracy.
Subsequently, suppose that, as indicated by 3E-1 in FIG. 3E, the user causes the block 20 on the projection plane 200 to collide with the apple 10 in (1), so that the position of the apple 10 moves in (2). In this case, the information processing apparatus 100 predicts the behavior of the apple 10 and the block 20 (for example, the moving distance due to the collision) based on the various pieces of object information about the apple 10 and the block 20 registered in 3D-2 above. The method of predicting the behavior of the apple 10 and the block 20 is not particularly limited; for example, a known technique such as physical simulation can be used.
Here, suppose that the actual moving distance of the block 20 was shorter than the moving distance predicted by the information processing apparatus 100. In this case, since object information such as the size and mass of the block 20 should be accurate, the information processing apparatus 100 determines that the assumed material of the projection plane 200 is wrong, and estimates from the moving distance of the block 20 that the projection plane 200 is a rubber surface. Then, as indicated by 3E-2, the information processing apparatus 100 registers 0.8, the friction coefficient of plastic against a rubber surface indicated by 3A-5, as the friction coefficient of the block 20.
Next, the information processing apparatus 100 calculates the energy that acted on the block 20 in the case where the projection plane 200 is a rubber surface. If the analysis of the input image shows that the same energy acted on the block 20 and on the apple 10, the information processing apparatus 100 outputs the behavior of the apple 10 on the rubber surface (for example, the moving distance due to the collision) in the case where the energy that acted on the block 20 acted on the apple 10. Here, suppose that the actual moving distance of the apple 10 was shorter than the moving distance predicted by the information processing apparatus 100. In this case, the information processing apparatus 100 determines that the mass of the apple 10 is wrong, and estimates and updates the mass of the apple 10 based on the actual moving distance, as indicated by 3F-2 in FIG. 3F (in the example of 3F-2, the mass of the apple 10 is estimated to be 108 g). Note that the information processing apparatus 100 may instead determine that other object information, such as the reference density or the friction coefficient, is wrong and estimate those pieces of information; a simplified numerical sketch of this contradiction resolution follows.
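As a simplified numerical illustration, assume textbook sliding friction, so that the energy E transferred at the collision is dissipated over the slide distance d as E = μmgd; inverting this relation for m gives the updated mass. The energy value below is an assumed figure, not a number from this disclosure:

```python
G = 9.8   # gravitational acceleration, m/s^2

def predicted_slide_distance(energy_j, mass_kg, mu):
    """Distance an object slides before friction dissipates the given energy:
    mu * m * g * d = E, hence d = E / (mu * m * g)."""
    return energy_j / (mu * mass_kg * G)

def mass_resolving_contradiction(energy_j, mu, observed_d_m):
    """Invert the same relation: the mass for which the predicted distance
    equals the observed one, which eliminates the contradiction."""
    return energy_j / (mu * observed_d_m * G)

energy = 0.042   # J, assumed known from the block's estimated mass and speed
mu = 0.8         # friction coefficient of the apple on the rubber surface (assumed)
predicted = predicted_slide_distance(energy, 0.027, mu)   # with the 27 g estimate
observed = predicted / 4                                  # the apple slid only 1/4 as far
updated = mass_resolving_contradiction(energy, mu, observed)
print(f"updated mass: {updated * 1000:.0f} g")            # -> updated mass: 108 g
```

With these assumed numbers the update reproduces the 27 g to 108 g revision of the example; a real implementation would come from a physics simulation, as the text notes.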
The information processing apparatus 100 performs the estimation processing of object information (including physical quantities such as mass) every time an event like the above (in the above example, a collision) occurs. Then, as described above, the information processing apparatus 100 can improve the estimation accuracy of the object information by using the estimation results from past events in the estimation processing for a new event (in this example, as shown in FIG. 3G, the information processing apparatus 100 finally estimates the mass of the apple 10 to be 106.5 g).
In the above, an example has been described in which the physical quantity of the apple 10 is estimated through a collision with the known block 20, but the present technique is not limited to this. For example, without using the block 20, the information processing apparatus 100 may estimate the physical quantity of the apple 10 based on its behavior when the apple 10 is rolled on the projection plane 200 or dropped onto the projection plane 200. Also, in the above, the behavior of the real objects used for the physical quantity estimation processing was mainly the moving distance, but the behavior is not limited to this. More specifically, the information processing apparatus 100 may predict various behaviors of a real object, such as how it rolls, bounces, or collapses, and estimate the physical quantity of the real object by detecting differences from the actual behavior. In this case, the behavior is predicted by, for example, inputting the shape or properties of the real object (for example, how easily it rolls, bounces, or collapses) into a physical simulation program.
(1.3. Functional configuration example)
The overview of the physical quantity estimation processing by the information processing apparatus 100 has been described above. Next, a functional configuration example of the information processing apparatus 100 will be described with reference to FIG. 4.
As shown in FIG. 4, the information processing apparatus 100 includes a control unit 110, a processing unit 120, an input unit 130, a graphics display processing unit 140, and an output unit 150.
(Control unit 110)
The control unit 110 has a functional configuration that comprehensively controls the overall processing performed by the information processing apparatus 100. For example, based on input from the input unit 130, the control unit 110 can control the activation and stopping of each functional configuration, including the processing unit 120, and can control the output unit 150 such as a display or a speaker. Note that the control content of the control unit 110 is not limited to these. For example, the control unit 110 may realize processing generally performed in a general-purpose computer, a PC (Personal Computer), a tablet PC, a smartphone, or the like (for example, processing of an OS (Operating System)).
(Processing unit 120)
The processing unit 120 has a functional configuration that estimates the physical quantities of real objects and manages the object information of real objects. As shown in FIG. 4, the processing unit 120 includes an object tracking unit 121, an object information estimation unit 122, an object information primary storage unit 123, and an object information storage unit 124.
(Object tracking unit 121)
The object tracking unit 121 has a functional configuration that performs object tracking processing for tracking real objects. More specifically, the object tracking unit 121 functions as an identification unit that identifies a real object in the input image by comparing the feature amount of the real object in the input image with the feature amounts of the real objects registered in the object information primary storage unit 123 or the object information storage unit 124.
The object tracking unit 121 also functions as a detection unit that detects events between a plurality of real objects by analyzing the input image. More specifically, the object tracking unit 121 monitors whether real objects are stationary or moving by analyzing the input image, and detects an event when, for example, a real object has moved. When an event occurs, the object tracking unit 121 predicts the behavior of the real objects in the event and detects any difference from the actual behavior. Here, the occurrence of a difference between the predicted behavior and the actual behavior is also referred to as "a contradiction arising", and the processing for detecting the difference is also referred to as "contradiction detection processing".
Furthermore, the object tracking unit 121 also functions as an estimation unit that estimates the physical quantities of real objects. More specifically, when a contradiction arises due to an event as described above, the object tracking unit 121 calculates the value of the physical quantity that eliminates the difference between the predicted behavior and the actual behavior (or reduces the difference to a predetermined value or less). Here, eliminating the difference is also referred to as "contradiction resolution processing". Details of the processing by the object tracking unit 121 will be described later.
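A schematic sketch of this detect-and-resolve loop, under the same sliding-friction assumption as above (the tolerance value is an illustrative assumption):

```python
TOLERANCE_M = 0.05   # illustrative threshold for "prediction matches reality"

def detect_contradiction(predicted_d_m, observed_d_m, tolerance=TOLERANCE_M):
    """Contradiction detection processing: a contradiction arises when the
    behavior predicted from the stored object information deviates from the
    behavior actually observed in the input images."""
    return abs(predicted_d_m - observed_d_m) > tolerance

def resolve_by_scaling_mass(mass_g, predicted_d_m, observed_d_m):
    """Contradiction resolution processing: with slide distance inversely
    proportional to mass (as in the friction model above), choose the mass
    that makes the predicted and observed distances coincide."""
    return mass_g * predicted_d_m / observed_d_m

if detect_contradiction(0.20, 0.05):
    print(resolve_by_scaling_mass(27.0, 0.20, 0.05))   # -> 108.0
```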
(Object information estimation unit 122)
The object information estimation unit 122 has a functional configuration that performs object information estimation processing for estimating the object information of real objects. More specifically, the object information estimation unit 122 functions as an identification unit that identifies a real object in the input image in the same manner as the object tracking unit 121.
Furthermore, the object information estimation unit 122 also functions as an estimation unit that estimates the physical quantities of real objects. More specifically, based on the feature amount (for example, size or texture) of a target real object, the object information estimation unit 122 determines the identity between the target real object and a real object registered in the object information storage unit 124. Then, when the identity of these real objects is confirmed, the object information estimation unit 122 updates the physical quantity of the target real object using the physical quantity of the real object registered in the object information storage unit 124.
Here, for example, in a group of products whose package designs differ from one another because the properties of their contents (taste, color, and the like) differ, there is almost no difference in physical quantities such as mass. Therefore, if the difference between the feature amount of the target real object and the feature amount of a real object registered in the object information storage unit 124 is equal to or less than a predetermined value, the object information estimation unit 122 may treat these real objects as substantially the same object.
More specifically, if the difference between the values indicating size or the values indicating texture is equal to or less than a predetermined value, the object information estimation unit 122 may treat the target real object and the real object registered in the object information storage unit 124 as substantially the same object. This saves the trouble of registering every real object that differs only slightly in size, texture, and the like. Details of the processing by the object information estimation unit 122 will be described later.
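A minimal sketch of this similarity test and inheritance, using the Euclidean distance between feature vectors as a stand-in for whatever feature comparison the apparatus actually performs (the threshold and feature layout are assumptions):

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.1   # illustrative "predetermined value"

def is_substantially_same(features_a, features_b):
    """Treat two objects as substantially the same when their feature vectors
    (size, texture descriptors, ...) differ by no more than a threshold."""
    return float(np.linalg.norm(features_a - features_b)) <= SIMILARITY_THRESHOLD

def inherit_known_quantities(target, known):
    """Copy the registered physical quantities onto the newly observed object."""
    for key in ("mass", "friction", "mass_variable"):
        target[key] = known[key]

new_obj = {"features": np.array([3.0, 3.00, 0.42]), "mass": None, "friction": None, "mass_variable": None}
known_obj = {"features": np.array([3.0, 2.95, 0.45]), "mass": 9.0, "friction": 0.4, "mass_variable": False}
if is_substantially_same(new_obj["features"], known_obj["features"]):
    inherit_known_quantities(new_obj, known_obj)
print(new_obj["mass"])   # -> 9.0
```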
(Object information primary storage unit 123)
The object information primary storage unit 123 has a functional configuration that stores object information about unconfirmed real objects. More specifically, as shown in FIG. 5, the object information primary storage unit 123 stores, for each unconfirmed real object, object information including an object ID, size, texture, emitted sound, mass, variable mass flag, friction coefficient, accuracy, and the like. The object ID is identification information assigned to a real object identified by the information processing apparatus 100. The contents of the other pieces of object information are as described above.
As the physical quantity estimation processing based on collisions between real objects and the like described above is repeated, the accuracy of the object information in the object information primary storage unit 123 increases. Then, for a real object whose accuracy has reached or exceeded a predetermined value, the object information is transferred from the object information primary storage unit 123 to the object information storage unit 124. Note that the object information stored in the object information primary storage unit 123 is not limited to the example of FIG. 5.
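This promotion from the primary store to the accumulation store can be pictured as follows (the threshold and data layout are illustrative assumptions, not the disclosed implementation):

```python
PROMOTION_THRESHOLD = 0.9   # illustrative "predetermined value"

# Unconfirmed objects (object information primary storage unit 123).
primary_storage = {
    "1": {"texture": "apple", "mass_g": 106.5, "accuracy": 0.95},
    "2": {"texture": "box",   "mass_g": 27.0,  "accuracy": 0.30},
}
# Known objects (object information storage unit 124).
accumulated_storage = {}

def promote_confirmed(primary, accumulated, threshold):
    """Move object information whose accuracy has reached the threshold
    from the primary store to the accumulation store."""
    ready = [oid for oid, info in primary.items() if info["accuracy"] >= threshold]
    for object_id in ready:
        accumulated[object_id] = primary.pop(object_id)

promote_confirmed(primary_storage, accumulated_storage, PROMOTION_THRESHOLD)
print(sorted(accumulated_storage))   # -> ['1']; object 2 stays unconfirmed
```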
(Object information storage unit 124)
The object information storage unit 124 has a functional configuration that stores object information about known real objects (or real objects whose accuracy is equal to or greater than a predetermined value). More specifically, as shown in FIG. 6, the object information storage unit 124 stores, for each known real object, object information including size, texture, emitted sound, mass, variable mass flag, material, and the like.
These pieces of object information may be registered through the physical quantity estimation processing by the information processing apparatus 100, may be input by the user, or may be acquired from an arbitrary network such as the Internet. These pieces of object information may also be registered by an image-recognition-based object recognition engine, which makes it possible to recognize more types of real objects. In addition, the dictionary data of the object recognition engine may be updated by inputting highly accurate object information into the object recognition engine. Note that the object information stored in the object information storage unit 124 is not limited to the example of FIG. 6.
 (Input unit 130)
 The input unit 130 is a functional configuration that receives input from a user or the like. For example, the input unit 130 includes an image sensor and can generate an input image by photographing the entire projection surface 200. The input unit 130 may be able to generate an input image in the visible light band, or may be able to generate an input image in a specific wavelength band (for example, the infrared band) via a multispectral filter that transmits only specific wavelengths. The input unit 130 may also be able to generate an input image from which specular reflection has been removed via a polarization filter.
 Furthermore, the input unit 130 may include a depth sensor and thereby be able to generate 3D information. The type of depth sensor and the sensing method are not particularly limited; for example, a stereo camera may be used, or a TOF (Time of Flight) method, a structured light method, or the like may be used.
 The input unit 130 may also include a touch sensor capable of detecting an operation in which the user touches the projection surface 200. This allows the user to make a desired input by touching the projection surface 200. The type of touch sensor and the sensing method are likewise not particularly limited. For example, a touch may be detected by providing the projection surface 200 with a touch panel, or a touch may be detected by analyzing the input image generated by the image sensor described above.
 The sensors provided in the input unit 130 are not limited to these. For example, the input unit 130 may include an arbitrary sensor such as a sound sensor, a temperature sensor, an illuminance sensor, a position sensor (for example, a GNSS (Global Navigation Satellite System) sensor), or an atmospheric pressure sensor. The input unit 130 provides the input information to the control unit 110, the processing unit 120, and the graphics display processing unit 140.
 (Graphics display processing unit 140)
 The graphics display processing unit 140 is a functional configuration that performs processing related to graphics display. More specifically, the graphics display processing unit 140 outputs the graphics to be projected onto the projection surface 200 by feeding the object information of the real objects provided by the processing unit 120 and the input provided by the input unit 130 into arbitrary software (for example, a graphics application). The graphics display processing unit 140 provides information about the output graphics to the output unit 150.
 The graphics display processing unit 140 also functions as a generation unit that generates a virtual object corresponding to a real object whose physical quantity is to be estimated. The graphics display processing unit 140 can then reflect the estimated physical quantity of the real object in the physical quantity of the virtual object corresponding to that real object.
 Furthermore, the graphics display processing unit 140 can also function as a display control unit that controls the behavior of the generated virtual objects. For example, the graphics display processing unit 140 can control the behavior of one virtual object based on the behavior of another virtual object, and can also control the behavior of a virtual object based on the behavior of a real object. In this way, the graphics display processing unit 140 can give the user the impression that the boundary between the real space and the virtual space has become blurred. Specific examples of display control by the graphics display processing unit 140 will be described later.
 (Output unit 150)
 The output unit 150 is a functional configuration that outputs various types of information. For example, the output unit 150 includes projection means such as a projector and can project the graphics output by the graphics display processing unit 140 onto the projection surface 200. The output unit 150 may also include display means such as various displays, or audio output means such as a speaker or an amplifier. The output means are not limited to the above.
 The functional configuration example of the information processing apparatus 100 has been described above. The functional configuration described with reference to FIG. 4 is merely an example, and the functional configuration of the information processing apparatus 100 is not limited to this example. For instance, the information processing apparatus 100 does not necessarily include all of the components illustrated in FIG. 4. Furthermore, the functional configuration of the information processing apparatus 100 can be flexibly modified according to specifications and operation.
 (1.4. Flow of processing)
 The functional configuration example of the information processing apparatus 100 has been described above. Next, the flow of processing by the information processing apparatus 100 will be described.
 (Flow of processing from input to graphics display)
 First, the flow of processing from input to graphics display will be described with reference to FIG. 7.
 In step S1000, the input unit 130 acquires information about the real objects, including the input image, and the processing unit 120 receives this information from the input unit 130. As described above, the information acquired by the input unit 130 may include various kinds of sensor information in addition to the input image.
 In step S1004, the object tracking unit 121 performs object tracking processing, which tracks the real objects using the information received from the input unit 130; its details will be described later. In step S1008, the object information estimation unit 122 performs object information estimation processing, which estimates the object information of the real objects; its details will also be described later.
 In step S1012, the graphics display processing unit 140 performs graphics display processing as appropriate according to the results of the preceding processing. For example, the graphics display processing unit 140 generates virtual objects corresponding to the real objects and projects video in which those virtual objects interact with one another, or in which the real objects and the virtual objects interact.
 The above processing is repeated until the user performs an end operation in step S1016; when the end operation is performed (step S1016/Yes), the series of processing ends.
 (Flow of object tracking processing)
 Next, details of the object tracking processing performed in step S1004 of FIG. 7 will be described with reference to FIGS. 8A and 8B.
 The object tracking unit 121 repeats the processing of steps S1100 to S1128 for each real object included in the input image. In step S1100, the object tracking unit 121 compares the feature quantity of the real object in the input image with the feature quantities of the real objects registered in the object information primary storage unit 123, thereby checking whether the real object included in the input image exists in the object information primary storage unit 123. If the real object included in the input image exists in the object information primary storage unit 123 (step S1100/Yes), in step S1104 the object tracking unit 121 analyzes the input image to determine whether the real object was stationary in the previous frame.
 If in step S1100 the real object included in the input image does not exist in the object information primary storage unit 123 (step S1100/No), the object tracking unit 121 performs the processing of steps S1100 to S1128 for the other real objects included in the input image.
 If in step S1104 the object tracking unit 121 determines that the real object was not stationary in the previous frame (in other words, the real object was moving in the previous frame) (step S1104/No), in step S1108 the object tracking unit 121 determines whether the real object has come to rest in the current frame. If the object tracking unit 121 determines that the real object has come to rest in the current frame (step S1108/Yes), in step S1112 the object tracking unit 121 acquires the coordinate changes while the real object was moving and the sound information emitted by the real object.
 If in step S1104 the real object was stationary in the previous frame (in other words, the real object has remained stationary) (step S1104/Yes), the object tracking unit 121 performs the processing of steps S1100 to S1128 for the other real objects included in the input image. Likewise, if in step S1108 the object tracking unit 121 determines that the real object is not stationary in the current frame (in other words, the real object continues to move) (step S1108/No), the object tracking unit 121 performs the processing of steps S1100 to S1128 for the other real objects included in the input image.
 In step S1116, the object tracking unit 121 determines whether a contradiction has arisen in the coordinate changes of the real object that was moving. In other words, the object tracking unit 121 predicts the behavior of the real object and determines whether a difference from the actual behavior has arisen (or whether a difference equal to or greater than a predetermined value has arisen).
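 As a concrete illustration of such a contradiction check, the following Python sketch compares the sliding distance predicted from the current friction-coefficient estimate with the observed coordinate change, assuming a simple uniform-deceleration friction model; the function names and the 20% relative tolerance are illustrative assumptions, not values fixed by the present disclosure.

G = 9.81  # gravitational acceleration [m/s^2]

def predicted_slide_distance(v0, mu):
    """Distance an object released at speed v0 [m/s] should slide before
    stopping under kinetic friction mu: from v0^2 = 2 * (mu * G) * d."""
    if mu <= 0.0:
        return float("inf")
    return v0 ** 2 / (2.0 * mu * G)

def is_contradictory(v0, mu, observed_distance, tolerance=0.2):
    """True if the observed distance deviates from the prediction by more
    than the relative tolerance (the 'predetermined value' in the text)."""
    predicted = predicted_slide_distance(v0, mu)
    if predicted == float("inf"):
        return False
    return abs(observed_distance - predicted) > tolerance * predicted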
 If the object tracking unit 121 determines that no contradiction has arisen in the coordinate changes of the real object (step S1116/Yes), the object tracking unit 121 performs the processing of steps S1100 to S1128 for the other real objects included in the input image.
 If the object tracking unit 121 determines that a contradiction has arisen in the coordinate changes of the real object (step S1116/No), in step S1120 the object tracking unit 121 determines whether the variable-mass flag of the real object in the object information primary storage unit 123 is false (in other words, whether the mass is invariable). If the variable-mass flag of the real object is false in the object information primary storage unit 123 (step S1120/Yes), in step S1124 the object tracking unit 121 increments the confirmed contradiction count by one. Here, the confirmed contradiction count is a value used in subsequent processing to determine which estimated values should be updated; details will be described later.
 If in step S1120 the variable-mass flag of the real object is true in the object information primary storage unit 123 (step S1120/No), in step S1128 the object tracking unit 121 adds the real object to the contradiction table. Here, the contradiction table is a table in which the real objects whose object information is to be updated are registered.
 After repeating the processing of steps S1100 to S1128 for each real object included in the input image, the object tracking unit 121 determines in step S1132 whether the confirmed contradiction count is zero. If the confirmed contradiction count is zero (step S1132/Yes), the series of processing ends. If the confirmed contradiction count is not zero (step S1132/No), in step S1136 the object tracking unit 121 determines whether the confirmed contradiction count is equal to or greater than a predetermined threshold.
 If the confirmed contradiction count is below the predetermined threshold (step S1136/No), more real objects than the predetermined number show no contradiction in their coordinate changes, so the estimated physical quantities of the individual real objects can be said to be more suspect than the estimated values for the projection surface 200 (for example, the material of the projection surface 200).
 Accordingly, the object tracking unit 121 attempts to update the object information of each real object by repeating the processing of steps S1140 to S1148 for each real object registered in the contradiction table. More specifically, in step S1140, the object tracking unit 121 lowers the accuracy of the real object registered in the contradiction table to a fixed value.
 Here, if the real object moved due to a collision with another real object or the like, in step S1144 the object tracking unit 121 determines whether the accuracy of the other, colliding real object is 1.0 (or whether the accuracy of the other real object is higher than a predetermined value).
 If the accuracy of the other, colliding real object is 1.0 (or higher than the predetermined value) (step S1144/Yes), in step S1148 the object tracking unit 121 estimates the mass of the real object registered in the contradiction table based on the energy generated by the collision, the motion of the real objects before and after the collision, the sizes of the real objects, and the like, and updates the object information (this is equivalent to estimating a lower-accuracy physical quantity from a higher-accuracy physical quantity). At that time, the object tracking unit 121 increases the accuracy of that real object by 0.1. If the accuracy of the other, colliding real object is not 1.0 (or is equal to or less than the predetermined value) (step S1144/No), the processing of step S1148 is not performed.
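 The disclosure leaves the exact estimation formula open; as one hedged example, the following Python sketch recovers an unknown mass from a one-dimensional collision with an object of known, high-accuracy mass via conservation of momentum. The 1-D model and all names are assumptions made for the sketch.

def estimate_mass_from_collision(m_known, v_known_before, v_known_after,
                                 v_unknown_before, v_unknown_after):
    """Conservation of momentum in one dimension:
        m1*u1 + m2*u2 = m1*v1 + m2*v2  =>  m2 = m1 * (u1 - v1) / (v2 - u2)
    Velocities are signed speeds along the line of impact [m/s]."""
    dv_unknown = v_unknown_after - v_unknown_before
    if abs(dv_unknown) < 1e-9:
        raise ValueError("struck object's velocity did not change")
    return m_known * (v_known_before - v_known_after) / dv_unknown

# Example: a 0.2 kg object slows from 1.0 to 0.4 m/s while the struck object
# accelerates from rest to 0.6 m/s -> estimated mass 0.2 * 0.6 / 0.6 = 0.2 kg.

 In the same spirit, the accuracy of the updated record would then be raised by a fixed step, for example accuracy = min(1.0, accuracy + 0.1).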
 If in step S1136 the confirmed contradiction count is equal to or greater than the predetermined threshold (step S1136/Yes), more real objects than the predetermined number show contradictions in their coordinate changes, so the estimated values for the projection surface 200 (for example, the material of the projection surface 200) can be said to be more suspect than the estimated physical quantities of the individual real objects.
 Accordingly, in step S1152, the object tracking unit 121 updates the estimated values for the projection surface 200. For example, if the projection surface 200 had been estimated to be a wooden surface, the object tracking unit 121 updates it to a rubber surface or the like. In step S1156, if the friction coefficients with respect to the updated projection surface 200 are known, the object tracking unit 121 updates the friction coefficients of the real objects.
 In step S1160, for each real object whose accuracy is lower than 1.0, the object tracking unit 121 estimates the mass of the real object based on the energy generated by the collision, the motion of the real objects before and after the collision, the sizes of the real objects, and the like, and updates the object information. At that time, the object tracking unit 121 increases the accuracy of that real object by 0.1. The series of object tracking processing is thereby completed.
 (Flow of object information estimation processing)
 Next, details of the object information estimation processing performed in step S1008 of FIG. 7 will be described with reference to FIGS. 9A and 9B.
 The object information estimation unit 122 repeats the processing shown in FIGS. 9A and 9B for each real object included in the input image. In step S1200, the object information estimation unit 122 acquires the size and texture of the real object by analyzing the information, such as the input image, provided by the input unit 130. In step S1204, the object information estimation unit 122 checks, based on the acquired size and texture information, whether the detected real object exists in the object information primary storage unit 123.
 If the detected real object exists in the object information primary storage unit 123 (step S1204/Yes), in step S1208 the object information estimation unit 122 determines whether the values indicating the size and texture of the real object have changed by a predetermined threshold or more compared with those stored in the object information primary storage unit 123. In this way, the object information estimation unit 122 can detect that the appearance (texture) of the real object in the input image has changed, for example because the real object has rolled or fallen over.
 If the values indicating the size and texture of the real object have changed by the predetermined threshold or more compared with those stored in the object information primary storage unit 123 (step S1208/Yes), in step S1212 the object information estimation unit 122 updates the object information of the real object using the newly acquired size and texture information. In this way, as the real object rolls or falls over, the object information estimation unit 122 can build up a 3D model of the real object.
 If in step S1208 the values indicating the size and texture of the real object have not changed by the predetermined threshold or more compared with those stored in the object information primary storage unit 123 (step S1208/No), there is no object information to be updated, so the processing of step S1212 is not performed.
 If in step S1204 the detected real object does not exist in the object information primary storage unit 123 (step S1204/No), in step S1216 the object information estimation unit 122 checks whether a real object whose size value differs from that of the target real object by no more than a predetermined threshold exists in the object information storage unit 124.
 If a real object whose size value differs from that of the target real object by no more than the predetermined threshold exists in the object information storage unit 124 (step S1216/Yes), in step S1220 the object information estimation unit 122 checks whether a real object whose texture value differs from that of the target real object by no more than a predetermined threshold exists in the object information storage unit 124.
 If a real object whose texture value differs from that of the target real object by no more than the predetermined threshold exists in the object information storage unit 124 (step S1220/Yes), the object information estimation unit 122 determines that the target real object is substantially the same as (or similar to) the real object existing in the object information storage unit 124. Accordingly, in step S1224, the object information estimation unit 122 registers the object information of the target real object in the object information primary storage unit 123 using the physical information of the real object existing in the object information storage unit 124. At that time, since the physical information registered in the object information storage unit 124 is reliable, the object information estimation unit 122 sets the accuracy of the physical information to 1.0.
 In step S1228, the object information estimation unit 122 calculates the friction coefficient of the real object with respect to the projection surface 200 based on the material of the real object and the material of the projection surface 200, and updates the object information in the object information primary storage unit 123. Thereafter, the processing transitions to the point immediately after step S1212.
 If in step S1216 no real object whose size value differs from that of the target real object by no more than the predetermined threshold exists in the object information storage unit 124 (step S1216/No), or if in step S1220 no real object whose texture value differs by no more than the predetermined threshold exists in the object information storage unit 124 (step S1220/No), the processing transitions to step S1232.
 In step S1232, the object information estimation unit 122 registers the object information of the target real object in the object information primary storage unit 123 using the acquired size and texture information. At that time, the object information estimation unit 122 sets the accuracy of the physical information to 0.0. In step S1236, the object information estimation unit 122 calculates the mass of the real object based on the acquired size and a reference density, and updates the object information in the object information primary storage unit 123. In other words, even if the target real object does not exist in the object information storage unit 124, the object information estimation unit 122 can estimate its mass using the size of the real object. Since no information about the variable-mass flag has been acquired, the object information estimation unit 122 provisionally sets the variable-mass flag to true. Thereafter, the processing transitions to the point immediately after step S1212.
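 A minimal sketch of this provisional registration follows, assuming the object's volume is derived from the 3D information and using a water-like reference density of 1 g/cm³; the names, the record layout, and the default density are illustrative assumptions.

def estimate_mass_from_size(volume_cm3, reference_density_g_per_cm3=1.0):
    """Provisional mass estimate for an unregistered object:
    volume times an assumed reference density."""
    return volume_cm3 * reference_density_g_per_cm3

# Example: a newly detected object with an estimated volume of 150 cm^3 is
# provisionally registered with mass 150 g, accuracy 0.0, and a variable-mass
# flag of true (no flag information has been acquired yet).
new_record = {
    "mass_g": estimate_mass_from_size(150.0),
    "accuracy": 0.0,
    "mass_variable": True,
}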
 As described above, when the processing shown in FIGS. 9A and 9B has been repeated for each real object included in the input image, the series of object information estimation processing ends.
 Note that the steps in the flowcharts shown in FIGS. 7, 8A, 8B, 9A, and 9B do not necessarily have to be processed in time series in the order described. That is, the steps in the flowcharts may be processed in an order different from the order described, or may be processed in parallel.
 <2. Examples>
 The flow of processing by the information processing apparatus 100 has been described above. Next, various examples using the information processing apparatus 100 will be described.
 (2.1. Real object simulation)
 First, an example in which real object simulation is performed using the information processing apparatus 100 will be described.
 In this example, part of which has already been described above, the information processing apparatus 100 generates virtual objects corresponding to real objects, and various graphics are provided using those virtual objects.
 For example, the graphics display processing unit 140 acquires the object information of a real object registered in the object information primary storage unit 123 or the object information storage unit 124. The graphics display processing unit 140 then generates a virtual object corresponding to the real object based on the object information, and reflects the estimated physical quantity of the real object in the physical quantity of the virtual object.
 In this way, the graphics display processing unit 140 can project graphics in which the generated virtual objects interact with one another, or in which real objects and virtual objects interact. For example, as shown in 10A of FIG. 10, suppose the user lines up a plurality of boxes that are real objects, and the graphics display processing unit 140 projects a plurality of boxes that are virtual objects onto the projection surface 200, so that a domino-toppling game mixing real objects and virtual objects is set up. At this time, the graphics display processing unit 140 may adjust the projection positions of the virtual objects based on, for example, the user's touch operations on the projection surface 200.
 Then, when the user pushes over some of the real-object boxes with a finger, the lined-up real-object boxes topple in sequence. When the graphics display processing unit 140 recognizes that the real object immediately in front of a virtual object has toppled (or is toppling), it performs projection so that the virtual-object boxes topple in sequence. At this time, the graphics display processing unit 140 makes the way the virtual objects topple closer to the way the real objects topple, for example by feeding the arrangement of the virtual objects and their physical information (for example, shape and mass) into a physics simulation program. For example, a virtual object with a relatively large mass is projected as toppling less easily than a virtual object with a relatively small mass.
 Furthermore, when the sound emitted by a virtual object is registered as object information, the graphics display processing unit 140 can heighten the sense of realism by causing the output unit 150 to output that sound at the moment the virtual object topples.
 Here, the various functions according to the present disclosure described above can produce advantageous effects in carrying out real object simulation.
 For example, in real object simulation it is important to faithfully reproduce the shape of a real object with a virtual object; to this end, the object information estimation processing by the object information estimation unit 122 acquires the textures obtained when the real object is photographed from each face (or each direction) (see, for example, steps S1208 and S1212 in FIG. 9A). More specifically, when the box shown in 11A of FIG. 11 exists as a real object, as the user performs some kind of work (including play) with the box, the object information estimation unit 122 acquires the six faces of the box as textures, as shown in 11B, and registers them as object information. Thus, as shown in 11C, the graphics display processing unit 140 can use this object information to generate a 3D model of a virtual object that faithfully reproduces the real-object box.
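 As a toy illustration of how per-face textures might be accumulated toward such a 3D model, consider the hypothetical class below; a box-shaped object and the face naming are assumptions for the sketch.

import numpy as np

class BoxTextureModel:
    """Collects one texture per face of a box-shaped object; once all six
    faces have been observed, a textured 3D model can be assembled."""
    FACES = ("top", "bottom", "front", "back", "left", "right")

    def __init__(self):
        self.textures = {}  # face name -> latest observed texture image

    def observe(self, face, texture: np.ndarray):
        if face in self.FACES:
            self.textures[face] = texture

    def is_complete(self):
        return all(face in self.textures for face in self.FACES)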
 Next, consider the case where the feature quantity of a real object in the input image is substantially the same as (or differs by no more than a predetermined value from) the feature quantity of a real object registered in the object information primary storage unit 123 or the object information storage unit 124. For example, as shown in 12A of FIG. 12, suppose there exist a box 30a with a certain package design and a box 30b whose package design differs from that of box 30a because the properties of the contents (taste, color, etc.) differ. Here, assume that the various items of object information of box 30a and box 30b, such as size, mass, emitted sound, variable-mass flag, and friction coefficient, are substantially the same.
 In this case, as described above, the object information estimation unit 122 may treat box 30a and box 30b as substantially the same if, for example, the difference between their size values or between their texture values is equal to or less than a predetermined value. This eliminates the trouble of individually registering every real object that differs only slightly in size, texture, and the like.
 Here, when extracting the difference in texture, the object information estimation unit 122 may perform the comparison after converting the textures of box 30a and box 30b into the grayscale representations shown as box 31a and box 31b in 12B. 12C shows a histogram 32 obtained when box 31a and box 31b are compared after the grayscale conversion (the vertical axis indicates the number of pixels and the horizontal axis indicates brightness). When the difference between box 31a and box 31b is small, as in histogram 32, the object information estimation unit 122 treats box 30a and box 30b as substantially the same. In this way, by performing the comparison processing after converting the input image into a grayscale image, the object information estimation unit 122 can more appropriately determine whether real objects can be treated as substantially the same.
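 A hedged sketch of this comparison using OpenCV brightness histograms follows; the correlation metric and the 0.9 threshold are illustrative choices, not values fixed by the disclosure.

import cv2
import numpy as np

def textures_roughly_match(img_a: np.ndarray, img_b: np.ndarray,
                           threshold: float = 0.9) -> bool:
    """Convert both texture patches to grayscale, build 256-bin brightness
    histograms, and treat the textures as substantially the same when the
    histogram correlation is at or above the threshold."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    hist_a = cv2.calcHist([gray_a], [0], None, [256], [0, 256])
    hist_b = cv2.calcHist([gray_b], [0], None, [256], [0, 256])
    cv2.normalize(hist_a, hist_a)
    cv2.normalize(hist_b, hist_b)
    return cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL) >= threshold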
 In addition, due to reflection of visible light on the surface of a real object, a specular reflection region 40 may appear in the input image, as shown in 13A of FIG. 13, for example. In such a case, as described above, the input unit 130 may perform the photographing processing via a polarization filter. As shown in 13B, the input unit 130 can thereby generate an input image from which the specular reflection region 40 has been removed, improving the identification accuracy for real objects.
 Furthermore, when the object tracking unit 121 estimates the mass of a real object in the object tracking processing (see FIG. 8B), and when the object information estimation unit 122 compares a real object with the real objects registered in the object information primary storage unit 123 or the object information storage unit 124 in the object information estimation processing (see FIGS. 9A and 9B), the processing takes the size of the real object into account. Thus, even when there exist real objects that have mutually different sizes but substantially the same texture, as shown in 14A to 14C of FIG. 14 (in the example of FIG. 14, 14A is a key ring, 14B is a coaster, and 14C is a storefront sign), the object tracking unit 121 and the object information estimation unit 122 can distinguish these real objects based on their sizes. The technology of the present disclosure is therefore useful compared with known image recognition techniques, which would regard real objects having different sizes but substantially the same texture as substantially the same object.
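 The size-aware matching can be sketched as follows; both tolerance values are assumptions made for illustration.

def same_object(size_a, size_b, texture_similarity,
                size_tolerance=0.1, texture_threshold=0.9):
    """Treat two observations as the same object only if BOTH the relative
    size difference and the texture difference are within tolerance, so that
    a key ring, a coaster, and a storefront sign bearing the same artwork
    remain distinct by virtue of their sizes."""
    relative_size_diff = abs(size_a - size_b) / max(size_a, size_b)
    return (relative_size_diff <= size_tolerance
            and texture_similarity >= texture_threshold)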
 The above description is merely an example, and the mode of implementation can be flexibly modified according to the specifications and operation of the real object simulation function.
 (2.2. Cooking assistance)
 Next, an example in which cooking assistance is performed using the information processing apparatus 100 will be described.
 In this example, the information processing apparatus 100 measures the ingredients used for cooking (for example, measures their mass).
 First, as shown in 15A of FIG. 15, the user places a carrot on a cutting board placed on the projection surface 200. The object information estimation unit 122 of the information processing apparatus 100 then analyzes the input image, thereby identifying the carrot and estimating its object information (mass, etc.). The graphics display processing unit 140 projects graphics onto the projection surface 200 based on the result of this processing (in the example of 15A, the text "This is 100 g of carrot" is projected). The user can thereby learn the estimated mass of the carrot, a real object, without using a scale.
 Next, suppose the user cuts the carrot with a kitchen knife so that it is divided into three pieces, as shown in 15B. In this case, the object tracking unit 121 detects the occurrence of the division by analyzing the input image, and manages the object ID of each piece after the division as a sub-number of the original carrot's object ID (in the example of FIG. 15, the object ID of the original carrot is 10, and the object IDs of the pieces after the division are 10-A to 10-C).
 The object tracking unit 121 then estimates the object information (mass, etc.) of each piece. At this time, the object tracking unit 121 can estimate the object information of each piece after the division by appropriately using the object information of the carrot before the division. For example, the object tracking unit 121 can estimate the mass of each piece after the division using the mass and size of the carrot before the division and the size of each piece after the division. The processing is not limited to this; for example, the object tracking unit 121 may carry over various items of object information of the carrot before the division, such as the emitted sound, the variable-mass flag, and the friction coefficient, to the object information of each piece after the division. Thus, even when cooking progresses and a real object is divided (or merged), the object tracking unit 121 can make effective use of the object information of the real object before the division (or merging).
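 One hedged way to realize this carry-over for mass is to distribute the pre-division mass in proportion to the observed sizes of the pieces, assuming roughly uniform density; the function name and the specific numbers are illustrative.

def split_masses(parent_mass_g, piece_volumes_cm3):
    """Distribute the pre-division mass over the pieces in proportion
    to their observed volumes (uniform-density assumption)."""
    total = sum(piece_volumes_cm3)
    return [parent_mass_g * v / total for v in piece_volumes_cm3]

# Example: a 100 g carrot (object ID 10) cut into pieces of 50, 30, and
# 20 cm^3 yields pieces 10-A, 10-B, and 10-C of 50 g, 30 g, and 20 g.
print(split_masses(100.0, [50.0, 30.0, 20.0]))  # [50.0, 30.0, 20.0]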
 The graphics display processing unit 140 then projects graphics onto the projection surface 200 based on the results of the above processing (in the example of 15B, text indicating the mass of each piece is projected). Thus, even when cooking progresses and a real object is divided (or merged), the user can learn the estimated object information (mass, etc.) of the real objects.
 Here, the graphics display processing unit 140 may control what is projected according to the accuracy of the object information of the real object. For example, the graphics display processing unit 140 may project object information whose accuracy is higher than a predetermined value and object information whose accuracy is equal to or less than the predetermined value in different colors. The user can thereby intuitively recognize the accuracy of the estimated object information (mass, etc.).
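 For instance, a minimal sketch of such accuracy-dependent rendering follows; the colors and the threshold are illustrative assumptions.

def label_color(accuracy, threshold=0.8):
    """Color for the projected text: white for high-accuracy estimates,
    amber for provisional ones (RGB values and threshold are illustrative)."""
    return (255, 255, 255) if accuracy > threshold else (255, 191, 0)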
 Further, for example, as shown in 16A of FIG. 16, when a box of butter is placed on the projection surface 200, the object information estimation unit 122 analyzes the input image and estimates the object information (mass, etc.) of the box of butter. Next, suppose that in 16B the box of butter moves because the user's hand bumps into it. Consider the case where the actual distance moved is longer than the distance the object tracking unit 121 predicted based on the estimated mass (in other words, a contradiction has arisen). In this case, the object tracking unit 121 resolves the contradiction by updating the estimated mass of the box of butter to a lower value, and the graphics display processing unit 140 projects graphics onto the projection surface 200 based on the updated mass (in the example of 16B, the text "Almost gone" is projected). The user can thereby learn the estimated mass even of a box whose mass cannot be determined from its texture, without using a scale.
 Also, for example, deterioration of ingredients used in cooking may not be properly determinable from the texture of the ingredient (its texture in a visible-light input image). For example, as shown in 17A of FIG. 17, the difference between fresh raw meat 50a and less fresh raw meat 50b may not be properly determinable from the texture in a visible-light input image. In such a case, as described above, the input unit 130 may perform the photographing processing via a multispectral filter that transmits only specific wavelengths. As shown in 17B, the input unit 130 can thereby generate an input image of a specific wavelength band, so the object tracking unit 121 and the object information estimation unit 122 can appropriately distinguish fresh raw meat 51a from less fresh raw meat 51b based on the texture in that image.
 The above description is merely an example, and the mode of implementation can be flexibly modified according to the specifications and operation of the cooking assistance function.
 (2.3. Chemistry experiments)
 Next, an example in which a chemistry experiment is performed using the information processing apparatus 100 will be described.
 In this example, the information processing apparatus 100 measures the samples used in a chemistry experiment (for example, measures their mass).
 First, as shown in 18A of FIG. 18, the user places a liquid sample 60 and a liquid sample 61 having mutually different densities on the projection surface 200. The object information estimation unit 122 of the information processing apparatus 100 then analyzes the input image, thereby identifying liquid sample 60 and liquid sample 61 and estimating their masses. Here, assume that the object ID of liquid sample 60 is 1 and the object ID of liquid sample 61 is 2. The graphics display processing unit 140 projects graphics onto the projection surface 200 based on the estimation results (in the example of 18A, text indicating the masses of liquid sample 60 and liquid sample 61 is projected).
 Next, as shown in 18B, suppose the user creates a liquid sample 62 by adding (merging) liquid sample 61 into liquid sample 60. In this case, the object tracking unit 121 analyzes the input image, determines that the volume of liquid sample 60 has increased, and estimates the increased mass based on the density of liquid sample 60 and the volume after the increase (since the density of liquid sample 60 is 1 g/cm³, in the example of 18B the increased mass is estimated to be 30 g).
 Thereafter, at the point shown in 18C, the object tracking unit 121 analyzes the input image and, based on the fact that the container that held liquid sample 61 is now empty, recognizes that liquid sample 61 was added to liquid sample 60 rather than liquid sample 60 having simply increased. The object tracking unit 121 then updates the mass of the merged liquid sample 62 by estimating the mass of the added portion using the density of liquid sample 61, and also updates the density. In this way, the information processing apparatus 100 can appropriately estimate the physical quantities of liquids and gases, which are tangible objects other than solids.
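 A minimal sketch of this merging computation follows, assuming ideal mixing (volumes add, no chemical reaction); the numbers in the example are illustrative and not taken from the figure.

def merge_liquids(mass_a_g, density_a_g_per_cm3, mass_b_g, density_b_g_per_cm3):
    """Mass and density of the merged sample under ideal mixing."""
    volume_a = mass_a_g / density_a_g_per_cm3
    volume_b = mass_b_g / density_b_g_per_cm3
    merged_mass = mass_a_g + mass_b_g
    merged_density = merged_mass / (volume_a + volume_b)
    return merged_mass, merged_density

# Example: 20 g at 1.0 g/cm^3 merged with 12 g at 1.2 g/cm^3
# -> 32 g at 32 / (20 + 10) ~= 1.07 g/cm^3.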
 The above description is merely an example, and the mode of implementation can be flexibly modified according to the specifications and operation of the chemistry experiment function.
 (2.4. User assistance)
 Next, an example in which the user is assisted when using the information processing apparatus 100 will be described.
 In this example, when the user performs some kind of work (including play) using the information processing apparatus 100, the information processing apparatus 100 assists the user (for example, by giving instructions regarding the work).
 For example, as shown in FIG. 19, suppose the user places a can 70 that is being used with the information processing apparatus 100 for the first time on the projection surface 200. The object information estimation unit 122 of the information processing apparatus 100 then recognizes that the can 70 is a new real object. Here, the object information estimation unit 122 functions as an instruction unit that gives instructions to the user: in order to efficiently acquire the physical information of the can 70, it may cause the graphics display processing unit 140 to project an instruction such as that shown in FIG. 19 (in the example of FIG. 19, the text "Please roll it over three times in this direction" and an arrow indicating the rolling direction are projected). When the user follows this instruction, the object information estimation unit 122 can efficiently acquire the texture of the can 70 and estimate physical quantities such as its mass with high accuracy.
 Similarly, as shown in FIG. 20, suppose the user places a box 80 that is being used with the information processing apparatus 100 for the first time on the projection surface 200. The object information estimation unit 122 then recognizes that the box 80 is a new real object. To efficiently acquire the physical information of the box 80, the object information estimation unit 122 may cause the graphics display processing unit 140 to project an instruction such as that shown in FIG. 20 (in the example of FIG. 20, the text "Please place it with each of the six faces facing up" and the regions 81a to 81f where the box 80 is to be placed with a different face up each time are projected). When the user follows this instruction, the object information estimation unit 122 can efficiently acquire the textures of the box 80.
 The above description is merely an example, and the mode of implementation can be flexibly modified according to the specifications and operation of the user assistance function.
 <3. Hardware configuration example>
 Various examples using the information processing apparatus 100 have been described above. Next, a hardware configuration example of the information processing apparatus 100 will be described with reference to FIG. 21.
 FIG. 21 is a diagram illustrating a hardware configuration example of the information processing apparatus 100. The information processing apparatus 100 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, a host bus 904, a bridge 905, an external bus 906, an interface 907, an input device 908, an output device 909, a storage device (HDD) 910, a drive 911, and a communication device 912.
 The CPU 901 functions as an arithmetic processing device and a control device, and controls the overall operation of the information processing apparatus 100 according to various programs. The CPU 901 may also be a microprocessor. The ROM 902 stores the programs, operation parameters, and the like used by the CPU 901. The RAM 903 temporarily stores the programs used in the execution of the CPU 901 and the parameters that change as appropriate during that execution. These are connected to one another by the host bus 904, which is composed of a CPU bus or the like. The functions of the control unit 110, the processing unit 120, and the graphics display processing unit 140 of the information processing apparatus 100 are realized through the cooperation of the CPU 901, the ROM 902, and the RAM 903.
 The host bus 904 is connected via the bridge 905 to an external bus 906 such as a PCI (Peripheral Component Interconnect/Interface) bus. Note that the host bus 904, the bridge 905, and the external bus 906 do not necessarily have to be configured separately; their functions may be implemented on a single bus.
The input device 908 includes input means for the user to input information, such as a mouse, keyboard, touch panel, buttons, microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 901. By operating the input device 908, the user of the information processing apparatus 100 can input various information and instruct processing operations to each device. The function of the input unit 130 is realized by the input device 908.
The output device 909 includes display devices such as a CRT (Cathode Ray Tube) display device, a liquid crystal display (LCD) device, an OLED (Organic Light Emitting Diode) device, and lamps. The output device 909 further includes audio output devices such as speakers and headphones. The output device 909 outputs, for example, reproduced content. Specifically, the display device displays various information such as reproduced video data as text or images, while the audio output device converts reproduced audio data and the like into sound and outputs it. The function of the output unit 150 is realized by the output device 909.
The storage device 910 is a device for storing data. The storage device 910 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like. The storage device 910 is composed of, for example, an HDD (Hard Disk Drive). The storage device 910 drives a hard disk and stores programs executed by the CPU 901 and various data. The functions of the object information primary storage unit 123 and the object information storage unit 124 can be realized by the storage device 910.
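The description does not fix a storage schema, but as a sketch of the kind of record such a storage unit might hold, the following uses entirely assumed field names (a plausible record, not the disclosed format):

```python
from dataclasses import dataclass, field, asdict
from typing import Dict, Optional, Tuple
import json

@dataclass
class ObjectRecord:
    object_id: str                       # stable identifier for the real object
    size_mm: Tuple[float, float, float]  # estimated width, depth, height
    texture_path: str                    # path to the captured appearance image
    mass_g: Optional[float] = None       # estimated mass; None until an event yields one
    friction: Optional[float] = None     # estimated coefficient of friction
    confidence: Dict[str, float] = field(default_factory=dict)  # per-quantity accuracy

def save_record(record: ObjectRecord, path: str) -> None:
    """Persist one record as JSON, standing in for the HDD-backed object
    information storage unit 124."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(record), f, ensure_ascii=False, indent=2)
```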
The drive 911 is a reader/writer for storage media, and is built into or externally attached to the information processing apparatus 100. The drive 911 reads information recorded on a mounted removable storage medium 913 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, and outputs it to the RAM 903. The drive 911 can also write information to the removable storage medium 913.
The communication device 912 is, for example, a communication interface composed of a communication device or the like for connecting to a communication network 914.
<4. Summary>
As described above, the information processing apparatus 100 analyzes an input image to identify a plurality of real objects in the input image, and detects an event corresponding to a physical action between the plurality of real objects. The information processing apparatus 100 then estimates a physical quantity of at least one of the plurality of real objects based on the detected event. As a result, the user can obtain an estimated value of the physical quantity of a real object without using a dedicated measuring instrument or the like.
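As a concrete illustration of this flow, the following is a minimal sketch rather than the disclosed implementation: given two pre-tracked 2D trajectories, it detects a collision event from proximity and estimates the mass ratio of the two objects from their velocity changes via conservation of momentum (mass being one of the physical quantities named in this disclosure). All function names, the proximity threshold, and the sampling window are assumptions.

```python
import numpy as np

def detect_collision(track_a, track_b, dist_thresh=5.0):
    """Return the first frame index at which the two tracked objects come
    within dist_thresh of each other (a stand-in for event detection)."""
    for t, (pa, pb) in enumerate(zip(track_a, track_b)):
        if np.linalg.norm(np.asarray(pa) - np.asarray(pb)) < dist_thresh:
            return t
    return None

def velocity(track, t, dt=1.0):
    """Central-difference velocity of a position track around frame t."""
    return (np.asarray(track[t + 1]) - np.asarray(track[t - 1])) / (2.0 * dt)

def estimate_mass_ratio(track_a, track_b, t_event, window=2):
    """Estimate m_a / m_b from a collision at t_event. Conservation of
    momentum gives m_a * dv_a = -m_b * dv_b, so m_a / m_b = |dv_b| / |dv_a|."""
    dv_a = velocity(track_a, t_event + window) - velocity(track_a, t_event - window)
    dv_b = velocity(track_b, t_event + window) - velocity(track_b, t_event - window)
    return np.linalg.norm(dv_b) / (np.linalg.norm(dv_a) + 1e-9)
```

Given one object of known mass (for example, a previously registered reference object), such a ratio fixes an absolute estimate for the other object, which matches the idea of propagating higher-accuracy estimates described in the configurations below.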
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure.
Furthermore, the effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure may exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
The following configurations also belong to the technical scope of the present disclosure.
(1)
An information processing apparatus comprising:
an identification unit that identifies, in an input image, a plurality of real objects existing in a real space;
a detection unit that detects, based on the input image, an event corresponding to a physical action between the plurality of real objects; and
an estimation unit that estimates, based on the event, a physical quantity of at least one real object among the plurality of real objects.
(2)
The information processing apparatus according to (1), wherein the estimation unit estimates the physical quantity for each event.
(3)
The information processing apparatus according to (2), wherein the estimation unit estimates a physical quantity at a new event using a physical quantity estimated at a past event.
(4)
The information processing apparatus according to (3), wherein the estimation unit estimates the physical quantity based on a difference between the behavior of the real object at the new event that is expected based on the physical quantity estimated at the past event, and the actual behavior.
(5)
The information processing apparatus according to (4), wherein, when the difference occurs, the estimation unit estimates a physical quantity of lower accuracy based on an estimation result of a physical quantity of higher accuracy.
(6)
The information processing apparatus according to any one of (3) to (5), wherein, when the real object is divided into a plurality of pieces or integrated into one, the estimation unit estimates a physical quantity of the real object after division or integration based on the physical quantity estimated at the past event.
(7)
The information processing apparatus according to any one of (1) to (6), wherein the estimation unit estimates the physical quantity based on the size or texture of the real object.
(8)
The information processing apparatus according to (7), wherein the estimation unit estimates the physical quantity of the real object that is the target of estimation by determining, based on the size or the texture, the identity between a real object whose physical quantity is known and the real object that is the target of physical quantity estimation.
(9)
The information processing apparatus according to any one of (1) to (8), further comprising a generation unit that generates a virtual object corresponding to the real object.
(10)
The information processing apparatus according to (9), wherein the generation unit reflects the physical quantity estimated by the estimation unit in a physical quantity of the virtual object.
(11)
The information processing apparatus according to (9) or (10), further comprising a display control unit that controls the behavior of the virtual object displayed by a predetermined method.
(12)
The information processing apparatus according to (11), wherein the display control unit controls the behavior of another virtual object based on the behavior of the virtual object.
(13)
The information processing apparatus according to (11) or (12), wherein the display control unit controls the behavior of the virtual object based on the behavior of the real object.
(14)
The information processing apparatus according to any one of (1) to (13), wherein the event includes a collision or contact between the plurality of real objects, or the behavior of one real object on another real object.
(15)
The information processing apparatus according to any one of (1) to (14), wherein the event occurs in association with work performed by a user using the real object.
(16)
The information processing apparatus according to (15), further comprising an instruction unit that gives the user instructions regarding the work.
(17)
The information processing apparatus according to any one of (1) to (16), wherein the physical quantity includes mass or a coefficient of friction.
(18)
An information processing method executed by a computer, comprising:
identifying, in an input image, a plurality of real objects existing in a real space;
detecting, based on the input image, an event corresponding to a physical action between the plurality of real objects; and
estimating, based on the event, a physical quantity of at least one real object among the plurality of real objects.
(19)
A program for causing a computer to realize:
identifying, in an input image, a plurality of real objects existing in a real space;
detecting, based on the input image, an event corresponding to a physical action between the plurality of real objects; and
estimating, based on the event, a physical quantity of at least one real object among the plurality of real objects.
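Configurations (3) to (5) above describe refining an estimate whenever the behavior predicted from a past estimate diverges from the behavior actually observed. The following is a minimal sketch of that idea for a coefficient of friction, using a simple sliding-deceleration model (the function names and the blending rate are assumptions, not the disclosed method):

```python
def predicted_stop_distance(v0, mu, g=9.81):
    """Distance a sliding object should travel before stopping, given initial
    speed v0 (m/s) and friction coefficient mu: d = v0**2 / (2 * mu * g)."""
    return v0 * v0 / (2.0 * mu * g)

def refine_friction(mu_prior, v0, observed_distance, g=9.81, rate=0.5):
    """Blend the prior friction estimate toward the value implied by a newly
    observed sliding event, i.e. use the difference between predicted and
    actual behavior as in configuration (4)."""
    mu_observed = v0 * v0 / (2.0 * observed_distance * g)
    return (1.0 - rate) * mu_prior + rate * mu_observed
```

For example, with a prior of mu = 0.40, an object released at 1.0 m/s that slides 0.20 m implies mu of roughly 0.25, and refine_friction(0.40, 1.0, 0.20) blends the two to about 0.33.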
DESCRIPTION OF REFERENCE NUMERALS
100  Information processing apparatus
110  Control unit
120  Processing unit
121  Object tracking unit
122  Object information estimation unit
123  Object information primary storage unit
124  Object information storage unit
130  Input unit
140  Graphics display processing unit
150  Output unit
200  Projection plane

Claims (19)

1. An information processing apparatus comprising:
an identification unit that identifies, in an input image, a plurality of real objects existing in a real space;
a detection unit that detects, based on the input image, an event corresponding to a physical action between the plurality of real objects; and
an estimation unit that estimates, based on the event, a physical quantity of at least one real object among the plurality of real objects.
2. The information processing apparatus according to claim 1, wherein the estimation unit estimates the physical quantity for each event.
3. The information processing apparatus according to claim 2, wherein the estimation unit estimates a physical quantity at a new event using a physical quantity estimated at a past event.
4. The information processing apparatus according to claim 3, wherein the estimation unit estimates the physical quantity based on a difference between the behavior of the real object at the new event that is expected based on the physical quantity estimated at the past event, and the actual behavior.
5. The information processing apparatus according to claim 4, wherein, when the difference occurs, the estimation unit estimates a physical quantity of lower accuracy based on an estimation result of a physical quantity of higher accuracy.
6. The information processing apparatus according to claim 3, wherein, when the real object is divided into a plurality of pieces or integrated into one, the estimation unit estimates a physical quantity of the real object after division or integration based on the physical quantity estimated at the past event.
7. The information processing apparatus according to claim 1, wherein the estimation unit estimates the physical quantity based on the size or texture of the real object.
8. The information processing apparatus according to claim 7, wherein the estimation unit estimates the physical quantity of the real object that is the target of estimation by determining, based on the size or the texture, the identity between a real object whose physical quantity is known and the real object that is the target of physical quantity estimation.
9. The information processing apparatus according to claim 1, further comprising a generation unit that generates a virtual object corresponding to the real object.
10. The information processing apparatus according to claim 9, wherein the generation unit reflects the physical quantity estimated by the estimation unit in a physical quantity of the virtual object.
11. The information processing apparatus according to claim 9, further comprising a display control unit that controls the behavior of the virtual object displayed by a predetermined method.
12. The information processing apparatus according to claim 11, wherein the display control unit controls the behavior of another virtual object based on the behavior of the virtual object.
13. The information processing apparatus according to claim 11, wherein the display control unit controls the behavior of the virtual object based on the behavior of the real object.
14. The information processing apparatus according to claim 1, wherein the event includes a collision or contact between the plurality of real objects, or the behavior of one real object on another real object.
15. The information processing apparatus according to claim 1, wherein the event occurs in association with work performed by a user using the real object.
16. The information processing apparatus according to claim 15, further comprising an instruction unit that gives the user instructions regarding the work.
17. The information processing apparatus according to claim 1, wherein the physical quantity includes mass or a coefficient of friction.
18. An information processing method executed by a computer, comprising:
identifying, in an input image, a plurality of real objects existing in a real space;
detecting, based on the input image, an event corresponding to a physical action between the plurality of real objects; and
estimating, based on the event, a physical quantity of at least one real object among the plurality of real objects.
19. A program for causing a computer to realize:
identifying, in an input image, a plurality of real objects existing in a real space;
detecting, based on the input image, an event corresponding to a physical action between the plurality of real objects; and
estimating, based on the event, a physical quantity of at least one real object among the plurality of real objects.
PCT/JP2018/045758 2018-01-30 2018-12-12 Information processing device, information processing method, and program WO2019150778A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-013222 2018-01-30
JP2018013222 2018-01-30

Publications (1)

Publication Number Publication Date
WO2019150778A1 true WO2019150778A1 (en) 2019-08-08

Family

ID=67479241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/045758 WO2019150778A1 (en) 2018-01-30 2018-12-12 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2019150778A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015501044A (en) * 2011-12-01 2015-01-08 クアルコム,インコーポレイテッド Method and system for capturing and moving 3D models of real world objects and correctly scaled metadata

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
INABA, MASAYUKI: "Robot vision for grasping", JOURNAL OF THE ROBOTICS SOCIETY OF JAPAN, vol. 11, no. 7, 15 October 1993 (1993-10-15), pages 15 - 20 *

Similar Documents

Publication Publication Date Title
US20160343145A1 (en) System and method for object recognition and tracking in a video stream
CN111476306B (en) Object detection method, device, equipment and storage medium based on artificial intelligence
TWI489397B (en) Method, apparatus and computer program product for providing adaptive gesture analysis
US9529566B2 (en) Interactive content creation
EP2877254B1 (en) Method and apparatus for controlling augmented reality
US20190041998A1 (en) System and method for inputting user commands to a processor
TWI484422B (en) Method, apparatus and computer program product for providing gesture analysis
US20070018966A1 (en) Predicted object location
TWI543069B (en) Electronic apparatus and drawing method and computer products thereof
US20110262006A1 (en) Interface apparatus, gesture recognition method, and gesture recognition program
JP2017529635A5 (en)
US9557836B2 (en) Depth image compression
KR20170093801A (en) Augmentation of stop-motion content
KR102203810B1 (en) User interfacing apparatus and method using an event corresponding a user input
EP2702464B1 (en) Laser diode modes
CN111190481A (en) Systems and methods for generating haptic effects based on visual characteristics
WO2019150778A1 (en) Information processing device, information processing method, and program
US20060101985A1 (en) System and method for determining genre of audio
WO2019134606A1 (en) Terminal control method, device, storage medium, and electronic apparatus
JP2019053527A (en) Assembly work analysis device, assembly work analysis method, computer program, and storage medium
JP2006338368A5 (en)
US20230394784A1 (en) Information processing apparatus, information processing method, and computer program
JP2021149296A (en) System for visualizing clustering to actual object, visualization control unit, visualization method, and program for visualization control
Zaqout et al. Augmented piano reality
Sen et al. Novel Human Machine Interface via Robust Hand Gesture Recognition System using Channel Pruned YOLOv5s Model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18903791

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18903791

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP