WO2022190533A1 - Template generation device, collation system, collation device, template generation method, collation method, and program - Google Patents
Template generation device, collation system, collation device, template generation method, collation method, and program
- Publication number
- WO2022190533A1 (PCT/JP2021/047102)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- template
- predetermined object
- region
- matching
- image
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/70—Determining position or orientation of objects or cameras; G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition; G06V10/16—Image acquisition using multiple overlapping images; Image stitching
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning; G06V10/74—Image or video pattern matching; Proximity measures in feature spaces; G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
Definitions
- the present invention relates to a template generation device, a matching system, a matching device, a template generation method, a matching method, and a program.
- the position and orientation of an object can be recognized by comparing a template, which shows the features of the object in each orientation, with an image (measurement image) obtained by imaging (measuring) the object with an imaging device.
- in Patent Document 1, a template indicating the shape of an object is stored, and the largest plane of the object in the measurement image is compared with the template, thereby improving the efficiency of the matching process for recognizing the target object.
- the present invention adopts the following configuration.
- the template generation device generates a template used in a matching device that matches a measurement image, which represents a result of measuring a measurement range including a predetermined object, against a template representing feature amounts of the predetermined object.
- the template generation device comprises: first generation means for generating, based on a three-dimensional model of the predetermined object, a projection image representing the predetermined object as a two-dimensional image; feature amount acquisition means for acquiring, based on the three-dimensional model or the projection image, feature amounts corrected according to the degree of attention of each region of the three-dimensional model; and second generation means for generating the template representing the feature amounts corresponding to the projection image.
- a matching device using the template generated by this template generation device can perform matching using a template whose feature amounts are appropriate to the degree of attention of each region. Therefore, the template and the measurement image can be matched with high accuracy.
- the template generation device may further include setting means for setting, in the three-dimensional model, a first region having a lower degree of attention than other regions, and the feature amount acquisition means may acquire feature amounts in which the features of the first region are suppressed.
- the matching device that uses the template generated by the template generating device can perform matching without giving importance to regions of the template that have a low degree of attention. Therefore, the template and the measurement image can be matched with high accuracy.
- the setting means may set the area specified by the user as the first area.
- the setting means may set, as the first region, a region for which the ratio of regions having the same feature in the predetermined object to the predetermined object is equal to or greater than a first value. According to this, regions that are ubiquitous in the predetermined object can be deemphasized in matching. Therefore, the template and the measurement image can be matched with high accuracy.
- the setting means may set, in the three-dimensional model, a second region having a higher degree of attention than the first region, and the feature amount acquisition means may acquire feature amounts in which the features of the second region are emphasized.
- the matching device using the template generated by the template generating device can perform matching with emphasis placed on a region of the template that has a high degree of attention. Therefore, the template and the measurement image can be matched with high accuracy.
- the setting means may set an uneven (concave-convex) region as the second region. According to this, uneven regions, which are generally characteristic regions, can be given importance in matching. Therefore, the template and the measurement image can be matched with high accuracy.
- the setting means may set, as the second region, a region for which the ratio of regions having the same feature in the predetermined object to the predetermined object is equal to or less than a second value. According to this, characteristic regions of the predetermined object can be given importance in matching. Therefore, the template and the measurement image can be matched with high accuracy.
- the measurement image may be a distance image in which each pixel represents the distance to the subject.
- the first generation means may generate a plurality of projection images, each representing the predetermined object in a mutually different posture as a two-dimensional image, and the second generation means may generate the template for each of the plurality of projection images. According to this, since templates of the predetermined object viewed from various viewpoints can be generated, the matching device can perform matching using templates of many patterns.
- the matching system may include the template generating device, and a matching device for matching the template generated by the template generating device and the measurement image representing the result of measuring the measurement range.
- the matching device may include: estimation means for estimating the position and orientation of the predetermined object based on the measurement image; determination means for determining, based on the estimated position and orientation, a superimposed region in which another object is superimposed on the predetermined object in the measurement image; and update means for updating the template so as to suppress the features of the region of the template corresponding to the superimposed region.
- the matching device can thus deemphasize in matching a region in which the predetermined object is hidden by another object (a so-called occlusion region) and which should not be emphasized in matching. Therefore, the template and the measurement image can be matched with high accuracy.
- a matching device includes: acquisition means for acquiring a measurement image representing a result of measuring a measurement range including a predetermined object and a template representing feature amounts of the predetermined object; estimation means for estimating the position and orientation of the predetermined object based on the measurement image; determination means for determining, based on the position and orientation estimated by the estimation means, a superimposed region in which another object is superimposed on the predetermined object in the measurement image; update means for updating the template so as to suppress the features of the region of the template corresponding to the superimposed region; and matching means for matching the template updated by the update means against the measurement image.
- the matching device can thus deemphasize in matching a region in which the predetermined object is hidden by another object (a so-called occlusion region) and which should not be emphasized in matching. Therefore, the template and the measurement image can be matched with high accuracy.
- the present invention may be regarded as a device having at least part of the above means, or as an electronic device, a control system, an information processing system, an information processing device, a recognition device, or a recognition system. Further, the present invention may be regarded as a control method, template generation method, collation method, and recognition method including at least part of the above processing. The present invention can also be regarded as a program for realizing such a method and a recording medium (storage medium) in which the program is non-temporarily recorded. It should be noted that each of the means and processes described above can be combined with each other as much as possible to constitute the present invention.
- matching accuracy is improved when matching a template of an object with an image.
- FIG. 1 is a diagram illustrating a matching system according to the first embodiment.
- FIG. 2 is a configuration diagram of a template generation device and a matching device according to the first embodiment.
- FIG. 3 is a flowchart of processing of the template generation device according to the first embodiment.
- FIG. 4 is a flow chart of processing of the matching device according to the first embodiment.
- FIG. 5 is a diagram for explaining a collation system according to the second embodiment.
- in the following, a matching system 1, which matches a template representing feature amounts of an object based on a three-dimensional model or a two-dimensional image of the object against an image (measurement image) obtained by measuring the object with an imaging device, obtains a template whose feature amounts are corrected according to the degree of attention.
- for example, the matching system 1 corrects the feature amounts so as to suppress the features of regions with a low degree of attention and to emphasize the features of regions with a high degree of attention.
- the matching system 1 calculates a result of matching the template and the measurement image.
- an area with a high degree of attention is, for example, a characteristic area such as an uneven area or an area colored with multiple colors.
- a region with a low degree of attention is a region in which there are many other regions having similar characteristics, a region in which other objects are likely to be superimposed and hidden, and the like. Note that these areas can be set by a user's instruction (operation; designation).
- the matching system 1 can perform matching without emphasizing areas of the template with low attention. For this reason, the matching system 1 can appropriately match the measurement image and the template by matching using the high attention area without giving importance to the low attention area. Therefore, the template and the measurement image can be matched with high accuracy.
- the matching system 1 recognizes the position and orientation of the object 2 in the measurement image by matching the template for the object 2 with the measurement image obtained by measuring the object 2 .
- the matching system 1 has an imaging sensor 10, a template generation device 20, a matching device 30, and a storage device 40. Note that, in this embodiment, the term "position and orientation" means both position and orientation, but it may mean position or orientation alone as long as no technical contradiction arises.
- the imaging sensor 10 acquires a measurement image by measuring a measurement range including the object 2.
- for example, the imaging sensor 10 acquires an image of the subject through a left lens and an image of the subject through a right lens, and measures distance from the difference (disparity) between the two (left and right) images.
- in the resulting distance image, each pixel indicates the distance from the imaging sensor 10 to the subject.
- the imaging sensor 10 may acquire a range image by any method such as triangulation measurement or a ToF (Time of Flight) method.
- the measurement image may be a temperature image in which each pixel indicates the temperature of the subject, or may be a normal optical image (an image expressing the color and luminance of the subject).
- the template generation device 20 generates a template indicating the feature amount of the object 2 based on a 3D model of the object 2 measured in advance or a 3D model of the object 2 used when designing the object 2 .
- a three-dimensional model (three-dimensional data) can be data representing the object 2 by point cloud data.
- the matching device 30 uses the measured image acquired by the imaging sensor 10 and the template generated by the template generating device 20 to perform matching processing. Also, the matching device 30 recognizes the current position and orientation of the object 2 based on the matching result. Therefore, the matching device 30 can also be said to be a recognition device that recognizes the position and orientation of the object 2 .
- the storage device 40 stores (records) the three-dimensional model of the object 2, the templates generated by the template generation device 20, the results of the matching by the matching device 30, and/or information on the position and orientation of the object 2 recognized by the matching device 30.
- the storage device 40 may be a server or the like having a hard disk (HDD) or memory (RAM; Random Access Memory).
- the storage device 40 may be a storage medium that can be inserted into and removed from the template generation device 20 and the matching device 30 .
- the storage device 40 stores a plurality of templates representing feature amounts of objects 2 in different postures (objects 2 viewed from different viewpoints).
- the internal configuration of the template generation device 20 will be described with reference to FIG.
- the template generation device 20 has a control unit 201, an information acquisition unit 202, a region setting unit 203, a projection image generation unit 204, a feature amount calculation unit 205, a template generation unit 206, and an information output unit 207.
- the control unit 201 controls each functional unit of the template generation device 20 according to a program non-temporarily stored in the storage medium.
- the information acquisition unit 202 acquires the three-dimensional model of the object 2.
- the information acquisition unit 202 may acquire the three-dimensional model of the object 2 from the storage device 40 or from another external device.
- the information acquisition unit 202 also acquires the imaging parameters of the imaging sensor 10 (camera focal length, image center coordinates, lens distortion correction coefficient).
- the region setting unit 203 sets a region with a high degree of attention (region of interest) and a region with a low degree of attention (non-attention region) for the three-dimensional model.
- the projection image generation unit 204 generates a projection image by converting the three-dimensional model into a two-dimensional image. Specifically, the projected image generation unit 204 generates a projected image that represents the object 2 in each orientation as a two-dimensional image when measured by the imaging sensor 10 . At this time, since the measured image changes depending on the imaging parameters of the imaging sensor 10, the projection image generation unit 204 generates a projection image corrected by the imaging parameters.
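To make the projection step concrete, the following is a minimal sketch (not taken from the patent; the intrinsic matrix values, the z-buffer rendering, and all variable names are assumptions, and lens distortion correction is omitted) of how a 3D model given as a point cloud could be projected into a two-dimensional image using camera imaging parameters:

```python
import numpy as np

def project_points(points, K, R, t, image_size):
    """Project Nx3 model points into a 2D image using pinhole intrinsics K
    and an object pose (R, t). Returns a depth buffer as the projection image."""
    h, w = image_size
    cam = points @ R.T + t          # model coordinates -> camera coordinates
    cam = cam[cam[:, 2] > 0]        # keep points in front of the camera
    uv = cam @ K.T                  # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]     # normalize by depth to get pixel coordinates
    img = np.full((h, w), np.inf)
    for (u, v), z in zip(uv.astype(int), cam[:, 2]):
        if 0 <= v < h and 0 <= u < w:
            img[v, u] = min(img[v, u], z)   # z-buffer: keep the nearest surface
    return img

K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])  # assumed focal length / image center
model = np.random.rand(1000, 3) * 0.1                         # stand-in for the 3D model point cloud
proj = project_points(model, K, np.eye(3), np.array([0.0, 0.0, 0.5]), (240, 320))
```

A real implementation would rasterize the model's surfaces rather than individual points, but the idea of correcting the projection by the imaging parameters is the same.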
- the feature amount calculation unit 205 calculates (acquires) the feature amount of each pixel (each region) in the projection image based on the three-dimensional model or the projection image.
- the feature amount can be an edge feature amount (edge direction histogram) or a normal line feature amount (normal line direction histogram).
- the feature amount is not limited to these, and may be distance information, temperature information, or color information. When a region with a high degree of attention (attention region) or a region with a low degree of attention (non-attention region) has been set, the feature amount calculation unit 205 acquires the feature amounts of the region corrected according to the degree of attention.
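As an illustration of one such feature amount, the sketch below computes a per-cell edge-direction histogram from a projection image (assuming OpenCV and NumPy are available; the cell size and bin count are arbitrary choices, not values from the patent):

```python
import cv2
import numpy as np

def edge_direction_histograms(gray, cell=8, bins=8):
    """Quantize gradient orientations into `bins` directions and build a
    per-cell histogram weighted by edge strength (gradient magnitude)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy)                 # magnitude and angle in [0, 2*pi)
    bin_idx = (ang / (2 * np.pi) * bins).astype(int) % bins
    h, w = gray.shape
    hists = np.zeros((h // cell, w // cell, bins), np.float32)
    for cy in range(h // cell):
        for cx in range(w // cell):
            sl = np.s_[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            for b in range(bins):
                hists[cy, cx, b] = mag[sl][bin_idx[sl] == b].sum()
    return hists

gray = np.random.randint(0, 255, (64, 64), np.uint8)   # stand-in for a projection image
feat = edge_direction_histograms(gray)                 # shape (8, 8, 8)
```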
- the template generation unit 206 generates a template, which is a two-dimensional image in which each pixel indicates the feature amount calculated (corrected) by the feature amount calculation unit 205 .
- the template generation unit 206 adds information on the orientation of the object 2 corresponding to the projection image that is the basis of the template to the template.
- the information output unit 207 outputs the template generated by the template generation unit 206 to the storage device 40 .
- the projection image generation unit 204 generates a plurality of projection images from the three-dimensional model.
- the plurality of projected images are two-dimensional images representing the object 2 in different postures (the object 2 viewed from different viewpoints).
- the feature quantity calculation unit 205 calculates a feature quantity for each of the plurality of projection images.
- the template generation unit 206 generates a template for each of the multiple projection images, and the information output unit 207 outputs the multiple templates to the storage device 40 .
- the matching device 30 has a control unit 301, an image acquisition unit 302, an information acquisition unit 303, a feature amount calculation unit 304, a region setting unit 305, a template update unit 306, a matching unit 307, a recognition unit 308, and a result output unit 309.
- the control unit 301 controls each functional unit of the matching device 30 according to a program non-temporarily stored in a storage medium.
- the image acquisition unit 302 acquires a measurement image from the imaging sensor 10. Note that the image acquisition unit 302 does not need to acquire the measurement image from the imaging sensor 10, and may acquire the measurement image stored in the storage device 40, for example.
- the information acquisition unit 303 acquires a plurality of templates from the storage device 40.
- the feature amount calculation unit 304 calculates the feature amount of each pixel (each region) of the measurement image.
- the region setting unit 305 determines a region (an occlusion region; a superimposed region) in which another object is superimposed on the object 2 in the measurement image. The region setting unit 305 then sets the occlusion region as a non-attention region with a low degree of attention.
- the template updating unit 306 updates feature amounts in a plurality of templates. Specifically, the template update unit 306 corrects (updates) the feature amount in the template for the area set by the area setting unit 305 as the non-attention area (occlusion area).
- the matching unit 307 matches each of the plurality of templates against the measurement image. Specifically, the matching unit 307 calculates the degree of matching (similarity) between the feature amounts of the template and the feature amounts of the measurement image. If the calculated degree of matching (evaluation value) is greater than a predetermined value, the matching unit 307 determines that the matching has succeeded. On the other hand, if the evaluation value is equal to or less than the predetermined value, the matching unit 307 determines that the matching has failed. Any matching method using the feature amounts may be used.
- when the matching unit 307 determines that the matching has succeeded, the recognition unit 308 recognizes, from the template, the position and orientation of the object 2 at the time the image of the object 2 was captured. Specifically, the recognition unit 308 can recognize the position and orientation of the object 2 at the time of imaging based on the information on the position and orientation of the object 2 added to the template. In addition, as will be described later, the recognition unit 308 may recognize the position and orientation of the object 2 in more detail by fitting the three-dimensional model of the object 2 to the object 2 in the measurement image, using the position and orientation obtained with the template as initial values.
- the result output unit 309 outputs the matching result and the recognition result of the position and orientation of the object 2 to the storage device 40 or an external device. For example, if the result output unit 309 outputs the recognition result to a robot or the like that drives the object 2, the robot can perform control to bring the object 2 into a predetermined position and orientation according to the recognized position and orientation of the object 2. The result output unit 309 may also output updated template information to the storage device 40 or the like.
- the template generation device 20 and the matching device 30 can each be configured by a computer including, for example, a CPU (processor), memory, storage, and the like.
- the configuration shown in FIG. 2 is realized by loading the program stored in the storage into the memory and executing the program by the CPU.
- a computer may be a general-purpose computer such as a personal computer, a server computer, a tablet terminal, a smart phone, or a built-in computer such as an on-board computer.
- all or part of the configuration shown in FIG. 2 may be configured with ASIC, FPGA, or the like.
- all or part of the configuration shown in FIG. 2 may be realized by cloud computing or distributed computing.
- the template generation device 20 may have each functional unit of the matching device 30 . That is, the template generation device 20 may generate (update) a template according to the measurement image, or may perform matching and recognition processing.
- in step S1001, the control unit 201 controls the information acquisition unit 202 to acquire the three-dimensional model of the object 2 and the imaging parameters of the imaging sensor 10.
- the three-dimensional model can be a three-dimensional model obtained by measuring the object 2 in advance, or a three-dimensional model (CAD data) of the object 2 used when designing the object 2, as described above.
- in step S1002, the control unit 201 determines whether or not the attention region or the non-attention region can be identified in the object 2. For example, if the user has input information on the attention region or the non-attention region in advance, the control unit 201 determines that the attention region or the non-attention region can be identified in the object 2. If it is determined that the attention region or the non-attention region can be identified, the process proceeds to step S1003; otherwise, the process proceeds to step S1005.
- the control unit 201 may also determine whether or not the attention region or the non-attention region can be identified based on the three-dimensional model of the object 2. Specifically, if there is a region that should be set as the attention region or the non-attention region, the control unit 201 may determine that the attention region or the non-attention region can be identified. For example, if the three-dimensional model of the object 2 has a region with a special (unique) feature such as an uneven shape, the control unit 201 determines that this region should be set as the attention region.
- for example, if the area (surface area) of a region having a certain first feature in the three-dimensional model of the object 2 is equal to or greater than a first ratio (for example, 70%) of the area of the entire three-dimensional model of the object 2, the control unit 201 determines that the region having the first feature should be set as the non-attention region. Furthermore, if the area of a region having a certain second feature in the three-dimensional model of the object 2 is equal to or less than a second ratio (for example, 10%) of the area of the entire three-dimensional model, the control unit 201 determines that the region having the second feature should be set as the attention region.
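A loose sketch of this ratio test is given below (the 70% and 10% thresholds follow the example values in the text; representing the model as per-face surface areas with a feature label per face is an assumption for illustration):

```python
def classify_regions(face_areas, face_labels, hi=0.70, lo=0.10):
    """Given a surface area per model face and a feature label per face,
    mark labels covering >= 70% of the surface as non-attention regions
    and labels covering <= 10% as attention regions."""
    total = sum(face_areas)
    area_by_label = {}
    for a, lbl in zip(face_areas, face_labels):
        area_by_label[lbl] = area_by_label.get(lbl, 0.0) + a
    non_attention = {l for l, a in area_by_label.items() if a / total >= hi}
    attention = {l for l, a in area_by_label.items() if a / total <= lo}
    return attention, non_attention

# Example: one dominant flat feature and two small distinctive features.
attn, non_attn = classify_regions([8.0, 1.0, 0.5, 0.5], ["flat", "flat", "bump", "edge"])
print(attn, non_attn)  # {'bump', 'edge'} {'flat'}
```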
- in step S1003, the control unit 201 controls the region setting unit 203 to set the attention region and/or the non-attention region. Specifically, if the user has input information on the attention region or the non-attention region in advance, the region setting unit 203 sets the attention region or the non-attention region according to that information. Also, if the control unit 201 determined in step S1002 a region that should be the attention region or the non-attention region, the region setting unit 203 sets that region as the attention region or the non-attention region. Note that the region setting unit 203 may set a degree of attention (importance) for each region in the projection image instead of setting the attention region or the non-attention region.
- in step S1004, the control unit 201 controls the projection image generation unit 204 to generate, based on the three-dimensional model, a plurality of two-dimensional images corresponding to the respective orientations of the object 2 as projection images. That is, the plurality of projection images are two-dimensional images of the object 2 viewed from respective surrounding viewpoints.
- Step S1005 is similar to step S1004.
- in step S1006, the control unit 201 controls the feature amount calculation unit 205 to extract (acquire) feature amounts for each pixel (region) of each projection image based on the three-dimensional model or the projection image.
- the feature amount may be a normal feature amount representing a normal vector or an edge feature amount representing an edge vector, but may be any feature amount.
- here, the feature amount calculation unit 205 acquires (calculates) feature amounts corrected according to the degree of attention of each region of the three-dimensional model. Specifically, for the non-attention region, the feature amount calculation unit 205 acquires feature amounts after performing correction (filtering) that suppresses the features extracted from the three-dimensional model or the projection image. For example, the feature amount calculation unit 205 acquires the feature amounts multiplied by a predetermined value (a positive number less than 1) corresponding to the degree of attention, or acquires corrected feature amounts by taking the average feature amount of the area surrounding the non-attention region. Alternatively, the feature amount calculation unit 205 may extract the feature amounts from the non-attention region after applying blurring processing to the non-attention region of the three-dimensional model.
- for example, when edge feature amounts are used to match the template against the measurement image, the matching process associates regions (positions) whose edge feature amounts are similar to each other and significant (of high edge strength). Consequently, if the edge feature amounts of a non-attention region of the template are prominent, the non-attention region of the template may be associated with a region (position) in the measurement image other than the one with which it should originally be associated. On the other hand, when the feature amount calculation unit 205 acquires feature amounts after correction (filtering) that suppresses the features of the non-attention region, the edge feature amounts of the non-attention region become lower. Therefore, the possibility that the edge feature amounts of the non-attention region are used for matching, or that they affect the matching, is reduced. That is, the accuracy of matching using the template is improved.
- note that the feature amount calculation unit 205 may acquire feature amounts for the attention region by performing correction that emphasizes the features extracted from the three-dimensional model or the projection image.
- according to this, the feature amounts of the attention region are more likely to be used for matching. Therefore, matching can be performed using feature amounts suitable for matching, and the accuracy of matching using the template is improved.
- the degree of suppressing or emphasizing this feature may be a degree corresponding to the degree of attention.
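A minimal sketch of such a correction (assuming the features are given as a magnitude map and the degree of attention as a per-pixel weight; the scaling factors are arbitrary illustrative values, not prescribed by the patent):

```python
import numpy as np

def correct_features(feat, attention, suppress=0.2, emphasize=1.5):
    """Scale feature magnitudes per region: multiply non-attention pixels by a
    positive factor below 1 and attention pixels by a factor above 1."""
    out = feat.astype(np.float32).copy()
    out[attention < 0.5] *= suppress    # suppress features of non-attention regions
    out[attention > 0.5] *= emphasize   # emphasize features of attention regions
    return out

feat = np.random.rand(240, 320)            # stand-in feature magnitude map
attention = np.zeros((240, 320))           # 0 = non-attention region
attention[100:140, 100:140] = 1.0          # 1 = attention region
corrected = correct_features(feat, attention)
```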
- in step S1007, the control unit 201 controls the feature amount calculation unit 205 to extract (acquire) the feature amounts of each pixel (region) of each projection image based on the three-dimensional model or the projection image, without performing correction according to the degree of attention.
- in step S1008, the control unit 201 controls the template generation unit 206 to generate a template for each of the plurality of projection images. Specifically, the template generation unit 206 generates, as a template for each of the plurality of projection images, a two-dimensional image in which each pixel indicates the feature amounts acquired in step S1006.
- in step S1009, the control unit 201 controls the template generation unit 206 to generate a template for each of the plurality of projection images, as in step S1008 (here based on the feature amounts acquired in step S1007).
- in step S1010, the control unit 201 controls the information output unit 207 to output the plurality of templates generated by the template generation unit 206 to the storage device 40.
- a plurality of templates for the object 2 are thereby stored in the storage device 40 .
- note that the control unit 201 may set (determine) the non-attention region and the attention region for each of the plurality of projection images according to the ratio of regions having the same feature to the area of the object 2 appearing in the projection image. That is, if the area of a region having a certain first feature in the object 2 appearing in the projection image is equal to or greater than a first ratio (for example, 70%) of the area of the entire object 2 appearing in the projection image, the control unit 201 may set the region having the first feature as the non-attention region. Also, if the area of a region having a certain second feature in the object 2 appearing in the projection image is equal to or less than a second ratio (for example, 10%) of the area of the entire object 2 appearing in the projection image, the control unit 201 may set the region having the second feature as the attention region.
- in step S2001, the control unit 301 controls the image acquisition unit 302 to acquire a measurement image from the imaging sensor 10.
- the imaging sensor 10 identifies corresponding pixels between the two left and right images, and calculates the positional difference of the corresponding pixels.
- the imaging sensor 10 measures the distance to the subject using a triangulation technique based on the positional difference between the corresponding pixels and the positional difference between the left and right lenses. Thereby, the imaging sensor 10 can acquire a measurement image, which is a distance image having point cloud data, for example.
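As a rough numeric sketch of this triangulation (the focal length and baseline values below are assumptions, not sensor specifications): with focal length f in pixels, baseline B between the lenses, and disparity d in pixels, the depth is Z = f * B / d.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixel offsets between the left and right
    images) to a depth map via triangulation: Z = f * B / d."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0                      # zero disparity = no correspondence found
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disparity = np.full((240, 320), 20.0, np.float32)   # stand-in disparity map
depth = disparity_to_depth(disparity, focal_px=500.0, baseline_m=0.06)
print(depth[0, 0])  # 500 * 0.06 / 20 = 1.5 metres
```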
- the control unit 301 also controls the information acquisition unit 303 to acquire a plurality of templates for the object 2 from the storage device 40.
- each of the plurality of templates may be a template whose feature amounts have been corrected by the template generation device 20, or a template whose feature amounts have not been corrected by the template generation device 20.
- the control unit 301 may further control the image acquisition unit 302 to acquire the imaging parameters of the imaging sensor 10, and may control the information acquisition unit 303 to acquire the three-dimensional model of the object 2.
- in step S2002, the control unit 301 controls the feature amount calculation unit 304 to calculate feature amounts from the measurement image (distance image).
- in step S2003, the control unit 301 estimates the position and orientation of the object 2.
- for example, the control unit 301 controls the matching unit 307 and the recognition unit 308 to estimate the position and orientation of the object 2 by performing matching using a template that does not take information on the non-attention region or the attention region into account (whose feature amounts are not corrected).
- the control unit 301 may estimate the position and orientation of the object 2 based on past position and orientation information of the object 2 .
- any method may be used for estimation.
- in step S2004, the control unit 301 determines whether or not there is a region (an occlusion region; a superimposed region) in the measurement image where the object 2 is hidden by another object.
- the control unit 301 can determine an occlusion area based on preset position information of objects around the object 2 and the estimated position and orientation of the object 2 . If it is determined that there is an occlusion area, the process proceeds to step S2005; otherwise, the process proceeds to step S2007.
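One way this determination could be sketched (assuming depth renders of the target object and of the surrounding object at their estimated poses, produced by a projection step like the one shown earlier; the patent does not prescribe this method):

```python
import numpy as np

def occlusion_mask(target_depth, other_depth):
    """Pixels where the other object is rendered nearer to the camera than
    the target object are treated as the occlusion (superimposed) region."""
    target_visible = np.isfinite(target_depth)
    other_in_front = np.isfinite(other_depth) & (other_depth < target_depth)
    return target_visible & other_in_front

# Stand-in depth renders (np.inf = background, as in the projection sketch above).
target = np.full((240, 320), np.inf); target[50:150, 50:150] = 1.0
other = np.full((240, 320), np.inf);  other[50:150, 100:200] = 0.8
mask = occlusion_mask(target, other)   # True where the object 2 is hidden
```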
- in step S2005, the control unit 301 controls the region setting unit 305 to set the occlusion region determined in step S2004 as a non-attention region.
- in step S2006, the control unit 301 controls the template update unit 306 to update the templates. Specifically, the template update unit 306 corrects (updates) the feature amounts of each template so that the occlusion region is treated as a non-attention region and the features of the occlusion region (non-attention region) are suppressed.
- in steps S2005 and S2006, the region setting unit 305 sets a non-attention region on the template, and the template update unit 306 corrects the feature amounts of the non-attention region of the template.
- alternatively, in steps S2005 and S2006, the region setting unit 305 may set a non-attention region on the three-dimensional model, and the template update unit 306 may reacquire corrected feature amounts from the non-attention region of the three-dimensional model.
- note that steps S2003 to S2006 may be omitted; in that case, when the processing of step S2002 is completed, the process proceeds to step S2007.
- in step S2007, the control unit 301 controls the matching unit 307 to match each of the plurality of templates against the measurement image.
- the collating unit 307 acquires a matching degree (similarity) by collating (comparing) the feature amount of the measurement image and the feature amount of the template.
- for example, the matching unit 307 may obtain, as the degree of matching, the reciprocal 1/Sum(D) of the total sum Sum(D) of the differences D between the feature amounts of mutually corresponding pixels of the template and the measurement image.
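A sketch of this matching degree and of the success test in step S2007 (assuming the template and measurement feature maps are aligned arrays of equal shape; the epsilon and threshold are illustrative values, not from the patent):

```python
import numpy as np

def matching_degree(template_feat, measured_feat, eps=1e-6):
    """Matching degree as the reciprocal of the summed per-pixel feature
    differences: 1 / Sum(D). eps avoids division by zero on a perfect match."""
    d = np.abs(template_feat - measured_feat).sum()
    return 1.0 / (d + eps)

def is_match(template_feat, measured_feat, threshold=0.01):
    """Declare success when the matching degree exceeds a predetermined value."""
    return matching_degree(template_feat, measured_feat) > threshold

t = np.random.rand(32, 32)
score = matching_degree(t, t + 0.001)   # near-identical features -> high score
print(score, is_match(t, t + 0.001))
```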
- in step S2008, the control unit 301 controls the recognition unit 308 to recognize the position and orientation of the object 2.
- the recognition unit 308 recognizes, as the orientation of the object 2, the orientation corresponding to the template with the highest degree of matching acquired by the matching unit 307, and recognizes the position of the object 2 based on the position in the measurement image that best matches that template.
- note that the recognition unit 308 may recognize the detailed position and orientation of the object 2 by fitting the three-dimensional model to the object 2 in the measurement image, using the position and orientation of the object 2 recognized with the template as initial values.
- for example, the recognition unit 308 can fit the three-dimensional model to the measurement image by calculating the correspondence between each point of the three-dimensional model and each point of the object 2 in the measurement image with an ICP (Iterative Closest Point) algorithm.
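For illustration, a compact ICP sketch using nearest-neighbour correspondences and an SVD-based (Kabsch) rigid fit follows (assuming SciPy is available; the patent does not prescribe this particular ICP variant):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(model, scene, iters=20):
    """Iteratively align `model` (Nx3) to `scene` (Mx3): find nearest-neighbour
    correspondences, then solve the best rigid transform by SVD (Kabsch)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(scene)
    for _ in range(iters):
        moved = model @ R.T + t
        _, idx = tree.query(moved)            # nearest scene point per model point
        src, dst = moved, scene[idx]
        sc, dc = src.mean(0), dst.mean(0)
        H = (src - sc).T @ (dst - dc)         # cross-covariance of centred point sets
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:             # avoid reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = dc - dR @ sc
        R, t = dR @ R, dR @ t + dt            # compose the incremental update
    return R, t

scene = np.random.rand(500, 3)
model = scene + np.array([0.05, 0.0, 0.0])    # shifted copy standing in for the 3D model
R, t = icp(model, scene)                      # initial values would come from the template
```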
- in step S2009, the control unit 301 controls the result output unit 309 to output information on the position and orientation of the object 2 to the storage device 40.
- information on the position and orientation of the object 2 is stored in the storage device 40 .
- the matching device 30 can thus perform matching using templates whose feature amounts are corrected according to the degree of attention. The matching device 30 can therefore give more importance in matching to important regions (regions to be attended to) and deemphasize unimportant regions, so more accurate matching results can be obtained. As a result, the matching device 30 can recognize the position and orientation of the object 2 with higher accuracy.
- in Embodiment 2, a matching system 1 (robot control system) in which a robot 60 controls the position and orientation of a gripped object 3 so as to connect the gripped object 3 to the object 2 will be described.
- this matching system 1 has a robot control device 50 and a robot 60 in addition to the configuration of the matching system 1 according to Embodiment 1.
- the configuration of the imaging sensor 10, the template generation device 20, the matching device 30, and the storage device 40 is the same as the configuration according to the first embodiment.
- the robot control device 50 controls the posture of the robot 60 based on the recognition result of the position and posture of the object 2 (the object 2 and the gripped object 3).
- the robot controller 50 controls the robot 60 to connect the gripped object 3 gripped by the gripper 61 of the robot 60 to the object 2 .
- the posture of the robot 60 is controlled by the robot control device 50 .
- the robot 60 has a gripper 61 that grips the gripped object 3 .
- the imaging sensor 10 is connected (fixed) to a portion of the gripper 61. That is, in the present embodiment, the position and orientation of the imaging sensor 10 change as the posture of the robot 60 changes. However, the position and orientation of the imaging sensor 10 may be fixed regardless of the posture of the robot 60.
- the template generation device 20 generates a template of the object 2 in the same manner as in the processing according to the first embodiment. Then, the template generation device 20 executes the same processing (each processing shown in the flowchart of FIG. 3) as the processing for generating the template of the object 2 for the gripped object 3 as well. Thereby, the template generating device 20 generates a template of the gripped object 3 .
- in the following, steps S2003 and S2004 will be described.
- for steps S2005 to S2009, whereas only the object 2 was processed in Embodiment 1, it suffices to process each of the object 2 and the gripped object 3, so their description is omitted.
- steps S2001 and S2002 are the same as the steps with the same reference numerals according to Embodiment 1, so their description is omitted.
- in step S2003, the control unit 301 estimates the positions and orientations of the object 2 and the gripped object 3 by the same method as in Embodiment 1.
- in step S2004, the control unit 301 determines whether or not there is a region (an occlusion region; a superimposed region) in the measurement image in which one of the object 2 and the gripped object 3 is hidden by the other.
- for example, if, based on the estimated positions and orientations, there is a region where the object 2 and the gripped object 3 overlap as seen from the imaging sensor 10, the control unit 301 determines that region to be an occlusion region. If an occlusion region exists, the process proceeds to step S2005; if not, the process proceeds to step S2007.
- note that the robot control device 50 needs to control the posture of the robot 60 so that the object 2 and the gripped object 3 are connected. Therefore, the processing of FIG. 4 (steps S2001 to S2009) is repeated until the matching device 30 determines, based on the recognition results of the positions and orientations of the object 2 and the gripped object 3, that the object 2 and the gripped object 3 are connected.
- the matching device 30 can determine (set) an occlusion region (non-attention region) occurring between the object 2 and the gripped object 3 based on their estimated positions and orientations. This prevents the occlusion region occurring between the object 2 and the gripped object 3 from being emphasized in matching. Therefore, the matching device 30 can recognize the positions and orientations with higher accuracy, and as a result the matching system 1 (object moving system) can connect the gripped object 3 to the object 2 with high accuracy.
- in the above, the matching system 1 sets the non-attention region and the attention region (the degree of attention of each region) based on the user's instruction, the three-dimensional model, the occlusion region, and the like, but the non-attention region and the attention region may be set by other methods.
- for example, the matching system 1 may randomly (exhaustively) set non-attention regions and attention regions a plurality of times, and, among the results of recognizing the position and orientation of the object 2, the non-attention region and attention region used for the most accurate recognition may also be used for subsequent recognition.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
Description
In the following, a matching system 1, which matches a template representing feature amounts of an object based on a three-dimensional model or a two-dimensional image of that object against an image (measurement image) obtained by an imaging device measuring the object, acquires a template whose feature amounts are corrected according to the degree of attention. For example, the matching system 1 corrects the feature amounts so as to suppress the features of regions with a low degree of attention, and corrects the feature amounts so as to emphasize the features of regions with a high degree of attention. The matching system 1 then calculates the result of matching the template against the measurement image.
[Configuration of the matching system]
The configuration of the matching system 1 according to Embodiment 1 will be described with reference to FIG. 1. The matching system 1 recognizes the position and orientation of the object 2 in the measurement image by matching a template for the object 2 against a measurement image obtained by measuring the object 2. The matching system 1 has an imaging sensor 10, a template generation device 20, a matching device 30, and a storage device 40. In this embodiment, the term "position and orientation" means both position and orientation, but it may mean position or orientation alone as long as no technical contradiction arises.
The internal configuration of the template generation device 20 will be described with reference to FIG. 2. The template generation device 20 has a control unit 201, an information acquisition unit 202, a region setting unit 203, a projection image generation unit 204, a feature amount calculation unit 205, a template generation unit 206, and an information output unit 207.
The internal configuration of the matching device 30 will be described with reference to FIG. 2. The matching device 30 has a control unit 301, an image acquisition unit 302, an information acquisition unit 303, a feature amount calculation unit 304, a region setting unit 305, a template update unit 306, a matching unit 307, a recognition unit 308, and a result output unit 309.
The processing of the template generation device 20 will be described in detail using the flowchart of FIG. 3. Each process in the flowchart of FIG. 3 is realized by the control unit 201 operating according to a program.
The processing of the matching device 30 will be described in detail using the flowchart of FIG. 4. Each process in the flowchart of FIG. 4 is realized by the control unit 301 operating according to a program.
In Embodiment 2, as shown in FIG. 5, a matching system 1 (robot control system) in which a robot 60 controls the position and orientation of a gripped object 3 so as to connect the gripped object 3 to the object 2 will be described. The matching system 1 has a robot control device 50 and a robot 60 in addition to the configuration of the matching system 1 according to Embodiment 1. In this embodiment, the configurations of the imaging sensor 10, the template generation device 20, the matching device 30, and the storage device 40 are the same as those according to Embodiment 1.
In Embodiment 2, the template generation device 20 generates a template of the object 2 in the same way as the processing according to Embodiment 1. The template generation device 20 then executes, for the gripped object 3 as well, processing similar to the processing for generating the template of the object 2 (each process shown in the flowchart of FIG. 3). The template generation device 20 thereby generates a template of the gripped object 3.
Since the processing of the matching device 30 differs from that of Embodiment 1, the processing of the matching device 30 according to Embodiment 2 will be described in detail using the flowchart of FIG. 4. In the following, steps S2003 and S2004 are described. For steps S2005 to S2009, whereas only the object 2 was processed in Embodiment 1, it suffices to process each of the object 2 and the gripped object 3, so their description is omitted. Steps S2001 and S2002 are the same as the steps with the same reference numerals according to Embodiment 1, so their description is omitted.
A template generation device (20) that generates a template used in a matching device (30) that matches a measurement image, which represents a result of measuring a measurement range including a predetermined object, against a template indicating feature amounts of the predetermined object, the template generation device comprising:
first generation means (204) for generating, based on a three-dimensional model of the predetermined object, a projection image representing the predetermined object as a two-dimensional image;
feature amount acquisition means (205) for acquiring, based on the three-dimensional model or the projection image, feature amounts corrected according to a degree of attention of each region of the three-dimensional model; and
second generation means (206) for generating the template indicating the feature amounts corresponding to the projection image.
A matching device (30) comprising:
acquisition means (302, 303) for acquiring a measurement image representing a result of measuring a measurement range including a predetermined object and a template indicating feature amounts of the predetermined object;
estimation means (301) for estimating a position and orientation of the predetermined object based on the measurement image;
determination means (301) for determining, based on the position and orientation of the predetermined object estimated by the estimation means (301), a superimposed region in which another object is superimposed on the predetermined object in the measurement image;
update means (306) for updating the template so as to suppress features of a region of the template corresponding to the superimposed region; and
matching means (308) for matching the template updated by the update means (306) against the measurement image.
A template generation method for generating a template used in a matching device (30) that matches a measurement image, which represents a result of measuring a measurement range including a predetermined object, against a template indicating feature amounts of the predetermined object, the method comprising:
a first generation step (S1004) of generating, based on a three-dimensional model of the predetermined object, a projection image representing the predetermined object as a two-dimensional image;
a feature amount acquisition step (S1006) of acquiring, based on the three-dimensional model or the projection image, feature amounts corrected according to a degree of attention of each region of the three-dimensional model; and
a second generation step (S1008) of generating the template indicating the feature amounts corresponding to the projection image.
A matching method comprising:
an acquisition step (S2001) of acquiring a measurement image representing a result of measuring a measurement range including a predetermined object and a template indicating feature amounts of the predetermined object;
an estimation step (S2003) of estimating a position and orientation of the predetermined object based on the measurement image;
a determination step (S2004) of determining, based on the position and orientation of the predetermined object estimated in the estimation step, a superimposed region in which another object is superimposed on the predetermined object in the measurement image;
an update step (S2006) of updating the template so as to suppress features of a region of the template corresponding to the superimposed region; and
a matching step (S2007) of matching the template updated in the update step against the measurement image.
30: matching device, 40: storage device, 201: control unit, 202: information acquisition unit,
203: region setting unit, 204: projection image generation unit, 205: feature amount calculation unit,
206: template generation unit, 207: information output unit, 301: control unit,
302: image acquisition unit, 303: information acquisition unit, 304: feature amount calculation unit,
305: region setting unit, 306: template update unit, 307: matching unit,
308: recognition unit, 309: result output unit
Claims (16)
- A template generation device that generates a template used in a matching device that matches a measurement image, which represents a result of measuring a measurement range including a predetermined object, against a template indicating feature amounts of the predetermined object, the template generation device comprising: first generation means for generating, based on a three-dimensional model of the predetermined object, a projection image representing the predetermined object as a two-dimensional image; feature amount acquisition means for acquiring, based on the three-dimensional model or the projection image, feature amounts corrected according to a degree of attention of each region of the three-dimensional model; and second generation means for generating the template indicating the feature amounts corresponding to the projection image.
- The template generation device according to claim 1, further comprising setting means for setting, in the three-dimensional model, a first region having a lower degree of attention than other regions, wherein the feature amount acquisition means acquires feature amounts in which features of the first region are suppressed.
- The template generation device according to claim 2, wherein the setting means sets a region designated by a user as the first region.
- The template generation device according to claim 2 or 3, wherein the setting means sets, as the first region, a region for which a ratio of regions having the same feature in the predetermined object to the predetermined object is equal to or greater than a first value.
- The template generation device according to any one of claims 2 to 4, wherein the setting means sets, in the three-dimensional model, a second region having a higher degree of attention than the first region, and the feature amount acquisition means acquires feature amounts in which features of the second region are emphasized.
- The template generation device according to claim 5, wherein the setting means sets an uneven (concave-convex) region as the second region.
- The template generation device according to claim 5 or 6, wherein the setting means sets, as the second region, a region for which a ratio of regions having the same feature in the predetermined object to the predetermined object is equal to or less than a second value.
- The template generation device according to any one of claims 1 to 7, wherein the measurement image is a distance image in which each pixel represents a distance to a subject.
- The template generation device according to any one of claims 1 to 8, wherein the first generation means generates a plurality of projection images each representing the predetermined object in a mutually different posture as a two-dimensional image, and the second generation means generates the template for each of the plurality of projection images.
- A matching system comprising: the template generation device according to any one of claims 1 to 9; and a matching device that matches the template generated by the template generation device against the measurement image.
- The matching system according to claim 10, wherein the matching device has: estimation means for estimating a position and orientation of the predetermined object based on the measurement image; determination means for determining, based on the position and orientation of the predetermined object estimated by the estimation means, a superimposed region in which another object is superimposed on the predetermined object in the measurement image; and update means for updating the template so as to suppress features of a region of the template corresponding to the superimposed region.
- A matching device comprising: acquisition means for acquiring a measurement image representing a result of measuring a measurement range including a predetermined object and a template indicating feature amounts of the predetermined object; estimation means for estimating a position and orientation of the predetermined object based on the measurement image; determination means for determining, based on the position and orientation of the predetermined object estimated by the estimation means, a superimposed region in which another object is superimposed on the predetermined object in the measurement image; update means for updating the template so as to suppress features of a region of the template corresponding to the superimposed region; and matching means for matching the template updated by the update means against the measurement image.
- A template generation method for generating a template used in a matching device that matches a measurement image, which represents a result of measuring a measurement range including a predetermined object, against a template indicating feature amounts of the predetermined object, the method comprising: a first generation step of generating, based on a three-dimensional model of the predetermined object, a projection image representing the predetermined object as a two-dimensional image; a feature amount acquisition step of acquiring, based on the three-dimensional model or the projection image, feature amounts corrected according to a degree of attention of each region of the three-dimensional model; and a second generation step of generating the template indicating the feature amounts corresponding to the projection image.
- A matching method comprising: an acquisition step of acquiring a measurement image representing a result of measuring a measurement range including a predetermined object and a template indicating feature amounts of the predetermined object; an estimation step of estimating a position and orientation of the predetermined object based on the measurement image; a determination step of determining, based on the position and orientation of the predetermined object estimated in the estimation step, a superimposed region in which another object is superimposed on the predetermined object in the measurement image; an update step of updating the template so as to suppress features of a region of the template corresponding to the superimposed region; and a matching step of matching the template updated in the update step against the measurement image.
- A program for causing a computer to execute each step of the template generation method according to claim 13.
- A program for causing a computer to execute each step of the matching method according to claim 14.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/546,006 US20240095950A1 (en) | 2021-03-11 | 2021-12-20 | Template generation device, collation system, collation device, template generation method, collation method, and program |
CN202180092548.1A CN116783615A (zh) | 2021-03-11 | 2021-12-20 | 模板生成装置、对照系统、对照装置、模板生成方法、对照方法及程序 |
EP21930378.1A EP4273795A1 (en) | 2021-03-11 | 2021-12-20 | Template generation device, collation system, collation device, template generation method, collation method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-039417 | 2021-03-11 | ||
JP2021039417A JP2022139158A (ja) | 2021-03-11 | 2021-03-11 | テンプレート生成装置、照合システム、照合装置、テンプレート生成方法、照合方法およびプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022190533A1 (ja) | 2022-09-15 |
Family
ID=83226514
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/047102 WO2022190533A1 (ja) | 2021-03-11 | 2021-12-20 | テンプレート生成装置、照合システム、照合装置、テンプレート生成方法、照合方法およびプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240095950A1 (ja) |
EP (1) | EP4273795A1 (ja) |
JP (1) | JP2022139158A (ja) |
CN (1) | CN116783615A (ja) |
WO (1) | WO2022190533A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140379588A1 (en) * | 2013-03-15 | 2014-12-25 | Compology, Inc. | System and method for waste managment |
JP2015005093A (ja) * | 2013-06-20 | 2015-01-08 | キヤノン株式会社 | パターンマッチング装置及びパターンマッチング方法 |
JP2016192132A (ja) * | 2015-03-31 | 2016-11-10 | Kddi株式会社 | 画像認識ar装置並びにその姿勢推定装置及び姿勢追跡装置 |
JP2018097889A (ja) | 2018-01-17 | 2018-06-21 | セイコーエプソン株式会社 | 物体認識装置、物体認識方法、物体認識プログラム、ロボットシステム及びロボット |
- 2021
- 2021-03-11 JP JP2021039417A patent/JP2022139158A/ja active Pending
- 2021-12-20 CN CN202180092548.1A patent/CN116783615A/zh active Pending
- 2021-12-20 US US18/546,006 patent/US20240095950A1/en active Pending
- 2021-12-20 WO PCT/JP2021/047102 patent/WO2022190533A1/ja active Application Filing
- 2021-12-20 EP EP21930378.1A patent/EP4273795A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP4273795A1 (en) | 2023-11-08 |
US20240095950A1 (en) | 2024-03-21 |
CN116783615A (zh) | 2023-09-19 |
JP2022139158A (ja) | 2022-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9881204B2 (en) | Method for determining authenticity of a three-dimensional object | |
EP1677250B9 (en) | Image collation system and image collation method | |
KR101997500B1 (ko) | 개인화된 3d 얼굴 모델 생성 방법 및 장치 | |
US8055028B2 (en) | Object pose normalization method and apparatus and object recognition method | |
KR20120048370A (ko) | 물체 자세 인식장치 및 이를 이용한 물체 자세 인식방법 | |
JP4709668B2 (ja) | 3次元物体認識システム | |
JP2007004767A (ja) | 画像認識装置、方法およびプログラム | |
JP2010176380A (ja) | 情報処理装置および方法、プログラム、並びに記録媒体 | |
JPWO2009091029A1 (ja) | 顔姿勢推定装置、顔姿勢推定方法、及び、顔姿勢推定プログラム | |
CN113393524B (zh) | 一种结合深度学习和轮廓点云重建的目标位姿估计方法 | |
JP2017123087A (ja) | 連続的な撮影画像に映り込む平面物体の法線ベクトルを算出するプログラム、装置及び方法 | |
JP2002063567A (ja) | 物***置姿勢推定装置及びその方法並びそれを用いた特徴点位置抽出方法及び画像照合方法 | |
CN110120013A (zh) | 一种点云拼接方法及装置 | |
KR100933957B1 (ko) | 단일 카메라를 이용한 삼차원 인체 포즈 인식 방법 | |
JP4921847B2 (ja) | 対象物の三次元位置推定装置 | |
WO2022190533A1 (ja) | テンプレート生成装置、照合システム、照合装置、テンプレート生成方法、照合方法およびプログラム | |
JP4876742B2 (ja) | 画像処理装置及び画像処理プログラム | |
WO2022190534A1 (ja) | 認識装置、ロボット制御システム、認識方法、およびプログラム | |
JP2008003800A (ja) | 画像処理装置及び画像処理プログラム | |
JP4687579B2 (ja) | 画像処理装置及び画像処理プログラム | |
JP5628570B2 (ja) | 画像照合装置および画像照合方法 | |
JP4812743B2 (ja) | 顔認識装置、顔認識方法、顔認識プログラムおよびそのプログラムを記録した記録媒体 | |
CN112580496B (zh) | 结合人脸关键点检测的人脸相对姿态估计方法 | |
CN115147472A (zh) | 头部姿态估计方法、***、设备、介质和车辆 | |
JP2008003793A (ja) | 画像処理装置及び画像処理プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21930378 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180092548.1 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18546006 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2021930378 Country of ref document: EP Effective date: 20230804 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |