CN109543705B - Template creation device and method, object recognition processing device, and recording medium


Info

Publication number
CN109543705B
Authority
CN
China
Prior art keywords
normal vector
template
reference region
unit
quantized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810759087.9A
Other languages
Chinese (zh)
Other versions
CN109543705A (en)
Inventor
小西嘉典
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Omron Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omron Corp filed Critical Omron Corp
Publication of CN109543705A publication Critical patent/CN109543705A/en
Application granted granted Critical
Publication of CN109543705B publication Critical patent/CN109543705B/en

Classifications

    • G06V 10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/772 — Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
    • G06F 18/28 — Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G06V 10/422 — Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation, for representing the structure of the pattern or shape of an object
    • G06V 10/757 — Matching configurations of points or features
    • G06V 20/64 — Three-dimensional objects
    • G06V 20/647 — Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G06V 10/50 — Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a template creation device and method, an object recognition processing device, and a recording medium that are less susceptible to changes in the direction of normal vectors forming a small angle with the camera optical axis. The template creation device (20) includes: a three-dimensional data acquisition unit (201) that acquires three-dimensional data of an object to be identified; a normal vector calculation unit (203) that calculates normal vectors of feature points of the object as observed from a predetermined viewpoint set for the object; a normal vector quantization unit (204) that obtains a quantized normal-direction feature quantity by mapping each normal vector onto reference regions on a plane orthogonal to an axis passing through the viewpoint, the reference regions including a center reference region corresponding to the vicinity of the axis and reference regions around the center reference region; a template creation unit (205) that creates a template for each viewpoint based on the quantized normal-direction feature quantities; and a template information output unit (206) that outputs the created templates.

Description

Template creation device and method, object recognition processing device, and recording medium
Technical Field
The present invention relates to a technique for creating a template used in object recognition by template matching.
Background
One method of identifying an object in an image is template matching. The basic procedure of template matching is to prepare a template of the object to be identified in advance, and to identify the position or posture of the object in an image by evaluating the similarity of image features between an input image and the template. Object recognition by template matching is in practical use in various fields such as FA (Factory Automation) inspection, sorting, robot vision, and monitoring cameras.
Conventionally, in template matching where the object to be identified is a three-dimensional object, surface and normal information of the object is used as the feature quantity in order to reduce the amount of computation, or the time it requires, while maintaining a highly accurate recognition result (for example, Patent Document 1).
It is also known that, when edge information is used as the feature quantity in template matching, the data amount can be reduced and the processing speed increased by using a feature quantity obtained by converting the edge angle, a value from 0 to 360 degrees, into 8-bit data divided into 8 parts in 45-degree units (for example, Patent Document 2).
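The 45-degree binning just described can be sketched as follows; the one-hot 8-bit encoding and the function name are illustrative assumptions, not the representation actually used in Patent Document 2.

```python
def quantize_edge_angle(angle_deg):
    """Map an edge angle in [0, 360) degrees to 8-bit data:
    one bit per 45-degree sector (bit 0 = [0, 45), ..., bit 7 = [315, 360))."""
    sector = int(angle_deg % 360.0 // 45.0)  # sector index 0..7
    return 1 << sector
```

An angle of 50 degrees, for instance, falls in the second sector and sets only bit 1.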
Patent Document 1: Japanese Patent Application Laid-Open No. 2015-079374
Patent Document 2: Japanese Patent No. 5271031
Disclosure of Invention
Problems to be solved by the invention
As shown in Fig. 7, when normal information is used as the feature quantity, the angle θ between the x-axis and the vector obtained by mapping the normal vector into the xy two-dimensional space has conventionally been quantized to obtain the feature quantity of the normal vector. In the example of Fig. 7, a normal vector on the unit sphere shown in Fig. 7(a) is mapped onto the two-dimensional space shown in Fig. 7(b), which has reference regions 1 to 8 corresponding to the 8 portions obtained by dividing the xy plane passing through the center of the sphere into 8 equal parts, and the feature quantity is obtained by quantization. For example, normal vector 1b of Fig. 7(b) corresponds to normal vector 1a of Fig. 7(a), and likewise normal vector 2b corresponds to normal vector 2a. The magnitude of normal vector 1b corresponds to sin φ of normal vector 1a.
Here, the boundaries of reference regions 1 to 8 converge near the z-axis. Consequently, among the normal vectors extracted from an image of the object to be identified, a normal vector 1a that forms a small angle φ with the optical axis (z-axis) of the camera that captured the image is, under the influence of noise, measurement error, and the like, easily measured as pointing in a direction different from the one the object should originally yield; the reference region to which the mapped vector belongs in the xy two-dimensional space therefore changes easily, and so does the resulting quantized feature quantity. Thus, even when the axis passing through the viewpoint of the template at registration coincides with the optical axis of the camera acquiring the input image of the object, the feature quantities of the template and of the input image fail to match at feature points whose normal vectors form a small angle φ with the camera optical axis, resulting in low recognition accuracy.
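As a hypothetical sketch of this conventional scheme (not code from Patent Document 1), the following maps a unit normal into the xy plane and quantizes the angle θ into regions 1 to 8; the usage below shows how a tiny perturbation of a near-axis normal flips the resulting region.

```python
import math

def quantize_normal_conventional(n):
    """Quantize the direction of a unit normal (nx, ny, nz) by the angle
    theta of its xy projection, into 8 equal 45-degree regions 1..8."""
    nx, ny, _ = n
    theta = math.atan2(ny, nx) % (2.0 * math.pi)
    return int(theta // (math.pi / 4.0)) + 1

# Two nearly identical normals, both almost parallel to the z-axis:
a = quantize_normal_conventional((0.010, 0.005, 0.999))   # theta ≈ 27°  → region 1
b = quantize_normal_conventional((-0.010, 0.005, 0.999))  # theta ≈ 153° → region 4
```

A sub-degree change in the 3-D direction moved the quantized value across three region boundaries, which is exactly the instability described above.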
Accordingly, an object of the present invention is to provide a template creation device, an object recognition processing device, a template creation method, and a recording medium capable of improving recognition accuracy by using a template that is less susceptible to changes in the direction of normal vectors forming a small angle with the camera optical axis.
Means for solving the problems
The template creation device according to one aspect of the present invention includes: a three-dimensional data acquisition unit that acquires three-dimensional data representing the three-dimensional shape of an object to be identified; a normal vector calculation unit that calculates, based on the three-dimensional data, normal vectors of feature points of the object as observed from a predetermined viewpoint set for the object; a normal vector quantization unit that quantizes each calculated normal vector by mapping it onto reference regions on a plane orthogonal to an axis passing through the viewpoint, thereby obtaining a quantized normal-direction feature quantity, the reference regions including a center reference region corresponding to the vicinity of the axis and reference regions around the center reference region; a template creation unit that creates, for each viewpoint, a template for object recognition by template matching based on the obtained quantized normal-direction feature quantities; and a template information output unit that outputs the created templates.
According to this aspect, because the reference regions include a center reference region corresponding to the vicinity of the axis, a stable quantized normal-direction feature quantity is obtained even when the direction of a normal vector forming a small angle with the axis passing through the viewpoint changes under the influence of noise, measurement error, or the like. By acquiring a feature quantity that is less susceptible to such changes in normal vector direction, recognition accuracy can be improved over the related art.
In the above template creation device, the surrounding reference regions may include a plurality of reference regions corresponding to a plurality of portions obtained by equally dividing the three-dimensional unit sphere. According to this aspect, since the portions obtained by equally dividing the unit sphere correspond to the reference regions, the feature quantities of normal vectors distributed in various directions can be captured with high accuracy.
In the above template creation device, the center reference region may be set based on the angle φ between the normal vector and the axis. According to this aspect, an allowable center reference region can easily be set based on the relationship between the angle φ and the change in normal vector direction caused by noise or measurement error.
In the above template creation device, the center reference region may be a circle whose radius is sin φ for a predetermined angle φ. According to this aspect, an allowable center reference region can easily be set based on the relationship between sin φ and the change in normal vector direction caused by noise or measurement error.
In the above template creation device, the normal vector quantization unit may also allow, for a normal vector to be quantized, the reference regions surrounding the reference region to which that normal vector belongs. According to this aspect, even when, at object recognition time, the normal vector of a feature point in the input image is mapped to a reference region adjacent to the originally expected one because of noise or measurement error, it is judged to match, and the collation score can be computed. By building this robustness into the feature quantity on the template creation side, allowable surrounding reference regions can be set without increasing the processing load at object recognition time.
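One way such an allowance could be realized, sketched under the assumption of a bit-mask feature quantity (the helper names are hypothetical): the template-side mask covers the vector's own surrounding region plus its two angular neighbours, so a single AND decides the match at recognition time.

```python
def tolerant_mask(region, n_regions=8):
    """Bit mask for a surrounding region 1..n_regions plus its two
    angular neighbour regions (the center region is not handled here)."""
    i = region - 1
    mask = 1 << i
    mask |= 1 << ((i - 1) % n_regions)  # neighbour on one side (wraps around)
    mask |= 1 << ((i + 1) % n_regions)  # neighbour on the other side
    return mask

def regions_match(template_mask, image_region):
    """True when the image feature's quantized region falls inside the mask."""
    return bool(template_mask & (1 << (image_region - 1)))
```

For region 1 the mask also covers regions 2 and 8, so a one-region drift of the input normal still counts as a match.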
An object recognition processing device according to another aspect of the present invention, which recognizes an object using a template created by the above template creation device, includes: an image acquisition unit that acquires an input image; a normal vector calculation unit that calculates normal vectors of feature points in the input image; a normal vector quantization unit that quantizes each calculated normal vector by mapping it onto reference regions on a plane orthogonal to the optical axis of the camera that acquired the input image, thereby obtaining a quantized normal-direction feature quantity, the reference regions including a center reference region corresponding to the vicinity of the optical axis and reference regions around the center reference region; a template matching unit that searches for the position of the object in the input image based on the template and the quantized normal-direction feature quantity obtained by the normal vector quantization unit, and obtains a collation result; and a recognition result output unit that outputs a recognition result based on the collation result.
In the object recognition processing device, the surrounding reference region may include a plurality of reference regions corresponding to a plurality of portions obtained by equally dividing the three-dimensional unit sphere.
In the object recognition processing device, the center reference region may be set based on an angle Φ between the normal vector and the axis.
In the object recognition processing device, the center reference region may be a circle having sin Φ as a radius, which is obtained when the angle Φ is a predetermined angle.
In the above object recognition processing device, the normal vector quantization unit may also allow, for a normal vector to be quantized, the reference regions surrounding the reference region to which that normal vector belongs. According to this aspect, even when the normal vector of a feature point in the input image is mapped to a reference region adjacent to the originally expected one because of noise or measurement error, it is judged to match, and the collation score can be computed. By building this robustness into the feature quantity on the object recognition processing device side, allowable surrounding reference regions can be set according to the conditions specific to each object recognition processing device.
A computer-implemented template creation method according to another aspect of the present invention includes the steps of: acquiring three-dimensional data representing the three-dimensional shape of an object to be identified; calculating, based on the three-dimensional data, normal vectors of feature points of the object as observed from a predetermined viewpoint set for the object; quantizing each calculated normal vector by mapping it onto reference regions on a plane orthogonal to an axis passing through the viewpoint, thereby obtaining a quantized normal-direction feature quantity, the reference regions including a center reference region corresponding to the vicinity of the axis and reference regions around the center reference region; creating, for each viewpoint, a template for object recognition by template matching based on the obtained quantized normal-direction feature quantities; and outputting the created templates.
A recording medium according to another aspect of the present invention stores a program that causes a computer to execute: acquiring three-dimensional data representing the three-dimensional shape of an object to be identified; calculating, based on the three-dimensional data, normal vectors of feature points of the object as observed from a predetermined viewpoint set for the object; quantizing each calculated normal vector by mapping it onto reference regions on a plane orthogonal to an axis passing through the viewpoint, thereby obtaining a quantized normal-direction feature quantity, the reference regions including a center reference region corresponding to the vicinity of the axis and reference regions around the center reference region; creating, for each viewpoint, a template for object recognition by template matching based on the obtained quantized normal-direction feature quantities; and outputting the created templates.
Effects of the invention
According to the present invention, there are provided a template creation device, an object recognition processing device, a template creation method, and a recording medium capable of improving recognition accuracy by using a template that is less susceptible to changes in the direction of normal vectors forming a small angle with the optical axis of a camera.
Drawings
Fig. 1 is a diagram showing the overall configuration of the object recognition apparatus.
Fig. 2 is a diagram showing a hardware configuration of the object recognition apparatus.
Fig. 3 is a diagram showing a software configuration of the image processing apparatus.
Fig. 4 is a conceptual diagram illustrating quantization of normal vectors.
Fig. 5 is a flowchart showing the flow of the template registration process performed by the template creation device.
Fig. 6 is a flowchart showing a flow of the object recognition processing performed by the object recognition processing apparatus.
Fig. 7 is a conceptual diagram illustrating conventional quantization of a normal vector.
Description of the reference numerals
1 … object recognition device; 2 … object; 3 … tray; 4 … PLC; 10 … image processing device; 11 … camera; 12 … display; 13 … mouse; 14 … memory card; 112 … main memory; 114 … hard disk; 116 … camera interface; 116a … image buffer; 118 … input interface; 120 … display controller; 122 … PLC interface; 124 … communication interface; 126 … data reader/writer; 128 … bus; 20 … template creation device; 201 … three-dimensional data acquisition unit; 202 … distance image creation unit; 203 … normal vector calculation unit; 204 … normal vector quantization unit; 205 … template creation unit; 206 … template information output unit; 30 … object recognition processing device; 301 … image acquisition unit; 302 … normal vector calculation unit; 303 … normal vector quantization unit; 304 … template matching unit; 305 … recognition result output unit; 40 … storage device; 401 … template DB.
Detailed Description
Embodiments of the present invention will be described with reference to the accompanying drawings. The following embodiments are provided to facilitate understanding of the present invention and are not intended to limit its interpretation. Various modifications are possible without departing from the gist of the present invention. Embodiments in which a person skilled in the art replaces each element described below with an equivalent are also included in the scope of the present invention.
(integral construction of object recognition device)
According to an embodiment of the present invention, in processes such as template registration for template matching and object recognition, the normal information of the object, which forms part of the data representing the features of the image of the object observed from a predetermined viewpoint set for the object to be identified, is mapped onto reference regions that include a region corresponding to the vicinity of the axis passing through the viewpoint (at template registration) or the vicinity of the optical axis of the camera acquiring the input image of the object (at object recognition), and a quantized normal-direction feature quantity identifying the mapped reference region is acquired. A quantized normal-direction feature quantity that is robust against measurement error or noise in normal information near the camera optical axis can thereby be obtained. The overall configuration and application of the object recognition device according to an embodiment of the present invention will now be described with reference to Fig. 1.
The object recognition device 1 is installed in a system, such as a production line, that recognizes the objects 2 in a tray 3 using images captured by the camera 11. The objects 2 to be identified are piled in bulk on the tray 3. The object recognition device 1 captures images with the camera 11 at predetermined time intervals, executes, in the image processing device 10, processing to recognize the position and posture of each object 2 included in the image, and outputs the result to the PLC (programmable logic controller) 4, the display 12, or the like. The recognition result, the output of the object recognition device 1, is used, for example, to control a picking robot, a processing device, or a printing device, or to inspect or measure the objects 2.
(hardware constitution)
The hardware configuration of the object recognition apparatus 1 will be described with reference to fig. 2. In general, the object recognition apparatus 1 is constituted by a camera 11 and an image processing apparatus 10.
The camera 11 is an imaging device with which the image processing device 10 captures a digital image of the object 2; for example, a CMOS (Complementary Metal-Oxide-Semiconductor) camera or a CCD (Charge-Coupled Device) camera can be suitably used. The format of the input image (resolution, color or monochrome, still image or video, gradation, data format, and so on) is arbitrary and may be selected according to the type of the object 2 or the purpose of sensing. When a special image other than a visible-light image, such as an X-ray image or a thermographic image, is used for object recognition or inspection, a camera suited to that image may be used.
The image processing apparatus 10 includes a CPU110 corresponding to a hardware processor, a main memory 112 serving as a working memory, a hard disk 114 serving as a fixed memory unit, a camera interface 116, an input interface 118, a display controller 120, a PLC interface 122, a communication interface 124, and a data reader/writer 126. These various components are connected in data communication with each other via a bus 128.
The camera interface 116 mediates data transfer between the CPU 110 and the camera 11, and has an image buffer 116a for temporarily accumulating image data from the camera 11. The input interface 118 mediates data transfer between the CPU 110 and the input section, which includes the mouse 13, a keyboard, a touch panel, a jog controller, and the like. The display controller 120 is connected to the display 12, such as a liquid crystal monitor, and controls what it displays. The PLC interface 122 mediates data transfer between the CPU 110 and the PLC 4. The communication interface 124 mediates data transfer between the CPU 110 and a console, personal computer, server device, or the like. The data reader/writer 126 mediates data transfer between the CPU 110 and the memory card 14, a recording medium. Each of these interfaces is configured as hardware and is connected to the CPU 110 via a connection such as USB.
The image processing apparatus 10 may be a computer with a general-purpose architecture, in which the CPU 110 executes various processes by reading and executing programs stored in the hard disk 114 or the memory card 14. Such a program is provided stored on a computer-readable recording medium, such as the memory card 14 or an optical disc, or via a network or the like. The program of the present embodiment may be provided as a standalone application program or as a module incorporated into part of another program. Part or all of its processing may also be replaced by a dedicated circuit such as an ASIC.
(software constitution)
Fig. 3 shows the software configuration of the image processing apparatus 10. The CPU 110 reads and executes programs stored in the hard disk 114 or the memory card 14, whereby the image processing apparatus 10 operates as the processing units of the template creation device 20 and of the object recognition processing device 30. The image processing apparatus 10 also includes the storage device 40, which is implemented by the hard disk 114.
The template creation device 20 performs a template creation process that is utilized for the object recognition process. The template created by the template creation means 20 is registered in the template Database (DB) 401 of the storage means 40. The object recognition processing device 30 performs processing of recognizing an object in an image by performing template matching on the image captured by the camera 11 using a template registered in the template DB401.
The template referred to in this specification is data representing the image features of the object 2 to be identified. Any form may be used for the template; for example, an array form describing the feature quantities of a plurality of feature points in the image may be used. A feature point is a position, in image coordinates, showing a predetermined feature such as an object boundary or a curved or bent portion of an object contour in the image.
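The array form just mentioned might be sketched as follows; this layout is a minimal assumption for illustration, since the specification leaves the template format arbitrary.

```python
from dataclasses import dataclass

@dataclass
class TemplateFeature:
    x: int           # feature-point position in image coordinates
    y: int
    quantized: int   # quantized normal-direction feature quantity (bit mask)

# A template is an array of such entries; one template is kept per viewpoint.
template = [TemplateFeature(12, 34, 0b000000001),
            TemplateFeature(56, 78, 0b100000000)]
```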
The template creation device 20 includes: a three-dimensional data acquisition unit 201, a distance image creation unit 202, a normal vector calculation unit 203, a normal vector quantization unit 204, a template creation unit 205, and a template information output unit 206. As described above, the CPU110 reads and executes the program stored in the hard disk 114 or the memory card 14, thereby realizing the processing of each section.
The three-dimensional data acquisition unit 201 acquires three-dimensional data representing the three-dimensional shape of the object 2 to be identified. Any three-dimensional data from which the solid shape of the object 2 can be recognized may be acquired; in the present embodiment, three-dimensional CAD data is used. The three-dimensional data acquisition unit 201 can acquire the three-dimensional CAD data from an external three-dimensional CAD server or the like, or from the storage device 40.
The distance image creation unit 202 creates a distance image of the object 2 observed from a predetermined viewpoint set for the object 2 using the three-dimensional data acquired by the three-dimensional data acquisition unit 201. In the object recognition in three dimensions, even for the same object, there are cases where the appearance differs due to the viewpoint difference, and the distance image creating unit 202 creates a distance image of the object 2 observed from an arbitrary number of viewpoints according to the characteristics of the object 2 to be recognized.
The normal vector calculation unit 203 calculates normal vectors of feature points of the object 2 observed from a predetermined viewpoint set for the object 2, based on the three-dimensional data acquired by the three-dimensional data acquisition unit 201 or on the distance image for each viewpoint created by the distance image creation unit 202. The normal vector calculation unit 203 defines a plane from the three-dimensional data of a feature point and of the points around it, and calculates the normal vector of the defined plane. Any known method can be used for detecting feature points and calculating normal vectors, so a detailed description is omitted in this specification.
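A minimal sketch of such a plane-based normal computation, using the cross product of two edge vectors through a feature point and two of its neighbouring points (one of many known methods; a least-squares fit over a larger neighbourhood is equally possible):

```python
import math

def plane_normal(p, q, r):
    """Unit normal of the plane through three non-collinear 3-D points."""
    u = (q[0] - p[0], q[1] - p[1], q[2] - p[2])   # first edge vector
    v = (r[0] - p[0], r[1] - p[1], r[2] - p[2])   # second edge vector
    n = (u[1] * v[2] - u[2] * v[1],               # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)
```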
The normal vector quantization unit 204 quantizes the normal vectors calculated by the normal vector calculation unit 203. In the present embodiment, the normal vector quantization unit 204 quantizes the direction of a normal vector by mapping the normal vector on the unit sphere onto reference regions in the two-dimensional xy space shown in Fig. 4(b), and acquires a quantized normal-direction feature quantity. Here, the angle φ in Fig. 4(a) is the angle between the normal vector A and the z-axis.
The reference regions 1 to 8 of fig. 4 (b) correspond to the 8 parts obtained by dividing the XY plane passing through the center of the unit sphere of fig. 4 (a) into 8 equal parts. Further, in the present embodiment, a central reference region 9 covering the vicinity of the camera optical axis is set based on the angle Φ between the normal vector A and the z-axis. That is, the reference regions consist of the central reference region 9 and the surrounding reference regions 1 to 8 delimited by line segments extending radially at equal angular intervals around the z-axis. In the present embodiment, the circle of radius r = sin Φ obtained when the angle Φ is 10 degrees is set as the central reference region 9. As shown in fig. 4 (b), each reference region is assigned an identification number, and the normal vector quantization unit 204 acquires a feature quantity in which, of the 9 bits corresponding to the reference regions 1 to 9, the bit for the reference region to which the normal vector belongs is set.
In the present embodiment, the central reference region 9 is set with an angle Φ of 10 degrees, but Φ may be set to any value tolerable given the noise or measurement error, such as 5 degrees or 15 degrees. Likewise, although the present embodiment divides the XY plane passing through the center of the unit sphere into 8 equal parts, any number of reference regions may be used; for example, the plane may be divided into 12 equal parts, with a central reference region based on the angle Φ added as the 13th. Such a 13-bit feature quantity, formed by adding a central reference region to the 12 equal divisions of the XY plane passing through the center of the unit sphere, still fits in a 2-byte storage space, and loses less information to quantization than the 9-bit feature quantity.
As shown in fig. 4 (b), the normal vector quantization unit 204 quantizes the normal vector 1 of the feature point 1 and acquires a quantized normal direction feature quantity (010000000). Similarly, the normal vector quantization unit 204 quantizes the normal vector 2 of the feature point 2 and acquires a quantized normal direction feature quantity (000000100). Similarly, the normal vector quantization unit 204 quantizes the normal vector 3 of the feature point 3 and acquires a quantized normal direction feature quantity (000000001).
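The quantization described above can be sketched as follows. The 10-degree central region and the one-hot 9-bit feature follow the text; the assignment of identification numbers to the surrounding sectors in fig. 4 (b) is not specified, so the sector-to-bit mapping here is an assumption (region 9 is taken as the rightmost bit, matching the example (000000001)):

```python
import math

def quantize_normal(n, phi_deg=10.0, sectors=8):
    """Map a unit normal vector onto the reference regions of fig. 4 (b):
    the central region (region 9) if the angle to the z-axis is within
    phi_deg, otherwise one of `sectors` equal azimuth sectors (regions 1..8).
    Returns a one-hot bit string; region 1 is the leftmost bit."""
    x, y, z = n
    # Angle between the normal vector and the z-axis (camera optical axis).
    phi = math.degrees(math.acos(max(-1.0, min(1.0, z))))
    bits = ['0'] * (sectors + 1)
    if phi <= phi_deg:
        # End point falls inside the central circle of radius sin(phi_deg).
        bits[sectors] = '1'
    else:
        # Azimuth angle selects one of the 8 radial sectors.
        theta = math.atan2(y, x) % (2 * math.pi)
        bits[int(theta / (2 * math.pi / sectors))] = '1'
    return ''.join(bits)

# A normal pointing straight along the optical axis lands in region 9.
center = quantize_normal((0.0, 0.0, 1.0))   # → '000000001'
```

Swapping `sectors=8` for 12 (and widening the string to 13 bits) gives the 13-region variant mentioned above.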
In one embodiment, the normal vector quantization unit 204 may also allow the normal vector to be quantized into reference regions surrounding the reference region to which it belongs. For example, the normal vector quantization unit 204 may permit quantization not only into the reference region to which the normal vector belongs but also into the other reference region closest to it.
The larger the allowable surrounding region, the more robust the feature quantity becomes; but if it is made excessively large, erroneous matches increase and recognition accuracy falls. The allowable reference regions should therefore be set with this trade-off between feature-quantity robustness and erroneous recognition in mind.
The template creation section 205 creates a template for each viewpoint based on the quantized normal direction feature amounts acquired by the normal vector quantization section 204. The template may include any number of other feature amounts in addition to the quantized normal direction feature amount.
The template information output unit 206 registers the template created by the template creation unit 205 in the template DB 401 of the storage device 40.
The object recognition processing device 30 includes: an image acquisition unit 301, a normal vector calculation unit 302, a normal vector quantization unit 303, a template matching unit 304, and a recognition result output unit 305. As described above, the CPU110 reads and executes the program stored in the hard disk 114 or the memory card 14, thereby realizing the processing of each section.
The image acquisition section 301 acquires an input image from the camera 11. The input image may be any data from which normal vectors can be calculated, such as a distance image.
The normal vector calculation unit 302 calculates a normal vector of the feature point in the input image acquired by the image acquisition unit 301.
The normal vector quantization unit 303 quantizes the normal vectors calculated by the normal vector calculation unit 302. The normal vector quantization unit 303 quantizes the normal vectors using the reference regions used by the normal vector quantization unit 204 at template creation time, and acquires the quantized normal direction feature quantities. In the present embodiment, 9 reference regions are available: the 8 obtained by dividing the XY plane passing through the center of the unit sphere into 8 equal parts, plus the central reference region 9 set based on the angle Φ between the normal vector and the z-axis.
Like the normal vector quantization unit 204, in one embodiment the normal vector quantization unit 303 may also allow the normal vector to be quantized into reference regions surrounding the reference region to which it belongs. This feature-quantity robustness of permitting quantization into surrounding reference regions can be implemented at either timing: at template creation or at object recognition.
The template matching unit 304 searches for the position of the object 2 in the input image based on the templates registered in the template DB 401 and the quantized normal direction feature amounts acquired by the normal vector quantization unit 303, and obtains one or more collation results. That is, the template matching unit 304 repeats the search processing as many times as there are templates registered in the template DB 401. In the present embodiment, for every template registered in the template DB, the coordinates at which the object 2 is identified in the input image and a collation score expressing the similarity of image features between the input image and the template at those coordinates are obtained as a collation result.
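A minimal sketch of how a collation score might be computed from the quantized features, assuming the score is the fraction of template feature points whose bit mask overlaps the mask observed at the corresponding position (the patent does not define the exact scoring formula, so this rule is an assumption):

```python
def collation_score(template, image_feats):
    """Fraction of template feature points whose quantized normal-direction
    feature shares at least one set bit with the feature observed at the
    corresponding position in the input image.

    template / image_feats: equal-length lists of int bit masks
    (one bit per reference region, as in the 9-bit features above)."""
    hits = sum(1 for t, f in zip(template, image_feats) if t & f)
    return hits / len(template)

# First feature point matches (same region), second does not.
score = collation_score([0b010000000, 0b000000100],
                        [0b010000000, 0b100000000])   # → 0.5
```

The bitwise AND makes this test cheap, which is why one-hot (or few-hot) bit masks are a convenient quantized representation for template matching.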
The recognition result output unit 305 outputs a final recognition result based on the one or more collation results obtained by the template matching unit 304. In the present embodiment, when different collation results are obtained for the same coordinates, the recognition result output unit 305 selects the template with the highest collation score at those coordinates and outputs the recognition result.
The storage device 40 is provided with a template DB401. The template DB401 stores templates for respective viewpoints.
(template registration processing)
Next, the template registration process performed by the template creation apparatus 20 will be described along the flowchart of fig. 5. The template registration process shown in fig. 5 is executed when the image processing apparatus 10 is newly installed or when the recognition target object 2 is changed.
In step S501, the three-dimensional data acquisition unit 201 of the template creation device 20 acquires three-dimensional data of the object 2 to be identified. For example, in the present embodiment, the three-dimensional data acquisition unit 201 acquires three-dimensional CAD data from an external three-dimensional CAD server.
Next, in step S502, the distance image creation unit 202 of the template creation device 20 creates distance images of the object observed from predetermined viewpoints set for the object, using the three-dimensional data acquired by the three-dimensional data acquisition unit 201. In the present embodiment, the distance image creation unit 202 places viewpoints at the 642 vertices of a virtual 1280-face polyhedron centered on the object 2, and creates a distance image of the object observed from each viewpoint.
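The counts 642 and 1280 are consistent with an icosahedron whose triangular faces are subdivided three times (the standard geodesic-sphere construction; the identities below are general geometry, not stated in the patent):

```python
def icosphere_counts(subdivisions):
    """Vertex and face counts of an icosahedron whose triangular faces are
    recursively split into four.  Closed forms: V = 10*4^n + 2, F = 20*4^n."""
    return 10 * 4**subdivisions + 2, 20 * 4**subdivisions

vertices, faces = icosphere_counts(3)   # → (642, 1280)
```

Placing viewpoints at the vertices of such a polyhedron gives a near-uniform sampling of viewing directions around the object, which is why this construction is common for view-based template generation.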
Further, in S503, the normal vector calculation unit 203 of the template creation device 20 calculates the normal vectors of the feature points of the object 2 observed from the predetermined viewpoints set for the object, based on the distance images of the respective viewpoints created by the distance image creation unit 202. In the present embodiment, the normal vector calculation unit 203 calculates, for each of the 642 distance images created by the distance image creation unit 202, the normal vectors of the feature points of the object 2 observed from that viewpoint. As described above, in an alternative embodiment, the normal vectors of the feature points of the object 2 observed from a given viewpoint may instead be calculated based on the three-dimensional data acquired by the three-dimensional data acquisition unit 201.
In S504, the normal vector quantization unit 204 quantizes the normal vectors calculated by the normal vector calculation unit 203. In the present embodiment, the normal vector quantization unit 204 quantizes the direction of each normal vector by mapping the normal vector on the unit sphere onto a reference region in the two-dimensional xy space shown in fig. 4 (b), and acquires a quantized normal direction feature quantity. Here, the angle Φ of fig. 4 (a) is the angle between the normal vector A and the z-axis. In the present embodiment, 9 reference regions are available: the 8 obtained by dividing the XY plane passing through the center of the unit sphere into 8 equal parts, plus the central reference region 9 whose radius r is sin Φ with the angle Φ set to 10 degrees. That is, the reference regions consist of the central reference region 9 and the surrounding reference regions 1 to 8 delimited by line segments extending radially at equal angular intervals around the z-axis.
Thereafter, in S505, the template creation section 205 of the template creation device 20 creates a template for each viewpoint based on the quantized normal direction feature amounts acquired by the normal vector quantization section 204. In the present embodiment, the template creation unit 205 creates a template for each of the 642 viewpoints for which distance images were created.
Finally, in S506, the template information output unit 206 of the template creation device 20 registers the templates created by the template creation unit 205 in the template DB 401 of the storage device 40. In the present embodiment, data in the form of an array listing the quantized normal direction feature amounts of the plurality of feature points acquired in S504 is registered in the template DB 401 for each viewpoint.
(object identification processing)
Next, the object recognition processing performed by the object recognition processing device 30 will be described along the flowchart of fig. 6.
In step S601, the image acquisition unit 301 of the object recognition processing device 30 acquires an input image from the camera 11. Next, in step S602, the normal vector calculation unit 302 of the object recognition processing device 30 calculates the normal vector of the feature point from the input image acquired by the image acquisition unit 301.
Further, in S603, the normal vector quantization unit 303 of the object recognition processing device 30 quantizes the normal vectors calculated by the normal vector calculation unit 302. The normal vector quantization unit 303 quantizes the normal vectors using the reference regions used by the normal vector quantization unit 204 in S504 at template creation time, and acquires the quantized normal direction feature quantities. In the present embodiment, the normal vector quantization unit 303 uses 9 reference regions: the 8 obtained by dividing the XY plane passing through the center of the unit sphere into 8 equal parts, plus the central reference region 9 based on the angle Φ between the normal vector and the z-axis.
Then, in step S604, the template matching unit 304 of the object recognition processing device 30 searches for the position of the object 2 in the input image based on the templates registered in the template DB 401 and the quantized normal direction feature amounts acquired by the normal vector quantization unit 303, and obtains one or more collation results. That is, the template matching unit 304 repeats the search processing as many times as there are templates registered in the template DB 401. In the present embodiment, for each of the 642 templates registered in the template DB, the coordinates at which the object 2 is identified in the input image and a collation score expressing the similarity of image features between the input image and the template at those coordinates are obtained as a collation result.
Finally, in step S605, the recognition result output unit 305 of the object recognition processing device 30 unifies the one or more collation results obtained by the template matching unit 304 and outputs a final recognition result. In the present embodiment, among the 642 collation results obtained from the 642 templates, when different collation results are output for the same coordinates, the recognition result output unit 305 adopts the template with the highest collation score at those coordinates and outputs the recognition result.
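The unification step in S605 can be sketched as follows, assuming each collation result is a (coordinate, template id, score) tuple (the concrete data layout is not given in the text):

```python
def unify_results(collations):
    """Keep, for each coordinate, only the collation result with the highest
    score — a sketch of the final unification step.  `collations` is an
    iterable of (coordinate, template_id, score) tuples."""
    best = {}
    for coord, template_id, score in collations:
        if coord not in best or score > best[coord][1]:
            best[coord] = (template_id, score)
    return best

final = unify_results([((10, 20), 0, 0.8),
                       ((10, 20), 5, 0.9),    # higher score at same coordinate wins
                       ((30, 40), 2, 0.7)])
# → {(10, 20): (5, 0.9), (30, 40): (2, 0.7)}
```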
(additional embodiment)
In an additional embodiment, in S504 the normal vector quantization unit 204 of the template creation device 20 may also quantize the normal vector into reference regions surrounding the reference region to which it belongs. For example, by permitting quantization of the normal vector 1 of fig. 4 (b) not only into the reference region 2 to which it belongs but also into the reference region 3, which among the other reference regions has the shortest distance from the end point of the normal vector 1, the normal vector quantization unit 204 obtains the quantized normal direction feature quantity (011000000). Similarly, the normal vector quantization unit 204 quantizes the normal vector 2 to acquire a quantized normal direction feature quantity (000001100), and quantizes the normal vector 3 to acquire a quantized normal direction feature quantity (100000001).
In this way, even when the feature quantity acquired in S603 at object recognition time for the normal vector 1 of feature point 1 in the input image becomes (001000000) due to noise or measurement error, the template matching unit 304 still judges the normal vector 1 to match in S604 and can include it in the collation score.
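The effect described here can be illustrated with the bit masks from the example; `widen` is a hypothetical helper name, and the neighbor choice (region 3 for normal vector 1) is taken from the text:

```python
def widen(feature, neighbor):
    """Tolerant quantization: also set the bit of the nearest neighboring
    reference region, as in the additional embodiment (bit masks as ints)."""
    return feature | neighbor

template_bits = widen(0b010000000, 0b001000000)   # normal vector 1 → (011000000)
observed_bits = 0b001000000                        # shifted into region 3 by noise
match = bool(template_bits & observed_bits)        # True: still counts as a match
```

Without the widening, `0b010000000 & 0b001000000` is zero and the noisy observation would be rejected; the extra bit buys robustness at the cost of the coarser discrimination discussed earlier.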
In the present embodiment, the normal vector quantization unit 204 was described as permitting quantization not only into the reference region to which the normal vector belongs but also into the single closest other reference region, but in an alternative embodiment quantization may be permitted into the two closest other reference regions. Further, rather than uniformly permitting quantization into other reference regions for all normal vectors, the normal vector quantization unit 204 may permit it only when a predetermined condition is satisfied, for example when the shortest distance from the end point of the normal vector to the other reference region is equal to or smaller than a predetermined threshold value.
In the present embodiment, the robustness of the feature quantity was implemented in the normal vector quantization unit 204 of the template creation apparatus 20, but in an alternative embodiment it may instead be realized in the normal vector quantization unit 303 of the object recognition processing apparatus 30. Implementing the feature-quantity robustness on the template creation device side allows the permitted surrounding reference regions to be set without increasing the processing load at object recognition time. On the other hand, implementing it on the object recognition processing device side allows the permitted surrounding reference regions to be set according to conditions unique to each object recognition processing device. The robustness of the feature quantity can thus be implemented at either timing, template creation or object recognition, and the invention does not preclude implementing it at both.
The programs for executing the respective processes described in the present specification may be stored in a recording medium. By using this recording medium, the above-described program can be installed on the image processing apparatus 10. Here, the recording medium storing the program may be a non-transitory recording medium. The non-transitory recording medium is not particularly limited, and may be, for example, a recording medium such as a CD-ROM.
In addition, some or all of the above embodiments may be described as the following appendix, but are not limited to the following.
(appendix 1)
A template creation device is provided with at least one memory and at least one hardware processor connected to the memory, wherein the hardware processor acquires three-dimensional data representing the three-dimensional shape of an object to be identified, calculates a normal vector of a feature point of the object obtained by observing the object from a predetermined viewpoint set for the object based on the three-dimensional data, quantizes the calculated normal vector by mapping the normal vector onto a reference region on a plane orthogonal to an axis passing through the viewpoint, thereby acquiring a quantized normal direction feature amount, the reference region includes a central reference region corresponding to the vicinity of the axis and a reference region around the central reference region, creates a template for object identification by template matching for each viewpoint based on the acquired quantized normal direction feature amount, and outputs the created template.
(appendix 2)
A template creation method includes acquiring three-dimensional data representing a three-dimensional shape of an object to be identified by at least one hardware processor, calculating, by the hardware processor, a normal vector of feature points of the object obtained by observing a predetermined viewpoint set for the object based on the three-dimensional data, quantizing, by the hardware processor, the calculated normal vector by mapping the normal vector onto a reference region on a plane orthogonal to an axis passing through the viewpoint, thereby acquiring quantized normal direction feature values, the reference region including a central reference region corresponding to the vicinity of the axis and a reference region around the central reference region, creating, by the hardware processor, a template for object identification by template matching for each viewpoint based on the acquired quantized normal direction feature values, and outputting the created template by the hardware processor.

Claims (8)

1. A template creation device is characterized by comprising:
a three-dimensional data acquisition unit that acquires three-dimensional data representing a three-dimensional shape of an object to be identified;
a normal vector calculation unit that calculates a normal vector of a feature point of the object, the feature point being observed from a predetermined viewpoint set for the object, based on the three-dimensional data;
a normal vector quantization unit that obtains a quantized normal direction feature amount by quantizing the calculated normal vector by mapping the calculated normal vector to a reference region on a plane orthogonal to an axis passing through the viewpoint, the reference region including a center reference region corresponding to the vicinity of the axis and a surrounding reference region of the center reference region, the surrounding reference region being formed of line segments extending radially at equal intervals around the axis;
a template creation unit that creates a template for object recognition by template matching for each viewpoint based on the obtained quantized normal direction feature amount; and
a template information output unit for outputting the created template,
the central reference region is set based on the angle phi between the normal vector and the axis,
the center reference area is a circle having sin phi as a radius, which is obtained when the angle phi is a predetermined angle.
2. The template creation apparatus according to claim 1, wherein,
the surrounding reference region includes a plurality of reference regions corresponding to a plurality of portions obtained by equally dividing the three-dimensional unit sphere.
3. The template creation apparatus according to claim 1, wherein,
the normal vector quantization unit also allows the normal vector to be quantized in a reference region around a reference region to which the normal vector to be quantized belongs.
4. An object recognition processing apparatus for recognizing an object using a template, the object recognition processing apparatus comprising:
an image acquisition unit that acquires an input image;
a normal vector calculation unit that calculates a normal vector of the feature point from the input image;
a normal vector quantization unit that obtains a quantized normal direction feature amount by quantizing the calculated normal vector by mapping the calculated normal vector to a reference region on a plane orthogonal to an optical axis of a camera that has acquired the input image, the reference region including a center reference region corresponding to the vicinity of the optical axis and a surrounding reference region of the center reference region, the surrounding reference region being formed of line segments extending radially at equal intervals around the optical axis;
a template matching unit configured to search a position of the object in the input image based on the template and the quantized normal direction feature amount acquired by the normal vector quantization unit, and to obtain a comparison result; and
a recognition result output unit that outputs a recognition result based on the comparison result,
the center reference area is set based on an angle Φ between a normal vector and the optical axis, and the center reference area is a circle having sin Φ as a radius, which is obtained when the angle Φ is a predetermined angle.
5. The object recognition processing device according to claim 4, wherein,
the surrounding reference region includes a plurality of reference regions corresponding to a plurality of portions obtained by equally dividing the three-dimensional unit sphere.
6. The object recognition processing device according to claim 4, wherein,
the normal vector quantization unit also allows the normal vector to be quantized in a reference region around a reference region to which the normal vector to be quantized belongs.
7. A template creation method executed by a computer, the template creation method characterized by comprising the steps of:
acquiring three-dimensional data representing a three-dimensional shape of an object of the recognition target;
calculating a normal vector of a feature point of the object observed from a predetermined viewpoint set for the object based on the three-dimensional data;
obtaining a quantized normal direction feature amount by quantizing the calculated normal vector by mapping the calculated normal vector to a reference area on a plane orthogonal to an axis passing through the viewpoint, the reference area including a center reference area corresponding to the vicinity of the axis and a surrounding reference area of the center reference area, the surrounding reference area being formed of line segments extending at equal intervals radially about the axis;
creating templates for object recognition by template matching for each viewpoint based on the obtained quantized normal direction feature quantity; and
outputting the created template,
the central reference region is set based on the angle phi between the normal vector and the axis,
the center reference area is a circle having sin phi as a radius, which is obtained when the angle phi is a predetermined angle.
8. A recording medium, characterized in that a program for causing a computer to execute the steps of:
acquiring three-dimensional data representing a three-dimensional shape of an object of the recognition target;
calculating a normal vector of a feature point of the object observed from a predetermined viewpoint set for the object based on the three-dimensional data;
obtaining a quantized normal direction feature amount by quantizing the calculated normal vector by mapping the calculated normal vector to a reference area on a plane orthogonal to an axis passing through the viewpoint, the reference area including a center reference area corresponding to the vicinity of the axis and a surrounding reference area of the center reference area, the surrounding reference area being formed of line segments extending at equal intervals radially about the axis;
creating templates for object recognition by template matching for each viewpoint based on the obtained quantized normal direction feature quantity; and
outputting the created template,
the central reference region is set based on the angle phi between the normal vector and the axis,
the center reference area is a circle having sin phi as a radius, which is obtained when the angle phi is a predetermined angle.
CN201810759087.9A 2017-09-22 2018-07-11 Template creation device and method, object recognition processing device, and recording medium Active CN109543705B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-182517 2017-09-22
JP2017182517A JP6889865B2 (en) 2017-09-22 2017-09-22 Template creation device, object recognition processing device, template creation method and program

Publications (2)

Publication Number Publication Date
CN109543705A CN109543705A (en) 2019-03-29
CN109543705B true CN109543705B (en) 2023-05-12

Family

ID=62948018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810759087.9A Active CN109543705B (en) 2017-09-22 2018-07-11 Template creation device and method, object recognition processing device, and recording medium

Country Status (4)

Country Link
US (1) US10776657B2 (en)
EP (1) EP3460715B1 (en)
JP (1) JP6889865B2 (en)
CN (1) CN109543705B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6333871B2 (en) * 2016-02-25 2018-05-30 ファナック株式会社 Image processing apparatus for displaying an object detected from an input image
JP6968342B2 (en) * 2017-12-25 2021-11-17 オムロン株式会社 Object recognition processing device, object recognition processing method and program
CN110472538B (en) * 2019-07-31 2023-06-06 河南冠图信息科技有限公司 Image recognition method and storage medium of electronic drawing
US11023770B2 (en) 2019-09-23 2021-06-01 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Systems and methods for obtaining templates for tessellated images
WO2021210513A1 (en) * 2020-04-13 2021-10-21 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
CN112033307B (en) * 2020-07-15 2021-08-03 成都飞机工业(集团)有限责任公司 Farnet vector measuring device
CN112101448B (en) * 2020-09-10 2021-09-21 敬科(深圳)机器人科技有限公司 Screen image recognition method, device and system and readable storage medium
CN112179353B (en) * 2020-09-30 2023-07-18 深圳银星智能集团股份有限公司 Positioning method and device of self-moving robot, robot and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021569A (en) * 2013-02-28 2014-09-03 杭州海康威视数字技术股份有限公司 Human body target locking tracking device and method
JP2015173344A (en) * 2014-03-11 2015-10-01 三菱電機株式会社 object recognition device
JP2015225453A (en) * 2014-05-27 2015-12-14 村田機械株式会社 Object recognition device and object recognition method
CN105574063A (en) * 2015-08-24 2016-05-11 西安电子科技大学 Image retrieval method based on visual saliency
CN106062820A (en) * 2014-03-14 2016-10-26 欧姆龙株式会社 Image recognition device, image sensor, and image recognition method
WO2016175150A1 (en) * 2015-04-28 2016-11-03 オムロン株式会社 Template creation device and template creation method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5271031B2 (en) 2008-08-09 2013-08-21 株式会社キーエンス Image data compression method, pattern model positioning method in image processing, image processing apparatus, image processing program, and computer-readable recording medium
JP5254893B2 (en) * 2009-06-26 2013-08-07 キヤノン株式会社 Image conversion method and apparatus, and pattern identification method and apparatus
US8774510B2 (en) * 2012-09-11 2014-07-08 Sharp Laboratories Of America, Inc. Template matching with histogram of gradient orientations
JP2015079374A (en) 2013-10-17 2015-04-23 セイコーエプソン株式会社 Object recognition device, object recognition method, object recognition program, robot system, and robot
JP6334735B2 (en) * 2014-05-06 2018-05-30 ナント・ホールデイングス・アイ・ピー・エル・エル・シー Image feature detection using edge vectors
CA2983880A1 (en) * 2015-05-05 2016-11-10 Kyndi, Inc. Quanton representation for emulating quantum-like computation on classical processors
JP6968342B2 (en) * 2017-12-25 2021-11-17 オムロン株式会社 Object recognition processing device, object recognition processing method and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Recognizing Objects in Range Data Using; Andrea Frome et al.; 《ECCV 2004》; 2004-01-01; full text *

Also Published As

Publication number Publication date
US20190095749A1 (en) 2019-03-28
EP3460715A1 (en) 2019-03-27
JP2019057227A (en) 2019-04-11
CN109543705A (en) 2019-03-29
JP6889865B2 (en) 2021-06-18
US10776657B2 (en) 2020-09-15
EP3460715B1 (en) 2023-08-23

JP2017076260A (en) Image processing apparatus
JP2018055173A (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant