WO2021020305A1 - Image processing system, machine learning device, image processor, and imaging device - Google Patents

Image processing system, machine learning device, image processor, and imaging device

Info

Publication number
WO2021020305A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
captured image
measurement position
distance
distance measurement
Prior art date
Application number
PCT/JP2020/028566
Other languages
English (en)
Japanese (ja)
Inventor
正貴 田野
正義 近藤
朋浩 濱口
敬祐 大川
Original Assignee
京セラ株式会社 (Kyocera Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2019139264A external-priority patent/JP7309506B2/ja
Priority claimed from JP2019139266A external-priority patent/JP7401218B2/ja
Application filed by 京セラ株式会社 (Kyocera Corporation)
Publication of WO2021020305A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules

Definitions

  • the present disclosure relates to an image processing system, a machine learning device, an image processing device, and an imaging device.
  • the image processing system includes a machine learning device; the machine learning device acquires a captured image for learning and a distance measurement position corresponding to that captured image as a set of input data and a label, and learns the distance measurement position corresponding to the subject included in a captured image by learning the set of the input data and the label as training data.
  • the machine learning device includes at least one processor; the processor acquires a captured image for learning and a distance measurement position corresponding to that captured image as a set of input data and a label, and learns the distance measurement position corresponding to the subject included in a captured image by learning the set of the input data and the label as training data.
  • the image pickup apparatus includes a camera, a processor, and a communication interface.
  • the communication interface acquires a trained model from a machine learning device that has learned the distance measurement position according to the subject included in a captured image.
  • the processor identifies the distance measurement position according to the subject included in the captured image, based on the captured image captured by the camera and the trained model, and measures the distance indicated by the distance measurement position.
  • the image processing system includes an image processor, and the image processor detects a feature component of the captured image and specifies a distance measurement position according to the subject based on the feature component.
  • the image processor includes at least one processor, which detects a feature component of the captured image and specifies a distance measurement position according to the subject based on the feature component.
  • the image pickup apparatus includes the image processor according to the second aspect, a camera, and a processor; the image processor detects a feature component of the captured image captured by the camera, and specifies the distance measurement position according to the subject based on that feature component.
  • FIG. 1 is a diagram showing an example of the overall configuration of the image processing system 1 according to the first embodiment.
  • FIG. 2 is a diagram showing an example of a functional block of the machine learning device 20 according to the first embodiment.
  • FIG. 3 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 4 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 5 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 6 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 7 is a diagram showing an example of a functional block of the management server 10 according to the first embodiment.
  • FIG. 8 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the first embodiment.
  • FIG. 9 is a diagram for explaining an example of the function of the image pickup apparatus 30 shown in FIG.
  • FIG. 10 is a diagram for explaining an example of the function of the image pickup apparatus 30 shown in FIG.
  • FIG. 11 is a sequence diagram showing an example of the operation of the image processing system 1 according to the first embodiment.
  • FIG. 12 is a sequence diagram showing an example of the operation of the image processing system 1 according to the second embodiment.
  • FIG. 13 is a diagram showing an example of the overall configuration of the image processing system 1 according to the third embodiment.
  • FIG. 14 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the third embodiment.
  • FIG. 15 is a sequence diagram showing an example of the operation of the image processing system 1 according to the third embodiment.
  • FIG. 16 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fourth embodiment.
  • FIG. 17 is a diagram showing an example of a functional block of the management server 10 according to the fourth embodiment.
  • FIG. 18 is a diagram showing an example of a functional block of the image processor 50 according to the fourth embodiment.
  • FIG. 19 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fourth embodiment.
  • FIG. 20 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fifth embodiment.
  • FIG. 21 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the fifth embodiment.
  • FIG. 22 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fifth embodiment.
  • FIG. 23 is a diagram showing an example of the overall configuration of the image processing system 1 according to the sixth embodiment.
  • FIG. 24 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the sixth embodiment.
  • FIG. 25 is a sequence diagram showing an example of the operation of the image processing system 1 according to the sixth embodiment.
  • the above-mentioned electronic camera had the problem that the size of the subject could not be calculated automatically.
  • the present disclosure has been made in view of the above-mentioned problems, and an object of the present disclosure is to make it possible to measure a desired length of a subject included in a captured image with a simpler operation.
  • FIG. 1 is a diagram showing an example of the overall configuration of the image processing system 1 according to the first embodiment.
  • the image processing system 1 according to the first embodiment includes a management server 10, an image pickup device 30, an exhibition device 40, and a market 2 composed of a communication network.
  • the management server 10 includes a machine learning device 20.
  • the machine learning device 20 has an acquisition unit 21, a processor 22, and a storage unit 23.
  • the acquisition unit 21 is configured to acquire the captured image for learning and the distance measurement position corresponding to the captured image for learning as a set of input data and a label.
  • the processor 22 is configured to learn the distance measurement position according to the subject included in the captured image by learning the set of the input data and the label as training data.
  • the processor 22 is configured to generate a trained model using the training data (learning data set), which consists of pairs of a captured image (the input data) and a distance measurement position (the output label).
  • the processor 22 may be configured to process the above-mentioned training data with a multi-layer structure, that is, to generate the trained model by deep learning.
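  • As a concrete illustration (not part of the disclosure) of how such a trained model could be produced by deep learning, the following minimal sketch regresses two-dimensional distance measurement positions from a captured image; the network shape, the use of a ResNet-18 backbone, and names such as MeasurementPointModel are assumptions for illustration only.

```python
# Sketch (assumption): a deep model that regresses NUM_POINTS 2D distance
# measurement positions from a captured image, trained on (image, label) pairs.
import torch
import torch.nn as nn
from torchvision import models

NUM_POINTS = 10  # e.g. points 1-10 learned for a T-shirt

class MeasurementPointModel(nn.Module):
    def __init__(self, num_points: int = NUM_POINTS):
        super().__init__()
        self.num_points = num_points
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_points * 2)
        self.backbone = backbone

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # (batch, 3, H, W) -> (batch, num_points, 2) normalized coordinates
        return self.backbone(image).view(-1, self.num_points, 2)

def train_step(model, optimizer, images, labels):
    """One supervised step on an (input data, label) batch: images are the
    captured images for learning, labels the distance measurement positions."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

model = MeasurementPointModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(4, 3, 224, 224)    # dummy captured images
labels = torch.rand(4, NUM_POINTS, 2)  # dummy labeled positions
print(train_step(model, optimizer, images, labels))
```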
  • the input data may include identification data indicating the category of the subject.
  • the subject category may be, for example, a top such as a T-shirt.
  • the subject category may be, for example, a lower garment (bottom) such as pants and/or a skirt.
  • the subject category may be, for example, furniture, home appliances, or a three-dimensional object such as a bag.
  • the distance measurement position may include at least a first point and a second point different from the first point. Further, the distance measurement position may be a line segment including a first point and a second point different from the first point. The line segment may be a straight line or a curved line.
  • the processor 22 may be configured to learn, as distance measurement positions according to a subject whose captured image (input data) includes identification data indicating a T-shirt as the category, at least one of: points 5 and 6 for measuring the length; points 1 and 9 (or points 2 and 10) for measuring the sleeve length; points 1, 5 and 9 (or points 2, 5 and 10) for measuring the sleeve length taken from the center back (yuki length); points 3 and 4 for measuring the width of the body (chest circumference); points 7 and 8 for measuring the waist (waist circumference); and points 9 and 10 for measuring the shoulder width.
  • when the captured image for measurement includes a top, the processor 22 may learn the distance measurement positions so that at least one dimension among the length, sleeve length, yuki length, width of the body (that is, chest circumference), waist (that is, waist circumference), and shoulder width is measured.
  • the waist referred to here is the circumference at the narrowest part of the waist portion of the upper garment.
  • points 1 and 2 are points located above the tip of the sleeve (that is, the tip of the sleeve on the shoulder side).
  • Points 3 and 4 are points on the lower side of the base of the sleeve (that is, the base of the sleeve on the side).
  • Point 5 is a point located at the base of the collar and in the center of the subject Z.
  • Point 6 is a point located at the tip of the hem and in the center of the subject Z.
  • Point 7 is, among the points on the boundary line X1 lying between point 3 and point A (the one of the two outermost hem points A and B that is nearer to point 3), the point closest to the center of the subject Z.
  • Point 8 is, among the points on the boundary line X1 lying between point 4 and point B (the one of the two outermost hem points A and B that is nearer to point 4), the point closest to the center of the subject Z.
  • Points 9 and 10 are points located above the base of the sleeve (ie, the base of the sleeve on the shoulder side).
  • the processor 22 may be configured to learn, as distance measurement positions according to a subject whose captured image (input data) includes identification data indicating pants as the category, at least one of: points 1 and 5 (or points 2 and 6) for measuring the rise; points 1 and 3 (or points 2 and 4) for measuring the inseam; points 3 and 5 (or points 4 and 6) for measuring the total length; points 1 and 14 (or points 2 and 13) for measuring the width (that is, thigh circumference); points 5 and 6 for measuring the waist (that is, waist circumference); points 7 and 8 (or points 9 and 10) for measuring the knee width; and points 3 and 11 (or points 4 and 12) for measuring the hem width.
  • when the captured image for measurement includes a lower garment, the processor 22 may learn the distance measurement positions so that at least one dimension among the rise, inseam, total length, waist (that is, waist circumference), width (that is, thigh circumference), knee width, and hem width is measured.
  • points 4 and 12 are points located at both ends of the hem portion.
  • Points 2 and 13 are points located at both ends of the portion corresponding to the wearer's thighs.
  • Points 9 and 10 are points located at both ends of the portion corresponding to the wearer's knee.
  • Points 5 and 6 are points located at both ends of the portion corresponding to the wearer's torso.
  • the processor 22 may be configured to learn, as distance measurement positions according to a subject whose captured image (input data) includes identification data indicating a bag as the category, at least one of: points 1 and 2 for measuring the width on the bottom surface; points 2 and 3 for measuring the depth on the bottom surface; points 1 and 6, points 2 and 5, or points 3 and 4 for measuring the height; points 5 and 6 for measuring the width on the upper surface; and points 4 and 5 for measuring the depth on the upper surface.
  • when the captured image for measurement includes a three-dimensional object, the processor 22 may learn the distance measurement positions so that at least one dimension among the height, width, and depth is measured.
  • the three-dimensional object may include at least one of furniture, home appliances and a bag.
  • the processor 22 may be configured to learn points for measuring the height of humans or animals, or the size of fish or plants.
  • the processor 22 may be configured to correct, based on a user operation, the distance measurement position output in response to the acquisition of a captured image for measurement, and to perform further learning using the set of that captured image and the corrected distance measurement position as training data.
  • the processor 22 may calculate a reward based on whether the distance measurement position output in response to the acquisition of the captured image for measurement was corrected by a user operation, and may update the function that identifies the distance measurement position based on that reward. That is, the processor 22 may be configured to perform reinforcement learning according to the presence or absence of a correction of the distance measurement position by the user.
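  • A minimal sketch of how such correction-driven further learning might look, assuming a keypoint-regression model like the one sketched above; the function name, the reward values, and the use of a plain supervised update are illustrative assumptions, not the disclosed method.

```python
# Sketch (assumption): fold a user's correction back in as training data and
# derive a simple reward from whether the output needed correcting at all.
import torch
import torch.nn.functional as F

def feedback_update(model, optimizer, image, corrected_positions=None):
    """image: (1, 3, H, W); corrected_positions: (1, N, 2) tensor or None.
    Returns a reward that is positive when no correction was needed."""
    if corrected_positions is None:
        return 1.0  # the output was accepted as-is
    optimizer.zero_grad()
    loss = F.mse_loss(model(image), corrected_positions)
    loss.backward()
    optimizer.step()
    return -1.0  # the output had to be corrected
```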
  • the storage unit 23 is composed of a storage device including a RAM (Random Access Memory) and/or a ROM (Read Only Memory), or an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the trained model generated by the processor 22.
  • the management server 10 includes the machine learning device 20, a communication interface 11, and a processor 12.
  • the communication interface 11 is configured to send and receive predetermined information to and from the image pickup apparatus 30 using a wireless line or a wired line.
  • the communication interface 11 is configured to transmit the trained model generated by the machine learning device 20 to the imaging device 30.
  • the processor 12 is configured to perform predetermined processing.
  • the processor 12 is configured to input the training data (learning data set), consisting of pairs of a captured image (input data) and a distance measurement position (output label), to the machine learning device 20, and to instruct it to generate a trained model.
  • the image pickup apparatus 30 includes a camera 31, a communication interface 32, a processor 33, and a storage unit 34.
  • the camera 31 is configured to be able to acquire a captured image for measurement, and the communication interface 32 is configured to be able to communicate with the management server 10 and the communication network (market) 2 using a wireless line or a wired line.
  • the processor 33 is configured to identify the distance measurement position according to the subject included in the captured image for measurement, based on the captured image captured by the camera 31 and the trained model acquired from the management server 10, and to measure the distance indicated by that position.
  • the distance indicated by the distance measurement position may be, for example, the distance of a line segment connecting these points when the distance measurement positions are the first point and the second point. Further, the distance indicated by the distance measurement position may be, for example, the distance indicated by the line segment when the distance measurement position is a line segment.
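  • In concrete terms, the distance indicated by two points is the length of the segment joining them, and a position given as a polyline is the sum of its segment lengths (for example, a yuki length running through point 5); a minimal sketch:

```python
# Sketch: segment length for a two-point position, summed lengths for a
# polyline position.
import math

def segment_length(p1, p2):
    return math.dist(p1, p2)  # Python 3.8+

def polyline_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

print(segment_length((0, 0), (3, 4)))              # 5.0
print(polyline_length([(0, 0), (3, 4), (3, 10)]))  # 5.0 + 6.0 = 11.0
```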
  • the processor 33 may be configured to measure at least one of the length, sleeve length, yuki length, width of the body, waist, and shoulder width when the captured image for measurement includes a top.
  • the processor 33 may be configured to measure the distance between points 5 and 6 as the length dimension. Further, as shown in FIG. 4, the processor 33 may be configured to measure the distance between points 1 and 9 (and/or the distance between points 2 and 10) as the sleeve length dimension.
  • the processor 33 may be configured to measure, as the yuki length dimension, the sum of the distance between points 1 and 9 and the distance between points 5 and 9 (and/or the sum of the distance between points 2 and 10 and the distance between points 5 and 10).
  • the processor 33 may be configured to measure the distance between the points 3 and 4 as the dimension of the width of the body. Further, as shown in FIG. 4, the processor 33 may be configured to measure the distance between the points 7 and 8 as the waist dimension. Further, as shown in FIG. 4, the processor 33 may be configured to measure the distance between the points 9 and 10 as the dimension of the shoulder width.
  • the processor 33 may be configured to measure at least one dimension among the rise, inseam, total length, waist, width, knee width, and hem width when the captured image for measurement includes a lower garment.
  • as shown in FIG. 5, the processor 33 may be configured to measure the distance between points 1 and 5 (and/or the distance between points 2 and 6) as the rise dimension; the distance between points 1 and 3 (and/or the distance between points 2 and 4) as the inseam dimension; and the distance between points 3 and 5 (and/or the distance between points 4 and 6) as the total length dimension. Further, as shown in FIG. 5, the processor 33 may be configured to measure twice the distance between points 5 and 6 as the waist dimension.
  • as shown in FIG. 5, the processor 33 may be configured to measure twice the distance between points 1 and 14 (and/or the distance between points 2 and 13) as the width dimension; twice the distance between points 7 and 8 (and/or the distance between points 9 and 10) as the knee width dimension; and twice the distance between points 3 and 11 (and/or the distance between points 4 and 12) as the hem width dimension.
  • the processor 33 is configured to measure at least one dimension of height, width, and depth when a three-dimensional object is included in the captured image for measurement.
  • the distance indicated by the distance measurement position may be a distance connecting two points on the surface of the subject which is a three-dimensional object or a distance connecting two points on the edge of the subject which can be regarded as a flat surface.
  • as shown in FIG. 6, the processor 33 may be configured to measure the distance between points 1 and 6 (and/or the distance between points 2 and 5, and/or the distance between points 3 and 4) as the height dimension. Further, as shown in FIG. 6, the processor 33 may be configured to measure the distance between points 1 and 2 as the width dimension, and the distance between points 2 and 3 as the depth dimension.
  • the processor 33 may be configured to measure the height of humans or animals when the captured image for measurement includes humans or animals. Further, the processor 33 may be configured to measure the size of fish or plants when the captured image for measurement includes fish or plants.
  • the processor 33 may identify the first point and the second point based on the captured image captured by the camera 31 and the trained model acquired from the management server 10, and may specify the three-dimensional information of the surrounding environment and the position of the image pickup apparatus 30 based on the captured image captured by the camera 31.
  • the processor 33 may specify the distance from the image pickup apparatus 30 to the first point and the distance from the image pickup apparatus 30 to the second point, based on the three-dimensional information of the surrounding environment, the position of the image pickup apparatus 30, and the first and second points.
  • the processor 33 may be configured to measure the distance between the points (that is, between the distance measurement positions) using, for example, Visual SLAM (Simultaneous Localization and Mapping) technology.
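  • A minimal numeric sketch of the geometry this step reduces to, assuming the SLAM front end (tracking and mapping, which this sketch does not implement) has already produced 3D coordinates for the device and for the two identified points:

```python
# Sketch (assumption): with 3D coordinates from Visual SLAM, the measured
# dimension reduces to Euclidean distances in the reconstructed map frame.
import numpy as np

device = np.array([0.0, 0.0, 0.0])     # image pickup device position
point1 = np.array([0.10, 0.55, 1.20])  # first point, from the 3D map
point2 = np.array([0.10, 0.05, 1.20])  # second point, from the 3D map

d1 = np.linalg.norm(point1 - device)        # device -> first point
d2 = np.linalg.norm(point2 - device)        # device -> second point
measured = np.linalg.norm(point2 - point1)  # the distance to report
print(d1, d2, measured)                     # measured == 0.5 here
```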
  • the processor 33 may be configured to detect the manufacturer or brand from tag information included in the captured image for measurement, acquire the size chart data of that manufacturer or brand from the storage unit 34 or via the Internet, and specify the distance indicated by the distance measurement position based on the size chart data.
  • when the processor 33 can acquire the distance in this way, the distance specified based on the size chart data may be preferentially adopted, even if the distance indicated by the distance measurement position can also be measured based on the trained model acquired from the management server 10.
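  • A sketch of that preference rule, with a hypothetical brand name and size chart layout assumed purely for illustration:

```python
# Sketch (assumption): a dimension from the maker's/brand's size chart is
# preferred over the measured one whenever the chart entry exists.
def resolve_dimension(measured_mm, size_chart, brand, item, name):
    try:
        return size_chart[brand][item][name]  # size chart wins if present
    except KeyError:
        return measured_mm                    # fall back to the measurement

chart = {"AcmeWear": {"t-shirt-M": {"shoulder width": 440}}}  # hypothetical
print(resolve_dimension(452, chart, "AcmeWear", "t-shirt-M", "shoulder width"))
```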
  • the storage unit 34 is configured to store the size chart data of the above-mentioned manufacturer or brand.
  • the image pickup apparatus 30 can sell the target product on the market 2 composed of the communication network.
  • the image pickup device 30 may be, for example, a communication terminal such as a so-called smartphone or a portable communication terminal such as a tablet.
  • the imaging device 30 is configured to input or select, according to a user operation, the target product image and the distance indicated by the distance measurement position as the target product information, and to be able to list the target product by uploading the target product information, including the target product image and that distance, to the market 2.
  • the target product image may be the captured image captured for measuring the distance indicated by the distance measurement position, may be another captured image in which the same subject is captured, or may be the captured image according to the fourth embodiment described later.
  • the exhibiting device 40 can exhibit the target product on the market 2 composed of the communication network.
  • the exhibiting device 40 may be, for example, a laptop computer, a desktop computer, a smart speaker, or the like.
  • the exhibiting device 40 is configured to acquire the target product image and the distance indicated by the distance measurement position from the imaging device 30 in response to a user operation, to input or select them as the target product information, and to be able to list the target product by uploading the target product information, including the target product image and that distance, to the market 2.
  • the imaging device 30 and the exhibiting device 40 may overlay the distance indicated by the distance measurement position on the captured image for measurement, and may upload the overlaid captured image to the market 2 as the product image in the target product information including the distance indicated by the distance measurement position.
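  • A minimal OpenCV sketch of such an overlay; the point coordinates, the label text, and the output file name are stand-ins, not values from the disclosure:

```python
# Sketch: draw a measured dimension onto the captured image before upload.
import cv2
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in captured image
p9, p10 = (180, 120), (460, 120)                 # e.g. shoulder-width points
cv2.line(image, p9, p10, color=(0, 255, 0), thickness=2)
cv2.putText(image, "shoulder width: 44 cm", (p9[0], p9[1] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite("product_image.png", image)          # hypothetical output name
```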
  • FIG. 11 is a sequence diagram showing an example of the operation of the image processing system 1 according to the first embodiment.
  • in step S1001, the management server 10 generates a trained model using the training data (learning data set), consisting of pairs of a captured image (input data) and a distance measurement position (output label), and in step S1002, transmits the trained model to the image pickup apparatus 30.
  • in step S1003, the imaging device 30 specifies the distance measurement position according to the subject included in the captured image for measurement, based on the captured image captured by the camera 31 and the trained model acquired from the management server 10. Then, in step S1004, the distance indicated by the distance measurement position is measured.
  • in step S1005, the imaging device 30 inputs or selects the target product image and the distance indicated by the distance measurement position as the target product information according to the user operation, and in step S1006, uploads the target product information, including the target product image and that distance, to the market 2.
  • in step S1007, the target product information is listed on the market 2.
  • through machine learning by the machine learning device 20 provided in the management server 10, using the captured image as input data and the distance measurement position as the label, the desired dimensions of the subject included in a captured image can be measured automatically.
  • the communication interface 11 is configured to acquire the captured image for measurement captured by the camera 31 from the image pickup device 30 using a wireless line or a wired line. Further, in the management server 10, the communication interface 11 is configured to transmit the distance measurement position acquired from the processor 12, and the distance indicated by that position, to the image pickup apparatus 30 using a wireless line or a wired line.
  • the processor 12 is configured to specify the distance measurement position according to the subject included in the captured image for measurement, based on the captured image for measurement acquired by the communication interface 11 and the trained model acquired from the machine learning device 20, and to measure the distance indicated by that position.
  • FIG. 12 is a sequence diagram showing an example of the operation of the image processing system 1 according to the second embodiment.
  • in step S2001, the management server 10 generates a trained model using the training data (learning data set), consisting of pairs of a captured image (input data) and a distance measurement position (output label).
  • in step S2002, the imaging device 30 captures the captured image for measurement with the camera 31, and in step S2003, transmits the captured image for measurement to the management server 10.
  • in step S2004, the management server 10 specifies the distance measurement position according to the subject included in the captured image for measurement, based on the captured image acquired from the image pickup device 30 and the generated trained model, and measures the distance indicated by that position.
  • in step S2005, the captured image for measurement and the distance indicated by the distance measurement position are transmitted to the image pickup device 30.
  • in step S2006, the imaging device 30 inputs or selects the target product image and the distance indicated by the distance measurement position as the target product information according to the user operation, and in step S2007, uploads the target product information, including the target product image and that distance, to the market 2.
  • in step S2008, the target product information is listed on the market 2.
  • through machine learning by the machine learning device 20 provided in the management server 10, using the captured image as input data and the distance measurement position as the label, the desired dimensions of the subject included in a captured image can be measured automatically.
  • the machine learning device 20 is provided in the image pickup device 30. Further, as shown in FIG. 14, the image pickup apparatus 30 includes a machine learning device 20, a camera 31, and a processor 33.
  • the processor 33 is configured to specify the distance measurement position according to the subject included in the captured image for measurement, based on the captured image captured by the camera 31 and the trained model acquired from the machine learning device 20, and to measure the distance indicated by that position.
  • FIG. 15 is a sequence diagram showing an example of the operation of the image processing system 1 according to the third embodiment.
  • in step S3001, the image pickup apparatus 30 generates a trained model using the training data (learning data set), consisting of pairs of a captured image (input data) and a distance measurement position (output label).
  • in step S3002, the image pickup apparatus 30 captures a captured image for measurement with the camera 31.
  • in step S3003, the image pickup apparatus 30 specifies the distance measurement position according to the subject included in the captured image for measurement, based on that captured image and the generated trained model, and measures the distance indicated by that position.
  • in step S3004, the imaging device 30 inputs or selects the target product image and the distance indicated by the distance measurement position as the target product information according to the user operation, and in step S3005, uploads the target product information, including the target product image and that distance, to the market 2.
  • in step S3006, the target product information is listed on the market 2.
  • through machine learning by the machine learning device 20 provided in the imaging device 30, using the captured image as input data and the distance measurement position as the label, the desired dimensions of the subject included in a captured image can be measured automatically.
  • FIG. 16 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fourth embodiment.
  • the image processing system 1 according to the fourth embodiment includes a management server 10, an image pickup device 30, an exhibition device 40, and a market 2 composed of a communication network.
  • the management server 10 includes a communication interface 11, a processor 12, an image processor 50, and a machine learning device 20.
  • the communication interface 11 is configured to acquire an image captured by the camera 31 from the image pickup device 30 using a wireless line or a wired line. Further, the communication interface 11 is configured to transmit the distance indicated by the distance measurement position acquired from the processor 12 to the image pickup apparatus 30 by using a wireless line or a wired line.
  • the processor 12 is configured to specify the distance measurement position according to the subject included in the captured image acquired by the communication interface 11, and to measure the distance indicated by the distance measurement position.
  • the image processor 50 includes a processor 51 and a storage unit 52.
  • the processor 51 detects the feature component of the captured image acquired by the communication interface 11.
  • the processor 51 may further identify the subject included in the captured image. Detection of feature components includes edge detection.
  • the processor 51 can detect the edge of the captured image by applying various methods.
  • the various methods may be methods using first-order differentiation or second-order differentiation.
  • Techniques that use the first derivative include, for example, Sobel filters and Prewitt filters.
  • Techniques that use the second derivative include, for example, Laplacian filters.
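  • A short sketch of both filter families using OpenCV; the drawn rectangle is a stand-in for a captured image of a subject:

```python
# Sketch: first-derivative (Sobel, Prewitt) and second-derivative (Laplacian)
# edge detection with OpenCV on a stand-in captured image.
import cv2
import numpy as np

gray = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(gray, (80, 60), (240, 180), 180, thickness=-1)  # fake subject

# First derivative: Sobel gradients in X and Y combined into a magnitude.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = np.uint8(np.clip(np.hypot(gx, gy), 0, 255))

# A Prewitt filter is the same idea with a different kernel.
prewitt_kernel = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
prewitt_x = cv2.filter2D(gray, cv2.CV_64F, prewitt_kernel)

# Second derivative: the Laplacian responds strongly around contours.
laplacian_edges = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_64F))
```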
  • the processor 51 is configured to specify the distance measurement position according to the subject included in the captured image based on the detected feature component.
  • the distance measurement position may include at least a first point and a second point different from the first point. Further, the distance measurement position may be a line segment including a first point and a second point different from the first point.
  • the line segment may be a straight line or a curved line.
  • the processor 51 may specify the subject area (segmentation image) based on the detected feature component.
  • An image showing a subject area is also called a segmentation image.
  • the processor 51 may specify the contour of the subject based on the detected feature component.
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be to specify a point on the subject region or on the contour of the subject, located at predetermined coordinates with respect to the entire captured image or the subject region (segmentation image).
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be to specify a point or region that satisfies a specific condition and is located on the contour (or around the contour) of the subject.
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be to specify a straight line, satisfying a specific condition, that connects two points in the subject region (segmentation image).
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be to specify a point located where the contour (or the periphery of the contour) intersects the extension of a line segment that follows at least a part of the contour (or the periphery of the contour) of the subject specified based on the feature component.
  • the processor 51 may be configured to specify the distance measurement position according to the subject included in the captured image based on the detected feature component and the specified subject. For example, the processor 51 may determine a condition to be satisfied by the feature component detected in order to specify the distance measurement position according to the specified subject.
  • when the processor 51 identifies that the subject included in the captured image indicates a T-shirt, the processor 51 may specify, as the distance measurement positions, at least one of: points 5 and 6 for measuring the length; points 1 and 9 (or points 2 and 10) for measuring the sleeve length; points 1, 5 and 9 (or points 2, 5 and 10) for measuring the yuki length; points 3 and 4 for measuring the width of the body (chest circumference); points 7 and 8 for measuring the waist (waist circumference); and points 9 and 10 for measuring the shoulder width.
  • based on the identified subject and the detected feature component, the processor 51 may specify the distance measurement positions so that at least one dimension among the length, sleeve length, yuki length, width of the body (that is, chest circumference), waist (that is, waist circumference), and shoulder width is measured.
  • the waist referred to here is the circumference at the narrowest part of the waist portion of the upper garment.
  • the X-axis corresponds to the left-right direction of the captured image, and the Y-axis corresponds to the vertical direction of the captured image.
  • when the processor 51 can identify the subject included in the captured image, the X-axis may instead correspond to the left-right direction of the subject, and the Y-axis to the vertical direction of the subject.
  • points 1 and 2 are end points in the X-axis direction (horizontal direction) of the subject Z included in the captured image. That is, points 1 and 2 are points located above the tip of the sleeve (that is, the tip of the sleeve on the shoulder side).
  • the points 1 and 2 are examples of points on the subject area or the outline of the subject located at predetermined coordinates with respect to the entire captured image or the subject area (segmentation image).
  • Points 3 and 4 are points on the lower side of the base of the sleeve (that is, the base of the sleeve on the side). Points 3 and 4 are examples of points or regions that satisfy specific conditions and are located on the contour (or around the contour) of the subject.
  • for point 3, the specific condition is that the feature component exists in the negative X direction (leftward) and the negative Y direction (downward) at point 3.
  • for point 4, the specific condition is that the feature component exists in the positive X direction (rightward) and the negative Y direction (downward) at point 4.
  • Points 5 and 6 are intersections of the perpendicular bisector L2 of the straight line connecting the points 3 and 4 and the boundary line X1 in the subject Z. That is, the point 5 is a point located at the base of the collar and the center of the subject Z, and the point 6 is a point located at the tip of the hem and the center of the subject Z.
  • the points 5 and 6 are examples of points on the subject area or the outline of the subject located at predetermined coordinates with respect to the entire captured image or the subject area (segmentation image).
  • the straight line connecting points 7 and 8 is, among the straight lines parallel to the X-axis that connect end points of the subject Z on the negative Y side (below) of points 3 and 4, the straight line whose distance is the smallest.
  • the straight line connecting the points 7 and 8 is an example of a straight line connecting two points in the subject area (segmentation image) satisfying a specific condition.
  • the specific condition is that the distance is the smallest.
  • Point 9 is the intersection of the straight line L4 passing through the points 3 and 7 and the boundary line X1 in the subject Z.
  • the point 10 is the intersection of the straight line L5 passing through points 4 and 8 and the boundary line X1 in the subject Z. That is, points 9 and 10 are points located on the upper side of the base of the sleeve (that is, the base of the sleeve on the shoulder side). Points 9 and 10 are examples of points located where the contour (or the periphery of the contour) intersects the extension of a line segment that follows at least a part of the contour (or the periphery of the contour) of the subject specified based on the feature component.
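  • A minimal sketch of how two of these geometric rules could be evaluated on a binary segmentation mask; the mask, the coordinates, and the simplifying assumption that points 3 and 4 share a Y coordinate (making the perpendicular bisector a vertical line) are all illustrative:

```python
# Sketch (assumption): evaluating two of the geometric rules above on a
# binary segmentation mask of the subject.
import numpy as np

def extreme_x_points(mask):
    """Points 1 and 2: the subject's end points along the X (horizontal) axis."""
    ys, xs = np.nonzero(mask)
    left = (int(xs.min()), int(ys[xs == xs.min()].mean()))
    right = (int(xs.max()), int(ys[xs == xs.max()].mean()))
    return left, right

def bisector_contour_points(mask, p3, p4):
    """Points 5 and 6: where the perpendicular bisector of segment p3-p4
    (vertical here, since p3 and p4 are assumed to share a Y coordinate)
    crosses the subject's boundary, i.e. the top and bottom of that column."""
    x_mid = (p3[0] + p4[0]) // 2
    ys = np.nonzero(mask[:, x_mid])[0]
    return (x_mid, int(ys.min())), (x_mid, int(ys.max()))

mask = np.zeros((200, 200), dtype=np.uint8)
mask[40:180, 50:150] = 1  # stand-in subject region
print(extreme_x_points(mask))
print(bisector_contour_points(mask, (60, 100), (140, 100)))
```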
  • the processor 51 may specify points 1 to 10 as distance measurement positions when it is identified that the subject included in the captured image indicates a T-shirt; alternatively, without identifying that the subject indicates a T-shirt, points 1 to 10 may be specified based only on the detected feature component.
  • when the subject included in the captured image is identified as indicating pants, the distance measurement positions may be specified as at least one of: points 1 and 5 (or points 2 and 6) for measuring the rise; points 1 and 3 (or points 2 and 4) for measuring the inseam; points 3 and 5 (or points 4 and 6) for measuring the total length; points 1 and 14 (or points 2 and 13) for measuring the width (that is, thigh circumference); points 5 and 6 for measuring the waist (that is, waist circumference); points 7 and 8 (or points 9 and 10) for measuring the knee width; and points 3 and 11 (or points 4 and 12) for measuring the hem width.
  • the processor 51 may specify the distance measurement positions so that at least one dimension among the rise, inseam, total length, waist (that is, waist circumference), width (that is, thigh circumference), knee width, and hem width is measured.
  • points 4 and 12 are points located at both ends of the hem portion.
  • Points 2 and 13 are points located at both ends of the portion corresponding to the wearer's thighs.
  • Points 9 and 10 are points located at both ends of the portion corresponding to the wearer's knee.
  • Points 5 and 6 are points located at both ends of the portion corresponding to the wearer's torso.
  • when the subject included in the captured image is identified as indicating a bag, the distance measurement positions may be specified as at least one of: points 1 and 2 for measuring the width on the bottom surface; points 2 and 3 for measuring the depth on the bottom surface; points 1 and 6, points 2 and 5, or points 3 and 4 for measuring the height; points 5 and 6 for measuring the width on the upper surface; and points 4 and 5 for measuring the depth on the upper surface.
  • when the captured image includes a three-dimensional object, the processor 51 may specify the distance measurement positions so that at least one dimension among the height, width, and depth is measured.
  • the three-dimensional object may include at least one of furniture, home appliances and a bag.
  • the processor 51 may be configured to identify a point for measuring the height of a human or animal, or the size of a fish or plant.
  • the storage unit 52 is composed of a storage device including a RAM (Random Access Memory) and/or a ROM (Read Only Memory), or an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the distance measurement position specified by the processor 51.
  • the machine learning device 20 includes an acquisition unit 21, a processor 22, and a storage unit 23.
  • the acquisition unit 21 is configured to acquire the captured image for learning and the name of the subject corresponding to the captured image for learning as a set of input data and a label.
  • the processor 22 is configured to learn what the subject included in the captured image indicates by learning the set of the input data and the label as training data.
  • the processor 22 may be configured to process the above-mentioned training data with a multi-layer structure, that is, to generate by deep learning a trained model for indicating what the subject included in the captured image shows.
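  • A minimal sketch of such a subject classifier; the backbone choice and the label set are assumptions for illustration only:

```python
# Sketch (assumption): a trained model that labels what the subject shows,
# used upstream of the feature-component-based position search.
import torch.nn as nn
from torchvision import models

CATEGORIES = ["t-shirt", "pants", "bag"]  # illustrative label set

def build_subject_classifier(num_classes: int = len(CATEGORIES)) -> nn.Module:
    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone
```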
  • the storage unit 23 is composed of a storage device such as an FRAM (registered trademark) (Ferroelectric Random Access Memory) or an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the trained model generated by the processor 22.
  • the processor 51 of the image processor 50 may be configured to specify what the subject included in the captured image indicates, based on the trained model of the machine learning device 20.
  • the processor 12 of the management server 10 may be configured to measure the distance indicated by the distance measurement position based on the distance measurement position specified by the image processor 50.
  • the processor 12 may be configured to measure a dimension using the captured image by the same measurement method as described above for the case where the processor 33 of the image pickup apparatus 30 according to the first embodiment measures a dimension using the captured image for measurement.
  • the processor 12 may be configured to detect the manufacturer or brand from tag information included in the captured image, and to specify the distance indicated by the distance measurement position from the size chart of that manufacturer or brand.
  • the image pickup device 30 and the exhibition device 40 can exhibit the target product on the market 2 composed of the communication network.
  • the imaging device 30 and the exhibiting device 40 may have, for example, the same configuration as in the first embodiment described above, in which the target product can be listed on the market 2 composed of the communication network.
  • FIG. 19 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fourth embodiment.
  • in step S4001, the imaging device 30 captures a captured image including the subject with the camera 31, and in step S4002, transmits the captured image to the management server 10.
  • in step S4003, the management server 10 specifies the distance measurement position according to the subject included in the acquired captured image and measures the distance indicated by that position, and in step S4005, transmits the distance indicated by the distance measurement position to the image pickup device 30.
  • in step S4006, the image pickup apparatus 30 inputs or selects the captured image and the distance indicated by the distance measurement position as the target product information according to the user operation, and in step S4007, uploads the target product information, including the captured image and that distance, to the market 2.
  • in step S4008, the target product information is listed on the market 2.
  • by the image processor 50 provided in the management server 10 specifying the distance measurement position according to the subject included in the captured image, the desired dimensions of the subject included in the captured image can be measured automatically.
  • accordingly, even a portable communication terminal or the like can easily and automatically measure a desired dimension of the subject included in the captured image.
  • the image processor 50 is provided not in the management server 10 but in the image pickup device 30.
  • the image pickup apparatus 30 includes a communication interface 32, a camera 31, a processor 33, and an image processor 50.
  • the communication interface 32 is configured to be able to communicate with the management server 10 and the communication network (market) 2 using a wireless line or a wired line, and the camera 31 is configured to be able to acquire a captured image including the subject.
  • the processor 33 is configured to specify the distance measurement position according to the subject included in the captured image captured by the camera 31, and to measure the distance indicated by that position, based on the processing of the image processor 50.
  • the image processor 50 is configured to identify what the subject indicates based on the captured image captured by the camera 31, to detect the feature component of the subject, and to specify the distance measurement position according to the subject included in the captured image based on the identified subject and the detected feature component.
  • FIG. 22 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fifth embodiment.
  • in step S5001, the management server 10 generates a trained model for indicating what the subject included in the captured image shows, by learning the set of the input data and the label as training data. Then, in step S5002, the trained model is transmitted to the image pickup apparatus 30.
  • the image pickup apparatus 30 identifies what the subject included in the captured image indicates based on the received trained model, detects the feature component of the subject, specifies the distance measurement position corresponding to the subject included in the captured image based on the identified subject and the detected feature component, and in step S5003, measures the distance indicated by that position.
  • in step S5004, the image pickup apparatus 30 inputs or selects the captured image and the distance indicated by the distance measurement position as the target product information according to the user operation, and in step S5005, uploads the target product information, including the captured image and that distance, to the market 2.
  • in step S5006, the target product information is listed on the market 2.
  • by the image processor 50 provided in the image pickup apparatus 30 specifying the distance measurement position according to the subject included in the captured image, the desired dimensions of the subject included in the captured image can be measured automatically.
  • the image processor 50 and the machine learning device 20 are provided in the image pickup device 30. Further, as shown in FIG. 24, the image pickup device 30 includes an image processor 50, a machine learning device 20, a communication interface 32, a camera 31, and a processor 33.
  • the processor 33 may be configured to identify the three-dimensional information of the surrounding environment and the position of the imaging device 30 based on the captured image captured by the camera 31; to specify the distance from the image pickup device 30 to the first point and the distance from the image pickup device 30 to the second point, based on the three-dimensional information of the surrounding environment, the position of the image pickup device 30, and the first and second points; and to specify the distance from the first point to the second point based on those two distances. That is, the processor 33 may be configured to measure the distance between the points (that is, between the distance measurement positions) using, for example, Visual SLAM (Simultaneous Localization and Mapping) technology.
  • FIG. 25 is a sequence diagram showing an example of the operation of the image processing system 1 according to the present embodiment.
  • in step S6001, the image pickup apparatus 30 generates a trained model for indicating what the subject included in the captured image shows, by learning the set of the input data and the label as training data. Then, in step S6002, it identifies what the subject included in the captured image indicates based on the trained model, detects the feature component of the subject, specifies the distance measurement position corresponding to the subject included in the captured image based on the identified subject and the detected feature component, and measures the distance indicated by that position.
  • in step S6003, the image pickup apparatus 30 inputs or selects the captured image and the distance indicated by the distance measurement position as the target product information according to the user operation, and in step S6004, uploads the target product information, including the captured image and that distance, to the market 2.
  • in step S6005, the target product information is listed on the market 2.
  • by the image processor 50 provided in the image pickup apparatus 30 specifying the distance measurement position according to the subject included in the captured image, the desired dimensions of the subject included in the captured image can be measured automatically.
  • a program that causes a computer to execute each process performed by the image pickup device 30, the management server 10, the image processor 50, and the machine learning device 20 may be provided.
  • the program may be recorded on a computer-readable medium.
  • Computer-readable media can be used to install programs on a computer.
  • the computer-readable medium on which the program is recorded may be a non-transitory recording medium.
  • the non-transitory recording medium is not particularly limited, but may be, for example, a recording medium such as a CD-ROM or a DVD-ROM.
  • the image pickup device 30 is not limited to the device capable of listing the target product on the market 2.
  • the image pickup device 30 may be any device that can present to the user at least a distance specified according to the subject included in the captured image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image processing system (1) equipped with a machine learning device (20). The machine learning device (20) acquires, as a pair of input data and a label, a captured image used for learning and the distance measurement positions corresponding to that captured image, and performs learning using the pairs of input data and labels as training data so as to learn the distance measurement positions for a subject included in a captured image.
PCT/JP2020/028566 2019-07-29 2020-07-22 Image processing system, machine learning device, image processor, and imaging device WO2021020305A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019-139264 2019-07-29
JP2019139264A JP7309506B2 (ja) 2019-07-29 2019-07-29 画像処理システム、機械学習器、撮像装置及び学習方法
JP2019139266A JP7401218B2 (ja) 2019-07-29 2019-07-29 画像処理システム、画像処理器、撮像装置及び処理方法
JP2019-139266 2019-07-29

Publications (1)

Publication Number Publication Date
WO2021020305A1 true WO2021020305A1 (fr) 2021-02-04

Family

ID=74230313

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/028566 WO2021020305A1 (fr) Image processing system, machine learning device, image processor, and imaging device

Country Status (1)

Country Link
WO (1) WO2021020305A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017194301A (ja) * 2016-04-19 2017-10-26 株式会社デジタルハンズ 顔形状測定装置及び方法
WO2018170421A1 (fr) * 2017-03-17 2018-09-20 Magic Leap, Inc. Procédés et techniques d'estimation de disposition de pièce
JP2019056966A (ja) * 2017-09-19 2019-04-11 株式会社東芝 情報処理装置、画像認識方法および画像認識プログラム


Similar Documents

Publication Publication Date Title
US11375922B2 (en) Body measurement device and method for controlling the same
CN105701447B (zh) 迎宾机器人
US9842255B2 (en) Calculation device and calculation method
JP6195915B2 (ja) 画像計測装置
CN113711269A (zh) 用于确定身体量度和提供服装尺码推荐的方法和***
JP7309506B2 (ja) 画像処理システム、機械学習器、撮像装置及び学習方法
JP2014127208A (ja) 物体検出方法及び物体検出装置
WO2017085771A1 (fr) Système, programme et procédé d'aide au paiement
JP2014106692A5 (fr)
WO2016036478A1 (fr) Procédé et appareil de création de base de données de modèles de prise de photo et de fourniture d'informations de recommandation de prise de photo
US20200218896A1 (en) Body measurement device and method for cotnrolling the same
CN117203677A (zh) 使用计算机视觉的物品识别***
WO2021020305A1 (fr) Système de traitement d'image, machine apprenante, processeur d'image et dispositif d'imagerie
CN105180802A (zh) 一种物体尺寸信息识别方法和装置
JP2009289046A (ja) 3次元データを用いた作業支援装置及び方法
JP7401218B2 (ja) 画像処理システム、画像処理器、撮像装置及び処理方法
US20220148074A1 (en) Visualization of garments on a body model of a human
TWI686775B (zh) 利用影像偵測閱讀姿勢之方法及系統、電腦可讀取之記錄媒體及電腦程式產品
KR102086227B1 (ko) 신체 치수 계측 장치
US11176396B2 (en) Detection of whether mobile computing device is pointing to visual code
Wu et al. Toward Design of a Drip‐Stand Patient Follower Robot
Takeda et al. Reduction of marker-body matching work in activity recognition using motion capture
Kawasue et al. Pig weight prediction system using RGB-D sensor and AR glasses: analysis method with free camera capture direction
CN112116647B (zh) 估重方法和估重装置
KR101618308B1 (ko) 미러월드 기반 인터랙티브 온라인 쇼핑몰 구축을 위한 파노라마 영상 획득 및 객체 검출이 가능한 시스템

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20847410

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20847410

Country of ref document: EP

Kind code of ref document: A1