WO2021020305A1 - Image processing system, machine learner, image processor, and imaging device - Google Patents

Image processing system, machine learner, image processor, and imaging device

Info

Publication number
WO2021020305A1
WO2021020305A1 (PCT/JP2020/028566)
Authority
WO
WIPO (PCT)
Prior art keywords
image
captured image
measurement position
distance
distance measurement
Prior art date
Application number
PCT/JP2020/028566
Other languages
French (fr)
Japanese (ja)
Inventor
正貴 田野
正義 近藤
朋浩 濱口
敬祐 大川
Original Assignee
京セラ株式会社
Priority date
Filing date
Publication date
Priority claimed from JP2019139266A (JP7401218B2)
Priority claimed from JP2019139264A (JP7309506B2)
Application filed by 京セラ株式会社
Publication of WO2021020305A1

Classifications

    • G06N 20/00 Machine learning
    • G06T 7/00 Image analysis
    • G06T 7/12 Edge-based segmentation
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/70 Determining position or orientation of objects or cameras
    • H04N 23/60 Control of cameras or camera modules

Definitions

  • the present disclosure relates to an image processing system, a machine learning device, an image processing device, and an imaging device.
  • the image processing system includes a machine learning device, and the machine learning device sets a captured image for learning and a distance measurement position corresponding to the captured image for learning as a set of input data and a label. By learning the pair of the input data and the label as training data, the distance measurement position corresponding to the subject included in the captured image is learned.
  • the machine learning device includes at least one processor, and the processor acquires a captured image for learning and a distance measurement position corresponding to the captured image for learning as a set of input data and a label. Then, by learning the set of the input data and the label as training data, the distance measurement position corresponding to the subject included in the captured image is learned.
  • the image pickup apparatus includes a camera, a processor, and a communication interface. The communication interface acquires a trained model of a machine learner that learns a distance measurement position according to a subject included in a captured image. The processor identifies the distance measurement position according to the subject included in the captured image based on the captured image captured by the camera and the trained model, and measures the distance indicated by the distance measurement position.
  • the image processing system includes an image processor, and the image processor detects a feature component of the captured image and specifies a distance measurement position according to the subject based on the feature component.
  • the image processor includes at least one processor, which detects a feature component of the captured image and specifies a distance measurement position according to the subject based on the feature component.
  • the image pickup apparatus includes the image processor according to the second aspect, a camera, and a processor, and the image processor detects the feature component of the captured image captured by the camera and specifies the distance measurement position according to the subject based on the feature component.
  • FIG. 1 is a diagram showing an example of the overall configuration of the image processing system 1 according to the first embodiment.
  • FIG. 2 is a diagram showing an example of a functional block of the machine learning device 20 according to the first embodiment.
  • FIG. 3 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 4 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 5 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 6 is a diagram for explaining an example of the function of the machine learning device 20 shown in FIG.
  • FIG. 7 is a diagram showing an example of a functional block of the management server 10 according to the first embodiment.
  • FIG. 8 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the first embodiment.
  • FIG. 9 is a diagram for explaining an example of the function of the image pickup apparatus 30 shown in FIG.
  • FIG. 10 is a diagram for explaining an example of the function of the image pickup apparatus 30 shown in FIG.
  • FIG. 11 is a sequence diagram showing an example of the operation of the image processing system 1 according to the first embodiment.
  • FIG. 12 is a sequence diagram showing an example of the operation of the image processing system 1 according to the second embodiment.
  • FIG. 13 is a diagram showing an example of the overall configuration of the image processing system 1 according to the third embodiment.
  • FIG. 14 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the third embodiment.
  • FIG. 15 is a sequence diagram showing an example of the operation of the image processing system 1 according to the third embodiment.
  • FIG. 16 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fourth embodiment.
  • FIG. 17 is a diagram showing an example of a functional block of the management server 10 according to the fourth embodiment.
  • FIG. 18 is a diagram showing an example of a functional block of the image processor 50 according to the fourth embodiment.
  • FIG. 19 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fourth embodiment.
  • FIG. 20 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fifth embodiment.
  • FIG. 21 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the fifth embodiment.
  • FIG. 22 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fifth embodiment.
  • FIG. 23 is a diagram showing an example of the overall configuration of the image processing system 1 according to the sixth embodiment.
  • FIG. 24 is a diagram showing an example of a functional block of the image pickup apparatus 30 according to the sixth embodiment.
  • FIG. 25 is a sequence diagram showing an example of the operation of the image processing system 1 according to the sixth embodiment.
  • the above-mentioned electronic camera had a problem in that the size of the subject could not be calculated automatically.
  • the present disclosure has been made in view of the above-mentioned problems, and an object of the present disclosure is to make it possible to measure a desired length of a subject included in a captured image with a simpler operation.
  • FIG. 1 is a diagram showing an example of the overall configuration of the image processing system 1 according to the first embodiment.
  • the image processing system 1 according to the first embodiment includes a management server 10, an image pickup device 30, an exhibiting device 40, and a market 2 composed of a communication network.
  • the management server 10 includes a machine learning device 20.
  • the machine learning device 20 has an acquisition unit 21, a processor 22, and a storage unit 23.
  • the acquisition unit 21 is configured to acquire the captured image for learning and the distance measurement position corresponding to the captured image for learning as a set of input data and a label.
  • the processor 22 is configured to learn the distance measurement position according to the subject included in the captured image by learning the set of the input data and the label as training data.
  • the processor 22 is configured to generate a trained model using the training data (learning data set), which is a set of the captured image as input data and the distance measurement position as output (label).
  • the processor 22 may be configured to process the above-mentioned training data through a multi-layer structure, that is, to generate a trained model by deep learning.
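The disclosure does not prescribe a particular network, but the supervised setup above (captured image as input, annotated distance measurement points as label) can be sketched as a small keypoint-regression model. The following is a minimal illustration assuming a PyTorch-style API; the architecture, point count, and loss are illustrative assumptions, not the patented method.

```python
import torch
import torch.nn as nn

class KeypointRegressor(nn.Module):
    """Tiny CNN that regresses K distance-measurement points (x, y) from an image."""
    def __init__(self, num_points: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_points * 2)  # one (x, y) pair per point

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, images, target_points):
    """One supervised step: images are the input data, and the annotated point
    coordinates (shape [batch, num_points * 2]) are the label."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), target_points)
    loss.backward()
    optimizer.step()
    return loss.item()
```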
  • the input data may include identification data indicating the category of the subject.
  • the subject category may be, for example, a top such as a T-shirt.
  • the subject category may be, for example, a bottom (lower garment) such as pants and/or a skirt.
  • the subject category may be, for example, furniture, home appliances, or a three-dimensional object such as a bag.
  • the distance measurement position may include at least a first point and a second point different from the first point. Further, the distance measurement position may be a line segment including a first point and a second point different from the first point. The line segment may be a straight line or a curved line.
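As a data-structure sketch (the class and field names are assumptions for illustration, not part of the disclosure), a distance measurement position of this kind could be represented as follows:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates

@dataclass
class DistanceMeasurementPosition:
    # At least a first point and a second point different from the first point;
    # with more than two points the position describes a (possibly curved) line segment.
    points: List[Point]

    def is_segment(self) -> bool:
        return len(self.points) >= 2
```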
  • the processor 22 may be configured to learn, as distance measurement positions according to the subject included in a captured image (input data) whose identification data indicates a T-shirt as the category, at least one of: points 5 and 6 for measuring the length; points 1 and 9 (or points 2 and 10) for measuring the sleeve length; points 1, 5 and 9 (or points 2, 5 and 10) for measuring the sleeve length from the center of the back; points 3 and 4 for measuring the width of the body (chest circumference); points 7 and 8 for measuring the waist (waist circumference); and points 9 and 10 for measuring the shoulder width.
  • when the captured image for measurement includes a top, the processor 22 may learn the distance measurement positions so that at least one dimension among the length, sleeve length, sleeve length from the center of the back, width of the body (that is, chest circumference), waist (that is, waist circumference), and shoulder width is measured.
  • the waist referred to here is the circumference of the waist portion of the upper garment where the distance is the shortest.
  • points 1 and 2 are points located above the tip of the sleeve (that is, the tip of the sleeve on the shoulder side).
  • Points 3 and 4 are points on the lower side of the base of the sleeve (that is, the base of the sleeve on the side).
  • Point 5 is a point located at the base of the collar and in the center of the subject Z.
  • Point 6 is a point located at the tip of the hem and in the center of the subject Z.
  • Point 7 is, among the points on the boundary line X1 located between point 3 and the one of the points A and B (located on the outermost side of the hem) that is nearer to point 3, the point closest to the center of the subject Z.
  • Point 8 is, among the points on the boundary line X1 located between point 4 and the one of the points A and B (located on the outermost side of the hem) that is nearer to point 4, the point closest to the center of the subject Z.
  • Points 9 and 10 are points located above the base of the sleeve (ie, the base of the sleeve on the shoulder side).
  • the processor 22 may be configured to learn, as distance measurement positions according to the subject included in a captured image (input data) whose identification data indicates pants as the category, at least one of: points 1 and 5 (or points 2 and 6) for measuring the rise; points 1 and 3 (or points 2 and 4) for measuring the inseam; points 3 and 5 (or points 4 and 6) for measuring the total length; points 1 and 14 (or points 2 and 13) for measuring the width (that is, thigh circumference); points 5 and 6 for measuring the waist (that is, waist circumference); points 7 and 8 (or points 9 and 10) for measuring the knee width; and points 3 and 11 (or points 4 and 12) for measuring the hem width.
  • in this case, when the captured image for measurement includes a lower garment, the processor 22 may learn the distance measurement positions so that at least one dimension among the rise, inseam, total length, waist (that is, waist circumference), width (that is, thigh circumference), knee width, and hem width is measured.
  • points 4 and 12 are points located at both ends of the hem portion.
  • Points 2 and 13 are points located at both ends of the portion corresponding to the wearer's thighs.
  • Points 9 and 10 are points located at both ends of the portion corresponding to the wearer's knee.
  • Points 5 and 6 are points located at both ends of the portion corresponding to the wearer's torso.
  • the processor 22 may be configured to learn, as distance measurement positions according to the subject included in a captured image (input data) whose identification data indicates a bag as the category, at least one of: points 1 and 2 for measuring the width on the bottom surface; points 2 and 3 for measuring the depth on the bottom surface; points 1 and 6, points 2 and 5, or points 3 and 4 for measuring the height; points 5 and 6 for measuring the width on the upper surface; and points 4 and 5 for measuring the depth on the upper surface.
  • in this case, when a three-dimensional object is included in the captured image for measurement, the processor 22 may learn the distance measurement positions so that at least one dimension among the height, width, and depth is measured.
  • the three-dimensional object may include at least one of furniture, home appliances and a bag.
  • the processor 22 may be configured to learn points for measuring the height of humans or animals, or the size of fish or plants.
  • the processor 22 may be configured to correct, based on a user operation, the distance measurement position output in response to the acquisition of a captured image for measurement, and to perform further learning using the set of the captured image for measurement and the corrected distance measurement position as training data.
  • the processor 22 may calculate a reward based on whether or not the distance measurement position output in response to the acquisition of the captured image for measurement has been corrected by a user operation, and may update the function for identifying the distance measurement position based on the reward. That is, the processor 22 may be configured to perform reinforcement learning according to the presence or absence of a correction of the distance measurement position by the user operation.
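The reward scheme itself is not spelled out in the text. A minimal sketch of the described idea (positive reward when the user leaves the output uncorrected, and a simple value update from that reward) might look like this; all names and the update rule are illustrative assumptions:

```python
def reward_from_feedback(was_corrected: bool) -> float:
    # Assumed reward scheme: positive when the user accepts the output as-is.
    return -1.0 if was_corrected else 1.0

class PositionValueEstimator:
    """Toy value function over discrete position hypotheses, updated from rewards."""
    def __init__(self, learning_rate: float = 0.1):
        self.values = {}          # hypothesis id -> estimated value
        self.lr = learning_rate

    def update(self, hypothesis_id: str, was_corrected: bool) -> None:
        r = reward_from_feedback(was_corrected)
        v = self.values.get(hypothesis_id, 0.0)
        # Incremental update toward the observed reward.
        self.values[hypothesis_id] = v + self.lr * (r - v)
```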
  • the storage unit 23 is composed of a storage device including a RAM (Random Access Memory) or a ROM (Read Only Memory), or an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the trained model generated by the processor 22.
  • the management server 10 includes a machine learning device 20, a communication interface 11, and a processor 12.
  • the communication interface 11 is configured to send and receive predetermined information to and from the image pickup apparatus 30 using a wireless line or a wired line.
  • the communication interface 11 is configured to transmit the trained model generated by the machine learning device 20 to the imaging device 30.
  • the processor 12 is configured to perform predetermined processing.
  • the processor 12 is configured to input the training data (learning data set), which is a set of the captured image as input data and the distance measurement position as output (label), to the machine learning device 20, and to instruct the machine learning device 20 to generate a trained model.
  • the image pickup apparatus 30 includes a camera 31, a communication interface 32, a processor 33, and a storage unit 34.
  • the camera 31 is configured to be able to acquire a captured image for measurement, and the communication interface 32 is configured to be able to communicate with the management server 10 and the communication network (market) 2 using a wireless line or a wired line.
  • the processor 33 is configured to identify the distance measurement position according to the subject included in the captured image for measurement, based on the captured image for measurement captured by the camera 31 and the trained model acquired from the management server 10, and to measure the distance indicated by the distance measurement position.
  • the distance indicated by the distance measurement position may be, for example, the distance of a line segment connecting these points when the distance measurement positions are the first point and the second point. Further, the distance indicated by the distance measurement position may be, for example, the distance indicated by the line segment when the distance measurement position is a line segment.
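Concretely, the distance indicated by a measurement position reduces to point-to-point arithmetic: the two-point case is a single Euclidean distance, and the line-segment case is the summed length over consecutive points. A small sketch in plain Python, with illustrative function names:

```python
import math

def point_distance(p, q):
    """Euclidean distance between two measurement points (2D or 3D tuples)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def polyline_length(points):
    """Distance indicated by a line-segment position: sum of consecutive distances."""
    return sum(point_distance(p, q) for p, q in zip(points, points[1:]))
```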
  • the processor 33 may be configured to measure at least one of the length, sleeve length, sleeve length from the center of the back, width of the body, waist, and shoulder width when the captured image for measurement includes a top.
  • the processor 33 may be configured to measure the distance between points 5 and 6 as the length dimension. Further, as shown in FIG. 4, the processor 33 may be configured to measure the distance between points 1 and 9 (and/or the distance between points 2 and 10) as the sleeve length dimension.
  • the processor 33 may be configured to measure, as the sleeve length from the center of the back, the distance obtained by adding the distance between points 1 and 9 and the distance between points 5 and 9 (and/or the distance obtained by adding the distance between points 2 and 10 and the distance between points 5 and 10).
  • the processor 33 may be configured to measure the distance between points 3 and 4 as the dimension of the width of the body. Further, as shown in FIG. 4, the processor 33 may be configured to measure the distance between points 7 and 8 as the waist dimension, and the distance between points 9 and 10 as the dimension of the shoulder width.
  • the processor 33 may be configured to measure at least one dimension among the rise, inseam, total length, waist, width, knee width, and hem width when the captured image for measurement includes a lower garment.
  • as shown in FIG. 5, the processor 33 may be configured to measure the distance between points 1 and 5 (and/or the distance between points 2 and 6) as the rise dimension, the distance between points 1 and 3 (and/or the distance between points 2 and 4) as the inseam dimension, and the distance between points 3 and 5 (and/or the distance between points 4 and 6) as the total length dimension. Further, the processor 33 may be configured to measure twice the distance between points 5 and 6 as the waist dimension.
  • as shown in FIG. 5, the processor 33 may be configured to measure twice the distance between points 1 and 14 (and/or the distance between points 2 and 13) as the dimension of the width, twice the distance between points 7 and 8 (and/or the distance between points 9 and 10) as the knee width dimension, and twice the distance between points 3 and 11 (and/or the distance between points 4 and 12) as the hem width dimension.
  • the processor 33 is configured to measure at least one dimension of height, width, and depth when a three-dimensional object is included in the captured image for measurement.
  • the distance indicated by the distance measurement position may be the distance between two points on the surface of the subject, which is a three-dimensional object, or the distance between two points on an edge of the subject that can be regarded as a flat surface.
  • the processor 33 may be configured to measure, as the height dimension, the distance between points 1 and 6 (and/or the distance between points 2 and 5, and/or the distance between points 3 and 4). Further, as shown in FIG. 6, the processor 33 may be configured to measure the distance between points 1 and 2 as the width dimension and the distance between points 2 and 3 as the depth dimension.
  • the processor 33 may be configured to measure the height of humans or animals when the captured image for measurement includes humans or animals. Further, the processor 33 may be configured to measure the size of fish or plants when the captured image for measurement includes fish or plants.
  • the processor 33 may identify the first point and the second point based on the captured image captured by the camera 31 and the trained model acquired from the management server 10, and may identify the three-dimensional information of the surrounding environment and the position of the image pickup apparatus 30 based on the captured image captured by the camera 31.
  • the processor 33 may identify the distance from the image pickup device 30 to the first point and the distance from the image pickup device 30 to the second point based on the three-dimensional information of the surrounding environment, the position of the image pickup device 30, and the first and second points.
  • the processor 33 may be configured to measure the distance between the points (that is, between the distance measurement positions) using, for example, Visual SLAM (Simultaneous Localization and Mapping) technology.
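SLAM internals aside, once the pipeline yields a depth for each measurement point together with the camera intrinsics, the two points can be back-projected into 3D and their separation taken directly. A sketch under those assumptions (pinhole camera model; depths d1, d2 supplied by the SLAM pipeline; names are illustrative):

```python
import numpy as np

def backproject(pixel, depth, fx, fy, cx, cy):
    """Pinhole back-projection of an image pixel with known depth into camera coordinates."""
    u, v = pixel
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def measure_between_pixels(p1, p2, d1, d2, intrinsics):
    """Assumed inputs: d1/d2 are depths for the two measurement points, e.g. recovered
    by a Visual SLAM pipeline together with the device pose."""
    fx, fy, cx, cy = intrinsics
    P1 = backproject(p1, d1, fx, fy, cx, cy)
    P2 = backproject(p2, d2, fx, fy, cx, cy)
    return float(np.linalg.norm(P1 - P2))
```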
  • the processor 33 may be configured to detect the manufacturer or brand from tag information included in the captured image for measurement, to acquire the size chart data of that manufacturer or brand from the storage unit 34 or via the Internet, and to specify the distance indicated by the distance measurement position based on the size chart data.
  • when the processor 33 can acquire the distance indicated by the distance measurement position in this way, the distance specified based on the size chart data may be preferentially adopted even if the distance indicated by the distance measurement position can be measured based on the trained model acquired from the management server 10.
  • the storage unit 34 is configured to store the size chart data of the above-mentioned manufacturer or brand.
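A minimal sketch of the described priority rule (size chart first, model estimate as fallback); the chart contents, brand name, and key names are illustrative assumptions:

```python
# Hypothetical size-chart store: brand -> measurement name -> centimeters.
SIZE_CHARTS = {
    "ExampleBrand": {"shoulder_width": 44.0, "chest_circumference": 100.0},
}

def resolve_distance(brand, measurement, model_estimate_cm):
    """Prefer the size-chart value when the brand was detected from tag information;
    otherwise fall back to the distance measured via the trained model."""
    chart = SIZE_CHARTS.get(brand, {})
    return chart.get(measurement, model_estimate_cm)
```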
  • the image pickup apparatus 30 can list a target product for sale on the market 2 composed of the communication network.
  • the image pickup device 30 may be, for example, a communication terminal such as a so-called smartphone or a portable communication terminal such as a tablet.
  • the imaging device 30 is configured to be able to input or select, in response to a user operation, the target product image and the distance indicated by the distance measurement position as target product information, and to list the target product by uploading the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
  • the target product image may be the captured image captured for measuring the distance indicated by the distance measurement position, may be another captured image in which the subject included in that captured image is captured, or may be the captured image according to the fourth embodiment described later.
  • the exhibiting device 40 can list the target product on the market 2 composed of the communication network.
  • the exhibiting device 40 may be, for example, a laptop computer, a desktop computer, a smart speaker, or the like.
  • the exhibiting device 40 is configured to acquire the target product image and the distance indicated by the distance measurement position from the imaging device 30 in response to a user operation, to input or select them as target product information, and to list the target product by uploading the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
  • the imaging device 30 and the exhibiting device 40 may overlay the distance indicated by the distance measurement position on the captured image for measurement, and may upload the captured image overlaid with the distance indicated by the distance measurement position to the market 2 as the product image in the target product information including the distance indicated by the distance measurement position.
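One way to render such an overlay, sketched with standard OpenCV drawing calls; the colors, font, and centimeter formatting are assumptions for illustration:

```python
import cv2

def overlay_measurement(image, p1, p2, distance_cm):
    """Draw the measured segment and its length onto the captured image (BGR, in place).
    p1 and p2 are integer (x, y) pixel coordinates of the two measurement points."""
    cv2.line(image, p1, p2, (0, 255, 0), 2)
    midpoint = ((p1[0] + p2[0]) // 2, (p1[1] + p2[1]) // 2)
    cv2.putText(image, f"{distance_cm:.1f} cm", midpoint,
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return image
```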
  • FIG. 11 is a sequence diagram showing an example of the operation of the image processing system 1 according to the first embodiment.
  • in step S1001, the management server 10 generates a trained model using the training data (learning data set), which is a set of the captured image as input data and the distance measurement position as output (label), and in step S1002, transmits the trained model to the image pickup apparatus 30.
  • in step S1003, the imaging device 30 specifies the distance measurement position according to the subject included in the captured image for measurement based on the captured image for measurement captured by the camera 31 and the trained model acquired from the management server 10, and in step S1004, measures the distance indicated by the distance measurement position.
  • in step S1005, the imaging device 30 inputs or selects the target product image and the distance indicated by the distance measurement position as target product information in response to a user operation, and in step S1006, uploads the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
  • in step S1007, such target product information is listed on the market 2.
  • as described above, since the machine learning device 20 provided in the management server 10 performs machine learning using the captured image as input data and the distance measurement position as a label, the desired dimensions of a subject included in a captured image can be measured automatically.
  • the communication interface 11 is configured to acquire the captured image for measurement captured by the camera 31 from the image pickup device 30 using a wireless line or a wired line. Further, in the management server 10, the communication interface 11 is configured to transmit the distance measurement position acquired from the processor 12 and the distance indicated by the distance measurement position to the image pickup apparatus 30 using a wireless line or a wired line.
  • the processor 12 is configured to specify the distance measurement position according to the shape of the subject included in the captured image for measurement, based on the captured image for measurement acquired by the communication interface 11 and the trained model acquired from the machine learning device 20, and to measure the distance indicated by the distance measurement position.
  • FIG. 12 is a sequence diagram showing an example of the operation of the image processing system 1 according to the second embodiment.
  • in step S2001, the management server 10 generates a trained model using the training data (learning data set), which is a set of the captured image as input data and the distance measurement position as output (label).
  • in step S2002, the imaging device 30 captures the captured image for measurement with the camera 31, and in step S2003, transmits the captured image for measurement to the management server 10.
  • in step S2004, the management server 10 specifies the distance measurement position according to the subject included in the captured image for measurement, based on the captured image for measurement acquired from the image pickup device 30 and the generated trained model, and measures the distance indicated by the distance measurement position.
  • in step S2005, the captured image for measurement and the distance indicated by the distance measurement position are transmitted to the image pickup device 30.
  • in step S2006, the imaging device 30 inputs or selects the target product image and the distance indicated by the distance measurement position as target product information in response to a user operation, and in step S2007, uploads the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
  • in step S2008, such target product information is listed on the market 2.
  • as described above, since the machine learning device 20 provided in the management server 10 performs machine learning using the captured image as input data and the distance measurement position as a label, the desired dimensions of a subject included in a captured image can be measured automatically.
  • the machine learning device 20 is provided in the image pickup device 30. Further, as shown in FIG. 14, the image pickup apparatus 30 includes a machine learning device 20, a camera 31, and a processor 33.
  • the processor 33 is configured to specify the distance measurement position according to the subject included in the captured image for measurement, based on the captured image for measurement captured by the camera 31 and the trained model acquired from the machine learning device 20, and to measure the distance indicated by the distance measurement position.
  • FIG. 15 is a sequence diagram showing an example of the operation of the image processing system 1 according to the third embodiment.
  • in step S3001, the image pickup apparatus 30 generates a trained model using the training data (learning data set), which is a set of the captured image as input data and the distance measurement position as output (label).
  • in step S3002, the image pickup apparatus 30 captures the captured image for measurement with the camera 31.
  • in step S3003, the image pickup apparatus 30 specifies the distance measurement position according to the subject included in the captured image for measurement, based on the captured image for measurement captured by the camera 31 and the generated trained model, and measures the distance indicated by the distance measurement position.
  • in step S3004, the imaging device 30 inputs or selects the target product image and the distance indicated by the distance measurement position as target product information in response to a user operation, and in step S3005, uploads the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
  • in step S3006, such target product information is listed on the market 2.
  • as described above, since the machine learning device 20 provided in the imaging device 30 performs machine learning using the captured image as input data and the distance measurement position as a label, the desired dimensions of a subject included in a captured image can be measured automatically.
  • FIG. 16 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fourth embodiment.
  • the image processing system 1 according to the fourth embodiment includes a management server 10, an image pickup device 30, an exhibiting device 40, and a market 2 composed of a communication network.
  • the management server 10 includes a communication interface 11, a processor 12, an image processor 50, and a machine learning device 20.
  • the communication interface 11 is configured to acquire an image captured by the camera 31 from the image pickup device 30 using a wireless line or a wired line. Further, the communication interface 11 is configured to transmit the distance indicated by the distance measurement position acquired from the processor 12 to the image pickup apparatus 30 by using a wireless line or a wired line.
  • the processor 12 is configured to specify the distance measurement position according to the subject included in the captured image acquired by the communication interface 11 and to measure the distance indicated by the distance measurement position.
  • the image processor 50 includes a processor 51 and a storage unit 52.
  • the processor 51 detects the feature component of the captured image acquired by the communication interface 11.
  • the processor 51 may further identify the subject included in the captured image. Detection of feature components includes edge detection.
  • the processor 51 can detect the edge of the captured image by applying various methods.
  • the various methods may be methods using the first derivative or the second derivative.
  • Techniques that use the first derivative include, for example, Sobel filters and Prewitt filters.
  • Techniques that use the second derivative include, for example, Laplacian filters.
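The three filters named above are standard. A sketch applying them to a grayscale image with OpenCV (OpenCV has no built-in Prewitt operator, so it is applied here via filter2D; the function name is an illustrative assumption):

```python
import cv2
import numpy as np

def detect_edges(gray):
    """Feature-component (edge) detection with first- and second-derivative filters."""
    # Sobel: first-derivative gradients in x and y, combined into a magnitude map.
    sobel = cv2.magnitude(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3),
                          cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3))
    # Prewitt: same idea with uniform-weight kernels.
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
    g = gray.astype(np.float64)
    prewitt = cv2.magnitude(cv2.filter2D(g, -1, kx), cv2.filter2D(g, -1, kx.T))
    # Laplacian: second derivative, zero-crossings mark edges.
    laplacian = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    return sobel, prewitt, laplacian
```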
  • the processor 51 is configured to specify the distance measurement position according to the subject included in the captured image based on the detected feature component.
  • the distance measurement position may include at least a first point and a second point different from the first point. Further, the distance measurement position may be a line segment including a first point and a second point different from the first point.
  • the line segment may be a straight line or a curved line.
  • the processor 51 may specify the subject area (segmentation image) based on the detected feature component.
  • An image showing a subject area is also called a segmentation image.
  • the processor 51 may specify the contour of the subject based on the detected feature component.
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be specifying a point on the subject region or on the contour of the subject that is located at predetermined coordinates with respect to the entire captured image or the subject region (segmentation image).
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be specifying a point or region that satisfies a specific condition and is located on the contour (or around the contour) of the subject.
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be specifying a straight line connecting two points in the subject region (segmentation image) that satisfies a specific condition.
  • specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be specifying a point located on the contour (or around the contour) where the contour intersects the extension of a line segment along at least a part of the contour (or the periphery of the contour) of the subject specified based on the feature component.
  • the processor 51 may be configured to specify the distance measurement position according to the subject included in the captured image based on the detected feature component and the specified subject. For example, the processor 51 may determine a condition to be satisfied by the feature component detected in order to specify the distance measurement position according to the specified subject.
  • when the processor 51 identifies that the subject included in the captured image indicates a T-shirt, the processor 51 may specify, as the distance measurement positions, at least one of: points 5 and 6 for measuring the length; points 1 and 9 (or points 2 and 10) for measuring the sleeve length; points 1, 5 and 9 (or points 2, 5 and 10) for measuring the sleeve length from the center of the back; points 3 and 4 for measuring the width of the body (chest circumference); points 7 and 8 for measuring the waist (waist circumference); and points 9 and 10 for measuring the shoulder width.
  • the processor 51 may specify the distance measurement positions, based on the identified subject and the detected feature component, so that at least one dimension among the length, sleeve length, sleeve length from the center of the back, width of the body (that is, chest circumference), waist (that is, waist circumference), and shoulder width is measured.
  • the waist referred to here is the circumference of the waist portion of the upper garment where the distance is the shortest.
  • the X-axis corresponds to the left-right direction of the captured image, and the Y-axis corresponds to the vertical direction of the captured image.
  • when the processor 51 can identify the subject included in the captured image, the X-axis may correspond to the left-right direction of the subject, and the Y-axis may correspond to the vertical direction of the subject.
  • points 1 and 2 are end points in the X-axis direction (horizontal direction) of the subject Z included in the captured image. That is, points 1 and 2 are points located above the tip of the sleeve (that is, the tip of the sleeve on the shoulder side).
  • the points 1 and 2 are examples of points on the subject area or the outline of the subject located at predetermined coordinates with respect to the entire captured image or the subject area (segmentation image).
  • Points 3 and 4 are points on the lower side of the base of the sleeve (that is, the base of the sleeve on the side). Points 3 and 4 are examples of points or regions that satisfy specific conditions and are located on the contour (or around the contour) of the subject.
  • the specific condition is that the characteristic component exists in the negative direction (left direction) in the X-axis direction and the negative direction (downward direction) in the Y-axis at the point 3.
  • the specific condition is that the characteristic component exists in the positive direction (right direction) in the X-axis direction and the negative direction (downward direction) in the Y-axis at the point 4.
  • Points 5 and 6 are intersections of the perpendicular bisector L2 of the straight line connecting the points 3 and 4 and the boundary line X1 in the subject Z. That is, the point 5 is a point located at the base of the collar and the center of the subject Z, and the point 6 is a point located at the tip of the hem and the center of the subject Z.
  • the points 5 and 6 are examples of points on the subject area or the outline of the subject located at predetermined coordinates with respect to the entire captured image or the subject area (segmentation image).
  • the straight line connecting points 7 and 8 is, among the straight lines parallel to the X-axis that connect end points of the subject Z on the negative side (below) of points 3 and 4 in the Y-axis direction, the straight line whose length is the smallest.
  • the straight line connecting points 7 and 8 is an example of a straight line connecting two points in the subject region (segmentation image) that satisfies a specific condition.
  • in this case, the specific condition is that the length is the smallest.
  • Point 9 is the intersection of the straight line L4 passing through the points 3 and 7 and the boundary line X1 in the subject Z.
  • the point 10 is the intersection of the straight line L5 passing through points 4 and 8 and the boundary line X1 in the subject Z. That is, points 9 and 10 are points located on the upper side of the base of the sleeve (that is, the base of the sleeve on the shoulder side). Points 9 and 10 are examples of points located on the contour (or around the contour) where the contour intersects the extension of a line segment along at least a part of the contour (or the periphery of the contour) of the subject specified based on the feature component.
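To make the geometric conditions concrete, the following sketch recovers points 1 and 2 (the horizontal extrema of the subject region) and points 7 and 8 (the narrowest row below a given height, matching the "smallest length" condition) from a binary segmentation mask. The mask input and helper names are assumptions for illustration:

```python
import numpy as np

def horizontal_extrema(mask):
    """Points 1 and 2: the end points of the subject region in the X-axis direction."""
    ys, xs = np.nonzero(mask)
    left = (int(xs.min()), int(ys[xs.argmin()]))
    right = (int(xs.max()), int(ys[xs.argmax()]))
    return left, right

def narrowest_row(mask, below_y):
    """Points 7 and 8: on the row below below_y where the subject is narrowest,
    the leftmost and rightmost subject pixels."""
    best = None
    for y in range(below_y + 1, mask.shape[0]):
        xs = np.nonzero(mask[y])[0]
        if xs.size == 0:
            continue
        width = xs.max() - xs.min()
        if best is None or width < best[0]:
            best = (width, (int(xs.min()), y), (int(xs.max()), y))
    return None if best is None else (best[1], best[2])
```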
  • the processor 51 may specify points 1 to 10 as distance measurement positions when it identifies that the subject included in the captured image indicates a T-shirt, or it may specify points 1 to 10 based only on the detected feature component without identifying that the subject indicates a T-shirt.
  • when the processor 51 identifies that the subject included in the captured image indicates pants, the processor 51 may specify, as the distance measurement positions, at least one of: points 1 and 5 (or points 2 and 6) for measuring the rise; points 1 and 3 (or points 2 and 4) for measuring the inseam; points 3 and 5 (or points 4 and 6) for measuring the total length; points 1 and 14 (or points 2 and 13) for measuring the width (that is, thigh circumference); points 5 and 6 for measuring the waist (that is, waist circumference); points 7 and 8 (or points 9 and 10) for measuring the knee width; and points 3 and 11 (or points 4 and 12) for measuring the hem width.
  • in this case, the processor 51 may specify the distance measurement positions so that at least one dimension among the rise, inseam, total length, waist (that is, waist circumference), width (that is, thigh circumference), knee width, and hem width is measured.
  • points 4 and 12 are points located at both ends of the hem portion.
  • Points 2 and 13 are points located at both ends of the portion corresponding to the wearer's thighs.
  • Points 9 and 10 are points located at both ends of the portion corresponding to the wearer's knee.
  • Points 5 and 6 are points located at both ends of the portion corresponding to the wearer's torso.
  • when it is identified that the subject included in the captured image indicates a bag, the distance measurement positions may be specified as at least one of: points 1 and 2 for measuring the width on the bottom surface; points 2 and 3 for measuring the depth on the bottom surface; points 1 and 6, points 2 and 5, or points 3 and 4 for measuring the height; points 5 and 6 for measuring the width on the upper surface; and points 4 and 5 for measuring the depth on the upper surface.
  • in this case, the processor 51 may specify the distance measurement positions so that at least one dimension among the height, width, and depth is measured.
  • the three-dimensional object may include at least one of furniture, home appliances and a bag.
  • the processor 51 may be configured to identify a point for measuring the height of a human or animal, or the size of a fish or plant.
  • the storage unit 52 is composed of a storage device including a RAM (Random Access Memory) or a ROM (Read Only Memory), or an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the distance measurement position specified by the processor 51.
  • the machine learning device 20 includes an acquisition unit 21, a processor 22, and a storage unit 23.
  • the acquisition unit 21 is configured to acquire the captured image for learning and the name of the subject corresponding to the captured image for learning as a set of input data and a label.
  • the processor 22 is configured to learn what the subject included in the captured image indicates by learning the set of the input data and the label as training data.
  • the processor 22 may be configured to process the above-mentioned training data through a multi-layer structure, that is, to generate, by deep learning, a trained model for indicating what the subject included in the captured image shows.
  • the storage unit 23 is composed of a storage device such as an FRAM (registered trademark) (Ferroelectric Random Access Memory) or an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the trained model generated by the processor 22.
  • the processor 51 of the image processor 50 may be configured to specify what the subject included in the captured image indicates based on the trained model of the machine learning device 20.
  • the processor 12 of the management server 10 may be configured to measure the distance indicated by the distance measurement position based on the distance measurement position specified by the image processor 50.
  • the processor 12 may be configured to measure each dimension using the captured image by the same measurement method as described above, which is performed when the processor 33 of the image pickup apparatus 30 according to the first embodiment measures each dimension using the captured image for measurement.
  • the processor 12 may be configured to detect the manufacturer or brand from tag information included in the captured image and to specify the distance indicated by the distance measurement position from the size chart of that manufacturer or brand.
  • the image pickup device 30 and the exhibiting device 40 can list the target product on the market 2 composed of the communication network.
  • the imaging device 30 and the exhibiting device 40 may have, for example, the same configuration as in the first embodiment described above, in which the target product can be listed on the market 2 composed of the communication network.
  • FIG. 19 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fourth embodiment.
  • in step S4001, the imaging device 30 captures a captured image including the subject with the camera 31, and in step S4002, transmits the captured image to the management server 10.
  • in step S4003, the management server 10 specifies the distance measurement position according to the subject included in the acquired captured image and measures the distance indicated by the distance measurement position, and in step S4005, transmits the distance indicated by the distance measurement position to the image pickup device 30.
  • in step S4006, the image pickup apparatus 30 inputs or selects the captured image and the distance indicated by the distance measurement position as target product information in response to a user operation, and in step S4007, uploads the target product information including the captured image and the distance indicated by the distance measurement position to the market 2.
  • in step S4008, such target product information is listed on the market 2.
  • as described above, since the image processor 50 provided in the management server 10 specifies the distance measurement position according to the subject included in the captured image, the desired dimensions of the subject included in the captured image can be measured automatically.
  • accordingly, even a portable communication terminal or the like can easily and automatically measure a desired dimension of a subject included in the captured image.
  • the image processor 50 is provided not in the management server 10 but in the image pickup device 30.
  • the image pickup apparatus 30 includes a communication interface 32, a camera 31, a processor 33, and an image processor 50.
  • the communication interface 32 is configured to be able to communicate with the management server 10 and the communication network (market) 2 using a wireless line or a wired line, and the camera 31 is configured to be able to acquire a captured image including the subject.
  • the processor 33 is configured to specify the distance measurement position according to the subject included in the captured image captured by the camera 31 and to measure the distance indicated by the distance measurement position, based on the processing of the image processor 50.
  • the image processor 50 is configured to identify what the subject indicates based on the captured image captured by the camera 31, to detect the feature component of the subject, and to specify the distance measurement position according to the subject included in the captured image based on the identified subject and the detected feature component.
  • FIG. 22 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fifth embodiment.
  • in step S5001, the management server 10 generates a trained model for indicating what the subject included in the captured image shows, by learning the set of the input data and the label as training data, and in step S5002, transmits the trained model to the image pickup apparatus 30.
  • in step S5002, the image pickup apparatus 30 identifies what the subject included in the captured image indicates based on the received trained model, detects the feature component of the subject, specifies the distance measurement position corresponding to the subject included in the captured image based on the identified subject and the detected feature component, and in step S5003, measures the distance indicated by the distance measurement position.
  • in step S5004, the image pickup apparatus 30 inputs or selects the captured image and the distance indicated by the distance measurement position as target product information in response to a user operation, and in step S5005, uploads the target product information including the captured image and the distance indicated by the distance measurement position to the market 2.
  • in step S5006, such target product information is listed on the market 2.
  • as described above, since the image processor 50 provided in the image pickup apparatus 30 specifies the distance measurement position according to the subject included in the captured image, the desired dimensions of the subject included in the captured image can be measured automatically.
  • the image processor 50 and the machine learning device 20 are provided in the image pickup device 30. Further, as shown in FIG. 24, the image pickup device 30 includes an image processor 50, a machine learning device 20, a communication interface 32, a camera 31, and a processor 33.
  • the processor 33 may be configured to identify the three-dimensional information of the surrounding environment and the position of the imaging device 30 based on the captured image captured by the camera 31, to identify the distance from the image pickup device 30 to the first point and the distance from the image pickup device 30 to the second point based on the three-dimensional information of the surrounding environment, the position of the imaging device 30, the first point, and the second point, and to specify the distance from the first point to the second point based on the distance from the image pickup device 30 to the first point and the distance from the image pickup device 30 to the second point. That is, the processor 33 may be configured to measure the distance between the points (that is, between the distance measurement positions) using, for example, Visual SLAM (Simultaneous Localization and Mapping) technology.
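With only the two device-to-point distances, recovering the point-to-point distance additionally needs the angle between the two viewing rays, which follows from the pixel coordinates and camera intrinsics. A sketch of that last step (law of cosines; the angle argument is assumed to be available from the camera geometry):

```python
import math

def distance_between_points(d1, d2, theta_rad):
    """Law of cosines: distance between the first and second points, given the
    distances d1, d2 from the imaging device and the angle between the two rays."""
    return math.sqrt(d1 * d1 + d2 * d2 - 2.0 * d1 * d2 * math.cos(theta_rad))
```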
  • FIG. 25 is a sequence diagram showing an example of the operation of the image processing system 1 according to the present embodiment.
  • in step S6001, the image pickup apparatus 30 generates a trained model for indicating what the subject included in the captured image shows, by learning the set of the input data and the label as training data. Then, in step S6002, the image pickup apparatus 30 identifies what the subject included in the captured image indicates based on the trained model, detects the feature component of the subject, specifies the distance measurement position corresponding to the subject included in the captured image based on the identified subject and the detected feature component, and measures the distance indicated by the distance measurement position.
  • in step S6003, the image pickup apparatus 30 inputs or selects the captured image and the distance indicated by the distance measurement position as target product information in response to a user operation, and in step S6004, uploads the target product information including the captured image and the distance indicated by the distance measurement position to the market 2.
  • in step S6005, the target product information is listed on the market 2.
  • as described above, since the image processor 50 provided in the image pickup apparatus 30 specifies the distance measurement position according to the subject included in the captured image, the desired dimensions of the subject included in the captured image can be measured automatically.
  • a program that causes a computer to execute each process performed by the image pickup device 30, the management server 10, the image processor 50, and the machine learning device 20 may be provided.
  • the program may be recorded on a computer-readable medium.
  • Computer-readable media can be used to install programs on a computer.
  • the computer-readable medium on which the program is recorded may be a non-transient recording medium.
  • the non-transient recording medium is not particularly limited, but may be, for example, a recording medium such as a CD-ROM or a DVD-ROM.
  • the image pickup device 30 is not limited to the device capable of listing the target product on the market 2.
  • the image pickup device 30 may be any device that can present to the user at least a distance specified according to the subject included in the captured image.


Abstract

An image processing system 1 is equipped with a machine learner 20. The machine learner 20 acquires, as a combination of input data and a label, a captured image used for learning and distance measurement positions corresponding to the captured image used for learning, and carries out learning using combinations of input data and labels as training data to learn the distance measurement positions for a subject included in a captured image.

Description

Image processing system, machine learning device, image processing device and imaging device
The present disclosure relates to an image processing system, a machine learning device, an image processing device, and an imaging device.
Conventionally, there is known an electronic camera capable of easily grasping a subject and its size (length) in an image by photographing the subject (see, for example, Patent Document 1).
Japanese Unexamined Patent Publication No. 2005-142938 (Patent Document 1)
 第1の態様に係る画像処理システムは、機械学習器を備え、前記機械学習器は、学習用の撮像画像及び前記学習用の撮像画像に対応する距離測定位置を、入力データとラベルとの組として取得し、前記入力データと前記ラベルとの組を訓練データとして学習を行うことにより、撮像画像に含まれる被写体に応じた前記距離測定位置を学習する。 The image processing system according to the first aspect includes a machine learning device, and the machine learning device sets a captured image for learning and a distance measurement position corresponding to the captured image for learning as a set of input data and a label. By learning the pair of the input data and the label as training data, the distance measurement position corresponding to the subject included in the captured image is learned.
 The machine learning device according to the second aspect includes at least one processor. The processor acquires a captured image for learning and a distance measurement position corresponding to the captured image for learning as a set of input data and a label, and learns the distance measurement position according to the subject included in a captured image by performing learning using the set of the input data and the label as training data.
 The image pickup apparatus according to the third aspect includes a camera, a processor, and a communication interface. The communication interface acquires a trained model of a machine learning device that learns a distance measurement position according to a subject included in a captured image. The processor specifies the distance measurement position according to the subject included in the captured image based on the captured image captured by the camera and the trained model, and measures the distance indicated by the distance measurement position.
 The image processing system according to the fourth aspect includes an image processor. The image processor detects a feature component of a captured image and specifies a distance measurement position according to the subject based on the feature component.
 The image processor according to the fifth aspect includes at least one processor. The processor detects a feature component of a captured image and specifies a distance measurement position according to the subject based on the feature component.
 The image pickup apparatus according to the sixth aspect includes the image processor according to the fifth aspect, a camera, and a processor. The image processor detects a feature component of the captured image captured by the camera and specifies a distance measurement position according to the subject based on the feature component.
FIG. 1 is a diagram showing an example of the overall configuration of the image processing system 1 according to the first embodiment.
FIG. 2 is a diagram showing an example of the functional blocks of the machine learning device 20 according to the first embodiment.
FIG. 3 is a diagram for explaining an example of the functions of the machine learning device 20 shown in FIG. 2.
FIG. 4 is a diagram for explaining an example of the functions of the machine learning device 20 shown in FIG. 2.
FIG. 5 is a diagram for explaining an example of the functions of the machine learning device 20 shown in FIG. 2.
FIG. 6 is a diagram for explaining an example of the functions of the machine learning device 20 shown in FIG. 2.
FIG. 7 is a diagram showing an example of the functional blocks of the management server 10 according to the first embodiment.
FIG. 8 is a diagram showing an example of the functional blocks of the image pickup apparatus 30 according to the first embodiment.
FIG. 9 is a diagram for explaining an example of the functions of the image pickup apparatus 30 shown in FIG. 8.
FIG. 10 is a diagram for explaining an example of the functions of the image pickup apparatus 30 shown in FIG. 8.
FIG. 11 is a sequence diagram showing an example of the operation of the image processing system 1 according to the first embodiment.
FIG. 12 is a sequence diagram showing an example of the operation of the image processing system 1 according to the second embodiment.
FIG. 13 is a diagram showing an example of the overall configuration of the image processing system 1 according to the third embodiment.
FIG. 14 is a diagram showing an example of the functional blocks of the image pickup apparatus 30 according to the third embodiment.
FIG. 15 is a sequence diagram showing an example of the operation of the image processing system 1 according to the third embodiment.
FIG. 16 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fourth embodiment.
FIG. 17 is a diagram showing an example of the functional blocks of the management server 10 according to the fourth embodiment.
FIG. 18 is a diagram showing an example of the functional blocks of the image processor 50 according to the fourth embodiment.
FIG. 19 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fourth embodiment.
FIG. 20 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fifth embodiment.
FIG. 21 is a diagram showing an example of the functional blocks of the image pickup apparatus 30 according to the fifth embodiment.
FIG. 22 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fifth embodiment.
FIG. 23 is a diagram showing an example of the overall configuration of the image processing system 1 according to the sixth embodiment.
FIG. 24 is a diagram showing an example of the functional blocks of the image pickup apparatus 30 according to the sixth embodiment.
FIG. 25 is a sequence diagram showing an example of the operation of the image processing system 1 according to the sixth embodiment.
 The electronic camera described above has a problem in that the size of the subject cannot be calculated automatically.
 Therefore, the present disclosure has been made in view of the above problem, and an object of the present disclosure is to make it possible to measure a desired length of a subject included in a captured image with a simpler operation.
 Hereinafter, each embodiment will be described with reference to the drawings. In the description of the drawings below, the same or similar parts are designated by the same or similar reference numerals.
 However, it should be noted that the drawings are schematic and the ratios of the dimensions may differ from the actual ones. Therefore, specific dimensions and the like should be determined in consideration of the following description. In addition, it goes without saying that the drawings may include parts whose dimensional relationships or ratios differ from one drawing to another.
 (First Embodiment)
 Hereinafter, the first embodiment will be described with reference to FIGS. 1 to 11.
 FIG. 1 is a diagram showing an example of the overall configuration of the image processing system 1 according to the first embodiment. As shown in FIG. 1, the image processing system 1 according to the first embodiment includes a management server 10, an image pickup device 30, an exhibition device 40, and a market 2 composed of a communication network.
 Further, in the image processing system 1 according to the first embodiment, the management server 10 includes a machine learning device 20.
 As shown in FIG. 2, the machine learning device 20 includes an acquisition unit 21, a processor 22, and a storage unit 23.
 The acquisition unit 21 is configured to acquire a captured image for learning and a distance measurement position corresponding to the captured image for learning as a set of input data and a label.
 The processor 22 is configured to learn the distance measurement position according to the subject included in a captured image by performing learning using the set of the input data and the label as training data.
 Specifically, as shown in FIG. 3, the processor 22 is configured to generate a trained model using training data (a learning data set), which is a set of a captured image as the input data and a distance measurement position as the output (label).
 Here, the processor 22 may be configured to process the above-described training data in a multi-layer structure, that is, to generate the trained model by deep learning.
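 As a purely illustrative sketch (not part of the disclosure), such supervised learning could be set up as follows in PyTorch, with a small network that regresses the normalized (x, y) coordinates of the measurement points from an image; the network shape, the point count, and all names are hypothetical.

    import torch
    import torch.nn as nn

    NUM_POINTS = 10  # e.g. points 1 to 10 of FIG. 4 (hypothetical choice)

    # Hypothetical regressor: a captured image (3 x 224 x 224) in, the
    # normalized (x, y) coordinates of NUM_POINTS measurement positions out.
    class MeasurementPointNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, NUM_POINTS * 2)

        def forward(self, x):
            return self.head(self.features(x).flatten(1)).view(-1, NUM_POINTS, 2)

    model = MeasurementPointNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One training step on a batch of (input data, label) pairs: captured
    # images for learning and their distance measurement positions.
    images = torch.randn(8, 3, 224, 224)   # stand-in batch of captured images
    labels = torch.rand(8, NUM_POINTS, 2)  # stand-in labeled point coordinates
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()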
 Further, the input data may include identification data indicating the category of the subject. The category of the subject may be, for example, a top such as a T-shirt. The category of the subject may be, for example, a bottom such as pants and/or a skirt. The category of the subject may be, for example, a three-dimensional object such as furniture, a home appliance, and/or a bag.
 Further, the distance measurement position may include at least a first point and a second point different from the first point. The distance measurement position may also be a line segment including the first point and the second point different from the first point. The line segment may be a straight line or a curved line.
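 For illustration only, a distance measurement position of this kind could be represented as in the following sketch (the class and field names are hypothetical); a straight segment stores exactly two points, while a curved segment can be approximated by a longer polyline.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class MeasurementPosition:
        name: str                          # e.g. "body length"
        points: List[Tuple[float, float]]  # (x, y) image coordinates

        def is_valid(self) -> bool:
            # At least a first point and a second point different from it.
            return len(self.points) >= 2 and self.points[0] != self.points[1]

    body_length = MeasurementPosition("body length", [(120.0, 40.0), (118.0, 300.0)])
    print(body_length.is_valid())  # True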
 For example, as shown in FIG. 4, the processor 22 may learn, as the distance measurement positions according to the subject included in the captured image (input data) including identification data indicating a T-shirt as the category, at least one of points 5 and 6 for measuring the body length, points 1 and 9 (or points 2 and 10) for measuring the sleeve length, points 1, 5, and 9 (or points 2, 5, and 10) for measuring the sleeve length from the center back, points 3 and 4 for measuring the body width (chest circumference), points 7 and 8 for measuring the waist (or waist circumference), and points 9 and 10 for measuring the shoulder width.
 That is, as shown in FIG. 4, the processor 22 may learn the distance measurement positions such that, when a top is included in the captured image for measurement, at least one of the body length, the sleeve length, the sleeve length from the center back, the body width (that is, the chest circumference), the waist (that is, the waist circumference), and the shoulder width is measured. The waist referred to here is the circumference of the torso portion of the top at which the distance is shortest.
 For example, in FIG. 4, points 1 and 2 are points located on the upper side of the tip of the sleeve (that is, the tip of the sleeve on the shoulder side). Points 3 and 4 are points on the lower side of the base of the sleeve (that is, the base of the sleeve on the armpit side). Point 5 is a point located at the base of the collar and at the center of the subject Z. Point 6 is a point located at the tip of the hem and at the center of the subject Z. Point 7 is the point closest to the center of the subject Z among the points on the boundary line X1 located between point 3 and point A, where point A is the one of the outermost hem points A and B located nearer to point 3. Point 8 is the point closest to the center of the subject Z among the points on the boundary line X1 located between point 4 and point B, where point B is the one of the outermost hem points A and B located nearer to point 4. Points 9 and 10 are points located on the upper side of the base of the sleeve (that is, the base of the sleeve on the shoulder side).
 Alternatively, as shown in FIG. 5, the processor 22 may be configured to learn, as the distance measurement positions according to the subject included in the captured image (input data) including identification data indicating pants as the category, at least one of points 1 and 5 (or points 2 and 6) for measuring the rise, points 1 and 3 (or points 2 and 4) for measuring the inseam, points 3 and 5 (or points 4 and 6) for measuring the total length, points 1 and 14 (or points 2 and 13) for measuring the thigh width (that is, the thigh circumference), points 5 and 6 for measuring the waist (that is, the waist circumference), points 7 and 8 (or points 9 and 10) for measuring the knee width, and points 3 and 11 (or points 4 and 12) for measuring the hem width.
 That is, as shown in FIG. 5, the processor 22 may learn the distance measurement positions such that, when a bottom is included in the captured image for measurement, at least one of the rise, the inseam, the total length, the waist (that is, the waist circumference), the thigh width (that is, the thigh circumference), the knee width, and the hem width is measured.
 For example, in FIG. 5, points 4 and 12 (or points 3 and 11) are points located at both ends of the hem portion. Points 2 and 13 (or points 1 and 14) are points located at both ends of the portion corresponding to the wearer's thigh. Points 9 and 10 (or points 7 and 8) are points located at both ends of the portion corresponding to the wearer's knee. Points 5 and 6 are points located at both ends of the portion corresponding to the wearer's torso.
 Alternatively, as shown in FIG. 6, the processor 22 may be configured to learn, as the distance measurement positions according to the subject included in the captured image (input data) including identification data indicating a bag as the category, at least one of points 1 and 2 for measuring the width of the bottom surface, points 2 and 3 for measuring the depth of the bottom surface, points 1 and 6 (or points 2 and 5, or points 3 and 4) for measuring the height, points 5 and 6 for measuring the width of the top surface, and points 4 and 5 for measuring the depth of the top surface.
 That is, as shown in FIG. 6, the processor 22 may learn the distance measurement positions such that, when a three-dimensional object is included in the captured image for measurement, at least one of the height, the width, and the depth is measured. Here, the three-dimensional object may include at least one of furniture, a home appliance, and a bag.
 In addition, the processor 22 may be configured to learn points for measuring the height of a human or an animal, or the size of a fish or a plant.
 Further, the processor 22 may be configured to correct, based on a user operation, the distance measurement position output in response to the acquisition of the captured image for measurement, and to perform further learning using the set of the captured image for measurement and the corrected distance measurement position as training data.
 Alternatively, the processor 22 may calculate a reward based on whether the distance measurement position output in response to the acquisition of the captured image for measurement has been corrected based on a user operation, and may update a function for specifying the distance measurement position based on the reward. That is, the processor 22 may be configured to perform reinforcement learning according to whether the distance measurement position is corrected based on a user operation.
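 As a loose sketch of this idea (not the disclosed implementation), the reward could simply be negative when the user moved any output point and positive otherwise; all values below are hypothetical.

    # Sketch: +1 if the user accepted the output points unchanged, -1 if any
    # point was moved by more than a small tolerance.
    def reward_for(output_points, corrected_points, tol=2.0):
        moved = any(
            abs(ox - cx) > tol or abs(oy - cy) > tol
            for (ox, oy), (cx, cy) in zip(output_points, corrected_points)
        )
        return -1.0 if moved else 1.0

    print(reward_for([(10, 10), (50, 80)], [(10, 10), (50, 80)]))  # 1.0
    print(reward_for([(10, 10), (50, 80)], [(12, 10), (50, 95)]))  # -1.0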
 The storage unit 23 is composed of a storage device including a RAM (Random Access Memory), a ROM (Read Only Memory), or the like, or an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the trained model generated by the processor 22.
 As shown in FIG. 7, in the image processing system 1 according to the first embodiment, the management server 10 includes the machine learning device 20, a communication interface 11, and a processor 12.
 The communication interface 11 is configured to transmit and receive predetermined information to and from the image pickup apparatus 30 using a wireless line or a wired line. In the first embodiment, the communication interface 11 is configured to transmit the trained model generated by the machine learning device 20 to the image pickup apparatus 30.
 The processor 12 is configured to perform predetermined processing. In the first embodiment, the processor 12 is configured to input, to the machine learning device 20, the training data (learning data set), which is a set of a captured image as the input data and a distance measurement position as the output (label), and to instruct the machine learning device 20 to generate the trained model.
 As shown in FIG. 8, the image pickup apparatus 30 includes a camera 31, a communication interface 32, a processor 33, and a storage unit 34.
 The camera 31 is configured to be able to acquire a captured image for measurement, and the communication interface 32 is configured to be able to communicate with the management server 10 and the communication network (market) 2 using a wireless line or a wired line.
 The processor 33 is configured to specify, based on the captured image for measurement captured by the camera 31 and the trained model acquired from the management server 10, the distance measurement position according to the subject included in the captured image for measurement, and to measure the distance indicated by the distance measurement position. The distance indicated by the distance measurement position may be, for example, the length of the line segment connecting the first point and the second point with a straight line when the distance measurement position consists of the first point and the second point. The distance indicated by the distance measurement position may also be, for example, the distance indicated by a line segment when the distance measurement position is the line segment.
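 In image coordinates this reduces to elementary geometry; a minimal sketch, assuming the position is given as a list of two or more (x, y) points (for two points this is their straight-line distance, and for a polyline it is the summed segment length):

    import math

    def indicated_distance(points):
        # Sum of the straight segments joining consecutive points.
        return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

    print(indicated_distance([(0.0, 0.0), (3.0, 4.0)]))              # 5.0 (two points)
    print(indicated_distance([(0.0, 0.0), (3.0, 4.0), (3.0, 9.0)]))  # 10.0 (polyline)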
 For example, the processor 33 may be configured to measure at least one of the body length, the sleeve length, the sleeve length from the center back, the body width, the waist, and the shoulder width when a top is included in the captured image for measurement.
 Specifically, as shown in FIG. 4, the processor 33 may be configured to measure the distance between points 5 and 6 as the body length. Further, as shown in FIG. 4, the processor 33 may be configured to measure the distance between points 1 and 9 (and/or the distance between points 2 and 10) as the sleeve length.
 Further, as shown in FIG. 4, the processor 33 may be configured to measure, as the sleeve length from the center back, the sum of the distance between points 1 and 9 and the distance between points 5 and 9 (and/or the sum of the distance between points 2 and 10 and the distance between points 5 and 10).
 Further, as shown in FIG. 4, the processor 33 may be configured to measure the distance between points 3 and 4 as the body width. The processor 33 may also be configured to measure the distance between points 7 and 8 as the waist, and the distance between points 9 and 10 as the shoulder width.
 Further, the processor 33 may be configured to measure at least one of the rise, the inseam, the total length, the waist, the thigh width, the knee width, and the hem width when a bottom is included in the captured image for measurement.
 Specifically, as shown in FIG. 5, the processor 33 may be configured to measure the distance between points 1 and 5 (and/or the distance between points 2 and 6) as the rise. The processor 33 may be configured to measure the distance between points 1 and 3 (and/or the distance between points 2 and 4) as the inseam. The processor 33 may be configured to measure the distance between points 3 and 5 (and/or the distance between points 4 and 6) as the total length. The processor 33 may be configured to measure twice the distance between points 5 and 6 as the waist. The processor 33 may be configured to measure twice the distance between points 1 and 14 (and/or twice the distance between points 2 and 13) as the thigh width. The processor 33 may be configured to measure twice the distance between points 7 and 8 (and/or twice the distance between points 9 and 10) as the knee width. The processor 33 may be configured to measure twice the distance between points 3 and 11 (and/or twice the distance between points 4 and 12) as the hem width.
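 To make these rules concrete, the following sketch turns the flat-lay point distances of FIG. 5 into bottom-garment dimensions, with circumferences taken as twice the flat-lay distance; the function name is hypothetical and the point indices are 1-based.

    import math

    def bottom_dimensions(p):
        """p maps the 1-based point indices of FIG. 5 to (x, y) coordinates."""
        d = lambda a, b: math.dist(p[a], p[b])
        return {
            "rise": d(1, 5),
            "inseam": d(1, 3),
            "total length": d(3, 5),
            "waist": 2 * d(5, 6),          # circumference = 2 x flat-lay width
            "thigh width": 2 * d(1, 14),
            "knee width": 2 * d(7, 8),
            "hem width": 2 * d(3, 11),
        }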
 Further, the processor 33 is configured to measure at least one of the height, the width, and the depth when a three-dimensional object is included in the captured image for measurement.
 Here, the distance indicated by the distance measurement position may be the distance connecting two points on the surface of a subject that is a three-dimensional object, or the distance connecting two points on the edges of a subject that can be regarded as a flat surface. Specifically, as shown in FIG. 6, the processor 33 may be configured to measure the distance between points 1 and 6 (and/or the distance between points 2 and 5, or the distance between points 3 and 4) as the height. The processor 33 may also be configured to measure the distance between points 1 and 2 as the width, and the distance between points 2 and 3 as the depth.
 In addition, the processor 33 may be configured to measure the height of a human or an animal when the captured image for measurement includes the human or the animal, and to measure the size of a fish or a plant when the captured image for measurement includes the fish or the plant.
 The processor 33 may specify the first point and the second point based on the captured image captured by the camera 31 and the trained model acquired from the management server 10, and may specify three-dimensional information on the surrounding environment and the position of the image pickup apparatus 30 based on the captured image captured by the camera 31. Then, the processor 33 may specify the distance from the image pickup apparatus 30 to the first point and the distance from the image pickup apparatus 30 to the second point based on the three-dimensional information on the surrounding environment, the position of the image pickup apparatus 30, the first point, and the second point. Furthermore, the processor 33 may be configured to specify the distance from the first point to the second point based on the distance from the image pickup apparatus 30 to the first point and the distance from the image pickup apparatus 30 to the second point. That is, the processor 33 may be configured to measure the distance between the points (that is, the distance estimation positions) using, for example, Visual SLAM (Simultaneous Localization and Mapping) technology.
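 The last step of that chain reduces to simple geometry once the two points are known in three dimensions. The sketch below assumes, hypothetically, a pinhole camera with known intrinsics and a per-pixel depth recovered from the SLAM map; it back-projects each measurement point and takes their Euclidean distance (all numeric values are placeholders).

    import math

    def backproject(u, v, depth, fx, fy, cx, cy):
        """Pixel (u, v) with depth in metres to camera-frame 3D coordinates,
        assuming a pinhole model with intrinsics fx, fy, cx, cy."""
        return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

    # Hypothetical values: two measurement points at 1.2 m from the camera.
    p1 = backproject(400, 300, 1.2, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    p2 = backproject(150, 300, 1.2, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(math.dist(p1, p2))  # distance between the first and second points (m)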
 Further, as shown in FIG. 9, the processor 33 may be configured to detect a manufacturer or a brand from tag information included in the captured image for measurement, to acquire size chart data of the manufacturer or the brand from the storage unit 34 or via the Internet, and to specify the distance indicated by the distance measurement position based on the size chart data. When the distance indicated by the distance measurement position can be acquired in this way, the processor 33 may preferentially adopt the distance specified based on the size chart data even if the distance indicated by the distance measurement position can be measured based on the trained model acquired from the management server 10.
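 A sketch of that precedence rule, with a hypothetical local table standing in for the size chart data held in the storage unit 34 or fetched over the Internet:

    # Hypothetical size chart table keyed by (brand, category, size).
    SIZE_CHARTS = {
        ("ExampleBrand", "T-shirt", "M"): {"body length": 69.0, "shoulder width": 44.0},
    }

    def resolve_distance(brand, category, size, dimension, measured):
        chart = SIZE_CHARTS.get((brand, category, size))
        if chart and dimension in chart:
            return chart[dimension]  # size chart data takes precedence
        return measured              # fall back to the measured distance

    print(resolve_distance("ExampleBrand", "T-shirt", "M", "body length", 68.2))  # 69.0
    print(resolve_distance("OtherBrand", "T-shirt", "M", "body length", 68.2))    # 68.2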
 The storage unit 34 is configured to store the size chart data of the above-described manufacturers or brands.
 Further, in the image processing system 1 according to the first embodiment, the image pickup apparatus 30 can list a target product on the market 2 composed of a communication network. In such a case, the image pickup apparatus 30 may be, for example, a communication terminal such as a so-called smartphone, or a portable communication terminal such as a tablet.
 Here, the image pickup apparatus 30 is configured to be able to list the target product by inputting or selecting, according to a user operation, a target product image and the distance indicated by the distance measurement position as target product information, and uploading the target product information including the target product image and the distance indicated by the distance measurement position to the market 2. The target product image may be the captured image captured for measuring the distance indicated by the distance measurement position, another captured image in which the subject included in that captured image is captured, or the captured image according to the fourth embodiment described later.
 In the image processing system 1 according to the first embodiment, the exhibition device 40 can also list a target product on the market 2 composed of the communication network. In such a case, the exhibition device 40 may be, for example, a laptop computer, a desktop computer, a smart speaker, or the like.
 Here, the exhibition device 40 is configured to be able to list the target product by acquiring the target product image and the distance indicated by the distance measurement position from the image pickup apparatus 30 according to a user operation, inputting or selecting the target product image and the distance indicated by the distance measurement position as target product information, and uploading the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
 As shown in FIG. 10, the image pickup apparatus 30 and the exhibition device 40 may overlay the distance indicated by the distance measurement position on the captured image for measurement, and upload the captured image overlaid with the distance indicated by the distance measurement position to the market 2 as target product information including the product image and the distance indicated by the distance measurement position.
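 For illustration, an overlay of this kind could be drawn with OpenCV as below; this is a sketch only, and the file names, coordinates, and measured value are hypothetical.

    import cv2

    def overlay_measurement(image, p1, p2, distance_cm):
        """Draw the measured segment and its length onto the product image."""
        cv2.line(image, p1, p2, (0, 255, 0), 2)
        mid = ((p1[0] + p2[0]) // 2, (p1[1] + p2[1]) // 2)
        cv2.putText(image, f"{distance_cm:.1f} cm", mid,
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        return image

    img = cv2.imread("captured_image.png")  # hypothetical input file
    cv2.imwrite("listing_image.png", overlay_measurement(img, (50, 400), (450, 400), 52.0))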
 FIG. 11 is a sequence diagram showing an example of the operation of the image processing system 1 according to the first embodiment.
 As shown in FIG. 11, in step S1001, the management server 10 generates a trained model using the training data (learning data set), which is a set of a captured image as the input data and a distance measurement position as the output (label), and in step S1002, transmits the trained model to the image pickup apparatus 30.
 In step S1003, the image pickup apparatus 30 specifies the distance measurement position according to the subject included in the captured image for measurement based on the captured image for measurement captured by the camera 31 and the trained model acquired from the management server 10, and in step S1004, measures the distance indicated by the distance measurement position.
 In step S1005, the image pickup apparatus 30 inputs or selects the target product image and the distance indicated by the distance measurement position as target product information according to a user operation, and in step S1006, uploads the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
 In step S1007, the target product information is put up on the market 2.
 According to the image processing system 1 of the first embodiment, the machine learning device 20 provided in the management server 10 performs machine learning using the captured image as the input data and the distance measurement position as the label, whereby the desired dimensions of the subject included in the captured image can be measured automatically.
 (Second Embodiment)
 Hereinafter, the second embodiment will be described with reference to FIG. 12, focusing on the differences from the first embodiment described above.
 In the management server 10, the communication interface 11 is configured to acquire, from the image pickup apparatus 30, the captured image for measurement captured by the camera 31, using a wireless line or a wired line. The communication interface 11 is also configured to transmit, to the image pickup apparatus 30, the distance measurement position acquired from the processor 12 and the distance indicated by the distance measurement position, using a wireless line or a wired line.
 In the management server 10, the processor 12 is configured to specify, based on the captured image for measurement acquired by the communication interface 11 and the trained model acquired from the machine learning device 20, the distance measurement position according to the shape of the subject included in the captured image for measurement, and to measure the distance indicated by the distance measurement position.
 FIG. 12 is a sequence diagram showing an example of the operation of the image processing system 1 according to the second embodiment.
 As shown in FIG. 12, in step S2001, the management server 10 generates a trained model using the training data (learning data set), which is a set of a captured image as the input data and a distance measurement position as the output (label).
 In step S2002, the image pickup apparatus 30 captures a captured image for measurement with the camera 31, and in step S2003, transmits the captured image for measurement to the management server 10.
 In step S2004, the management server 10 specifies the distance measurement position according to the subject included in the captured image for measurement based on the captured image for measurement acquired from the image pickup apparatus 30 and the generated trained model, and measures the distance indicated by the distance measurement position. In step S2005, the management server 10 transmits the captured image for measurement and the distance indicated by the distance measurement position to the image pickup apparatus 30.
 In step S2006, the image pickup apparatus 30 inputs or selects the target product image and the distance indicated by the distance measurement position as target product information according to a user operation, and in step S2007, uploads the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
 In step S2008, the target product information is put up on the market 2.
 According to the image processing system 1 of the second embodiment, the machine learning device 20 provided in the management server 10 performs machine learning using the captured image as the input data and the distance measurement position as the label, whereby the desired dimensions of the subject included in the captured image can be measured automatically.
 (Third Embodiment)
 Hereinafter, the third embodiment will be described with reference to FIGS. 13 to 15, focusing on the differences from the first and second embodiments described above.
 As shown in FIG. 13, in the image processing system 1 according to the third embodiment, the machine learning device 20 is provided in the image pickup apparatus 30. Further, as shown in FIG. 14, the image pickup apparatus 30 includes the machine learning device 20, a camera 31, and a processor 33.
 Here, the processor 33 is configured to specify, based on the captured image for measurement captured by the camera 31 and the trained model acquired from the machine learning device 20, the distance measurement position according to the subject included in the captured image for measurement, and to measure the distance indicated by the distance measurement position.
 FIG. 15 is a sequence diagram showing an example of the operation of the image processing system 1 according to the third embodiment.
 As shown in FIG. 15, in step S3001, the image pickup apparatus 30 generates a trained model using the training data (learning data set), which is a set of a captured image as the input data and a distance measurement position as the output (label).
 In step S3002, the image pickup apparatus 30 captures a captured image for measurement with the camera 31.
 In step S3003, the image pickup apparatus 30 specifies the distance measurement position according to the subject included in the captured image for measurement based on the captured image for measurement captured by the camera 31 and the generated trained model, and measures the distance indicated by the distance measurement position.
 In step S3004, the image pickup apparatus 30 inputs or selects the target product image and the distance indicated by the distance measurement position as target product information according to a user operation, and in step S3005, uploads the target product information including the target product image and the distance indicated by the distance measurement position to the market 2.
 In step S3006, the target product information is put up on the market 2.
 According to the image processing system 1 of the third embodiment, the machine learning device 20 provided in the image pickup apparatus 30 performs machine learning using the captured image as the input data and the distance measurement position as the label, whereby the desired dimensions of the subject included in the captured image can be measured automatically.
 (Fourth Embodiment)
 Hereinafter, the fourth embodiment will be described with reference to FIGS. 16 to 19, focusing on the differences from the first to third embodiments described above.
 FIG. 16 is a diagram showing an example of the overall configuration of the image processing system 1 according to the fourth embodiment. As shown in FIG. 16, the image processing system 1 according to the fourth embodiment includes a management server 10, an image pickup device 30, an exhibition device 40, and a market 2 composed of a communication network.
 Further, in the image processing system 1 according to the fourth embodiment, as shown in FIG. 17, the management server 10 includes a communication interface 11, a processor 12, an image processor 50, and a machine learning device 20.
 The communication interface 11 is configured to acquire, from the image pickup apparatus 30, the captured image captured by the camera 31, using a wireless line or a wired line. The communication interface 11 is also configured to transmit the distance indicated by the distance measurement position acquired from the processor 12 to the image pickup apparatus 30, using a wireless line or a wired line.
 The processor 12 is configured to specify, based on the processing of the image processor 50, the distance measurement position according to the subject included in the captured image acquired by the communication interface 11, and to measure the distance indicated by the distance measurement position.
 As shown in FIG. 18, the image processor 50 includes a processor 51 and a storage unit 52.
 The processor 51 detects a feature component of the captured image acquired by the communication interface 11. The processor 51 may further identify the subject included in the captured image. The detection of the feature component includes edge detection. The processor 51 can detect the edges of the captured image by applying various methods. The various methods may be methods using a first derivative or a second derivative. Methods using a first derivative include, for example, the Sobel filter and the Prewitt filter. Methods using a second derivative include, for example, the Laplacian filter.
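 For reference, the first- and second-derivative filters named above are available directly in OpenCV; a minimal sketch (the input file name is hypothetical):

    import cv2

    img = cv2.imread("captured_image.png", cv2.IMREAD_GRAYSCALE)

    sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)  # first derivative, x direction
    sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)  # first derivative, y direction
    laplacian = cv2.Laplacian(img, cv2.CV_64F)           # second derivative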
 Further, the processor 51 is configured to specify the distance measurement position according to the subject included in the captured image based on the detected feature component.
 Here, the distance measurement position may include at least a first point and a second point different from the first point. The distance measurement position may also be a line segment including the first point and the second point different from the first point. The line segment may be a straight line or a curved line.
 The processor 51 may specify a subject region based on the detected feature component. An image showing the subject region is also referred to as a segmentation image. Further, the processor 51 may specify the contour of the subject based on the detected feature component.
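 One plausible way to go from detected edges to a subject region and contour is sketched below, under the assumption that the subject forms the largest outline in the frame; Canny is used here in place of the filters above purely for brevity, and the file name is hypothetical.

    import cv2
    import numpy as np

    gray = cv2.imread("captured_image.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)  # feature components (edges)

    # Largest external contour taken as the subject outline.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    subject = max(contours, key=cv2.contourArea)

    # Filled mask of the outline: the subject region (segmentation image).
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [subject], -1, 255, thickness=cv2.FILLED)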
 Here, specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may be specifying a point on the subject region or on the contour of the subject located at predetermined coordinates with respect to the entire captured image or the subject region (segmentation image).
 Specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may also be specifying a point or a region that satisfies a specific condition and is located on the contour (or around the contour) of the subject.
 Specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may also be specifying a straight line connecting two points in the subject region (segmentation image) that satisfies a specific condition.
 Specifying the distance measurement position according to the subject included in the captured image based on the detected feature component may also be specifying a point located on the contour (or around the contour) where the contour intersects an extension of a line segment along at least a part of the contour (or the periphery of the contour) of the subject specified based on the feature component.
 Further, the processor 51 may be configured to specify the distance measurement position according to the subject included in the captured image based on the detected feature component and the identified subject. For example, the processor 51 may determine, according to the identified subject, the condition that the detected feature component should satisfy in order to specify the distance measurement position.
 Here, an example of the functions of the image processor 50 according to the fourth embodiment will be described with reference to FIGS. 4 to 6.
 For example, as shown in FIG. 4, when the processor 51 identifies that the subject included in the captured image represents a T-shirt, the processor 51 may specify, as the distance measurement positions, at least one of points 5 and 6 for measuring the body length, points 1 and 9 (or points 2 and 10) for measuring the sleeve length, points 1, 5, and 9 (or points 2, 5, and 10) for measuring the sleeve length from the center back, points 3 and 4 for measuring the body width (chest circumference), points 7 and 8 for measuring the waist (or waist circumference), and points 9 and 10 for measuring the shoulder width.
 That is, as shown in FIG. 4, when a top is included in the captured image (that is, when the subject is identified as representing a top), the processor 51 may specify the distance measurement positions based on the identified subject and the detected feature component such that at least one of the body length, the sleeve length, the sleeve length from the center back, the body width (that is, the chest circumference), the waist (that is, the waist circumference), and the shoulder width is measured. The waist referred to here is the circumference of the torso portion of the top at which the distance is shortest.
 In FIG. 4, the X axis corresponds to the left-right direction of the captured image, and the Y axis corresponds to the up-down direction of the captured image. Alternatively, when the processor 51 can identify the subject included in the captured image, the X axis may correspond to the left-right direction of the subject, and the Y axis may correspond to the up-down direction of the subject.
 For example, in FIG. 4, points 1 and 2 are the end points of the subject Z included in the captured image in the X-axis direction (left-right direction). That is, points 1 and 2 are points located on the upper side of the tip of the sleeve (that is, the tip of the sleeve on the shoulder side). Points 1 and 2 are examples of points on the subject region or on the contour of the subject located at predetermined coordinates with respect to the entire captured image or the subject region (segmentation image).
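 As a small sketch of such coordinate-based selection, the horizontal extremes of an OpenCV contour (layout (N, 1, 2) as returned by cv2.findContours) give candidates for points 1 and 2; the function name and demo data are hypothetical.

    import numpy as np

    def horizontal_extremes(contour):
        """Left- and rightmost points of an OpenCV contour (shape (N, 1, 2)):
        candidates for points 1 and 2 of FIG. 4 on a flat-laid top."""
        pts = contour.reshape(-1, 2)
        left = pts[pts[:, 0].argmin()]
        right = pts[pts[:, 0].argmax()]
        return (int(left[0]), int(left[1])), (int(right[0]), int(right[1]))

    demo = np.array([[[5, 9]], [[1, 4]], [[8, 3]], [[2, 7]]])
    print(horizontal_extremes(demo))  # ((1, 4), (8, 3))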
 Points 3 and 4 are points on the lower side of the base of the sleeve (that is, the base of the sleeve on the armpit side). Points 3 and 4 are examples of points or regions that satisfy a specific condition and are located on the contour (or around the contour) of the subject. Here, the specific condition for point 3 is that a feature component exists in the negative X-axis direction (leftward) and in the negative Y-axis direction (downward). The specific condition for point 4 is that a feature component exists in the positive X-axis direction (rightward) and in the negative Y-axis direction (downward).
 Points 5 and 6 are the intersections of the perpendicular bisector L2 of the straight line connecting points 3 and 4 with the boundary line X1 of the subject Z. That is, point 5 is a point located at the base of the collar and at the center of the subject Z, and point 6 is a point located at the tip of the hem and at the center of the subject Z. Points 5 and 6 are examples of points on the subject region or on the contour of the subject located at predetermined coordinates with respect to the entire captured image or the subject region (segmentation image).
 The straight line connecting points 7 and 8 is the one with the shortest distance among the straight lines that connect, parallel to the X axis, end points of the subject Z located in the negative Y-axis direction (below) of points 3 and 4. The straight line connecting points 7 and 8 is an example of a straight line connecting two points in the subject region (segmentation image) that satisfies a specific condition. Here, the specific condition is that the distance is the shortest.
 Point 9 is the intersection of the straight line L4 passing through points 3 and 7 with the boundary line X1 of the subject Z. Point 10 is the intersection of the straight line L5 passing through points 4 and 8 with the boundary line X1 of the subject Z. That is, points 9 and 10 are points located on the upper side of the base of the sleeve (that is, the base of the sleeve on the shoulder side). Points 9 and 10 are examples of points located on the contour (or around the contour) where the contour intersects an extension of a line segment along at least a part of the contour (or the periphery of the contour) of the subject specified based on the feature component.
 The processor 51 may specify points 1 to 10 as the distance measurement positions when it identifies that the subject included in the captured image represents a T-shirt, but may also specify points 1 to 10 based only on the detected feature component, without identifying that the subject represents a T-shirt.
 Alternatively, as shown in FIG. 5, when the processor 51 identifies that the subject included in the captured image is a pair of trousers, it may specify as the distance measurement positions at least one of points 1 to 14, namely: points 1 and 5, or points 2 and 6, for measuring the rise; points 1 and 3, or points 2 and 4, for measuring the inseam; points 3 and 5, or points 4 and 6, for measuring the total length; points 1 and 14, and points 2 and 13, for measuring the thigh width (that is, the thigh circumference); points 5 and 6 for measuring the waist (that is, the waist circumference); points 7 and 8, and points 9 and 10, for measuring the knee width; and points 3 and 11, and points 4 and 12, for measuring the hem width.
 That is, as shown in FIG. 5, when the captured image includes a lower garment (that is, when the subject is identified as a lower garment), the processor 51 may specify the distance measurement positions such that at least one dimension among the rise, inseam, total length, waist (waist circumference), thigh width (thigh circumference), knee width, and hem width is measured.
 For example, in FIG. 5, points 4 and 12 (and points 3 and 11) are located at the two ends of the hem. Points 2 and 13 (and points 1 and 14) are located at the two ends of the portion corresponding to the wearer's thighs. Points 9 and 10 (and points 7 and 8) are located at the two ends of the portion corresponding to the wearer's knees. Points 5 and 6 are located at the two ends of the portion corresponding to the wearer's torso.
 Alternatively, as shown in FIG. 6, when the processor 51 identifies that the subject included in the captured image is a bag, it may specify as the distance measurement positions at least one of: points 1 and 2 for measuring the width of the bottom face; points 2 and 3 for measuring the depth of the bottom face; points 1 and 6 (or points 2 and 5, or points 3 and 4) for measuring the height; points 5 and 6 for measuring the width of the top face; and points 4 and 5 for measuring the depth of the top face.
 That is, as shown in FIG. 6, when the captured image includes a three-dimensional object (that is, when the subject is identified as a three-dimensional object), the processor 51 may specify the distance measurement positions such that at least one dimension among the height, width, and depth is measured. Here, the three-dimensional object may include at least one of furniture, a home appliance, and a bag.
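 One straightforward way to organize these per-subject measurement definitions is a lookup from the identified subject class to the point-index pairs whose separation gives each dimension. The sketch below is illustrative only; the class names are placeholders, and the point indices simply mirror FIGS. 5 and 6 as described above.

    # Subject class -> {dimension: list of candidate point-index pairs}.
    # Indices follow the numbering of FIG. 5 (trousers) and FIG. 6 (bag).
    MEASUREMENT_POSITIONS = {
        "trousers": {
            "rise": [(1, 5), (2, 6)],
            "inseam": [(1, 3), (2, 4)],
            "total_length": [(3, 5), (4, 6)],
            "thigh_width": [(1, 14), (2, 13)],
            "waist": [(5, 6)],
            "knee_width": [(7, 8), (9, 10)],
            "hem_width": [(3, 11), (4, 12)],
        },
        "bag": {
            "bottom_width": [(1, 2)],
            "bottom_depth": [(2, 3)],
            "height": [(1, 6), (2, 5), (3, 4)],
            "top_width": [(5, 6)],
            "top_depth": [(4, 5)],
        },
    }

    def pairs_for(subject_class, dimension):
        """Return the candidate point pairs for one dimension of one subject class."""
        return MEASUREMENT_POSITIONS.get(subject_class, {}).get(dimension, [])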
 In addition, the processor 51 may be configured to identify points for measuring the height of a person or animal, or the size of a fish or plant.
 The storage unit 52 is constituted by a storage device including a RAM (Random Access Memory) or a ROM (Read Only Memory), or by an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the distance measurement positions specified by the processor 51.
 Here, an example of the configuration of the machine learner 20 according to the fourth embodiment will be described with reference to FIG. 2. As shown in FIG. 2, the machine learner 20 includes an acquisition unit 21, a processor 22, and a storage unit 23.
 The acquisition unit 21 is configured to acquire a captured image for learning and the name of the subject corresponding to that captured image as a pair of input data and a label.
 The processor 22 is configured to learn what the subject included in a captured image represents by performing learning using the pairs of input data and labels as training data.
 Here, the processor 22 may be configured to compute the above training data in a multilayer structure, that is, to generate by deep learning a trained model that identifies what the subject included in a captured image represents.
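 In conventional machine-learning terms, this corresponds to supervised training of an image classifier on (captured image, subject name) pairs. The following is a minimal sketch under the assumption of a PyTorch-style setup; the dataset object, network architecture, and hyperparameters are illustrative stand-ins, not the configuration of the disclosed machine learner 20.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader

    def train_subject_classifier(dataset, num_classes, epochs=10):
        """Learn what the subject in a captured image represents from
        (image tensor, integer-encoded subject name) training pairs."""
        # Any multilayer (deep) network can serve; a small CNN is a stand-in here.
        model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )
        loader = DataLoader(dataset, batch_size=32, shuffle=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()
        return model  # the trained model to be stored by the storage unit 23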
 The storage unit 23 is constituted by a storage device such as an FRAM (registered trademark) (Ferroelectric Random Access Memory), or by an auxiliary storage device such as a hard disk or a flash memory, and is configured to store the trained model generated by the processor 22.
 The processor 51 of the image processor 50 may also be configured to identify, based on the trained model of the machine learner 20, what the subject included in the captured image represents.
 Furthermore, the processor 12 of the management server 10 may be configured to measure, based on the distance measurement positions specified by the image processor 50, the distances indicated by those positions.
 For example, the processor 12 may be configured to measure one dimension from the captured image by the same method as described above for the case in which the processor 33 of the imaging device 30 according to the first embodiment measures one dimension using a captured image for measurement.
 Further, as shown in FIG. 9, the processor 12 may be configured to detect a maker or brand from tag information included in the captured image and to determine the distance indicated by a distance measurement position from the size chart of that maker or brand.
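 As a hedged sketch of this tag-based path: a maker or brand string would first be read from the tag region of the image (for example, with any off-the-shelf OCR component), after which the distance can be resolved from that brand's size chart rather than measured pixel-by-pixel. The chart contents, key names, and fallback behavior below are assumptions for illustration.

    # Hypothetical size charts: brand -> size label -> {dimension: centimeters}.
    SIZE_CHARTS = {
        "ExampleBrand": {
            "M": {"body_length": 69.0, "shoulder_width": 44.0},
            "L": {"body_length": 72.0, "shoulder_width": 46.0},
        },
    }

    def distance_from_size_chart(brand, size_label, dimension):
        """Resolve the distance indicated by a measurement position from a size
        chart, returning None when the brand, size, or dimension is unknown so
        that the caller can fall back to measuring the captured image directly."""
        chart = SIZE_CHARTS.get(brand)
        if chart is None:
            return None
        return chart.get(size_label, {}).get(dimension)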
 Further, in the image processing system 1 according to the fourth embodiment, the imaging device 30 and the exhibiting device 40 can exhibit a target product on the market 2 constituted by a communication network. In that case, the imaging device 30 and the exhibiting device 40 may have, for example, the same configuration as in the first embodiment described above, in which a target product can be exhibited on the market 2 constituted by a communication network.
 FIG. 19 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fourth embodiment.
 As shown in FIG. 19, in step S4001 the imaging device 30 captures an image including a subject with the camera 31, and in step S4002 it transmits the captured image to the management server 10.
 In step S4003, the management server 10 specifies the distance measurement positions corresponding to the subject included in the acquired captured image and measures the distances indicated by those positions, and in step S4005 it transmits the measured distances to the imaging device 30.
 In step S4006, the imaging device 30 inputs or selects, in response to a user operation, the captured image and the distances indicated by the distance measurement positions as target product information, and in step S4007 it uploads the target product information, including the captured image and those distances, to the market 2.
 In step S4008, the target product information is exhibited on the market 2.
 According to the image processing system 1 of the fourth embodiment, the image processor 50 provided in the management server 10 specifies the distance measurement positions corresponding to the subject included in the captured image, so that desired dimensions of the subject can be measured automatically.
 Further, according to the image processing system 1 of the fourth embodiment, each imaging device 30 need not include its own image processor, so that even a portable communication terminal or the like can automatically obtain desired dimensions of a subject included in a captured image.
 (Fifth Embodiment)
 Hereinafter, the fifth embodiment will be described with reference to FIGS. 20 to 22, focusing on its differences from the first to fourth embodiments described above.
 As shown in FIG. 20, in the image processing system 1 according to this embodiment, the image processor 50 is provided in the imaging device 30 rather than in the management server 10.
 As shown in FIG. 21, the imaging device 30 includes a communication interface 32, a camera 31, a processor 33, and an image processor 50.
 The communication interface 32 is configured to communicate with the management server 10 and the communication network (market) 2 over a wireless or wired line, and the camera 31 is configured to acquire a captured image including a subject.
 Like the processor 12, the processor 33 is configured, based on the processing of the image processor 50, to specify the distance measurement positions corresponding to the subject included in the image captured by the camera 31 and to measure the distances indicated by those positions.
 As in the fourth embodiment, the image processor 50 is configured to identify, based on the image captured by the camera 31, what the subject represents and to detect the feature components of the subject, and to specify the distance measurement positions corresponding to the subject included in the captured image based on the identified subject and the detected feature components.
 FIG. 22 is a sequence diagram showing an example of the operation of the image processing system 1 according to the fifth embodiment.
 As shown in FIG. 22, in step S5001 the management server 10 generates, by performing learning using pairs of input data and labels as training data, a trained model that identifies what the subject included in a captured image represents, and in step S5002 it transmits the trained model to the imaging device 30.
 Upon receiving the trained model in step S5002, the imaging device 30 identifies, based on the model, what the subject included in the captured image represents and detects the feature components of the subject, specifies the distance measurement positions corresponding to the subject based on the identified subject and the detected feature components, and in step S5003 measures the distances indicated by those positions.
 In step S5004, the imaging device 30 inputs or selects, in response to a user operation, the captured image and the distances indicated by the distance measurement positions as target product information, and in step S5005 it uploads the target product information, including the captured image and those distances, to the market 2.
 In step S5006, the target product information is exhibited on the market 2.
 According to the image processing system 1 of the fifth embodiment, the image processor 50 provided in the imaging device 30 specifies the distance measurement positions corresponding to the subject included in the captured image, so that desired dimensions of the subject can be measured automatically.
 (Sixth Embodiment)
 Hereinafter, the sixth embodiment will be described with reference to FIGS. 23 to 25, focusing on its differences from the first to fifth embodiments described above.
 As shown in FIG. 23, in the image processing system 1 according to this embodiment, the image processor 50 and the machine learner 20 are provided in the imaging device 30. As shown in FIG. 24, the imaging device 30 includes an image processor 50, a machine learner 20, a communication interface 32, a camera 31, and a processor 33.
 Here, the processor 33 may be configured to identify, based on the image captured by the camera 31, three-dimensional information of the surrounding environment and the position of the imaging device 30; to determine, based on that three-dimensional information, the position of the imaging device 30, and the first and second points, the distance from the imaging device 30 to the first point and the distance from the imaging device 30 to the second point; and to determine the distance from the first point to the second point based on those two distances. That is, the processor 33 may be configured to measure the distance between the points (that is, between the distance measurement positions) using, for example, Visual SLAM (Simultaneous Localization and Mapping) technology.
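 Once the device pose and scene structure are available, the distance between the first point and the second point reduces to the Euclidean distance between their back-projected 3D coordinates. A minimal sketch, assuming the SLAM layer supplies a depth value for each measurement pixel and that the camera intrinsics (focal lengths fx, fy and principal point cx, cy) are known; these inputs are assumptions of the sketch, not requirements stated by the disclosure.

    import numpy as np

    def backproject(pixel, depth, fx, fy, cx, cy):
        """Back-project one image point (u, v) with known depth into 3D camera space."""
        u, v = pixel
        return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

    def point_to_point_distance(p1_px, d1, p2_px, d2, fx, fy, cx, cy):
        """Distance between the first and second measurement points, given the
        per-point depths (distance along the optical axis) estimated by SLAM."""
        p1 = backproject(p1_px, d1, fx, fy, cx, cy)
        p2 = backproject(p2_px, d2, fx, fy, cx, cy)
        return float(np.linalg.norm(p1 - p2))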
 FIG. 25 is a sequence diagram showing an example of the operation of the image processing system 1 according to this embodiment.
 As shown in FIG. 25, in step S6001 the imaging device 30 generates, by performing learning using pairs of input data and labels as training data, a trained model that identifies what the subject included in a captured image represents. In step S6002, the imaging device 30 identifies, based on the trained model, what the subject included in the captured image represents and detects the feature components of the subject, specifies the distance measurement positions corresponding to the subject based on the identified subject and the detected feature components, and measures the distances indicated by those positions.
 In step S6003, the imaging device 30 inputs or selects, in response to a user operation, the captured image and the distances indicated by the distance measurement positions as target product information, and in step S6004 it uploads the target product information, including the captured image and those distances, to the market 2.
 In step S6005, the target product information is exhibited on the market 2.
 According to the image processing system 1 of the sixth embodiment, the image processor 50 provided in the imaging device 30 specifies the distance measurement positions corresponding to the subject included in the captured image, so that desired dimensions of the subject can be measured automatically.
 (Other Embodiments)
 A program that causes a computer to execute each process performed by the imaging device 30, the management server 10, the image processor 50, and the machine learner 20 may be provided. The program may be recorded on a computer-readable medium; using such a medium, the program can be installed on a computer. The computer-readable medium on which the program is recorded may be a non-transitory recording medium. The non-transitory recording medium is not particularly limited and may be, for example, a recording medium such as a CD-ROM or a DVD-ROM.
 Although the embodiments have been described in detail above with reference to the drawings, the specific configurations are not limited to those described, and various design changes and the like can be made without departing from the gist of the disclosure.
 Note that the imaging device 30 is not limited to a device capable of exhibiting a target product on the market 2; it may be any device that can at least present to the user a distance specified according to the subject included in a captured image.
 This application claims priority to Japanese Patent Application No. 2019-139264 (filed on July 29, 2019) and Japanese Patent Application No. 2019-139266 (filed on July 29, 2019), the entire contents of which are incorporated herein by reference.

Claims (23)

  1.  An image processing system comprising a machine learner, wherein
     the machine learner:
      acquires a captured image for learning and a distance measurement position corresponding to the captured image for learning as a pair of input data and a label; and
      learns the distance measurement position corresponding to a subject included in a captured image by performing learning using the pair of the input data and the label as training data.
  2.  The image processing system according to claim 1, comprising:
     a management server including the machine learner; and
     at least one imaging device capable of communicating with the management server and including a camera, wherein
     the imaging device specifies, based on a captured image for measurement captured by the camera and a trained model of the machine learner acquired from the management server, the distance measurement position corresponding to a subject included in the captured image for measurement, and measures a distance indicated by the distance measurement position.
  3.  The image processing system according to claim 1, comprising:
     a management server including the machine learner; and
     at least one imaging device capable of communicating with the management server and including a camera, wherein
     the imaging device transmits a captured image for measurement captured by the camera to the management server, and
     the management server:
      specifies, based on a trained model of the machine learner, the distance measurement position corresponding to a subject included in the captured image for measurement, and measures a distance indicated by the distance measurement position; and
      transmits the distance measurement position and the distance indicated by the distance measurement position to the imaging device.
  4.  The image processing system according to claim 1, comprising at least one imaging device including a camera and the machine learner, wherein
     the imaging device specifies, based on a captured image for measurement captured by the camera and a trained model of the machine learner, the distance measurement position corresponding to a subject included in the captured image for measurement, and measures a distance indicated by the distance measurement position.
  5.  The image processing system according to any one of claims 2 to 4, wherein the machine learner:
     corrects, based on a user operation, the distance measurement position output in response to acquisition of the captured image for measurement; and
     performs further learning using the pair of the captured image for measurement and the corrected distance measurement position as training data.
  6.  The image processing system according to any one of claims 2 to 4, wherein the machine learner:
     calculates a reward based on whether the distance measurement position output in response to acquisition of the captured image for measurement has been corrected by a user operation; and
     updates, based on the reward, a function for specifying the distance measurement position.
  7.  The image processing system according to any one of claims 1 to 6, wherein the machine learner computes the training data in a multilayer structure.
  8.  The image processing system according to any one of claims 1 to 7, wherein the input data includes identification data indicating a category of the subject.
  9.  The image processing system according to any one of claims 1 to 8, wherein the distance measurement position includes at least a first point and a second point different from the first point.
  10.  The image processing system according to claim 9, wherein the distance indicated by the distance measurement position is a distance connecting two points on a surface of a subject that is a three-dimensional object, or a distance connecting two points on an edge of a subject that can be regarded as planar.
  11.  The image processing system according to any one of claims 1 to 10, wherein the machine learner learns the distance measurement position such that, when the captured image for measurement includes an upper garment, at least one dimension among the body length, sleeve length, sleeve length from center back, body width, waist, and shoulder width is measured.
  12.  The image processing system according to any one of claims 1 to 10, wherein the machine learner learns the distance measurement position such that, when the captured image for measurement includes a lower garment, at least one dimension among the rise, inseam, total length, waist, thigh width, knee width, and hem width is measured.
  13.  The image processing system according to any one of claims 1 to 10, wherein the machine learner learns the distance measurement position such that, when the captured image for measurement includes a three-dimensional object, at least one dimension among the height, width, and depth is measured.
  14.  The image processing system according to claim 13, wherein the three-dimensional object includes at least one of furniture, a home appliance, and a bag.
  15.  A machine learner comprising at least one processor, wherein the processor:
     acquires a captured image for learning and a distance measurement position corresponding to the captured image for learning as a pair of input data and a label; and
     learns the distance measurement position corresponding to a subject included in a captured image by performing learning using the pair of the input data and the label as training data.
  16.  An imaging device comprising a camera, a processor, and a communication interface, wherein
     the communication interface acquires a trained model of a machine learner that learns a distance measurement position corresponding to a subject included in a captured image, and
     the processor specifies, based on a captured image captured by the camera and the trained model, the distance measurement position corresponding to a subject included in the captured image, and measures a distance indicated by the distance measurement position.
  17.  An image processing system comprising an image processor, wherein the image processor:
     detects a feature component of a captured image; and
     specifies a distance measurement position corresponding to a subject based on the feature component.
  18.  The image processing system according to claim 17, wherein the image processor:
     detects the feature component and identifies the subject included in the captured image; and
     specifies the distance measurement position corresponding to the subject based on the feature component and the identified subject.
  19.  The image processing system according to claim 17 or 18, comprising a machine learner, wherein the machine learner:
     acquires a captured image for learning and a name of a subject corresponding to the captured image for learning as a pair of input data and a label; and
     learns what the subject included in a captured image represents by performing learning using the pair of the input data and the label as training data.
  20.  The image processing system according to claim 19, wherein the image processor identifies, based on a trained model of the machine learner, what the subject included in the captured image represents.
  21.  An image processor comprising at least one processor, wherein the processor:
     detects a feature component of a captured image; and
     specifies a distance measurement position corresponding to a subject based on the feature component.
  22.  An imaging device comprising the image processor according to claim 21, a camera, and a processor, wherein the image processor:
     detects a feature component of a captured image captured by the camera; and
     specifies a distance measurement position corresponding to a subject based on the feature component.
  23.  The imaging device according to claim 22, wherein
     the distance measurement position includes a first point and a second point different from the first point, and
     the processor of the imaging device:
      identifies, based on a captured image captured by the camera, three-dimensional information of a surrounding environment and a position of the imaging device;
      determines, based on the three-dimensional information of the surrounding environment, the position of the imaging device, the first point, and the second point, a distance from the imaging device to the first point and a distance from the imaging device to the second point; and
      determines a distance from the first point to the second point based on the distance from the imaging device to the first point and the distance from the imaging device to the second point.
PCT/JP2020/028566 2019-07-29 2020-07-22 Image processing system, machine learner, image processor, and imaging device WO2021020305A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019-139266 2019-07-29
JP2019-139264 2019-07-29
JP2019139266A JP7401218B2 (en) 2019-07-29 2019-07-29 Image processing system, image processor, imaging device and processing method
JP2019139264A JP7309506B2 (en) 2019-07-29 2019-07-29 Image processing system, machine learning device, imaging device and learning method

Publications (1)

Publication Number Publication Date
WO2021020305A1

Family

ID=74230313

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/028566 WO2021020305A1 (en) 2019-07-29 2020-07-22 Image processing system, machine learner, image processor, and imaging device

Country Status (1)

Country Link
WO (1) WO2021020305A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017194301A (en) * 2016-04-19 2017-10-26 株式会社デジタルハンズ Face shape measuring device and method
WO2018170421A1 (en) * 2017-03-17 2018-09-20 Magic Leap, Inc. Room layout estimation methods and techniques
JP2019056966A (en) * 2017-09-19 2019-04-11 株式会社東芝 Information processing device, image recognition method and image recognition program

Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20847410; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 20847410; Country of ref document: EP; Kind code of ref document: A1)