WO2024062912A1 - Display control device, method for operating display control device, and program for operating display control device - Google Patents


Info

Publication number
WO2024062912A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2023/032375
Other languages
French (fr)
Japanese (ja)
Inventor
正志 藏之下
Original Assignee
富士フイルム株式会社 (FUJIFILM Corporation)
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Publication of WO2024062912A1 publication Critical patent/WO2024062912A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing

Definitions

  • the technology of the present disclosure relates to a display control device, a display control device operating method, and a display control device operating program.
  • Japanese Patent Laid-Open No. 2001-005822 describes a method for displaying images of a group of search results obtained as a result of searching an image database by an arbitrary method.
  • the display method uses a database in which, through statistical processing, feature values for the group of images in the entire image database are associated with keywords or symbols that can be assigned to the images and their feature values.
  • for each search, up to a specified number of keywords or symbols expressing differences between features are dynamically and automatically selected, in order of sensitivity to those differences.
  • Japanese Patent Application Laid-Open No. 05-282375 describes an image database system in which images having attribute information of arbitrary dimensions are retrieved from an image database storing images with n-dimensional attribute information, a display device displays the retrieved images in reduced size for browsing, and, when an image of interest is selected from among the displayed images, that image is displayed on the entire screen of the display device.
  • the image database system searches for images whose features lie within a specified distance, in feature space, of each feature of the two-dimensional attribute information of interest of a sample image.
  • the browsing display is performed using coordinates of two-dimensional attribute information.
  • Japanese Patent Application Publication No. 2003-271665 describes a graphical user interface for searching that includes a first display control means, a movement command means, and a second display control means.
  • the first display control means includes a coordinate axis on which a scale is formed, an area for displaying search results of target information corresponding to a search target range set corresponding to the scale, and an operator located at a reference position on the coordinate axis.
  • the display device is controlled to display on the display screen.
  • the movement command means gives a horizontal or vertical movement command to the operator.
  • the second display control means controls the display of the coordinate axis so that the pitch of the scale is changed when the movement command means gives the operator a movement command in one of the horizontal and vertical directions, and so that the values of the scale are changed when a movement command is given in the other direction.
  • One embodiment of the technology of the present disclosure provides a display control device capable of displaying a list of a wide variety of images, a method for operating the display control device, and an operating program for the display control device.
  • a display control device of the present disclosure includes a processor, and the processor acquires a plurality of images, acquires a plurality of types of feature amounts for the plurality of images, and displays the plurality of images as a list by arranging each image at a position corresponding to its feature amounts in a two-dimensional or three-dimensional display space that relates to the feature amounts and has at least one axis indicating the magnitude of a numerical value that integrates two or more types of feature amounts.
  • the processor receives a user's instruction to change the specifications of the display space, and changes the specifications of the display space in accordance with the change instruction.
  • the change instruction is an instruction to change a weighting coefficient that determines the ratio of integrating two or more types of feature amounts.
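As an illustration of this weighted integration, the following is a minimal Python sketch; the function name and the equal-weight default are assumptions for illustration, since the disclosure only specifies that a weighting coefficient determines the ratio at which two or more feature amounts are integrated:

```python
def integrated_feature(feature_amounts, weights):
    """Integrate two or more feature amounts (each normalized to 0..1)
    into a single axis value, with weighting coefficients setting the
    integration ratio.  Equal weights reduce this to the arithmetic
    mean used in the FIG. 17 example."""
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, feature_amounts)) / total
```

With equal weights, `integrated_feature([0.8, 0.4], [1, 1])` gives 0.6; shifting the ratio toward the first feature, e.g. `[3, 1]`, moves the result toward 0.8.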
  • the change instruction is an instruction to change the feature amount that constitutes the axis.
  • preferably, when the feature amounts of two or more images are within a preset range, the processor displays a representative image of the two or more images on top and displays the other images layered below the representative image.
  • the processor receives an instruction to select an image from the user on the display space, and performs specified processing only on the image selected by the selection instruction.
  • preferably, the processor receives a user's selection instruction for at least one of similar images and dissimilar images and, in accordance with the selection instruction, changes the weighting coefficients applied to the plurality of types of feature amounts in a direction that brings the feature amounts of similar images into agreement and in a direction that makes the feature amounts of dissimilar images diverge.
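The similar/dissimilar weight adjustment can be sketched as a simple heuristic update. The disclosure poses this as an optimization problem against distance thresholds; the concrete update rule below is a hypothetical stand-in, not the claimed method:

```python
import math

def weighted_distance(w, a, b):
    """Distance between feature amount vectors a and b under
    per-feature weighting coefficients w."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b)))

def adjust_weights(w, similar_pair, dissimilar_pair, lr=0.5):
    """One heuristic correction step (hypothetical update rule):
    shrink the weight of a feature where the similar pair disagrees,
    and grow it where the dissimilar pair disagrees."""
    a, b = similar_pair
    c, d = dissimilar_pair
    new_w = []
    for wi, ai, bi, ci, di in zip(w, a, b, c, d):
        wi = wi * (1 - lr * (ai - bi) ** 2) * (1 + lr * (ci - di) ** 2)
        new_w.append(max(wi, 0.0))
    total = sum(new_w)
    return [wi / total for wi in new_w]  # renormalize so weights sum to 1
```

After one step, the weighted distance between the user-selected similar pair decreases while the distance between the dissimilar pair increases, which is the qualitative behavior the passage describes.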
  • the operating method of the display control device disclosed herein includes acquiring a plurality of images, acquiring a plurality of types of feature quantities for the plurality of images, and displaying a list of the plurality of images by arranging the images at positions according to the feature quantities in a two-dimensional or three-dimensional display space relating to the feature quantities and having at least one axis indicating the magnitude of a numerical value obtained by integrating two or more types of feature quantities.
  • the operating program of the display control device of the present disclosure causes a computer to perform processing including: acquiring a plurality of images; acquiring a plurality of types of feature amounts for the plurality of images; and displaying the plurality of images as a list by arranging each image at a position corresponding to its feature amounts in a two-dimensional or three-dimensional display space that relates to the feature amounts and has at least one axis indicating the magnitude of a numerical value that integrates two or more types of feature amounts.
  • FIG. 2 is a diagram showing a user terminal and an image management server.
  • FIG. 2 is a block diagram showing a computer that constitutes a user terminal and an image management server.
  • FIG. 2 is a block diagram showing a processing unit of a CPU of a user terminal.
  • FIG. 2 is a block diagram showing a processing unit of a CPU of an image management server.
  • It is a diagram showing data stored in an image DB.
  • It is a diagram showing supplementary information.
  • FIG. 3 is a diagram showing feature amount information and feature amount vectors.
  • FIG. 2 is an explanatory diagram of a method for deriving date and time features.
  • FIG. 13 is a diagram showing a process of deriving image quality features using an image quality feature model.
  • FIG. 13 is a diagram showing a process of deriving image quality features using an image quality feature model.
  • FIG. 3 is a diagram illustrating a process of deriving model features using a model feature derivation model.
  • FIG. 7 is a diagram illustrating a process of deriving a subject feature using a subject feature deriving model.
  • FIG. 3 is a diagram showing a subject discrimination model.
  • FIG. 3 is a diagram showing the processing of each processing unit of the image management server when an image storage request is sent from a user terminal.
  • FIG. 3 is a diagram showing a display setting screen.
  • FIG. 13 is a diagram showing a process of the browser control unit when a setting button is pressed on the display setting screen.
  • FIG. 6 is a diagram illustrating the processing of each processing unit of the image management server when an image distribution request is transmitted from a user terminal.
  • FIG. 6 is a diagram illustrating a process of arranging an image at a position in a display space according to a feature amount.
  • FIG. 7 is a diagram illustrating a process of displaying images whose feature amounts are within a preset range in a layered manner.
  • FIG. 3 is a diagram showing an image list display screen.
  • FIG. 7 is a diagram illustrating a process of changing the specifications of a display space in response to an instruction to change feature amounts that constitute an axis.
  • FIG. 7 is a diagram illustrating a process of changing the specifications of a display space in response to an instruction to change feature amounts that constitute an axis.
  • FIG. 3 is a diagram illustrating how a user instructs to select an image in a display space.
  • FIG. 6 is a diagram illustrating a state in which a processing instruction menu for instructing processing on an image selected by a selection instruction is displayed.
  • FIG. 3 is a diagram showing an album creation screen on which an album created from images selected by a selection instruction is displayed.
  • It is a flowchart showing the processing procedure of the image management server.
  • It is a flowchart showing the processing procedure of the image management server.
  • It is a flowchart showing the processing procedure of a user terminal.
  • FIG. 7 is a diagram illustrating an image list display screen in which the feature amounts configuring an axis can be changed using a pull-down menu.
  • FIG. 7 is a diagram showing a display setting screen having a function of instructing a change in weighting coefficients.
  • FIG. 6 is a diagram illustrating a process of arranging an image at a position in a display space according to a feature amount.
  • FIG. 7 is a diagram showing how an image is moved to an arbitrary position in the display space.
  • FIG. 7 is a diagram showing an image list display screen including a display space in which other images are arranged around one image moved to an arbitrary position.
  • FIG. 7 is a diagram showing an image list display screen including a display space in which other images are arranged around two images moved to arbitrary positions.
  • It is a figure showing a similar image selection screen.
  • FIG. 13 is a diagram illustrating a process of correcting weighting coefficients by solving an optimization problem for obtaining weighting coefficients that make the distance of feature vectors of similar images less than a first threshold value.
  • FIG. 7 is a diagram illustrating a process of correcting weighting coefficients by solving an optimization problem for finding weighting coefficients for which the distance between feature vectors of dissimilar images is equal to or greater than a second threshold value.
  • FIG. 3 is a diagram showing an example of arranging images in a three-dimensional display space.
  • a user U owns a user terminal 10.
  • the user terminal 10 is a device having a camera function, an image playback/display function, an image transmission/reception function, and the like.
  • the camera function of the user terminal 10 has an image sensor such as a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor, and acquires an image 46 (see FIG. 5) by capturing subject light through a lens and forming it into an image on the image sensor.
  • the user terminal 10 is a smartphone, a tablet terminal, a compact digital camera, a mirrorless single-lens camera, a notebook personal computer, or the like.
  • the user terminal 10 is an example of a "display control device" according to the technology of the present disclosure.
  • the user terminal 10 is connected to the image management server 12 via the network 11 so that they can communicate with each other.
  • the network 11 is, for example, a WAN (Wide Area Network) such as the Internet or a public communication network.
  • the user terminal 10 transmits (uploads) the image 46 to the image management server 12. Further, the user terminal 10 receives (downloads) an image 46 from the image management server 12.
  • the image management server 12 is, for example, a server computer, a workstation, etc., and together with the user terminal 10 is an example of a "display control device" according to the technology of the present disclosure. In this way, the "display control device" according to the technology of the present disclosure may be realized across multiple devices.
  • a plurality of user terminals 10 of a plurality of users U are connected to the image management server 12 via a network 11.
  • the computers constituting the user terminal 10 and the image management server 12 basically have the same configuration, and each includes a storage 20, a memory 21, a CPU (Central Processing Unit) 22, a communication unit 23, a display 24, and an input device 25. These are interconnected via a bus line 26.
  • the storage 20 is a hard disk drive that is built into a computer that constitutes the user terminal 10 and the image management server 12, or is connected via a cable or a network.
  • the storage 20 is a disk array in which a plurality of hard disk drives are connected in series.
  • the storage 20 stores control programs such as an operating system, various application programs (hereinafter abbreviated as AP (Application Program)), and various data accompanying these programs.
  • a solid state drive may be used instead of the hard disk drive.
  • the memory 21 is a work memory for the CPU 22 to execute processing.
  • the CPU 22 loads the program stored in the storage 20 into the memory 21 and executes processing according to the program. Thereby, the CPU 22 centrally controls each part of the computer.
  • the CPU 22 is an example of a "processor" according to the technology of the present disclosure. Note that the memory 21 may be built into the CPU 22.
  • the communication unit 23 is a network interface that controls transmission of various information via the network 11 and the like.
  • the display 24 displays various screens. Various screens are provided with operation functions using a GUI (Graphical User Interface).
  • the computers constituting the user terminal 10 and the image management server 12 accept input of operation instructions from the input device 25 through various screens.
  • the input device 25 is a keyboard, a mouse, a touch panel, a microphone for voice input, or the like.
  • in the following description, each part of the computer constituting the user terminal 10 (the storage 20, CPU 22, display 24, input device 25, and so on) is distinguished by adding a suffix "A" to its reference numeral, and each part of the computer constituting the image management server 12 (the storage 20, CPU 22, and so on) is distinguished by adding a suffix "B".
  • an image AP 30 is stored in the storage 20A of the user terminal 10.
  • the image AP 30 is installed on the user terminal 10 by the user U.
  • the image AP 30 is an AP for causing the computer constituting the user terminal 10 to function as a "display control device" according to the technology of the present disclosure. That is, the image AP 30 is an example of the "display control device operation program" according to the technology of the present disclosure.
  • the CPU 22A of the user terminal 10 functions as the browser control unit 32 in cooperation with the memory 21 and the like.
  • the browser control unit 32 controls the operation of the dedicated web browser of the image AP 30.
  • the browser control unit 32 generates various screens.
  • the browser control unit 32 displays various generated screens on the display 24A.
  • the browser control unit 32 also receives various operation instructions input by the user U from the input device 25A through various screens.
  • the browser control unit 32 sends various requests to the image management server 12 according to operation instructions.
  • an operating program 35 is stored in the storage 20B of the image management server 12.
  • the operating program 35 is an AP for causing the computer forming the image management server 12 to function as a "display control device" according to the technology of the present disclosure. That is, like the image AP 30, the operation program 35 is an example of the "operation program for the display control device" according to the technology of the present disclosure.
  • An image database (hereinafter referred to as DB (Data Base)) 36 is also stored in the storage 20B.
  • the storage 20B also stores, as account information of the user U, a user ID (Identification Data) for uniquely identifying the user U, a password set by the user U, and a terminal ID for uniquely identifying the user terminal 10.
  • the CPU 22B of the image management server 12 cooperates with the memory 21 and the like to function as a reception unit 40, a feature amount derivation unit 41, a read/write (hereinafter referred to as RW (Read Write)) control unit 42, and a distribution control unit 43.
  • the reception unit 40 receives various requests from the user terminal 10.
  • the reception unit 40 outputs the various requests to the feature derivation unit 41, the RW control unit 42, and the distribution control unit 43.
  • the feature derivation unit 41 derives multiple types of features of the image 46.
  • the feature derivation unit 41 outputs feature information 48 (see FIG. 5) consisting of the multiple types of derived features to the RW control unit 42.
  • the RW control unit 42 controls storage of various data in the storage 20B and reading of various data from the storage 20B.
  • the RW control unit 42 particularly controls the storage of the image 46 in the image DB 36 and the reading of the image 46 from the image DB 36. Further, the RW control unit 42 controls the storage of the feature amount information 48 in the image DB 36 and the reading of the feature amount information 48 from the image DB 36.
  • the distribution control unit 43 controls distribution of various data to the user terminal 10.
  • a storage area 45 is provided for each user U in the image DB 36.
  • a user ID is registered in the storage area 45.
  • an image 46, additional information 47, and feature amount information 48 are stored in association with each other.
  • the additional information 47 includes multiple items such as shooting date and time, shooting location, aperture value, ISO (International Organization for Standardization) sensitivity, shutter speed, focal length, presence or absence of flash, and tags.
  • the date and time when the image 46 was photographed using the camera function of the user terminal 10 is registered in the photographing date and time.
  • for the shooting location, an address and/or a landmark name determined from latitude and longitude information obtained by the GPS (Global Positioning System) function of the user terminal 10 is registered.
  • the tag is a word that simply represents the subject in the image 46.
  • the tags include those manually input by the user U or those derived using the subject discrimination model 60 (see FIG. 12).
  • the feature information 48 includes a plurality of items such as a date and time feature, a color feature, a brightness feature, an image quality feature, a model feature, and a subject feature.
  • the image 46 can be represented by a feature amount vector VI in which the plurality of types of feature amounts constituting the feature amount information 48 are listed. The feature amount information 48 contains several hundred to several thousand feature amounts, so the number of dimensions of the feature amount vector VI likewise ranges from several hundred to several thousand.
  • the date and time feature is a feature related to the date and time when the image 46 was taken.
  • the feature amount derivation unit 41 derives the date and time feature amount of each image 46 by setting the date and time feature amount of the image 46O with the oldest shooting date and time, among the images 46 stored in the storage area 45, to 0 and the date and time feature amount of the image 46N with the latest shooting date and time to 1.
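This oldest-to-0, newest-to-1 normalization is ordinary min-max scaling over the shooting timestamps; a minimal sketch (function name assumed):

```python
from datetime import datetime

def date_time_features(shooting_times):
    """Min-max normalize shooting dates: the oldest image (46O) maps
    to 0.0, the newest (46N) to 1.0, and the rest fall linearly
    in between."""
    oldest, newest = min(shooting_times), max(shooting_times)
    span = (newest - oldest).total_seconds() or 1.0  # guard: single image
    return [(t - oldest).total_seconds() / span for t in shooting_times]
```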
  • the color feature amount is a feature amount related to each color of RGB (red, green, and blue) of the image 46.
  • the feature amount deriving unit 41 derives a value obtained by normalizing the average value of the pixel values of each color of RGB of the image 46, for example, with the maximum value of the pixel value being 1, as the color feature amount.
  • the brightness feature amount is a feature amount related to the brightness (luminance) of the image 46.
  • the feature amount deriving unit 41 derives a value obtained by normalizing the average brightness value of the image 46, with the maximum brightness value being 1, as a brightness feature amount.
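The color and brightness normalizations above might be sketched as follows. The Rec. 601 luma weights used for brightness are an assumption for illustration; the disclosure only specifies an average brightness value normalized so that the maximum brightness is 1:

```python
def color_and_brightness_features(pixels, max_value=255):
    """pixels: list of (R, G, B) tuples.  Returns the per-channel mean
    divided by the maximum pixel value (the color feature amounts) and
    the mean brightness divided by the maximum (the brightness feature
    amount).  Rec. 601 luma is an assumed brightness definition."""
    n = len(pixels)
    channel_means = [sum(p[c] for p in pixels) / n for c in range(3)]
    color_features = [m / max_value for m in channel_means]
    mean_luma = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels) / n
    return color_features, mean_luma / max_value
```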
  • the image quality feature quantity is a feature quantity related to the image quality of the image 46.
  • the image quality feature amount is a numerical value of 0 or more and 1 or less. The closer the image quality feature quantity is to 0, the lower the image quality of the image 46 is, and the closer the image quality feature quantity is to 1, the higher the image quality of the image 46 is.
  • the feature amount derivation unit 41 derives the image quality feature amount using the image quality feature amount derivation model 55.
  • the image quality feature amount derivation model 55 is a machine learning model that outputs an image quality feature amount in response to the input of the image 46.
  • for training, pairs of a learning image and a correct image quality feature amount are prepared as training data, and the model is trained so that the image quality feature amount output from the image quality feature amount derivation model 55 in response to the input of the learning image matches the correct image quality feature amount.
  • the correct image quality feature amount is calculated based on the results of the training data creator's evaluation of multiple evaluation items such as sharpness, exposure, gradation expression, saturation, white balance, presence of defocus, and presence of motion blur.
  • model feature amounts are feature amounts obtained from a machine learning model. Like the other feature amounts, the model feature amount is a value normalized to a maximum of 1.
  • the feature amount derivation unit 41 derives the model feature amount using the model feature amount derivation model 56.
  • the model feature derivation model 56 is a machine learning model that outputs model features in response to the input of the image 46.
  • the subject feature quantity is a feature quantity that indicates the existence probability of a subject appearing in the image 46 as a numerical value between 0 and 1 for each type of subject.
  • the subject includes everything that can be the subject of the image 46, in addition to the illustrated mountains, rivers, oceans, lakes, men, women, etc.
  • the feature amount derivation unit 41 derives the object feature amount using the object feature amount derivation model 57.
  • the subject feature amount derivation model 57 is a machine learning model that outputs a subject feature amount in response to the input of the image 46.
  • the image quality feature derivation model 55, model feature derivation model 56, and subject feature derivation model 57 are stored in the storage 20B.
  • the RW control unit 42 reads these derived models 55 to 57 from the storage 20B and outputs them to the feature amount deriving unit 41.
  • the object discrimination model 60 is, for example, a convolutional neural network (CNN) such as U-Net or ResNet (Residual Network), and includes an encoder section 61 and an output section 62.
  • An image 46 is input to the encoder section 61.
  • the encoder unit 61 derives the feature amount 63 by performing well-known convolution processing using a filter, pooling processing, skip layer processing, etc. on the image 46.
  • the encoder section 61 outputs the feature amount 63 to the output section 62.
  • the feature amount 63 is itself the model feature amount. In other words, the encoder unit 61 is used as the model feature amount derivation model 56.
  • the output section 62 includes a decoder section 64, a probability calculation section 65, and a subject discrimination section 66.
  • the feature amount 63 is input to the decoder section 64.
  • the decoder unit 64 derives the final feature amount 63E by subjecting the feature amount 63 to upsampling processing accompanied by convolution processing, merging processing, and the like.
  • the probability calculation unit 65 generates a probability calculation result 67 from the final feature amount 63E using a well-known activation function such as a softmax function or a sigmoid function.
  • the probability calculation section 65 outputs the probability calculation result 67 to the subject discrimination section 66.
  • the probability calculation result 67 is the existence probability of each subject such as a mountain, a river, the sea, and a lake in the image 46.
  • the probability calculation result 67 is itself the subject feature amount. That is, the encoder section 61, the decoder section 64, and the probability calculation section 65 are used as the subject feature amount derivation model 57.
  • the subject discrimination unit 66 outputs the subject with the highest existence probability in the probability calculation result 67 as the subject discrimination result 68.
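The probability calculation and subject discrimination steps can be sketched as follows. Softmax is one of the activation functions the text names for the probability calculation unit 65; the function names here are hypothetical:

```python
import math

def softmax(scores):
    """Probability calculation unit 65: turn raw per-subject scores into
    existence probabilities that sum to 1 (the subject feature amounts)."""
    peak = max(scores.values())          # subtract max for numeric stability
    exps = {k: math.exp(v - peak) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: e / total for k, e in exps.items()}

def discriminate_subject(scores):
    """Subject discrimination unit 66: output the subject with the
    highest existence probability as the discrimination result 68."""
    probabilities = softmax(scores)
    return max(probabilities, key=probabilities.get)
```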
  • although a part of the class discrimination model, namely the subject discrimination model 60, is used as the model feature derivation model 56 and the subject feature derivation model 57, the present invention is not limited to this.
  • An encoder section of an autoencoder that outputs a restored image of the image 46 in response to the input of the image 46 may be used as the model feature derivation model 56.
  • the encoder section, decoder section, and probability calculation section of a semantic segmentation model that identifies a subject in the image 46 in pixel units may be used as the subject feature derivation model 57.
  • the browser control unit 32 transmits an image storage request 70 to the image management server 12.
  • the image storage request 70 includes a user ID, an image 46, and additional information 47.
  • the reception unit 40 accepts the image storage request 70 and outputs the image storage request 70 to the feature quantity derivation unit 41 and the RW control unit 42.
  • the feature amount deriving unit 41 derives various feature amounts as shown in FIGS. 8 to 11, and outputs the obtained feature amount information 48 to the RW control unit 42.
  • the RW control unit 42 stores the image 46, additional information 47, and feature amount information 48 in the storage area 45 of the image DB 36 corresponding to the user ID.
  • the image management server 12 acquires a plurality of images 46 and acquires a plurality of types of feature amounts for the plurality of images 46.
  • the browser control unit 32 displays a display setting screen 75 for displaying a list of images 46 on the display 24A in response to an instruction from the user U.
  • the display setting screen 75 is provided with a pull-down menu 76 for selecting the feature amount forming the X-axis of the two-dimensional display space 90 (see FIG. 17) used for displaying a list of images 46, a pull-down menu 77 for selecting the feature amount forming the Y-axis, and a setting button 78.
  • Two pull-down menus 76 and 77 are provided in the initial display state.
  • feature amount addition buttons 79 and 80 are provided at the bottom of the pull-down menus 76 and 77.
  • Features that can be selected from the pull-down menus 76 and 77 are date and time features, color features, brightness features, image quality features, and subject features.
  • at least one of the pull-down menus 76 and 77 must have some feature amount selected.
  • the X-axis and Y-axis of the display space 90 serve as axes that indicate the numerical value of at least one type of feature amount.
  • the X axis of the display space 90 becomes an axis indicating the magnitude of a numerical value that integrates two or more types of feature quantities.
  • the Y-axis of the display space 90 becomes an axis indicating the magnitude of a numerical value that integrates two or more types of feature quantities.
  • when the setting button 78 is pressed on the display setting screen 75, the browser control unit 32 generates an image distribution request 85.
  • the image distribution request 85 includes the user ID and information according to the feature amount selected from the pull-down menus 76 and 77.
  • the information corresponding to the feature amount is a keyword representing the object feature amount when the feature amount selected from the pull-down menus 76 and 77 is the object feature amount.
  • in FIG. 15, a case is illustrated in which the subject features of a dog and a child are selected in the pull-down menu 76, and an image distribution request 85 including the keywords dog and child is generated.
  • period designation information that specifies images 46 within a preset period, such as the past year, is registered in the image distribution request 85 as information corresponding to the feature amount.
  • the browser control unit 32 transmits an image distribution request 85 to the image management server 12.
  • the reception unit 40 accepts the image distribution request 85 and outputs the image distribution request 85 to the RW control unit 42.
  • the RW control unit 42 searches for an image 46 corresponding to information according to the feature amount of the image distribution request 85 from among the images 46 stored in the storage area 45 corresponding to the user ID of the image distribution request 85.
  • the RW control unit 42 outputs the searched image 46, its accompanying information 47 (not shown in FIG. 16), and feature amount information 48 to the distribution control unit 43.
  • the distribution control unit 43 distributes the image 46 and the like from the RW control unit 42 to the user terminal 10 that has transmitted the image distribution request 85.
  • the distribution control unit 43 identifies the user terminal 10 that is the source of the image distribution request 85 based on the user ID included in the image distribution request 85.
  • FIG. 16 shows an example of the image distribution request 85 in FIG. 15 that includes the words dog and child as keywords.
  • the RW control unit 42 searches for an image 46 in which the subject feature amount of the dog is greater than 0, and an image 46 in which the subject feature amount of the child is greater than 0.
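A minimal sketch of this kind of keyword search over stored subject feature amounts might look as follows; the image IDs, feature names, and values are illustrative assumptions, not taken from the patent. An image matches when its feature amount for any requested keyword is greater than 0:

```python
# Hypothetical per-image subject feature amounts (0 means the subject is absent).
images = {
    "img001": {"dog": 0.7, "child": 0.0},
    "img002": {"dog": 0.3, "child": 0.5},
    "img003": {"dog": 0.0, "child": 0.9},
    "img004": {"dog": 0.0, "child": 0.0},
}

def search_by_keywords(images, keywords):
    """Return IDs of images whose feature amount is greater than 0
    for at least one of the requested keywords."""
    return [img_id for img_id, feats in images.items()
            if any(feats.get(kw, 0.0) > 0 for kw in keywords)]

print(search_by_keywords(images, ["dog", "child"]))  # → ['img001', 'img002', 'img003']
```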
  • the browser control unit 32 calculates an integrated feature IF for an image 46 delivered from the image management server 12 in response to an image delivery request 85.
  • the integrated feature amount IF is the arithmetic average value of the multiple types of feature amounts to be integrated.
  • FIG. 17 illustrates an example of an image delivery request 85 including keywords for dog and child as shown in FIG. 15 and FIG. 16.
  • the browser control unit 32 calculates the arithmetic average value of the subject feature of the dog and the subject feature of the child as the integrated feature IF.
  • the browser control unit 32 calculates the integrated feature IF as the arithmetic average value.
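The arithmetic-mean integration described above can be sketched in a few lines; the sample values 0.29 and 0.40 are assumptions chosen only to reproduce an integrated feature amount IF of 0.345:

```python
def integrated_feature(feature_amounts):
    """Integrated feature amount IF: arithmetic mean of the feature amounts to integrate."""
    return sum(feature_amounts) / len(feature_amounts)

# Dog subject feature amount 0.29 and child subject feature amount 0.40:
print(round(integrated_feature([0.29, 0.40]), 3))  # → 0.345
```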
  • the browser control unit 32 places each image 46 at a position according to its feature amounts in a two-dimensional display space 90 that has an X axis and a Y axis related to the feature amounts. When images 46 overlap in the display space 90, the browser control unit 32 displays the image 46 with the most recent shooting date and time in front.
  • a position according to the feature in the display space 90 refers to a position according to at least three types of feature out of the hundreds to thousands of types of feature, and does not necessarily have to be a position according to all of the hundreds to thousands of types of feature.
  • the X axis is an axis indicating a numerical value that integrates the subject feature quantities of the dog and the child, that is, the integrated feature quantity IF
  • the Y axis is the axis indicating the image quality feature quantity.
  • the X-axis is an example of "an axis that indicates the magnitude of a numerical value that integrates two or more types of feature amounts" according to the technology of the present disclosure. Note that not only the X-axis but also the Y-axis may be an axis indicating the magnitude of a numerical value that integrates two or more types of feature amounts.
  • the browser control unit 32 places the image 461, whose integrated feature amount IF is 0.345 and whose image quality feature amount is 0.88, at the coordinates {0.345, 0.88} in the display space 90. Similarly, the browser control unit 32 places the image 462, whose integrated feature amount IF is 0.43 and whose image quality feature amount is 0.56, at the coordinates {0.43, 0.56}, and places the image 463, whose integrated feature amount IF is 0.48 and whose image quality feature amount is 0.72, at the coordinates {0.48, 0.72}.
  • in the display space 90 of FIG. 17, an image 46 with relatively high subject feature amounts for the dog and the child and a relatively high image quality feature amount, in other words an image 46 in which the dog and/or the child clearly appears as the subject with good image quality, is placed in the upper right area.
  • the browser control unit 32 displays a list of the plurality of images 46 by arranging the images 46 at positions corresponding to the feature amounts in the display space 90 in this manner.
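Placing each image at the coordinates given by its feature amounts — X as the integrated feature amount IF, Y as the image quality feature amount — reduces to a simple mapping; the IDs and values below are illustrative, reusing the coordinates of images 461 to 463:

```python
def place_images(images):
    """Map each image ID to its (x, y) position in the display space:
    x = integrated feature amount IF, y = image quality feature amount."""
    return {img_id: (feats["IF"], feats["quality"])
            for img_id, feats in images.items()}

positions = place_images({
    "img461": {"IF": 0.345, "quality": 0.88},
    "img462": {"IF": 0.43,  "quality": 0.56},
    "img463": {"IF": 0.48,  "quality": 0.72},
})
print(positions["img461"])  # → (0.345, 0.88)
```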
  • when there are two or more such images 46, the browser control unit 32 displays the representative image 46RP of the two or more images 46 at the top, and displays the other images 46 in a stacked manner under the representative image 46RP.
  • the representative image 46RP is, for example, the image 46 with the oldest photographing date and time among the two or more images 46 whose feature amounts are within a preset range.
  • two or more images 46 whose feature amounts are within a preset range are images 46 for which the distance between their feature amount vectors VI is less than a preset threshold.
  • the threshold value is set to a value that allows continuous shots taken at intervals of several milliseconds to several seconds to be determined as an image 46 whose feature amount falls within a preset range.
  • FIG. 18 shows an example in which three images 46 whose feature amounts are within a preset range are collectively displayed in a stacked manner as a representative image 46RP.
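One way such stacking could be realized is to cluster images whose feature amount vectors VI lie within the threshold of each other and take the oldest shot in each group as the representative image 46RP. The greedy single-pass grouping, field names, and sample data below are assumptions for illustration, not the patent's algorithm:

```python
import math

def group_similar(images, threshold):
    """Group images whose feature amount vectors lie within `threshold`
    of a group's first member; sorting by shooting date first makes the
    oldest image in each group the representative 46RP (group[0])."""
    groups = []
    for img in sorted(images, key=lambda i: i["shot_at"]):
        for g in groups:
            if math.dist(g[0]["vector"], img["vector"]) < threshold:
                g.append(img)
                break
        else:
            groups.append([img])  # new group; this oldest image is the representative
    return groups

# A burst of two near-identical shots plus one unrelated image:
burst = [
    {"id": "a", "shot_at": "2023-05-01T10:00:00.000", "vector": (0.50, 0.80)},
    {"id": "b", "shot_at": "2023-05-01T10:00:00.200", "vector": (0.51, 0.80)},
    {"id": "c", "shot_at": "2023-06-10T09:00:00.000", "vector": (0.10, 0.20)},
]
for g in group_similar(burst, threshold=0.05):
    print(g[0]["id"], len(g))  # → "a 2" then "c 1"
```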
  • the browser control unit 32 displays an image list display screen 95 including a display space 90 in which a plurality of images 46 are arranged on the display 24A.
  • a setting change button 96 is provided on the image list display screen 95. When the settings change button 96 is pressed, the browser control unit 32 changes the display from the image list display screen 95 to the display settings screen 75.
  • FIGS. 20 and 21 show an example of a process for changing the specifications of the display space 90 in response to an instruction to change the feature amounts constituting the axes of the display space 90.
  • FIG. 20 shows a case where the setting change button 96 is pressed on the image list display screen 95 including the display space 90 shown in FIG. 19, and furthermore, on the display setting screen 75, the feature quantities constituting the X-axis are changed to date and time features (the Y-axis remains unchanged as image quality features).
  • the browser control unit 32 rearranges the images 46 in the display space 90 with the X-axis being the date and time features and the Y-axis being the image quality features, and displays on the display 24A the image list display screen 95 including the display space 90 with the X-axis being the date and time features and the Y-axis being the image quality features.
  • the images 46 with relatively high date and time features and relatively high image quality features are arranged in the upper right area.
  • the browser control unit 32 rearranges the images 46 in the display space 90 where the X-axis is the subject feature quantity of alpine plants and the Y-axis is the red feature quantity, and displays the image list display screen 95 including the display space 90 where the X-axis is the subject feature quantity of alpine plants and the Y-axis is the red feature quantity on the display 24A.
  • images 46 with relatively high subject feature quantities of alpine plants and relatively high red feature quantities, in other words images 46 in which red alpine plants are clearly depicted as subjects, are arranged in the upper right region.
  • the browser control unit 32 receives an instruction to change the feature amount forming the axis of the display space 90 as an instruction from the user U to change the specifications of the display space 90. Then, the specifications of the display space 90 are changed according to the change instruction.
  • although FIGS. 20 and 21 show an example in which the feature amounts constituting the X-axis and Y-axis are each changed to one type, the feature amounts constituting the X-axis and Y-axis may of course be changed to multiple types.
  • FIG. 22 illustrates a case where the display 24A is a touch panel display.
  • the user U specifies the selection area 100 by tracing the surface of the display 24A with the finger F.
  • the image 46 existing within the designated selection area 100 is thus selected.
  • the image 46 selected by the user U will be referred to as a selected image 46S.
  • a processing instruction menu 101 is pop-up displayed on the image list display screen 95.
  • the processing instruction menu 101 has three options: album creation, batch editing, and batch deletion.
  • when the album creation option of the processing instruction menu 101 is selected, the browser control unit 32 displays an album creation screen 105 on the display 24A, as shown in FIG. 24 as an example.
  • the album creation screen 105 displays an album 106 created using the selected image 46S.
  • a save button 107 and a cancel button 108 are provided.
  • when the save button 107 is pressed, the browser control unit 32 transmits a storage request for the album 106 to the image management server 12.
  • the RW control unit 42 of the image management server 12 stores the album 106 in the storage area 45 of the image DB 36.
  • when the cancel button 108 is pressed, the browser control unit 32 returns the display to the image list display screen 95 in which no selected image 46S is selected.
  • when the batch editing option of the processing instruction menu 101 is selected, the browser control unit 32 applies a specified display effect, such as black and white, sepia, vivid, soft focus, light leakage, or whitening, to the selected images 46S all at once. Further, when the batch deletion option of the processing instruction menu 101 is selected, the browser control unit 32 deletes the selected images 46S in a batch.
  • the browser control unit 32 receives an instruction from the user U to select the image 46 on the display space 90. Then, designated processes such as album creation, batch editing, and batch deletion are performed only on the selected images 46S selected by the selection instruction.
  • the CPU 22A of the user terminal 10 functions as the browser control unit 32 by activating the image AP 30.
  • the CPU 22B of the image management server 12 functions as a reception unit 40, a feature value derivation unit 41, an RW control unit 42, and a distribution control unit 43 by starting the operation program 35.
  • the user U uses the camera function of the user terminal 10 to photograph the image 46.
  • an image storage request 70 including an image 46 and supplementary information 47 is transmitted to the image management server 12.
  • the image storage request 70 is accepted by the reception unit 40 (YES in step ST100 in FIG. 25).
  • the image storage request 70 is output from the reception unit 40 to the feature quantity derivation unit 41 and the RW control unit 42.
  • the feature amount deriving unit 41 derives various feature amounts as shown in FIGS. 8 to 11 (step ST110).
  • Feature amount information 48 composed of the various feature amounts is output from the feature amount deriving unit 41 to the RW control unit 42. Then, under the control of the RW control unit 42, the image 46, the supplementary information 47, and the feature amount information 48 are stored in the image DB 36 (step ST120).
  • the user U selects the feature quantities forming the X-axis and Y-axis of the display space 90 from the pull-down menus 76 and 77, and then presses the setting button 78.
  • the browser control unit 32 generates an image distribution request 85, and transmits the image distribution request 85 to the image management server 12.
  • the image distribution request 85 is received by the reception unit 40 (YES in step ST200 in FIG. 26).
  • the image distribution request 85 is output from the reception unit 40 to the RW control unit 42.
  • the image 46 corresponding to the image distribution request 85 is read from the image DB 36 (step ST210).
  • the image 46 is output from the RW control unit 42 to the distribution control unit 43 together with the supplementary information 47 and the feature amount information 48.
  • the image 46, the supplementary information 47, and the feature amount information 48 are distributed to the user terminal 10 that requested the image distribution request 85 under the control of the distribution control unit 43 (step ST220).
  • the browser control unit 32 receives the images 46 and the like distributed from the image management server 12 (YES in step ST300 in FIG. 27). As shown in FIG. 17, the images 46 are placed at positions corresponding to their feature amounts in the display space 90 set on the display setting screen 75 under the control of the browser control unit 32 (step ST310). Then, under the control of the browser control unit 32, the image list display screen 95 including the display space 90 is displayed on the display 24A (step ST320). At this time, when two or more images 46 have feature amounts within a preset range, the representative image 46RP is displayed at the top and the other images 46 are displayed in a stacked manner below it, as shown in FIG. 18.
  • the RW control unit 42 of the CPU 22B of the image management server 12 acquires multiple images 46 and acquires multiple types of feature amounts for the multiple images 46.
  • the browser control unit 32 of the CPU 22A of the user terminal 10 displays a list of the multiple images 46 by arranging the images 46 at positions according to the feature amounts in a two-dimensional display space 90 related to the feature amounts, the display space 90 having at least one axis indicating the magnitude of a numerical value obtained by integrating two or more types of feature amounts. Therefore, it is possible to display a list of a wide variety of images 46 compared to display spaces having X and Y axes indicating one type of feature amount, such as the display space 90 shown in FIG.
  • the X axis indicates the date and time feature amount and the Y axis indicates the image quality feature amount
  • the display space 90 shown in FIG. 21 in which the X axis indicates the subject feature amount of alpine plants and the Y axis indicates the red color feature amount.
  • the browser control unit 32 accepts an instruction from the user U to change the specifications of the display space 90, and changes the specifications of the display space 90 in accordance with the change instruction. This makes it easy to change the specifications of the display space 90 to match the intentions of the user U. As a result, it becomes easier for the user U to find the desired image 46, and the time required to search for the desired image 46 can be reduced.
  • the change instruction is an instruction to change the feature amount that constitutes the axis of the display space 90. Therefore, the axis of the display space 90 can be easily changed as the user U desires. As a result, an unexpected image 46 may be discovered, such as displaying an image 46 that has been taken but forgotten, thereby attracting the user U's interest.
  • when there are two or more images 46 whose feature amounts are within a preset range, the browser control unit 32 displays the representative image 46RP of the two or more images 46 at the top and displays the other images 46 in a layered manner below it. Therefore, two or more such images 46 can be displayed without clutter, making the list of images 46 easier to view.
  • the browser control unit 32 receives an instruction from the user U to select images 46 on the display space 90, and performs the designated process, such as album creation, batch editing, or batch deletion, only on the selected images 46S selected by the selection instruction. For this reason, as shown in FIG. 23 for example, the user U can select images 46 in which a dog and/or a child clearly appears as the subject with good image quality to create an album 106, or can select images 46 with relatively poor image quality and delete them all at once, which greatly facilitates the user U's work of organizing and editing the images 46.
  • an image list display screen 110 shown in FIG. 28 may be adopted.
  • the image list display screen 110 is a screen in which the functions of the display setting screen 75 are added to the image list display screen 95.
  • the image list display screen 110 is provided with a pull-down menu 111 for selecting the feature amounts forming the X-axis and a pull-down menu 112 for selecting the feature amounts forming the Y-axis.
  • Two pull-down menus 111 and 112 are provided in the initial display state.
  • feature amount addition buttons 113 and 114 are provided on the right side of the pull-down menus 111 and 112. When the feature amount addition button 113 is pressed, the pull-down menu 111 is added.
  • the pull-down menu 112 is added.
  • Pull-down menus 111 and 112 correspond to pull-down menus 76 and 77 on display setting screen 75.
  • the feature value addition buttons 113 and 114 correspond to the feature value addition buttons 79 and 80 on the display setting screen 75. According to this image list display screen 110, there is no need to press the setting change button 96 to change the display to the display setting screen 75, so that the user U's operation effort can be saved.
  • the display setting screen 120 of the second embodiment is provided with weighting factor setting bars 121 and 122 in addition to the pull-down menus 76 and 77 of the first embodiment.
  • the weighting factor setting bar 121 is a GUI for setting the weighting factor to be applied to the feature amount constituting the X axis selected by the pull-down menu 76 to a value of 0.1 or more and 1 or less.
  • the weighting factor setting bar 122 is a GUI for setting the weighting factor to be applied to the feature amount constituting the Y axis selected by the pull-down menu 77 to a value of 0.1 or more and 1 or less.
  • two weighting factor setting bars 121 and 122 are provided in the initial display state.
  • the weighting coefficients of the weighting coefficient setting bars 121 and 122 are set to 1 in the initial display state.
  • FIG. 29 a case is illustrated in which subject features of a dog and a child are selected in the pull-down menu 76, and image quality features are selected in the pull-down menu 77.
  • in the weighting factor setting bar 121, 0.5 is set as the weighting coefficient by which the dog's subject feature amount is multiplied, and 1 is set as the weighting coefficient by which the child's subject feature amount is multiplied; in the weighting factor setting bar 122, 1 is set as the weighting coefficient by which the image quality feature amount is multiplied.
  • FIG. 30 shows the processing of the browser control unit 32 in the case of the display settings shown in FIG. 29.
  • the browser control unit 32 calculates the integrated feature amount IF for the image 46 distributed from the image management server 12.
  • whereas the browser control unit 32 of the first embodiment calculates the arithmetic average of the multiple types of feature amounts to be integrated as the integrated feature amount IF, the browser control unit 32 of the second embodiment calculates the weighted average of the multiple types of feature amounts to be integrated as the integrated feature amount IF.
  • the browser control unit 32 uses the weighting coefficients set in the weighting coefficient setting bars 121 and 122 to calculate the weighted average.
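A sketch of the weighted-average integration using weighting coefficients like those from the setting bars; the feature values are illustrative assumptions:

```python
def integrated_feature_weighted(feature_amounts, weights):
    """Integrated feature amount IF as a weighted average of the feature amounts."""
    return sum(f * w for f, w in zip(feature_amounts, weights)) / sum(weights)

# Dog subject feature 0.2 weighted by 0.5, child subject feature 0.8 weighted by 1:
print(round(integrated_feature_weighted([0.2, 0.8], [0.5, 1.0]), 3))  # → 0.6
```

With equal weights the result reduces to the arithmetic mean of the first embodiment.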
  • the browser control unit 32 multiplies the subject feature amount of the dog by the weighting coefficient of 0.5 set in the weighting factor setting bar 121.
  • the instruction from the user U to change the specifications of the display space 90 is an instruction to change the weighting coefficients that determine the ratio at which two or more types of feature amounts are integrated. Therefore, it is possible to display a list of images 46 with even greater variety. It is thus possible to respond to detailed requests from the user U, such as wanting to create an album 106 that gives top priority to images 46 with good image quality while still taking the shooting date and time into account, or wanting to create an album 106 mainly of images 46 of children that also includes images 46 of dogs here and there.
  • the image list display screen 125 of the third embodiment includes an area 126 in which images 46 are displayed as a list in a matrix, and a display space 90.
  • the image 46 in the area 126 can be moved to any position in the display space 90 by a drag-and-drop operation using the finger F of the user U.
  • the image 46 moved from the area 126 to the display space 90 by the drag-and-drop operation will be referred to as a reference image 46R.
  • the display space 90 of the third embodiment is a space obtained by reducing feature amounts having more than two dimensions into two dimensions using a well-known dimension reduction technique such as t-SNE (t-distributed Stochastic Neighbor Embedding). Therefore, the feature amounts forming the X-axis and Y-axis of the display space 90 in the third embodiment are not fixed to any specific type of feature amount.
  • the browser control unit 32 places the other images 46, centering on the reference image 46R, at positions corresponding to the distance between their feature amount vectors VI and that of the reference image 46R. Therefore, an image 46 whose feature amount vector VI is similar to that of the reference image 46R, so that the distance between the feature amount vectors VI is small, is placed at a position close to the reference image 46R. On the other hand, an image 46 whose feature amount vector VI deviates from that of the reference image 46R, so that the distance between the feature amount vectors VI is large, is placed at a position far from the reference image 46R.
  • an image 46 whose feature amount vector VI is similar to that of the reference image 46R is, simply put, an image similar to the reference image 46R.
  • an image 46 whose feature amount vector VI deviates from that of the reference image 46R is, simply put, an image dissimilar to the reference image 46R.
  • the browser control unit 32 adds a color, for example, a red circle 128, to the reference image 46R, and displays the reference image 46R to distinguish it from other images 46. Note that FIG. 32 shows a case where there is one reference image 46R, and FIG. 33 shows a case where there are two reference images 46R.
  • each dimension of the feature amount vector VI of the image 46 is multiplied by a corresponding weighting coefficient Wi.
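The per-dimension weighting of the feature amount vector VI, and the resulting distance used to place images around the reference image 46R, might look as follows. The Euclidean metric, vectors, and weights here are assumptions; the patent does not fix a specific metric:

```python
import math

def weighted_vector(vector, weights):
    """Multiply each dimension of feature vector VI by its weighting coefficient Wi."""
    return [v * w for v, w in zip(vector, weights)]

def distance_to_reference(ref_vec, other_vec, weights):
    """Weighted distance used to place an image relative to the reference image 46R."""
    return math.dist(weighted_vector(ref_vec, weights),
                     weighted_vector(other_vec, weights))

ref = [0.8, 0.2, 0.5]          # reference image 46R
similar = [0.78, 0.22, 0.5]    # nearly the same features → placed close
dissimilar = [0.1, 0.9, 0.0]   # very different features → placed far away
w = [1.0, 1.0, 0.5]            # weighting coefficients Wi

print(distance_to_reference(ref, similar, w)
      < distance_to_reference(ref, dissimilar, w))  # → True
```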
  • the browser control unit 32 displays a similar image selection screen 130 on the display 24A in response to an instruction from the user U.
  • the similar image selection screen 130 is a screen for selecting an image 46 that the user U considers to be similar.
  • the images 46 are displayed in a matrix format on the similar image selection screen 130.
  • the user U selects two or more images that the user U considers to be similar, as shown by hatching, from the images 46 displayed in a list. After that, the user U presses the OK button 131.
  • the browser control unit 32 transmits a weighting coefficient correction request including the image 46 selected by the user U to the image management server 12.
  • the image 46 selected by the user U considering it to be similar will be referred to as a similar image 46SM.
  • the CPU 22B of the image management server 12 of the fourth embodiment functions as a weighting factor correction unit 135 in addition to each of the processing units 40 to 43 of the first embodiment.
  • the feature information 48SM of the similar image 46SM is input to the weighting coefficient correction unit 135.
  • the weighting coefficient correction unit 135 calculates a corrected weighting coefficient WiC by solving an optimization problem for finding a weighting coefficient Wi such that the distance between the feature amount vectors VI of the plurality of similar images 46SM is less than a preset first threshold.
  • the first threshold is set to a value close to 0, such as 0.1. That is, the weighting coefficient correction unit 135 changes the weighting coefficient Wi in a direction in which the feature amounts of the similar images 46SM match each other, and uses the result as the corrected weighting coefficient WiC.
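The patent does not spell out how this optimization problem is solved. As one crude, illustrative stand-in, the sketch below repeatedly damps the weight of whichever dimension spreads the similar images 46SM the most, until all pairwise weighted distances fall below the first threshold; the update rule, step size, and data are all assumptions:

```python
import math
from itertools import combinations

def max_pairwise_distance(vectors, weights):
    """Largest weighted distance between any two similar-image feature vectors."""
    return max(math.dist([v * w for v, w in zip(a, weights)],
                         [v * w for v, w in zip(b, weights)])
               for a, b in combinations(vectors, 2))

def correct_weights(similar_vectors, weights, threshold=0.1, step=0.9, max_iter=100):
    """Return corrected weighting coefficients WiC that pull the similar
    images within `threshold` of each other (greedy damping, not the
    patent's unspecified optimization method)."""
    weights = list(weights)
    for _ in range(max_iter):
        if max_pairwise_distance(similar_vectors, weights) < threshold:
            break
        # Damp the dimension contributing the largest weighted spread.
        spreads = [w * (max(v[i] for v in similar_vectors) -
                        min(v[i] for v in similar_vectors))
                   for i, w in enumerate(weights)]
        weights[spreads.index(max(spreads))] *= step
    return weights

similar = [(0.9, 0.1), (0.2, 0.15)]   # far apart only in dimension 0
wic = correct_weights(similar, [1.0, 1.0])
print(max_pairwise_distance(similar, wic) < 0.1)  # → True
```

The dissimilar-image case of the fourth embodiment would run the update in the opposite direction, increasing weights until the second threshold is exceeded.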
  • the browser control unit 32 displays a dissimilar image selection screen 140 on the display 24A in response to an instruction from the user U.
  • the dissimilar image selection screen 140 is a screen for selecting an image 46 that the user U considers to be completely different from the similar image selection screen 130. Similar to the similar image selection screen 130, the dissimilar image selection screen 140 displays a list of images 46 in a matrix.
  • the user U selects two or more images that the user U considers to be completely different, as shown by hatching, from among the images 46 displayed in a list. After that, the user U presses the OK button 141.
  • the browser control unit 32 transmits a weighting coefficient correction request including the image 46 selected by the user U to the image management server 12.
  • the image 46 selected by the user U considering it to be completely different will be referred to as a dissimilar image 46DSM.
  • the feature amount information 48DSM of the dissimilar image 46DSM is input to the weighting coefficient correction unit 135.
  • the weighting coefficient correction unit 135 calculates a corrected weighting coefficient WiC by solving an optimization problem for finding a weighting coefficient Wi such that the distance between the feature amount vectors VI of the plurality of dissimilar images 46DSM is equal to or greater than a preset second threshold.
  • the second threshold is set to a value such that the placement positions of the dissimilar images 46DSM are farthest apart in the display space 90 shown in the third embodiment, for example at the lower-left origin and at the upper-right position where the feature amounts are at their maximum. That is, the weighting coefficient correction unit 135 changes the weighting coefficient Wi in a direction in which the feature amounts of the dissimilar images 46DSM deviate from each other, and uses the result as the corrected weighting coefficient WiC.
  • the browser control unit 32 receives an instruction from the user U to select at least one of the similar image 46SM and the dissimilar image 46DSM.
  • the weighting coefficient correction unit 135 changes the weighting coefficients Wi applied to the multiple types of feature amounts in a direction in which the feature amounts match for the similar images 46SM, and in a direction in which the feature amounts deviate from each other for the dissimilar images 46DSM. Therefore, in the display space 90, images 46 that the user U considers to be similar are placed relatively close to each other, and images 46 that the user U considers to be completely different are placed relatively far apart. It is thus possible to display a list of images 46 that suits the sensibilities of the user U.
  • the user U may be allowed to select the similar image 46SM or the dissimilar image 46DSM on the display space 90.
  • possible selection methods include surrounding the similar images 46SM with a frame, connecting the similar images 46SM with a line, or connecting the dissimilar images 46DSM with a line.
  • the images 46 may be configured to be movable within the display space 90 by drag-and-drop operations or the like, and an operation of rearranging images 46 that the user U considers to be similar to nearby positions may be regarded as an operation of selecting similar images 46SM. Conversely, an operation of rearranging images 46 that the user U considers to be completely different to distant positions may be regarded as an operation of selecting dissimilar images 46DSM.
  • a list of images 46 may be displayed in a three-dimensional display space 150.
  • a display space 150 is illustrated in which the X-axis indicates the date and time feature amount, the Y-axis indicates a numerical value integrating the mountain subject feature amount and the forest subject feature amount, and the Z-axis indicates the image quality feature amount.
  • the display size of the image 46 may be changed depending on the position of the display space 90. For example, in the display space 90 shown in FIGS. 19 to 21, the display size of the image 46 placed in the upper right area is increased. In this way, images 46 that are likely to be included in the album 106 can be shown to the user U in more detail.
  • the image 46 is not limited to an image taken using the camera function of the illustrated user terminal 10.
  • the image may be an image that the user U has downloaded from a web page, an image that the user U has received from a family member, a friend, or the like.
  • the image may be an image captured by the camera function of the user terminal 10 from an instant film printed out from an instant camera.
  • the image management server 12 may be responsible for all or part of the functions of the browser control unit 32 of the user terminal 10. Specifically, various screens such as the display setting screen 75 and the image list display screen 95 are generated in the image management server 12 and distributed to the user terminal 10 in the form of screen data for web distribution created in a markup language such as XML (Extensible Markup Language). In this case, the browser control unit 32 of the user terminal 10 reproduces the various screens to be displayed on the web browser based on the screen data and displays them on the display 24A. Note that another data description language such as JSON (JavaScript (registered trademark) Object Notation) may be used instead of XML.
  • the hardware configuration of the computer that constitutes the image management server 12 can be modified in various ways.
  • the image management server 12 can be configured with multiple computers separated as hardware in order to improve processing power and reliability.
  • the functions of the reception unit 40 and feature derivation unit 41 and the functions of the RW control unit 42 and distribution control unit 43 can be distributed and assigned to two computers.
  • the image management server 12 is configured with two computers.
  • all or part of the functions of the image management server 12 may be assigned to the user terminal 10.
  • the hardware configurations of the computers of the user terminal 10 and the image management server 12 can be changed as appropriate depending on required performance such as processing capacity, safety, and reliability.
  • APs such as the image AP 30 and the operation program 35 can of course be duplicated or stored in a distributed manner across multiple storages for the purpose of ensuring safety and reliability.
  • the following various processors can be used as the hardware structure of the processing units that execute various processes, such as the browser control unit 32, the reception unit 40, the feature amount derivation unit 41, the RW control unit 42, the distribution control unit 43, and the weighting coefficient correction unit 135.
  • the various processors include the CPUs 22A and 22B, which are general-purpose processors that execute software (the image AP 30 and the operation program 35) to function as the various processing units; programmable logic devices (PLDs) such as FPGAs (Field Programmable Gate Arrays), which are processors whose circuit configuration can be changed after manufacture; and dedicated electric circuits such as ASICs (Application Specific Integrated Circuits), which are processors having a circuit configuration designed specifically to execute particular processing.
  • one processing unit may be configured with one of these various processors, or with a combination of two or more processors of the same type or different types (for example, a combination of multiple FPGAs and/or a combination of a CPU and an FPGA). Further, a plurality of processing units may be configured with one processor.
  • as an example of configuring a plurality of processing units with one processor, first, there is a form in which one processor is configured with a combination of one or more CPUs and software, as typified by computers such as clients and servers, and this processor functions as the plurality of processing units.
  • second, as typified by a system-on-chip (SoC), there is a form of using a processor that realizes the functions of an entire system including the plurality of processing units with a single IC chip.
  • in this way, the various processing units are configured using one or more of the various processors described above as a hardware structure.
  • furthermore, as the hardware structure of these various processors, more specifically, an electric circuit (circuitry) combining circuit elements such as semiconductor elements can be used.
  • the processor includes: Get multiple images, Obtaining multiple types of feature amounts for the multiple images, In a two-dimensional or three-dimensional display space related to the feature amount, which has at least one axis indicating the magnitude of a numerical value that integrates two or more types of the feature amount, displaying a list of a plurality of images by arranging the images at positions; Display control device.
  • [Supplementary Note 2] The display control device according to Supplementary Note 1, wherein the processor accepts a user's instruction to change the specifications of the display space, and changes the specifications of the display space in accordance with the change instruction.
  • [Supplementary Note 3] The display control device according to Supplementary Note 2, wherein the change instruction is an instruction to change a weighting coefficient that determines the ratio at which two or more types of the feature amounts are integrated.
  • [Supplementary Note 4] The display control device according to Supplementary Note 2 or 3, wherein the change instruction is an instruction to change the feature amounts constituting the axis.
  • [Supplementary Note 5] The display control device according to any one of Supplementary Notes 1 to 4, wherein, when there are two or more images whose feature amounts fall within a preset range, the processor displays a representative image of the two or more images at the top, and displays the other images stacked beneath the representative image.
  • [Supplementary Note 6] The display control device according to any one of Supplementary Notes 1 to 5, wherein the processor accepts a user's instruction to select an image in the display space, and performs specified processing only on the image selected by the selection instruction.
  • [Supplementary Note 7] The display control device according to any one of Supplementary Notes 1 to 6, wherein the processor accepts a user's instruction to select at least one of similar images and dissimilar images and, in response to the selection instruction, changes the weighting coefficients applied to the plurality of types of feature amounts in a direction in which the feature amounts of similar images converge, and in a direction in which the feature amounts of dissimilar images diverge.
  • The technology of the present disclosure can also combine the various embodiments and/or various modifications described above as appropriate. The present invention is of course not limited to the above embodiments, and various configurations can be adopted without departing from the gist of the invention. Furthermore, the technology of the present disclosure extends not only to the program but also to a storage medium that non-transitorily stores the program.
  • In this specification, "A and/or B" has the same meaning as "at least one of A and B." That is, "A and/or B" may be only A, only B, or a combination of A and B. The same concept applies when three or more items are joined with "and/or."

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This display control device comprises a processor. The processor acquires a plurality of images, acquires a plurality of types of feature amounts relating to the images, and displays the images as a list by arranging them at positions corresponding to the feature amounts in a two-dimensional or three-dimensional display space that relates to the feature amounts and has at least one axis indicating the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.

Description

DISPLAY CONTROL DEVICE, OPERATION METHOD OF DISPLAY CONTROL DEVICE, AND OPERATION PROGRAM OF DISPLAY CONTROL DEVICE
The technology of the present disclosure relates to a display control device, an operation method of a display control device, and an operation program of a display control device.
Japanese Patent Laid-Open No. 2001-005822 describes a method for displaying a group of images obtained as results of searching an image database by an arbitrary method. The display method includes a database in which feature amounts for the image group of the entire image database are associated in advance, by statistical processing, with keywords or symbols that can be assigned to images and with image feature amounts. For each search, at most a specified number of keyword or symbol feature amounts that express the differences between the feature amounts of the plural search-result images are dynamically and automatically selected, in descending order of sensitivity to those differences; the selected keywords or symbols are presented to the searcher; and reduced versions of the plural search-result images are arranged and displayed as a scatter plot in a display space composed of the keywords or symbols that the searcher selects from those presented.
Japanese Patent Application Laid-Open No. 05-282375 describes an image database system that searches an image database storing images having n-dimensional attribute information for images having attribute information of arbitrary dimensions, displays the retrieved images in reduced form for browsing on a display device, and, when an image of interest is selected from the displayed images, displays that image on the entire screen of the display device. When searching for images similar to an arbitrary sample image, the system retrieves images whose feature amounts lie within a specified metric distance from each feature amount of the two-dimensional attribute information of interest of the sample image; when the retrieved images are reduced and displayed for browsing on the display device, the browsing display is laid out using the coordinates of the two-dimensional attribute information.
Japanese Patent Application Publication No. 2003-271665 describes a graphical user interface for searching that includes first display control means, movement command means, and second display control means. The first display control means controls a display device to display, on a display screen, a coordinate axis on which a scale is formed, an area for displaying search results of target information corresponding to a search target range set according to the scale, and an operator located at a reference position on the coordinate axis. The movement command means gives the operator a command to move horizontally or vertically. The second display control means controls the display of the coordinate axis so that the graduation of the scale is changed when the movement command means gives the operator a movement command in one of the horizontal and vertical directions, and the values of the scale are changed when a movement command is given in the other direction.
One embodiment of the technology of the present disclosure provides a display control device capable of displaying a richly varied list of images, an operation method of the display control device, and an operation program of the display control device.
A display control device of the present disclosure comprises a processor. The processor acquires a plurality of images, acquires a plurality of types of feature amounts for the plurality of images, and displays the plurality of images as a list by arranging each image at a position corresponding to its feature amounts in a two-dimensional or three-dimensional display space relating to the feature amounts, the display space having at least one axis indicating the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.
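The integration described above can be sketched as a weighted sum of normalized feature amounts per axis, with each image placed at the resulting coordinates. This is a minimal illustration under assumed names (the feature names, weights, and image IDs below are hypothetical, not from the disclosure):

```python
# Hypothetical sketch of the list display: each axis value integrates two or
# more feature amounts (each normalized to [0, 1]) as a weighted sum, and
# every image is placed at the (x, y) position given by its integrated values.

def integrated_axis_value(features, weights):
    """Integrate two or more feature amounts into one axis value in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[name] * features[name] for name in weights) / total

def layout(images, x_weights, y_weights):
    """Return display-space (x, y) coordinates for each image."""
    return {
        image_id: (
            integrated_axis_value(features, x_weights),
            integrated_axis_value(features, y_weights),
        )
        for image_id, features in images.items()
    }

images = {
    "IMG_0001": {"date": 0.0, "brightness": 0.8, "quality": 0.9, "color_r": 0.4},
    "IMG_0002": {"date": 1.0, "brightness": 0.2, "quality": 0.5, "color_r": 0.7},
}
# X axis integrates date and quality; Y axis integrates brightness and color_r.
positions = layout(images,
                   x_weights={"date": 0.5, "quality": 0.5},
                   y_weights={"brightness": 0.7, "color_r": 0.3})
```

Because each axis value stays in [0, 1], the positions can be mapped directly onto a screen-space scatter plot of thumbnails.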
Preferably, the processor accepts a user's instruction to change the specifications of the display space, and changes the specifications of the display space in accordance with the change instruction.
Preferably, the change instruction is an instruction to change a weighting coefficient that determines the ratio at which two or more types of feature amounts are integrated.
Preferably, the change instruction is an instruction to change the feature amounts constituting an axis.
Preferably, when there are two or more images whose feature amounts fall within a preset range, the processor displays a representative image of the two or more images at the top, and displays the other images stacked beneath the representative image.
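One way to realize this stacked display is to group images whose display-space positions fall within the preset range of one another and order each group with a representative first. The sketch below is an assumed implementation; the greedy grouping rule and the choice of "highest image-quality score" as the representative are illustrative, not specified by the disclosure:

```python
# Hedged sketch: images within `eps` of a group's first member (the seed) are
# stacked together; the representative (highest quality score) goes on top.

def group_nearby(positions, eps):
    """Greedily group images whose positions are within eps of a group's seed."""
    groups = []
    for image_id, (x, y) in positions.items():
        for group in groups:
            sx, sy = positions[group[0]]  # compare against the group's seed
            if abs(x - sx) <= eps and abs(y - sy) <= eps:
                group.append(image_id)
                break
        else:
            groups.append([image_id])
    return groups

def order_stack(group, quality):
    """Representative image (highest quality) first; the rest stacked below."""
    return sorted(group, key=lambda i: quality[i], reverse=True)

positions = {"a": (0.10, 0.20), "b": (0.11, 0.21), "c": (0.90, 0.90)}
quality = {"a": 0.4, "b": 0.9, "c": 0.7}
stacks = [order_stack(g, quality) for g in group_nearby(positions, eps=0.05)]
# stacks → [["b", "a"], ["c"]]: "b" is the representative shown on top of "a".
```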
Preferably, the processor accepts a user's instruction to select an image in the display space, and performs specified processing only on the image selected by the selection instruction.
Preferably, the processor accepts a user's instruction to select at least one of similar images and dissimilar images and, in response to the selection instruction, changes the weighting coefficients applied to the plurality of types of feature amounts in a direction in which the feature amounts of similar images converge, and in a direction in which the feature amounts of dissimilar images diverge.
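The direction of this weight change can be illustrated with a weighted squared distance d(u, v; w) = Σₖ wₖ(uₖ − vₖ)². For a pair marked similar, shrinking the weights on dimensions where the pair differs reduces their distance; for a dissimilar pair, growing those weights increases it. The simple update rule below is an illustrative sketch, not the optimization disclosed here:

```python
# Illustrative weight update: reduce (similar) or increase (dissimilar) each
# weight in proportion to the squared difference on that feature dimension,
# then clamp at zero and renormalize so the weights sum to 1.

def update_weights(weights, u, v, similar, lr=0.5):
    diffs = [(a - b) ** 2 for a, b in zip(u, v)]
    sign = -1.0 if similar else +1.0
    new = [max(w + sign * lr * d, 0.0) for w, d in zip(weights, diffs)]
    total = sum(new)
    return [w / total for w in new]

w = [0.5, 0.5]
u, v = [0.1, 0.9], [0.1, 0.1]   # identical in dim 0, far apart in dim 1
w_sim = update_weights(w, u, v, similar=True)    # dim-1 weight shrinks
w_dis = update_weights(w, u, v, similar=False)   # dim-1 weight grows
```

After the similar-pair update, the dimension on which the pair disagrees contributes less to the distance, so the pair moves closer together in the display space; the dissimilar-pair update has the opposite effect.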
An operation method of the display control device of the present disclosure includes: acquiring a plurality of images; acquiring a plurality of types of feature amounts for the plurality of images; and displaying the plurality of images as a list by arranging each image at a position corresponding to its feature amounts in a two-dimensional or three-dimensional display space relating to the feature amounts, the display space having at least one axis indicating the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.
An operation program of the display control device of the present disclosure causes a computer to execute processing including: acquiring a plurality of images; acquiring a plurality of types of feature amounts for the plurality of images; and displaying the plurality of images as a list by arranging each image at a position corresponding to its feature amounts in a two-dimensional or three-dimensional display space relating to the feature amounts, the display space having at least one axis indicating the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.
FIG. 1 is a diagram showing a user terminal and an image management server.
FIG. 2 is a block diagram showing computers constituting the user terminal and the image management server.
FIG. 3 is a block diagram showing the processing units of the CPU of the user terminal.
FIG. 4 is a block diagram showing the processing units of the CPU of the image management server.
FIG. 5 is a diagram showing data stored in an image DB.
FIG. 6 is a diagram showing supplementary information.
FIG. 7 is a diagram showing feature amount information and a feature amount vector.
FIG. 8 is an explanatory diagram of a method for deriving a date-and-time feature amount.
FIG. 9 is a diagram showing processing for deriving an image quality feature amount using an image quality feature amount derivation model.
FIG. 10 is a diagram showing processing for deriving a model feature amount using a model feature amount derivation model.
FIG. 11 is a diagram showing processing for deriving a subject feature amount using a subject feature amount derivation model.
FIG. 12 is a diagram showing a subject discrimination model.
FIG. 13 is a diagram showing the processing of each processing unit of the image management server when an image storage request is transmitted from the user terminal.
FIG. 14 is a diagram showing a display setting screen.
FIG. 15 is a diagram showing the processing of the browser control unit when a setting button is pressed on the display setting screen.
FIG. 16 is a diagram showing the processing of each processing unit of the image management server when an image distribution request is transmitted from the user terminal.
FIG. 17 is a diagram showing processing for arranging images at positions in the display space according to their feature amounts.
FIG. 18 is a diagram showing processing for displaying, in stacked form, images whose feature amounts fall within a preset range.
FIG. 19 is a diagram showing an image list display screen.
FIG. 20 is a diagram showing processing for changing the specifications of the display space in response to an instruction to change the feature amounts constituting an axis.
FIG. 21 is a diagram showing processing for changing the specifications of the display space in response to an instruction to change the feature amounts constituting an axis.
FIG. 22 is a diagram showing how a user instructs the selection of images in the display space.
FIG. 23 is a diagram showing a state in which a processing instruction menu for instructing processing on the images selected by a selection instruction is displayed.
FIG. 24 is a diagram showing an album creation screen on which an album created from the images selected by the selection instruction is displayed.
FIG. 25 is a flowchart showing the processing procedure of the image management server.
FIG. 26 is a flowchart showing the processing procedure of the image management server.
FIG. 27 is a flowchart showing the processing procedure of the user terminal.
FIG. 28 is a diagram showing an image list display screen on which the feature amounts constituting an axis can be changed with a pull-down menu.
FIG. 29 is a diagram showing a display setting screen having a function for instructing a change of weighting coefficients.
FIG. 30 is a diagram showing processing for arranging images at positions in the display space according to their feature amounts.
FIG. 31 is a diagram showing how an image is moved to an arbitrary position in the display space.
FIG. 32 is a diagram showing an image list display screen including a display space in which other images are arranged around one image moved to an arbitrary position.
FIG. 33 is a diagram showing an image list display screen including a display space in which other images are arranged around two images moved to arbitrary positions.
FIG. 34 is a diagram showing a similar image selection screen.
FIG. 35 is a diagram showing processing for correcting weighting coefficients by solving an optimization problem for weighting coefficients that make the distance between the feature amount vectors of similar images less than a first threshold.
FIG. 36 is a diagram showing a dissimilar image selection screen.
FIG. 37 is a diagram showing processing for correcting weighting coefficients by solving an optimization problem for weighting coefficients that make the distance between the feature amount vectors of dissimilar images equal to or greater than a second threshold.
FIG. 38 is a diagram showing an example in which images are arranged in a three-dimensional display space.
As an example, as shown in FIG. 1, a user U owns a user terminal 10. The user terminal 10 is a device having a camera function, an image playback and display function, an image transmission and reception function, and the like. The camera function of the user terminal 10 has an imaging element such as a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor, and obtains an image 46 of a subject (see FIG. 5) by forming subject light taken in through a lens onto the imaging element. Specifically, the user terminal 10 is a smartphone, a tablet terminal, a compact digital camera, a mirrorless interchangeable-lens camera, a notebook personal computer, or the like. The user terminal 10 is an example of a "display control device" according to the technology of the present disclosure.
The user terminal 10 is connected to an image management server 12 via a network 11 so that they can communicate with each other. The network 11 is, for example, a WAN (Wide Area Network) such as the Internet or a public communication network. The user terminal 10 transmits (uploads) images 46 to the image management server 12, and receives (downloads) images 46 from the image management server 12.
The image management server 12 is, for example, a server computer or a workstation, and together with the user terminal 10 is an example of a "display control device" according to the technology of the present disclosure. In this way, the "display control device" according to the technology of the present disclosure may be realized across a plurality of devices. A plurality of user terminals 10 of a plurality of users U are connected to the image management server 12 via the network 11.
As an example, as shown in FIG. 2, the computers constituting the user terminal 10 and the image management server 12 basically have the same configuration and include a storage 20, a memory 21, a CPU (Central Processing Unit) 22, a communication unit 23, a display 24, and an input device 25. These are interconnected via a bus line 26.
The storage 20 is a hard disk drive built into the computer constituting the user terminal 10 or the image management server 12, or connected to it via a cable or a network. Alternatively, the storage 20 is a disk array in which a plurality of hard disk drives are connected. The storage 20 stores control programs such as an operating system, various application programs (hereinafter abbreviated as AP (Application Program)), and various data accompanying these programs. A solid state drive may be used instead of a hard disk drive.
The memory 21 is a work memory for the CPU 22 to execute processing. The CPU 22 loads a program stored in the storage 20 into the memory 21 and executes processing according to the program, thereby centrally controlling each part of the computer. The CPU 22 is an example of a "processor" according to the technology of the present disclosure. The memory 21 may be built into the CPU 22.
The communication unit 23 is a network interface that controls the transmission of various information via the network 11 and the like. The display 24 displays various screens, which are provided with operation functions through a GUI (Graphical User Interface). The computers constituting the user terminal 10 and the image management server 12 accept the input of operation instructions from the input device 25 through the various screens. The input device 25 is a keyboard, a mouse, a touch panel, a microphone for voice input, or the like.
In the following description, the suffix "A" is attached to the reference numerals of the parts of the computer constituting the user terminal 10 (storage 20, CPU 22, display 24, and input device 25), and the suffix "B" is attached to those of the parts of the computer constituting the image management server 12 (storage 20 and CPU 22), to distinguish them.
As an example, as shown in FIG. 3, an image AP 30 is stored in the storage 20A of the user terminal 10. The image AP 30 is installed on the user terminal 10 by the user U. The image AP 30 is an AP for causing the computer constituting the user terminal 10 to function as a "display control device" according to the technology of the present disclosure. That is, the image AP 30 is an example of an "operation program of a display control device" according to the technology of the present disclosure. When the image AP 30 is activated, the CPU 22A of the user terminal 10 functions as a browser control unit 32 in cooperation with the memory 21 and the like. The browser control unit 32 controls the operation of a web browser dedicated to the image AP 30.
The browser control unit 32 generates various screens and displays them on the display 24A. The browser control unit 32 also accepts, through the various screens, various operation instructions input by the user U from the input device 25A, and transmits various requests corresponding to the operation instructions to the image management server 12.
As an example, as shown in FIG. 4, an operating program 35 is stored in the storage 20B of the image management server 12. The operating program 35 is an AP for causing the computer constituting the image management server 12 to function as a "display control device" according to the technology of the present disclosure. That is, like the image AP 30, the operating program 35 is an example of an "operation program of a display control device" according to the technology of the present disclosure. An image database (hereinafter referred to as DB (Data Base)) 36 is also stored in the storage 20B. Although not shown, the storage 20B also stores, as account information of the user U, a user ID (Identification Data) for uniquely identifying the user U, a password set by the user U, and a terminal ID for uniquely identifying the user terminal 10.
When the operating program 35 is activated, the CPU 22B of the image management server 12 functions as a reception unit 40, a feature amount derivation unit 41, a read/write (hereinafter referred to as RW (Read Write)) control unit 42, and a distribution control unit 43 in cooperation with the memory 21 and the like.
The reception unit 40 receives various requests from the user terminal 10 and outputs them to the feature amount derivation unit 41, the RW control unit 42, and the distribution control unit 43. The feature amount derivation unit 41 derives a plurality of types of feature amounts of an image 46, and outputs feature amount information 48 (see FIG. 5) composed of the derived feature amounts to the RW control unit 42.
The RW control unit 42 controls the storage of various data in the storage 20B and the reading of various data from the storage 20B. In particular, the RW control unit 42 controls the storage of images 46 in the image DB 36 and the reading of images 46 from the image DB 36, as well as the storage of the feature amount information 48 in the image DB 36 and its reading from the image DB 36. The distribution control unit 43 controls the distribution of various data to the user terminal 10.
As an example, as shown in FIG. 5, a storage area 45 is provided in the image DB 36 for each user U. A user ID is registered in the storage area 45. In the storage area 45, images 46, supplementary information 47, and feature amount information 48 are stored in association with one another.
As an example, as shown in FIG. 6, the supplementary information 47 includes a plurality of items such as the shooting date and time, the shooting location, the aperture value, the ISO (International Organization for Standardization) sensitivity, the shutter speed, the focal length, the presence or absence of flash, and tags. The date and time when the image 46 was shot with the camera function of the user terminal 10 is registered as the shooting date and time. As the shooting location, an address and/or a landmark name determined from the latitude and longitude information obtained by the GPS (Global Positioning System) function of the user terminal 10 is registered. A tag is a word that concisely represents a subject appearing in the image 46. Tags include those manually input by the user U and those derived using a subject discrimination model 60 (see FIG. 12).
As an example, as shown in FIG. 7, the feature amount information 48 includes a plurality of items such as a date-and-time feature amount, color feature amounts, a brightness feature amount, an image quality feature amount, model feature amounts, and subject feature amounts. The image 46 can be represented by a feature amount vector VI in which the plurality of types of feature amounts constituting the feature amount information 48 are enumerated. The feature amount information 48 contains several hundred to several thousand feature amounts, so the number of dimensions of the feature amount vector VI is likewise several hundred to several thousand.
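The enumeration into the feature amount vector VI can be sketched as flattening the named feature amounts in a fixed key order into one numeric vector. The feature names and values below are illustrative assumptions (a real vector would have hundreds to thousands of entries):

```python
# Minimal sketch of the feature amount vector VI: named feature amounts are
# flattened, in a fixed key order, into one numeric vector so that images can
# be compared by vector distance.

FEATURE_ORDER = ["date", "color_r", "color_g", "color_b", "brightness", "quality"]

def to_vector(info):
    return [info[name] for name in FEATURE_ORDER]

info = {"date": 0.2, "color_r": 0.5, "color_g": 0.4, "color_b": 0.3,
        "brightness": 0.6, "quality": 0.9}
vi = to_vector(info)   # → [0.2, 0.5, 0.4, 0.3, 0.6, 0.9]
```

Keeping the key order fixed ensures that the same dimension always carries the same feature amount across all images.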
The date-and-time feature amount is a feature amount related to the shooting date and time of the image 46. As an example, as shown in FIG. 8, the feature amount derivation unit 41 derives the date-and-time feature amount of each image 46 by setting the date-and-time feature amount of the image 46O with the oldest shooting date and time among the images 46 stored in the storage area 45 to 0, and that of the image 46N with the latest shooting date and time to 1.
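This is min-max normalization of the shooting timestamps. A minimal sketch, assuming timestamps are given as Unix epoch seconds and that at least two distinct shooting dates exist (the file names are hypothetical):

```python
# Min-max normalization of shooting timestamps: the oldest image maps to 0,
# the newest to 1, and every other image to its proportional position between.

def date_features(timestamps):
    oldest, newest = min(timestamps.values()), max(timestamps.values())
    span = newest - oldest  # assumed non-zero: at least two distinct dates
    return {name: (t - oldest) / span for name, t in timestamps.items()}

shots = {"old.jpg": 1_600_000_000, "mid.jpg": 1_650_000_000, "new.jpg": 1_700_000_000}
feats = date_features(shots)   # old.jpg → 0.0, mid.jpg → 0.5, new.jpg → 1.0
```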
The color feature amounts are feature amounts related to each of the RGB (red, green, blue) colors of the image 46. The feature amount derivation unit 41 derives, as a color feature amount, a value obtained by normalizing the average pixel value of each RGB color of the image 46, for example with the maximum pixel value taken as 1. The brightness feature amount is a feature amount related to the brightness (luminance) of the image 46. The feature amount derivation unit 41 derives, as the brightness feature amount, a value obtained by normalizing the average brightness of the image 46 with the maximum brightness taken as 1.
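These per-channel and luminance averages can be sketched as follows, assuming 8-bit pixels (maximum value 255). The Rec. 601 luma weights used for brightness are an illustrative choice, not specified by the disclosure:

```python
# Color features: mean of each RGB channel normalized by the maximum pixel
# value. Brightness feature: mean luminance (Rec. 601 weights) normalized the
# same way, so all features lie in [0, 1].

def color_features(pixels):
    """pixels: list of (r, g, b) tuples with 8-bit values."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    return [m / 255.0 for m in means]

def brightness_feature(pixels):
    n = len(pixels)
    luma = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels) / n
    return luma / 255.0

pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # one pure pixel per channel
rgb = color_features(pixels)          # each channel averages to 1/3
luma = brightness_feature(pixels)     # also about 1/3 for this image
```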
 The image quality feature is a feature related to the image quality of the image 46, and takes a value between 0 and 1. The closer the image quality feature is to 0, the lower the image quality of the image 46; the closer it is to 1, the higher the image quality.
 As shown in FIG. 9 as an example, the feature derivation unit 41 derives the image quality feature using an image quality feature derivation model 55. The image quality feature derivation model 55 is a machine learning model that outputs an image quality feature in response to an input image 46.
 In the learning phase of the image quality feature derivation model 55, training images and correct image quality features are prepared as teacher data; each correct image quality feature is used to check the training image quality feature that the model 55 outputs in response to the corresponding training image. The correct image quality features are calculated from the results of the teacher data creator's evaluation of multiple items such as sharpness, exposure, gradation expression, saturation, white balance, presence of defocus, and presence of motion blur.
 The model features are features obtained from a machine learning model. Like the other features, the model features are values normalized so that the maximum value is 1.
 As shown in FIG. 10 as an example, the feature derivation unit 41 derives the model features using a model feature derivation model 56. The model feature derivation model 56 is a machine learning model that outputs model features in response to an input image 46.
 The subject features indicate, as a value between 0 and 1 for each type of subject, the probability that the subject appears in the image 46. Besides the illustrated mountains, rivers, seas, lakes, men, women, and so on, the subject types cover everything that can be a subject of the image 46.
 As shown in FIG. 11 as an example, the feature derivation unit 41 derives the subject features using a subject feature derivation model 57. The subject feature derivation model 57 is a machine learning model that outputs subject features in response to an input image 46.
 The image quality feature derivation model 55, the model feature derivation model 56, and the subject feature derivation model 57 are stored in the storage 20B. The RW control unit 42 reads these derivation models 55 to 57 from the storage 20B and outputs them to the feature derivation unit 41.
 As shown in FIG. 12 as an example, part of a subject discrimination model 60 is repurposed as the model feature derivation model 56 and the subject feature derivation model 57. The subject discrimination model 60 is a convolutional neural network (CNN) such as U-Net or ResNet (Residual Network), and comprises an encoder unit 61 and an output unit 62. The image 46 is input to the encoder unit 61. The encoder unit 61 derives a feature 63 by applying well-known filter-based convolution processing, pooling processing, skip-layer processing, and the like to the image 46, and outputs the feature 63 to the output unit 62. The feature 63 is none other than the model feature; that is, the encoder unit 61 is repurposed as the model feature derivation model 56.
 The output unit 62 has a decoder unit 64, a probability calculation unit 65, and a subject discrimination unit 66. The feature 63 is input to the decoder unit 64, which derives a final feature 63E by applying upsampling processing accompanied by convolution, merging processing, and the like to the feature 63. The probability calculation unit 65 generates a probability calculation result 67 from the final feature 63E using a well-known activation function such as the softmax or sigmoid function, and outputs the probability calculation result 67 to the subject discrimination unit 66. The probability calculation result 67 is the existence probability of each subject, such as a mountain, a river, the sea, or a lake, in the image 46, and is none other than the subject features. That is, the encoder unit 61, the decoder unit 64, and the probability calculation unit 65 are repurposed as the subject feature derivation model 57.
 The subject discrimination unit 66 outputs the subject with the highest existence probability in the probability calculation result 67 as a subject discrimination result 68. Although an example has been shown here in which part of a class discrimination model, namely the subject discrimination model 60, is repurposed as the model feature derivation model 56 and the subject feature derivation model 57, the present disclosure is not limited to this. The encoder unit of an autoencoder that outputs a restored image of the image 46 in response to the input image 46 may be repurposed as the model feature derivation model 56. Alternatively, the encoder unit, decoder unit, and probability calculation unit of a semantic segmentation model that identifies subjects in the image 46 on a pixel-by-pixel basis may be repurposed as the subject feature derivation model 57.
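The repurposing described above, in which the encoder output serves as the model feature and the softmax output serves as the subject features, can be sketched with a toy network. The layer shapes, weights, and subject list below are illustrative stand-ins, not the CNN of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((16, 8))   # stand-in for convolution/pooling layers
W_dec = rng.standard_normal((8, 4))    # stand-in for upsampling/merging layers
subjects = ["mountain", "river", "sea", "lake"]

def encoder(x):
    """Reused alone as the model feature derivation (feature 63)."""
    return np.maximum(x @ W_enc, 0.0)  # ReLU stands in for the encoder stack

def subject_probs(x):
    """Encoder + decoder + softmax, reused as the subject feature derivation."""
    logits = encoder(x) @ W_dec
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax -> existence probabilities

x = rng.standard_normal(16)            # stand-in for the input image 46
model_feature = encoder(x)             # feature 63 (the model feature)
probs = subject_probs(x)               # probability calculation result 67
best = subjects[int(np.argmax(probs))] # subject discrimination result 68
```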
 As shown in FIG. 13 as an example, the browser control unit 32 transmits an image storage request 70 to the image management server 12. The image storage request 70 includes a user ID, the image 46, and supplementary information 47.
 The reception unit 40 accepts the image storage request 70 and outputs it to the feature derivation unit 41 and the RW control unit 42. The feature derivation unit 41 derives the various features as shown in FIGS. 8 to 11 and outputs the resulting feature information 48 to the RW control unit 42. The RW control unit 42 stores the image 46, the supplementary information 47, and the feature information 48 in the storage area 45 of the image DB 36 corresponding to the user ID. In this way, the image management server 12 acquires a plurality of images 46 and acquires a plurality of types of features for the plurality of images 46.
 As shown in FIG. 14 as an example, the browser control unit 32 displays a display setting screen 75 for a list display of the images 46 on the display 24A in response to an instruction from the user U. The display setting screen 75 is provided with pull-down menus 76 for selecting the features that constitute the X axis of a two-dimensional display space 90 (see FIG. 17) used for the list display of the images 46, pull-down menus 77 for selecting the features that constitute the Y axis, and a setting button 78. In the initial display state, two pull-down menus 76 and two pull-down menus 77 are provided. Feature addition buttons 79 and 80 are provided below the pull-down menus 76 and 77. When the feature addition button 79 is pressed, a pull-down menu 76 is added; similarly, when the feature addition button 80 is pressed, a pull-down menu 77 is added.
 The features selectable from the pull-down menus 76 and 77 are the date and time feature, the color features, the brightness feature, the image quality feature, and the subject features. Among the plurality of pull-down menus 76, at least one pull-down menu 76 must always have some feature selected; similarly, among the plurality of pull-down menus 77, at least one pull-down menu 77 must always have some feature selected. The X axis and Y axis of the display space 90 therefore each indicate the magnitude of the value of at least one type of feature. When features are selected in a plurality of pull-down menus 76, the X axis of the display space 90 becomes an axis indicating the magnitude of a value that integrates two or more types of features; similarly, when features are selected in a plurality of pull-down menus 77, the Y axis becomes an axis indicating the magnitude of a value that integrates two or more types of features.
 As shown in FIG. 15 as an example, when the setting button 78 is pressed on the display setting screen 75, the browser control unit 32 generates an image distribution request 85. The image distribution request 85 includes the user ID and information corresponding to the features selected in the pull-down menus 76 and 77. When a feature selected in the pull-down menus 76 and 77 is a subject feature, the information corresponding to the feature is a keyword representing that subject feature. FIG. 15 illustrates a case in which the dog and child subject features are selected in the pull-down menus 76 and an image distribution request 85 including the keywords dog and child is generated. When the features selected in the pull-down menus 76 and 77 are all features other than subject features, period designation information specifying the images 46 of a preset period, such as the past year, is registered in the image distribution request 85 as the information corresponding to the features.
 As shown in FIG. 16 as an example, the browser control unit 32 transmits the image distribution request 85 to the image management server 12. The reception unit 40 accepts the image distribution request 85 and outputs it to the RW control unit 42. The RW control unit 42 searches the images 46 stored in the storage area 45 corresponding to the user ID of the image distribution request 85 for the images 46 that match the information corresponding to the features in the request. The RW control unit 42 outputs the retrieved images 46, together with their supplementary information 47 (not shown in FIG. 16) and feature information 48, to the distribution control unit 43. The distribution control unit 43 distributes the images 46 and so on from the RW control unit 42 to the user terminal 10 that issued the image distribution request 85, identifying that user terminal 10 from the user ID included in the image distribution request 85.
 FIG. 16 illustrates the case of the image distribution request 85 of FIG. 15, which includes the keywords dog and child. In this case, the RW control unit 42 searches for images 46 whose dog subject feature is greater than 0 and images 46 whose child subject feature is greater than 0.
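A minimal sketch of this keyword search, assuming the per-image subject features are stored as a mapping (the field names are hypothetical):

```python
# Keep images whose subject feature for any requested keyword is greater than 0.
images = [
    {"id": 1, "subject": {"dog": 0.64, "child": 0.22}},
    {"id": 2, "subject": {"dog": 0.0, "child": 0.0}},
    {"id": 3, "subject": {"dog": 0.0, "child": 0.91}},
]

def search(images, keywords):
    return [im for im in images
            if any(im["subject"].get(k, 0.0) > 0.0 for k in keywords)]

hits = search(images, ["dog", "child"])
print([im["id"] for im in hits])  # [1, 3]
```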
 As shown in FIG. 17 as an example, when features are selected in a plurality of pull-down menus 76 or 77 and two or more types of features therefore need to be integrated, the browser control unit 32 calculates an integrated feature IF for each image 46 distributed from the image management server 12 in response to the image distribution request 85. Specifically, the integrated feature IF is the arithmetic mean of the plurality of types of features to be integrated. FIG. 17 illustrates the case of the image distribution request 85 of FIGS. 15 and 16, which includes the keywords dog and child. In this case, the browser control unit 32 calculates the arithmetic mean of the dog subject feature and the child subject feature as the integrated feature IF. For example, for an image 462 whose dog subject feature is 0.64 and whose child subject feature is 0.22, the browser control unit 32 calculates
 IF = (0.64 + 0.22) / 2 = 0.43.
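The integrated feature IF described above is a plain arithmetic mean, which can be sketched as:

```python
def integrated_feature(values):
    """IF = arithmetic mean of the features to be integrated."""
    return sum(values) / len(values)

# dog subject feature 0.64, child subject feature 0.22 -> IF = 0.43
ifeat = integrated_feature([0.64, 0.22])
```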
 The browser control unit 32 arranges each image 46 at a position corresponding to its features in the two-dimensional display space 90, which has the X axis and the Y axis relating to the features. When images 46 overlap in the display space 90, the browser control unit 32 displays the image 46 with the newer shooting date and time in front. In the present embodiment, a position corresponding to the features in the display space 90 means a position corresponding to at least three of the feature types, of which there may be hundreds to thousands; it need not be a position corresponding to all of the hundreds to thousands of feature types.
 In the display space 90 of FIG. 17, the X axis indicates the value integrating the dog and child subject features, that is, the integrated feature IF, and the Y axis indicates the image quality feature. The X axis is an example of the "axis indicating the magnitude of a value integrating two or more types of features" according to the technology of the present disclosure. Not only the X axis but also the Y axis may be an axis indicating the magnitude of a value integrating two or more types of features.
 The browser control unit 32 places the image 461, whose integrated feature IF is 0.345 and whose image quality feature is 0.88, at the coordinates {0.345, 0.88} in the display space 90. Similarly, it places the image 462, whose integrated feature IF is 0.43 and whose image quality feature is 0.56, at the coordinates {0.43, 0.56}, and the image 463, whose integrated feature IF is 0.48 and whose image quality feature is 0.72, at the coordinates {0.48, 0.72}. In the display space 90 of FIG. 17, the higher an image 46 scores on the dog and child subject features and on the image quality feature, in other words the more clearly a dog and/or a child appears as the subject and the better the image quality, the closer the image 46 is placed to the upper right region. By arranging the images 46 at positions corresponding to their features in the display space 90 in this way, the browser control unit 32 displays the plurality of images 46 as a list.
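The placement and overlap rule can be sketched as follows, using the coordinates from FIG. 17; the draw-order convention (older images drawn first so that newer images end up in front) is an illustrative implementation choice:

```python
# Place each image at (integrated feature IF, image quality feature).
images = [
    {"name": "image461", "if": 0.345, "quality": 0.88, "shot": "2023-05-01"},
    {"name": "image462", "if": 0.43,  "quality": 0.56, "shot": "2023-07-12"},
    {"name": "image463", "if": 0.48,  "quality": 0.72, "shot": "2023-06-03"},
]

def layout(images):
    # Draw order: oldest first, so the newest shooting date appears in front.
    placed = sorted(images, key=lambda im: im["shot"])
    return [(im["name"], (im["if"], im["quality"])) for im in placed]

for name, pos in layout(images):
    print(name, pos)
```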
 As shown in FIG. 18 as an example, when there are two or more images 46 whose features are within a preset range, the browser control unit 32 displays a representative image 46RP of the two or more images 46 on top and displays the other images 46 stacked beneath the representative image 46RP. The representative image 46RP is, for example, the image 46 with the oldest shooting date and time among the two or more images 46 whose features are within the preset range. Here, two or more images 46 whose features are within a preset range are images 46 whose feature vectors VI are separated by less than a preset threshold. The threshold is set to a value that allows, for example, continuous shots taken at intervals of several milliseconds to several seconds to be judged as images 46 whose features are within the preset range. FIG. 18 shows an example in which three images 46 whose features are within the preset range are grouped under the representative image 46RP and displayed in a stack.
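The stacking rule just described, grouping images whose feature vectors VI lie within a preset distance threshold with the oldest image as the representative, can be sketched as greedy grouping. The two-dimensional vectors and the threshold value are illustrative; the real vectors VI have hundreds to thousands of dimensions:

```python
import numpy as np

def group_similar(images, threshold):
    """Group images whose VI distance to a group's representative is < threshold."""
    groups = []
    for im in sorted(images, key=lambda i: i["shot"]):   # oldest first
        for g in groups:
            rep = g[0]                                   # representative 46RP
            if np.linalg.norm(im["vi"] - rep["vi"]) < threshold:
                g.append(im)
                break
        else:
            groups.append([im])                          # start a new stack
    return groups

burst = [  # e.g. a continuous-shooting burst plus one unrelated image
    {"shot": "10:00:00.000", "vi": np.array([0.50, 0.88])},
    {"shot": "10:00:00.200", "vi": np.array([0.51, 0.87])},
    {"shot": "10:00:00.400", "vi": np.array([0.49, 0.88])},
    {"shot": "11:30:00.000", "vi": np.array([0.10, 0.20])},
]
groups = group_similar(burst, threshold=0.05)
print([len(g) for g in groups])  # [3, 1]: the burst stacks, the outlier stands alone
```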
 As shown in FIG. 19 as an example, the browser control unit 32 displays an image list display screen 95, including the display space 90 in which the plurality of images 46 are arranged, on the display 24A. The image list display screen 95 is provided with a setting change button 96. When the setting change button 96 is pressed, the browser control unit 32 transitions the display from the image list display screen 95 to the display setting screen 75.
 FIGS. 20 and 21 show an example of processing that changes the specifications of the display space 90 in response to an instruction to change the features constituting the axes of the display space 90. FIG. 20 shows the case in which the setting change button 96 is pressed on the image list display screen 95 including the display space 90 of FIG. 19 and, on the display setting screen 75, the feature constituting the X axis is changed to the date and time feature (the Y axis remains the image quality feature, unchanged). In this case, the browser control unit 32 rearranges the images 46 in a display space 90 whose X axis is the date and time feature and whose Y axis is the image quality feature, and displays an image list display screen 95 including that display space 90 on the display 24A. In this display space 90, the higher an image 46 scores on the date and time feature and on the image quality feature, in other words the newer its shooting date and time and the better its image quality, the closer it is placed to the upper right region.
 FIG. 21 shows the case in which the setting change button 96 is pressed on the image list display screen 95 including the display space 90 of FIG. 20 and, on the display setting screen 75, the feature constituting the X axis is changed to the alpine plant subject feature and the feature constituting the Y axis is changed to the red feature among the color features. In this case, the browser control unit 32 rearranges the images 46 in a display space 90 whose X axis is the alpine plant subject feature and whose Y axis is the red feature, and displays an image list display screen 95 including that display space 90 on the display 24A. In this display space 90, the higher an image 46 scores on the alpine plant subject feature and on the red feature, in other words the more clearly a red alpine plant appears as the subject, the closer it is placed to the upper right region.
 In this way, the browser control unit 32 accepts, as an instruction from the user U to change the specifications of the display space 90, an instruction to change the features constituting the axes of the display space 90, and changes the specifications of the display space 90 in accordance with the change instruction. Although FIGS. 20 and 21 show examples in which the features constituting the X axis and the Y axis are each changed to a single type, the features constituting the X axis and the Y axis may of course be changed to a plurality of types.
 As shown in FIG. 22 as an example, on the image list display screen 95 the user U can issue an instruction to select images 46 in the display space 90. FIG. 22 illustrates the case in which the display 24A is a touch panel display. In this case, the user U designates a selection area 100 by tracing the surface of the display 24A with a finger F, and the images 46 within the designated selection area 100 are selected. Hereinafter, an image 46 selected by the user U in this way is referred to as a selected image 46S.
 As shown in FIG. 23 as an example, when the selection area 100 is designated, a processing instruction menu 101 pops up on the image list display screen 95. The processing instruction menu 101 offers three options: album creation, batch editing, and batch deletion.
 When the album creation option of the processing instruction menu 101 is selected, the browser control unit 32 displays an album creation screen 105 on the display 24A, as shown in FIG. 24 as an example. The album creation screen 105 displays an album 106 created from the selected images 46S. On the album creation screen 105, it is possible to change the arrangement of the selected images 46S in the album 106, to add a new selected image 46S to the album 106, or conversely to delete a selected image 46S from the album 106.
 A save button 107 and a cancel button 108 are provided at the bottom of the album creation screen 105. When the save button 107 is pressed, the browser control unit 32 transmits a storage request for the album 106 to the image management server 12, and the RW control unit 42 of the image management server 12 stores the album 106 in the storage area 45 of the image DB 36. When the cancel button 108 is pressed, the browser control unit 32 returns the display to the image list display screen 95 with no selected image 46S selected.
 When the batch editing option of the processing instruction menu 101 is selected, the browser control unit 32 applies a designated display effect, such as monochrome, sepia, vivid, soft focus, light leak, or skin whitening, to the selected images 46S all at once. When the batch deletion option of the processing instruction menu 101 is selected, the browser control unit 32 deletes the selected images 46S all at once.
 In this way, the browser control unit 32 accepts an instruction from the user U to select images 46 in the display space 90, and performs the designated processing, such as album creation, batch editing, or batch deletion, only on the selected images 46S selected by the selection instruction.
 Next, the operation of the above configuration will be described with reference to the flowcharts shown in FIGS. 25, 26, and 27 as an example. As shown in FIG. 3, the CPU 22A of the user terminal 10 functions as the browser control unit 32 when the image AP 30 is started. As shown in FIG. 4, the CPU 22B of the image management server 12 functions as the reception unit 40, the feature derivation unit 41, the RW control unit 42, and the distribution control unit 43 when the operation program 35 is started.
 The user U photographs an image 46 using the camera function of the user terminal 10. As shown in FIG. 13, under the control of the browser control unit 32, an image storage request 70 including the image 46 and supplementary information 47 is transmitted to the image management server 12.
 In the image management server 12, the image storage request 70 is accepted by the reception unit 40 (YES in step ST100 of FIG. 25) and is output from the reception unit 40 to the feature derivation unit 41 and the RW control unit 42.
 The feature derivation unit 41 derives the various features as shown in FIGS. 8 to 11 (step ST110). The feature information 48 composed of the various features is output from the feature derivation unit 41 to the RW control unit 42. Under the control of the RW control unit 42, the image 46, the supplementary information 47, and the feature information 48 are then stored in the image DB 36 (step ST120).
 As shown in FIG. 15, on the display setting screen 75 the user U selects the features constituting the X axis and the Y axis of the display space 90 from the pull-down menus 76 and 77, and then presses the setting button 78. The browser control unit 32 thereby generates an image distribution request 85 and transmits it to the image management server 12.
 As shown in FIG. 16, in the image management server 12 the image distribution request 85 is accepted by the reception unit 40 (YES in step ST200 of FIG. 26) and is output from the reception unit 40 to the RW control unit 42. Under the control of the RW control unit 42, the images 46 matching the image distribution request 85 are read from the image DB 36 (step ST210). The images 46, together with the supplementary information 47 and the feature information 48, are output from the RW control unit 42 to the distribution control unit 43 and, under the control of the distribution control unit 43, are distributed to the user terminal 10 that issued the image distribution request 85 (step ST220).
In the user terminal 10, the browser control unit 32 receives the image 46 and associated data distributed from the image management server 12 (YES in step ST300 in FIG. 27). As shown in FIG. 17, under the control of the browser control unit 32, the image 46 is placed at a position corresponding to its feature amounts in the display space 90 set on the display setting screen 75 (step ST310). Then, under the control of the browser control unit 32, the image list display screen 95 including the display space 90 is displayed on the display 24A (step ST320). At this time, when two or more images 46 have feature amounts within a preset range, the representative image 46RP is displayed on top and the other images 46 are displayed stacked beneath the representative image 46RP, as shown in FIG. 18.
As described above, the RW control unit 42 of the CPU 22B of the image management server 12 acquires a plurality of images 46 and acquires a plurality of types of feature amounts for the plurality of images 46. The browser control unit 32 of the CPU 22A of the user terminal 10 displays the plurality of images 46 as a list by placing each image 46 at a position corresponding to its feature amounts in a two-dimensional display space 90 related to the feature amounts, the display space 90 having at least one axis that indicates the magnitude of a numerical value obtained by integrating two or more types of feature amounts. This makes it possible to display a richer variety of image lists than a display space whose X-axis and Y-axis each represent a single type of feature amount, such as the display space 90 shown in FIG. 20, in which the X-axis represents the date-and-time feature amount and the Y-axis represents the image quality feature amount, or the display space 90 shown in FIG. 21, in which the X-axis represents the subject feature amount for alpine plants and the Y-axis represents the red color feature amount.
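Although the publication gives no code, the axis value that integrates two or more feature amounts can be sketched as an arithmetic mean, which is how the first embodiment integrates them (as stated later in the description of FIG. 30); the function and variable names here are illustrative only:

```python
def axis_value(feature_amounts):
    """Integrate two or more feature amounts (each assumed normalized
    to the range 0..1) into one axis coordinate by taking their
    arithmetic mean, as in the first embodiment."""
    return sum(feature_amounts) / len(feature_amounts)

# Hypothetical example: an X-axis integrating the dog and child
# subject feature amounts of one image 46.
x = axis_value([0.64, 0.22])  # (0.64 + 0.22) / 2 = 0.43
```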
The browser control unit 32 accepts an instruction from the user U to change the specifications of the display space 90, and changes the specifications of the display space 90 in accordance with the change instruction. This makes it easy to change the specifications of the display space 90 to match the intent of the user U. As a result, the user U can find a desired image 46 more easily, and the time required to search for the desired image 46 can be shortened.
The change instruction is an instruction to change the feature amounts that constitute the axes of the display space 90. The user U can therefore freely change the axes of the display space 90 as the mood strikes. This can lead to unexpected discoveries, such as the display of an image 46 that the user U shot and then forgot about, which helps hold the user U's interest.
When there are two or more images 46 whose feature amounts are within a preset range, the browser control unit 32 displays the representative image 46RP of those images 46 on top and displays the other images 46 stacked beneath the representative image 46RP. This keeps the display of such images 46 uncluttered and makes the image list even easier to view.
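A minimal sketch of how such stacks might be formed. The publication only states that images whose feature amounts lie within a preset range are stacked; the grid-bucketing scheme, the cell size, and all names below are assumptions for illustration:

```python
from collections import defaultdict

def stack_images(images, bucket=0.05):
    """Group images whose (x, y) feature coordinates fall into the
    same bucket-sized grid cell; the first image in each cell serves
    as the representative displayed on top of the stack."""
    cells = defaultdict(list)
    for img in images:
        key = (round(img["x"] / bucket), round(img["y"] / bucket))
        cells[key].append(img)
    # Each value is [representative, *images_stacked_below].
    return list(cells.values())

stacks = stack_images([
    {"id": 1, "x": 0.50, "y": 0.50},
    {"id": 2, "x": 0.51, "y": 0.49},  # same cell as id 1 -> stacked
    {"id": 3, "x": 0.90, "y": 0.10},
])
```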
The browser control unit 32 accepts a selection instruction for images 46 from the user U on the display space 90, and performs the designated processing only on the selected images 46S chosen by the selection instruction. Consequently, as shown in FIG. 23 and elsewhere, the user U can, for example, select images 46 that clearly show a dog and/or a child as the subject and have good image quality to create an album 106, or select images 46 with relatively poor image quality and delete them in one operation, which greatly speeds up the user U's work of organizing and editing the images 46.
As one example, an image list display screen 110 shown in FIG. 28 may be adopted. The image list display screen 110 is a screen in which the functions of the display setting screen 75 are added to the image list display screen 95. Specifically, the image list display screen 110 is provided with a pull-down menu 111 for selecting the feature amounts that constitute the X-axis and a pull-down menu 112 for selecting the feature amounts that constitute the Y-axis. In the initial display state, two each of the pull-down menus 111 and 112 are provided. Feature amount addition buttons 113 and 114 are provided to the right of the pull-down menus 111 and 112. When the feature amount addition button 113 is pressed, another pull-down menu 111 is added; likewise, when the feature amount addition button 114 is pressed, another pull-down menu 112 is added. The pull-down menus 111 and 112 correspond to the pull-down menus 76 and 77 on the display setting screen 75, and the feature amount addition buttons 113 and 114 correspond to the feature amount addition buttons 79 and 80 on the display setting screen 75. With this image list display screen 110, there is no need to press the setting change button 96 to transition to the display setting screen 75, which saves the user U operation effort.
[Second Embodiment]
As shown in FIG. 29 as an example, the display setting screen 120 of the second embodiment is provided with weighting coefficient setting bars 121 and 122 in addition to the pull-down menus 76 and 77 and other elements of the first embodiment. The weighting coefficient setting bar 121 is a GUI for setting the weighting coefficient applied to the feature amount that constitutes the X-axis selected from the pull-down menu 76 to a value between 0.1 and 1 inclusive. The weighting coefficient setting bar 122 is a GUI for setting the weighting coefficient applied to the feature amount that constitutes the Y-axis selected from the pull-down menu 77 to a value between 0.1 and 1 inclusive. Like the pull-down menus 76 and 77, two each of the weighting coefficient setting bars 121 and 122 are provided in the initial display state, and the weighting coefficients are initially set to 1.
FIG. 29 illustrates a case in which the dog and child subject feature amounts are selected from the pull-down menu 76 and the image quality feature amount is selected from the pull-down menu 77. FIG. 29 also illustrates a case in which the weighting coefficient setting bars 121 set the weighting coefficient applied to the dog subject feature amount to 0.5 and the weighting coefficient applied to the child subject feature amount to 1, and the weighting coefficient setting bar 122 sets the weighting coefficient applied to the image quality feature amount to 1.
FIG. 30 shows the processing of the browser control unit 32 for the display settings shown in FIG. 29. In this case as well, as in the case shown in FIG. 17 of the first embodiment, the browser control unit 32 calculates the integrated feature amount IF for each image 46 distributed from the image management server 12. However, whereas in the first embodiment the browser control unit 32 calculated the arithmetic mean of the plural types of feature amounts to be integrated as the integrated feature amount IF, in this case it calculates their weighted average as the integrated feature amount IF, using the weighting coefficients set with the weighting coefficient setting bars 121 and 122. For example, for an image 462 whose dog subject feature amount is 0.64 and whose child subject feature amount is 0.22, the browser control unit 32 uses the weighting coefficient of 0.5 for the dog subject feature amount and the weighting coefficient of 1 for the child subject feature amount set with the weighting coefficient setting bars 121 to calculate
IF = (0.64 × 0.5 + 0.22 × 1) / 2 = 0.27.
Thereafter, as in the first embodiment, the browser control unit 32 places each image 46 at a position corresponding to its feature amounts in the display space 90.
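A sketch of this calculation; the function name is hypothetical, and note that, following the worked example above, the weighted sum is divided by the number of feature amounts:

```python
def integrated_feature(features, weights):
    """Integrated feature amount IF: the weighted sum of the feature
    amounts divided by the number of feature amounts, as in the
    example IF = (0.64 * 0.5 + 0.22 * 1) / 2 = 0.27."""
    assert len(features) == len(weights)
    return sum(f * w for f, w in zip(features, weights)) / len(features)

IF = integrated_feature([0.64, 0.22], [0.5, 1.0])  # 0.27
```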
Thus, in the second embodiment, the instruction from the user U to change the specifications of the display space 90 is an instruction to change the weighting coefficients that determine the proportions in which two or more types of feature amounts are integrated. This makes it possible to display an even richer variety of image lists. For example, it can satisfy detailed requests from the user U, such as wanting to create an album 106 that gives top priority to images 46 with good image quality while still giving some weight to the shooting date and time, or wanting to create an album 106 centered on images 46 of a child while also including some images 46 of a dog.
[Third Embodiment]
As shown in FIG. 31 as an example, the image list display screen 125 of the third embodiment includes an area 126 in which images 46 are listed in a matrix, and the display space 90. An image 46 in the area 126 can be moved to an arbitrary position in the display space 90 by a drag-and-drop operation with the finger F of the user U. Hereinafter, an image 46 moved from the area 126 to the display space 90 by a drag-and-drop operation is referred to as a reference image 46R.
Here, the display space 90 of the third embodiment is a space obtained by reducing feature amounts having more than two dimensions to two dimensions using a well-known dimensionality reduction technique such as t-SNE (t-distributed Stochastic Neighbor Embedding). For this reason, the feature amounts that constitute the X-axis and Y-axis of the display space 90 of the third embodiment are not fixed to any specific type of feature amount.
As shown in FIGS. 32 and 33 as examples, when a drag-and-drop operation is performed on a reference image 46R, the browser control unit 32 places the other images 46, centered on the reference image 46R, at positions corresponding to the distance between each image's feature amount vector VI and that of the reference image 46R. Therefore, an image 46 whose feature amount vector VI is similar to that of the reference image 46R, and hence close in distance, is placed near the reference image 46R. Conversely, an image 46 whose feature amount vector VI diverges from that of the reference image 46R, and hence is far in distance, is placed far from the reference image 46R. Simply put, an image whose feature amount vector VI is similar to that of the reference image 46R is an image similar to the reference image 46R, and an image whose feature amount vector VI diverges from it is an image dissimilar to the reference image 46R.
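The publication does not fix the distance metric. Assuming Euclidean distance between feature amount vectors VI (an illustrative choice, with hypothetical names and data), the ordering of the other images around the reference image can be sketched as:

```python
import math

def distance(vi_a, vi_b):
    """Euclidean distance between two feature amount vectors VI."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vi_a, vi_b)))

reference = [0.8, 0.1, 0.3]           # feature vector of reference image 46R
others = {"img1": [0.7, 0.2, 0.3],    # similar  -> placed near 46R
          "img2": [0.1, 0.9, 0.9]}    # dissimilar -> placed far from 46R
ranked = sorted(others, key=lambda k: distance(reference, others[k]))
```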
The browser control unit 32 attaches a colored mark, for example a red circle 128, to each reference image 46R so that the reference image 46R is displayed distinctly from the other images 46. FIG. 32 shows a case with one reference image 46R, and FIG. 33 shows a case with two reference images 46R.
Thus, in the third embodiment, the other images 46 are placed, centered on a reference image 46R positioned at an arbitrary location in the display space 90, at positions corresponding to the distance from the feature amount vector VI of the reference image 46R. This makes it possible to display an even richer variety of image lists. Because images similar and dissimilar to a reference image 46R of interest to the user U can be identified at a glance, the user U's work of organizing and editing the images 46 can be greatly accelerated.
[Fourth Embodiment]
In the fourth embodiment, as shown in expression (1) below, the feature amount vector VI of an image 46 is obtained by multiplying each feature amount Zi (i = 1, 2, 3, ..., n-1, n, where n is the number of feature amounts, that is, the number of dimensions of the feature amount vector VI) by a weighting coefficient Wi.
VI = {Wi × Zi} = {W1 × Z1, W2 × Z2, W3 × Z3, ..., Wn-1 × Zn-1, Wn × Zn} ... (1)
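Expression (1) is an element-wise product of weighting coefficients and feature amounts; a minimal sketch with hypothetical names:

```python
def feature_vector(z, w):
    """Feature amount vector VI per expression (1): each feature
    amount Zi scaled by its weighting coefficient Wi."""
    assert len(z) == len(w)
    return [wi * zi for wi, zi in zip(w, z)]

# Three feature amounts with weights applied element-wise.
VI = feature_vector([0.64, 0.22, 0.90], [0.5, 1.0, 1.0])
```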
As shown in FIG. 34 as an example, the browser control unit 32 displays a similar image selection screen 130 on the display 24A in response to an instruction from the user U. The similar image selection screen 130 is a screen on which the user U selects images 46 that the user U considers similar, and it lists the images 46 in a matrix.
From the listed images 46, the user U selects two or more images that the user U considers similar, as indicated by hatching, and then presses the OK button 131. When the OK button 131 is pressed, the browser control unit 32 transmits a weighting coefficient correction request including the selected images 46 and related data to the image management server 12. Hereinafter, an image 46 selected by the user U as similar is referred to as a similar image 46SM.
As shown in FIG. 35 as an example, the CPU 22B of the image management server 12 of the fourth embodiment functions as a weighting coefficient correction unit 135 in addition to the processing units 40 to 43 of the first embodiment. The feature amount information 48SM of the similar images 46SM is input to the weighting coefficient correction unit 135. The weighting coefficient correction unit 135 calculates corrected weighting coefficients WiC by solving an optimization problem that finds weighting coefficients Wi for which the distances between the feature amount vectors VI of the plurality of similar images 46SM fall below a preset first threshold. The first threshold is set to a value close to 0, for example 0.1. In other words, the weighting coefficient correction unit 135 changes the weighting coefficients Wi in a direction that brings the feature amounts of the similar images 46SM into agreement, yielding the corrected weighting coefficients WiC.
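The publication does not specify how the optimization problem is solved. The following is only an illustrative heuristic under that caveat, with hypothetical names: it shrinks the weight on whichever dimension contributes most to the disagreement between two similar images 46SM until their weighted distance falls below the first threshold:

```python
import math

def correct_weights_similar(w, similar_vectors, first_threshold=0.1):
    """Heuristic sketch: repeatedly halve the weight on the dimension
    where the two similar images 46SM disagree most, until the weighted
    distance between their feature vectors VI drops below the threshold."""
    z1, z2 = similar_vectors
    wc = list(w)

    def dist(wc):
        return math.sqrt(sum((wi * a - wi * b) ** 2
                             for wi, a, b in zip(wc, z1, z2)))

    while dist(wc) >= first_threshold:
        worst = max(range(len(wc)), key=lambda i: abs(wc[i] * (z1[i] - z2[i])))
        wc[worst] *= 0.5
        if max(wc) < 1e-6:  # give up rather than loop forever
            break
    return wc

WiC = correct_weights_similar([1.0, 1.0], [[1.0, 0.5], [0.0, 0.5]])
# -> [0.0625, 1.0]: the disagreeing first dimension is de-weighted.
```

The dissimilar-image case of FIG. 37 would work in the opposite direction, adjusting the weights until the distance reaches at least the second threshold.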
As shown in FIG. 36 as an example, the browser control unit 32 also displays a dissimilar image selection screen 140 on the display 24A in response to an instruction from the user U. Conversely to the similar image selection screen 130, the dissimilar image selection screen 140 is a screen on which the user U selects images 46 that the user U considers entirely different. Like the similar image selection screen 130, the dissimilar image selection screen 140 lists the images 46 in a matrix.
From the listed images 46, the user U selects two or more images that the user U considers entirely different, as indicated by hatching, and then presses the OK button 141. When the OK button 141 is pressed, the browser control unit 32 transmits a weighting coefficient correction request including the selected images 46 and related data to the image management server 12. Hereinafter, an image 46 selected by the user U as entirely different is referred to as a dissimilar image 46DSM.
As shown in FIG. 37 as an example, in this case the feature amount information 48DSM of the dissimilar images 46DSM is input to the weighting coefficient correction unit 135. The weighting coefficient correction unit 135 calculates corrected weighting coefficients WiC by solving an optimization problem that finds weighting coefficients Wi for which the distances between the feature amount vectors VI of the plurality of dissimilar images 46DSM are equal to or greater than a preset second threshold. The second threshold is set to a value at which the placement positions of the dissimilar images 46DSM are as far apart as possible, for example the origin at the lower left and the position of maximum feature amounts at the upper right of the display space 90 of the third embodiment. In other words, the weighting coefficient correction unit 135 changes the weighting coefficients Wi in a direction that drives the feature amounts of the dissimilar images 46DSM apart, yielding the corrected weighting coefficients WiC.
Thus, in the fourth embodiment, the browser control unit 32 accepts a selection instruction from the user U for at least one of similar images 46SM and dissimilar images 46DSM. In accordance with the selection instruction, the weighting coefficient correction unit 135 changes the weighting coefficients applied to the plural types of feature amounts in a direction that brings the feature amounts of similar images 46SM into agreement and in a direction that drives the feature amounts of dissimilar images 46DSM apart. As a result, in the display space 90, images 46 that the user U considers similar are placed relatively close together, and images 46 that the user U considers entirely different are placed relatively far apart. A list display of the images 46 that matches the user U's sense can therefore be achieved.
Note that the user U may instead select the similar images 46SM or the dissimilar images 46DSM on the display space 90. In this case, possible selection methods include enclosing the similar images 46SM in a frame, connecting the similar images 46SM with a line, or connecting the dissimilar images 46DSM with a line. Alternatively, the images 46 may be made movable within the display space 90 by a drag-and-drop operation or the like, and an operation by which the user U repositions images 46 considered similar close together may be regarded as a selection operation for similar images 46SM. Conversely, an operation by which the user U repositions images 46 considered entirely different far apart may be regarded as a selection operation for dissimilar images 46DSM.
In each of the above embodiments, the images 46 are listed in the two-dimensional display space 90, but the present disclosure is not limited to this. As shown in FIG. 38 as an example, the images 46 may be listed in a three-dimensional display space 150. FIG. 38 illustrates a display space 150 in which the X-axis represents the date-and-time feature amount, the Y-axis represents the mountain and forest subject feature amounts, and the Z-axis represents image quality.
The display size of an image 46 may be changed in accordance with its position in the display space 90. For example, in the display space 90 shown in FIGS. 19 to 21, the closer an image 46 is placed to the upper right area, the larger its display size is made. In this way, images 46 that are likely to be adopted for the album 106 can be shown to the user U in greater detail.
The images 46 are not limited to images shot with the camera function of the illustrated user terminal 10. They may be images that the user U downloaded from web pages, images that the user U received from family, friends, and the like, or images obtained by shooting, with the camera function of the user terminal 10, instant film printed out from an instant camera.
All or part of the functions of the browser control unit 32 of the user terminal 10 may be assigned to the image management server 12. Specifically, the image management server 12 generates the various screens such as the display setting screen 75 and the image list display screen 95, and distributes them to the user terminal 10 in the form of screen data for web distribution created in a markup language such as XML (Extensible Markup Language). In this case, the browser control unit 32 of the user terminal 10 reproduces the various screens to be displayed on the web browser based on the screen data and displays them on the display 24A. Instead of XML, another data description language such as JSON (JavaScript (registered trademark) Object Notation) may be used.
Various modifications are possible in the hardware configuration of the computer that constitutes the image management server 12. For example, the image management server 12 can be configured from a plurality of computers separated as hardware for the purpose of improving processing capacity and reliability. For example, the functions of the reception unit 40 and the feature amount deriving unit 41 and the functions of the RW control unit 42 and the distribution control unit 43 may be distributed between two computers, in which case the image management server 12 is configured from the two computers. In addition, all or part of the functions of the image management server 12 may be assigned to the user terminal 10.
In this way, the hardware configurations of the computers of the user terminal 10 and the image management server 12 can be changed as appropriate in accordance with required performance such as processing capacity, safety, and reliability. Furthermore, not only the hardware but also application programs such as the image AP 30 and the operation program 35 can of course be duplicated or stored distributed across a plurality of storages for the purpose of ensuring safety and reliability.
In each of the above embodiments, the following various processors can be used as the hardware structure of the processing units that execute the various processes, such as the browser control unit 32, the reception unit 40, the feature amount deriving unit 41, the RW control unit 42, the distribution control unit 43, and the weighting coefficient correction unit 135. The various processors include, in addition to the CPUs 22A and 22B, which are general-purpose processors that execute software (the image AP 30 and the operation program 35) to function as the various processing units, a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture, and/or a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing.
One processing unit may be configured from one of these various processors, or from a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs and/or a combination of a CPU and an FPGA). A plurality of processing units may also be configured from a single processor.
As examples of configuring a plurality of processing units from a single processor, first, there is a form in which one processor is configured from a combination of one or more CPUs and software, as typified by computers such as clients and servers, and this processor functions as the plurality of processing units. Second, there is a form that uses a processor that realizes the functions of an entire system including the plurality of processing units with a single IC (Integrated Circuit) chip, as typified by a system on chip (SoC). In this way, the various processing units are configured using one or more of the various processors described above as their hardware structure.
Furthermore, more specifically, an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined can be used as the hardware structure of these various processors.
From the above description, the technology described in the following supplementary notes can be understood.
 [Supplementary Note 1]
 A display control device comprising a processor,
 wherein the processor:
 acquires a plurality of images;
 acquires a plurality of types of feature amounts for the plurality of images; and
 displays the plurality of images as a list by arranging the images at positions according to the feature amounts in a two-dimensional or three-dimensional display space related to the feature amounts, the display space having at least one axis that indicates the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.
 [Supplementary Note 2]
 The display control device according to Supplementary Note 1, wherein the processor:
 receives an instruction from a user to change a specification of the display space; and
 changes the specification of the display space in accordance with the change instruction.
 [Supplementary Note 3]
 The display control device according to Supplementary Note 2, wherein the change instruction is an instruction to change a weighting coefficient that determines a ratio at which two or more types of the feature amounts are integrated.
 [Supplementary Note 4]
 The display control device according to Supplementary Note 2 or 3, wherein the change instruction is an instruction to change the feature amounts constituting the axis.
 [Supplementary Note 5]
 The display control device according to any one of Supplementary Notes 1 to 4, wherein, in a case where there are two or more images whose feature amounts fall within a preset range, the processor displays a representative image of the two or more images on top and displays the other images in a stacked manner below the representative image.
 [Supplementary Note 6]
 The display control device according to any one of Supplementary Notes 1 to 5, wherein the processor:
 receives an instruction from a user selecting an image in the display space; and
 performs designated processing only on the image selected by the selection instruction.
 [Supplementary Note 7]
 The display control device according to any one of Supplementary Notes 1 to 6, wherein the processor:
 receives an instruction from a user selecting at least one of similar images and dissimilar images; and
 in accordance with the selection instruction, changes weighting coefficients applied to the plurality of types of feature amounts in a direction in which the feature amounts of the similar images approach each other and in a direction in which the feature amounts of the dissimilar images diverge from each other.
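The list display of Supplementary Note 1, together with the weighting coefficient of Supplementary Note 3 that sets the ratio at which two or more feature types are merged into one axis value, can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the feature names (brightness, sharpness, saturation), the weighted-average rule, and the use of a single feature for the y axis are all assumptions.

```python
# Sketch of Supplementary Notes 1 and 3: each image has several feature
# values; two of them are integrated into a single axis value using
# weighting coefficients, and the result becomes a coordinate in a
# two-dimensional display space. All names and formulas are illustrative.

def integrated_axis(features, weights):
    """Integrate two or more feature values into one axis value
    as a weighted average (the ratio set by the weights)."""
    assert len(features) == len(weights)
    total = sum(weights)
    return sum(f * w for f, w in zip(features, weights)) / total

# Per-image feature amounts (hypothetical values in [0, 1]).
images = {
    "img_a": {"brightness": 0.8, "sharpness": 0.6, "saturation": 0.3},
    "img_b": {"brightness": 0.2, "sharpness": 0.9, "saturation": 0.7},
}

def place(image_features, x_keys, x_weights, y_key):
    """Return (x, y) display coordinates for one image.

    The x axis integrates two feature types; the y axis here is a
    single feature, although it could be an integrated value as well.
    """
    x = integrated_axis([image_features[k] for k in x_keys], x_weights)
    y = image_features[y_key]
    return (x, y)

layout = {name: place(f, ["brightness", "sharpness"], [0.5, 0.5], "saturation")
          for name, f in images.items()}
```

With equal weights of 0.5, an image with brightness 0.8 and sharpness 0.6 lands at x = 0.7; raising the brightness weight shifts every x coordinate toward the brightness value, which is the kind of specification change contemplated in Supplementary Notes 2 and 3.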
 The technology of the present disclosure may also combine the various embodiments and/or various modifications described above as appropriate. It goes without saying that the technology is not limited to the above embodiments, and various configurations can be adopted without departing from the gist. Furthermore, the technology of the present disclosure extends not only to the program but also to a storage medium that non-transitorily stores the program.
 The descriptions and illustrations shown above are a detailed explanation of the portions related to the technology of the present disclosure, and are merely an example of that technology. For example, the above description of configurations, functions, operations, and effects describes one example of the configurations, functions, operations, and effects of the portions related to the technology of the present disclosure. Therefore, needless to say, unnecessary portions may be deleted, and new elements may be added or substituted, in the descriptions and illustrations shown above without departing from the gist of the technology of the present disclosure. In addition, to avoid confusion and to facilitate understanding of the portions related to the technology of the present disclosure, explanations of common technical knowledge and the like that require no particular description to enable implementation of the technology are omitted from the descriptions and illustrations shown above.
 In this specification, "A and/or B" is synonymous with "at least one of A and B." That is, "A and/or B" means that it may be only A, only B, or a combination of A and B. The same concept as "A and/or B" also applies in this specification when three or more items are connected by "and/or."
 All documents, patent applications, and technical standards described in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, and technical standard were specifically and individually indicated to be incorporated by reference.
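The stacked display of Supplementary Note 5 can be sketched as a simple grouping pass over the display positions. The distance threshold, the use of display coordinates as the "preset range," and the rule that the first image encountered becomes the representative are assumptions made for illustration; the patent fixes none of these.

```python
# Minimal sketch of Supplementary Note 5: images whose feature values
# fall within a preset range of each other are shown as one stack, with
# a representative image on top. The threshold and the choice of
# representative are illustrative assumptions.

def group_stacks(positions, threshold):
    """Group images whose display positions lie within `threshold`.

    positions: {image_name: (x, y)}
    Returns a list of stacks; stack[0] is the representative (topmost)
    image, and the remaining entries are layered beneath it.
    """
    stacks = []
    for name, (x, y) in positions.items():
        for stack in stacks:
            rx, ry = positions[stack[0]]
            if abs(x - rx) <= threshold and abs(y - ry) <= threshold:
                stack.append(name)   # layered under the representative
                break
        else:
            stacks.append([name])    # first of a new stack: representative
    return stacks

positions = {"a": (0.10, 0.20), "b": (0.12, 0.21), "c": (0.90, 0.80)}
stacks = group_stacks(positions, threshold=0.05)
# "a" and "b" form one stack with "a" on top; "c" stands alone.
```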

Claims (9)

  1.  A display control device comprising a processor,
      wherein the processor:
      acquires a plurality of images;
      acquires a plurality of types of feature amounts for the plurality of images; and
      displays the plurality of images as a list by arranging the images at positions according to the feature amounts in a two-dimensional or three-dimensional display space related to the feature amounts, the display space having at least one axis that indicates the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.
  2.  The display control device according to claim 1, wherein the processor:
      receives an instruction from a user to change a specification of the display space; and
      changes the specification of the display space in accordance with the change instruction.
  3.  The display control device according to claim 2, wherein the change instruction is an instruction to change a weighting coefficient that determines a ratio at which two or more types of the feature amounts are integrated.
  4.  The display control device according to claim 2, wherein the change instruction is an instruction to change the feature amounts constituting the axis.
  5.  The display control device according to claim 1, wherein, in a case where there are two or more images whose feature amounts fall within a preset range, the processor displays a representative image of the two or more images on top and displays the other images in a stacked manner below the representative image.
  6.  The display control device according to claim 1, wherein the processor:
      receives an instruction from a user selecting an image in the display space; and
      performs designated processing only on the image selected by the selection instruction.
  7.  The display control device according to claim 1, wherein the processor:
      receives an instruction from a user selecting at least one of similar images and dissimilar images; and
      in accordance with the selection instruction, changes weighting coefficients applied to the plurality of types of feature amounts in a direction in which the feature amounts of the similar images approach each other and in a direction in which the feature amounts of the dissimilar images diverge from each other.
  8.  A method for operating a display control device, the method comprising:
      acquiring a plurality of images;
      acquiring a plurality of types of feature amounts for the plurality of images; and
      displaying the plurality of images as a list by arranging the images at positions according to the feature amounts in a two-dimensional or three-dimensional display space related to the feature amounts, the display space having at least one axis that indicates the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.
  9.  An operating program for a display control device, the program causing a computer to execute processing comprising:
      acquiring a plurality of images;
      acquiring a plurality of types of feature amounts for the plurality of images; and
      displaying the plurality of images as a list by arranging the images at positions according to the feature amounts in a two-dimensional or three-dimensional display space related to the feature amounts, the display space having at least one axis that indicates the magnitude of a numerical value obtained by integrating two or more types of the feature amounts.
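One possible reading of claim 7 is an update rule that raises the weights of feature types on which a user-marked similar pair agrees (pulling their integrated values together) and raises the weights of feature types on which a dissimilar pair differs (pushing the values apart). The sketch below follows that reading; the update formula, learning rate, and normalization are assumptions, since the claim specifies only the direction of change.

```python
# Illustrative sketch of claim 7. Feature values are assumed to be
# normalized to [0, 1], so the per-feature gap between two images is
# also in [0, 1]. The linear update and the learning rate `lr` are
# assumptions, not taken from the patent.

def adjust_weights(weights, feat_a, feat_b, similar, lr=0.1):
    """Return updated per-feature weights for one user judgment."""
    new_weights = {}
    for key, w in weights.items():
        gap = abs(feat_a[key] - feat_b[key])
        if similar:
            # Small gap: the pair agrees here, so emphasize this feature,
            # moving the integrated feature values closer together.
            w += lr * (1.0 - gap)
        else:
            # Large gap: the pair differs here, so emphasize this feature,
            # moving the integrated feature values further apart.
            w += lr * gap
        new_weights[key] = max(w, 0.0)
    total = sum(new_weights.values())
    return {k: v / total for k, v in new_weights.items()}  # renormalize

w0 = {"brightness": 0.5, "sharpness": 0.5}
a = {"brightness": 0.90, "sharpness": 0.2}
b = {"brightness": 0.88, "sharpness": 0.7}
w1 = adjust_weights(w0, a, b, similar=True)
# brightness (where a and b agree) gains weight relative to sharpness.
```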
PCT/JP2023/032375 2022-09-20 2023-09-05 Display control device, method for operating display control device, and program for operating display control device WO2024062912A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-149485 2022-09-20
JP2022149485 2022-09-20

Publications (1)

Publication Number Publication Date
WO2024062912A1 true WO2024062912A1 (en) 2024-03-28

Family

ID=90454219

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/032375 WO2024062912A1 (en) 2022-09-20 2023-09-05 Display control device, method for operating display control device, and program for operating display control device

Country Status (1)

Country Link
WO (1) WO2024062912A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001325294A (en) * 2000-05-17 2001-11-22 Olympus Optical Co Ltd Method and device for retrieving similar image
JP2005196298A (en) * 2003-12-26 2005-07-21 Canon Software Inc Information processor, image data display control method, program and recording medium
JP2006004062A (en) * 2004-06-16 2006-01-05 Canon Inc Image database creation device and image search method
JP2013114597A (en) * 2011-11-30 2013-06-10 Canon Marketing Japan Inc Information processing device, control method thereof, and program
JP2013200591A (en) * 2012-03-23 2013-10-03 Fujifilm Corp Database search device, method, and program


Similar Documents

Publication Publication Date Title
EP3823261B1 (en) Method and system for providing recommendation information related to photography
CN108369633B (en) Visual representation of photo album
US9491366B2 (en) Electronic device and image composition method thereof
US11256904B2 (en) Image candidate determination apparatus, image candidate determination method, program for controlling image candidate determination apparatus, and recording medium storing program
US20110219329A1 (en) Parameter setting superimposed upon an image
US11281940B2 (en) Image file generating device and image file generating method
US9690980B2 (en) Automatic curation of digital images
US10755487B1 (en) Techniques for using perception profiles with augmented reality systems
US20150169944A1 (en) Image evaluation apparatus, image evaluation method, and non-transitory computer readable medium
JP2022526053A (en) Techniques for capturing and editing dynamic depth images
KR20230021144A (en) Machine learning-based image compression settings reflecting user preferences
US11854178B2 (en) Photography session assistant
CN114926351A (en) Image processing method, electronic device, and computer storage medium
WO2024062912A1 (en) Display control device, method for operating display control device, and program for operating display control device
WO2023149135A1 (en) Image processing device, image processing method, and program
JP2014085814A (en) Information processing device, control method therefor, and program
WO2019065582A1 (en) Image data discrimination system, image data discrimination program, image data discrimination method and imaging system
US9736380B2 (en) Display control apparatus, control method, and storage medium
WO2022104181A1 (en) Systems, apparatus, and methods for improving composition of images
US10552888B1 (en) System for determining resources from image data
CN110235174B (en) Image evaluation device, image evaluation method, and recording medium
JP2006172090A (en) Representative image selecting device and method and program
JP2006171942A (en) Significance setting device and method and program
JP2020194472A (en) Server, display method, creation method, and program
KR20150096552A (en) System and method for providing online photo gallery service by using photo album or photo frame

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23868036

Country of ref document: EP

Kind code of ref document: A1