WO2021235296A1 - Mobile body movement prediction system and mobile body movement prediction method - Google Patents


Info

Publication number
WO2021235296A1
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
moving body
state
checkpoint
flow line
Prior art date
Application number
PCT/JP2021/018093
Other languages
French (fr)
Japanese (ja)
Inventor
Yu Kitano (北野 佑)
Original Assignee
Hitachi Information & Telecommunication Engineering, Ltd. (株式会社日立情報通信エンジニアリング)
Priority date
Filing date
Publication date
Application filed by Hitachi Information & Telecommunication Engineering, Ltd.
Publication of WO2021235296A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising

Definitions

  • The present invention relates to a technique for predicting the movement of a moving body such as a person.
  • As prior art for predicting human behavior, the technique shown in Patent Document 1 is known. As prior art for predicting flow lines, the technique shown in Patent Document 2 is known.
  • Patent Document 1 discloses a technique for estimating a movement frequency model that outputs the movement frequency between areas from area attributes and inter-area attributes, in order to model the movement of a crowd within its activity area.
  • Patent Document 2 discloses a technique for generating a behavior model by inverse reinforcement learning from flow line data and product placement information, and predicting the flow of people after the product placement is changed.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2006-221329
  • Patent Document 2: International Publication No. 2018/131214
  • Because the measured flow line data does not include the destination information of each person, the flow line prediction model must be learned after dividing the flow line data for each OD (Origin-Destination) pair.
  • Because the measured flow line data includes retention states in addition to movement, there is a problem that OD division alone lowers the accuracy of the flow line simulation.
  • Simulation of retention, such as waiting in a queue or during dialogue, is therefore also required.
  • To solve the above problems, the present invention is a moving body movement prediction system including a processor and a storage device, wherein the storage device holds flow line information including time-series position information of the moving body, map information including the positions of a plurality of checkpoints at which the moving body can pass or stay within the space in which the moving body can move, and state information indicating the state of the moving body, and the processor learns a movement model that predicts the movement destination of the moving body based on the flow line information, the map information, and the state of the moving body, and generates a virtual flow line of the moving body based on initial conditions of the moving body and the movement model.
  • According to one aspect of the present invention, the flow line simulation can be made highly accurate. Thereby, the influence of a layout design on flow lines can be evaluated in advance. Problems, configurations, and effects other than those described above will become clear from the description of the following embodiments.
  • The present invention relates to a system for predicting the destination of a person. The target of prediction by this system is not limited to a person and may be any moving body, such as a ship or a car; the following description focuses on people.
  • The present invention learns a flow line prediction model from flow line data measured using a sensor, such as a laser radar or a camera, in a facility such as an airport, and performs a flow line simulation using the learned model.
  • The flow line prediction model predicts the position of a moving body, such as a person, after a predetermined time (for example, after one second).
  • The measured flow line data includes data in various states, such as a moving state and a retention state.
  • The moving state is a state in which the moving body is moving, such as a person walking.
  • The retention state is a state in which the moving body is staying in a certain place. Both are examples of movement-related states.
  • The state referred to here includes information on transitions between checkpoints, in addition to the moving state and the retention state.
  • A checkpoint is, for example, a retention point or a passing point of a moving body in the target area.
  • The information about each checkpoint may include a pair of information indicating its position and information indicating its type.
  • FIG. 1 is a block diagram showing the basic configuration of the human behavior prediction system according to the first embodiment of the present invention.
  • The human behavior prediction system 10 of this embodiment has a function of receiving measurement data from the measurement system 11 and map data from the user 12, dividing the flow line data extracted from the measurement data by state, and then learning the flow line prediction model, and a function of performing a flow line simulation with the learned flow line prediction model after receiving initial parameters for the simulation from the user 12.
  • The human behavior prediction system 10 includes a flow line data extraction unit 101, a flow line database (DB) 105, a map data input unit 103, a map DB 108, a state determination unit 102, a state DB 106, a behavior model learning unit 110, a model DB 107, an initial human flow generation unit 104, a virtual flow line generation unit 111, and a simulation DB 109.
  • The flow line data extraction unit 101 extracts flow line data from the data measured by the measurement system 11 and stores it in the flow line DB 105.
  • The data measured by the measurement system 11 may be moving image data of the target area or laser radar data.
  • When extracting flow line data from moving image data, a person may be recognized by a known image processing technique and the person's coordinates extracted.
  • When extracting flow line data from laser radar data, a moving body whose distance from the sensor changes may be extracted as a person using a known technique and converted into coordinate information.
  • The map data input unit 103 has a function of receiving map information about a target area, such as a facility, from the user 12 and storing it in the map DB 108.
  • The map data input unit 103 includes a wall layout input unit 1031, a checkpoint input unit 1032, and a matrix layout input unit 1033.
  • The wall layout input unit 1031 receives layout information about objects that people cannot pass through, such as walls in the target area.
  • The checkpoint input unit 1032 receives layout information about the checkpoints of the target area.
  • The matrix layout input unit 1033 receives, among the layout information about checkpoints, additional layout information about matrices (queues).
  • The state determination unit 102 has a function of receiving the flow line data stored in the flow line DB 105 and the checkpoint data stored in the map DB 108, determining the state of the flow line data, and storing the determination result in the state DB 106.
  • The state determination unit 102 includes a matrix determination unit 1021 that determines, from the flow line data, the state of being lined up in a matrix, and an OD analysis unit 1022 that determines, from the flow line data, transitions between checkpoints. The specific processing of each unit will be described later.
  • The behavior model learning unit 110 receives the flow line data stored in the flow line DB 105, the matrix state data and checkpoint transition data stored in the state DB 106, and the layout data stored in the map DB 108, learns the flow line prediction model, and stores the learned flow line prediction model in the model DB 107.
  • The behavior model learning unit 110 includes a flow line data division unit 1101, a feature amount calculation unit 1102, and a model learning unit 1103. The specific processing of each unit will be described later.
  • After receiving the initial parameters for the flow line simulation from the user 12, the initial human flow generation unit 104 generates the initial human flow and stores it in the simulation DB 109. The specific processing will be described later.
  • The virtual flow line generation unit 111 receives the current flow line data from the simulation DB 109, the layout data from the map DB 108, and the learned flow line prediction model from the model DB 107, predicts the flow line data one step later, and stores it in the simulation DB 109. The specific processing will be described later.
  • The data stored in the flow line DB 105, the state DB 106, the model DB 107, the map DB 108, and the simulation DB 109 will be described later.
  • FIG. 2 is a sequence diagram showing the processing performed at the time of model learning in the human behavior prediction system 10 of the first embodiment of the present invention. The entire process is described below.
  • When learning the flow line prediction model, the measurement system 11 first transmits moving image data or laser radar data of a target area, such as the inside of a facility, to the flow line data extraction unit 101 (201).
  • The flow line data extraction unit 101 extracts flow line data from the received data (202) and stores it in the flow line DB 105 (203).
  • The user 12 inputs a map data group, such as wall layout data, checkpoint data, and matrix layout data of the target area, to the map data input unit 103 (214).
  • The map data input unit 103 stores the input map data group in the map DB 108 (213). Either process 203 or process 213 may be performed first, or they may be performed at the same time.
  • The state determination unit 102 receives the measured flow line data from the flow line DB 105 (204) and the map data group from the map DB 108 (208), determines the states (205), and stores the determined state data in the state DB 106 (206).
  • The behavior model learning unit 110 receives the measured flow line data from the flow line DB 105 (207), the determined state data from the state DB 106 (209), and the map data group from the map DB 108 (211), learns the flow line prediction model (210), and stores the information of the learned flow line prediction model in the model DB 107 (212).
  • FIG. 3 is a sequence diagram showing the processing performed at the time of simulation in the human behavior prediction system 10 of the first embodiment of the present invention.
  • When performing a simulation using a map data group different from that used at model learning time, for example to examine a planned measure, the map data input unit 103 first stores the map data reflecting the measure in the map DB 108 (301). If the same map data group as at model learning time is used, this step is omitted.
  • Next, the initial human flow generation unit 104 generates the human flow information at the start of the simulation (306) and stores it in the simulation DB 109 (305).
  • Next, the virtual flow line generation unit 111 receives the map data group from the map DB 108 (302) and the flow line prediction model from the model DB 107 (304), and executes the simulation (303).
  • Specifically, the virtual flow line generation unit 111 first receives the flow line data and state data for the current human flow from the simulation DB 109 (307), predicts the flow line data and state data at the next time step (303), and stores the predicted flow line data and state data in the simulation DB 109 (308). This is repeated until the end condition is satisfied (309).
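The loop of processes 307 to 309 can be sketched as follows. This is a minimal illustration, not the patent's implementation: `predict_step` stands in for the learned flow line prediction model, the returned `history` stands in for the simulation DB, and a fixed step budget stands in for the end condition.

```python
def simulate(initial_people, predict_step, max_steps=100):
    """Minimal sketch of the virtual flow line generation loop
    (processes 307-309). `predict_step` stands in for the learned flow
    line prediction model: it maps a person's position to the predicted
    position one time step later."""
    history = [list(initial_people)]   # simulation DB stand-in
    people = list(initial_people)
    for _ in range(max_steps):         # end condition: step budget
        people = [predict_step(p) for p in people]   # process 303
        history.append(list(people))                 # process 308
    return history

# Toy model: every person drifts 1 unit per step along the x axis.
drift = lambda p: (p[0] + 1.0, p[1])
hist = simulate([(0.0, 0.0), (2.0, 1.0)], drift, max_steps=3)
```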
  • FIG. 4A is an explanatory diagram showing an example of a data structure of the flow line DB 105 of the first embodiment of the present invention.
  • FIGS. 4B to 4D are explanatory diagrams showing an example of the data structure of the map DB 108 of the first embodiment of the present invention.
  • The flow line DB 105 stores the flow line data acquired by the measurement system 11, and the map DB 108 stores the map data group prepared in advance or input by the user 12.
  • The flow line table 401 (FIG. 4A) is stored in the flow line DB 105.
  • The flow line table 401 is a table for storing the flow line data.
  • The flow line table 401 stores data obtained by sampling the coordinates of the flow line data, for example, for each person at regular time intervals (for example, one second).
  • START_TIME 4011 and END_TIME 4012 represent the start time and end time of a sampling interval of the flow line data, respectively, and PID 4013 represents a person's ID.
  • WKT 4014 represents geometry information, as a linestring, of the flow line along which the person with the corresponding PID 4013 moved between START_TIME 4011 and END_TIME 4012. In the example of FIG. 4A, WKT 4014 stores the two-dimensional coordinate values indicating the positions of the corresponding PID 4013 at START_TIME 4011 and at END_TIME 4012.
  • The coordinate system of this geometry information may be arbitrary; it may be, for example, a plane rectangular coordinate system.
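As an illustration of the WKT 4014 format, the sketch below parses a two-dimensional WKT LINESTRING into coordinate pairs. A production system would use a geometry library (for example, Shapely); this minimal parser assumes well-formed input.

```python
def parse_wkt_linestring(wkt):
    """Parse a well-formed 2-D WKT LINESTRING, such as a WKT 4014
    value, into a list of (x, y) tuples. Minimal illustration only."""
    text = wkt.strip()
    if not text.upper().startswith("LINESTRING"):
        raise ValueError("not a LINESTRING: " + text)
    # Take everything between the outermost parentheses and split on
    # commas (points) and whitespace (coordinates within a point).
    inner = text[text.index("(") + 1 : text.rindex(")")]
    return [tuple(float(v) for v in point.split())
            for point in inner.split(",")]

pts = parse_wkt_linestring("LINESTRING (10.0 5.0, 12.5 6.0)")
```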
  • The flow line table 401 may also include information on the orientation of each person.
  • For example, the table may store, as an angle, the direction in which the person with PID 4013 is facing at END_TIME 4012.
  • When this angle information is measured using a laser radar, the orientation detected from the 3D point cloud data using a known technique, such as pose estimation, may be stored.
  • When a camera is used, the orientation detected using a known image processing technique may be stored.
  • Alternatively, the orientation may be estimated from the flow line data itself.
  • For example, the difference vector of the coordinates moved by the person with PID 4013 from START_TIME 4011 to END_TIME 4012 may be calculated and used as the detected orientation.
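The last option above, estimating orientation from the difference vector, could be sketched as follows; the angle convention is an assumption, since the text does not fix one.

```python
import math

def heading_from_flowline(p_start, p_end):
    """Estimate the direction a person is facing from the difference
    vector of the coordinates moved between START_TIME 4011 and
    END_TIME 4012. Returns degrees counter-clockwise from the +x axis
    (an assumed convention), or None if the person did not move."""
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    if dx == 0 and dy == 0:
        return None  # no movement, so no direction can be estimated
    return math.degrees(math.atan2(dy, dx)) % 360.0

angle = heading_from_flowline((0.0, 0.0), (1.0, 1.0))  # 45 degrees
```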
  • The map DB 108 stores a map table 402 (FIG. 4B) representing areas, such as walls, that are impassable to people, a map table 403 (FIG. 4C) about checkpoints, and a map table 404 (FIG. 4D) storing additional layout information about the matrix-type checkpoints.
  • A checkpoint indicates a stop-by place, or a passing place to which some attribute is attached, on a person's movement within the target area. For example, when the inside of an airport facility is the target area, checkpoints correspond to places such as an entrance and an automatic check-in (CI) machine.
  • The map table 402 shown in FIG. 4B stores WID 4021, an ID of an object such as a wall; KIND 4022, the object type; and WKT 4023, the position and shape information of the object.
  • WKT 4023 expresses the coordinates representing the object shape as geometry in the form of a polygon or a linestring. As with WKT 4014, the coordinate system may be arbitrary.
  • The map table 403 shown in FIG. 4C stores CID 4031, a checkpoint ID; KID 4032, a checkpoint type ID; NAME 4033, a checkpoint name; WKT 4034, checkpoint position and shape information; and TYPE 4035, the type of the checkpoint.
  • The type of a checkpoint is one of pass-through (Pass), retention (Stay), and matrix (Queue).
  • Since a checkpoint such as an entrance is a place that people only pass through, unlike a place where they stop, Pass is stored in the TYPE 4035 of such a checkpoint.
  • Since a checkpoint such as a CI counter is a place where people stop, Stay is stored in the TYPE 4035 of such a checkpoint.
  • Queue is stored in the TYPE 4035 of the checkpoint corresponding to the matrix in front of the CI counter.
  • For a matrix-type checkpoint, WKT 4034 stores geometry information indicating the head of the matrix, not the area in which people can line up.
  • There may be checkpoints whose CID 4031 differs but whose KID 4032 is the same; for example, a facility may have multiple entrances.
  • The map table 404 shown in FIG. 4D stores additional layout information 4041 to 4044 for the matrix-type checkpoints stored in the map table 403.
  • CID 4041 is a checkpoint ID equivalent to CID 4031 in the map table 403; however, only the IDs of matrix-type checkpoints are stored in the map table 404.
  • S_WKT 4042 is the geometry information of the place where service is received after lining up in the matrix corresponding to CID 4041. It is located at the head of the matrix, on the opposite side from the line of people. For example, S_WKT 4042 of the queue in front of a CI counter corresponds to the position of the counter where the people in the queue subsequently receive service.
  • P_WKT 4043 is geometry information representing partitions used to organize the matrix. A partition may, for example, prevent interference with an adjacent matrix, or be used to make the matrix meander.
  • N_ID 4044 stores the list of CID 4041 values to which the people in the matrix of the corresponding CID 4041 can head next.
  • A list of KID 4032 values may be stored instead of a list of CID 4041 values.
  • FIGS. 5A and 5B are explanatory diagrams showing an example of the data structure of the state DB 106 of the first embodiment of the present invention.
  • The flow line state table 501 (FIG. 5A) and the matrix state table 502 (FIG. 5B) are stored in the state DB 106.
  • The flow line state table 501 stores, for the measured flow line data, information on when each person moved from which checkpoint to which checkpoint, and whether the person was staying in a matrix.
  • The matrix state table 502 stores information on which people were lined up, and in what order, in each matrix.
  • The flow line state table 501 shown in FIG. 5A stores START_TIME 5011, END_TIME 5012, PID 5013, O_ID 5014, D_ID 5015, and Q_ID 5016.
  • START_TIME 5011 and END_TIME 5012 represent the start and end dates and times of the corresponding person's state, respectively.
  • PID 5013 represents the ID of the corresponding person.
  • O_ID 5014 and D_ID 5015 represent the ID of the origin checkpoint and the ID of the destination checkpoint of the corresponding person's movement between checkpoints, respectively.
  • Q_ID 5016 represents the CID 4041 of the matrix in which the corresponding person was lined up.
  • When the state determination unit 102 cannot determine the origin or destination checkpoint, -1 or NULL may be stored in O_ID 5014 or D_ID 5015, respectively. When the state determination unit 102 determines that the person was not lined up in a matrix, -1 is stored in Q_ID 5016. If O_ID 5014 and D_ID 5015 are the same, the corresponding person was staying at that checkpoint; in this case, the ID of the retention-type checkpoint at which the person stayed may be stored in Q_ID 5016.
  • The matrix state table 502 shown in FIG. 5B stores START_TIME 5021, END_TIME 5022, CID 5023, and PID_LIST 5024.
  • START_TIME 5021 and END_TIME 5022 represent the start and end times of the corresponding matrix state, respectively.
  • CID 5023 represents the CID 4041 of the corresponding matrix.
  • PID_LIST 5024 is a list of the IDs of the people in the corresponding queue, arranged in the order in which they wait for service.
  • FIGS. 6A to 6E are explanatory diagrams showing an example of the data structure of the model DB 107 of the first embodiment of the present invention.
  • The flow line prediction model learned by the behavior model learning unit 110 is generated for each state, so the model DB 107 stores model parameters for each state. In addition, the transition probabilities between checkpoints and the probability that each checkpoint is selected first are also stored. Specifically, the model DB 107 stores a movement model table 601 (FIG. 6A), a matrix model table 602 (FIG. 6B), a checkpoint retention model table 603 (FIG. 6C), a checkpoint transition probability table 604 (FIG. 6D), and a checkpoint initial probability table 605 (FIG. 6E).
  • The movement model table 601 shown in FIG. 6A stores the parameters 6011 to 6014 of the model that predicts movement between checkpoints; it does not include model parameters for retention in a matrix or at a checkpoint. Since a prediction model for flow lines other than retention is generated for each checkpoint transition, the model parameters are stored per checkpoint transition.
  • O_ID 6011 and D_ID 6012 represent the ID of the origin checkpoint and the ID of the destination checkpoint of a person's movement between checkpoints, respectively.
  • M_Param1 6013 to M_ParamM 6014 are the model parameters of the prediction model for the corresponding checkpoint transition.
  • These model parameters are expressed as an M-dimensional vector and may be the parameters of an agent model such as the Social Force Model, or the parameters of a machine learning model such as a Gradient Boosting Regression Tree, Support Vector Regression, or Long Short-Term Memory.
  • Although model parameters are stored here for each type of checkpoint transition, a flow line prediction model may instead be generated for each D_ID 6012 alone, with model parameters stored per D_ID 6012. In this case, since there is no corresponding O_ID 6011, NULL or -1 is stored there.
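The text names the Social Force Model as one possible agent model for movement between checkpoints. As a hedged sketch only (the parameter names and values below are illustrative, not taken from the patent), one Euler step of a minimal social force update looks like this:

```python
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      desired_speed=1.3, tau=0.5, a=2.0, b=0.3):
    """One Euler step of a minimal Social Force Model. `pos`/`vel` are
    the agent's position and velocity, `goal` the target point, `others`
    the positions of other pedestrians. desired_speed, tau, a, and b are
    illustrative parameters, not values from the patent."""
    # Driving force: relax toward the desired velocity aimed at the goal.
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    fx = (desired_speed * gx / dist - vel[0]) / tau
    fy = (desired_speed * gy / dist - vel[1]) / tau
    # Repulsive forces from other pedestrians, decaying exponentially.
    for ox, oy in others:
        rx, ry = pos[0] - ox, pos[1] - oy
        r = math.hypot(rx, ry) or 1e-9
        mag = a * math.exp(-r / b)
        fx += mag * rx / r
        fy += mag * ry / r
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# Agent at rest at the origin, heading toward (10, 0), nobody nearby.
new_pos, new_vel = social_force_step((0.0, 0.0), (0.0, 0.0), (10.0, 0.0), [])
```

In this framing, the M-dimensional parameter vector of table 601 would hold quantities such as the desired speed and relaxation time per checkpoint transition.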
  • The matrix model table 602 shown in FIG. 6B stores the parameters 6021 to 6023 of the model that predicts retention behavior in a matrix. These model parameters are stored per matrix.
  • Q_ID 6021 corresponds to the CID 4041 of a matrix-type checkpoint.
  • Q_Param1 6022 to Q_ParamN 6023 are the model parameters of the prediction model for the corresponding matrix.
  • These model parameters are represented as an N-dimensional vector and may be, as in the above example, the parameters of a machine learning model such as a Gradient Boosting Regression Tree, Support Vector Regression, or Long Short-Term Memory.
  • Alternatively, the model parameters may be parameters relating to the distance between people and the service waiting time when lining up in a matrix, or the parameters of an arbitrary probability distribution, such as a normal distribution, expressing them.
  • The checkpoint retention model table 603 shown in FIG. 6C stores the parameters 6031 to 6033 of the model that predicts retention behavior at a retention-type checkpoint. These model parameters are stored per retention-type checkpoint.
  • S_ID 6031 corresponds to the CID 4031 of a retention-type checkpoint.
  • S_Param1 6032 to S_ParamO 6033 are the model parameters of the prediction model for the corresponding retention-type checkpoint.
  • These model parameters are represented as an O-dimensional vector and may be, as in the above example, the parameters of a machine learning model such as a Gradient Boosting Regression Tree, Support Vector Regression, or Long Short-Term Memory.
  • Alternatively, the model parameters may be parameters relating to the residence time, or the parameters of an arbitrary probability distribution, such as a normal distribution, expressing it.
  • The checkpoint transition probability table 604 shown in FIG. 6D stores parameters 6041 to 6043 concerning the transition probabilities between checkpoints.
  • O_ID 6041 and D_ID 6042 represent the ID of the origin checkpoint and the ID of the destination checkpoint of a movement between checkpoints, respectively.
  • T_Prob 6043 indicates the probability of selecting D_ID 6042 as the next checkpoint from O_ID 6041. For a given O_ID 6041, the values of the corresponding T_Prob 6043 entries sum to 1.
  • The checkpoint initial probability table 605 shown in FIG. 6E stores, for each checkpoint, the probability that a person is generated at that checkpoint when the simulation starts. Specifically, the checkpoint initial probability table 605 stores an ID 6051 corresponding to the checkpoint's CID 4031 and the generation probability I_Prob 6052 at that checkpoint.
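Selecting the next checkpoint from table 604 amounts to sampling a categorical distribution; the same sampling applies to the initial probabilities of table 605. A sketch, assuming (as an illustration of how the DB rows might be loaded) that the table is held in memory as (D_ID, T_Prob) lists keyed by O_ID:

```python
import random

def next_checkpoint(o_id, transition_table, rng=random):
    """Sample the next checkpoint using the transition probabilities of
    table 604. `transition_table` maps O_ID 6041 to a list of
    (D_ID 6042, T_Prob 6043) pairs whose probabilities sum to 1; this
    in-memory shape is an assumption for illustration."""
    r = rng.random()
    acc = 0.0
    for d_id, prob in transition_table[o_id]:
        acc += prob
        if r < acc:
            return d_id
    return transition_table[o_id][-1][0]  # guard against round-off

table = {1: [(2, 0.7), (3, 0.3)]}  # from checkpoint 1: 70% to 2, 30% to 3
dest = next_checkpoint(1, table)
```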
  • FIG. 7 is a flowchart of the process performed by the state determination unit 102 of the first embodiment of the present invention.
  • Process 701 represents the start of the processing of the state determination unit 102.
  • Process 702 performs a spatial intersection determination between the flow line data stored in the flow line DB 105 and the map table 403 stored in the map DB 108, to determine which person passed which pass-through checkpoint and when.
  • Process 703 performs a spatial intersection determination between the flow line data stored in the flow line DB 105 and the map table 403 stored in the map DB 108, to determine which person stayed at which retention-type checkpoint and when.
  • Processes 702 and 703 are performed by the OD analysis unit 1022 in the state determination unit 102. Details of these processes will be described later.
  • Process 705 uses the flow line data stored in the flow line DB 105 and the map tables 402, 403, and 404 stored in the map DB 108 to determine when, who, and in what order people were lined up at each matrix-type checkpoint, and stores the determination result in the matrix state table 502 of the state DB 106.
  • Processes 704 and 705 are performed by the matrix determination unit 1021 in the state determination unit 102. Details of these processes will be described later.
  • Process 706 integrates the results of processes 702, 703, and 704 and stores, for each person, the information on when the person moved from which checkpoint to which checkpoint, in which matrix-type checkpoint the person was lined up, and at which retention-type checkpoint the person stayed, in the flow line state table 501 of the state DB 106.
  • Process 707 represents the end of the processing of the state determination unit 102.
  • FIGS. 8A and 8B are explanatory diagrams of the processing performed by the OD analysis unit 1022 of the first embodiment of the present invention.
  • The OD analysis unit 1022 performs a spatial intersection determination between the flow line data stored in the flow line DB 105 and the pass-through and retention-type checkpoints stored in the map DB 108, and determines which person passed or stayed at which checkpoint and when.
  • Diagram 801 illustrates the spatial intersection determination between a pass-through checkpoint and flow line data. It is a plan view, seen from above, of a space including an area passable by people.
  • The shaded area 803 indicates an area that is impassable to people, such as a wall, and the thick-lined gate 804 indicates an example of a pass-through checkpoint.
  • Data 805 is an example of flow line data, in which the coordinate information of a person at each time is connected in chronological order.
  • Since the data 805 and the gate 804 spatially intersect, it is determined that the person corresponding to the data 805 passed through the gate 804.
  • The time immediately after the intersection is detected is taken as the passing time.
  • This passing time is regarded as both the arrival time at, and the departure time from, the pass-through checkpoint.
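The gate crossing test in diagram 801 reduces to a segment-segment intersection between one step of the flow line and the gate segment. A standard orientation-based sketch (it assumes general position; collinear-overlap cases are not handled):

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 (one step of a flow line such as data 805)
    intersects segment p3-p4 (a pass-through checkpoint such as gate
    804). Uses cross-product orientation signs; endpoint touches count,
    collinear overlaps are not handled."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)   # sides of p1, p2 relative to the gate
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)   # sides of the gate ends relative to the step
    d4 = cross(p1, p2, p4)
    return d1 * d2 <= 0 and d3 * d4 <= 0

# A vertical flow line step crossing a horizontal gate.
hit = segments_intersect((0, -1), (0, 1), (-1, 0), (1, 0))
```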
  • Diagram 802 illustrates the spatial intersection determination between a retention-type checkpoint and flow line data.
  • Data 806 and 807 are examples of flow line data, and the thick-lined area 808 shows an example of a retention-type checkpoint.
  • The data 806 spatially intersects the area 808 but passes through it without staying. For this reason, the person corresponding to the data 806 is not regarded as having stopped at this retention-type checkpoint.
  • Since the data 807 stays in the area 808, it is determined that the person corresponding to the data 807 stopped at this retention-type checkpoint.
  • For example, a state in which the speed remains below a certain threshold for a certain period of time or longer is regarded as retention. The time at which the retention starts is regarded as the arrival time at the checkpoint, and the time at which it ends as the departure time from the checkpoint.
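The retention rule above (speed below a threshold for at least a minimum duration, with the start and end of the slow run taken as arrival and departure times) can be sketched as follows; the thresholds and the fixed 1-second sampling are illustrative assumptions.

```python
import math

def detect_retention(track, speed_threshold=0.2, min_duration=3):
    """Find retention intervals in a flow line. `track` is a list of
    (t, x, y) samples; a run of steps whose speed stays below
    speed_threshold for at least min_duration time units counts as
    retention. Returns (arrival_time, departure_time) pairs."""
    slow = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        slow.append((t0, t1, speed < speed_threshold))
    intervals, start = [], None
    for t0, t1, is_slow in slow:
        if is_slow and start is None:
            start = t0                       # retention begins: arrival
        elif not is_slow and start is not None:
            if t0 - start >= min_duration:
                intervals.append((start, t0))  # departure
            start = None
    if start is not None and slow[-1][1] - start >= min_duration:
        intervals.append((start, slow[-1][1]))
    return intervals

# Fast walk, a 4-second near-stop, then fast walk again.
track = [(0, 0.0, 0.0), (1, 1.0, 0.0), (2, 2.0, 0.0), (3, 2.05, 0.0),
         (4, 2.1, 0.0), (5, 2.1, 0.0), (6, 2.15, 0.0), (7, 3.15, 0.0)]
intervals = detect_retention(track)
```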
  • FIG. 9 is a flowchart of, and a detailed explanatory diagram for, the processing performed by the matrix determination unit 1021 of the first embodiment of the present invention.
  • The matrix determination unit 1021 determines, from the flow line data stored in the flow line DB 105 and the matrix-type checkpoints stored in the map DB 108, which person was lined up at which matrix-type checkpoint, when, and in what order.
  • The flow of processes 901 to 905 is performed for each time instant.
  • Process 901 represents the start of the processing of the matrix determination unit 1021.
  • Process 902 extracts the person at the head of each matrix-type checkpoint. This process is described with reference to diagram 906.
  • Explanatory drawing 906 is a plan view of the space where the matrix is generated.
  • the gate 9061 is a matrix type checkpoint
  • the shaded area 9064 is an area such as a wall where people cannot enter
  • the grayed out area 9065 is the partition P_WKT4043 corresponding to the matrix of the gate 9061
  • the reception 9063 is the gate 9061. It is a receptionist that provides services to the people in the corresponding line. Therefore, the reception 9063 corresponds to the S_WKT4042 corresponding to the matrix of the gate 9061.
  • the matrix determination unit 1021 When extracting the matrix head corresponding to the gate 9061, the matrix determination unit 1021 extracts the person 9062 who is within a certain distance from the gate 9061 and is closest to the gate 9061 and whose speed is constant or less, as the head of the matrix. At this time, the matrix determination unit 1021 is a person who is on the opposite side of the reception 9063 with respect to the gate 9061, and when the gate 9061 and the person are connected by a line segment, the line segment becomes the area 9064 and the area. The person who does not intersect with any of 9065 is extracted as the person 9062 at the head of the matrix.
  • The circle of the person 9062 indicates the position of the person at a certain time, and the tip of the straight line in contact with the circle indicates the position of the person at the preceding time (for example, one second before). The other people are displayed in the same way. The same applies to FIGS. 11, 14, and 17 described later.
  • Process 903 is a process for extracting the second and subsequent people at each matrix type checkpoint. This process will be described with reference to explanatory diagram 907.
  • The matrix determination unit 1021 extracts, as the person lined up next, a person 9072 who is within a certain distance of and nearest to the person 9071 (for example, corresponding to the person 9062 in explanatory diagram 906) extracted in the immediately preceding step, and whose speed is below a certain value. At this time, the matrix determination unit 1021 extracts the person 9072 such that the line segment connecting the person 9071 and the person 9072 intersects none of the gate 9061, the area 9064, and the area 9065.
  • In process 904, the matrix determination unit 1021 determines whether there is a person whose speed is below a certain value within a certain distance of the person extracted in the immediately preceding step, and for whom the line segment connecting the person extracted in the immediately preceding step and the person being searched for intersects none of the gate 9061, the area 9064, and the area 9065. If there is no such person, the end condition is satisfied and the process proceeds to process 905. If the end condition is not satisfied, the process returns to process 903.
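  The extraction of the head (process 902) and of the second and subsequent people (processes 903 and onward) can be sketched as follows. The `blocked` visibility test stands in for the intersection check against the gate, walls, and partition; all names and thresholds are illustrative assumptions:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def extract_queue(gate, people, speed_thresh=0.3, radius=2.0, blocked=None):
    """Sketch of the matrix (queue) determination.  `people` maps a person
    ID to (position, speed); `blocked(p, q)` returns True when the segment
    p-q crosses a wall or partition region (the line-segment test of the
    explanatory diagrams).  Returns person IDs ordered from the head."""
    blocked = blocked or (lambda p, q: False)
    # Head of the queue: slow person nearest to the gate within `radius`
    # whose line of sight to the gate is unobstructed.
    cands = [(pid, pos) for pid, (pos, v) in people.items()
             if v <= speed_thresh and dist(pos, gate) <= radius
             and not blocked(pos, gate)]
    if not cands:
        return []
    head = min(cands, key=lambda c: dist(c[1], gate))
    queue, last_pos = [head[0]], head[1]
    # Repeatedly attach the nearest slow person to the person extracted
    # in the immediately preceding step, until none remains (end condition).
    while True:
        cands = [(pid, pos) for pid, (pos, v) in people.items()
                 if pid not in queue and v <= speed_thresh
                 and dist(pos, last_pos) <= radius
                 and not blocked(pos, last_pos)]
        if not cands:
            return queue
        nxt = min(cands, key=lambda c: dist(c[1], last_pos))
        queue.append(nxt[0])
        last_pos = nxt[1]
```

  For people standing in a line at 1-meter spacing in front of the gate, this returns their IDs in queue order and stops at the first gap wider than `radius`.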
  • the process 905 represents the end of the process of the matrix determination unit 1021.
  • Information on each person's orientation may be used for this matrix determination process. For example, when extracting the person at the head of the matrix, only a person who is facing the matrix type checkpoint, among the people who satisfy the determination condition of process 902, may be extracted; and when extracting the second and subsequent people in line, only a person who is facing the person extracted in the immediately preceding step, among the people who satisfy the determination condition of process 903, may be extracted.
  • FIG. 10 is a flowchart of processing performed by the behavior model learning unit 110 of the first embodiment of the present invention.
  • Process 1001 represents the start of processing of the behavior model learning unit 110.
  • the process 1002 is a process of associating the flow line data stored in the flow line DB 105 with the flow line state table 501 stored in the state DB 106 and dividing the flow line data for each state.
  • The state is composed of combinations of the three items O_ID5014, D_ID5015, and Q_ID5016. This processing is performed by the flow line data dividing unit 1101 of the behavior model learning unit 110.
  • the process 1003 uses the flow line data divided in the process 1002 and the map tables 402, 403, and 404 of the map DB 108 to calculate the objective variable and the feature quantities related to the surrounding people, checkpoints, walls, and the like. This processing is performed by the feature amount calculation unit 1102 of the behavior model learning unit 110. The details of the processing contents will be described later.
  • Process 1004 is a process of learning a flow line prediction model for each state and storing the model parameters at that time in the model DB 107. This process is performed by the model learning unit 1103. Here, the process 1004 will be described.
  • The model learning unit 1103 learns a mobile type flow line prediction model, for moving bodies that are not staying, using flow line data in which Q_ID5016 is -1 and either O_ID5014 and D_ID5015 are not equal, or O_ID5014 and D_ID5015 are equal but their value is not the CID4031 of a retention type checkpoint.
  • the model learning unit 1103 may learn the model for each D_ID5015, or may learn the model for each combination of the O_ID5014 and the D_ID5015. Alternatively, the model learning unit 1103 may learn a model in which a combination of D_ID5015 or O_ID5014 and D_ID5015 is input as a feature amount (that is, as one of the explanatory variables).
  • The model learning unit 1103 may learn the parameters of an agent model such as the Social Force Model, or may learn model parameters by machine learning methods such as Gradient Boosting Regression Tree, Support Vector Regression, or Long Short-Term Memory. Machine learning here refers to methods that learn, from data, a model that predicts the objective variable from the feature quantities.
  • the model parameters learned here are stored in the moving model table 601.
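  As a minimal sketch of such model learning, the following fits a linear model by ordinary least squares to predict the velocity one step ahead from feature quantities; the embodiment instead suggests Social Force Model parameters or methods such as Gradient Boosting Regression Tree, SVR, or LSTM. The features, targets, and dictionary key are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data for one state ("moving between checkpoints").
# Feature rows (illustrative): [direction_to_dest_x, direction_to_dest_y,
# dist_to_nearest_person, dist_to_nearest_wall]
X = rng.uniform(-1.0, 1.0, size=(200, 4))
# Objective variable: velocity vector at the time one step ahead,
# generated from a known weight matrix plus noise.
true_W = np.array([[0.8, 0.0], [0.0, 0.8], [0.05, 0.0], [0.0, 0.05]])
Y = X @ true_W + rng.normal(0.0, 0.01, size=(200, 2))

# Ordinary least squares stands in for the learning of process 1004.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Per-state models would be kept keyed by state, analogous to the
# moving model table 601; the key here is a placeholder.
models = {("moving", "any_OD"): W}
pred = X[:3] @ models[("moving", "any_OD")]
```

  In practice one model would be learned per D_ID5015 or per (O_ID5014, D_ID5015) combination, as the text above notes.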
  • The model learning unit 1103 learns a retention type flow line prediction model at a checkpoint, using the flow line data in which Q_ID5016 is -1, O_ID5014 and D_ID5015 are equal, and their value is the CID4031 of a retention type checkpoint.
  • the model learning unit 1103 may learn the retention type flow line prediction model by using an agent model, machine learning, or the like, as in the case of learning the mobile type flow line prediction model.
  • the model learning unit 1103 may generate a model that stochastically outputs the residence time. The model parameters at this time are stored in the checkpoint retention model table 603.
  • the model learning unit 1103 learns a matrix-type flow line prediction model using the flow line data in which Q_ID5016 is neither -1 nor NULL.
  • the model learning unit 1103 may learn a matrix-type flow line prediction model by using an agent model, machine learning, or the like, as in the case of learning a mobile type flow line prediction model.
  • For a person lined up in a matrix, the model learning unit 1103 may learn a model in which the person moves slightly in the direction of the person lined up in front, as long as the distance to that person is equal to or more than a certain threshold value.
  • the model parameters at this time are stored in the matrix model table 602.
  • the model learning unit 1103 learns the flow line prediction model for each state.
  • The model learning unit 1103 performs learning using the flow line data classified by state (for example, flow line data moving between checkpoints, flow line data staying at a retention type checkpoint, or flow line data lined up in the matrix of a matrix type checkpoint), and obtains a flow line prediction model for each state.
  • model learning unit 1103 may learn a model that predicts a flow line by inputting a state as one of the feature quantities.
  • the process 1005 is a process of aggregating the information of the flow line state table 501 stored in the state DB 106, calculating the state transition probability and the initial probability, and storing them in the model DB 107.
  • the behavior model learning unit 110 extracts O_ID5014 and D_ID5015 for each PID5013 of the flow line state table 501, calculates checkpoint transition information, and aggregates them.
  • The behavior model learning unit 110 calculates, as the initial probability, the probability distribution of the first checkpoint of each PID5013 and stores it in the checkpoint initial probability table 605, and stores the probability of each D_ID5015 given a fixed O_ID5014 in the checkpoint transition probability table 604.
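  The aggregation of process 1005 can be sketched as follows, assuming each person's visited checkpoints are available as an ordered sequence (the input shape is an illustrative assumption):

```python
from collections import Counter, defaultdict

def checkpoint_probabilities(trajectories):
    """Sketch of process 1005.  `trajectories` maps a person ID to the
    ordered list of checkpoint IDs that person visited.  Returns
    (initial_probability, transition_probability), where the latter maps
    O_ID -> {D_ID: probability}, mirroring the checkpoint initial
    probability table 605 and transition probability table 604."""
    # Initial probability: distribution of each person's first checkpoint.
    first_counts = Counter(seq[0] for seq in trajectories.values() if seq)
    total = sum(first_counts.values())
    initial = {cid: n / total for cid, n in first_counts.items()}

    # Transition probability: normalize counts of consecutive O -> D pairs.
    trans_counts = defaultdict(Counter)
    for seq in trajectories.values():
        for o, d in zip(seq, seq[1:]):
            trans_counts[o][d] += 1
    transition = {o: {d: n / sum(c.values()) for d, n in c.items()}
                  for o, c in trans_counts.items()}
    return initial, transition
```

  For instance, with trajectories A→B→C, A→C, and B→C, the initial probability of A is 2/3 and the transition probability from A splits evenly between B and C.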
  • Process 1006 represents the end of processing of the behavior model learning unit 110.
  • FIG. 11 is an explanatory diagram of the processing performed by the feature amount calculation unit 1102 of the first embodiment of the present invention.
  • the screen 11001 is a plan view of the target area where the flow line data is measured, and displays the flow line data, the wall layout, and the checkpoint information at a certain time.
  • Areas 11002 hatched with diagonal lines are areas that people cannot enter, such as walls; areas 11003 and 11004 displayed with thick solid lines are retention type checkpoints; gate 11005 is a matrix type checkpoint; and the grayed out region 11011 represents the partition of the matrix corresponding to the gate 11005. For the people 11006, 11007, 11008, 11009, and 11010, how to calculate the objective variable and the feature quantities related to surrounding people, walls, destination checkpoints, and the like will be described.
  • the same calculation method is used for any of the people 11006, 11007, 11008, 11009, and 11010.
  • the distance to a person and the distance to a wall in various directions around the current position may be calculated as a feature amount.
  • the direction at this time may be the direction of the velocity vector in the previous step of the person or the relative direction with respect to the direction to the destination of the person.
  • the direction may be based on the X-axis or the Y-axis.
  • When no person or wall exists in a given direction, an appropriate threshold value may be substituted for the distance.
  • a grid may be generated based on the direction of the velocity vector in the previous step of the person or the direction to the destination of the person, or the X-axis or the Y-axis may be used as a reference.
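  A minimal sketch of such direction-binned distance features follows: the space around the person is divided into angular sectors aligned with the previous velocity direction, and the distance to the nearest other person in each sector is recorded (sector count, cap value, and names are illustrative assumptions; walls could be handled the same way by sampling points on the wall polygons):

```python
import math

def sector_features(pos, heading, others, n_sectors=8, max_dist=10.0):
    """Distance to the nearest other person in each angular sector around
    the current position `pos`, with sector 0 aligned to the person's
    previous velocity direction `heading` (radians).  Distances are capped
    at max_dist, which also serves as the substitute value when a sector
    is empty."""
    feats = [max_dist] * n_sectors
    for ox, oy in others:
        dx, dy = ox - pos[0], oy - pos[1]
        d = math.hypot(dx, dy)
        if d == 0 or d > max_dist:
            continue
        # Angle of the other person relative to the heading, in [0, 2*pi).
        rel = (math.atan2(dy, dx) - heading) % (2 * math.pi)
        k = int(rel / (2 * math.pi / n_sectors)) % n_sectors
        feats[k] = min(feats[k], d)
    return feats
```

  With heading along the X-axis, a person 2 m ahead lands in sector 0 and a person 3 m to the left lands in the sector a quarter turn away.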
  • When calculating the feature quantity of the wall, not only the wall area 11002 but also the area 11011 related to the partition of the matrix may be included.
  • The feature quantity related to the checkpoint is calculated from the current position of the person and the point that is the person's destination. At this time, the distance from the person's position to the destination and the direction to the destination are used as feature quantities.
  • The direction to the destination may be a relative direction with respect to the direction of the velocity vector in the person's previous step, or the X-axis or the Y-axis may be used as a reference.
  • the destination points are different for people 11006, 11007, 11008, 11009, and 11010, respectively.
  • Person 11006 is a flow line moving at a predetermined speed or higher.
  • The D_ID5015 of the person 11006 is the CID4031 corresponding to the area 11003.
  • the destination of the person 11006 is an arbitrary point included in the area 11003, and may be, for example, the center of gravity of the area 11003.
  • Person 11007 is likewise a flow line moving at a predetermined speed or higher, but its D_ID5015 is the CID4031 corresponding to the gate 11005. At this time, the destination of the person 11007 is not a point included in the gate 11005 but the coordinates of the last person in the line of the gate 11005. This information can be obtained from the flow line data of the PID4013 corresponding to the last value of PID_LIST5024 in the matrix state table 502 of the state DB 106.
  • Person 11008 is an example of a flow line at a predetermined speed or less lined up at the head of a matrix; in this case as well, D_ID5015 is the CID4031 corresponding to the gate 11005. The destination at this time is an arbitrary point included in the gate 11005 at the head of the matrix, and may be the center of gravity of the gate 11005.
  • Person 11009 is a flow line at a predetermined speed or less lined up second or later in the line; D_ID5015 is the CID4031 corresponding to the gate 11005 as in the above examples, but the destination is the coordinates of the person lined up in front. This information can be obtained from the flow line data of the PID4013 corresponding to the value immediately preceding the PID4013 of the person 11009 in PID_LIST5024 of the matrix state table 502 of the state DB 106.
  • Person 11010 is an example of a flow line staying at a predetermined speed or less in a retention type checkpoint, and the D_ID5015 of the person 11010 is the CID4031 corresponding to the region 11004.
  • the destination is a point included in the area 11004, and may be the center of gravity of the area 11004.
  • Since the person 11010 has already arrived at the destination, the feature quantity related to the destination checkpoint may not contribute to the model, and its calculation may be omitted.
  • the feature amount calculation unit 1102 basically calculates the speed of the time one step ahead as the objective variable to be predicted.
  • The velocity at the time one step ahead may be expressed in the absolute coordinate system, or may be converted into a relative velocity vector with reference to the direction of the velocity vector at the current time.
  • the direction to the destination may be used as a reference.
  • For a staying person, the remaining stay time, or the probability of transitioning to the next D_ID5015 and starting to move one step ahead, may be used as the objective variable instead of the velocity at the time one step ahead.
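  The conversion of the next-step velocity into the relative coordinate system mentioned above can be sketched as a plane rotation whose x-axis is the current velocity direction (function name is illustrative):

```python
import math

def relative_next_velocity(v_now, v_next):
    """Objective variable sketch for process 1003: express the velocity
    one step ahead in a frame whose x-axis points along the current
    velocity vector.  Rotates v_next by minus the heading angle of v_now."""
    ang = math.atan2(v_now[1], v_now[0])
    c, s = math.cos(ang), math.sin(ang)
    return (c * v_next[0] + s * v_next[1],
            -s * v_next[0] + c * v_next[1])
```

  With this convention, a person continuing straight ahead has a relative velocity along the positive x-axis, which tends to make the regression target better conditioned than absolute coordinates.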
  • 12A and 12B are explanatory views showing an example of a data structure of the simulation DB 109 of the first embodiment of the present invention.
  • the simulation DB 109 stores the virtual flow line table 1201 and the virtual matrix state table 1202.
  • The flow line data generated by the virtual flow line generation unit 111 is sequentially stored in the virtual flow line table 1201, and the state data of the matrix updated by the virtual flow line generation unit 111 is sequentially stored in the virtual matrix state table 1202.
  • the virtual flow line table 1201 sequentially stores information 12011 to 12017 regarding the coordinates of the flow line data simulated for each person and at regular time intervals (for example, 1 second) and their states.
  • START_TIME12011 and END_TIME12012 represent the start time and end time of the flow line data simulated every second, respectively.
  • WKT12013 represents the geometry information about the linestring of the flow line that the corresponding person moved between START_TIME12011 and END_TIME12012.
  • PID12014 indicates the ID of the corresponding person.
  • O_ID12015 and D_ID12016 represent a start checkpoint and an end checkpoint at the time of simulation, respectively.
  • Q_ID12017 represents the CID4041 of the matrix in which the corresponding person is lined up. When the person is not in line, -1 is stored in Q_ID12017.
  • the virtual matrix state table 1202 stores information 12021 to 12024.
  • START_TIME12021 and END_TIME12022 represent start and end times for the corresponding matrix states, respectively.
  • CID12023 represents CID4041 of the corresponding matrix.
  • PID_LIST12024 is an ID list in which the IDs of the people in the corresponding queue are arranged in the order of waiting for the service.
  • FIG. 13 is a flowchart showing a simulation performed by the initial human flow generation unit 104 and the virtual flow line generation unit 111 of the first embodiment of the present invention.
  • Process 1301 represents the start of simulation processing.
  • the process 1302 represents a process of receiving the demand information of the human flow when executing the simulation from the user 12 and generating the initial human flow. This process is performed by the initial human flow generation unit 104.
  • The demand information of the human flow refers to information on the number of people generated in the simulation, the simulation start time, the departure place, and the first destination. This information may be input in the form of statistical values, or the simulation start time, departure place, and destination may be input individually for each person.
  • When the demand information is input in the form of statistics, the initial human flow generation unit 104 sets the simulation start time by sampling using a Poisson distribution or the like. Further, the initial human flow generation unit 104 probabilistically sets the departure place and destination for each person using the checkpoint initial probability table 605 and the checkpoint transition probability table 604 included in the model DB 107. At this time, the information on the departure place and the destination may be given in the form of KID4032 or CID4031, and may include not only the first destination but also all transitions of the subsequent destinations.
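  A sketch of this statistical generation follows: start times are drawn from an exponential inter-arrival process (equivalent to Poisson arrivals at a given rate), and the departure place and first destination are sampled from the initial and transition probability tables. The function and field names are illustrative assumptions:

```python
import random

def generate_initial_flow(n_people, rate, initial_prob, transition_prob,
                          seed=0):
    """Sketch of process 1302.  `initial_prob` maps checkpoint ID to its
    initial probability; `transition_prob` maps O_ID -> {D_ID: prob}.
    Returns one record per simulated person."""
    rng = random.Random(seed)

    def sample(dist):
        # Inverse-CDF sampling over a {value: probability} dict.
        r, acc = rng.random(), 0.0
        for k, p in dist.items():
            acc += p
            if r < acc:
                return k
        return k  # guard against floating-point rounding

    people, t = [], 0.0
    for pid in range(n_people):
        t += rng.expovariate(rate)           # Poisson arrival times
        origin = sample(initial_prob)
        dest = sample(transition_prob.get(origin, {origin: 1.0}))
        people.append({"pid": pid, "start_time": t,
                       "O_ID": origin, "D_ID": dest})
    return people
```

  The per-person initial position and velocity would then be attached as described in the following steps.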
  • the initial human flow generation unit 104 generates information on the simulation start time, departure point, and destination for each person, and then adds information on the initial position and initial speed, respectively.
  • The initial human flow generation unit 104 selects, as the initial position, a point included in the checkpoint that is the first departure place.
  • The coordinates of the center of gravity calculated from the geometry information of the checkpoint may be used as the initial position, points may be randomly sampled within the checkpoint area and used as the initial position, or the flow line data stored in the flow line DB 105 may be analyzed and the coordinates observed when departing from the checkpoint may be extracted and used.
  • the initial velocity is given as a vector.
  • The initial human flow generation unit 104 may randomly sample the initial velocity vector, may calculate it from the direction vector toward the destination checkpoint, or may analyze the flow line data stored in the flow line DB 105 and extract and use the velocity vector observed when departing from the checkpoint.
  • Process 1304 represents a process of extracting, from the initial human flow generated in process 1302, only the people present at time t in the simulation and adding them to the simulation targets. At that time, the virtual flow line generation unit 111 stores the newly extracted initial human flow in the simulation DB 109.
  • Process 1305 represents a process of calculating a feature amount and predicting a human flow position at the next time using a model.
  • For the human flow to be simulated, the virtual flow line generation unit 111 calculates the feature quantities by the feature amount calculation unit 1111 in the same manner as described with reference to FIG. 11, using the simulation map tables 402, 403, and 404 stored in the map DB 108.
  • the virtual flow line generation unit 111 predicts the position of the human flow at time t + 1 by the flow line prediction unit 1112 using the calculated feature amount and the flow line prediction model for each state stored in the model DB 107.
  • the result is stored in the virtual flow line table 1201 of the simulation DB 109.
  • When a person is staying, the virtual flow line generation unit 111 may retain the person to be simulated at the same position until the time reaches t + Δt. Further, when Q_ID12017 is not -1, since the person is lined up in a matrix, the virtual flow line generation unit 111 may correct the predicted position so that the person does not come closer than a predetermined distance to another person.
  • The destination in this step is determined as follows, depending on D_ID12016 and Q_ID12017 of the human flow at time t and the virtual matrix state table 1202 of the simulation DB 109.
  • When Q_ID12017 is -1 and D_ID12016 is other than a matrix type checkpoint:
  • the destination is a point included in the checkpoint corresponding to D_ID12016.
  • A plurality of checkpoints may exist as destination candidates within one KID4032; at this time, the checkpoint closer to the human flow may be chosen from among the candidates, or a checkpoint where no one is staying may be chosen.
  • D_ID12016 is a matrix type checkpoint and no one is lined up in the corresponding matrix at time t
  • the destination is a point included in the matrix checkpoint corresponding to D_ID12016.
  • As in the above case, a plurality of checkpoints may exist as destination candidates within one KID4032; at this time, the checkpoint closer to the human flow may be chosen from among the candidates, or a checkpoint at which no people are lined up may be chosen.
  • D_ID12016 is a matrix type checkpoint and people are lined up in the corresponding line at time t
  • The person at the end of the line corresponding to D_ID12016 is set as the destination.
  • When a plurality of lines correspond, the destination may be the person at the end of the matrix whose distance from the human flow is shortest, or the end of the line with the smaller number of people in line may be chosen.
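  The three destination cases above can be sketched as a single selection function. The data shapes (`checkpoints` as CID-to-point, `queues` as CID-to-ordered-line) are illustrative assumptions standing in for the map DB 108 and the virtual matrix state table 1202:

```python
def select_destination(d_id, checkpoints, queues):
    """Destination selection sketch for process 1305.  `checkpoints` maps
    a CID to a representative point (e.g. its centroid); `queues` maps a
    matrix type CID to the list of (pid, position) ordered from the head.
    Returns the coordinates the person should head toward."""
    queue = queues.get(d_id)
    if queue is None:
        # Case 1: D_ID is not a matrix type checkpoint.
        return checkpoints[d_id]
    if not queue:
        # Case 2: matrix type checkpoint with nobody lined up.
        return checkpoints[d_id]
    # Case 3: people are lined up; head for the person at the end.
    return queue[-1][1]
```

  Tie-breaking among multiple candidate checkpoints or lines (by distance or by queue length) would be layered on top of this.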
  • Process 1306 represents a process of determining a state of a person flow from the predicted position of the person flow and transitioning to the next state. This process is performed by the state transition unit 1113.
  • the state determination method and transition method here are described below.
  • the state transition unit 1113 inherits the state at time t as it is without changing the state.
  • The state transition unit 1113 sets Q_ID12017 at time t + 1 to -1, except when the person stays at the checkpoint it heads to after lining up in the matrix.
  • O_ID12015 at time t + 1 is set to D_ID12016 at time t, and D_ID12016 at time t + 1 is set to the ID of the next checkpoint.
  • the state transition unit 1113 inherits the state at time t as it is without changing the state.
  • the state transition unit 1113 updates Q_ID12017 to the ID of the matrix, and updates PID_LIST12024 of the virtual matrix state table 1202.
  • the state transition unit 1113 updates both O_ID12015 and D_ID12016 to the ID of the retention type checkpoint, and puts the state in the retention state.
  • If, as a result of the prediction in process 1305, the person has stayed for Δt or more, the state transition unit 1113 updates D_ID12016 to the next destination. If the person is considered to still be staying as a result of process 1305, the state is not changed.
  • the state transition unit 1113 transitions O_ID12015 at time t + 1 to D_ID12016 at time t and D_ID12016 at time t + 1 to the ID of the next checkpoint.
  • When the initial human flow generated in process 1302 includes all the transition information of the destinations, the ID of the checkpoint selected next when updating D_ID12016 is chosen so that the destination transitions according to that information. If all the destination transition information is not included, the destination may be transitioned stochastically according to the checkpoint transition probability table 604 of the model DB 107. If no next destination exists, the human flow is removed from the simulation.
  • Process 1307 represents a process of advancing the simulation time t by one.
  • Process 1308 represents a process of determining whether or not the simulation end condition is satisfied. If the end condition is not satisfied, the process returns to process 1304; if the end condition is satisfied, the process proceeds to process 1309.
  • The end condition may be, for example, whether a certain period of time has passed since the start of the simulation, or whether all the human flows to be simulated have reached their final destinations.
  • Process 1309 represents the end of simulation processing.
  • FIG. 14 is an explanatory diagram showing an example of the interface of the human behavior prediction system 10 of the first embodiment of the present invention.
  • the details of the screen 1401 which is an example of the interface will be described.
  • the screen 1402 included in the screen 1401 displays the flow line data stored in the flow line DB 105, the virtual flow line data stored in the simulation DB 109, and the map tables 402, 403, and 404 stored in the map DB 108.
  • Each flow line data item is drawn by rendering the LINESTRING of each record; a marker may be attached to the end point of the LINESTRING, or the color may be changed for each state.
  • The drawn data can be selected on the screen; when selected, the display may be changed, for example in thickness or color, as with the flow line data 14021.
  • Button 1403 is a button for measuring flow line data. When this button is operated, the measurement system 11 acquires the data in the target area, and the flow line data is stored in the flow line DB 105.
  • Button 1404 is a button for inputting map data. When this button is operated, a dialog is launched and the user 12 can enter information about the map tables 402, 403 and 404. The input information is stored in the map DB 108 by the map data input unit 103.
  • Button 1405 is a button for performing a state determination process on the measured flow line data. When this button is operated, the process of the state determination unit 102 is executed.
  • Button 1406 is a button for switching the display of flow line data. Since there are two types of flow line data, data stored in the simulation DB 109 and data stored in the flow line DB 105, these can be switched by operating the button 1406.
  • Button 1407 is a button for learning a flow line prediction model from the flow line data stored in the flow line DB 105. When this button is operated, the process of the behavior model learning unit 110 is executed.
  • Button 1408 is a button for generating an initial human flow. When this button is operated, a dialog box is displayed, and the demand information of the human flow described in the process 1302 can be input, and then the process of the initial human flow generation unit 104 is executed.
  • Button 1409 is a button for executing a simulation according to the learned flow line prediction model. When this button is operated, the processing of the virtual flow line generation unit 111 is executed.
  • Button 1410 is a button for displaying detailed information of the data selected on the screen 1402. Specifically, when this button is operated, one or more of the databases among the flow line DB 105, the simulation DB 109, the state DB 106, and the map DB 108 corresponding to the selected data are called, and the information of the selected data is displayed. In the example of FIG. 14, since the flow line data 14021 is selected, a dialog box containing information of the flow line table 401 and the flow line state table 501 corresponding to this data is launched.
  • Button 1411 is a stop button operated when playing back the flow line data. When the flow line data is being played on the screen, the playback can be stopped by operating this button.
  • Button 1412 is a flow line data playback button. When this button is operated, the process of continuously reproducing the flow line data every time is performed.
  • Button 1413 is a rewind button operated when playing back the flow line data.
  • the displayed time can be rewound by operating this button.
  • The progress bar 1414 is a bar indicating the playback time position of the flow line data. The user 12 referring to this screen can directly move the bar to shift the time at which the flow line data is displayed.
  • Button 1415 is a fast-forward button operated when playing back flow line data. When the flow line data is displayed on the screen, the displayed time can be fast-forwarded by operating this button.
  • Text 1416 is a text box indicating the time when the flow line data is currently displayed. The user can directly edit this text 1416 to change the playback time.
  • Check box 1417 is a check box for switching between displaying the flow line data all at once or displaying it per time. When this check box 1417 is checked, all the flow line data is displayed, and the buttons 1411 to 1415 and the text 1416 cannot be operated.
  • In Example 1, a system that learns a human flow line prediction model and generates virtual human flow lines based on the model has been described. A person is an example of a moving body, and this embodiment can also be applied to moving bodies other than people, for example, ships or vehicles. Therefore, the human behavior prediction system may be read as a mobile body movement prediction system.
  • Example 2 of the present invention will be described with reference to the drawings. Except for the differences described below, each part of the human behavior prediction system of Example 2 has the same function as the part with the same reference numeral in Example 1, and description thereof is therefore omitted.
  • The retention state in Example 1 of the present invention refers to retention within a retention type checkpoint and retention while lining up in a line. In Example 2, the simulation additionally considers the dialogue state between people as a staying state.
  • FIG. 15 is a block diagram showing the basic configuration of the human behavior prediction system 10 according to the second embodiment of the present invention.
  • The measurement system 11, the flow line data extraction unit 101, the map data input unit 103, the initial human flow generation unit 104, the behavior model learning unit 110, and the virtual flow line generation unit 111 are the same as those of the first embodiment, and description thereof is therefore omitted.
  • the state determination unit 102 receives the flow line data stored in the flow line DB 105 and the checkpoint data stored in the map DB 108, determines the state of the flow line data, and stores the determination result in the state DB 1502. It has a function.
  • the state determination unit 102 includes a matrix determination unit 1021, an OD analysis unit 1022, and a dialogue determination unit 1501.
  • the matrix determination unit 1021 and the OD analysis unit 1022 are the same as those in the first embodiment. The specific processing of the dialogue determination unit 1501 will be described later.
  • the data structures of the state DB 1502 and the model DB 1503 will be described later.
  • FIG. 16A is an explanatory diagram showing an example of a data structure of the state DB 1502 according to the second embodiment of the present invention.
  • A dialogue ID is assigned to each dialogue phenomenon regardless of location, time, and number of people. For example, if two people are talking and another person joins so that three people continue the dialogue, the same GID150217 is assigned to all the corresponding people from the time the two people start the dialogue until the three people finish it. That is, while only the first two people are interacting, the GID is assigned only to those two, and from the time the third person joins, the same GID is assigned to all three.
  • FIG. 16B is an explanatory diagram showing an example of a data structure of the model DB 1503 of the second embodiment of the present invention.
  • The model DB 1503 newly stores the dialogue model table 15031. This is a table that stores information representing how easily a dialogue occurs at each location.
  • SHAPE_ID150311 represents the ID of the grid that is a candidate for the dialogue place
  • WKT150312 is the geometry information of the grid.
  • Prob150313 quantifies how easily a dialogue occurs at that location, and may be, for example, the probability that a dialogue occurs there.
  • When the behavior model learning unit 110 learns the dialogue model, it may generate the table by counting, for each grid, the number of times a dialogue has occurred, converting the count into a probability, and storing it.
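  This count-and-normalize learning of the dialogue model can be sketched as follows; the grid size and function name are illustrative assumptions:

```python
from collections import Counter

def learn_dialogue_model(dialogue_points, grid=1.0):
    """Sketch of learning the dialogue model table 15031: count dialogue
    occurrences per grid cell and normalize the counts into Prob values.
    `dialogue_points` are (x, y) positions where dialogues were observed.
    Returns a mapping from grid cell index to probability."""
    counts = Counter((int(x // grid), int(y // grid))
                     for x, y in dialogue_points)
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}
```

  During simulation, the Prob value of the cell containing a person could then drive the stochastic generation of dialogues, as described next.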
  • During the simulation, a dialogue may be generated stochastically using the value of Prob150313 corresponding to the place. As the dialogue time, the dialogue duration extracted from the dialogue determination results of the actual flow line data may be used.
  • FIG. 17 is a flowchart of the process performed by the state determination unit 102 of the second embodiment of the present invention.
  • the process 1701 represents the start of the process of the state determination unit 102.
  • Processes 1702 to 1705 are the same as processes 702 to 705.
  • Process 1706 is a process for performing dialogue determination from the retention state of flow line data.
  • when the dialogue determination unit 1501 examines the flow line data and finds a plurality of people moving at or below a predetermined speed within a predetermined distance of one another, it determines that a dialogue is taking place. This process will be described with reference to the illustration 1709.
  • the person 1710 is determined to be having a dialogue because there are a plurality of people moving at or below the predetermined speed within the area 1711, which is within the predetermined distance range. Although the person 1712 is moving at or below the predetermined speed, that person is not determined to be having a dialogue because there is no other person at or below the predetermined speed within the predetermined distance.
  • the person 1713 exists in the area 1711, which is within the predetermined distance range, but is not moving at or below the predetermined speed, so that person is not determined to be having a dialogue.
  • information on the orientation of each person may also be used in the dialogue determination. For example, if the distance and speed conditions are met, and the angle between the direction each person faces and the direction toward the other person is smaller than a predetermined value, it may be determined that a dialogue is taking place.
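The distance, speed, and orientation conditions above can be sketched as a small predicate. This is a hedged illustration: the thresholds, the data layout, and the pairwise (two-person) simplification are assumptions, not values from the patent.

```python
# Sketch of the dialogue determination: two people are judged to be in
# dialogue if they are within DIST_MAX of each other, both move at or below
# SPEED_MAX, and (optionally) face each other within ANGLE_MAX.
# All thresholds are illustrative assumptions.
import math

DIST_MAX = 1.5               # metres (assumed)
SPEED_MAX = 0.3              # metres/second (assumed)
ANGLE_MAX = math.radians(45) # assumed

def facing(p, q):
    """Angle between p's heading and the direction from p toward q."""
    to_q = math.atan2(q["y"] - p["y"], q["x"] - p["x"])
    diff = abs(p["heading"] - to_q) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

def in_dialogue(p, q, use_orientation=False):
    dist = math.hypot(p["x"] - q["x"], p["y"] - q["y"])
    if dist > DIST_MAX or p["speed"] > SPEED_MAX or q["speed"] > SPEED_MAX:
        return False
    if use_orientation:
        # Both people must roughly face each other.
        return facing(p, q) < ANGLE_MAX and facing(q, p) < ANGLE_MAX
    return True

a = {"x": 0.0, "y": 0.0, "speed": 0.1, "heading": 0.0}
b = {"x": 1.0, "y": 0.0, "speed": 0.2, "heading": math.pi}
c = {"x": 0.5, "y": 0.0, "speed": 1.2, "heading": 0.0}
# a and b are close, slow, and facing each other -> dialogue.
# c exceeds the speed threshold -> no dialogue.
```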
  • the process 1707 is a process for storing the state data in the state DB 1502.
  • the state determination unit 102 integrates the results of the processes 1702 to 1705 and stores, for each person, information on when and from which checkpoint to which checkpoint the person moved, at which matrix-type checkpoint the person was lined up, at which retention-type checkpoint the person was staying, and whether a dialogue took place, in the flow line state table 15021 of the state DB 1502.
  • the process 1708 represents the end of the process of the state determination unit 102.
  • FIG. 18 is a block diagram showing a hardware configuration of each device constituting the measurement system 11 and the human behavior prediction system 10 of the first and second embodiments of the present invention.
  • the measurement system 11 is composed of one or more of the laser measurement system 2001, the camera system 2002, and the terminal positioning system 2003.
  • the laser measurement system 2001 comprises a laser oscillator 2011 that emits laser light, a laser receiver 2001 that reads the reflected laser light, and an arithmetic unit 2013 that obtains, from the laser emission, the time required to receive the reflected light, and the like, the distances to objects around the laser measurement system 2001 and converts them into point cloud data.
  • the camera system 2002 is a system provided with a general camera: visible light is captured as an image by the image sensor 20021, and the arithmetic unit 20022 detects a person in the image and estimates the person's position by a known method.
  • the terminal positioning system 2003 includes a processor 20031, a storage device 20032, a monitor 20033, a GPS receiver 20034, a DRAM 20033, an input device 20033, and a wireless communication board 20033.
  • the processor 20031 has arithmetic performance.
  • the DRAM 20033 is a volatile temporary storage area that can be read and written at high speed.
  • the storage device 20032 is a permanent storage area using a hard disk drive (HDD), a flash memory, or the like.
  • the input device 20003 accepts human operations.
  • Monitor 20033 presents the current status of the terminal.
  • the wireless communication board 20033 is a network interface card for performing wireless communication.
  • the GPS receiver 20034 specifies the position of the terminal.
  • when the processor 20031 executes a program recorded in a storage area such as the DRAM 20033, it estimates its own position using the GPS receiver 20034 or the like and distributes the position via the wireless communication board 20033.
  • the human behavior prediction system 10 includes a processor 112, a storage device 113, a monitor 114, a DRAM 115, an input device 116, and a NIC 117.
  • the processor 112 has arithmetic performance.
  • the DRAM 115 is a volatile temporary storage area that can be read and written at high speed.
  • the storage device 113 is a permanent storage area using an HDD, a flash memory, or the like.
  • the input device 116 accepts human operations.
  • Monitor 114 presents information.
  • NIC117 is a network interface card for performing communication.
  • by the processor 112 executing the programs recorded in a storage area such as the DRAM 115, the flow line data extraction unit 101, the state determination unit 102, the map data input unit 103, the initial human flow generation unit 104, the behavior model learning unit 110, and the virtual flow line generation unit 111 can be realized. That is, the processing executed by each of the above units in Examples 1 and 2 is actually executed by the processor 112 according to the programs. Further, the flow line DB 105, the state DB 106, the model DB 107, the map DB 108, and the simulation DB 109 can be realized by storing them in the storage device 113.
  • the human behavior prediction system 10 may be realized by, for example, one computer having the configuration shown in FIG. 18, or by a plurality of computers.
  • the information held by the above-mentioned human behavior prediction system 10 may be distributed and stored in a plurality of storage devices 113 or DRAMs 115, and the functions of the above-mentioned human behavior prediction system 10 may be distributed among, and executed by, the processors of a plurality of computers.
  • the above embodiment of the present invention may include the following examples.
  • a mobile body movement prediction system including a processor (for example, the processor 112) and a storage device (for example, at least one of the storage device 113 and the DRAM 115), wherein the storage device holds flow line information including position information of the moving body for each time (for example, the flow line DB 105), map information of the space in which the moving body can move (for example, the map DB 108), and state information indicating a state related to the movement of the moving body (for example, the state DB 106).
  • the processor learns a movement model that predicts the destination of the moving body based on the flow line information, the map information, and the state of the moving body (for example, the processing of FIG. 10), and generates a virtual flow line of the moving body based on the initial conditions of the moving body and the movement model (for example, the processing of FIG. 13).
  • the map information includes information on the positions and types of a plurality of checkpoints (for example, the map table 403), at each of which the moving body can at least pass, stay, or line up in a matrix.
  • the result of determining the state of the moving body from the positional relationship between the moving body and the checkpoints, the checkpoint types, and the speed of the moving body, based on the flow line information and the map information, is stored in the storage device as the state information (for example, the processing of FIG. 7).
  • the map information includes information indicating whether the type of each checkpoint is a retention type, and the states of the moving body include a state of staying at a retention-type checkpoint.
  • the processor determines that a moving body whose positional relationship with a retention-type checkpoint satisfies a predetermined condition and whose moving speed is at or below a predetermined value is in the state of staying at that retention-type checkpoint (for example, the process 703 in FIG. 7).
  • the map information includes information indicating whether the type of each checkpoint is a matrix type, and the states of the moving body include a state of being lined up in the matrix of a matrix-type checkpoint.
  • the processor determines that a moving body whose positional relationship with a matrix-type checkpoint satisfies a predetermined condition and whose moving speed is at or below a predetermined value is in the state of being lined up in the matrix of that matrix-type checkpoint (for example, the process 704 in FIG. 7).
  • based on the positional relationship among a matrix-type checkpoint, the point of interest of the matrix at that checkpoint (for example, the place where a service is provided to the people in the matrix), and the moving bodies, the processor identifies the moving body at the head of the matrix (for example, the process 902 in FIG. 9), and identifies the order of the moving bodies lined up in the matrix by recursively identifying, from the positional relationship between each moving body whose order in the matrix has been identified and the other moving bodies, the moving body next in line (for example, the process 903 in FIG. 9).
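The head-then-recursive identification described above can be sketched as follows. This is an illustrative reading, not the patent's implementation: the nearest-neighbour rule, the link-distance threshold, and the data layout are assumptions.

```python
# Sketch of queue-order identification: the person nearest the matrix's point
# of interest (e.g. the service point) is taken as the head (process 902),
# then the nearest remaining person within LINK_DIST of the current tail is
# appended recursively (process 903). Thresholds are illustrative.
import math

LINK_DIST = 2.0  # assumed max distance between neighbours in the matrix

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def queue_order(people, service_point):
    """people: dict person_id -> (x, y). Returns person IDs head-first."""
    remaining = dict(people)
    # Head of the matrix: closest to the point of interest.
    head = min(remaining, key=lambda pid: dist(remaining[pid], service_point))
    order = [head]
    pos = remaining.pop(head)
    # Recursively identify the person next in line behind the current tail.
    while remaining:
        nxt = min(remaining, key=lambda pid: dist(remaining[pid], pos))
        if dist(remaining[nxt], pos) > LINK_DIST:
            break  # nobody close enough to still be part of the matrix
        order.append(nxt)
        pos = remaining.pop(nxt)
    return order

people = {"A": (1.0, 0.0), "B": (2.0, 0.0), "C": (3.2, 0.0), "D": (9.0, 9.0)}
print(queue_order(people, service_point=(0.0, 0.0)))  # ['A', 'B', 'C']
```

Person D is excluded because no chain of neighbours within the link distance connects D to the tail of the matrix.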
  • the feature amounts input to the movement model include a feature amount related to the positional relationship between the moving body and its destination. When no moving bodies are lined up at a matrix-type checkpoint, the destination of a moving body heading for that checkpoint is any point belonging to the checkpoint. When one or more moving bodies are lined up at the matrix-type checkpoint, the destination of a moving body heading for that checkpoint is the moving body lined up at the end of the matrix, and the destination of the moving body lined up at the head of the matrix is any point belonging to the checkpoint. When two or more moving bodies are lined up at the matrix-type checkpoint, the destination of each moving body lined up second or later in the matrix is the moving body lined up immediately in front of it.
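The destination rules above reduce to a short case analysis. The sketch below is an assumed encoding (the matrix as a head-first list, the checkpoint as a list of points); the patent does not prescribe this representation.

```python
# Sketch of the destination rules for one matrix-type checkpoint.
# `queue` is the list of moving bodies lined up in the matrix, head first.
def destination(mover, queue, checkpoint_points):
    """Return the destination of `mover` with respect to the checkpoint."""
    if mover not in queue:
        # Heading for the checkpoint: join at any point if the matrix is
        # empty, otherwise head for the body at the end of the matrix.
        return checkpoint_points if not queue else queue[-1]
    if mover == queue[0]:
        # Head of the matrix: any point belonging to the checkpoint
        # (e.g. where the service is provided).
        return checkpoint_points
    # Second or later in the matrix: follow the body immediately in front.
    return queue[queue.index(mover) - 1]

queue = ["p1", "p2", "p3"]
assert destination("p9", queue, ["cp"]) == "p3"    # newcomer targets the tail
assert destination("p1", queue, ["cp"]) == ["cp"]  # head targets the checkpoint
assert destination("p3", queue, ["cp"]) == "p2"    # follower targets the one ahead
```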
  • the moving body is a person
  • the processor includes in the state information and stores in the storage device the result of determining, from the flow line information, the occurrence of a dialogue between a plurality of people based on the duration of a state in which the distances between the people and the moving speed of each person satisfy predetermined conditions (for example, the processing of FIG. 17).
  • the processor calculates, for each region of the space, the probability of occurrence of a dialogue based on the results of determining the occurrence of dialogues between a plurality of people, and predicts the occurrence of dialogues based on the generated virtual flow lines and the dialogue occurrence probabilities.
  • the processor learns a movement model for each state of the moving body based on the flow line information and the map information classified according to the state of the moving body.
  • the processor learns a movement model in which the state of the moving body is input as a feature quantity.
  • the present invention is not limited to the above-described embodiment, but includes various modifications.
  • the above-mentioned examples have been described in detail for a better understanding of the present invention, and the present invention is not necessarily limited to those having all the described configurations.
  • it is possible to replace a part of the configuration of one embodiment with the configuration of another embodiment, and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
  • each of the above configurations, functions, processing units, processing means, and the like may be realized by hardware by designing a part or all of them as, for example, an integrated circuit. Further, each of the above configurations, functions, and the like may be realized by software by the processor interpreting and executing a program that realizes each function. Information such as programs, tables, and files that realize each function can be stored in a storage device such as a non-volatile semiconductor memory, hard disk drive, or SSD (Solid State Drive), or in a computer-readable non-transitory data storage medium such as an IC card, SD card, or DVD.
  • the control lines and information lines shown are those considered necessary for explanation, and not all control lines and information lines in the product are necessarily shown. In practice, almost all configurations may be considered to be interconnected.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Traffic Control Systems (AREA)

Abstract

This mobile body movement prediction system has a processor and a storage device. The storage device holds: traffic line information that includes position information of a mobile body at each time; map information that includes information about the positions of a plurality of checkpoints, in a space where the mobile body can move, at each of which the mobile body can pass and/or stay; and state information that indicates a state related to movement of the mobile body. The processor trains a movement model for predicting a movement destination of the mobile body on the basis of the traffic line information, the map information, and the state of the mobile body, and generates a virtual traffic line of the mobile body on the basis of an initial condition of the mobile body and the movement model.

Description

Mobile body movement prediction system and mobile body movement prediction method

Incorporation by Reference
 This application claims the priority of Japanese Patent Application No. 2020-86998, filed on May 18, 2020, and the contents thereof are incorporated into this application by reference.
 The present invention relates to a technique for predicting the movement of a moving body such as a person.
 At facilities such as airports and factories, measures such as increasing or decreasing the number of counters and arranging information boards are required to help users avoid congestion. In communication spaces such as offices, it is required to activate communication among users by implementing measures such as layout changes.
 By statistically analyzing flow line data acquired using a laser radar, a camera, or the like, it is possible to grasp users' degree of congestion and communication activity. However, since no flow line data exists for a measure before it is implemented, generating flow line data by simulation is required in order to evaluate the measure in advance.
 As prior art for predicting human behavior, a technique such as that shown in Patent Document 1 is known. As prior art for predicting flow lines, a technique such as that shown in Patent Document 2 is known.
 Patent Document 1 discloses a technique for estimating a movement frequency model that outputs the movement frequency between areas from area attributes and inter-area attributes, in order to model the movement of a crowd in its activity area.
 Patent Document 2 discloses a technique of generating a behavior model by inverse reinforcement learning from flow line data and product placement information, and predicting the flow of people after the product placement is changed.
  Patent Document 1: Japanese Patent Application Laid-Open No. 2006-221329
  Patent Document 2: International Publication No. 2018/131214
 Since the measured flow line data does not include each person's destination information, it is necessary to divide the flow line data by OD (Origin-Destination) pair before learning a flow line prediction model. However, because the measured flow line data includes retention states in addition to movement, OD division alone lowers the accuracy of the flow line simulation. Furthermore, in order to evaluate queue waiting times at an airport, communication activity in an office, and the like, simulation of retention while waiting in a queue or engaging in a dialogue is also required.
 In order to solve at least one of the above problems, the present invention is a mobile body movement prediction system including a processor and a storage device, wherein the storage device holds flow line information including position information of a moving body at each time, map information including information on the positions of a plurality of checkpoints, in a space in which the moving body can move, at each of which the moving body can at least pass or stay, and state information indicating a state related to the movement of the moving body, and the processor learns a movement model that predicts the destination of the moving body based on the flow line information, the map information, and the state of the moving body, and generates a virtual flow line of the moving body based on initial conditions of the moving body and the movement model.
 According to one aspect of the present invention, the flow line simulation can be made highly accurate. Thereby, the influence of a layout design on flow lines can be evaluated in advance. Problems, configurations, and effects other than those described above will be clarified by the description of the following examples.
Brief Description of Drawings

  • A block diagram showing the basic configuration of the human behavior prediction system of Example 1 of the present invention.
  • A sequence diagram showing the processing performed at the time of model learning in the human behavior prediction system of Example 1 of the present invention.
  • A sequence diagram showing the processing performed at the time of simulation in the human behavior prediction system of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the flow line DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the map DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the map DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the map DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the state DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the state DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the model DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the model DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the model DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the model DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the model DB of Example 1 of the present invention.
  • A flowchart of the processing performed by the state determination unit of Example 1 of the present invention.
  • An explanatory diagram of the processing performed by the OD analysis unit of Example 1 of the present invention.
  • An explanatory diagram of the processing performed by the OD analysis unit of Example 1 of the present invention.
  • A flowchart and a concrete explanatory diagram of the processing performed by the matrix determination unit of Example 1 of the present invention.
  • A flowchart of the processing performed by the behavior model learning unit of Example 1 of the present invention.
  • An explanatory diagram of the processing performed by the feature amount calculation unit of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the simulation DB of Example 1 of the present invention.
  • An explanatory diagram showing an example of the data structure of the simulation DB of Example 1 of the present invention.
  • A flowchart showing the simulation performed by the initial human flow generation unit and the virtual flow line generation unit of Example 1 of the present invention.
  • An explanatory diagram showing an example of the interface of the human behavior prediction system of Example 1 of the present invention.
  • A block diagram showing the basic configuration of the human behavior prediction system of Example 2 of the present invention.
  • An explanatory diagram showing an example of the data structure of the state DB of Example 2 of the present invention.
  • An explanatory diagram showing an example of the data structure of the model DB of Example 2 of the present invention.
  • A flowchart of the processing performed by the state determination unit of Example 2 of the present invention.
  • A block diagram showing the hardware configuration of each device constituting the measurement system and the human behavior prediction system of Examples 1 and 2 of the present invention.
 Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.
 The present invention relates to a system for predicting a person's destination. The target of prediction by this system is not limited to people and may be a moving body such as a ship or a car. Here, the description focuses on people.
 The present invention learns a flow line prediction model from flow line data measured in a facility such as an airport using a sensor such as a laser radar or a camera, and performs a flow line simulation using that model. Here, the flow line prediction model is, for example, a model that predicts the destination of a moving body such as a person after a predetermined time (for example, one second). The measured flow line data includes data in various states, such as a moving state and a retention state. Here, the moving state is a state in which the moving body is moving, such as a person walking, and the retention state is a state in which the moving body stays in a certain place. Both are examples of states related to movement. Because the measured flow line data includes data in such various states, performing a flow line simulation with a single model lowers its accuracy. Therefore, the flow line prediction model is learned and the flow line simulation is performed using flow line data divided by state. The state referred to here includes, in addition to the moving state and the retention state, information on transitions between checkpoints. A checkpoint is, for example, a retention point or a passing point of a moving body in the target area. The information on each checkpoint may include a pair of information indicating its position and information indicating its type.
 FIG. 1 is a block diagram showing the basic configuration of the human behavior prediction system according to the first embodiment of the present invention.
 The human behavior prediction system 10 of this embodiment has a function of receiving measurement data from the measurement system 11 and map data from the user 12, dividing the flow line data extracted from the measurement data by state, and then learning a flow line prediction model, and a function of performing a flow line simulation using the learned flow line prediction model after receiving initial parameters for the flow line simulation from the user 12.
 In order to realize the above functions, the human behavior prediction system 10 includes a flow line data extraction unit 101, a flow line database (DB) 105, a map data input unit 103, a map DB 108, a state determination unit 102, a state DB 106, a behavior model learning unit 110, a model DB 107, an initial human flow generation unit 104, a virtual flow line generation unit 111, and a simulation DB 109.
 The flow line data extraction unit 101 extracts flow line data from the data measured by the measurement system 11 and stores it in the flow line DB 105. The data measured by the measurement system 11 may be moving image data of the target area or laser radar data. When extracting flow line data from moving image data, a person may be recognized by a known image processing technique and the person's coordinates may be extracted. When extracting flow line data from laser radar data, a moving body whose distance from the sensor changes may be extracted as a person using a known technique and converted into coordinate information.
 The map data input unit 103 has a function of receiving map information about a target area such as a facility from the user 12 and storing it in the map DB 108. In order to realize this function, the map data input unit 103 includes a wall layout input unit 1031, a checkpoint input unit 1032, and a matrix layout input unit 1033. The wall layout input unit 1031 receives layout information on objects that a person cannot pass through, such as walls in the target area. The checkpoint input unit 1032 receives layout information on the checkpoints of the target area. The matrix layout input unit 1033 receives, among the layout information on checkpoints, additional layout information on matrices.
 The state determination unit 102 has a function of receiving the flow line data stored in the flow line DB 105 and the checkpoint data stored in the map DB 108, determining the state of the flow line data, and storing the determination results in the state DB 106. In order to realize this function, the state determination unit 102 includes a matrix determination unit 1021 that determines, from the flow line data, the state of being lined up in a matrix, and an OD analysis unit 1022 that determines, from the flow line data, the transition state between checkpoints. The specific processing of each unit will be described later.
 The behavior model learning unit 110 receives the flow line data stored in the flow line DB 105, the matrix state data and checkpoint transition data stored in the state DB 106, and the layout data stored in the map DB 108, learns a flow line prediction model, and stores the learned flow line prediction model in the model DB 107. In order to realize this function, the behavior model learning unit 110 includes a flow line data division unit 1101, a feature amount calculation unit 1102, and a model learning unit 1103. The specific processing of each unit will be described later.
 After receiving initial parameters for the flow line simulation from the user 12, the initial human flow generation unit 104 generates an initial human flow and stores it in the simulation DB 109. The specific processing will be described later.
 仮想動線生成部111は、シミュレーションDB109から現在の動線データと、地図DB108からレイアウトデータと、モデルDB107から学習した動線予測モデルと、を受信し、1ステップ後の動線データを予測してシミュレーションDB109に格納する。具体的な処理内容については後述する。 The virtual flow line generation unit 111 receives the current flow line data from the simulation DB 109, the layout data from the map DB 108, and the flow line prediction model learned from the model DB 107, and predicts the flow line data one step later. And store it in the simulation DB 109. The specific processing content will be described later.
 動線DB105、状態DB106、モデルDB107、地図DB108、シミュレーションDB109に格納されるデータについては後述する。 The data stored in the flow line DB 105, the state DB 106, the model DB 107, the map DB 108, and the simulation DB 109 will be described later.
 FIG. 2 is a sequence diagram showing the processing performed at the time of model learning in the human behavior prediction system 10 according to the first embodiment of the present invention. The overall processing is described below.
 When learning the flow line prediction model, the measurement system 11 first transmits moving image data or laser radar data for a target area, such as the inside of a facility, to the flow line data extraction unit 101 (201). The flow line data extraction unit 101 extracts flow line data from the received data (202) and stores it in the flow line DB 105 (203).
 Meanwhile, the user 12 inputs a group of map data, such as wall layout data, checkpoint data, and queue layout data for the target area, into the map data input unit 103 (214). The map data input unit 103 stores the input map data group in the map DB 108 (213). Either of processes 203 and 213 may be performed first, or they may be performed simultaneously.
 After processes 203 and 213 have been performed, the state determination unit 102 receives the measured flow line data from the flow line DB 105 (204) and the map data group from the map DB 108 (208), performs state determination (205), and stores the determined state data in the state DB 106 (206).
 After that, the behavior model learning unit 110 receives the measured flow line data from the flow line DB 105 (207), the determined state data from the state DB 106 (209), and the map data group from the map DB 108 (211), learns the flow line prediction model (210), and stores the information of the learned flow line prediction model in the model DB 107 (212).
 FIG. 3 is a sequence diagram showing the processing performed at the time of simulation in the human behavior prediction system 10 according to the first embodiment of the present invention.
 Before performing the processing shown in FIG. 3, the processing for learning the flow line prediction model described with reference to FIG. 2 must have been performed. The overall processing is described below.
 When a simulation is run with a map data group different from the one used at model learning time, for example to evaluate a planned measure, the map data input unit 103 first stores the map data reflecting the measure in the map DB 108 (301). If the same map data group as at model learning time is used, this step is omitted.
 After that, the initial human flow generation unit 104 generates the human flow information at the start of the simulation (306) and stores it in the simulation DB 109 (305). The virtual flow line generation unit 111 then receives the map data group from the map DB 108 (302) and the flow line prediction model from the model DB 107 (304), and executes the simulation (303).
 In doing so, the virtual flow line generation unit 111 first receives the flow line data and state data for the current human flow from the simulation DB 109 (307), predicts the flow line data and state data for the next time step (303), and stores the predicted flow line data and state data in the simulation DB 109 (308). This is repeated until a termination condition is satisfied (309).
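 The prediction loop of processes 307, 303, 308, and 309 can be sketched as follows. This is a minimal illustrative sketch, not the actual implementation: the names `run_simulation`, `predict_next_step`, and `termination_reached`, and the use of a plain dictionary as the simulation DB, are all assumptions for illustration, and here the termination condition is simply a step budget.

```python
# Hypothetical sketch of the simulation loop of the virtual flow line
# generation unit 111 (processes 307, 303, 308, 309). All names are
# illustrative assumptions, not part of the described system.
def run_simulation(sim_db, map_data, model, max_steps=100):
    step = 0
    state = sim_db["current"]  # process 307: current flow line / state data
    while not termination_reached(step, max_steps):           # process 309
        state = predict_next_step(state, map_data, model)     # process 303
        sim_db.setdefault("history", []).append(state)        # process 308
        step += 1
    return sim_db

def termination_reached(step, max_steps):
    # Stand-in termination condition: a fixed number of time steps.
    return step >= max_steps

def predict_next_step(state, map_data, model):
    # Placeholder: apply the learned prediction model for one time step.
    return {"t": state["t"] + 1, "people": state["people"]}
```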
 FIG. 4A is an explanatory diagram showing an example of the data structure of the flow line DB 105 according to the first embodiment of the present invention.
 FIGS. 4B to 4D are explanatory diagrams showing examples of the data structure of the map DB 108 according to the first embodiment of the present invention.
 The flow line DB 105 stores the flow line data acquired by the measurement system 11, and the map DB 108 stores the map data group prepared in advance or input by the user 12.
 First, the flow line DB 105 is described. The flow line DB 105 stores a flow line table 401 (FIG. 4A), which holds the flow line data. The flow line table 401 stores, for example, data obtained by sampling the coordinates of the flow line for each person at fixed time intervals (for example, every second).
 START_TIME 4011 and END_TIME 4012 represent the start time and end time of a flow line data sample, respectively, and PID 4013 represents the ID of a person. WKT 4014 represents the geometry information of the line string of the flow line along which the person with the corresponding PID 4013 moved between START_TIME 4011 and END_TIME 4012. In the example of FIG. 4A, WKT 4014 stores two-dimensional coordinate values indicating the positions of the person with the corresponding PID 4013 at START_TIME 4011 and END_TIME 4012. The coordinate system of this geometry information may be arbitrary; for example, a plane rectangular coordinate system may be used.
 The flow line table 401 may also include information on the orientation of each person, specifically, information expressing as an angle the direction in which the person with PID 4013 is facing at END_TIME 4012. When measurement is performed with a laser radar, this angle information may be the result of detecting orientation from the 3D point cloud data using a known technique such as pose estimation.
 When measurement is performed with a camera, the result of detecting orientation using a known image processing technique may be stored. Alternatively, the result of estimating orientation from the flow line data may be stored. In that case, the orientation may be detected by computing the difference vector of the coordinates along which the person with PID 4013 moved from START_TIME 4011 to END_TIME 4012, and the result may be stored.
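 The difference-vector estimation above can be sketched in a few lines: the heading angle is obtained from the displacement between the position at START_TIME and the position at END_TIME. The function and argument names are illustrative assumptions.

```python
import math

# Minimal sketch of orientation estimation from flow line data: the heading
# is the angle of the difference vector between the START_TIME position and
# the END_TIME position, in degrees in [0, 360).
def estimate_heading(start_xy, end_xy):
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    if dx == 0 and dy == 0:
        return None  # the person did not move; heading is undefined
    return math.degrees(math.atan2(dy, dx)) % 360.0
```

When consecutive samples coincide (the person stood still), no direction can be derived from the difference vector, so the sketch returns None rather than a spurious angle.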
 Next, the map DB 108 is described. The map DB 108 stores a map table 402 (FIG. 4B) representing areas impassable to people, such as walls, a map table 403 (FIG. 4C) concerning checkpoints, and a map table 404 (FIG. 4D) storing additional layout information for the queue-type checkpoints among them. A checkpoint indicates a place where people stop while moving within the target area, or a passing point to which some attribute is attached. For example, when the inside of an airport facility is considered as the target area, checkpoints correspond to the entrance, the automatic check-in (CI) machines, and the like.
 The map table 402 shown in FIG. 4B stores WID 4021, which is the ID of an object such as a wall, KIND 4022, which represents the type of the object, and WKT 4023, which represents the position information and shape information of the object.
 WKT 4023 expresses the coordinates representing the object shape as a geometry such as a polygon or a line string. As with WKT 4014, the coordinate system may be arbitrary.
 The map table 403 shown in FIG. 4C stores CID 4031 representing the ID of a checkpoint, KID 4032 representing the ID of the checkpoint kind, NAME 4033 representing the name of the checkpoint, WKT 4034 representing the position information and shape information of the checkpoint, and TYPE 4035 representing the type of the checkpoint.
 The type of a checkpoint is one of pass-through (Pass), dwell (Stay), and queue (Queue). For example, a checkpoint such as an entrance is merely passed through rather than stopped at, so Pass is stored in TYPE 4035 for such a checkpoint. A checkpoint such as a CI counter is a place where people stop, so Stay is stored in TYPE 4035 for such a checkpoint. In contrast, Queue is stored in TYPE 4035 for a checkpoint corresponding to the queue in front of a CI counter.
 For a queue-type checkpoint, WKT 4034 stores not the area in which people may line up but geometry information indicating the head line of the queue. Note that checkpoints with different CID 4031 but the same KID 4032 can exist, since, for example, a facility may have multiple entrances.
 The map table 404 shown in FIG. 4D stores additional layout information 4041 to 4044 for the queue-type checkpoints stored in the map table 403. CID 4041 is the ID of a checkpoint and is equivalent to CID 4031 stored in the map table 403; however, only the IDs of queue-type checkpoints are stored in the map table 404.
 S_WKT 4042 is, for the queue corresponding to CID 4041, the geometry information of the place where service is received after waiting in the queue. This position lies on the opposite side of the queue's head line from the side on which people line up. For example, S_WKT 4042 for the queue in front of a CI counter corresponds to the position of the counter at which the people in the queue actually receive service afterwards.
 P_WKT 4043 is geometry information representing partitions for keeping the queue in order. Such a partition may, for example, prevent interference with an adjacent queue, or may arrange the people in the queue so that the queue takes a serpentine shape.
 N_ID 4044 stores a list of the CIDs 4041 to which a person who has waited in the queue of the corresponding CID 4041 may head next. A list of KIDs 4032 may be stored instead of a list of CIDs 4041.
 FIGS. 5A and 5B are explanatory diagrams showing examples of the data structure of the state DB 106 according to the first embodiment of the present invention.
 The state DB 106 stores a flow line state table 501 (FIG. 5A) and a queue state table 502 (FIG. 5B). The flow line state table 501 stores, for the measured flow line data, information on when and from which checkpoint to which checkpoint each person was heading, and whether the person was waiting in a queue. The queue state table 502 stores information on which people were waiting in each queue, and in what order.
 The flow line state table 501 shown in FIG. 5A stores START_TIME 5011, END_TIME 5012, PID 5013, O_ID 5014, D_ID 5015, and Q_ID 5016. START_TIME 5011 and END_TIME 5012 represent the date and time at which the corresponding person's state starts and ends, respectively. PID 5013 represents the ID of the corresponding person. O_ID 5014 and D_ID 5015 represent the IDs of the origin checkpoint and the destination checkpoint when the corresponding person moves between checkpoints. Q_ID 5016 represents the CID 4041 of the queue in which the corresponding person is waiting.
 For O_ID 5014 and D_ID 5015, if the state determination unit 102 cannot determine the origin or destination checkpoint, -1 or NULL may be stored for each. If the state determination unit 102 determines that the person is not waiting in any queue, -1 is stored in Q_ID 5016. When O_ID 5014 and D_ID 5015 are identical, the corresponding person is dwelling at that checkpoint; in this case, the ID of the dwell-type checkpoint at which the person is dwelling may be stored in Q_ID 5016.
 The queue state table 502 shown in FIG. 5B stores START_TIME 5021, END_TIME 5022, CID 5023, and PID_LIST 5024. START_TIME 5021 and END_TIME 5022 represent the start time and end time of the corresponding queue state, respectively. CID 5023 represents the CID 4041 of the corresponding queue. PID_LIST 5024 is a list of the IDs of the people waiting in the corresponding queue, arranged in order of waiting for service.
 FIGS. 6A to 6E are explanatory diagrams showing examples of the data structure of the model DB 107 according to the first embodiment of the present invention.
 The flow line prediction model learned by the behavior model learning unit 110 is generated for each state. The model DB 107 therefore stores model parameters for each state, together with the transition probabilities between checkpoints, the probability that each checkpoint is selected first, and so on. Specifically, the model DB 107 stores a movement model table 601 (FIG. 6A), a queue model table 602 (FIG. 6B), a checkpoint dwell model table 603 (FIG. 6C), a checkpoint transition probability table 604 (FIG. 6D), and a checkpoint initial probability table 605 (FIG. 6E).
 The movement model table 601 shown in FIG. 6A stores parameters 6011 to 6014 of the model that predicts movement between checkpoints. It does not include parameters of models concerning dwelling in a queue or at a checkpoint. Since a prediction model for flow lines other than dwelling is generated for each checkpoint transition, the model parameters are stored per checkpoint transition.
 O_ID 6011 and D_ID 6012 represent the IDs of the origin checkpoint and the destination checkpoint when a person moves between checkpoints.
 M_Param1 6013 to M_ParamM 6014 are the model parameters of the prediction model for the corresponding checkpoint transition. These model parameters are expressed as an M-dimensional vector, and may be, for example, parameters of an agent model such as the Social Force Model, or model parameters of a machine learning method such as a Gradient Boosting Regression Tree, Support Vector Regression, a Long Short-Term Memory, or a Convolutional Neural Network.
 Although model parameters are stored here for each type of checkpoint transition, a flow line prediction model may instead be generated per D_ID 6012, with model parameters stored for each D_ID 6012. In this case, since there is no corresponding O_ID 6011, NULL or -1 is stored.
 The queue model table 602 shown in FIG. 6B stores parameters 6021 to 6023 of the model that predicts dwell behavior within a queue. These model parameters are stored per queue.
 Q_ID 6021 corresponds to the CID 4041 of a queue-type checkpoint. Q_Param1 6022 to Q_ParamN 6023 are the model parameters of the prediction model for the corresponding queue. The model parameters are expressed as an N-dimensional vector and, as in the example above, may be model parameters of a machine learning method such as a Gradient Boosting Regression Tree, Support Vector Regression, or a Long Short-Term Memory. Alternatively, the model parameters may be parameters concerning the spacing between people waiting in the queue and the service waiting time, or parameters expressing these as an arbitrary probability distribution such as a normal distribution.
 The checkpoint dwell model table 603 shown in FIG. 6C stores parameters 6031 to 6033 of the model that predicts dwell behavior at dwell-type checkpoints. These model parameters are stored per dwell-type checkpoint.
 S_ID 6031 corresponds to the CID 4031 of a dwell-type checkpoint. S_Param1 6032 to S_ParamO 6033 are the model parameters of the prediction model for the corresponding dwell-type checkpoint. The model parameters are expressed as an O-dimensional vector and, as in the example above, may be model parameters of a machine learning method such as a Gradient Boosting Regression Tree, Support Vector Regression, or a Long Short-Term Memory. Alternatively, the model parameters may be a parameter concerning the dwell time, or parameters expressing it as an arbitrary probability distribution such as a normal distribution.
 The checkpoint transition probability table 604 shown in FIG. 6D stores parameters 6041 to 6043 concerning the transition probabilities between checkpoints.
 O_ID 6041 and D_ID 6042 represent the IDs of the origin checkpoint and the destination checkpoint when moving between checkpoints. T_Prob 6043 indicates the probability of selecting D_ID 6042 as the next checkpoint from O_ID 6041. For the same O_ID 6041, the sum of all corresponding values of T_Prob 6043 is 1.
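 One way such a table could be populated, sketched below under the assumption that transition probabilities are estimated from observed (origin, destination) pairs in the flow line state data: count each transition and normalize per origin so that the probabilities for each O_ID sum to 1. The function name and input format are illustrative assumptions.

```python
from collections import Counter

# Hedged sketch of estimating T_Prob 6043: count observed (O_ID, D_ID)
# transitions and normalize per origin checkpoint.
def transition_probabilities(transitions):
    counts = Counter(transitions)                # (o_id, d_id) -> count
    totals = Counter(o for o, _ in transitions)  # o_id -> total outgoing
    return {(o, d): c / totals[o] for (o, d), c in counts.items()}
```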
 The checkpoint initial probability table 605 shown in FIG. 6E stores, for the start of a simulation, the probability of generating a person at each checkpoint. Specifically, the checkpoint initial probability table 605 stores ID 6051 corresponding to the CID 4031 of a checkpoint and the generation probability I_Prob 6052 at that checkpoint.
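 At simulation start, each generated person's first checkpoint can be drawn from these generation probabilities. The following is an illustrative sketch only; the function name, the dictionary representation of the table, and the use of weighted random sampling are assumptions.

```python
import random

# Illustrative sketch: draw the first checkpoint of each generated person
# according to the generation probabilities I_Prob 6052.
def sample_initial_checkpoints(init_probs, n_people, rng=None):
    rng = rng or random.Random()
    ids = list(init_probs)
    weights = [init_probs[i] for i in ids]
    return [rng.choices(ids, weights=weights, k=1)[0] for _ in range(n_people)]
```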
 FIG. 7 is a flowchart of the processing performed by the state determination unit 102 according to the first embodiment of the present invention.
 Process 701 represents the start of the processing of the state determination unit 102.
 Process 702 performs a spatial intersection determination between the flow line data stored in the flow line DB 105 and the map table 403 stored in the map DB 108, and determines which person passed which pass-through checkpoint, and when.
 Process 703 performs a spatial intersection determination between the flow line data stored in the flow line DB 105 and the map table 403 stored in the map DB 108, and determines which person dwelled at which dwell-type checkpoint, and when.
 Processes 702 and 703 are performed by the OD analysis unit 1022 in the state determination unit 102. Details of these processes are described later.
 Process 704 uses the flow line data stored in the flow line DB 105 and the map tables 402, 403, and 404 stored in the map DB 108 to determine which person was waiting at which queue-type checkpoint, and when.
 Process 705 uses the flow line data stored in the flow line DB 105 and the map tables 402, 403, and 404 stored in the map DB 108 to store, for each queue-type checkpoint, the determination result of when and in what order which people were waiting, in the queue state table 502 of the state DB 106.
 Processes 704 and 705 are performed by the queue determination unit 1021 in the state determination unit 102. Details of these processes are described later.
 Process 706 integrates the results of processes 702, 703, and 704 and stores, for each person, information on when and from which checkpoint to which checkpoint the person was heading, at which queue-type checkpoint the person was waiting, and at which dwell-type checkpoint the person was dwelling, in the flow line state table of the state DB 106.
 Process 707 represents the end of the processing of the state determination unit 102.
 FIGS. 8A and 8B are explanatory diagrams of the processing performed by the OD analysis unit 1022 according to the first embodiment of the present invention.
 The OD analysis unit 1022 performs a spatial intersection determination between the flow line data stored in the flow line DB 105 and the pass-through and dwell-type checkpoints stored in the map DB 108, and determines which person passed or dwelled at which checkpoint, and when.
 Diagram 801 illustrates the spatial intersection determination between a pass-through checkpoint and flow line data. It is a plan view, observed from above, of a space including an area passable to people. The hatched area 803 represents an area impassable to people, such as a wall, and the gate 804 drawn with a thick line represents an example of a pass-through checkpoint.
 Data 805 is an example of flow line data, connecting the coordinate information of a person at each time in chronological order. In this case, since the data 805 and the gate 804 spatially intersect, the person corresponding to the data 805 is determined to have passed through the gate 804. The time immediately after the intersection is detected as the passage time, which is regarded both as the time of arrival at the pass-through checkpoint and as the time of departure from it.
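 The passage judgment above reduces to testing each consecutive pair of trajectory samples against the gate's line segment. The sketch below uses a standard orientation-based 2D segment intersection test; the function names and the (time, position) trajectory format are illustrative assumptions, and touching endpoints are not treated as crossings.

```python
# Sketch of the pass-through checkpoint judgment: walk the trajectory's
# consecutive segments, and when one properly crosses the gate segment,
# return the sample time immediately after the crossing (the passage time).
def _orient(p, q, r):
    # Signed area of triangle (p, q, r): >0 left turn, <0 right turn.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    return (_orient(a, b, c) * _orient(a, b, d) < 0 and
            _orient(c, d, a) * _orient(c, d, b) < 0)

def passage_time(trajectory, gate):
    # trajectory: list of (t, (x, y)); gate: ((x1, y1), (x2, y2))
    for (t0, p0), (t1, p1) in zip(trajectory, trajectory[1:]):
        if segments_cross(p0, p1, gate[0], gate[1]):
            return t1  # time immediately after the intersection
    return None
```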
 Diagram 802 illustrates the spatial intersection determination between a dwell-type checkpoint and flow line data. Data 806 and 807 are examples of flow line data, and the area 808 drawn with a thick line represents an example of a dwell-type checkpoint.
 In this example, the data 806 spatially intersects the area 808 but passes through it without dwelling; the person corresponding to the data 806 is therefore not regarded as having stopped at this dwell-type checkpoint. In contrast, the data 807 dwells inside the area 808, so the person corresponding to the data 807 is determined to have stopped at this dwell-type checkpoint. In this example, a state in which the speed remains at or below a certain threshold for at least a certain duration is regarded as dwelling. The time at which the dwell starts is regarded as the time of arrival at this checkpoint, and the time at which the dwell ends as the time of departure from it.
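 The speed-and-duration criterion can be sketched as a scan over consecutive samples. This is a simplified illustration: the thresholds, the unit-interval sampling, the restriction to the first dwell found, and the omission of the containment test against area 808 are all assumptions made for brevity.

```python
# Sketch of the dwell judgment: a dwell is a run of samples whose speed
# stays at or below `speed_th` for at least `min_duration` time units.
# Returns (arrival_time, departure_time) of the first such run, or None.
def dwell_interval(trajectory, speed_th=0.3, min_duration=3):
    # trajectory: list of (t, (x, y)) with strictly increasing t
    start = None
    for (t0, p0), (t1, p1) in zip(trajectory, trajectory[1:]):
        speed = ((p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2) ** 0.5 / (t1 - t0)
        if speed <= speed_th:
            if start is None:
                start = t0  # candidate arrival time
        else:
            if start is not None and t0 - start >= min_duration:
                return (start, t0)  # dwell ended by moving off
            start = None
    if start is not None and trajectory[-1][0] - start >= min_duration:
        return (start, trajectory[-1][0])  # dwell ran to the last sample
    return None
```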
 図9は、本発明の実施例1の行列判定部1021で行われる処理のフローチャートおよび具体的な説明図である。 FIG. 9 is a flowchart and a specific explanatory diagram of the processing performed by the matrix determination unit 1021 of the first embodiment of the present invention.
 行列判定部1021は、動線DB105に格納された動線データと、地図DB108に格納された行列型チェックポイントとから、どの人が、いつ、どの行列型チェックポイントにて、何番目に並んでいたか、を判定する。なお処理901~905のフローは毎時刻行われる。 The matrix determination unit 1021 arranges which person, when, at which matrix type checkpoint, and in what order from the flow line data stored in the flow line DB 105 and the matrix type checkpoint stored in the map DB 108. Determine if it was there. The flow of processes 901 to 905 is performed every hour.
Process 901 represents the start of processing by the matrix determination unit 1021.

Process 902 extracts the person standing at the head of the queue at each matrix-type checkpoint. This process is described with reference to explanatory diagram 906.

Explanatory diagram 906 is a plan view of a space in which a queue forms. The gate 9061 is a matrix-type checkpoint; the diagonally hatched area 9064 is an area, such as a wall, that people cannot enter; the grayed-out area 9065 is the partition P_WKT4043 corresponding to the queue at the gate 9061; and the reception 9063 is a reception desk that subsequently provides service to the people standing in the queue corresponding to the gate 9061. The reception 9063 therefore corresponds to the S_WKT4042 associated with the queue at the gate 9061.

When extracting the head of the queue corresponding to the gate 9061, the matrix determination unit 1021 extracts, as the head of the queue, the person 9062 who is within a certain distance of the gate 9061, is its nearest neighbor, and whose speed is at or below a certain value. In doing so, the matrix determination unit 1021 extracts, as the person 9062 at the head of the queue, a person who is on the opposite side of the gate 9061 from the reception 9063 and for whom the line segment connecting the gate 9061 and that person intersects neither the area 9064 nor the area 9065.

In FIG. 9, the circle of the person 9062 indicates that person's position at a certain time, and the tip of the straight line touching the circle indicates that person's position at the preceding time (for example, one second earlier). Other people are displayed in the same way. The same applies to FIGS. 11, 14, and 17 described later.
Process 903 extracts the people standing second and later in the queue at each matrix-type checkpoint. This process is described with reference to explanatory diagram 907.

The matrix determination unit 1021 extracts, as the next person in the queue, the person 9072 who is within a certain distance of and nearest to the person 9071 extracted in the immediately preceding step (corresponding, for example, to the person 9062 in explanatory diagram 906) and whose speed is at or below a certain value. In doing so, the matrix determination unit 1021 extracts the person 9072 such that the line segment connecting the person 9071 and the person 9072 intersects none of the gate 9061, the area 9064, and the area 9065.

In process 904, the matrix determination unit 1021 determines whether there is any person who is within a certain distance of the person extracted in the immediately preceding step, whose speed is at or below a certain value, and for whom the line segment connecting that person and the person extracted in the immediately preceding step intersects none of the gate 9061, the area 9064, and the area 9065. If there is no such person, the termination condition is satisfied and the processing proceeds to process 905; otherwise, the processing returns to process 903.

Process 905 represents the end of processing by the matrix determination unit 1021.

Information on each person's orientation may also be used in this queue determination process. For example, when extracting the person at the head of the queue, only those people who satisfy the determination condition of process 902 and who are facing toward the matrix-type checkpoint may be extracted; likewise, when extracting the second and subsequent people, only those people who satisfy the determination condition of process 903 and who are facing the person extracted in the immediately preceding step may be extracted.
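The extraction loop of processes 902 to 904 can be sketched as follows. This is a simplified illustration rather than the patented implementation: the thresholds MAX_DIST and MAX_SPEED, the data layout, and the helper names are hypothetical, and the line-of-sight condition is reduced to a segment-intersection test against wall and partition edges.

```python
import math

MAX_DIST = 2.0   # hypothetical: maximum spacing between queue neighbors [m]
MAX_SPEED = 0.3  # hypothetical: maximum speed of a "standing" person [m/s]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(a, b, walls):
    """Line of sight: the segment a-b must not cross any wall/partition edge."""
    return not any(segments_intersect(a, b, w1, w2) for w1, w2 in walls)

def extract_queue(gate, people, walls):
    """people: {pid: {'pos': (x, y), 'speed': v}}; returns pids in queue order."""
    queue, anchor = [], gate
    remaining = dict(people)
    while True:
        candidates = [
            (dist(p['pos'], anchor), pid)
            for pid, p in remaining.items()
            if p['speed'] <= MAX_SPEED
            and dist(p['pos'], anchor) <= MAX_DIST
            and visible(anchor, p['pos'], walls)
        ]
        if not candidates:            # process 904: termination condition
            break
        _, pid = min(candidates)      # nearest qualifying person (902 / 903)
        queue.append(pid)
        anchor = remaining.pop(pid)['pos']
    return queue

# Three people standing in a line behind the gate, one walking person far away.
people = {
    1: {'pos': (0.0, 1.0), 'speed': 0.1},
    2: {'pos': (0.0, 2.5), 'speed': 0.0},
    3: {'pos': (0.0, 4.0), 'speed': 0.2},
    4: {'pos': (5.0, 5.0), 'speed': 1.5},  # moving, so never queued
}
order = extract_queue((0.0, 0.0), people, walls=[])
```

The same loop structure also admits the orientation filter mentioned above, as an extra predicate in the candidate list comprehension.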
FIG. 10 is a flowchart of the processing performed by the behavior model learning unit 110 of the first embodiment of the present invention.

Process 1001 represents the start of processing by the behavior model learning unit 110.

Process 1002 associates the flow line data stored in the flow line DB 105 with the flow line state table 501 stored in the state DB 106, and divides the flow line data by state. Here, a state is defined by the combination of three values: O_ID5014, D_ID5015, and Q_ID5016. This process is performed by the flow line data dividing unit 1101 of the behavior model learning unit 110.

Process 1003 uses the flow line data divided in process 1002 and the map tables 402, 403, and 404 of the map DB 108 to calculate the objective variable and the feature amounts relating to surrounding people, checkpoints, walls, and the like. This process is performed by the feature amount calculation unit 1102 of the behavior model learning unit 110. The details of this process are described later.

Process 1004 learns a flow line prediction model for each state and stores the resulting model parameters in the model DB 107. This process is performed by the model learning unit 1103. Process 1004 is described below.
First, the model learning unit 1103 learns a movement-type flow line prediction model, for movement without dwelling, using the flow line data in which Q_ID5016 is -1 and either O_ID5014 and D_ID5015 are not equal, or O_ID5014 and D_ID5015 are equal but their value is not the CID4031 of a retention-type checkpoint.

In doing so, the model learning unit 1103 may learn a model for each D_ID5015, or may learn a model for each combination of O_ID5014 and D_ID5015. Alternatively, the model learning unit 1103 may learn a model that takes D_ID5015, or the combination of O_ID5014 and D_ID5015, as an input feature amount (that is, as one of the explanatory variables).

For this learning, the parameters of an agent model such as the Social Force Model may be learned, or the model parameters of a machine learning method such as Gradient Boosting Regression Tree, Support Vector Regression, Long Short-Term Memory, or Convolutional Neural Network may be learned. Machine learning here refers to a method of learning, from data, a model that predicts an objective variable from feature amounts. The model parameters learned here are stored in the movement model table 601.

Next, the model learning unit 1103 learns a retention-type flow line prediction model for dwelling at a checkpoint, using the flow line data in which Q_ID5016 is -1, O_ID5014 and D_ID5015 are equal, and their value is the CID4031 of a retention-type checkpoint. As when learning the movement-type flow line prediction model, the model learning unit 1103 may learn the retention-type flow line prediction model using an agent model, machine learning, or the like. Alternatively, the model learning unit 1103 may generate a model that outputs the dwell time stochastically. The model parameters in this case are stored in the checkpoint retention model table 603.

Finally, the model learning unit 1103 learns a matrix-type flow line prediction model using the flow line data in which Q_ID5016 is neither -1 nor NULL. As when learning the movement-type flow line prediction model, the model learning unit 1103 may learn the matrix-type flow line prediction model using an agent model, machine learning, or the like. Alternatively, the model learning unit 1103 may learn a model in which, when the distance to the person ahead in the queue is at or above a certain threshold, the person moves slightly toward the person ahead. The model parameters in this case are stored in the matrix model table 602.
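The three-way split of the training data described above (movement, retention, queue) can be expressed as a single classification rule. The function and argument names below are hypothetical; only the conditions on Q_ID5016, O_ID5014, and D_ID5015 follow the text.

```python
def classify_record(q_id, o_id, d_id, retention_cids):
    """Return which of the three flow line prediction models a record trains.

    q_id: queue ID, -1 when the person is not standing in a queue, None (NULL)
    when the state is undefined; o_id / d_id: origin / destination checkpoint
    IDs; retention_cids: set of CIDs of retention-type checkpoints.
    """
    if q_id is None:
        return None      # undefined state: not used for training
    if q_id != -1:
        return 'queue'   # matrix-type model (standing in a queue)
    if o_id == d_id and d_id in retention_cids:
        return 'stay'    # retention-type model (dwelling at a checkpoint)
    return 'move'        # movement-type model (moving between checkpoints)

retention = {30, 31}
kind_move = classify_record(-1, 10, 20, retention)   # moving between checkpoints
kind_stay = classify_record(-1, 30, 30, retention)   # dwelling at checkpoint 30
kind_queue = classify_record(5, 10, 40, retention)   # standing in queue 5
```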
In the above example, the model learning unit 1103 learns a flow line prediction model for each state. In this case, learning is performed using the flow line data classified by state (for example, flow line data of movement between checkpoints, flow line data of dwelling at a retention-type checkpoint, or flow line data of standing in the queue of a matrix-type checkpoint), and a flow line prediction model is obtained for each state.

However, this method of generating a flow line prediction model is only an example, and any other method may be adopted as long as it can generate a model that predicts flow lines based on the state. For example, the model learning unit 1103 may learn a model that predicts flow lines by taking the state as one of the input feature amounts.

Process 1005 aggregates the information in the flow line state table 501 stored in the state DB 106, calculates the state transition probabilities and the initial probabilities, and stores them in the model DB 107. Specifically, the behavior model learning unit 110 extracts O_ID5014 and D_ID5015 for each PID5013 in the flow line state table 501, and calculates and aggregates the checkpoint transition information. The behavior model learning unit 110 calculates the probability distribution of the first checkpoint of each PID5013 as the initial probability and stores it in the checkpoint initial probability table 605, and stores, in the checkpoint transition probability table 604, the probability of each possible D_ID5015 given a fixed O_ID5014.
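A minimal sketch of the aggregation in process 1005, assuming the per-person checkpoint sequences have already been extracted from the flow line state table; the function and variable names are hypothetical.

```python
from collections import Counter, defaultdict

def learn_checkpoint_probabilities(sequences):
    """sequences: {pid: [checkpoint IDs in visiting order]}.

    Returns (initial, transition): initial[c] = P(first checkpoint is c),
    transition[o][d] = P(next checkpoint is d | current checkpoint is o).
    """
    # initial probabilities: distribution of each person's first checkpoint
    first = Counter(seq[0] for seq in sequences.values() if seq)
    n = sum(first.values())
    initial = {c: cnt / n for c, cnt in first.items()}

    # transition probabilities: normalized counts of consecutive pairs
    pairs = defaultdict(Counter)
    for seq in sequences.values():
        for o, d in zip(seq, seq[1:]):
            pairs[o][d] += 1
    transition = {o: {d: cnt / sum(cs.values()) for d, cnt in cs.items()}
                  for o, cs in pairs.items()}
    return initial, transition

seqs = {1: ['A', 'B', 'C'], 2: ['A', 'C'], 3: ['B', 'C']}
initial, transition = learn_checkpoint_probabilities(seqs)
```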
Process 1006 represents the end of processing by the behavior model learning unit 110.
FIG. 11 is an explanatory diagram of the processing performed by the feature amount calculation unit 1102 of the first embodiment of the present invention.

The screen 11001 is a plan view of the target area in which the flow line data was measured, displaying the flow line data, the wall layout, and the checkpoint information at a certain time. The diagonally hatched area 11002 is an area, such as a wall, that people cannot enter; the areas 11003 and 11004 drawn with thick solid frames are retention-type checkpoints; the gate 11005 is a matrix-type checkpoint; and the grayed-out area 11011 represents the partition of the queue corresponding to the gate 11005. For the people 11006, 11007, 11008, 11009, and 11010, how the objective variable and the feature amounts relating to surrounding people, walls, destination checkpoints, and the like are calculated is described below.

The feature amounts for surrounding people and surrounding walls are calculated by the same method for all of the people 11006, 11007, 11008, 11009, and 11010. For example, the distances to people and the distances to walls in various directions around the person's current position may be calculated as feature amounts. The directions in this case may be relative directions based on the direction of the person's velocity vector in the previous step or on the direction toward the person's destination, or they may be directions based on the X-axis or the Y-axis.

If no person or wall exists in a given direction, an appropriate threshold value may be substituted. Alternatively, instead of using distances as feature amounts, a grid of an appropriate size centered on the person's current position may be generated, and the occupied area or occupancy rate of walls in each grid cell, the count of people in each grid cell, or the like may be used as feature amounts. In this case, the grid may similarly be generated based on the direction of the person's velocity vector in the previous step or the direction toward the person's destination, or based on the X-axis or the Y-axis. When calculating the wall feature amounts, not only the wall area 11002 but also the area 11011 relating to the queue partition may be included.
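The grid-based alternative can be illustrated as follows; the cell size, grid extent, and the rotation into a heading-aligned frame are hypothetical choices consistent with the options described above.

```python
import math

def grid_people_counts(center, heading, people, cell=1.0, half=2):
    """Count people in a (2*half+1) x (2*half+1) grid of cell-sized cells
    centered on `center`, with the grid x-axis rotated to `heading` (radians)
    so that the features are relative to the person's direction of motion."""
    n = 2 * half + 1
    counts = [[0] * n for _ in range(n)]
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    for px, py in people:
        dx, dy = px - center[0], py - center[1]
        # rotate the offset into the heading-aligned frame
        rx = dx * cos_h + dy * sin_h
        ry = -dx * sin_h + dy * cos_h
        ix = int(math.floor(rx / cell + 0.5)) + half
        iy = int(math.floor(ry / cell + 0.5)) + half
        if 0 <= ix < n and 0 <= iy < n:
            counts[iy][ix] += 1
    return counts

# person at the origin heading along +x; one neighbor 1 m ahead, one 1 m to the left
counts = grid_people_counts((0.0, 0.0), 0.0, [(1.0, 0.0), (0.0, 1.0)])
```

Wall occupancy rates per cell could be computed analogously by clipping the wall polygons (including the partition area) against each cell.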
The feature amounts relating to checkpoints are calculated from the person's current position and the point that is the person's destination. The distance from the person's position to the destination and the direction toward the destination are used as feature amounts. When calculating the direction toward the destination, it may be a relative direction based on the direction of the person's velocity vector in the previous step, or it may be based on the X-axis or the Y-axis. The destination point differs among the people 11006, 11007, 11008, 11009, and 11010.

The person 11006 is a flow line moving at or above a predetermined speed. When the D_ID5016 of the person 11006 is the CID4031 corresponding to the area 11003, the destination of the person 11006 is an arbitrary point contained in the area 11003, for example, the centroid of the area 11003.

The person 11007 is, as in the above example, a flow line moving at or above a predetermined speed, but the D_ID5016 is the CID4031 corresponding to the gate 11005. In this case, the destination of the person 11007 is not a point contained in the gate 11005 but the coordinates of the last person standing in the queue at the gate 11005. This information can be obtained from the flow line data of the PID4013 corresponding to the last value of PID_LIST5024 in the matrix state table 502 of the state DB 106.

The person 11008 is an example of a flow line at or below a predetermined speed standing at the head of a queue; in this case as well, the D_ID5016 is the CID4031 corresponding to the gate 11005. The destination in this case is an arbitrary point contained in the gate 11005, which indicates the head line of the queue, and may be the centroid of the gate 11005.

The person 11009 is a flow line at or below a predetermined speed standing second or later in the queue; as in the above example, the D_ID5016 is the CID4031 corresponding to the gate 11005, but the destination is the coordinates of the person standing immediately ahead. This information can be obtained from the flow line data of the PID4013 corresponding to the value immediately preceding the PID4013 of the person 11009 in PID_LIST5024 of the matrix state table 502 of the state DB 106.

The person 11010 is an example of a flow line dwelling at or below a predetermined speed within a retention-type checkpoint, and the D_ID5016 of the person 11010 is the CID4031 corresponding to the area 11004. In this case, the destination is a point contained in the area 11004, and may be the centroid of the area 11004. However, when the person is dwelling within the checkpoint, that person has already arrived at the destination and the checkpoint feature amounts relating to the destination may not contribute to the model, so their calculation may be omitted.

The feature amount calculation unit 1102 basically calculates, as the objective variable to be predicted, the velocity at the time one step ahead. The velocity at the time one step ahead may be expressed in an absolute coordinate system, or it may be converted into a relative velocity vector based on the direction of the velocity vector at the current time. When converting to a relative velocity vector, the direction toward the destination may instead be used as the reference.
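Converting the next-step velocity into a relative vector based on the current velocity direction amounts to a frame rotation, for example (function name hypothetical):

```python
import math

def to_relative_velocity(v_now, v_next):
    """Express the next-step velocity v_next in a frame whose x-axis points
    along the current velocity v_now; the same rotation applies when the
    direction toward the destination is used as the reference instead."""
    theta = math.atan2(v_now[1], v_now[0])
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    vx = v_next[0] * cos_t + v_next[1] * sin_t    # forward component
    vy = -v_next[0] * sin_t + v_next[1] * cos_t   # lateral component
    return vx, vy

# moving along +y at 1 m/s, next step also +y: the relative velocity is purely forward
vx, vy = to_relative_velocity((0.0, 1.0), (0.0, 1.0))
```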
For the person 11010 dwelling within the retention-type checkpoint, the objective variable may be, instead of the velocity at the time one step ahead, the remaining dwell time, or the probability of transitioning to the next D_ID5015 and starting to move one step ahead.
FIGS. 12A and 12B are explanatory diagrams showing an example of the data structure of the simulation DB 109 of the first embodiment of the present invention.

The simulation DB 109 stores a virtual flow line table 1201 and a virtual matrix state table 1202. The flow line data generated by the virtual flow line generation unit 111 is stored sequentially in the virtual flow line table 1201, and the queue state data updated by the virtual flow line generation unit 111 is stored sequentially in the virtual matrix state table 1202.

The virtual flow line table 1201 sequentially stores, for each person and at fixed time intervals (for example, every second), the coordinates of the simulated flow line data and information 12011 to 12017 on its state.

START_TIME12011 and END_TIME12012 represent the start time and end time, respectively, of the flow line data simulated every second. WKT12013 represents the geometry information of the line string of the flow line along which the corresponding person moved between START_TIME12011 and END_TIME12012.

PID12014 indicates the ID of the corresponding person. O_ID12015 and D_ID12016 represent the start checkpoint and end checkpoint, respectively, at the time of simulation. Q_ID12017 represents the CID4041 of the queue in which the corresponding person is standing; when the person is not standing in a queue, -1 is stored in Q_ID12017.

The virtual matrix state table 1202 stores information 12021 to 12024. START_TIME12021 and END_TIME12022 represent the start time and end time, respectively, of the corresponding queue state. CID12023 represents the CID4041 of the corresponding queue. PID_LIST12024 is an ID list in which the IDs of the people standing in the corresponding queue are arranged in their order of waiting for service.
FIG. 13 is a flowchart showing the simulation performed by the initial human flow generation unit 104 and the virtual flow line generation unit 111 of the first embodiment of the present invention.

Process 1301 represents the start of the simulation processing.

Process 1302 receives, from the user 12, the human flow demand information for executing the simulation and generates the initial human flow. This process is performed by the initial human flow generation unit 104. Here, the human flow demand information refers to information on the number of people to be generated in the simulation, the simulation start time, the place of departure, and the first destination. This information may be input in the form of statistical values, or the simulation start time, place of departure, and destination may be input in detail for each individual person.

When the information is input in the form of statistical values, the initial human flow generation unit 104 sets the simulation start times by a sampling process using a Poisson distribution or the like. Furthermore, the initial human flow generation unit 104 stochastically sets the departure place and destination information for each person using the checkpoint initial probability table 605 and the checkpoint transition probability table 604 contained in the model DB 107. The information on the place of departure and the destination may be given in the form of KID4032 or CID4031, and may include not only the first destination but also all subsequent destination transitions.

After generating the simulation start time, place of departure, and destination information for each person, the initial human flow generation unit 104 adds information on the initial position and initial velocity of each person. The initial human flow generation unit 104 selects, as the initial position, a point contained in the checkpoint of the first place of departure. The centroid coordinates calculated from the geometry information of the checkpoint may be used as the initial position, a point randomly sampled within the checkpoint area may be used as the initial position, or the flow line data stored in the flow line DB 105 may be analyzed and the coordinates observed when departing from that checkpoint may be extracted and used.

The initial velocity is given as a vector. The initial human flow generation unit 104 may sample the initial velocity vector randomly, may calculate it from the direction vector toward the destination checkpoint, or may analyze the flow line data stored in the flow line DB 105 and extract and use the velocity vectors observed when departing from that checkpoint.
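A sketch of the statistical path through process 1302, under the assumptions that Poisson arrivals are realized by exponential inter-arrival sampling and that each checkpoint's representative coordinate (for example, its centroid) is used as the initial position; all names are hypothetical.

```python
import random

def generate_initial_flow(n_people, rate, initial_prob, checkpoint_xy, seed=0):
    """Generate n_people agents: exponential inter-arrival start times
    (equivalent to a Poisson arrival process with `rate` persons/second),
    a first checkpoint drawn from initial_prob, and that checkpoint's
    representative coordinate as the initial position."""
    rng = random.Random(seed)
    cids = list(initial_prob)
    weights = [initial_prob[c] for c in cids]
    agents, t = [], 0.0
    for pid in range(n_people):
        t += rng.expovariate(rate)                 # next arrival time
        origin = rng.choices(cids, weights)[0]     # first checkpoint, sampled
        agents.append({'pid': pid, 'start_time': t,
                       'origin': origin, 'pos': checkpoint_xy[origin]})
    return agents

agents = generate_initial_flow(
    3, rate=0.5,
    initial_prob={'A': 0.7, 'B': 0.3},
    checkpoint_xy={'A': (0.0, 0.0), 'B': (10.0, 5.0)})
```

Subsequent destinations could be chained in the same way by repeatedly sampling from the transition probabilities.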
Process 1303 sets the start time (t = 0) and actually starts the simulation.

Process 1304 extracts, from the initial human flow generated in process 1302, only the people present at the simulation time t and adds them to the simulation target. At that time, the virtual flow line generation unit 111 stores the newly extracted initial human flow in the simulation DB 109.

Process 1305 calculates the feature amounts and predicts the positions of the human flow at the next time using the models. Specifically, for the human flow being simulated, the feature amount calculation unit 1111 of the virtual flow line generation unit 111 calculates the feature amounts by the same method as described with reference to FIG. 11, using the simulation map tables 402, 403, and 404 stored in the map DB 108. Then, using the calculated feature amounts and the per-state flow line prediction models stored in the model DB 107, the flow line prediction unit 1112 of the virtual flow line generation unit 111 predicts the positions of the human flow at time t + 1 and stores the result in the virtual flow line table 1201 of the simulation DB 109.

If the output of the flow line prediction model is not a velocity but a remaining dwell time δt, the virtual flow line generation unit 111 may keep the simulated person at the same position until the time reaches t + δt. When Q_ID12017 is not -1, the person is standing in a queue, and the virtual flow line generation unit 111 may therefore correct the predicted position so that the person does not come closer than a predetermined distance to another person.
The destination in this step is determined differently depending on the D_ID12016 and Q_ID12017 of the person at time t and the virtual matrix state table 1202 of the simulation DB 109, as described below.

When Q_ID12017 ≠ -1 and the person is standing at the head of the queue, a point contained in that matrix-type checkpoint is set as the destination.

When Q_ID12017 ≠ -1 and the person is standing second or later in the queue, the person standing immediately ahead is set as the destination.

When Q_ID12017 = -1 and D_ID12016 is not a matrix-type checkpoint, a point contained in the checkpoint corresponding to D_ID12016 is set as the destination. When KID4032 information is used as D_ID12016, a plurality of checkpoints may exist as destination candidates within a single KID4032; in this case, the checkpoint closer to the person may be selected from the destination candidates, or the checkpoint at which no one is dwelling may be selected.

When Q_ID12017 = -1, D_ID12016 is a matrix-type checkpoint, and no one is standing in the corresponding queue at time t, a point contained in the matrix-type checkpoint corresponding to D_ID12016 is set as the destination. As in the above example, when KID4032 information is used as D_ID12016, a plurality of checkpoints may exist as destination candidates within a single KID4032; in this case, the checkpoint closer to the person may be selected, or the checkpoint at which no one is standing in line may be selected.

When Q_ID12017 = -1, D_ID12016 is a matrix-type checkpoint, and people are standing in the corresponding queue at time t, the last person standing in the queue corresponding to D_ID12016 is set as the destination. As in the above example, when KID4032 information is used as D_ID12016, a plurality of queues may exist within a single KID4032; in this case, the last person of the queue whose tail is closer to the person may be set as the destination, or the tail of the queue with fewer people standing in line may be selected.
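The destination-selection cases above can be condensed into one function. This sketch uses hypothetical data structures and the simplest tie-breaking options (a single representative point per checkpoint, no KID-level candidate sets).

```python
def select_destination(pid, q_id, d_id, queue_members, checkpoints,
                       positions, matrix_cids):
    """Destination for one simulated person at time t.

    q_id: queue ID (-1 when not queued); d_id: next checkpoint ID;
    queue_members: {queue_id: [pids in waiting order]};
    checkpoints: {cid: representative (x, y)}; positions: {pid: (x, y)};
    matrix_cids: set of matrix-type checkpoint IDs.
    """
    if q_id != -1:
        members = queue_members[q_id]
        rank = members.index(pid)
        if rank == 0:                        # head of the queue: aim at the gate
            return checkpoints[q_id]
        return positions[members[rank - 1]]  # follow the person just ahead
    if d_id not in matrix_cids:
        return checkpoints[d_id]             # ordinary checkpoint
    members = queue_members.get(d_id, [])
    if not members:                          # queue currently empty: go to the gate
        return checkpoints[d_id]
    return positions[members[-1]]            # join behind the last person in line

checkpoints = {'G': (0.0, 0.0), 'C': (9.0, 9.0)}
positions = {1: (0.0, 1.0), 2: (0.0, 2.0), 3: (5.0, 5.0)}
queues = {'G': [1, 2]}
matrix_cids = {'G'}
```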
 処理1306は、予測した人流の位置から、人流に対して状態判定し、次の状態に遷移させる処理を表す。この処理は、状態遷移部1113にて行われる。ここでの状態判定方法と遷移方法を以下に記述する。 Process 1306 represents a process of determining a state of a person flow from the predicted position of the person flow and transitioning to the next state. This process is performed by the state transition unit 1113. The state determination method and transition method here are described below.
 Q_ID12017≠-1かつ、該当する人が行列の先頭以外の人である場合、状態遷移部1113は、状態を遷移させず時刻tの状態をそのまま引き継ぐ。 When Q_ID12017 ≠ -1 and the corresponding person is a person other than the head of the matrix, the state transition unit 1113 inherits the state at time t as it is without changing the state.
 Q_ID12017≠-1かつ、該当する人が行列の先頭の人である場合、状態遷移部1113は、行列に並んだ後に向かうチェックポイントに人が滞留していなければ、Q_ID12017を-1、時刻t+1におけるO_ID12015を時刻tにおけるD_ID12016に、時刻t+1におけるD_ID12016を次に向かうチェックポイントのIDに遷移させる。一方、行列に並んだ後に向かうチェックポイントに人が滞留している場合、状態遷移部1113は、状態を遷移させず時刻tの状態をそのまま引き継ぐ。 When Q_ID12017 ≠ -1 and the corresponding person is the person at the head of the matrix, the state transition unit 1113 sets Q_ID12017 to -1 at time t + 1 unless a person stays at the checkpoint heading after lining up in the matrix. O_ID12015 is transferred to D_ID12016 at time t, and D_ID12016 at time t + 1 is transferred to the ID of the next checkpoint. On the other hand, when a person stays at the checkpoint heading after lining up in a line, the state transition unit 1113 inherits the state at time t as it is without changing the state.
When Q_ID12017 = -1, the state transition unit 1113 first determines whether the person has arrived at the destination. Specifically, when the person's position at time t+1 is within a predetermined distance of the destination, the state transition unit 1113 determines that the person has arrived.
If the destination was the tail of a queue, the state transition unit 1113 updates Q_ID12017 to the ID of that queue and updates PID_LIST12024 in the virtual queue state table 1202.
If the destination is a retention-type checkpoint and O_ID12015 and D_ID12016 differ, the state transition unit 1113 updates both O_ID12015 and D_ID12016 to the ID of the retention-type checkpoint and places the person in the staying state.
If the destination is a retention-type checkpoint and O_ID12015 equals D_ID12016, the state transition unit 1113 updates D_ID12016 to the next destination if the prediction in process 1305 indicates that the person has stayed for δt or longer; if the result of process 1305 indicates that the person is still staying, the state is not changed.
If the destination is a pass-through checkpoint, the state transition unit 1113 sets O_ID12015 at time t+1 to the value of D_ID12016 at time t, and D_ID12016 at time t+1 to the ID of the next checkpoint.
The ID of the checkpoint chosen when updating D_ID12016 is selected as follows: if the initial human flow generated in process 1302 contains the full sequence of destination transitions, the destination transitions according to that information; if not all destination transition information is available, the destination may be transitioned stochastically according to the checkpoint transition probability table 604 of the model DB 107. If no next destination exists, the human flow is removed.
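As a rough illustration only, the destination-update rule just described (follow a full itinerary when one exists, otherwise sample the next checkpoint stochastically from a transition probability table such as table 604, and remove the flow when no next destination exists) might be sketched as follows. The table contents and checkpoint IDs here are invented for the example, not taken from the specification:

```python
import random

# Hypothetical transition probability table (cf. checkpoint transition
# probability table 604): for each checkpoint, candidate next checkpoints
# and their probabilities. An empty dict means no next destination.
TRANSITION_PROB = {
    "CP_A": {"CP_B": 0.7, "CP_C": 0.3},
    "CP_B": {"CP_C": 1.0},
    "CP_C": {},  # terminal: the human flow is removed
}

def next_destination(current_dest, itinerary=None, rng=random):
    """Return the next checkpoint ID, or None if the flow should end.

    If the initial human flow carries a complete itinerary, follow it;
    otherwise sample from the transition probability table.
    """
    if itinerary:
        idx = itinerary.index(current_dest)
        return itinerary[idx + 1] if idx + 1 < len(itinerary) else None
    candidates = TRANSITION_PROB.get(current_dest, {})
    if not candidates:
        return None
    ids, probs = zip(*candidates.items())
    return rng.choices(ids, weights=probs, k=1)[0]
```

With a full itinerary the choice is deterministic; without one, a checkpoint that has a single candidate successor (such as "CP_B" above) also transitions deterministically.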
Process 1307 advances the simulation time t by one step.
Process 1308 determines whether the simulation termination condition is satisfied. If the termination condition is not satisfied, the process returns to process 1304; if it is satisfied, the process proceeds to process 1309. The termination condition may be, for example, that a fixed time has elapsed since the simulation started, or that all simulated human flows have reached their final destinations.
Process 1309 represents the end of the simulation.
FIG. 14 is an explanatory diagram showing an example of the interface of the human behavior prediction system 10 according to Embodiment 1 of the present invention. The details of screen 1401, an example of the interface, are described below.
Screen 1402, contained in screen 1401, displays the flow line data stored in the flow line DB 105, the virtual flow line data stored in the simulation DB 109, and the map tables 402, 403, and 404 stored in the map DB 108. Each piece of flow line data is drawn by rendering the LINESTRING of its record; a marker may be added at the end point of the LINESTRING, or the color may be varied by state. The drawn data can be selected on the screen; when selected, it may be rendered with a changed appearance, such as a different thickness or color, as with flow line data 14021.
Button 1403 starts flow line data measurement. When this button is operated, the measurement system 11 acquires data for the target area, and the flow line data is stored in the flow line DB 105.
Button 1404 is for inputting map data. When this button is operated, a dialog opens in which the user 12 can enter information for the map tables 402, 403, and 404. The entered information is stored in the map DB 108 by the map data input unit 103.
Button 1405 runs the state determination process on the measured flow line data. When this button is operated, the processing of the state determination unit 102 is executed.
Button 1406 switches the display of flow line data. Since there are two kinds of flow line data, the data stored in the simulation DB 109 and the data stored in the flow line DB 105, operating button 1406 switches between them.
Button 1407 trains a flow line prediction model from the flow line data stored in the flow line DB 105. When this button is operated, the processing of the behavior model learning unit 110 is executed.
Button 1408 generates the initial human flow. When this button is operated, a dialog box is displayed in which the human flow demand information described for process 1302 can be entered; the processing of the initial human flow generation unit 104 is then executed.
Button 1409 runs a simulation according to the trained flow line prediction model. When this button is operated, the processing of the virtual flow line generation unit 111 is executed.
Button 1410 displays detailed information for the data selected on screen 1402. Specifically, when this button is operated, one or more of the flow line DB 105, simulation DB 109, state DB 106, and map DB 108 corresponding to the selected data are queried, and the information for the selected data is displayed. In the example of FIG. 14, flow line data 14021 is selected, so a dialog box opens showing the information from the flow line table 401 and the flow line state table 501 corresponding to this data.
Button 1411 is the stop button used when playing back flow line data. While flow line data is being played back on the screen, operating this button stops the playback.
Button 1412 is the play button for flow line data. Operating this button plays the flow line data back continuously over time.
Button 1413 is the rewind button used when playing back flow line data. While flow line data is displayed on the screen, operating this button rewinds the displayed time.
Progress bar 1414 indicates the playback position within the flow line data. The user 12 viewing this screen can move the bar directly to shift the time for which flow line data is displayed.
Button 1415 is the fast-forward button used when playing back flow line data. While flow line data is displayed on the screen, operating this button fast-forwards the displayed time.
Text 1416 is a text box showing the time for which flow line data is currently displayed. The user can edit text 1416 directly to change the playback time.
Check box 1417 switches between displaying all flow line data at once and displaying it by time. When check box 1417 is checked, all flow line data is displayed, and buttons 1411 to 1415 and text 1416 become inoperable.
Embodiment 1 above described a system that generates a human flow line prediction model and generates virtual human flow lines based on it. A person is one example of a moving body, and this embodiment can also be applied to moving bodies other than people, such as ships or vehicles. Accordingly, the human behavior prediction system may equally be read as a mobile body movement prediction system.
Embodiment 2 of the present invention is described below with reference to the drawings. Except for the differences described below, each part of the human behavior prediction system of Embodiment 2 has the same function as the correspondingly numbered part of Embodiment 1, so their descriptions are omitted.
In Embodiment 1 of the present invention, the staying state referred to staying within a retention-type checkpoint and staying while waiting in a queue. In Embodiment 2, the simulation additionally treats dialogue between people as a staying state.
FIG. 15 is a block diagram showing the basic configuration of the human behavior prediction system 10 according to Embodiment 2 of the present invention.
Among the components of the human behavior prediction system 10 shown in FIG. 15, the measurement system 11, flow line data extraction unit 101, map data input unit 103, initial human flow generation unit 104, behavior model learning unit 110, and virtual flow line generation unit 111 are the same as in Embodiment 1, so their descriptions are omitted.
The state determination unit 102 receives the flow line data stored in the flow line DB 105 and the checkpoint data stored in the map DB 108, performs state determination on the flow line data, and stores the determination results in the state DB 1502. To realize these functions, the state determination unit 102 includes a queue determination unit 1021, an OD analysis unit 1022, and a dialogue determination unit 1501. The queue determination unit 1021 and the OD analysis unit 1022 are the same as in Embodiment 1. The specific processing of the dialogue determination unit 1501 is described later.
The data structures of the state DB 1502 and the model DB 1503 are described later.
FIG. 16A is an explanatory diagram showing an example of the data structure of the state DB 1502 according to Embodiment 2 of the present invention.
The state DB 1502 differs from the state DB 106 of Embodiment 1 in the information stored in the flow line state table 15021. Items 150211 to 150216 stored in the flow line state table 15021 are the same as items 5011 to 5016 stored in the flow line state table 501. The difference is GID150217, which indicates whether the corresponding person is in a dialogue state; specifically, it is the ID of the dialogue. If the corresponding person is not in a dialogue state, GID = -1.
A dialogue ID is assigned to a dialogue as a phenomenon, regardless of place, time, or number of participants. For example, if two people are talking and a third person joins so that three people continue the dialogue, the same GID150217 is assigned to everyone involved, from the time the first two start the dialogue until the three finish it. That is, while only the first two are talking, the GID is assigned to those two; from the time the third person joins, the same GID is assigned to all three.
FIG. 16B is an explanatory diagram showing an example of the data structure of the model DB 1503 according to Embodiment 2 of the present invention.
In the model DB 1503, a dialogue model table 15031 is newly stored in addition to tables 601 to 605 of Embodiment 1. This table stores, for each location, information representing how likely a dialogue is to occur there.
SHAPE_ID150311 is the ID of a grid cell that is a candidate dialogue location, and WKT150312 is the geometry information of that grid cell. Prob150313 quantifies how readily a dialogue occurs at that location and may be, for example, the probability that a dialogue occurs there.
This parameter may be generated, for example, when the behavior model learning unit 110 learns the dialogue model after the state determination unit 102 performs dialogue determination: the number of dialogues that occurred in each grid cell is counted, converted into a probability, and stored. In a simulation, when a moving person passes another person at some location, a dialogue may be triggered stochastically using the value of Prob150313 corresponding to that location. The dialogue duration used in the simulation may be a dialogue duration extracted from the dialogue determination results on the actual flow line data.
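One possible reading of this learning step, sketched under the assumption that Prob150313 is estimated as the fraction of observed passes through a grid cell that produced a dialogue (the specification leaves the exact conversion to a probability open), is the following; all names are illustrative:

```python
from collections import defaultdict

def learn_dialogue_probs(dialogue_events, passes):
    """Estimate a per-grid dialogue probability (cf. Prob150313).

    dialogue_events: iterable of grid IDs, one entry per observed dialogue.
    passes: dict grid ID -> number of observed pedestrian passes there.
    Returns dict grid ID -> estimated probability of a dialogue per pass.
    """
    counts = defaultdict(int)
    for grid_id in dialogue_events:
        counts[grid_id] += 1
    return {g: counts[g] / n for g, n in passes.items() if n > 0}

def maybe_start_dialogue(grid_id, probs, rng):
    """During simulation, when a moving person passes another person in
    grid_id, start a dialogue with the learned probability."""
    return rng.random() < probs.get(grid_id, 0.0)
```

In a simulation loop, `maybe_start_dialogue` would be called whenever two moving people pass each other, and a triggered dialogue would then last for a duration drawn from the measured dialogue durations.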
FIG. 17 is a flowchart of the processing performed by the state determination unit 102 according to Embodiment 2 of the present invention.
Process 1701 represents the start of the processing of the state determination unit 102.
Processes 1702 to 1705 are the same as processes 702 to 705.
Process 1706 performs dialogue determination from the staying states in the flow line data. When examining the flow line data, the dialogue determination unit 1501 determines that a dialogue is taking place when there are multiple people moving at or below a predetermined speed within a predetermined distance of one another. This process is explained with reference to illustration 1709.
In this example, person 1710 is determined to be in a dialogue because there are multiple people at or below the predetermined speed within area 1711, the region defined by the predetermined distance. Person 1712 is moving at or below the predetermined speed but is not determined to be in a dialogue because no other person at or below the predetermined speed is within the predetermined distance. Person 1713 is inside area 1711 but is not moving at or below the predetermined speed, so is not determined to be in a dialogue.
As with queue determination, information on each person's orientation may also be used in dialogue determination. For example, a dialogue may be judged to be taking place when the distance and speed conditions are satisfied and, in addition, a predetermined condition holds, such as the angle between each person's orientation and the direction toward the other person being smaller than a predetermined value.
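The distance, speed, and orientation conditions above could be checked per person roughly as follows. This is a minimal sketch, not the claimed method: the thresholds are invented, and for simplicity it tests only the candidate's own orientation toward a partner rather than both orientations mutually:

```python
import math

def is_dialogue_candidate(people, i, max_dist=1.5, max_speed=0.3, max_angle=90.0):
    """people: list of dicts with 'pos' (x, y), 'speed', 'heading' (degrees).

    Person i is judged to be in a dialogue when, within max_dist, there is
    another person also at or below max_speed, and person i faces that
    person within max_angle degrees (a simplified orientation check).
    """
    p = people[i]
    if p["speed"] > max_speed:
        return False
    for j, q in enumerate(people):
        if j == i or q["speed"] > max_speed:
            continue
        dx, dy = q["pos"][0] - p["pos"][0], q["pos"][1] - p["pos"][1]
        if math.hypot(dx, dy) > max_dist:
            continue
        # angle between p's heading and the direction toward q
        to_q = math.degrees(math.atan2(dy, dx)) % 360
        diff = abs((p["heading"] - to_q + 180) % 360 - 180)
        if diff <= max_angle:
            return True
    return False
```

The three cases of illustration 1709 map onto this directly: a slow person near another slow person is a candidate; a slow but isolated person is not; a fast person inside the area is not.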
Process 1707 stores the state data in the state DB 1502. Specifically, the state determination unit 102 integrates the results of processes 1702 to 1705 and stores, in the flow line state table 15021 of the state DB 1502, information indicating, for each person, when and from which checkpoint to which checkpoint they were traveling, at which queue-type checkpoint they were queued, at which retention-type checkpoint they were staying, and whether they were in a dialogue.
Process 1708 represents the end of the processing of the state determination unit 102.
FIG. 18 is a block diagram showing the hardware configuration of the devices constituting the measurement system 11 and the human behavior prediction system 10 of Embodiments 1 and 2 of the present invention.
The measurement system 11 comprises one or more of a laser measurement system 2001, a camera system 2002, and a terminal positioning system 2003.
The laser measurement system 2001 comprises a laser oscillator 20011 that emits laser light, a laser receiver 20012 that reads the reflected laser light, and an arithmetic unit 20013 that computes the distances to objects around the laser measurement system 2001 from the laser emission, the time taken to receive the reflection, and the like, and converts them into point cloud data.
The camera system 2002 is a system with an ordinary camera: an image sensor 20021 captures visible light as images, and an arithmetic unit 20022 detects people in those images by known methods and estimates their positions.
The terminal positioning system 2003 comprises a processor 20031, a storage device 20032, a monitor 20033, a GPS receiver 20034, a DRAM 20035, an input device 20036, and a wireless communication board 20037. The processor 20031 provides computing capability. The DRAM 20035 is a volatile temporary storage area that can be read and written at high speed. The storage device 20032 is a persistent storage area using a hard disk drive (HDD), flash memory, or the like. The input device 20036 accepts human operations. The monitor 20033 presents the current state of the terminal. The wireless communication board 20037 is a network interface card for wireless communication. The GPS receiver 20034 identifies the position of the terminal.
When the processor 20031 executes a program recorded in a storage area such as the DRAM 20035, it estimates its own position using the GPS receiver 20034 or the like and distributes the position via the wireless communication board 20037.
The human behavior prediction system 10 comprises a processor 112, a storage device 113, a monitor 114, a DRAM 115, an input device 116, and a NIC 117. The processor 112 provides computing capability. The DRAM 115 is a volatile temporary storage area that can be read and written at high speed. The storage device 113 is a persistent storage area using an HDD, flash memory, or the like. The input device 116 accepts human operations. The monitor 114 presents information. The NIC 117 is a network interface card for communication.
By the processor 112 executing programs recorded in a storage area such as the DRAM 115, the flow line data extraction unit 101, state determination unit 102, map data input unit 103, initial human flow generation unit 104, behavior model learning unit 110, and virtual flow line generation unit 111 can be realized. That is, the processing performed by each of these units in Embodiments 1 and 2 is actually executed by the processor 112 according to the programs. The flow line DB 105, state DB 106, model DB 107, map DB 108, and simulation DB 109 can be realized by storing them in the storage device 113.
The human behavior prediction system 10 may be realized, for example, by a single computer with the configuration shown in FIG. 18, or by multiple computers. For example, the information held by the human behavior prediction system 10 described above may be distributed across multiple storage devices 113 or DRAMs 115, and the functions of the human behavior prediction system 10 described above may be executed in a distributed manner by the processors of multiple computers.
The embodiments of the present invention described above may include the following examples.
(1) A mobile body movement prediction system having a processor (e.g., processor 112) and a storage device (e.g., at least one of storage device 113 and DRAM 115), in which the storage device holds flow line information including position information of a moving body at each time (e.g., flow line DB 105), map information of a space in which the moving body can move (e.g., map DB 108), and state information indicating a state related to the movement of the moving body (e.g., state DB 106), and the processor learns a movement model that predicts the movement destination of the moving body based on the flow line information, the map information, and the state of the moving body (e.g., the processing of FIG. 10) and generates virtual flow lines of the moving body based on initial conditions of the moving body and the movement model (e.g., the processing of FIG. 13).
This improves the accuracy of flow line simulation, which in turn makes it possible to evaluate the effect of a layout design on flow lines in advance.
(2) In (1) above, the map information includes information on the positions and types of multiple checkpoints at each of which the moving body may pass, stay, and/or queue (e.g., map table 403), and the processor stores in the storage device, as the state information, the result of determining the state of the moving body from the positional relationship between the moving body and a checkpoint, the type of the checkpoint, and the speed of the moving body, based on the flow line information and the map information (e.g., the processing of FIG. 7).
This makes it possible to classify the states of moving bodies appropriately and improve the accuracy of flow line simulation.
(3) In (2) above, the map information includes information indicating whether the type of each checkpoint is the retention type, the state of the moving body includes a state of staying at a retention-type checkpoint, and the processor determines that a moving body whose positional relationship with a retention-type checkpoint satisfies a predetermined condition and whose movement speed is at or below a predetermined value is in the state of staying at that retention-type checkpoint (e.g., process 703 of FIG. 7).
This makes it possible to classify the states of moving bodies appropriately and improve the accuracy of flow line simulation.
(4) In (2) above, the map information includes information indicating whether the type of each checkpoint is the queue type, the state of the moving body includes a state of waiting in the queue of a queue-type checkpoint, and the processor determines that a moving body whose positional relationship with a queue-type checkpoint satisfies a predetermined condition and whose movement speed is at or below a predetermined value is in the state of waiting in the queue of that queue-type checkpoint (e.g., process 704 of FIG. 7).
This makes it possible to classify the states of moving bodies appropriately and improve the accuracy of flow line simulation.
(5) In (4) above, the processor identifies the moving body at the head of the queue based on the positional relationships among the queue-type checkpoint, the target point of the queue at the queue-type checkpoint (e.g., the place where service is provided to those in the queue), and the moving bodies classified as waiting in the queue of the queue-type checkpoint (e.g., process 902 of FIG. 9), and determines the order of the moving bodies in the queue by recursively identifying, based on the positional relationship between a moving body whose position in the queue has been determined and the other moving bodies, the moving body next in line after it (e.g., process 903 of FIG. 9).
This makes it possible to classify the states of moving bodies appropriately and improve the accuracy of flow line simulation.
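The queue-ordering procedure of item (5) amounts to a nearest-neighbour chain starting from the service point. A minimal sketch (written as an iterative loop rather than literal recursion, with invented coordinates and IDs):

```python
import math

def order_queue(service_point, members):
    """Determine queue order (cf. processes 902 and 903).

    service_point: (x, y) of the point serving the queue.
    members: dict person ID -> (x, y) of people judged to be queuing.
    The head is the member closest to the service point; each subsequent
    member is the remaining person closest to the one just placed.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = dict(members)
    order = []
    anchor = service_point
    while remaining:
        pid = min(remaining, key=lambda p: dist(remaining[p], anchor))
        order.append(pid)
        anchor = remaining.pop(pid)
    return order
```

For a queue stretching away from the service point, this recovers head-to-tail order from positions alone.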
(6) In (5) above, the feature quantities input to the movement model include feature quantities relating to the positional relationship between a moving body and its destination; when no moving body is queued at a queue-type checkpoint, the destination of a moving body heading for that checkpoint is a point belonging to the queue-type checkpoint; when one or more moving bodies are queued at the queue-type checkpoint, the destination of a moving body heading for it is the moving body at the tail of the queue; when one or more moving bodies are queued at the queue-type checkpoint, the destination of the moving body at the head of the queue is a point belonging to the queue-type checkpoint; and when two or more moving bodies are queued at the queue-type checkpoint, the destination of each moving body from the second position onward is the moving body immediately ahead of it in the queue.
This makes it possible to improve the accuracy of queue-related flow line simulation based on the classified states of moving bodies.
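The four destination rules of item (6) can be collapsed into one small lookup. This is an illustrative sketch with invented IDs, not the claimed implementation:

```python
def queue_destination(person_id, queue, checkpoint_point):
    """Destination for a person relative to a queue-type checkpoint.

    queue: list of person IDs from head to tail.
    checkpoint_point: a point belonging to the queue-type checkpoint.
    A new arrival targets the checkpoint point if the queue is empty,
    otherwise the tail person; the queue head targets the checkpoint
    point; everyone else targets the person immediately ahead.
    """
    if person_id in queue:
        idx = queue.index(person_id)
        return checkpoint_point if idx == 0 else queue[idx - 1]
    return checkpoint_point if not queue else queue[-1]
```

These destinations are what feed the positional-relationship feature quantities of the movement model.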
 (7)上記(2)において、移動体は人であり、プロセッサは、動線情報から、複数の人の間の距離及び各人の移動速度が所定の条件を満たす状態の持続時間に基づいて、複数の人の間の対話の発生を判定した結果を、状態情報に含めて記憶装置に格納する(例えば図17の処理)。 (7) In (2) above, the moving body is a person, and the processor determines, from the flow line information, the occurrence of a conversation between a plurality of persons based on the duration of a state in which the distances between the persons and the moving speed of each person satisfy predetermined conditions, and stores the result, included in the state information, in the storage device (for example, the process of FIG. 17).
 これによって、人同士の対話状態も滞留状態として考慮したシミュレーションを実行することができる。 This makes it possible to execute a simulation that considers the dialogue state between people as a retention state.
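The conversation judgment of (7), with its distance, speed, and duration conditions holding simultaneously, might look like the following sketch. The thresholds (1.5 m, 0.3 m/s, 10 samples at 1 Hz) are illustrative placeholders, not values from the patent.

```python
import math

def detect_conversations(tracks, d_max=1.5, v_max=0.3, t_min=10):
    """Detect conversations between people from their flow lines.

    tracks: dict person_id -> list of (x, y) positions sampled at 1 Hz,
    all lists the same length. A pair is judged to have conversed once
    their distance stays within d_max and both speeds stay within v_max
    for at least t_min consecutive samples.
    Returns a set of (id, id) pairs with ids in sorted order.
    """
    ids = sorted(tracks)
    n = len(next(iter(tracks.values())))

    def speed(p, t):
        # displacement over one sample interval; zero at the first sample
        if t == 0:
            return 0.0
        (x0, y0), (x1, y1) = tracks[p][t - 1], tracks[p][t]
        return math.hypot(x1 - x0, y1 - y0)

    conversations = set()
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            run = 0  # current streak of "close and slow" samples
            for t in range(n):
                (ax, ay), (bx, by) = tracks[a][t], tracks[b][t]
                close = math.hypot(ax - bx, ay - by) <= d_max
                slow = speed(a, t) <= v_max and speed(b, t) <= v_max
                run = run + 1 if (close and slow) else 0
                if run >= t_min:
                    conversations.add((a, b))
                    break
    return conversations
```

A detected pair would then be recorded in the state information as a staying-like state, as (7) describes.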
 (8)上記(7)において、プロセッサは、複数の人の間の対話の発生を判定した結果に基づいて、空間内の領域ごとの対話の発生確率を計算し、生成された仮想的な動線と、対話の発生確率とに基づいて、対話の発生を予測する。 (8) In (7) above, the processor calculates the probability of occurrence of a conversation for each region in the space based on the result of determining the occurrence of conversations between a plurality of persons, and predicts the occurrence of conversations based on the generated virtual flow lines and the conversation occurrence probabilities.
 これによって、人同士の対話状態も滞留状態として考慮したシミュレーションを実行することができる。 This makes it possible to execute a simulation that considers the dialogue state between people as a retention state.
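One plausible reading of (8) is an empirical per-region rate combined with the simulated flow line. Both the rate estimate and the summed-expectation prediction below are modelling assumptions for illustration, with hypothetical names; the patent does not specify either formula.

```python
def conversation_rates(conversation_counts, visit_counts):
    """Empirical conversation probability per region: conversations / visits.

    conversation_counts: region id -> number of observed conversations.
    visit_counts: region id -> number of observed visits (non-zero).
    """
    return {r: conversation_counts.get(r, 0) / visit_counts[r]
            for r in visit_counts}

def predict_conversations(virtual_flow_line, rates, region_of):
    """Expected number of conversations along a simulated flow line.

    virtual_flow_line: sequence of positions produced by the simulator;
    region_of maps a position to a region id. Summing per-step
    probabilities into an expectation is an illustrative choice.
    """
    return sum(rates.get(region_of(p), 0.0) for p in virtual_flow_line)
```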
 (9)上記(1)において、プロセッサは、移動体の状態によって分類された動線情報と地図情報とに基づいて、移動体の状態ごとに、移動モデルを学習する。 (9) In (1) above, the processor learns a movement model for each state of the moving body, based on the flow line information classified according to the state of the moving body and on the map information.
 これによって、状態に基づいて移動先を予測するモデルを生成し、動線シミュレーションを高精度化することができる。 This makes it possible to generate a model that predicts the destination based on the state and improve the accuracy of the flow line simulation.
 (10)上記(1)において、プロセッサは、移動体の状態が特徴量として入力される移動モデルを学習する。 (10) In (1) above, the processor learns a movement model in which the state of the moving body is input as a feature quantity.
 これによって、状態に基づいて移動先を予測するモデルを生成し、動線シミュレーションを高精度化することができる。 This makes it possible to generate a model that predicts the destination based on the state and improve the accuracy of the flow line simulation.
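Items (9) and (10) describe two ways of conditioning the movement model on the state: a separate model learned per state, or the state folded in as an input feature. With the toy frequency table below, a stand-in for whatever learner the system actually uses, the two formulations coincide; all names are hypothetical.

```python
from collections import Counter, defaultdict

def learn_per_state(samples):
    """Learn one movement 'model' per state, as in (9).

    samples: iterable of (state, current_cp, next_cp) transitions taken
    from the state-classified flow line data. The model here is just the
    most frequent next checkpoint per (state, current checkpoint).
    """
    counts = defaultdict(Counter)
    for state, cur, nxt in samples:
        counts[(state, cur)][nxt] += 1
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

def predict(model, state, cur):
    # (10)'s alternative treats the state as an input feature; with this
    # table representation the two formulations give the same lookup.
    return model.get((state, cur))

samples = [
    ("walking", "gate", "hall"),
    ("walking", "gate", "hall"),
    ("queued", "gate", "desk"),
]
model = learn_per_state(samples)
print(predict(model, "walking", "gate"))  # → hall
print(predict(model, "queued", "gate"))   # → desk
```

The same transition data yields different predictions depending on the state, which is what makes the state-aware simulation of (9) and (10) more accurate than a state-blind model.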
 なお、本発明は上記した実施例に限定されるものではなく、様々な変形例が含まれる。例えば、上記した実施例は本発明のより良い理解のために詳細に説明したのであり、必ずしも説明の全ての構成を備えるものに限定されるものではない。また、ある実施例の構成の一部を他の実施例の構成に置き換えることが可能であり、また、ある実施例の構成に他の実施例の構成を加えることが可能である。また、各実施例の構成の一部について、他の構成の追加・削除・置換をすることが可能である。 The present invention is not limited to the above-described embodiment, but includes various modifications. For example, the above-mentioned examples have been described in detail for a better understanding of the present invention, and are not necessarily limited to those having all the configurations of the description. Further, it is possible to replace a part of the configuration of one embodiment with the configuration of another embodiment, and it is possible to add the configuration of another embodiment to the configuration of one embodiment. Further, it is possible to add / delete / replace a part of the configuration of each embodiment with another configuration.
 また、上記の各構成、機能、処理部、処理手段等は、それらの一部又は全部を、例えば集積回路で設計する等によってハードウェアで実現してもよい。また、上記の各構成、機能等は、プロセッサがそれぞれの機能を実現するプログラムを解釈し、実行することによってソフトウェアで実現してもよい。各機能を実現するプログラム、テーブル、ファイル等の情報は、不揮発性半導体メモリ、ハードディスクドライブ、SSD(Solid State Drive)等の記憶デバイス、または、ICカード、SDカード、DVD等の計算機読み取り可能な非一時的データ記憶媒体に格納することができる。 Each of the above configurations, functions, processing units, processing means, and the like may be realized partly or entirely in hardware, for example by designing them as an integrated circuit. Each of the above configurations, functions, and the like may also be realized in software by a processor interpreting and executing a program that implements the respective function. Information such as the programs, tables, and files implementing the functions can be stored in a storage device such as a nonvolatile semiconductor memory, a hard disk drive, or an SSD (Solid State Drive), or in a computer-readable non-transitory data storage medium such as an IC card, SD card, or DVD.
 また、制御線及び情報線は説明上必要と考えられるものを示しており、製品上必ずしも全ての制御線及び情報線を示しているとは限らない。実際にはほとんど全ての構成が相互に接続されていると考えてもよい。 In addition, the control lines and information lines indicate those that are considered necessary for explanation, and do not necessarily indicate all control lines and information lines in the product. In practice, it can be considered that almost all configurations are interconnected.

Claims (11)

  1.  プロセッサと、記憶装置と、を有する移動体移動予測システムであって、
     前記記憶装置は、
     移動体の時刻ごとの位置情報を含む動線情報と、
     前記移動体が移動可能な空間の地図情報と、
     前記移動体の移動に関する状態を示す状態情報と、を保持し、
     前記プロセッサは、
     前記動線情報、前記地図情報及び前記移動体の状態に基づいて、前記移動体の移動先を予測する移動モデルを学習し、
     前記移動体の初期条件及び前記移動モデルに基づいて前記移動体の仮想的な動線を生成することを特徴とする移動体移動予測システム。
    A moving body movement prediction system comprising a processor and a storage device, wherein
    the storage device holds:
    flow line information including position information of a moving body for each time;
    map information of a space in which the moving body can move; and
    state information indicating a state relating to the movement of the moving body, and
    the processor is configured to:
    learn a movement model that predicts a movement destination of the moving body based on the flow line information, the map information, and the state of the moving body; and
    generate a virtual flow line of the moving body based on an initial condition of the moving body and the movement model.
  2.  請求項1に記載の移動体移動予測システムであって、
     前記地図情報は、それぞれ前記移動体が通過、滞留及び行列の少なくともいずれかをし得る複数のチェックポイントの位置及び種類の情報を含み、
     前記プロセッサは、前記動線情報及び前記地図情報に基づいて、前記移動体と前記チェックポイントとの位置関係、前記チェックポイントの種類及び前記移動体の速度から、前記移動体の状態を判定した結果を、前記状態情報として前記記憶装置に格納することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 1, wherein
    the map information includes information on positions and types of a plurality of checkpoints, at each of which the moving body can do at least one of passing, staying, and queueing, and
    the processor determines the state of the moving body from the positional relationship between the moving body and the checkpoints, the types of the checkpoints, and the speed of the moving body, based on the flow line information and the map information, and stores the determination result in the storage device as the state information.
  3.  請求項2に記載の移動体移動予測システムであって、
     前記地図情報は、各チェックポイントの種類が滞留型であるかを示す情報を含み、
     前記移動体の状態は、前記滞留型のチェックポイントに滞留している状態を含み、
     前記プロセッサは、前記滞留型のチェックポイントとの位置関係が所定の条件を満たし、かつ、移動速度が所定の値以下である前記移動体の状態を、前記滞留型のチェックポイントに滞留している状態と判定することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 2, wherein
    the map information includes information indicating whether the type of each checkpoint is a staying type,
    the states of the moving body include a state of staying at a staying-type checkpoint, and
    the processor determines that the moving body is in the state of staying at the staying-type checkpoint when the positional relationship between the moving body and the staying-type checkpoint satisfies a predetermined condition and the moving speed of the moving body is equal to or less than a predetermined value.
  4.  請求項2に記載の移動体移動予測システムであって、
     前記地図情報は、各チェックポイントの種類が行列型であるかを示す情報を含み、
     前記移動体の状態は、前記行列型のチェックポイントの行列に並んでいる状態を含み、
     前記プロセッサは、前記行列型のチェックポイントとの位置関係が所定の条件を満たし、かつ、移動速度が所定の値以下である前記移動体の状態を、前記行列型のチェックポイントの行列に並んでいる状態と判定することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 2, wherein
    the map information includes information indicating whether the type of each checkpoint is a queue type,
    the states of the moving body include a state of standing in a queue at a queue-type checkpoint, and
    the processor determines that the moving body is in the state of standing in the queue of the queue-type checkpoint when the positional relationship between the moving body and the queue-type checkpoint satisfies a predetermined condition and the moving speed of the moving body is equal to or less than a predetermined value.
  5.  請求項4に記載の移動体移動予測システムであって、
     前記プロセッサは、
     前記行列型のチェックポイントと、前記行列型のチェックポイントにおける行列の対象の地点と、前記行列型のチェックポイントの行列に並んでいる状態に分類された前記移動体と、の位置関係に基づいて、前記行列の先頭に並んでいる前記移動体を特定し、
     前記行列における順序が特定されている前記移動体と他の前記移動体との位置関係に基づいて、前記行列における順序が特定されている前記移動体の次に並んでいる前記移動体を特定する処理を再帰的に行うことによって、前記行列に並んでいる前記移動体の順序を特定することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 4, wherein
    the processor
    identifies the moving body at the head of the queue based on the positional relationship among the queue-type checkpoint, the point that the queue at the queue-type checkpoint is for, and the moving bodies classified into the state of standing in the queue of the queue-type checkpoint, and
    identifies the order of the moving bodies standing in the queue by recursively performing a process of identifying, based on the positional relationship between a moving body whose order in the queue has been identified and the other moving bodies, the moving body standing next after that moving body.
  6.  請求項5に記載の移動体移動予測システムであって、
     前記移動モデルに入力される特徴量は、前記移動体と前記移動体の目的地との位置関係に関する特徴量を含み、
     前記行列型のチェックポイントに前記移動体が並んでいない場合、前記行列型のチェックポイントに向かう前記移動体の目的地は、前記行列型のチェックポイントに属するいずれかの地点であり、
     前記行列型のチェックポイントに1以上の前記移動体が並んでいる場合、前記行列型のチェックポイントに向かう前記移動体の目的地は、前記行列の最後尾に並んでいる前記移動体であり、
     前記行列型のチェックポイントに1以上の前記移動体が並んでいる場合、前記行列型のチェックポイントの先頭に並んでいる前記移動体の目的地は、前記行列型のチェックポイントに属するいずれかの地点であり、
     前記行列型のチェックポイントに2以上の前記移動体が並んでいる場合、前記行列型のチェックポイントの2番目以降に並んでいる前記移動体の目的地は、前記行列の一つ前に並んでいる前記移動体であることを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 5, wherein
    the feature quantities input to the movement model include a feature quantity relating to the positional relationship between the moving body and a destination of the moving body,
    when no moving body is standing at the queue-type checkpoint, the destination of a moving body heading for the queue-type checkpoint is one of the points belonging to the queue-type checkpoint,
    when one or more of the moving bodies are standing at the queue-type checkpoint, the destination of a moving body heading for the queue-type checkpoint is the moving body standing at the tail of the queue,
    when one or more of the moving bodies are standing at the queue-type checkpoint, the destination of the moving body standing at the head of the queue is one of the points belonging to the queue-type checkpoint, and
    when two or more of the moving bodies are standing at the queue-type checkpoint, the destination of each moving body standing at the second or later position in the queue is the moving body standing immediately ahead of it.
  7.  請求項2に記載の移動体移動予測システムであって、
     前記移動体は人であり、
     前記プロセッサは、前記動線情報から、複数の前記人の間の距離及び各人の移動速度が所定の条件を満たす状態の持続時間に基づいて、前記複数の人の間の対話の発生を判定した結果を、前記状態情報に含めて前記記憶装置に格納することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 2, wherein
    the moving body is a person, and
    the processor determines, from the flow line information, the occurrence of a conversation between a plurality of the persons based on the duration of a state in which the distances between the plurality of persons and the moving speed of each person satisfy predetermined conditions, and stores the determination result, included in the state information, in the storage device.
  8.  請求項7に記載の移動体移動予測システムであって、
     前記プロセッサは、
     前記複数の人の間の対話の発生を判定した結果に基づいて、前記空間内の領域ごとの対話の発生確率を計算し、
     前記生成された仮想的な動線と、前記対話の発生確率とに基づいて、対話の発生を予測することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 7, wherein
    the processor
    calculates a probability of occurrence of a conversation for each region in the space based on the result of determining the occurrence of conversations between the plurality of persons, and
    predicts the occurrence of a conversation based on the generated virtual flow line and the conversation occurrence probabilities.
  9.  請求項1に記載の移動体移動予測システムであって、
     前記プロセッサは、前記移動体の状態によって分類された前記動線情報と前記地図情報とに基づいて、前記移動体の状態ごとに、前記移動モデルを学習することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 1, wherein
    the processor learns the movement model for each state of the moving body, based on the flow line information classified according to the state of the moving body and on the map information.
  10.  請求項1に記載の移動体移動予測システムであって、
     前記プロセッサは、前記移動体の状態が特徴量として入力される前記移動モデルを学習することを特徴とする移動体移動予測システム。
    The moving body movement prediction system according to claim 1, wherein
    the processor learns the movement model with the state of the moving body input as a feature quantity.
  11.  プロセッサと、記憶装置と、を有する計算機システムによる移動体移動予測方法であって、
     前記記憶装置は、
     移動体の時刻ごとの位置情報を含む動線情報と、
     前記移動体が移動可能な空間の地図情報と、
     前記移動体の移動に関する状態を示す状態情報と、を保持し、
     前記移動体移動予測方法は、
     前記プロセッサが、前記動線情報、前記地図情報及び前記移動体の状態に基づいて、前記移動体の移動先を予測する移動モデルを学習する手順と、
     前記プロセッサが、前記移動体の初期条件及び前記移動モデルに基づいて前記移動体の仮想的な動線を生成する手順と、を含むことを特徴とする移動体移動予測方法。
    A moving body movement prediction method performed by a computer system having a processor and a storage device, wherein
    the storage device holds:
    flow line information including position information of a moving body for each time;
    map information of a space in which the moving body can move; and
    state information indicating a state relating to the movement of the moving body, and
    the moving body movement prediction method includes:
    a step in which the processor learns a movement model that predicts a movement destination of the moving body based on the flow line information, the map information, and the state of the moving body; and
    a step in which the processor generates a virtual flow line of the moving body based on an initial condition of the moving body and the movement model.
PCT/JP2021/018093 2020-05-18 2021-05-12 Mobile body movement prediction system and mobile body movement prediction method WO2021235296A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-086998 2020-05-18
JP2020086998A JP2021182220A (en) 2020-05-18 2020-05-18 Moving object movement prediction system and moving object movement prediction method

Publications (1)

Publication Number Publication Date
WO2021235296A1 true WO2021235296A1 (en) 2021-11-25

Family

ID=78606561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/018093 WO2021235296A1 (en) 2020-05-18 2021-05-12 Mobile body movement prediction system and mobile body movement prediction method

Country Status (2)

Country Link
JP (1) JP2021182220A (en)
WO (1) WO2021235296A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011118777A (en) * 2009-12-04 2011-06-16 Sony Corp Learning device, learning method, prediction device, prediction method, and program
JP2012103902A (en) * 2010-11-10 2012-05-31 Nippon Telegr & Teleph Corp <Ntt> Action prediction method, device and program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KITANO YU; ASAHARA AKINORI: "Pedestrian Flow Simulation Based on OD Network Analysis. Multimedia, Distributed, Cooperative, and Mobile (DICOMO 2019)", IPSJ SYMPOSIUM SERIES: MULTIMEDIA, DISTRIBUTED, COOPERATIVE, AND MOBILE SYMPOSIUM, vol. 2019, no. 1, 3 July 2019 (2019-07-03), pages 1457 - 1462, XP009532008, ISSN: 1882-0840 *

Also Published As

Publication number Publication date
JP2021182220A (en) 2021-11-25

Similar Documents

Publication Publication Date Title
Martin-Martin et al. Jrdb: A dataset and benchmark of egocentric robot visual perception of humans in built environments
JP6236448B2 (en) Sensor arrangement determination device and sensor arrangement determination method
CN109429518A (en) Automatic Pilot traffic forecast based on map image
CN108885492A (en) Virtual objects path clustering
KR20160026707A (en) System for determining the location of entrances and areas of interest
WO2020158488A1 (en) Movement route predict system, movement route predict method, and computer program
CN106105184A (en) Time delay in camera optical projection system reduces
JP7233889B2 (en) Pedestrian simulation device
KR102198920B1 (en) Method and system for object tracking using online learning
US20230281357A1 (en) Generating simulation environments for testing av behaviour
JP4411393B2 (en) Analysis system
CN115699098A (en) Machine learning based object identification using scale maps and three-dimensional models
Bera et al. Reach-realtime crowd tracking using a hybrid motion model
JP7273601B2 (en) Congestion analysis device and congestion analysis method
EP3907679A1 (en) Enhanced robot fleet navigation and sequencing
CN113733086A (en) Robot traveling method, device, equipment and storage medium
WO2021235296A1 (en) Mobile body movement prediction system and mobile body movement prediction method
WO2013164140A1 (en) Method, apparatus and computer program product for simulating the movement of entities in an area
Alexandersson et al. Pedestrians in microscopic traffic simulation. Comparison between software Viswalk and Legion for Aimsun.
CN115357500A (en) Test method, device, equipment and medium for automatic driving system
WO2021241239A1 (en) Prediction system, prediction method, and prediction program
JP7449982B2 (en) Policy formulation support system, policy formulation support method, and policy formulation support program
JP7446416B2 (en) Space-time pose/object database
KR20180036562A (en) Information processing apparatus, information processing method, and storage medium
US20220034664A1 (en) Utilizing machine learning and a network of trust for crowd and traffic control and for mapping a geographical area

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21807551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21807551

Country of ref document: EP

Kind code of ref document: A1