CN105005505A - Parallel processing method for aerial multi-target-trace prediction - Google Patents

Parallel processing method for aerial multi-target-trace prediction

Info

Publication number
CN105005505A
Authority
CN
China
Prior art keywords
target
data
computing node
target data
task
Prior art date
Legal status
Granted
Application number
CN201510357525.5A
Other languages
Chinese (zh)
Other versions
CN105005505B (en)
Inventor
王雪
袁家斌
刘爽
赵兴方
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN201510357525.5A
Publication of CN105005505A
Application granted
Publication of CN105005505B
Status: Active

Abstract

The invention discloses a parallel processing method for aerial multi-target-trace prediction, belonging to the technical field of aerial multi-target-trace prediction. In a cluster environment, a distributed framework is constructed comprising a main node responsible for task scheduling and logic-transaction management and computing nodes that predict the traces of multiple targets; communication between the main node and the computing nodes, as well as parallel processing of the logic transactions, is realized in a two-level parallel manner using the MPI and Pthreads standards; and a task-distribution policy based on aircraft state is proposed, so that the task distribution across computing nodes becomes more balanced while communication time is effectively shortened. This solves the technical problems that a single computer node has low processing capacity and cannot meet the system's real-time requirements.

Description

Parallel processing method for aerial multi-target track prediction
Technical field
The invention discloses a parallel processing method for aerial multi-target track prediction, and belongs to the technical field of aerial multi-target track prediction.
Background technology
Aerial multi-target track prediction must track a large number of aerial targets and provide information such as each target's instantaneous position, velocity and indicated impact point. The system must distribute and process a large amount of data within an extremely short time, so the real-time requirements are high.
Multi-target track prediction task data have the following characteristics: 1. horizontal independence — the target batches are mutually independent; 2. longitudinal correlation — orbit determination and track correction for a batch of targets require backtracking over that batch's historical data; 3. single-target data stability — as long as a target has not left the monitored region, its data keep arriving continuously; 4. target-count instability — new targets appear and old targets disappear.
At present, multi-target track prediction tasks are processed serially: a single computer processes the data in the order in which they arrive. This neither fully exploits the computing power of multi-core CPUs nor meets the real-time requirements. As the number of aerial targets grows, the amount of task data generated increases sharply, and the demands on system processing capacity become ever harsher.
A cluster environment built from multiple computers, combined with parallel-programming technology (such as MPI, the Message Passing Interface, a standard specification for message-passing libraries), can greatly shorten the overall execution time: the tasks are partitioned rationally, assigned to the computer nodes, and processed simultaneously, thereby meeting the real-time requirements. The horizontal independence of multi-target track prediction tasks gives the overall workload a high degree of parallelism, while their longitudinal correlation makes the design of a task-scheduling method in a cluster environment challenging.
Research on processing multi-target track prediction tasks has so far concentrated on trajectory-calculation algorithms; current patents and literature contain no parallel processing method that deploys the multi-target track prediction task in a cluster environment.
Summary of the invention
The technical problem to be solved by this invention is the deficiency of the background technology above. The invention provides a parallel processing method for aerial multi-target track prediction that takes into account the horizontal independence and longitudinal correlation of multi-target track prediction, proposes a parallel processing scheme based on a computer cluster together with a task-assignment policy based on flight state, makes the task assignment of each computing node more balanced while effectively shortening communication time, and solves the technical problems that a single computer node has low processing capacity and cannot meet the system's real-time requirements.
To achieve the above object, the present invention adopts the following technical scheme:
A parallel processing method for aerial multi-target track prediction comprises the following steps:
In a cluster environment, build a system comprising a host node responsible for task scheduling and logic-transaction management, and computing nodes that predict multiple target tracks; the computing nodes are mutually independent. The system adopts two-level parallelism based on the MPI and Pthreads standards to realize communication between the host node and the computing nodes and parallel processing of the logic transactions.
The host node classifies target data by batch number, stores them in a map table, and sends the classified target data to the computing nodes according to the task-scheduling policy; the Taskassign table records the number of the computing node to which each batch of target data is assigned, and the Proctasknum table records the task amount assigned to each computing node.
Inside each computing node, a target-data receiving thread and a trajectory-calculation thread execute in parallel: a partial map table records the batch-classified target data received by the receiving thread, and the trajectory-calculation thread sends the trajectory-prediction results back to the host node once its calculation completes.
As a further optimization of the parallel processing method for aerial multi-target track prediction, the task amount P_i assigned to computing node i is obtained from an expression in which j is the index of a current target task on computing node i, m_i is the number of current target tasks on computing node i, and f(t_{i,j}, X_{i,j}, S_{i,j}) is the probability function for the disappearance of target data, where t_{i,j}, X_{i,j} and S_{i,j} are respectively the arrival time, initial position and current state of the j-th target task on the i-th computing node, and Δt denotes the interval within which new tasks are assumed not to disappear.
Further, in the parallel processing method for aerial multi-target track prediction, the map table is constructed as follows:
a vector recording the target data is generated according to the batch number of the initial target data, and the vectors recording each batch of target data form the map table;
when new target data arrive, data of a batch already recorded are appended to the corresponding vector, while data of a batch not yet recorded cause a new vector, keyed by their batch number, to be generated;
once a target is detected to have left the monitored range, the records of that target's data are deleted from the map table.
Further, in the parallel processing method for aerial multi-target track prediction, the classified target data are sent to the computing nodes according to the task-scheduling policy as follows: after new target data arrive, once the length of the new target data exceeds the packet-sending threshold, the host node sends the new target data to the computing node with the minimum task amount.
Further, in the parallel processing method for aerial multi-target track prediction, N × T + a is used as the time threshold for detecting whether a target has left the monitored range, where N is the packet-sending threshold for each batch of target data, T is the time interval used to judge that a target has disappeared after its data last arrived, and a is the upper bound on the network transmission delay.
As a further optimization of the parallel processing method for aerial multi-target track prediction, the target-data receiving thread and the trajectory-calculation thread interact through a semaphore mechanism: when the computing node receives data, the semaphore is incremented by 1; after the trajectory-calculation thread copies data out of the partial map table, the semaphore is decremented by 1.
As a further optimization of the parallel processing method for aerial multi-target track prediction, the vector recording each batch of target data comprises: the target position coordinates, the meridional velocity, the zonal velocity and the observation time.
By adopting the above technical scheme, the present invention has the following beneficial effects:
1. Because the task data exhibit longitudinal correlation — orbit determination and track correction for a batch of targets require backtracking over historical data — the invention binds each task to a computing node: the data of one target are only ever sent to the same computing node for calculation. All historical data therefore reside on the local node, no data need to be transferred from other nodes during backtracking, and communication time is greatly reduced.
2. Under this task-to-node binding, a task-scheduling policy based on flight state is proposed: the probability that a target disappears is used to estimate the task amount it generates, and this estimate serves as the reference standard during task assignment, making the task assignment of the computing nodes more balanced.
3. Because the task data arrive in real time, thread sThread0 of a computing node receives the data sent by the Master node, reclassifies them and restores them to the form of the Master node's map table, denoted the partial map table. When receiving data, sThread0 first probes the MPI buffer with the asynchronous probe function MPI_Iprobe and calls the synchronous MPI receive function only once data have actually arrived, which avoids the CPU being occupied continuously.
4. A semaphore mechanism governs the interaction between sThread0 and sThread1, operating as follows: when sThread0 receives data, the semaphore is incremented by 1; when sThread1 copies data out of the partial map table, the semaphore is decremented by 1 and the calculation proceeds, after which the result is returned directly to the host node. The semaphore mechanism avoids busy-polling the partial map table and prevents the CPU from idling when no data are present, saving computational resources.
Additional aspects and advantages of the invention will in part be given in the following description, will in part become obvious from it, or will be learned through practice of the invention.
Accompanying drawing explanation
Fig. 1 is the parallel processing architecture for aerial multi-target track prediction;
Fig. 2 is the real-time construction process of the map table;
Fig. 3(1) and Fig. 3(2) are the processing flows of sThread0 and sThread1 on a computing node, respectively;
Fig. 4 is the flow chart of one processing round of Thread1 on the Master node;
Fig. 5(1) is the initial map table;
Fig. 5(2) is the map table after target ph5 arrives;
Fig. 6 is the Taskassign table;
Fig. 7 is the Proctasknum table.
Embodiment
Embodiments of the present invention are described in detail below. The embodiments described with reference to the drawings are exemplary, serve only to explain the present invention, and are not to be interpreted as limiting it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the word "comprise" used in this specification refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. Furthermore, "connected" or "coupled" as used herein may include wireless connection or coupling. The wording "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be interpreted as having meanings consistent with their meaning in the context of the prior art and, unless defined as here, are not to be interpreted in an idealized or overly formal sense.
ph is the target batch number, the system's unique identifier for each batch of targets; T is the time threshold for judging that a target has disappeared, i.e. a target is considered to have disappeared if no new data appear within T seconds of its last appearance; N is the packet-sending threshold for each batch of target data, i.e. for each batch the data are sent only after N packets have accumulated, thereby reducing network communication.
(1) architecture design
The overall architecture for trajectory-prediction task processing is built in master/slave mode. The computer nodes are divided into two classes: a task-scheduling and logic-transaction-management node (the Master node, of which there is exactly one) and computing nodes (Slave nodes, of which there are several). The Master node is responsible for receiving raw data, classifying tasks, distributing and scheduling tasks, and collecting calculation results; the Slave nodes are responsible for trajectory calculation. The computational framework adopts a two-level MPI (Message Passing Interface) + Pthreads parallel mode: MPI handles the communication between nodes at the upper level, while Pthreads parallelizes the logic transactions inside each node at the lower level. With this distributed architecture, the system can be expanded flexibly by adding computing nodes when the number of tasks exceeds the system load. The parallel processing architecture is shown in Fig. 1.
(2) Master node design
The Master node is mainly responsible for receiving raw data, classifying data, assigning tasks, collecting results and other logical transactions. One MPI process is opened on the Master node, and within this process pthreads are used to create three data-processing threads that work concurrently:
1) Thread0 receives the original target data (out of order, with the batches mixed together) and stores them directly in the buffer zone buffer;
2) Thread1 has three tasks, as shown in Fig. 4: (i) read data from the buffer, classify them by target batch number ph, and store them in the map table; (ii) send the classified data to the computing nodes according to the task-scheduling policy (described in detail below); (iii) detect whether each target is still within the monitored range — if no new data for a target arrive within T seconds, the target is considered to have disappeared, its data space is released and its records are removed;
3) Thread2 is responsible for receiving the calculation results returned by the computing nodes.
Thread1 is the most complex thread on the Master node and the key thread realizing task scheduling. For the out-of-order data in the buffer, Thread1 classifies them by ph to facilitate subsequent distribution and stores the classified data in a map table. The key of the map table is ph, identical to the ph value of the corresponding batch of target data; the value is the received raw data, stored in the form of a vector. Each item of the value is a packet comprising, for a certain moment, the target position coordinates, meridional velocity, zonal velocity, observation time and other information. Because of single-target data stability and target-count instability, the map table is built in real time; its construction method is as follows, and its flow chart is shown in Fig. 2.
When new data data arrive:
1. compute the ph' value of data;
2. search for ph' in the map table; if found, go to step 3, otherwise go to step 4;
3. append data to the row of ph' and update the length len of the value; if len exceeds the threshold N, send the data to a computing node; go to step 5;
4. add a row to the map table and store ph' and data in it; go to step 5;
5. end.
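The five steps above can be sketched in C++ (the patent's map/vector tables suggest the STL); the `Packet` struct, its field names and the `insert_packet` helper are illustrative assumptions, not the patent's code — the real system would call MPI_Send at the flush point:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical packet record: one observation of one target batch
// (field names assumed from the description of the value vector).
struct Packet {
    double x, y;      // target position coordinates
    double v_merid;   // meridional velocity
    double v_zonal;   // zonal velocity
    double t_obs;     // observation time
};

// The map table: key is the batch number ph, value is the vector of packets.
using MapTable = std::map<std::string, std::vector<Packet>>;

// Insert one arriving packet; returns true when the batch has accumulated
// N packets and should be flushed to a computing node (steps 1-5 above).
bool insert_packet(MapTable& table, const std::string& ph, const Packet& p,
                   std::size_t N) {
    auto& vec = table[ph];   // steps 2/4: find the row for ph, or create it
    vec.push_back(p);        // step 3: append the data to that row
    if (vec.size() >= N) {   // step 3: length len reached threshold N
        vec.clear();         // flushed: MPI_Send would happen here
        return true;
    }
    return false;
}
```

Note that `table[ph]` covers both branches of step 2 at once: `operator[]` creates the row when ph is absent, which matches the construction method's "add a row and store ph' and data".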
(3) Computing node design
The number of MPI calculation processes opened on a computing node can be determined by its computing capability (an ordinary computer generally opens only one; the method description below assumes by default that each computing node opens one calculation process). Two threads are created in each computing node: one is responsible for receiving the data distributed by the Master node, the other for calculating and returning the results. The processing flows are shown in Fig. 3(1) and Fig. 3(2).
1) sThread0 receives the data sent by the Master node, reclassifies them and restores them to the form of the Master node's map table, denoted the partial map table. Because the data arrive in real time, sThread0 first probes the MPI buffer with the asynchronous probe function MPI_Iprobe and calls the synchronous MPI receive function only when data have arrived, avoiding continuous CPU occupation.
2) sThread1 interacts with sThread0 through a semaphore mechanism, designed as follows: when sThread0 receives data, the semaphore is incremented by 1; when sThread1 copies data out of the partial map table, the semaphore is decremented by 1 and the calculation is performed. After the calculation completes, the result is returned directly to the host node. The semaphore mechanism avoids busy-polling the partial map table and prevents the CPU from idling when no data are present, saving computational resources.
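A minimal model of this handshake, assuming C++ threads and a mutex/condition-variable counting semaphore in place of POSIX `sem_t`; the MPI receive is replaced by an in-memory stub, and all names (`SharedState`, `sthread0_receive`, `sthread1_compute`) are illustrative, not the patent's code:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal counting semaphore (sem_post/sem_wait equivalents), so the
// sketch does not depend on POSIX <semaphore.h>.
class Semaphore {
    std::mutex m_;
    std::condition_variable cv_;
    int count_ = 0;
public:
    void post() {
        { std::lock_guard<std::mutex> lk(m_); ++count_; }
        cv_.notify_one();
    }
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ > 0; });
        --count_;
    }
};

// Shared state standing in for the partial map table; the MPI receive is
// replaced here by pushing pre-made items.
struct SharedState {
    Semaphore sem;
    std::mutex table_mutex;
    std::queue<int> partial_map;  // stand-in for the partial map table
    std::vector<int> results;     // stand-in for results sent to the master
};

void sthread0_receive(SharedState& s, const std::vector<int>& incoming) {
    for (int pkt : incoming) {
        { std::lock_guard<std::mutex> lk(s.table_mutex); s.partial_map.push(pkt); }
        s.sem.post();             // data arrived: semaphore += 1
    }
}

void sthread1_compute(SharedState& s, std::size_t expected) {
    for (std::size_t i = 0; i < expected; ++i) {
        s.sem.wait();             // block until data exist: semaphore -= 1
        int pkt;
        { std::lock_guard<std::mutex> lk(s.table_mutex);
          pkt = s.partial_map.front(); s.partial_map.pop(); }
        s.results.push_back(pkt * 2);  // placeholder "trajectory calculation"
    }
}
```

The blocking `wait()` is the point of the design: sThread1 sleeps instead of looping over the partial map table, which is exactly the busy-polling the patent says the semaphore avoids.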
(4) Synchronization between the Master node and the computing nodes
Because the number of targets is unstable, the Master node and the computing nodes need to stay consistent when tasks are added and removed. The present invention dispenses with explicit control messages; the detailed process is as follows:
1) Adding a target task is comparatively simple: the map-table construction algorithm already realizes task addition on the Master node, and in addition the Taskassign and Proctasknum tables (introduced in the task-scheduling policy below) must be updated. The partial map table on a computing node is built with the same method as the map table, so its task-addition process is identical; when a new target arrives, the Master node and the computing nodes are therefore automatically synchronized.
2) Removing a target is more complex. The invention places a timestamp in the Master node's map table recording the time the last data arrived; while traversing the map table, the difference between the current time and the timestamp is computed, and if it exceeds the threshold T the target is considered to have flown out of the monitored region: its information is deleted from the map table, and the Taskassign and Proctasknum tables are updated at the same time. The computing nodes maintain the partial map table with the same method, but their threshold is at least N × T + a, where a is the upper bound on the network transmission delay. In this way no control messages are needed, and the mapping tables of the Master node and the computing nodes remain essentially synchronized.
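The timestamp sweep can be sketched as follows; `BatchRecord` and `sweep_expired` are assumed names, and the same routine stands in for both sides, with `threshold = T` on the Master node and `threshold >= N * T + a` on a computing node:

```cpp
#include <map>
#include <string>

// Per-batch record carrying the timestamp of the last arriving packet
// (seconds); the packet data themselves are omitted from this sketch.
struct BatchRecord {
    double last_arrival;
};

// Sweep the table and erase batches whose last packet is older than the
// threshold: T on the Master node, at least N*T + a on a computing node
// (a = upper bound on network transmission delay). Returns batches removed.
int sweep_expired(std::map<std::string, BatchRecord>& table, double now,
                  double threshold) {
    int removed = 0;
    for (auto it = table.begin(); it != table.end(); ) {
        if (now - it->second.last_arrival > threshold) {
            it = table.erase(it);  // target assumed to have left the region;
            ++removed;             // Taskassign/Proctasknum updated here too
        } else {
            ++it;
        }
    }
    return removed;
}
```

Because both sides run the same sweep against the same arrival stream, and the computing node's larger threshold absorbs the send batching (N packets) plus network delay (a), the two tables stay consistent without any control message.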
(5) Task-assignment policy
For each batch of targets, the Master node sends data to a computing node once it has received the appropriate number of packets (N packets). To achieve system load balancing, the invention designs two tables to manage the assignment situation: the Taskassign table records the number of the computing node to which each target batch is assigned, and the Proctasknum table records the task amount already assigned to each computing node. In view of single-target data stability, target-count instability and the binding of tasks to computing nodes in actual processing, the invention establishes the following task model, estimates the task amount from the target's flight state, and proposes a task-assignment policy for the computer-cluster environment.
Task model: 1. the data of every target arrive at the same rate, and each group of data takes time c to process; 2. each target disappears with a certain, constant probability; 3. different targets have different disappearance probabilities, related to information such as the initial position and flight state; the probability that each target disappears is assumed to be determined by a function f(t, X, S), where t is time, X is the target's initial position and S is the target's state (current velocity and acceleration); 4. within Δt, new tasks do not disappear; 5. the maximum processing capacity of the system is not exceeded.
Under this task model, the system delay comprises two parts: the data-processing delay (covering data calculation and transmission) and the queueing delay. The queueing delay depends on the data calculation time and the current number of tasks on the node; when the number of tasks on the k-th node increases by 1, the upper bound of the queueing delay increases linearly, i.e. ΔG = c × (2m_k + 1), where m_k is the current number of target tasks on the k-th node. Using this queueing model, the computing node that minimizes the increment of the system queueing-delay bound over Δt is selected. When a new task TASK arrives, the processing proceeds as follows:
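As a small sketch under the model's assumptions (the `delta_g` and `pick_node` helpers are illustrative, not the patent's code), the delay-bound increment and the resulting node choice look like this:

```cpp
#include <cstddef>
#include <vector>

// Increment of the queueing-delay upper bound when one more task is placed
// on node k that currently holds m_k tasks (c = per-group processing time).
double delta_g(double c, int m_k) { return c * (2 * m_k + 1); }

// Pick the node whose delay-bound increment is smallest; with a common c
// this reduces to the node with the fewest current tasks.
std::size_t pick_node(double c, const std::vector<int>& m) {
    std::size_t best = 0;
    for (std::size_t k = 1; k < m.size(); ++k)
        if (delta_g(c, m[k]) < delta_g(c, m[best])) best = k;
    return best;
}
```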
1. According to the Taskassign table, use the formula f(t, X, S) to update the task amount P_i of each computing node in the Proctasknum table, obtaining the task amount P_i assigned to computing node i, where j is the index of a current target task on computing node i, m_i is the number of current target tasks on computing node i, f is the probability function for the disappearance of target data, t_{i,j}, X_{i,j} and S_{i,j} are respectively the arrival time, initial position and current state of the j-th target task on the i-th computing node, and Δt is the interval within which, by assumption, new tasks do not disappear;
2. Query the Proctasknum table and find the computing node P_min with the minimum task amount;
3. Add a row to the Taskassign table, recording the batch number of TASK and P_min;
4. Assign TASK to node P_min.
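Steps 1-4 can be sketched as below. The exact expression for P_i is not reproduced in this text, so the task amount is modeled here as the expected number of surviving tasks, Σ(1 − f) over a node's current tasks — one plausible reading, stated as an assumption; `TaskState`, `assign_task` and the stub f are all illustrative names:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// State of one current target task on a node: arrival time t, initial
// position X, current state S (scalar placeholders for the patent's symbols).
struct TaskState { double t; double X; double S; };

// Hypothetical disappearance-probability function f(t, X, S); the patent
// leaves its definition to the characteristics of the monitored targets.
using DisappearFn = double (*)(const TaskState&);

// Taskassign: batch number -> node id. Proctasknum: node id -> task amount.
// Step 1: re-estimate each node's task amount as its expected surviving
// tasks (assumption: 1 - f per task). Steps 2-4: pick the node with the
// minimal amount, record the batch, and assign.
int assign_task(const std::string& ph,
                const std::map<int, std::vector<TaskState>>& node_tasks,
                DisappearFn f,
                std::map<std::string, int>& taskassign,
                std::map<int, double>& proctasknum) {
    int best = -1;
    for (const auto& [node, tasks] : node_tasks) {
        double P = 0.0;
        for (const TaskState& ts : tasks) P += 1.0 - f(ts);  // expected survivors
        proctasknum[node] = P;                               // step 1
        if (best < 0 || P < proctasknum[best]) best = node;  // step 2
    }
    taskassign[ph] = best;   // step 3: new row in the Taskassign table
    return best;             // step 4: TASK is sent to this node
}
```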
This task-scheduling method does not directly use the number of tasks on each computing node as the index of node load, because the duration of each target's presence may differ: a target that stays in the monitored region for a long time generates a large task amount, while a target assigned to another computing node may leave the monitored range within a very short time and generate a small task amount. In practice, the data volumes the computing nodes must process can therefore differ greatly, making the computational load unbalanced. The advantage of this scheme is that the task amount a target generates is estimated from its disappearance probability, making the task assignment of the computing nodes more balanced. The function f(t, X_0, S) is usually defined according to the characteristics of the monitored targets.
The computational complexity of the above assignment policy is very low, O(n + m), where n is the number of computing nodes and m is the number of targets.
To facilitate understanding of the embodiments of the present invention, several specific examples are further explained below in conjunction with the drawings; none of these examples limits the embodiments of the invention.
One of ordinary skill in the art will appreciate that the drawings are schematic diagrams of one embodiment, and that the modules or flows in the drawings are not necessarily required for implementing the present invention.
Suppose: 1. there is one Master node and three Slave nodes, numbered 1, 2 and 3, and each Slave node opens one calculation process; 2. the current records of the map table are shown in Fig. 5(1), i.e. five target batches have appeared so far: ph1, ph2, ph3, ph4, ph5; 3. the current Taskassign table is shown in Fig. 6: targets ph1 and ph5 are assigned to computing node 1, ph2 and ph4 to computing node 2, and ph3 to computing node 3; 4. the current Proctasknum table is shown in Fig. 7, where the evaluation standard of the task-amount column is the expression for the task amount P_i above.
Example one: target data ph5 arrive (i.e. data of an already present target that has not disappeared)
Step 1: Thread0 of the Master node receives target data through the network interface and puts them into the allocated buffer zone buffer; this time the data of target ph5 are received;
Step 2: Thread1 reads the ph5 data from the buffer and queries the map table; the map table already contains data of target ph5, so the data are appended to the value of the row of ph5 in the map table, which becomes the form of Fig. 5(2), and the arrival time of the target data is recorded at the same time;
Step 3: check the length len of the packets in the value of ph5. If len is less than the set sending threshold N, no data are sent and the processing of ph5 ends. If len equals the sending threshold N, prepare to send the data: first traverse the Taskassign table (Fig. 6) to look up the computing node of this target — computing node 1 — then call MPI_Send() to send the N packets to computing node 1;
Step 4: sThread0 of the Slave node receives the ph5 task data and first traverses the partial map table: if the table contains no data of this target, a row is added at the end of the table; if it does, the data are appended to the value corresponding to ph5 in the partial map table. At the same time, the semaphore signal is incremented by 1, notifying sThread1 that data have arrived;
Step 5: sThread1 finds that the semaphore signal is greater than 0, reads the data from the partial map table, writes them into sThread1's buffer zone, and then decrements signal by 1, back to 0;
Step 6: sThread1 starts the trajectory calculation; after the calculation finishes, it calls MPI_Send() to return the result to the Master node;
Step 7: Thread2 of the Master node receives the result returned by sThread1 of the Slave node. Thread2 calls MPI_Recv() with the MPI_ANY_SOURCE parameter, so that even if a lower-numbered computing node returns no result, the reception of the results of the other computing nodes is not blocked;
Step 8: one processing round of the ph5 target data is now complete.
Example two: target data ph6 arrive (i.e. data of a new target)
Step 1: Thread0 of the Master node receives target data through the network interface and puts them into the allocated buffer zone buffer; this time the data of target ph6 are received;
Step 2: Thread1 reads the ph6 data from the buffer and queries the map table; the map table does not yet contain data of target ph6, so a new row for ph6 is added to the map table with the data as its value, and the arrival time of the target data is recorded at the same time;
Step 3: at this point the packet length len in the value of ph6 is 1, less than the set sending threshold N, so no data are sent and the first round of processing of ph6 ends. The ph6 data then keep arriving; when len equals the sending threshold N, the task-scheduling policy is used to prepare to send the data;
Step 4: use the task-amount expression to assess the task amount of each computing node, select the minimum P_min, and assign the ph6 task to that node. Then update the Taskassign table by inserting at its bottom line a record that ph6 is allocated to P_min, update the Proctasknum table (Fig. 7) by storing the task amount of the P_min node, and call MPI_Send() to send the N packets to that computing node (computing node 1 in this example);
Step 5: the subsequent steps are the same as steps 4, 5, 6, 7 and 8 of example one.
Example three: handling target disappearance, taking ph4 as an example
Step 1: whenever Thread1 of the Master node inserts arriving ph4 data into the map table, it computes the difference between the current time and the timestamp recorded for this target;
Step 2: if the difference is greater than the threshold T, the previous ph4 target is considered to have disappeared; its earlier records are deleted from the map table, and the Taskassign and Proctasknum tables are updated at the same time;
Step 3: target ph4 is then regarded as a new target and processed in the way a new-target arrival is processed, as in example two;
Step 4: sThread0 of the Slave node maintains the partial map table with the same method, except that its threshold is at least N × T + a, where a is the upper bound on the network transmission delay.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be realized by software plus a necessary general hardware platform. Based on this understanding, the technical scheme of the present invention — in essence, the part contributing beyond the prior art — can be embodied in the form of a software product. This computer software product can be stored on a storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and comprises instructions for making a computer device (a personal computer, a server, a network device, etc.) execute the methods described in the embodiments, or in parts of the embodiments, of the present invention.

Claims (7)

1. A parallel processing method for aerial multi-target track prediction, characterized by comprising the following steps:
in a cluster environment, build a system comprising a host node responsible for task scheduling and logic-transaction management, and computing nodes that predict multiple target tracks, the computing nodes being mutually independent, wherein the system adopts two-level parallelism based on the MPI and Pthreads standards to realize communication between the host node and the computing nodes and parallel processing of the logic transactions;
the host node classifies target data by batch number, stores them in a map table, and sends the classified target data to the computing nodes according to the task-scheduling policy, a Taskassign table recording the number of the computing node to which each batch of target data is assigned and a Proctasknum table recording the task amount assigned to each computing node;
inside each computing node, a target-data receiving thread and a trajectory-calculation thread execute in parallel, a partial map table records the batch-classified target data received by the receiving thread, and the trajectory-calculation thread sends the trajectory-prediction results to the host node once its calculation completes.
2. The parallel processing method for aerial multi-target track prediction according to claim 1, characterized in that the task amount P_i assigned to computing node i is obtained from an expression over j, where j is the number of the current target task on computing node i, m_i is the number of current target tasks on computing node i, and a probability function gives the probability that the target data disappears; the arrival time, initial position, and current state of the j-th target task on the i-th computing node enter this probability function, and Δt denotes the time interval within which the new task does not disappear.
3. The parallel processing method for aerial multi-target track prediction according to claim 1 or 2, characterized in that the map table is constructed as follows:
a vector recording the target data is generated according to the batch number of the initial target data, and the vectors recording each batch of target data together form the map table;
when new target data arrives, new data whose batch number is already recorded is appended to the corresponding vector, while for new data whose batch number is not recorded a vector corresponding to that batch number is created;
after a target is detected to have left the monitoring range, the records of that target's data are deleted from the map table.
4. The parallel processing method for aerial multi-target track prediction according to claim 3, characterized in that the method of sending the classified target data to the computing nodes according to the task scheduling strategy is: after new target data arrives, when the length of the new target data exceeds the target data packet sending threshold, the master node sends the new target data to the computing node with the smallest task amount.
5. The parallel processing method for aerial multi-target track prediction according to claim 3, characterized in that whether a target has left the monitoring range is detected with the time threshold N × T + a, where N is the target data packet sending threshold, T is the time interval between a target's first appearance and its disappearance, and a is the upper bound on network transmission delay.
6. The parallel processing method for aerial multi-target track prediction according to claim 1, characterized in that the target data receiving thread and the trajectory calculation thread interact through a semaphore mechanism: when the computing node receives data, the semaphore is incremented by 1; after the trajectory calculation thread copies data from the local map table, the semaphore is decremented by 1.
7. The parallel processing method for aerial multi-target track prediction according to claim 3, characterized in that the vector recording each batch of target data comprises: the target position coordinates, meridional velocity, zonal velocity, and observation time.
CN201510357525.5A 2015-06-25 2015-06-25 The method for parallel processing of aerial multi-target track prediction Active CN105005505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510357525.5A CN105005505B (en) 2015-06-25 2015-06-25 The method for parallel processing of aerial multi-target track prediction


Publications (2)

Publication Number Publication Date
CN105005505A true CN105005505A (en) 2015-10-28
CN105005505B CN105005505B (en) 2018-06-26

Family

ID=54378185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510357525.5A Active CN105005505B (en) 2015-06-25 2015-06-25 The method for parallel processing of aerial multi-target track prediction

Country Status (1)

Country Link
CN (1) CN105005505B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393362B1 (en) * 2000-03-07 2002-05-21 Modular Mining Systems, Inc. Dynamic safety envelope for autonomous-vehicle collision avoidance system
CN102110079A (en) * 2011-03-07 2011-06-29 杭州电子科技大学 Tuning calculation method of distributed conjugate gradient method based on MPI
CN103645952A (en) * 2013-08-08 2014-03-19 中国人民解放军国防科学技术大学 Non-accurate task parallel processing method based on MapReduce
CN103716867A (en) * 2013-10-25 2014-04-09 华南理工大学 Wireless sensor network multiple target real-time tracking system based on event drive


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426256A (en) * 2015-11-03 2016-03-23 中国电子科技集团公司第二十八研究所 Multi-process collaboration based large-batch real-time target concurrent processing method
CN105426256B (en) * 2015-11-03 2019-05-03 中电莱斯信息***有限公司 A kind of high-volume real-time target method for parallel processing based on multi-process collaboration
CN109144941A (en) * 2018-10-12 2019-01-04 北京环境特性研究所 Ballistic data processing method, device, computer equipment and readable storage medium storing program for executing
CN109597680A (en) * 2018-10-22 2019-04-09 阿里巴巴集团控股有限公司 Task queue's response parameter evaluation method and device
CN110398985A (en) * 2019-08-14 2019-11-01 北京信成未来科技有限公司 A kind of distributed self-adaption Telemetry System of UAV and method
CN110398985B (en) * 2019-08-14 2022-11-11 北京信成未来科技有限公司 Distributed self-adaptive unmanned aerial vehicle measurement and control system and method
CN115208954A (en) * 2022-06-07 2022-10-18 北京一流科技有限公司 Parallel strategy presetting system and method for distributed data processing
CN115208954B (en) * 2022-06-07 2024-04-26 北京一流科技有限公司 Parallel policy preset system for distributed data processing system and method thereof

Also Published As

Publication number Publication date
CN105005505B (en) 2018-06-26

Similar Documents

Publication Publication Date Title
CN105005505A (en) Parallel processing method for aerial multi-target-trace prediction
Zhao et al. Predictive task assignment in spatial crowdsourcing: a data-driven approach
Benlic et al. Breakout local search for the multi-objective gate allocation problem
CN105488892B (en) A kind of method and server for robot queuing management
CN103732471A (en) Resource management plan creation device, method thereof, and program
US9396250B2 (en) Flow line detection process data distribution system, flow line detection process data distribution method, and program
CN107562066B (en) Multi-target heuristic sequencing task planning method for spacecraft
CN106325284B (en) The robot motion planning method of identification multiple target task is searched for towards man-machine collaboration
CN103218380B (en) Server unit and the method ensureing data sequence
CN111191843B (en) Airport delay prediction method based on time sequence network propagation dynamics equation
CN107622699A (en) All the period of time spatial domain conflict probe and solution desorption method based on sequential
CN105144207A (en) Method and device for optimising a resource allocation plan
CN105469599A (en) Vehicle trajectory tracking and vehicle behavior prediction method
CN114827284A (en) Service function chain arrangement method and device in industrial Internet of things and federal learning system
Huang et al. Collective reinforcement learning based resource allocation for digital twin service in 6G networks
CN109062677A (en) Unmanned aerial vehicle system calculation migration method
Ma et al. Dynamic system optimal routing in multimodal transit network
Rida Modeling and optimization of decision-making process during loading and unloading operations at container port
CN109614263A (en) A kind of disaster tolerance data processing method, apparatus and system
Delgado et al. Integrated real-time transit signal priority control for high-frequency segregated transit services
CN110370270A (en) A kind of method and apparatus for preventing robot from colliding
CN111640300A (en) Vehicle detection processing method and device
CN111190711A (en) Multi-robot task allocation method combining BDD with heuristic A-search
Gu et al. Probabilistic mission planning and analysis for multi-agent systems
Oka et al. Spatial feature-based prioritization for transmission of point cloud data in 3D-image sensor networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant