CN105323547A - Video condensing system - Google Patents


Info

Publication number
CN105323547A
Authority
CN
China
Prior art keywords
video
memory
address
geographical position
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510116761.8A
Other languages
Chinese (zh)
Other versions
CN105323547B (en)
Inventor
胡晓芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Original Assignee
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Priority to CN201510116761.8A
Publication of CN105323547A
Application granted
Publication of CN105323547B
Status: Expired - Fee Related

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a video condensing system comprising a plurality of video monitoring devices, a client host, an intelligent combat platform server, a video storage server, a video abstract retrieval server and a video condensing module. The video condensing module selects, according to a bitmap, the videos containing a target video object from a plurality of videos; a preset number of video frames containing the target video object are acquired from each selected video, generating a plurality of video frame sets; and the video frame sets are spliced according to the movement direction shown by the bitmap to form a condensed video.

Description

Video condensing system
Technical field
The present invention relates to the communications field, and in particular to a video condensing system.
Background art
An intelligent combat platform is based on intelligent video-image analysis technology and intelligent video-image processing algorithms, closely fits the video investigation business of public security departments, and provides a "systematic, networked, intelligent" application platform system for case video analysis. An existing intelligent combat platform can retrieve video from various locations when the public security department handles a case, providing timely reference for officers. However, the cameras are widely distributed and numerous, so officers often need a great deal of time to search for a target object, which reduces the efficiency of case handling.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video condensing system that improves video processing efficiency and thereby the user's case-handling efficiency.
To solve the above technical problem, the present invention adopts the following technical scheme:
The invention provides a video condensing system, characterized in that the video condensing system comprises:
a plurality of video monitoring devices for capturing a plurality of videos at a plurality of geographical positions;
a client host communicating with the plurality of video monitoring devices through a network, the client host receiving the plurality of videos from the plurality of video monitoring devices respectively and selecting a target video object;
an intelligent combat platform server connected with the client host, the client host uploading the plurality of videos to the intelligent combat platform server;
a video storage server connected with the intelligent combat platform server, the video storage server copying the plurality of videos from the intelligent combat platform server and storing them at a plurality of memory addresses according to their geographical positions, respectively; and
a video abstract retrieval server connected with the video storage server and the client host, for judging the movement direction of the target video object according to the plurality of memory addresses and producing a bitmap marked with the movement direction, wherein the video abstract retrieval server further comprises:
a video condensing module for selecting, according to the bitmap, the videos containing the target video object from the plurality of videos; acquiring from each selected video a predetermined number of video frames containing the target video object, to produce a plurality of video frame groups; and splicing the plurality of video frame groups according to the movement direction shown by the bitmap, to form a condensed video.
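As a reading aid only, not the patented implementation, the condensing module's select-and-splice flow can be sketched in Python as follows; the data shapes and the helper names `contains_target` and `collect_frames` are illustrative assumptions:

```python
def condense(bitmap_order, videos, contains_target, collect_frames):
    """Sketch of the condensing module.

    bitmap_order: video ids in the movement order the bitmap marks.
    videos: mapping from video id to its frames.
    contains_target / collect_frames: assumed helpers standing in for
    target detection and for gathering the predetermined frame group.
    """
    frame_groups = []
    for vid in bitmap_order:                 # follow the bitmap's direction
        if contains_target(videos[vid]):     # keep only videos showing the target
            frame_groups.append(collect_frames(videos[vid]))
    # "splicing" is modeled as concatenating the frame groups in order
    return [frame for group in frame_groups for frame in group]
```

Splicing the frame groups in bitmap order yields a short clip that follows the target from camera to camera.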
In one embodiment, the video condensing module comprises:
an acquisition module for gathering N video frames onward from the first frame in which the target video object appears in each video, and M video frames backward from the last frame in which the target video object appears, where M and N are positive integers.
In one embodiment, the values of M and N are proportional to the duration for which the target video object appears in each video.
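A minimal sketch of this proportional rule, assuming frames are addressed by integer index; the proportionality constant `k` is not specified by the patent and is an assumed parameter here:

```python
def frame_window(first_idx, last_idx, k=0.5):
    """Return the frame indices gathered around the target's appearance.

    N frames are taken onward from the first frame in which the target
    appears and M frames backward from the last such frame, with both
    M and N proportional (factor k, assumed) to the dwell duration.
    """
    dwell = last_idx - first_idx + 1          # frames the target is present
    n = max(1, round(k * dwell))              # forward window size
    m = max(1, round(k * dwell))              # backward window size
    after_first = list(range(first_idx, first_idx + n))
    before_last = list(range(last_idx - m + 1, last_idx + 1))
    return after_first, before_last
```

Tying the window sizes to the dwell duration avoids collecting many redundant frames for brief appearances.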
In one embodiment, the video storage server comprises:
a memory comprising a plurality of memory cells; and
a processing module connected with the memory, the processing module reading the geographical location information of the plurality of videos, extracting the longitude and latitude of each geographical position, sorting the longitudes and the latitudes respectively, and determining the plurality of memory address numbers according to the sorting results.
In one embodiment, each memory address number consists of two digits, and the processing module further comprises:
a numbering module for determining the first digit of each memory address number according to the longitudes of the plurality of geographical positions, wherein if the longitude of the Xth geographical position is less than the longitude of the Yth geographical position, the first digit of the Xth memory address is less than the first digit of the Yth memory address; the numbering module also determining the second digit of each memory address number according to the latitudes of the plurality of geographical positions, wherein if the latitude of the Nth geographical position is less than the latitude of the Mth geographical position, the second digit of the Nth memory address is less than the second digit of the Mth memory address, where X, Y, M and N are positive integers less than the total number of the plurality of videos.
In one embodiment, the video storage server comprises:
a memory comprising a plurality of memory cells; and
a processing module connected with the memory, the processing module selecting a reference position, calculating a plurality of distances between the plurality of geographical positions and the reference position, comparing the distances, and determining the plurality of memory addresses according to the distances.
In one embodiment, when the Nth geographical position is due north of the reference position, the address determination module stores the Nth video at the memory cell (D1+BN)D2D3D4 of the memory (i.e., the Nth memory address is (D1+BN)D2D3D4), where N is not greater than the total number of the plurality of videos and BN is the number assigned to the Nth video by the numbering module; when the Nth geographical position is due south of the reference position, the Nth video is stored at the memory cell D1(D2+BN)D3D4 (i.e., the Nth memory address is D1(D2+BN)D3D4); when due east, at the memory cell D1D2(D3+BN)D4 (i.e., the Nth memory address is D1D2(D3+BN)D4); and when due west, at the memory cell D1D2D3(D4+BN) (i.e., the Nth memory address is D1D2D3(D4+BN)).
In one embodiment, when the Nth geographical position is northeast of the reference position, the address determination module stores the Nth video at the memory cell (D1+BN)D2(D3+BN)D4; when southwest, at D1(D2+BN)D3(D4+BN); when southeast, at D1(D2+BN)(D3+BN)D4; and when northwest, at (D1+BN)D2D3(D4+BN).
In one embodiment, the video abstract retrieval server comprises:
a retrieval module for retrieving, in each of the plurality of videos, the first frame in which the target video object appears;
a time comparison module for comparing the times at which these first frames appear in the plurality of videos; and
a bitmap labeling module for producing a bitmap representing the movement direction of the target video object, and marking the movement direction of the target video object on the bitmap according to the time comparison result.
Compared with the prior art, recording the relative positions of the video sources in the memory addresses and producing a direction bitmap from the address information displays the basic movement trajectory of the target video object effectively and intuitively. When the user only needs basic direction information, this method saves a great deal of time and improves work efficiency; when the user needs to study the target object more deeply, its result provides a foundation for further research, and in particular for subsequent video condensation, greatly reducing the amount of condensation computation and improving work and case-handling efficiency. In addition, making M and N proportional to the duration for which the target video object appears in each video reduces the collection of redundant frames, saving condensation time and improving condensation efficiency.
Brief description of the drawings
Figure 1 shows an intelligent combat system according to an embodiment of the invention.
Figure 2 shows a video storage server according to an embodiment of the invention.
Figure 3 shows a processing module according to an embodiment of the invention.
Figure 4 is a distribution diagram of video monitoring devices according to an embodiment of the invention.
Figure 5 shows a video abstract retrieval server according to an embodiment of the invention.
Figure 6 is a schematic diagram of a bitmap according to an embodiment of the invention.
Figure 7 is a flowchart of a method of tracking a target video object according to an embodiment of the invention.
Figure 8 is a flowchart of another method of tracking a target video object according to an embodiment of the invention.
Figure 9 is a flowchart of another method of tracking a target video object according to an embodiment of the invention.
Figure 10 is a flowchart of another method of tracking a target video object according to an embodiment of the invention.
Figure 11 is a flowchart of another method of tracking a target video object according to an embodiment of the invention.
Figure 12 shows a video storage server according to another embodiment of the invention.
Figure 13 is a schematic diagram of a bitmap according to an embodiment of the invention.
Figure 14 is a schematic diagram of an abstract direction bitmap according to an embodiment of the invention.
Figure 15 shows an intelligent combat monitoring method according to an embodiment of the invention.
Figure 16 shows another intelligent combat monitoring method according to an embodiment of the invention.
Figure 17 shows another intelligent combat monitoring method according to an embodiment of the invention.
Figure 18 shows another structure of a video abstract retrieval server according to an embodiment of the invention.
Figure 19 is a flowchart of a video condensing method according to an embodiment of the invention.
Figure 20 is a flowchart of another video condensing method according to an embodiment of the invention.
Detailed description of the embodiments
Detailed descriptions of embodiments of the invention are given below. Although the invention is set forth and illustrated in conjunction with certain embodiments, it should be noted that the invention is not limited to these implementations; on the contrary, modifications of the invention and equivalent replacements shall all be encompassed within the claims of the invention.
In addition, in order to better describe the invention, numerous specific details are given in the embodiments below. Those skilled in the art will understand that the invention can be implemented without these details. In other instances, well-known methods, processes, elements and circuits are not described in detail, so as to highlight the gist of the invention.
Figure 1 shows an intelligent combat system 100 according to an embodiment of the invention. The intelligent combat system 100 comprises a plurality of video monitoring devices 104, 105, 106 and 107. A video monitoring device may be a camera, a Skynet monitor, or any other monitoring apparatus capable of recording video. Although only four video monitoring devices are shown in the embodiment of Fig. 1, those skilled in the art will appreciate that other numbers of video monitoring devices fall within the scope of the invention. The video monitoring devices 104 to 107 are located at a plurality of geographical positions, respectively, and capture videos at those positions.
The intelligent combat system 100 also comprises a client host 102, an intelligent combat platform server 108, a video storage server 110 and a video abstract retrieval server 112. The client host 102 communicates with the video monitoring devices 104-107 through a network and receives the videos from them respectively. In addition, the client host 102 selects the target video object.
The intelligent combat platform server 108 is connected with the client host 102, and the client host 102 uploads the videos to the intelligent combat platform server 108. The video storage server 110 is connected with the intelligent combat platform server 108; it copies the videos from the intelligent combat platform server 108 and stores them at a plurality of memory addresses according to their geographical positions, respectively.
The video abstract retrieval server 112 is connected with the video storage server 110 and the client host 102. It judges the movement direction of the target video object according to the memory addresses, and marks the movement trajectory of the target video object on a map according to the times at which the object appears in the videos and according to the movement direction. The client host 102 displays on a display screen an abstract movement direction figure of the target video object according to the movement direction, and displays on the screen the map marked with the trajectory according to the trajectory.
Figure 2 shows a video storage server 110 according to an embodiment of the invention. In the embodiment of Fig. 2, the video storage server 110 comprises a memory 202 and a processing module 204. The memory 202 comprises a plurality of memory cells. The processing module 204 is connected with the memory 202; it selects a reference position, calculates the distances between the geographical positions and the reference position, compares the distances, and determines the memory addresses according to the distances.
Figure 3 shows the processing module 204 according to an embodiment of the invention. The processing module 204 comprises a numbering module 302, an orientation judgment module 304 and an address determination module 306. The numbering module 302 assigns a number to each video according to its distance, a video with a smaller distance value receiving a smaller number than a video with a larger distance value. The orientation judgment module 304 judges the bearing of each geographical position relative to the reference position.
Figure 4 is a distribution diagram of the video monitoring devices according to an embodiment of the invention. As shown in Fig. 4, if the reference position 402 is selected, the distances of the video monitoring devices from the reference position 402, in ascending order, are those of devices 104, 106, 107 and 105. The devices 104, 106, 107 and 105 can therefore be assigned the numbers 1, 2, 3 and 4, respectively.
Returning to Fig. 3, the address determination module 306 determines the memory addresses according to the numbers and the relative bearings. Specifically, for the Nth video monitoring device, where N is not greater than the total number of videos and BN is the number assigned to the Nth video by the numbering module: when the Nth geographical position is due north of the reference position 402, the address determination module 306 stores the Nth video at the memory cell (D1+BN)D2D3D4 of the memory 202 (i.e., the Nth memory address is (D1+BN)D2D3D4); when due south, at D1(D2+BN)D3D4; when due east, at D1D2(D3+BN)D4; and when due west, at D1D2D3(D4+BN).
In addition, when the Nth geographical position is northeast of the reference position 402, the address determination module 306 stores the Nth video at the memory cell (D1+BN)D2(D3+BN)D4; when southwest, at D1(D2+BN)D3(D4+BN); when southeast, at D1(D2+BN)(D3+BN)D4; and when northwest, at (D1+BN)D2D3(D4+BN).
Therefore, as shown in Fig. 4, the memory addresses for the videos of devices 104, 105, 106 and 107 are (D1+1)D2D3D4, (D1+4)D2D3(D4+4), D1(D2+2)(D3+2)D4 and D1D2(D3+3)D4, respectively.
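The eight-bearing addressing rule worked above for Fig. 4 can be sketched as follows; representing an address as a tuple of the four digit fields D1-D4, and the zero base values, are assumptions for illustration:

```python
# Which fields of (D1, D2, D3, D4) receive the offset BN for each bearing
# relative to the reference position: north -> D1, south -> D2, east -> D3,
# west -> D4; a diagonal bearing offsets both of its component fields.
OFFSET_FIELDS = {
    "N": (0,), "S": (1,), "E": (2,), "W": (3,),
    "NE": (0, 2), "SW": (1, 3), "SE": (1, 2), "NW": (0, 3),
}

def memory_address(base, bearing, bn):
    """Return the Nth video's address given its bearing and number BN."""
    fields = list(base)
    for i in OFFSET_FIELDS[bearing]:
        fields[i] += bn
    return tuple(fields)
```

With a zero base, device 104 (due north, number 1) maps to (1, 0, 0, 0), i.e. (D1+1)D2D3D4 as in the example above.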
Figure 5 shows a video abstract retrieval server 112 according to an embodiment of the invention. The video abstract retrieval server 112 comprises a retrieval module 502, a time comparison module 504 and a bitmap labeling module 506. The retrieval module 502 retrieves, in each of the videos, the first frame in which the target video object appears. The time comparison module 504 compares the times at which these first frames appear. The bitmap labeling module 506 produces a bitmap representing the movement direction of the target video object, and marks that direction on the bitmap according to the time comparison result.
Figure 6 is a schematic diagram 600 of a bitmap according to an embodiment of the invention. In the embodiment of Fig. 6, the retrieval module 502 finds that the target video object appears in the videos of monitoring devices 104, 105 and 106, and that the first frames appear, in chronological order, in the videos of devices 105, 104 and 106. Therefore, from the memory address of each video and the retrieval result, the bitmap of Fig. 6 can be drawn, showing the movement direction of the target video object.
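The ordering underlying the bitmap of Fig. 6 — devices ranked by when the target first appears — can be sketched as follows; the dictionary shapes and the use of None for "target never seen" are illustrative assumptions:

```python
def movement_order(first_frame_times, addresses):
    """Return (device, address) pairs along the target's path.

    first_frame_times: device -> time of the target's first appearance,
    or None if the target never appears in that device's video.
    addresses: device -> memory address recording its relative position.
    """
    seen = [(t, dev) for dev, t in first_frame_times.items() if t is not None]
    seen.sort()                                   # chronological order
    return [(dev, addresses[dev]) for _, dev in seen]
```

For the Fig. 6 example the chronological order 105, 104, 106 falls out directly, and the paired memory addresses give the bitmap its direction.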
The advantage is that recording the relative positions of the video sources in the memory addresses and producing a direction bitmap from the address information displays the basic movement trajectory of the target video object effectively and intuitively. When the user only needs basic direction information, this saves a great deal of time and improves work efficiency; when the user needs to study the target object more deeply, it provides a foundation for further research, and in particular for subsequent video condensation, greatly reducing the amount of condensation computation and improving work and case-handling efficiency (described further with Figures 18-20).
Figure 7 is a flowchart 700 of a method of tracking a target video object according to an embodiment of the invention. In step 702, a plurality of videos at a plurality of geographical positions are captured. In step 704, a target video object is selected. In step 706, the videos are stored at a plurality of memory addresses according to their geographical positions, respectively. In step 708, the movement direction of the target video object is judged according to the memory addresses. In step 710, the movement trajectory of the target video object is marked on a map according to the times at which the object appears in the videos and according to the movement direction. In step 712, an abstract movement direction figure of the target video object is displayed according to the movement direction. In step 714, the map marked with the trajectory is displayed according to the trajectory.
Figure 8 is a flowchart 706 further illustrating step 706 of Fig. 7. In step 802, a reference position is selected. In step 804, the distances between the geographical positions and the reference position are calculated. In step 806, the distances are compared. In step 808, the memory addresses are determined according to the distances.
Figure 9 is a flowchart 808 further illustrating step 808 of Fig. 8. In step 902, each video is assigned a number according to its distance, a video with a smaller distance value receiving a smaller number. In step 904, the bearing of each geographical position relative to the reference position is judged. In step 906, the memory addresses are determined according to the numbers and the relative bearings.
Figure 10 is a flowchart 906 further illustrating step 906 of Fig. 9.
In step 1002, if the Nth geographical position is due north of the reference position, the flow enters step 1003, in which the Nth video is stored at the memory cell (D1+BN)D2D3D4 of the memory (i.e., the Nth memory address is (D1+BN)D2D3D4), where N is not greater than the total number of videos and BN is the number assigned to the Nth video by the numbering module; otherwise the flow enters step 1004.
In step 1004, if the Nth geographical position is due south of the reference position, the flow enters step 1005, in which the Nth video is stored at the memory cell D1(D2+BN)D3D4 (i.e., the Nth memory address is D1(D2+BN)D3D4); otherwise the flow enters step 1006.
In step 1006, if the Nth geographical position is due east of the reference position, the flow enters step 1007, in which the Nth video is stored at the memory cell D1D2(D3+BN)D4 (i.e., the Nth memory address is D1D2(D3+BN)D4); otherwise the flow enters step 1008.
In step 1008, if the Nth geographical position is due west of the reference position, the flow enters step 1009, in which the Nth video is stored at the memory cell D1D2D3(D4+BN) (i.e., the Nth memory address is D1D2D3(D4+BN)); otherwise the flow enters step 1010.
In step 1010, if the Nth geographical position is northeast of the reference position, the flow enters step 1011, in which the Nth video is stored at the memory cell (D1+BN)D2(D3+BN)D4 (i.e., the Nth memory address is (D1+BN)D2(D3+BN)D4); otherwise the flow enters step 1012.
In step 1012, if the Nth geographical position is southwest of the reference position, the flow enters step 1013, in which the Nth video is stored at the memory cell D1(D2+BN)D3(D4+BN) (i.e., the Nth memory address is D1(D2+BN)D3(D4+BN)); otherwise the flow enters step 1014.
In step 1014, if the Nth geographical position is southeast of the reference position, the flow enters step 1015, in which the Nth video is stored at the memory cell D1(D2+BN)(D3+BN)D4 (i.e., the Nth memory address is D1(D2+BN)(D3+BN)D4); otherwise the flow enters step 1016.
In step 1016, the Nth geographical position can be judged to be northwest of the reference position, and the flow enters step 1018, in which the Nth video is stored at the memory cell (D1+BN)D2D3(D4+BN) (i.e., the Nth memory address is (D1+BN)D2D3(D4+BN)).
Figure 11 is a flowchart 710 further illustrating step 710 of Fig. 7.
In step 1102, the first frame in which the target video object appears is retrieved in each of the plurality of videos. In step 1104, the times at which these first frames appear in the plurality of videos are compared. In step 1106, a bitmap representing the movement direction of the target video object is produced, and the movement direction is marked on the bitmap according to the time comparison result.
The advantages of this method are as described above for Fig. 6: the direction bitmap derived from the memory addresses displays the target video object's basic trajectory effectively and intuitively, saves a great deal of time when the user only needs basic direction information, and provides a foundation for deeper research and in particular for subsequent video condensation, greatly reducing the amount of condensation computation and improving work and case-handling efficiency (described further with Figures 18-20).
Figure 12 shows that video storage server 110 ' according to another embodiment of the present invention.The part that Figure 12 label is identical with Fig. 2 has similar function.Figure 12 is the another kind of example structure of the intelligence actual combat system 100 of Fig. 1.
In the fig. 12 embodiment, video storage server comprises memory 202 and processing module 1204.Memory 202 comprises multiple memory cell.Processing module 1204 is connected with memory 202.Processing module 1204 reads the geographical location information of described multiple video, extracts longitude and the latitude in described multiple geographical position, sorts respectively to described multiple longitude and described multiple latitude, judges described multiple memory address numbering according to described ranking results.
More particularly, processing module 1204 comprises numbering module 1206.Numbering module 1206 determines first of the numbering of described memory address according to the longitude in described multiple geographical position, wherein, if the longitude in X geographical position is less than the longitude in Y geographical position, then the first bit value of X memory address is less than the first bit value of Y memory address; Described numbering module also determines the second of the numbering of described memory address according to the latitude in described multiple geographical position, wherein, if the latitude value in N geographical position is less than the latitude value in M geographical position, then the second numerical value of N memory address is less than the second numerical value of M memory address, wherein, X, Y, M and N are the positive integer being less than described multiple video sum.
It can thus be seen that, in the embodiment of Figure 12, each number has only two digits. Combining the embodiment of Figure 4, the longitude order from small to large is: video monitoring devices 105, 104, 106 and 107; the latitude order from small to large is: video monitoring devices 106, 107, 104 and 105. The numbers of video monitoring devices 104, 105, 106 and 107 are therefore 23, 14, 31 and 42 respectively. Thus, as shown in the bitmap schematic diagram of Figure 13, the bitmap 1300 can be obtained from the address information. Using the foregoing embodiment, the summary direction bitmap 1400 of Figure 14 can be drawn.
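The two-digit numbering rule of numbering module 1206 can be sketched as follows. The coordinate values below are hypothetical, chosen only to reproduce the longitude and latitude orderings described for the Figure 4 embodiment, and `number_addresses` is an illustrative name, not part of the patent.

```python
def number_addresses(positions):
    """positions: {device_id: (longitude, latitude)}
    -> {device_id: two-digit memory address number}."""
    by_lon = sorted(positions, key=lambda d: positions[d][0])  # ascending longitude
    by_lat = sorted(positions, key=lambda d: positions[d][1])  # ascending latitude
    numbers = {}
    for device in positions:
        first = by_lon.index(device) + 1   # first digit: longitude rank
        second = by_lat.index(device) + 1  # second digit: latitude rank
        numbers[device] = 10 * first + second
    return numbers

# Hypothetical coordinates giving longitude order 105 < 104 < 106 < 107
# and latitude order 106 < 107 < 104 < 105, as in the Figure 4 example.
positions = {
    104: (104.07, 30.66),
    105: (104.05, 30.67),
    106: (104.09, 30.64),
    107: (104.10, 30.65),
}
print(number_addresses(positions))  # {104: 23, 105: 14, 106: 31, 107: 42}
```

This reproduces the numbers 23, 14, 31 and 42 stated above for devices 104 to 107.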
An advantage is that the memory addresses of the memory record the relative positions of the video sources, and the direction bitmap is produced from the stored address information, so the basic motion trajectory of the target video object can be displayed effectively and intuitively. When the user needs only basic direction information, this approach saves a great deal of time and improves the user's work efficiency; when the user needs to study the target object further, its result provides a basis for deeper research. In particular, it provides a basis for subsequent video condensing, which greatly reduces the computational load of the condensing and improves work and case-handling efficiency (described further with reference to Figures 18 to 20). In addition, using latitude and longitude information to determine the memory addresses simplifies the computation (for example, no relative directions need to be judged) and saves storage space.
Figure 15 shows an intelligent combat monitoring method 1500 according to an embodiment of the present invention. In step 1502, multiple videos are collected at multiple geographical positions. In step 1504, a target video object is selected. In step 1506, the geographical location information of the multiple videos is read. In step 1508, the longitude and latitude of each geographical position are extracted. In step 1510, the longitudes and the latitudes are sorted respectively, and multiple memory address numbers are determined according to the sorting results. In step 1512, the multiple videos are stored, according to their geographical positions, in the multiple memory addresses corresponding to the multiple memory address numbers. In step 1514, the movement direction of the target video object is judged according to the multiple memory addresses. In step 1516, the motion trajectory of the target video object is marked on a map according to the time points at which the target video object appears in the multiple videos and the movement direction. In step 1518, a summary movement direction diagram of the target video object is displayed according to the movement direction, and a map marked with the motion trajectory of the target video object is displayed according to the trajectory.
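Steps 1514 and 1516 — ordering the cameras by when the target appears and marking trajectory points on the map — can be sketched as below. The camera identifiers, appearance times, and coordinates are hypothetical, and `trajectory` is an illustrative name, not part of the patent.

```python
def trajectory(appearances, camera_positions):
    """appearances: {camera_id: first-appearance time in seconds};
    camera_positions: {camera_id: (longitude, latitude)}
    -> chronologically ordered (time, longitude, latitude) trajectory points."""
    order = sorted(appearances.items(), key=lambda kv: kv[1])  # earliest first
    return [(t, *camera_positions[cam]) for cam, t in order]

# Hypothetical data: target seen first at camera 105, then 104, then 106.
apps = {104: 95.0, 105: 12.5, 106: 140.0}
pos = {104: (104.07, 30.66), 105: (104.05, 30.67), 106: (104.09, 30.64)}
print(trajectory(apps, pos))
# [(12.5, 104.05, 30.67), (95.0, 104.07, 30.66), (140.0, 104.09, 30.64)]
```

The returned points, in time order, are what step 1516 would mark on the map as the target's trajectory.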
Figure 16 shows another intelligent combat monitoring method 1512 according to an embodiment of the present invention. Figure 16 further illustrates step 1512 in Figure 15. In step 1602, the first digit of each memory address number is determined according to the longitudes of the multiple geographical positions: if the longitude of the X-th geographical position is smaller than the longitude of the Y-th geographical position, the first digit of the X-th memory address is smaller than the first digit of the Y-th memory address. In step 1604, the second digit of each memory address number is determined according to the latitudes of the multiple geographical positions: if the latitude of the N-th geographical position is smaller than the latitude of the M-th geographical position, the second digit of the N-th memory address is smaller than the second digit of the M-th memory address, where X, Y, M and N are positive integers smaller than the total number of the multiple videos.
Figure 17 shows another intelligent combat monitoring method 1514 according to an embodiment of the present invention. Figure 17 further illustrates step 1514 in Figure 15. In step 1702, the first frame in which the target video object appears is retrieved in each of the multiple videos. In step 1704, the chronological order of these first frames across the multiple videos is compared. In step 1706, a bitmap representing the direction of motion of the target video object is produced, and the direction of motion of the target video object is marked on the bitmap according to the time comparison result.
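Steps 1702 to 1706 reduce to ordering the cameras by the target's first-appearance time. A minimal sketch, with hypothetical address numbers and timestamps (`movement_direction` is an illustrative name, not part of the patent):

```python
def movement_direction(first_frame_times):
    """first_frame_times: {address_number: first-appearance timestamp in seconds}
    -> list of address numbers in visiting order (the direction to mark on the bitmap)."""
    return sorted(first_frame_times, key=first_frame_times.get)

# Target first appears at camera 14, then 23, then 31; camera 42 never sees it.
times = {23: 95.0, 14: 12.5, 31: 140.0}
print(movement_direction(times))  # [14, 23, 31]
```

The resulting sequence of address numbers is the direction of motion that step 1706 marks on the bitmap.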
An advantage is that the memory addresses of the memory record the relative positions of the video sources, and the direction bitmap is produced from the stored address information, so the basic motion trajectory of the target video object can be displayed effectively and intuitively. When the user needs only basic direction information, this approach saves a great deal of time and improves the user's work efficiency; when the user needs to study the target object further, its result provides a basis for deeper research. In particular, it provides a basis for subsequent video condensing, which greatly reduces the computational load of the condensing and improves work and case-handling efficiency (described further with reference to Figures 18 to 20). In addition, using latitude and longitude information to determine the memory addresses simplifies the computation (for example, no relative directions need to be judged) and saves storage space.
Figure 18 shows another structure 112' of the video abstract retrieval server 112 according to an embodiment of the present invention. Elements in Figure 18 bearing the same reference numerals as in Figure 4 have similar functions. In the embodiment of Figure 18, the video abstract retrieval server 112' comprises a video condensing module 1802. For this reason, the intelligent combat system comprising the video abstract retrieval server 112' constitutes a video condensing system. In this video condensing system, all parts and structures other than the video abstract retrieval server 112' may adopt the corresponding structures of Figures 1 to 17.
The video condensing module 1802 selects, according to the bitmap, the videos containing the target video object from among the multiple videos; collects a predetermined number of video frames containing the target video object from each of the selected videos to produce multiple video frame sets; and splices the multiple video frame sets according to the movement direction shown by the bitmap to form a condensed video. As in the embodiment of Figure 6 or Figure 14, the video condensing module 1802 can directly exclude video monitoring device 107, thereby saving condensing time, improving condensing efficiency, and further accelerating the user's case handling.
In one embodiment, the video condensing module 1802 further comprises an acquisition module 1804. The acquisition module 1804 collects N video frames starting from the first frame in which the target video object appears in each video, and collects M video frames ending at the last frame in which the target video object appears in each video, where M and N are positive integers. In one embodiment, the values of M and N are proportional to the duration for which the target video object appears in each video. The advantage is that making M and N proportional to the appearance duration reduces the collection of redundant frames, saves condensing time, and improves condensing efficiency.
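A minimal sketch of the acquisition module's frame selection, assuming frame indices for the target's first and last appearance in one video; the proportionality constant `rate` and the function name `frame_ranges` are hypothetical — the patent states only that M and N are proportional to the appearance duration.

```python
def frame_ranges(first_idx, last_idx, rate=0.1):
    """Return the frame-index ranges to collect from one video:
    N frames from the first appearance onward, and M frames up to the last."""
    duration = last_idx - first_idx + 1     # appearance duration, in frames
    n = max(1, int(rate * duration))        # N, proportional to duration
    m = max(1, int(rate * duration))        # M, proportional to duration
    head = list(range(first_idx, first_idx + n))      # frames after first appearance
    tail = list(range(last_idx - m + 1, last_idx + 1))  # frames before last appearance
    return head, tail

head, tail = frame_ranges(100, 199, rate=0.1)
print(head)  # [100, 101, ..., 109]
print(tail)  # [190, 191, ..., 199]
```

With a 100-frame appearance and `rate=0.1`, ten frames are taken from each end, skipping the redundant middle of the appearance.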
Figure 19 shows a flow chart of a video condensing method 1900 according to an embodiment of the present invention. In step 1902, multiple videos are collected at multiple geographical positions. In step 1904, a target video object is selected. In step 1906, the multiple videos are stored in multiple memory addresses respectively according to their geographical positions. In step 1908, the movement direction of the target video object is judged according to the multiple memory addresses, and a bitmap marked with the movement direction is produced. In step 1910, the videos containing the target video object are selected from among the multiple videos according to the bitmap. In step 1912, a predetermined number of video frames containing the target video object are collected from each of the selected videos to produce multiple video frame sets. In step 1914, the multiple video frame sets are spliced according to the movement direction shown by the bitmap to form a condensed video. Step 1906 may adopt the method flows of Figures 8 to 10 or Figures 16 to 17.
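The selection and splicing steps 1910 to 1914 can be sketched as below: only cameras that appear on the direction bitmap contribute a frame set, and the sets are concatenated in movement-direction order. The frame labels and the function name `condense` are hypothetical.

```python
def condense(frame_sets, direction):
    """frame_sets: {address_number: [frames]};
    direction: address numbers in visiting order (from the bitmap).
    Returns the spliced, condensed frame list."""
    condensed = []
    for addr in direction:        # splice in movement-direction order (step 1914)
        if addr in frame_sets:    # cameras not on the path are excluded (step 1910)
            condensed.extend(frame_sets[addr])
    return condensed

# Hypothetical frame sets; camera 42 is not on the target's path.
sets = {14: ["f14_0", "f14_1"], 23: ["f23_0"], 31: ["f31_0"], 42: ["f42_0"]}
print(condense(sets, [14, 23, 31]))  # ['f14_0', 'f14_1', 'f23_0', 'f31_0']
```

Note that camera 42's frames never enter the condensed video, mirroring how the module can directly exclude a device that the bitmap shows is off the target's path.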
Figure 20 shows a flow chart of another video condensing method 1912 according to an embodiment of the present invention. Figure 20 further describes step 1912 in Figure 19. In step 2002, N video frames are collected starting from the first frame in which the target video object appears in each video. In step 2004, M video frames are collected ending at the last frame in which the target video object appears in each video, where M and N are positive integers. In one embodiment, the values of M and N are proportional to the duration for which the target video object appears in each video.
An advantage is that the memory addresses of the memory record the relative positions of the video sources, and the direction bitmap is produced from the stored address information, so the basic motion trajectory of the target video object can be displayed effectively and intuitively. When the user needs only basic direction information, this approach saves a great deal of time and improves the user's work efficiency; when the user needs to study the target object further, its result provides a basis for deeper research. In particular, it provides a basis for subsequent video condensing, which greatly reduces the computational load of the condensing and improves work and case-handling efficiency. In addition, making M and N proportional to the duration for which the target video object appears in each video reduces the collection of redundant frames, saves condensing time, and improves condensing efficiency.
Embodiment and accompanying drawing are only the conventional embodiment of the present invention above.Obviously, various supplement, amendment and replacement can be had under the prerequisite not departing from the present invention's spirit that claims define and invention scope.It should be appreciated by those skilled in the art that the present invention can change in form, structure, layout, ratio, material, element, assembly and other side under the prerequisite not deviating from invention criterion according to concrete environment and job requirement in actual applications to some extent.Therefore, be only illustrative rather than definitive thereof in the embodiment of this disclosure, the scope of the present invention is defined by appended claim and legal equivalents thereof, and is not limited thereto front description.

Claims (9)

1. A video condensing system, characterized in that the video condensing system comprises:
multiple video monitoring devices for collecting multiple videos at multiple geographical positions;
a client host communicating with the multiple video monitoring devices through a network, the client host receiving the multiple videos from the multiple video monitoring devices respectively, and the client host further selecting a target video object;
an intelligent combat platform server connected to the client host, the client host uploading the multiple videos to the intelligent combat platform server;
a video storage server connected to the intelligent combat platform server, the video storage server copying the multiple videos from the intelligent combat platform server and storing the multiple videos in multiple memory addresses respectively according to the geographical positions of the multiple videos; and
a video abstract retrieval server connected to the video storage server and the client host, for judging the movement direction of the target video object according to the multiple memory addresses and producing a bitmap marked with the movement direction, wherein the video abstract retrieval server further comprises:
a video condensing module for selecting, according to the bitmap, the videos containing the target video object from among the multiple videos; collecting a predetermined number of video frames containing the target video object from each of the selected videos to produce multiple video frame sets; and splicing the multiple video frame sets according to the movement direction shown by the bitmap to form a condensed video.
2. The video condensing system according to claim 1, characterized in that the video condensing module comprises:
an acquisition module for collecting N video frames starting from the first frame in which the target video object appears in each video, and collecting M video frames ending at the last frame in which the target video object appears in each video, where M and N are positive integers.
3. The video condensing system according to claim 2, characterized in that the values of M and N are proportional to the duration for which the target video object appears in each video.
4. The video condensing system according to claim 1, 2 or 3, characterized in that the video storage server comprises:
a memory comprising multiple memory cells; and
a processing module connected to the memory, the processing module reading the geographical location information of the multiple videos, extracting the longitude and latitude of each geographical position, sorting the longitudes and the latitudes respectively, and determining the multiple memory address numbers according to the sorting results.
5. The video condensing system according to claim 4, characterized in that each memory address number includes two digits, and the processing module further comprises:
a numbering module for determining the first digit of each memory address number according to the longitudes of the multiple geographical positions, wherein, if the longitude of the X-th geographical position is smaller than the longitude of the Y-th geographical position, the first digit of the X-th memory address is smaller than the first digit of the Y-th memory address; the numbering module also determining the second digit of each memory address number according to the latitudes of the multiple geographical positions, wherein, if the latitude of the N-th geographical position is smaller than the latitude of the M-th geographical position, the second digit of the N-th memory address is smaller than the second digit of the M-th memory address, where X, Y, M and N are positive integers smaller than the total number of the multiple videos.
6. The video condensing system according to claim 1, 2 or 3, characterized in that the video storage server comprises:
a memory comprising multiple memory cells; and
a processing module connected to the memory, the processing module selecting a reference position, calculating the multiple distances between the multiple geographical positions and the reference position, comparing the multiple distances, and determining the multiple memory addresses according to the multiple distances.
7. The video condensing system according to claim 6, characterized in that, when the N-th geographical position is due north of the reference position, the address determination module stores the N-th video in the memory cell of address (D1+B_N)D2D3D4 in the memory (i.e. the N-th memory address is (D1+B_N)D2D3D4), where N is less than or equal to the total number of the multiple videos, and B_N is the number assigned by the numbering module to the N-th video; when the N-th geographical position is due south of the reference position, the address determination module stores the N-th video in the memory cell of address D1(D2+B_N)D3D4 in the memory (i.e. the N-th memory address is D1(D2+B_N)D3D4); when the N-th geographical position is due east of the reference position, the address determination module stores the N-th video in the memory cell of address D1D2(D3+B_N)D4 in the memory (i.e. the N-th memory address is D1D2(D3+B_N)D4); when the N-th geographical position is due west of the reference position, the address determination module stores the N-th video in the memory cell of address D1D2D3(D4+B_N) in the memory (i.e. the N-th memory address is D1D2D3(D4+B_N)).
8. The video condensing system according to claim 7, characterized in that, when the N-th geographical position is northeast of the reference position, the address determination module stores the N-th video in the memory cell of address (D1+B_N)D2(D3+B_N)D4 in the memory (i.e. the N-th memory address is (D1+B_N)D2(D3+B_N)D4); when the N-th geographical position is southwest of the reference position, the address determination module stores the N-th video in the memory cell of address D1(D2+B_N)D3(D4+B_N) in the memory (i.e. the N-th memory address is D1(D2+B_N)D3(D4+B_N)); when the N-th geographical position is southeast of the reference position, the address determination module stores the N-th video in the memory cell of address D1(D2+B_N)(D3+B_N)D4 in the memory (i.e. the N-th memory address is D1(D2+B_N)(D3+B_N)D4); when the N-th geographical position is northwest of the reference position, the address determination module stores the N-th video in the memory cell of address (D1+B_N)D2D3(D4+B_N) in the memory (i.e. the N-th memory address is (D1+B_N)D2D3(D4+B_N)).
9. The video condensing system according to claim 1, 2 or 3, characterized in that the video abstract retrieval server comprises:
a retrieval module for retrieving, in each of the multiple videos, the first frame in which the target video object appears;
a time comparison module for comparing the chronological order in which the multiple first frames appear across the multiple videos; and
a bitmap labeling module for producing a bitmap representing the direction of motion of the target video object, and marking the direction of motion of the target video object on the bitmap according to the time comparison result.