WO2019034053A1 - Target positioning method, apparatus, and system - Google Patents

Target positioning method, apparatus, and system

Info

Publication number
WO2019034053A1
Authority
WO
WIPO (PCT)
Prior art keywords
monitoring target
location
prompt information
feature
determining
Prior art date
Application number
PCT/CN2018/100459
Other languages
English (en)
French (fr)
Inventor
陈碧泉
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Publication of WO2019034053A1 publication Critical patent/WO2019034053A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30232: Surveillance

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to a target positioning method, apparatus, and system.
  • The general positioning scheme usually works as follows: each collection device sends the collected video to the server; the server receives and stores each video, analyzes the videos to determine which ones contain the monitoring target, obtains the location information of the devices that collected those videos, and determines the location of the monitoring target based on that location information.
  • In this scheme, each device sends video to the server, which occupies considerable network bandwidth.
  • The purpose of the embodiments of the present application is to provide a target positioning method, apparatus, and system to reduce network bandwidth usage.
  • the embodiment of the present application provides a target positioning system, including an acquisition device and a server, where the server is configured to acquire a monitoring target feature, and send the monitoring target feature to the collection device.
  • The collection device is configured to receive the monitoring target feature, and is further configured to collect an image, perform feature extraction on the image to obtain a to-be-matched feature, and determine whether the to-be-matched feature matches the monitoring target feature; if yes, the location of the monitoring target is determined based on the location of the collection device.
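  As an illustrative sketch only (not part of the claimed embodiments), the device-side matching step could look like the following, assuming features are fixed-length numeric vectors compared by cosine similarity against a hypothetical threshold; the embodiments do not prescribe a particular feature representation or matching method.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_and_locate(to_be_matched, target_feature, device_location, threshold=0.9):
    # If the extracted feature matches the monitoring-target feature,
    # report the collection device's own location as the target's location.
    if cosine_similarity(to_be_matched, target_feature) >= threshold:
        return device_location
    return None
```

  If the features match, the device reports its own location as the target's location, which is the simplest of the localization options described below.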
  • The collection device may be further configured to send prompt information to the server after determining the location of the monitoring target based on the location of the collection device; the server may also be configured to receive the prompt information and determine the location of the monitoring target according to the prompt information.
  • the system includes multiple collection devices, and the collection device may be further configured to send prompt information to the server after determining the location of the monitoring target based on the location of the collection device.
  • the server may be further configured to receive prompt information sent by each collection device, and determine a location and a time corresponding to each prompt information, where the location is: a location of the monitoring target determined according to the prompt information, The time is: the time at which the prompt information is received; and a trajectory is generated as the trajectory of the monitoring target according to the position and time corresponding to each piece of prompt information.
  • The embodiment of the present application further provides a target positioning method applied to a collection device, including: acquiring a monitoring target feature; performing feature extraction on an image collected by the collection device itself to obtain a to-be-matched feature; determining whether the to-be-matched feature matches the monitoring target feature; and if so, determining the location of the monitoring target based on the location of the collection device.
  • the step of acquiring the monitoring target feature may include: receiving a monitoring target feature sent by the server; or acquiring an image including the monitoring target, performing feature extraction on the image, and obtaining a monitoring target feature.
  • The step of determining the location of the monitoring target based on the location of the collection device may include: determining the location of the collection device as the location of the monitoring target; or determining the location of the monitoring target according to the location of the collection device and the field of view of the collection device.
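  The two alternatives in this step can be sketched as follows. This is an illustrative sketch in which positions are planar coordinates in meters, and placing the target at the midpoint of the viewing range along the camera heading is an assumed heuristic, not something the embodiments specify.

```python
import math

def locate_by_device_position(device_xy):
    # Simplest case: take the collection device's own position
    # as the monitoring target's position.
    return device_xy

def locate_by_field_of_view(device_xy, heading_deg, view_range_m):
    # Refined case: offset toward the middle of the device's viewing
    # range along its current heading (0 degrees = +y axis, clockwise).
    x, y = device_xy
    d = view_range_m / 2.0
    rad = math.radians(heading_deg)
    return (x + d * math.sin(rad), y + d * math.cos(rad))
```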
  • The method may further include: determining the position in the image corresponding to the to-be-matched feature as the position of the monitoring target in the image. In that case, the step of determining the location of the monitoring target based on the location of the collection device includes: determining the location of the monitoring target according to the location of the collection device and the position of the monitoring target in the image.
  • The method may further include: outputting the location of the monitoring target; or outputting the location of the monitoring target together with the image collected by the collection device itself; or sending prompt information to the server, where the prompt information is used to indicate the location of the monitoring target.
  • The embodiment of the present application further provides a target positioning method applied to a server, including: acquiring a monitoring target feature; sending the monitoring target feature to the collection device; receiving prompt information sent by the collection device after the collection device determines that a feature in an image it captured matches the monitoring target feature; and determining the location of the monitoring target according to the prompt information.
  • The step of determining the location of the monitoring target according to the prompt information includes: reading the location of the monitoring target carried in the prompt information; or determining the location of the collection device that sent the prompt information as the location of the monitoring target; or determining the location of the monitoring target according to the location and field of view of the collection device that sent the prompt information; or reading the position of the monitoring target in the image carried in the prompt information and determining the location of the monitoring target according to that position and the location of the collection device that sent the prompt information.
  • The method further includes: determining the location and time corresponding to each piece of prompt information, where the location is the location of the monitoring target determined according to the prompt information and the time is the time at which the prompt information was received; and generating a trajectory of the monitoring target according to the location and time corresponding to each piece of prompt information.
  • The method further includes: performing trajectory prediction on the monitoring target according to the trajectory of the monitoring target.
  • The step of generating a trajectory of the monitoring target according to the position and time corresponding to each piece of prompt information may include: for each monitoring target, generating a trajectory of that monitoring target according to the position and time corresponding to each piece of prompt information that contains the monitoring target's identifier.
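  For the multi-target case described above, the per-identifier grouping can be sketched as follows; the record layout `(target_id, location, time)` is an assumed representation for illustration.

```python
from collections import defaultdict

def trajectories_by_target(prompt_records):
    # Group (target_id, location, time) records by monitoring-target id,
    # then sort each group's records by receive time to form one
    # time-ordered trajectory per monitoring target.
    groups = defaultdict(list)
    for target_id, location, time in prompt_records:
        groups[target_id].append((location, time))
    return {
        tid: [loc for loc, _ in sorted(records, key=lambda r: r[1])]
        for tid, records in groups.items()
    }
```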
  • the embodiment of the present application further provides a target positioning apparatus, which is applied to an acquisition device, and includes: a first acquisition module, configured to acquire a monitoring target feature; and an extraction module, configured to perform feature extraction on the image collected by itself. And obtaining a feature to be matched; the determining module is configured to determine whether the feature to be matched matches the feature of the monitoring target; if yes, triggering the first determining module; and the first determining module is configured to determine, according to the location of the collecting device, Determine the location of the monitoring target.
  • the first acquiring module may be specifically configured to: receive a monitoring target feature sent by the server; or acquire an image that includes the monitoring target, perform feature extraction on the image, and obtain a monitoring target feature.
  • the first determining module may be specifically configured to: determine a location of the collecting device as a location of the monitoring target; or, according to a location of the collecting device, and a field of view of the collecting device Range, determining the location of the monitoring target.
  • The device may further include: a second determining module, configured to, when the determination result of the determining module is yes, determine the position in the image corresponding to the to-be-matched feature as the position of the monitoring target in the image; the first determining module is then specifically configured to determine the location of the monitoring target according to the location of the collection device and the position of the monitoring target in the image.
  • The device may further include: an output module, configured to output the location of the monitoring target; or output the location of the monitoring target together with the image collected by the collection device itself; or send prompt information to the server, where the prompt information is used to indicate the location of the monitoring target.
  • the embodiment of the present application further provides a target locating device, which is applied to a server, and includes: a second acquiring module, configured to acquire a monitoring target feature; and a sending module, configured to send the monitoring target feature to the collecting device.
  • so that, after determining that a feature in a captured image matches the monitoring target feature, the collection device sends prompt information to the server; a receiving module, configured to receive the prompt information; and a third determining module, configured to determine the location of the monitoring target according to the prompt information.
  • The third determining module may be specifically configured to: read the location of the monitoring target carried in the prompt information; or determine the location of the collection device that sent the prompt information as the location of the monitoring target; or determine the location of the monitoring target according to the location and field of view of the collection device that sent the prompt information; or read the position of the monitoring target in the image carried in the prompt information and determine the location of the monitoring target according to that position and the location of the collection device that sent the prompt information.
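  The alternatives handled by the third determining module can be sketched as a fall-through dispatch. The prompt-message keys used here are assumptions for illustration, and the field-of-view and in-image-position refinements are omitted for brevity.

```python
def determine_target_location(prompt, known_device_locations):
    # Case 1: the prompt carries the target's location directly.
    if "target_location" in prompt:
        return prompt["target_location"]
    # Case 2: the prompt carries the sending device's location;
    # use it as the target's location.
    if "device_location" in prompt:
        return prompt["device_location"]
    # Fallback: look up the sender's location that the server
    # acquired in advance.
    return known_device_locations[prompt["device_id"]]
```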
  • The device may further include: a fourth determining module, configured to determine the location and time corresponding to each piece of prompt information, where the location is the location of the monitoring target determined according to the prompt information and the time is the time when the prompt information was received; and a generating module, configured to generate a trajectory of the monitoring target according to the location and time corresponding to each piece of prompt information.
  • the device may further include: a prediction module, configured to perform trajectory prediction on the monitoring target according to the trajectory of the monitoring target.
  • The number of monitoring targets may be greater than one. In that case, the second acquiring module may be specifically configured to acquire each monitoring target feature and its corresponding monitoring target identifier; the sending module may be specifically configured to send each monitoring target feature and its corresponding monitoring target identifier to the collection device; the fourth determining module may be further configured to determine the monitoring target identifier included in each piece of prompt information; and the generating module may be specifically configured to generate, for each monitoring target, a trajectory of that monitoring target according to the position and time corresponding to each piece of prompt information that includes the monitoring target's identifier.
  • An embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory is used to store a computer program and the processor is configured to execute the program stored in the memory so as to implement any of the above target positioning methods applied to the collection device.
  • An embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory is used to store a computer program and the processor is configured to execute the program stored in the memory so as to implement any of the above target positioning methods applied to the server.
  • an embodiment of the present application further discloses a computer readable storage medium, where the computer readable storage medium stores a computer program, and when the computer program is executed by the processor, any one of the foregoing is applied to the collection.
  • the target location method of the device is disclosed.
  • The embodiment of the present application further discloses a computer readable storage medium storing a computer program; when the computer program is executed by a processor, any of the foregoing target positioning methods applied to the server is implemented.
  • An embodiment of the present application further discloses executable program code for executing any of the above target positioning methods applied to a collection device.
  • An embodiment of the present application further discloses executable program code for executing any of the above target positioning methods applied to a server.
  • In the solutions provided by the embodiments of the present application, the collection device performs feature extraction on the image it collects and matches the extracted feature against the monitoring target feature; if they match, the location of the monitoring target is determined based on the location of the collection device.
  • The collection device analyzes and processes the images it collects itself, instead of sending all collected images to the server for analysis and processing, thus reducing network bandwidth usage.
  • FIG. 1 is a schematic diagram of a first structure of a target positioning system according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a second structure of a target positioning system according to an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of a target positioning method applied to a collection device according to an embodiment of the present application;
  • FIG. 4 is a first schematic flowchart of a target positioning method applied to a server according to an embodiment of the present application;
  • FIG. 5 is a second schematic flowchart of a target positioning method applied to a server according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a target positioning apparatus applied to a collection device according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a target positioning apparatus applied to a server according to an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
  • the embodiment of the present application provides a target positioning method, device, and system.
  • the following is a detailed description of a target positioning system provided by an embodiment of the present application.
  • The system can be as shown in FIG. 1, including an acquisition device and a server, where:
  • the server is configured to acquire a monitoring target feature, and send the monitoring target feature to the collecting device;
  • The collection device is configured to receive the monitoring target feature, and is further configured to collect an image and perform feature extraction on the image to obtain a to-be-matched feature; determine whether the to-be-matched feature matches the monitoring target feature; and if yes, determine the location of the monitoring target based on the location of the collection device (its own location).
  • The collection device may be a smart terminal such as a mobile phone or a tablet, or may be a camera, a video camera, or the like; this is not limited here.
  • The collection device may determine the location of the monitoring target based on its own location in various ways, including but not limited to the following:
  • the collection device can directly determine its own location as the location of the monitoring target.
  • the collection device can determine the location of the monitoring target based on its own location and its own field of view.
  • For example, the camera of a dome camera can be rotated; that is, the dome camera can point in different directions for image acquisition. Combining the position of the dome camera with its field of view (that is, the direction in which it is performing image acquisition) makes the determined position of the monitoring target more accurate.
  • Alternatively, the collection device may first determine the position in the image corresponding to the to-be-matched feature as the position of the monitoring target in the image, and then determine the location of the monitoring target according to its own location and the position of the monitoring target in the image.
  • For example, the collection device may be a wide-angle camera whose captured image covers a large field of view. Suppose the image captured by the wide-angle camera includes a residential community, a park, and a square; the position of the monitoring target in the image (the position corresponding to the to-be-matched feature) can then be further determined. Specifically, the image captured by the wide-angle camera can be divided into regions, with the residential community, the park, and the square corresponding to different image regions. After the position of the monitoring target in the image is determined, the image region containing that position indicates whether the monitoring target is in the residential community, the park, or the square. It can be seen that combining the position of the wide-angle camera with the position of the monitoring target in the image makes the determined position of the monitoring target more accurate.
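  The region-division idea can be sketched as a point-in-rectangle lookup; the region names and pixel boxes below are hypothetical values for a 1920x720 frame.

```python
# Hypothetical image regions: each region is an axis-aligned
# pixel box (x0, y0, x1, y1) within the wide-angle frame.
REGIONS = {
    "residential community": (0, 0, 640, 720),
    "park": (640, 0, 1280, 720),
    "square": (1280, 0, 1920, 720),
}

def region_of(point, regions=REGIONS):
    # Return the name of the region containing the given image position,
    # or None if the position falls outside all regions.
    x, y = point
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```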
  • the collection device analyzes and processes the image collected by itself, instead of sending all the collected images to the server for analysis and processing, thereby reducing network bandwidth occupancy.
  • the collecting device may be further configured to send prompt information to the server after determining the location of the monitoring target based on the location of the collecting device;
  • the server is further configured to receive the prompt information, and determine a location of the monitoring target according to the prompt information.
  • the server may also locate the monitoring target.
  • The server may determine the location of the monitoring target according to the prompt information in multiple ways, including but not limited to the following:
  • the location of the monitoring target is carried in the prompt information and sent to the server, and the server directly reads the location of the monitoring target carried in the prompt information.
  • The collection device carries its own location and field of view in the prompt information and sends it to the server; the server then determines the location of the monitoring target according to the location of the collection device and the field of view of the collection device carried in the prompt information.
  • For example, a dome camera can rotate in the horizontal and vertical directions; that is, it can point in different directions for image acquisition. The server combines the position of the dome camera with its field of view (which direction it is acquiring images in) to determine the location of the monitoring target more accurately.
  • Alternatively, the collection device carries only its own field of view in the prompt information and sends it to the server; the server obtains the location information of each collection device in advance, and after receiving the prompt information, the server determines the location of the collection device that sent it; the server then determines the location of the monitoring target according to the determined location of the collection device and the field of view carried in the prompt information.
  • In both the second and third cases, the server determines the location of the monitoring target according to the location and field of view of the collection device; the difference is that in the second case the prompt information includes the location of the collection device, while in the third case it does not, and the server obtains the location of the collection device in advance.
  • Alternatively, the collection device carries its own position and the position of the monitoring target in the captured image in the prompt information and sends it to the server; after receiving the prompt information, the server determines the location of the monitoring target according to the location of the collection device and the position of the monitoring target in the captured image.
  • For example, the collection device may be a wide-angle camera whose captured image covers a large field of view. Suppose the image captured by the wide-angle camera includes a residential community, a park, and a square; the position of the wide-angle camera can then be combined with the position of the monitoring target in the image to determine the location of the monitoring target. Specifically, the image captured by the wide-angle camera can be divided into regions, with the residential community, the park, and the square corresponding to different image regions. After the position of the monitoring target in the image is determined, the image region containing that position indicates whether the monitoring target is in the residential community, the park, or the square. It can be seen that combining the position of the wide-angle camera with the position of the monitoring target in the image makes the determined position of the monitoring target more accurate.
  • Alternatively, the collection device carries only the position of the monitoring target in the image it collected in the prompt information and sends it to the server; the server obtains the location information of each collection device in advance, and after receiving the prompt information, the server determines the location of the collection device that sent the prompt information.
  • In both the fifth and sixth cases, the server determines the position of the monitoring target according to the position of the collection device and the position of the monitoring target in the image collected by that device; the difference is that in the fifth case the prompt information includes the location of the collection device, while in the sixth case it does not, and the server obtains the location of the collection device in advance.
  • Alternatively, the collection device carries its own position in the prompt information and sends it to the server, and the server determines the location of the collection device carried in the prompt information as the location of the monitoring target.
  • Alternatively, the prompt information sent by the collection device includes none of the above information, and the server obtains the location of each collection device in advance; after receiving the prompt information, the server looks up, among the pre-acquired device locations, the location of the collection device that sent the prompt information, and determines the found location as the location of the monitoring target.
  • the collecting device may store the image if it is determined that the to-be-matched feature matches the monitoring target feature.
  • In this way, the collection device stores only images that include the monitoring target; compared with the collection device storing all collected images, or sending all collected images to the server for storage, this saves storage resources.
  • the image may be output to the display device, so that the monitoring target can be displayed to the user more intuitively.
  • The collection device may send the image to the server when the to-be-matched feature matches the monitoring target feature, or the prompt information may include the image, so that the information related to the monitoring target acquired by the server is richer; and since the collection device sends only images containing the monitoring target to the server rather than all collected images, network bandwidth is saved.
  • The collection device may send the to-be-matched feature to the server when the to-be-matched feature matches the monitoring target feature; or the prompt information may include the to-be-matched feature.
  • the to-be-matched feature may include more rich content than the monitoring target feature.
  • For example, suppose the monitoring target feature acquired by the server includes facial features and height information, while the to-be-matched feature includes not only facial features and height information but also features such as clothing; in this way, the information related to the monitoring target acquired by the server is richer. Moreover, a feature occupies less network bandwidth than an image, so transmitting only the to-be-matched feature saves network bandwidth further compared with transmitting the image.
  • the system can also be as shown in FIG. 2, including multiple acquisition devices (acquisition device 1, acquisition device 2, ... acquisition device N) and servers.
  • The specific numbers of collection devices and servers are not limited.
  • Each of the collection devices may send a prompt message to the server, and after receiving the plurality of prompt information, the server may determine a location and a time corresponding to each prompt information, where the location is: the monitoring target determined according to the prompt information. Position, the time is: a time at which the prompt information is received; and a trajectory is generated as a trajectory of the monitoring target according to the position and time corresponding to each piece of prompt information.
  • Suppose the collection device 1 performs feature extraction on the image it collects, obtains a to-be-matched feature, and determines that the to-be-matched feature matches the monitoring target feature; the collection device 1 then sends prompt information to the server. The server receives this prompt information at 9:00 a.m. on July 20, and the location of the monitoring target determined by the server according to it is A. The server receives the prompt information sent by the collection device 2 at 9:02 a.m. on July 20, and the location determined according to it is B. The server receives the prompt information sent by the collection device 3 at 9:05 a.m. on July 20, and the location determined according to it is C. The server receives the prompt information sent by the collection device 4 at 9:08 a.m. on July 20, and the location determined according to it is D.
  • the server may generate the trajectory of the monitoring target according to the location and time corresponding to each piece of prompt information: A → B → C → D. It can be seen that with the embodiment shown in FIG. 2, trajectory tracking of the monitoring target can be performed by the server.
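The trajectory generation described above (sort the received prompt records by time, then chain the locations) can be sketched as follows; the record format and function name are illustrative assumptions, not part of the embodiment:

```python
from datetime import datetime

def generate_trajectory(prompt_records):
    """Sort (location, receive_time) records by time and chain the
    locations into a trajectory, as in the A -> B -> C -> D example."""
    ordered = sorted(prompt_records, key=lambda record: record[1])
    return [location for location, _ in ordered]

# Prompt records as the server might receive them, out of order:
records = [
    ("B", datetime(2018, 7, 20, 9, 2)),
    ("A", datetime(2018, 7, 20, 9, 0)),
    ("D", datetime(2018, 7, 20, 9, 8)),
    ("C", datetime(2018, 7, 20, 9, 5)),
]
print(" -> ".join(generate_trajectory(records)))  # A -> B -> C -> D
```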
  • the server may further perform trajectory prediction on the monitoring target according to the trajectory of the generated monitoring target.
  • the moving direction and the moving speed of the monitoring target may be determined according to the generated trajectory of the monitoring target, and the trajectory prediction is performed on the monitoring target according to the moving direction and the moving speed.
  • for example, if the generated trajectory of the monitoring target has been moving eastward, it can be predicted that the monitoring target will still move eastward at the next moment (that is, the moving direction of the monitoring target is predicted); in addition, the moving speed of the monitoring target can be calculated from the trajectory; according to the moving direction and moving speed, the subsequent trajectory of the monitoring target can be predicted.
  • as another example, the prompt information sent by the collecting devices to the server carries images of the monitoring target; after receiving the prompt information sent by multiple collecting devices, the server analyzes the images carried therein. If the analysis result indicates that the monitoring target has been moving along a road with no crossing, it can be predicted that the monitoring target will still move along that road at the next moment (that is, the moving direction is predicted); in addition, the moving speed of the monitoring target can be calculated from the generated trajectory; according to the moving direction and moving speed, the subsequent trajectory of the monitoring target can be predicted.
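A minimal sketch of the direction-and-speed prediction described above, assuming trajectory points are (x, y, t) tuples in meters and seconds (this representation is an assumption for illustration):

```python
def predict_next_position(trajectory, dt):
    """Estimate moving direction and speed from the last two (x, y, t)
    points of the generated trajectory, then extrapolate dt seconds."""
    (x1, y1, t1), (x2, y2, t2) = trajectory[-2], trajectory[-1]
    vx = (x2 - x1) / (t2 - t1)  # speed component along x
    vy = (y2 - y1) / (t2 - t1)  # speed component along y
    return (x2 + vx * dt, y2 + vy * dt)

# Target moving east (increasing x) at 2 m/s:
track = [(0.0, 0.0, 0.0), (120.0, 0.0, 60.0)]
print(predict_next_position(track, 30.0))  # (180.0, 0.0)
```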
  • the trajectory tracking of multiple monitoring targets can be performed by using the embodiment shown in FIG. 2:
  • the server obtains multiple monitoring target features and corresponding monitoring target identifiers, and each monitoring target feature is a feature of the monitoring target; the server sends the plurality of monitoring target features and the corresponding monitoring target identifiers to the respective collecting devices;
  • each acquisition device performs feature extraction on the image it has collected to obtain a feature to be matched, and matches the to-be-matched feature against each monitoring target feature; if the matching succeeds, it determines the matched monitoring target identifier and, based on its own position, determines the location of the monitoring target corresponding to that identifier; it then sends prompt information, which includes the identifier, to the server;
  • the server receives the prompt information sent by each collection device and determines the location, time, and monitoring target identifier corresponding to each piece of prompt information; for each monitoring target, it generates a trajectory of that monitoring target according to the location and time corresponding to each piece of prompt information that includes that monitoring target's identifier.
  • trajectory tracking of multiple monitoring targets is thus achieved. Trajectory prediction can also be performed on the multiple monitoring targets using either of the above approaches.
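The per-identifier grouping step above can be sketched as follows; the (target_id, location, time) prompt format is an illustrative assumption:

```python
from collections import defaultdict

def trajectories_by_target(prompts):
    """Group (target_id, location, time) prompts by monitoring target
    identifier and build one time-ordered trajectory per target."""
    grouped = defaultdict(list)
    for target_id, location, time in prompts:
        grouped[target_id].append((location, time))
    return {tid: [loc for loc, _ in sorted(recs, key=lambda r: r[1])]
            for tid, recs in grouped.items()}

prompts = [("t1", "A", 1), ("t2", "X", 1), ("t1", "B", 2), ("t2", "Y", 3)]
print(trajectories_by_target(prompts))
```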
  • the collection device may include a smart terminal such as a mobile phone or a tablet (PAD), or a camera or video camera having an image processing function, which is not limited.
  • FIG. 3 is a schematic flowchart of a target positioning method applied to an acquisition device according to an embodiment of the present disclosure, including:
  • the server may send a monitoring target feature to the collection device.
  • the user can interact with the server, and the server acquires the target feature that the user needs to monitor, and delivers the feature to the collection device.
  • the collecting device may acquire an image including a monitoring target, perform feature extraction on the image, and obtain a monitoring target feature.
  • the user can directly interact with the collection device, and the user sends an image including the monitoring target to the collection device, and the collection device performs feature extraction on the image to obtain a monitoring target feature.
  • image features may be extracted in various ways: color features of the image may be extracted using color histograms, color moments, and the like; texture features may be extracted using statistical methods, geometric methods, model methods, and the like; shape features of a human target in the image may be extracted using the boundary feature method, the geometric parameter method, a target detection algorithm, and the like; or a pre-trained neural network may be used to extract target features of a human target in the image; the specific method is not limited.
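As one toy illustration of the color-histogram features mentioned above (the binning scheme and pixel format are assumptions for the sketch, not the embodiment's method):

```python
def color_histogram(pixels, bins=4):
    """Toy color-histogram feature: count (r, g, b) pixels into
    `bins` bins per channel and normalize, giving a fixed-length vector."""
    hist = [0] * (bins * 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[r // step] += 1             # red channel bins
        hist[bins + g // step] += 1      # green channel bins
        hist[2 * bins + b // step] += 1  # blue channel bins
    n = len(pixels)
    return [count / n for count in hist]

# Two reddish pixels and one green pixel:
feat = color_histogram([(255, 0, 0), (250, 10, 5), (0, 255, 0)])
```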
  • S302 Perform feature extraction on the image collected by itself to obtain a feature to be matched.
  • the manner in which the monitoring target feature is extracted should be consistent with the manner in which the feature to be matched is extracted. For example, if the monitoring target feature is extracted using a pre-trained neural network, the same neural network may be used in S302 to perform feature extraction on the image and obtain the feature to be matched.
  • S304 Determine a location of the monitoring target based on a location of the collection device.
  • to determine whether the two features match, the similarity between them can be calculated: if the similarity is greater than a similarity threshold, the two match; alternatively, the difference between them can be calculated, and if the difference is less than a difference threshold, the two match; and so on. The specific matching method and threshold settings are not limited.
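A minimal sketch of similarity-threshold matching, assuming features are numeric vectors and using cosine similarity (the metric and threshold value are illustrative choices; the embodiment does not fix them):

```python
import math

def is_match(feature_a, feature_b, sim_threshold=0.9):
    """Cosine similarity between two feature vectors; a value above
    the similarity threshold is treated as a successful match."""
    dot = sum(a * b for a, b in zip(feature_a, feature_b))
    norm = (math.sqrt(sum(a * a for a in feature_a))
            * math.sqrt(sum(b * b for b in feature_b)))
    return dot / norm > sim_threshold

print(is_match([1.0, 0.0, 1.0], [0.9, 0.1, 1.1]))  # True
print(is_match([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # False
```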
  • the collection device collects an image that includes the monitoring target, which means the monitoring target appears within the collection range of the collection device; therefore, the location of the monitoring target can be determined based on the location of the collection device itself.
  • the collection device determines the location of the monitoring target based on its own location, and may be in various ways, including but not limited to the following:
  • the collection device can directly determine its own location as the location of the monitoring target.
  • the collection device can determine the location of the monitoring target based on its own location and its own field of view.
  • for example, if the acquisition device is a dome camera (ball machine), its camera can be rotated, that is, it can be aimed in different directions for image acquisition; combining the dome camera's position with its field of view makes the determined position of the monitoring target more accurate.
  • the collecting device may first determine the position in the image corresponding to the to-be-matched feature as the position of the monitoring target in the image, and then determine the location of the monitoring target according to its own position and the position of the monitoring target in the image.
  • for example, the acquisition device may be a wide-angle camera, whose captured image has a large field of view. Suppose the image captured by the wide-angle camera includes a residential area, a park, and a square; in this case, the position of the monitoring target in the image (the position corresponding to the feature to be matched) can be further determined.
  • area division can be performed in the image acquired by the wide-angle camera, dividing the image into different regions corresponding to the above residential area, park, and square. After the position of the monitoring target in the image is determined, whether the monitoring target is in the residential area, the park, or the square can be determined according to the image region in which that position lies. It can be seen that combining the position of the wide-angle camera with the position of the monitoring target in the image makes the determined location of the monitoring target more accurate.
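The region-division lookup above can be sketched as follows; the rectangular region coordinates and names are hypothetical values for illustration:

```python
def region_of(x, y, regions):
    """Map the monitoring target's pixel position to a named area,
    using pre-divided rectangular image regions (x0, y0, x1, y1)."""
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

# Illustrative division of a 1920x720 wide-angle frame:
regions = {
    "residential": (0,    0, 640,  720),
    "park":        (640,  0, 1280, 720),
    "square":      (1280, 0, 1920, 720),
}
print(region_of(700, 300, regions))  # park
```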
  • the collecting device can output the location of the monitoring target; thus, the user can directly obtain the location of the monitoring target from the collecting device side, and the collecting device does not need to The collected images are sent to the server for analysis and processing, which reduces the network bandwidth occupancy.
  • the collection device can output the location of the monitoring target and the image including the monitoring target; thus, the monitoring target can be displayed to the user more intuitively.
  • the collecting device may store the image collected by itself only when the determination result in S303 is YES; compared with the collecting device storing all collected images, or sending all collected images to the server for storage, this saves storage resources.
  • the collection device may send prompt information to the server, where the prompt information is used to prompt the location of the monitoring target. In this way, the server can also locate the monitoring target.
  • the prompt information may include the location of the monitoring target determined in S304, or may include the location information of the collection device, or may include neither and serve only as a prompt.
  • the collecting device may also send an image containing the monitoring target to the server, or the prompt information may include the image, where the image includes the monitoring target; thus, the information acquired by the server related to the monitoring target is richer.
  • the collection device only sends the image containing the monitoring target to the server, instead of sending all the collected images to the server, saving network bandwidth.
  • the collecting device may also send the to-be-matched feature to the server if the determination result in S303 is YES, or the prompt information may include the to-be-matched feature; since the to-be-matched feature matches the monitoring target feature, the information the server acquires about the monitoring target is richer; moreover, a feature occupies less network bandwidth than an image, so sending only the to-be-matched feature further saves network bandwidth compared with transmitting the image.
  • the server can present these information related to the monitoring target (eg, location, image, feature) to the user.
  • in the embodiment where the user interacts with the server, the server acquires the feature of the target the user needs to monitor and delivers it to the collection devices; in that case it is more reasonable for the server to display the information related to the monitoring target to the user.
  • the format in which the collection device sends the prompt information or the image or other information to the server may be structured information or unstructured information, and the specific information format is not limited.
  • the collecting device may send the above information to the server in real time after executing S304; or it may send the information in real time when the determination result in S303 is YES (in which case the information may not include the location of the monitoring target determined by the collecting device); or it may send the prompt information in real time while sending images, features, and other information with a delay; the specific transmission manner is not limited.
  • in some related schemes, the server receives images sent by multiple acquisition devices, stores the received images, then analyzes the stored images and locates the monitoring target according to the analysis result. In the present solution, if the collecting device sends the prompt information to the server in real time after executing S304, the server can locate the monitoring target in real time. It can be seen that in the present solution the server does not need to store and analyze multiple image streams, which improves positioning efficiency and real-time performance; moreover, since the server does not store multiple image streams, storage space is saved.
  • applying this embodiment, the collecting device performs feature extraction on the image it has collected and matches the extracted feature against the monitoring target feature; if they match, the location of the monitoring target is determined based on the collecting device's own position. The collection device analyzes and processes the images it collects, instead of sending all collected images to the server for analysis and processing, thereby reducing network bandwidth occupation.
  • the user may send or input an image including the monitoring target to the server, and the server performs feature extraction on the image to obtain the monitoring target feature; alternatively, the user may directly send or input the monitoring target feature to the server, or the server may acquire the monitoring target feature from another device.
  • S402 Send the monitoring target feature to the collecting device, so that the collecting device sends the prompting information to the server if it determines that the feature in the captured image matches the monitoring target feature.
  • the monitoring target features acquired in S401 can be sent to the multiple acquisition devices.
  • Each acquisition device determines whether a feature in the captured image matches the monitored target feature, and if so, sends a prompt message to the server.
  • S403 Receive prompt information sent by the collection device, and determine a location of the monitoring target according to the prompt information.
  • there may be multiple situations in which the server determines the location of the monitoring target according to the prompt information, including but not limited to the following:
  • in the first case, the collection device carries the location of the monitoring target in the prompt information sent to the server, and the server directly reads it from the prompt information.
  • in the second case, the collection device carries its own position and field of view in the prompt information sent to the server, and the server determines the location of the monitoring target according to the collection device's position and field of view carried in the prompt information. For example, a dome camera can rotate horizontally and vertically, that is, it can be aimed in different directions for image acquisition; the server combines the dome camera's position with its field of view (which direction it is aimed for image acquisition) to determine the location of the monitoring target more accurately.
  • in the third case, the collection device carries its own field of view in the prompt information sent to the server; the server obtains the location information of each collection device in advance, and after receiving the prompt information, determines the location of the collection device that sent it; the server then determines the location of the monitoring target according to that determined location and the field of view carried in the prompt information.
  • in both the second and third cases, the server determines the location of the monitoring target according to the location and field of view of the collection device; the difference is that in the second case the prompt information includes the location of the collection device, whereas in the third case it does not, and the server obtains the location of the collection device in advance.
  • in the fourth case, the collecting device carries its own position and the position of the monitoring target in the captured image in the prompt information sent to the server; after receiving the prompt information, the server determines the location of the monitoring target according to the location of the collecting device and the position of the monitoring target in the image.
  • for example, the acquisition device may be a wide-angle camera, whose captured image has a large field of view. Suppose the image captured by the wide-angle camera includes a residential area, a park, and a square; the position of the wide-angle camera can be combined with the position of the monitoring target in the image to determine the location of the monitoring target. Specifically, area division can be performed in the image acquired by the wide-angle camera, dividing the image into different regions corresponding to the residential area, the park, and the square; after the position of the monitoring target in the image is determined, whether the monitoring target is in the residential area, the park, or the square can be determined according to the image region in which that position lies. It can be seen that combining the position of the wide-angle camera with the position of the monitoring target in the image makes the determined location of the monitoring target more accurate.
  • in the fifth case, the collection device carries the position of the monitoring target in its own captured image in the prompt information sent to the server; the server obtains the location information of each collection device in advance, and after receiving the prompt information, determines the location of the collection device that sent it, then determines the location of the monitoring target according to that location and the position of the monitoring target in the image. In both the fourth and fifth cases, the server determines the location of the monitoring target according to the location of the collecting device and the position of the monitoring target in the image it collected; the difference is that in the fourth case the prompt information includes the location of the collection device, whereas in the fifth case it does not, and the server obtains the location of the collection device in advance.
  • in another case, the collection device carries its own position in the prompt information sent to the server, and the server determines the location of the collection device carried in the prompt information as the location of the monitoring target.
  • in yet another case, the prompt information sent by the collection device does not include any of the above information; the server obtains the location of each collection device in advance, and after receiving the prompt information, looks up the location of the collection device that sent it among the locations acquired in advance, and determines the found location as the location of the monitoring target.
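The fallback among the cases above can be sketched as a simple dispatch; the prompt field names and the dictionary representation are illustrative assumptions, not part of the claimed method:

```python
def locate_target(prompt, known_device_positions):
    """Resolve the monitoring target's location from a prompt,
    falling back through the cases described above."""
    if "target_location" in prompt:       # location carried directly
        return prompt["target_location"]
    if "device_location" in prompt:       # device position carried
        return prompt["device_location"]
    # otherwise: look up the sender among positions obtained in advance
    return known_device_positions.get(prompt["device_id"])

prompt = {"device_id": "cam-3"}
print(locate_target(prompt, {"cam-3": "gate B"}))  # gate B
```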
  • the server can also perform trajectory tracking on the monitoring target:
  • each collection device sends prompt information to the server; after receiving multiple pieces of prompt information, the server may determine the location and the time corresponding to each piece of prompt information, where the location is the location of the monitoring target determined according to that prompt information and the time is the moment at which that prompt information was received; a trajectory of the monitoring target is then generated according to the location and time corresponding to each piece of prompt information.
  • for example, collecting device 1 performs feature extraction on the image it has collected, obtains a feature to be matched, and determines that the to-be-matched feature matches the monitoring target feature; collecting device 1 then sends prompt information to the server. The server receives this prompt information at 9:00 am on July 20, and the location of the monitoring target determined by the server according to the prompt information is A; the server receives the prompt information sent by collecting device 2 at 9:02 am on July 20, and the corresponding location is B; the server receives the prompt information sent by collecting device 3 at 9:05 am on July 20, and the corresponding location is C; the server receives the prompt information sent by collecting device 4 at 9:08 am on July 20, and the corresponding location is D.
  • the server may generate the trajectory of the monitoring target according to the location and time corresponding to each piece of prompt information: A → B → C → D. It can be seen that with the embodiment shown in FIG. 2, trajectory tracking of the monitoring target can be performed by the server.
  • the server may further perform trajectory prediction on the monitoring target according to the trajectory of the generated monitoring target.
  • the moving direction and the moving speed of the monitoring target may be determined according to the generated trajectory of the monitoring target, and the trajectory prediction is performed on the monitoring target according to the moving direction and the moving speed.
  • for example, if the generated trajectory of the monitoring target has been moving eastward, it can be predicted that the monitoring target will still move eastward at the next moment (that is, the moving direction of the monitoring target is predicted); in addition, the moving speed of the monitoring target can be calculated from the trajectory; according to the moving direction and moving speed, the subsequent trajectory of the monitoring target can be predicted.
  • as another example, the prompt information sent by the collecting devices to the server carries images of the monitoring target; after receiving the prompt information sent by multiple collecting devices, the server analyzes the images carried therein. If the analysis result indicates that the monitoring target has been moving along a road with no crossing, it can be predicted that the monitoring target will still move along that road at the next moment (that is, the moving direction is predicted); in addition, the moving speed of the monitoring target can be calculated from the generated trajectory; according to the moving direction and moving speed, the subsequent trajectory of the monitoring target can be predicted.
  • the server may perform trajectory tracking on multiple monitoring targets, as shown in FIG. 5, and the method includes:
  • S501 Acquire each monitoring target feature and a corresponding monitoring target identifier.
  • S502 Send the each monitoring target feature and the corresponding monitoring target identifier to the collecting device.
  • each acquisition device performs feature extraction on the image it has collected to obtain a feature to be matched, matches the to-be-matched feature against each monitoring target feature, and if the matching succeeds, determines the matched monitoring target identifier and, based on its own position, determines the location of the monitoring target corresponding to that identifier; it then sends prompt information, which includes the identifier, to the server.
  • S503 Receive prompt information sent by each collection device, and determine a location of the monitoring target according to each received prompt information.
  • S504 Determine a location and a time corresponding to each piece of prompt information, and a monitoring target identifier included in each piece of prompt information.
  • the location is: a location of the monitoring target determined according to the prompt information, where the time is: a moment when the prompt information is received.
  • the embodiment of the present application further provides a target positioning device.
  • FIG. 6 is a schematic structural diagram of a target positioning apparatus applied to an acquisition device according to an embodiment of the present disclosure, including:
  • the first obtaining module 601 is configured to acquire a monitoring target feature
  • the extracting module 602 is configured to perform feature extraction on the image collected by the self, to obtain a feature to be matched
  • the determining module 603 is configured to determine whether the to-be-matched feature matches the monitoring target feature and, if so, to trigger the first determining module; the first determining module 604 is configured to determine the location of the monitoring target based on the location of the collecting device.
  • the first obtaining module 601 may be specifically configured to: receive a monitoring target feature sent by the server; or acquire an image including the monitoring target, perform feature extraction on the image, and obtain a monitoring target feature.
  • the first determining module 604 may be specifically configured to: determine a location of the collection device as a location of the monitoring target; or, according to a location of the collection device, and a view of the collection device The field range determines the location of the monitoring target.
  • the device may further include a second determining module (not shown), configured to, when the determination result of the determining module 603 is YES, determine the position in the image corresponding to the to-be-matched feature as the position of the monitoring target in the image; the first determining module 604 is specifically configured to determine the location of the monitoring target according to the location of the collecting device and the position of the monitoring target in the image.
  • the device may further include an output module (not shown), configured to output the location of the monitoring target; or to output the location of the monitoring target and the image collected by the device; or to send prompt information to the server, where the prompt information is used to prompt the location of the monitoring target.
  • FIG. 7 is a schematic structural diagram of a target positioning apparatus applied to a server according to an embodiment of the present disclosure, including:
  • the second obtaining module 701 is configured to acquire a monitoring target feature
  • the sending module 702 is configured to send the monitoring target feature to the collecting device, so that the collecting device sends prompt information to the server when it determines that a feature in a captured image matches the monitoring target feature; the receiving module 703 is configured to receive the prompt information; the third determining module 704 is configured to determine the location of the monitoring target according to the prompt information.
  • the third determining module 704 may be specifically configured to: read the location of the monitoring target carried in the prompt information; or determine the location of the collecting device that sent the prompt information as the location of the monitoring target; or determine the location of the monitoring target according to the location and field of view of the collection device that sent the prompt information; or read the position of the monitoring target in the image carried in the prompt information, and determine the location of the monitoring target according to the position of the monitoring target in the image and the location of the collection device that sent the prompt information.
  • the device may further include a fourth determining module and a generating module (not shown), where the fourth determining module is configured to determine the location and time corresponding to each piece of prompt information, the location being the location of the monitoring target determined according to that prompt information and the time being the moment at which that prompt information was received; the generating module is configured to generate a trajectory of the monitoring target according to the location and time corresponding to each piece of prompt information.
  • the apparatus may further include: a prediction module (not shown), configured to perform trajectory prediction on the monitoring target according to the trajectory of the monitoring target.
  • the number of the monitoring targets is greater than 1;
  • the second acquiring module 701 may be specifically configured to acquire each monitoring target feature and the corresponding monitoring target identifier; the sending module 702 may be specifically configured to send each monitoring target feature and the corresponding monitoring target identifier to the collecting device;
  • the fourth determining module may be further configured to determine a monitoring target identifier included in each prompt information;
  • the generating module may be specifically configured to: for each monitoring target, generate a trajectory of that monitoring target according to the location and time corresponding to each piece of prompt information that includes that monitoring target's identifier.
  • the embodiment of the present application further provides an electronic device, as shown in FIG. 8, including a processor 801 and a memory 802, wherein the memory 802 is configured to store a computer program, and the processor 801 is configured to execute a program stored on the memory 802.
  • the following steps are performed: acquiring a monitoring target feature; performing feature extraction on an image collected by the device to obtain a feature to be matched; determining whether the to-be-matched feature matches the monitoring target feature; and if so, determining the location of the monitoring target based on the location of the collecting device.
  • the step of acquiring a monitoring target feature includes: receiving a monitoring target feature sent by the server; or acquiring an image containing the monitoring target and performing feature extraction on the image to obtain the monitoring target feature.
  • the step of determining the location of the monitoring target based on the location of the collection device includes: determining the location of the collection device as the location of the monitoring target; or determining the location of the monitoring target according to the location of the collection device and the field of view of the collection device.
  • the processor 801 is further configured to: when it is determined that the feature to be matched matches the monitoring target feature, determine the position in the image to which the feature to be matched corresponds, as the position of the monitoring target in the image; the step of determining the location of the monitoring target based on the location of the collection device then includes: determining the location of the monitoring target according to the location of the collection device and the position of the monitoring target in the image.
  • the processor 801 is further configured to, after the step of determining the location of the monitoring target based on the location of the collection device: output the location of the monitoring target; or output the location of the monitoring target and the image collected by the device; or send prompt information to the server, where the prompt information is used to indicate the location of the monitoring target.
  • the embodiment of the present application further provides an electronic device, as shown in FIG. 9, including a processor 901 and a memory 902, wherein the memory 902 is configured to store a computer program, and the processor 901 is configured to implement the following steps when executing the program stored on the memory 902: acquiring a monitoring target feature; sending the monitoring target feature to the collection device, so that the collection device sends prompt information to the server when determining that a feature in an image it collects matches the monitoring target feature; receiving the prompt information, and determining the location of the monitoring target according to the prompt information.
  • the step of determining the location of the monitoring target according to the prompt information includes: reading the location of the monitoring target carried in the prompt information; or determining the location of the collection device that sends the prompt information as the location of the monitoring target; or determining the location of the monitoring target according to the location and field of view of the collection device that sends the prompt information; or reading the position of the monitoring target in the image carried in the prompt information, and determining the location of the monitoring target according to the position of the monitoring target in the image and the location of the collection device that sends the prompt information.
  • the processor 901 is further configured to: when the number of pieces of prompt information is greater than 1, after the step of determining the location of the monitoring target according to the prompt information, determine the position and time corresponding to each piece of prompt information, where the position is the location of the monitoring target determined according to that piece of prompt information and the time is the time at which that piece of prompt information is received; and generate a trajectory as the trajectory of the monitoring target according to the position and time corresponding to each piece of prompt information.
  • the processor 901 is further configured to: after the step of generating a trajectory as the trajectory of the monitoring target according to the position and time corresponding to each piece of prompt information, perform trajectory prediction on the monitoring target according to the trajectory of the monitoring target.
  • the number of monitoring targets is greater than 1;
  • the step of acquiring the monitoring target feature includes: acquiring each monitoring target feature and a corresponding monitoring target identifier; the step of sending the monitoring target feature to the collection device includes: sending each monitoring target feature and the corresponding monitoring target identifier to the collection device; and after the step of receiving the prompt information, the method further includes: determining the monitoring target identifier contained in each piece of prompt information;
  • the step of generating a trajectory as the trajectory of the monitoring target according to the position and time corresponding to each piece of prompt information includes: for each monitoring target, generating a trajectory as the trajectory of that monitoring target according to the position and time corresponding to each piece of prompt information containing the monitoring target identifier.
  • the memory mentioned in the above electronic devices may include a random access memory (RAM), and may also include a non-volatile memory (NVM), for example at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
  • the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the embodiment of the present application further provides a computer readable storage medium storing a computer program which, when executed by a processor, implements any of the above target positioning methods applied to a collection device.
  • the embodiment of the present application further provides another computer readable storage medium storing a computer program which, when executed by a processor, implements any of the above target positioning methods applied to the server.
  • the embodiment of the present application further provides an executable program code, which is executed to perform any of the above target positioning methods applied to a collection device.
  • the embodiment of the present application further provides another executable program code, which is executed to perform any of the above target positioning methods applied to the server.
  • for the electronic device embodiment shown in FIG. 9, the other computer readable storage medium embodiment, and the above executable program code embodiment, since they are basically similar to the embodiment of the target positioning method applied to the server shown in FIG. 4-5, the description is relatively brief; for relevant parts, refer to the description of that method embodiment.


Abstract

本申请实施例提供了一种目标定位方法、装置及系统,系统包括采集设备及服务器;服务器,获取监控目标特征,并将监控目标特征发送给采集设备;采集设备,接收监控目标特征;并对自身采集的图像进行特征提取,得到待匹配特征;判断待匹配特征与监控目标特征是否匹配;如果是,基于该采集设备的位置确定监控目标的位置。可见,本方案中,采集设备对自身采集的图像进行分析处理,而不是将采集到的所有图像发送给服务器进行分析处理,这样,减少了网络带宽占用率。

Description

一种目标定位方法、装置及系统
本申请要求于2017年8月15日提交中国专利局、申请号为201710697867.0、发明名称为“一种目标定位方法、装置及系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,特别是涉及一种目标定位方法、装置及系统。
背景技术
在视频监控过程中,通常需要对监控目标进行定位。一般的定位方案通常包括:各台采集设备将采集到的视频发送至服务器,服务器接收并存储各路视频,服务器对各路视频进行分析,确定存在监控目标的视频、以及采集该视频的设备的位置信息,根据该位置信息,确定该监控目标的位置。
上述方案中,各台设备将视频发送至服务器,占用较多网络带宽。
发明内容
本申请实施例的目的在于提供一种目标定位方法、装置及系统,以减少网络带宽占用率。
为达到上述目的,本申请实施例提供了一种目标定位系统,包括采集设备及服务器;其中,所述服务器,用于获取监控目标特征,并将所述监控目标特征发送给所述采集设备;所述采集设备,用于接收所述监控目标特征;还用于采集图像,并对所述图像进行特征提取,得到待匹配特征;判断所述待匹配特征与所述监控目标特征是否匹配;如果是,基于所述采集设备的位置,确定所述监控目标的位置。
可选的,所述采集设备,还可以用于在所述基于所述采集设备的位置,确定所述监控目标的位置之后,向所述服务器发送提示信息;所述服务器,还可以用于接收所述提示信息,根据所述提示信息,确定所述监控目标的位置。
可选的,所述系统中包含多台采集设备;所述采集设备,还可以用于在所述基于所述采集设备的位置,确定所述监控目标的位置之后,向所述服务器发送提示信息;所述服务器,还可以用于接收各台采集设备发送的提示信息;确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
为达到上述目的,本申请实施例还提供了一种目标定位方法,应用于采集设备,包括:获取监控目标特征;对自身采集的图像进行特征提取,得到待匹配特征;判断所述待匹配特征与所述监控目标特征是否匹配;如果是,基于所述采集设备的位置,确定所述监控目标的位置。
可选的,所述获取监控目标特征的步骤,可以包括:接收服务器发送的监控目标特征;或者,获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。
可选的,所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,可以包括:将所述采集设备的位置确定为所述监控目标的位置;或者,根据所述采集设备的位置、以及所述采集设备的视场范围,确定所述监控目标的位置。
可选的,在判定所述待匹配特征与所述监控目标特征相匹配的情况下,所述方法还可以包括:确定所述待匹配特征对应到所述图像中的位置,作为所述监控目标在所述图像中的位置;所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,包括:根据所述采集设备的位置、以及所述监控目标在所述图像中的位置,确定所述监控目标的位置。
可选的,在所述基于所述采集设备的位置,确定所述监控目标的位置的步骤之后,还可以包括:输出所述监控目标的位置;或者,输出所述监控目标的位置及所述自身采集的图像;或者,向服务器发送提示信息,所述提示信息用于提示所述监控目标的位置。
为达到上述目的,本申请实施例还提供了一种目标定位方法,应用于服务器,包括:
获取监控目标特征;向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所 述服务器发送提示信息;接收所述提示信息,并根据所述提示信息,确定所述监控目标的位置。
可选的,所述根据所述提示信息,确定所述监控目标的位置的步骤,包括:读取所述提示信息中携带的所述监控目标的位置;或者,将发送所述提示信息的采集设备的位置确定为所述监控目标的位置;或者,根据发送所述提示信息的采集设备的位置以及视场范围,确定所述监控目标的位置;或者,读取所述提示信息中携带的所述监控目标在图像中的位置,根据所述监控目标在图像中的位置、以及发送所述提示信息的采集设备的位置,确定所述监控目标的位置。
可选的,在所述提示信息的数量大于1的情况下,在所述根据所述提示信息,确定所述监控目标的位置的步骤之后,还可以包括:确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
可选的,在所述根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹的步骤之后,还可以包括:根据所述监控目标的轨迹,对所述监控目标进行轨迹预测。
可选的,所述监控目标的数量大于1;所述获取监控目标特征的步骤,可以包括:获取每份监控目标特征及对应的监控目标标识;所述向采集设备发送所述监控目标特征的步骤,可以包括:向采集设备发送所述每份监控目标特征及对应的监控目标标识;在所述接收所述提示信息的步骤之后,还可以包括:确定每条提示信息中包含的监控目标标识;所述根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹的步骤,可以包括:针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
为达到上述目的,本申请实施例还提供了一种目标定位装置,应用于采集设备,包括:第一获取模块,用于获取监控目标特征;提取模块,用于对自身采集的图像进行特征提取,得到待匹配特征;判断模块,用于判断所述待匹配特征与所述监控目标特征是否匹配;如果是,触发第一确定模块;第 一确定模块,用于基于所述采集设备的位置,确定所述监控目标的位置。
可选的,所述第一获取模块,具体可以用于:接收服务器发送的监控目标特征;或者,获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。
可选的,所述第一确定模块,具体可以用于:将所述采集设备的位置确定为所述监控目标的位置;或者,根据所述采集设备的位置、以及所述采集设备的视场范围,确定所述监控目标的位置。
可选的,所述装置还可以包括:第二确定模块,用于在所述判断模块判断结果为是的情况下,确定所述待匹配特征对应到所述图像中的位置,作为所述监控目标在所述图像中的位置;所述第一确定模块,具体用于:根据所述采集设备的位置、以及所述监控目标在所述图像中的位置,确定所述监控目标的位置。
可选的,所述装置还可以包括:输出模块,用于输出所述监控目标的位置;或者,输出所述监控目标的位置及所述自身采集的图像;或者,向服务器发送提示信息,所述提示信息用于提示所述监控目标的位置。
为达到上述目的,本申请实施例还提供了一种目标定位装置,应用于服务器,包括:第二获取模块,用于获取监控目标特征;发送模块,用于向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所述服务器发送提示信息;接收模块,用于接收所述提示信息;第三确定模块,用于根据所述提示信息,确定所述监控目标的位置。
可选的,所述第三确定模块,具体可以用于:读取所述提示信息中携带的所述监控目标的位置;或者,将发送所述提示信息的采集设备的位置确定为所述监控目标的位置;或者,根据发送所述提示信息的采集设备的位置以及视场范围,确定所述监控目标的位置;或者,读取所述提示信息中携带的所述监控目标在图像中的位置,根据所述监控目标在图像中的位置、以及发送所述提示信息的采集设备的位置,确定所述监控目标的位置。
可选的,所述提示信息的数量大于1,所述装置还可以包括:第四确定模块,用于确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;生成模块,用于根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
可选的,所述装置还可以包括:预测模块,用于根据所述监控目标的轨迹,对所述监控目标进行轨迹预测。
可选的,所述监控目标的数量大于1;所述第二获取模块,具体可以用于:获取每份监控目标特征及对应的监控目标标识;所述发送模块,具体可以用于:向采集设备发送所述每份监控目标特征及对应的监控目标标识;所述第四确定模块,还可以用于确定每条提示信息中包含的监控目标标识;所述生成模块,具体可以用于:针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
为达到上述目的,本申请实施例还提供了一种电子设备,包括处理器和存储器,其中,存储器,用于存放计算机程序;处理器,用于执行存储器上所存放的程序时,实现上述任一种应用于采集设备的目标定位方法。
为达到上述目的,本申请实施例还提供了一种电子设备,包括处理器和存储器,其中,存储器,用于存放计算机程序;处理器,用于执行存储器上所存放的程序时,实现上述任一种应用于服务器的目标定位方法。
为达到上述目的,本申请实施例还公开了一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一种应用于采集设备的目标定位方法。
为达到上述目的,本申请实施例还公开了一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一种应用于服务器的目标定位方法。
为达到上述目的,本申请实施例还公开了一种可执行程序代码,所述可执行程序代码用于被运行以执行上述任一种应用于采集设备的目标定位方法。
为达到上述目的,本申请实施例还公开了一种可执行程序代码,所述可执行程序代码用于被运行以执行上述任一种应用于服务器的目标定位方法。
应用本申请所示实施例,采集设备对自身采集的图像进行特征提取,将提取的特征与监控目标特征进行匹配,当匹配成功时,基于自身位置确定监控目标的位置;可见,本方案中,采集设备对自身采集的图像进行分析处理,而不是将采集到的所有图像发送给服务器进行分析处理,这样,减少了网络带宽占用率。
当然,实施本申请的任一产品或方法并不一定需要同时达到以上所述的所有优点。
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的目标定位系统的第一种结构示意图;
图2为本申请实施例提供的目标定位系统的第二种结构示意图;
图3为本申请实施例提供的一种应用于采集设备的目标定位方法的流程示意图;
图4为本申请实施例提供的应用于服务器的目标定位方法的第一种流程示意图;
图5为本申请实施例提供的应用于服务器的目标定位方法的第二种流程示意图;
图6为本申请实施例提供的一种应用于采集设备的目标定位装置的结构示意图;
图7为本申请实施例提供的一种应用于服务器的目标定位装置的结构示意图;
图8为本申请实施例提供的一种电子设备的结构示意图;
图9为本申请实施例提供的另一种电子设备的结构示意图。
具体实施方式
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
为了解决上述技术问题,本申请实施例提供了一种目标定位方法、装置及系统。下面首先对本申请实施例提供的一种目标定位系统进行详细说明。该系统可以如图1所示,包括采集设备及服务器,其中,
该服务器,用于获取监控目标特征,并将该监控目标特征发送给该采集设备;
该采集设备,用于接收该监控目标特征;还用于采集图像,并对该图像进行特征提取,得到待匹配特征;判断该待匹配特征与该监控目标特征是否匹配;如果是,基于该采集设备的位置(自身位置),确定该监控目标的位置。
该采集设备为手机、PAD等智能终端,或者,也可以为相机、摄像机等等,具体不做限定。
该采集设备基于自身位置确定监控目标的位置,具体可以有多种方式,包括但不限于如下几种:
例如,采集设备可以直接将自身位置确定为所述监控目标的位置。
又例如,采集设备可以根据自身位置、以及自身视场范围,确定监控目标的位置。
举例来说,假设采集设备为球机,球机的摄像头可以转动,也就是说,球机可以对准不同方向进行图像采集,这种情况下,结合球机位置及球机视场范围(对准哪个方向进行图像采集),确定出的监控目标的位置更准确。
再例如,采集设备可以先确定该待匹配特征对应到该图像中的位置,作为监控目标在所述图像中的位置;再根据自身位置、以及监控目标在该图像中的位置,确定监控目标的位置。
举例来说,采集设备可以为广角相机,采集图像的视场范围很大,比如,广角相机采集到的图像中包含一个小区、一个公园和一个广场,这种情况下,可以进一步确定监控目标在图像中的位置(确定该待匹配特征对应到该图像中的位置)。
如果广角相机的视场范围是固定的,可以在广角相机采集到的图像中进行区域划分,上述小区、公园、广场对应的图像区域划分为不同的图像区域。确定监控目标在图像中的位置后,便可以根据该位置所在的图像区域,确定该监控目标是在小区中,还是公园中,还是广场中。可见,结合广角相机的位置、以及监控目标在图像中的位置,确定出的监控目标的位置更准确。
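对于视场固定的广角相机,上述按图像区域划分确定位置的思路可以用如下草图示意(仅为说明思路,并非本申请限定的实现;图像分辨率、区域坐标与区域名称均为假设的示例数据):

```python
from typing import Dict, Optional, Tuple

# 假设的图像区域划分:区域名 -> (x_min, y_min, x_max, y_max),对应 1920x720 的画面
REGIONS: Dict[str, Tuple[int, int, int, int]] = {
    "小区": (0, 0, 640, 720),
    "公园": (640, 0, 1280, 720),
    "广场": (1280, 0, 1920, 720),
}

def locate_by_region(x: int, y: int) -> Optional[str]:
    """根据监控目标在图像中的位置 (x, y),查找其所在的图像区域,从而确定目标所处地点。"""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None  # 不在任何已划分区域内
```

例如,目标出现在像素坐标 (700, 100) 时,即可判定其位于“公园”对应的区域内。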
应用本申请图1所示实施例,采集设备对自身采集的图像进行分析处理,而不是将采集到的所有图像发送给服务器进行分析处理,这样,减少了网络带宽占用率。
作为一种实施方式,该采集设备还可以用于在基于所述采集设备的位置,确定所述监控目标的位置之后,向该服务器发送提示信息;
该服务器,还用于接收该提示信息,根据所述提示信息,确定所述监控目标的位置。
本实施方式中,服务器也可以对监控目标进行定位。服务器根据提示信息确定监控目标的位置,具体可以有多种情况,包括但不限于如下几种,比如:
一、采集设备确定出监控目标的位置后,将“监控目标的位置”携带于提示信息中发送给服务器,服务器直接读取提示信息中携带的所述监控目标的位置。
二、采集设备将“自身位置、以及自身视场范围”携带于提示信息中发送给服务器,服务器根据提示信息中携带的采集设备的位置、及采集设备的视场范围,确定所述监控目标的位置。
举例来说,假设采集设备为球机,球机可以在水平方向、垂直方向转动,也就是说,球机可以对准不同方向进行图像采集,这种情况下,服务器结合球机位置及球机视场范围(对准哪个方向进行图像采集),确定出的监控目标的位置更准确。
三、采集设备将“自身视场范围”携带于提示信息中发送给服务器;服务器预先获取各台采集设备的位置信息,服务器接收到提示信息后,确定发送提示信息的采集设备的位置;服务器根据所确定的采集设备的位置、以及提示信息中携带的采集设备的视场范围,确定所述监控目标的位置。
第三种情况与第二种情况中,服务器都是根据采集设备的位置及视场范围,确定监控目标的位置;不同的是,第二种情况中,提示信息中包含采集设备的位置,第三种情况中,提示信息中不包含采集设备的位置,服务器预先获取采集设备的位置。
四、采集设备将“自身位置、以及监控目标在自身采集图像中的位置”携带于提示信息中发送给服务器;服务器接收到提示信息后,根据该采集设备的位置、以及监控目标在自身采集图像中的位置,确定监控目标的位置。
举例来说,采集设备可以为广角相机,采集图像的视场范围很大,比如,广角相机采集到的图像中包含一个小区、一个公园和一个广场,这种情况下,可以结合广角相机的位置、以及监控目标在图像中的位置,确定监控目标的位置。
如果广角相机的视场范围是固定的,可以在广角相机采集到的图像中进行区域划分,上述小区、公园、广场对应的图像区域划分为不同的图像区域。确定监控目标在图像中的位置后,便可以根据该位置所在的图像区域,确定该监控目标是在小区中,还是公园中,还是广场中。可见,结合广角相机的位置、以及监控目标在图像中的位置,确定出的监控目标的位置更准确。
五、采集设备将“监控目标在自身(采集设备)采集图像中的位置”携带于提示信息中发送给服务器;服务器预先获取各台采集设备的位置信息,服务器接收到提示信息后,确定发送提示信息的采集设备的位置;服务器根据所确定的采集设备的位置、以及提示信息中携带的“监控目标在自身(采集设备)采集图像中的位置”,确定所述监控目标的位置。
第五种情况与第四种情况中,服务器都是根据采集设备的位置、以及监控目标在采集设备所采集图像中的位置,确定监控目标的位置;不同的是,第五种情况中,提示信息中包含采集设备的位置,第四种情况中,提示信息中不包含采集设备的位置,服务器预先获取采集设备的位置。
六、采集设备将“自身位置”携带于提示信息中发送给服务器,服务器将提示信息中携带的采集设备的位置确定为所述监控目标的位置。
七、采集设备发送的提示信息中不包含上述任一种信息,服务器预先获取各台采集设备的位置;服务器接收到提示信息后,在预先获取的各台采集设备的位置中,查找发送提示信息的采集设备的位置;服务器将查找到的位置确定为所述监控目标的位置。
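上述几种情况的共同点是:服务器按提示信息中携带字段的不同,选择相应的定位方式。下面的分支处理草图仅覆盖其中的情况一、情况二/六与情况七(字段名均为假设,非本申请限定的格式):

```python
from typing import Dict, Optional

def resolve_target_location(prompt: Dict, device_locations: Dict[str, str]) -> Optional[str]:
    """根据提示信息携带的内容确定监控目标的位置(简化示意)。

    prompt 可能携带:target_location(情况一,设备已确定的目标位置)、
    device_location(情况二/六,以设备位置为基础);均不携带时,
    退回到服务器预先获取的采集设备位置(情况七)。"""
    if "target_location" in prompt:      # 情况一:直接读取提示信息中的目标位置
        return prompt["target_location"]
    if "device_location" in prompt:      # 情况二/六:以提示信息中的设备位置为基础
        return prompt["device_location"]
    # 情况七:按发送设备的标识,查服务器预先获取的设备位置
    return device_locations.get(prompt.get("device_id"))
```

实际方案中还可结合视场范围、目标在图像中的位置等字段进一步细化,此处从略。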
作为一种实施方式,采集设备在判定待匹配特征与监控目标特征匹配的情况下,可以对该图像进行存储。
应用这种实施方式,采集设备仅对包含监控目标的图像进行存储,相比于采集设备存储采集到的所有图像,或者采集设备将采集到的所有图像发送至服务器进行存储,节省了存储资源。
作为一种实施方式,采集设备在判定待匹配特征与监控目标特征匹配的情况下,可以将该图像输出至显示设备,这样可以更直观地向用户展示监控目标。
作为一种实施方式,采集设备在判定待匹配特征与监控目标特征匹配的情况下,可以将该图像发送给服务器;或者说,该提示信息中可以包含该图像,这样,服务器获取到的与监控目标相关的信息更丰富,而且采集设备仅将包含监控目标的图像发送给服务器,而不是将采集的所有图像都发送给服务器,节省了网络带宽。
作为一种实施方式,采集设备在判定待匹配特征与监控目标特征匹配的情况下,可以将该待匹配特征发送给服务器;或者说,该提示信息中可以包含该待匹配特征。该待匹配特征可以包括比监控目标特征更丰富的内容,比如说,服务器获取到的监控目标特征包括人脸特征及身高信息,而待匹配特征不仅包括人脸特征、身高信息,还包括目标的服饰穿戴等特征,这样,服务器获取到的与监控目标相关的信息更丰富;而且特征比图像占用更少网络带宽,相比于发送图像,仅发送待匹配特征进一步节省了网络带宽。
该系统也可以如图2所示,包括多台采集设备(采集设备1、采集设备2……采集设备N)及服务器,采集设备及服务器的具体数量不做限定。
每台采集设备都可以向服务器发送提示信息,服务器在接收到多条提示信息后,可以确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
举例来说,假设采集设备1对自身采集的图像进行特征值提取,得到待匹配特征,且判定该待匹配特征与监控目标特征匹配,采集设备1向服务器发送提示信息;服务器接收该条提示信息的时刻为7月20日上午9点,且服务器根据该提示信息确定的监控目标的位置为A;
类似的,服务器接收到采集设备2发送的提示信息,接收该提示信息的时刻为7月20日上午9点2分,且服务器根据该提示信息确定的监控目标的位置为B;服务器接收到采集设备3发送的提示信息,接收该提示信息的时刻为7月20日上午9点5分,且服务器根据该提示信息确定的监控目标的位置为C;服务器接收到采集设备4发送的提示信息,接收该提示信息的时刻为7月20日上午9点8分,且服务器根据该提示信息确定的监控目标的位置为D。
服务器根据上述各条提示信息对应的位置及时刻,可以生成监控目标的轨迹为:A→B→C→D。可见,利用图2所示实施例,可以利用服务器对监控目标进行轨迹追踪。
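按每条提示信息对应的时刻排序、依次连接对应位置,即可生成监控目标的轨迹。与上例对应的最小草图如下(时刻以字符串示意,仅为说明思路):

```python
from typing import List, Tuple

def build_trajectory(prompts: List[Tuple[str, str]]) -> List[str]:
    """prompts 为 (接收时刻, 位置) 列表;按时刻排序后输出位置序列,作为监控目标的轨迹。"""
    return [pos for _, pos in sorted(prompts, key=lambda p: p[0])]

# 对应上文示例:四台采集设备的提示信息乱序到达,排序后得到轨迹 A→B→C→D
track = build_trajectory([
    ("09:05", "C"), ("09:00", "A"), ("09:08", "D"), ("09:02", "B"),
])
```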
作为一种实施方式,服务器还可以根据生成的监控目标的轨迹,对该监控目标进行轨迹预测。具体的,可以根据生成的监控目标的轨迹,确定该监控目标的移动方向及移动速度,根据该移动方向及移动速度,对该监控目标进行轨迹预测。
举例来说,假设上述生成的监控目标的轨迹一直在向东移动,则可以预测该监控目标下一时刻的位置仍向东移动(也就是预测该监控目标的移动方向);另外,根据上述生成的监控目标的轨迹,可以计算该监控目标的移动速度;根据监控目标的移动方向及移动速度,便可以预测监控目标的后续轨迹。
再举一例,假设采集设备发送给服务器的提示信息中携带有该监控目标的图像;服务器接收到多台采集设备发送的提示信息后,对这些提示信息中携带的多张图像进行分析,分析结果表明该监控目标一直沿着一条路移动,而且该条路未出现岔道口,则可以预测该监控目标下一时刻的位置仍沿着该条路移动(也就是预测该监控目标的移动方向);另外,根据上述生成的监控目标的轨迹,可以计算该监控目标的移动速度;根据监控目标的移动方向及移动速度,便可以预测监控目标的后续轨迹。
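由轨迹估计移动方向与速度后做线性外推,是上述轨迹预测思路的一种最简实现(仅为示意草图,位置以平面坐标表示,实际方案不限于此):

```python
from typing import List, Tuple

def predict_next(track: List[Tuple[float, float]]) -> Tuple[float, float]:
    """用轨迹最后两个位置点估计单位时间位移(即方向与速度),线性外推下一时刻位置。"""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))

# 目标一直向东(x 增大方向)匀速移动,预测其下一时刻位置
nxt = predict_next([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
```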
作为一种实施方式,可以利用图2所示实施例对多个监控目标进行轨迹追踪:
服务器获取多份监控目标特征及对应的监控目标的标识,每份监控目标特征为一个监控目标的特征;服务器将这多份监控目标特征及对应的监控目标的标识发送给各台采集设备;
每台采集设备对自身采集的图像进行特征提取,得到待匹配特征,将该待匹配特征与每份监控目标特征进行匹配,如果匹配成功,则确定匹配成功的监控目标标识,并基于自身位置,确定该标识对应的监控目标的位置;向服务器发送提示信息,该提示信息中包含该标识;
服务器接收各台采集设备发送的提示信息,确定每条提示信息对应的位置、时刻以及其中包含的监控目标标识;针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
这样,便实现了对多个监控目标的轨迹追踪。还可以利用上述一种实施方式,对这多个监控目标进行轨迹预测。
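多目标场景只需先按提示信息中的监控目标标识分组,再对每组按时刻排序生成轨迹。草图如下(标识、时刻与位置均为假设的示例数据):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def build_trajectories(prompts: List[Tuple[str, str, str]]) -> Dict[str, List[str]]:
    """prompts 为 (监控目标标识, 接收时刻, 位置) 列表;
    按标识分组、组内按时刻排序,生成每个监控目标各自的轨迹。"""
    grouped: Dict[str, List[Tuple[str, str]]] = defaultdict(list)
    for target_id, ts, pos in prompts:
        grouped[target_id].append((ts, pos))
    return {tid: [p for _, p in sorted(items)] for tid, items in grouped.items()}

tracks = build_trajectories([
    ("t1", "09:02", "B"), ("t2", "09:00", "X"), ("t1", "09:00", "A"), ("t2", "09:05", "Y"),
])
```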
下面介绍应用于采集设备的目标定位方法,该采集设备可以包括:手机、PAD等智能终端,或者,具有图像处理功能的相机、摄像机等等,具体不做限定。
图3为本申请实施例提供的一种应用于采集设备的目标定位方法的流程示意图,包括:
S301:获取监控目标特征。
作为一种实施方式,服务器可以向采集设备发送监控目标特征。这种实施方式中,用户可以与服务器进行交互,服务器获取用户需要监控的目标特征,并将该特征下发至采集设备。
作为另一种实施方式,采集设备可以获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。这种实施方式中,用户可以直接与采集设备进行交互,用户将包含监控目标的图像发送至采集设备,采集设备对该图像进行特征提取,得到监控目标特征。
提取图像特征的方式有很多,比如,利用颜色直方图、颜色矩等方式提取图像的颜色特征,或者利用统计法、几何法、模型法等方式提取图像的纹理特征,或者利用边界特征法、几何参数法、目标检测算法等方式提取图像中人体目标的形状特征,或者利用预先训练得到的神经网络提取图像中人体目标的目标特征,等等,具体不做限定。
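作为示意,下面以颜色直方图为例给出一个最小的特征提取草图(仅为说明思路,并非本申请限定的实现;分桶数 bins 与像素列表的表示形式均为假设):

```python
from typing import List, Tuple

def color_histogram(pixels: List[Tuple[int, int, int]], bins: int = 4) -> List[float]:
    """对 RGB 像素列表统计归一化颜色直方图,作为图像的颜色特征(示意)。"""
    hist = [0] * (bins ** 3)
    step = 256 // bins  # 每个颜色通道的分桶宽度
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels) or 1
    return [count / total for count in hist]  # 归一化,使特征与图像尺寸无关

# 示例:两个纯红像素、两个纯蓝像素
feature = color_histogram([(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 255)])
```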
S302:对自身采集的图像进行特征提取,得到待匹配特征。
如上所述,提取图像特征的方式有很多,这里不再赘述。需要说明的是,提取得到监控目标特征的方式与提取得到待匹配特征的方式一致,比如,如果利用预先训练得到的神经网络提取得到监控目标特征,则S302中可以利用相同的神经网络对采集的图像进行特征提取,得到待匹配特征。
S303:判断所述待匹配特征与所述监控目标特征是否匹配,如果是,执行S304。
S304:基于所述采集设备的位置,确定所述监控目标的位置。
判断两种特征是否匹配的方式有很多,比如,可以计算二者的相似度,如果相似度大于相似度阈值,则表示二者匹配,或者,可以计算二者的差值,如果差值小于差值阈值,则表示二者匹配,等等,具体匹配方式、以及阈值的设定不做限定。
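上述“计算相似度并与阈值比较”的判定方式,可以用如下草图表示(以余弦相似度为例,阈值 0.9 仅为示意取值,实际阈值设定不做限定):

```python
import math
from typing import Sequence

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """计算两个特征向量的余弦相似度,取值范围 [-1, 1]。"""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_match(feature: Sequence[float], target: Sequence[float], threshold: float = 0.9) -> bool:
    """相似度大于阈值时,判定待匹配特征与监控目标特征匹配。"""
    return cosine_similarity(feature, target) > threshold

matched = is_match([1.0, 0.0, 1.0], [1.0, 0.1, 1.0])  # 两向量方向接近,判定为匹配
```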
如果二者匹配,则表示本采集设备采集到了包含监控目标的图像,也就是监控目标出现在了本采集设备的采集范围内,因此,基于本采集设备自身的位置,确定所述监控目标的位置。
本采集设备基于自身位置确定监控目标的位置,具体可以有多种方式,包括但不限于如下几种:
例如,采集设备可以直接将自身位置确定为所述监控目标的位置。
又例如,采集设备可以根据自身位置、以及自身视场范围,确定监控目标的位置。
举例来说,假设本采集设备为球机,球机的摄像头可以转动,也就是说,球机可以对准不同方向进行图像采集,这种情况下,结合球机位置及球机视场范围(对准哪个方向进行图像采集),确定出的监控目标的位置更准确。
再例如,采集设备可以先确定该待匹配特征对应到该图像中的位置,作为监控目标在所述图像中的位置;再根据自身位置、以及监控目标在该图像中的位置,确定监控目标的位置。
举例来说,采集设备可以为广角相机,采集图像的视场范围很大,比如,广角相机采集到的图像中包含一个小区、一个公园和一个广场,这种情况下,可以进一步确定监控目标在图像中的位置(确定该待匹配特征对应到该图像中的位置)。
如果广角相机的视场范围是固定的,可以在广角相机采集到的图像中进行区域划分,上述小区、公园、广场对应的图像区域划分为不同的图像区域。确定监控目标在图像中的位置后,便可以根据该位置所在的图像区域,确定该监控目标是在小区中,还是公园中,还是广场中。可见,结合广角相机的位置、以及监控目标在图像中的位置,确定出的监控目标的位置更准确。
作为一种实施方式,本采集设备在确定出监控目标的位置后(S304之后)可以输出监控目标的位置;这样,用户可以从采集设备侧直接获取到监控目标的位置,而且采集设备不需要将采集到的图像发送给服务器进行分析处理,减少了网络带宽占用率。
作为一种实施方式,本采集设备可以输出监控目标的位置及该包含监控目标的图像;这样,可以更直观地向用户展示监控目标。
另外,在本实施方式中,采集设备可以仅在S303判断结果为是的情况下存储自身采集的图像,这样相比于采集设备存储采集到的所有图像,或者采集设备将采集到的所有图像发送至服务器进行存储,节省了存储资源。
作为一种实施方式,本采集设备在S303判断结果为是的情况下,或者在S304之后,可以向服务器发送提示信息,该提示信息用于提示监控目标的位置。这样,服务器也可以对监控目标进行定位。
举例来说,该提示信息中可以包含S304中确定出的监控目标的位置,或者,该提示信息中可以包含本采集设备的位置信息,或者,该提示信息也可以不包含这些信息,仅起到提示的效果。
或者,采集设备也可以将包含监控目标的图像发送给服务器,或者说,该提示信息中可以包含该图像,该图像中包含监控目标;这样,服务器获取到的与监控目标相关的信息更丰富,而且采集设备仅将包含监控目标的图像发送给服务器,而不是将采集的所有图像都发送给服务器,节省了网络带宽。
或者,采集设备也可以在S303判断结果为是的情况下,将该待匹配特征发送给服务器;或者说,该提示信息中可以包含该待匹配特征;该待匹配特征中包含监控目标特征,这样,服务器获取到的与监控目标相关的信息更丰富,而且特征比图像占用更少网络带宽,相比于发送图像,仅发送待匹配特征进一步节省了网络带宽。
服务器可以将这些与监控目标相关的信息(比如,位置、图像、特征)展示给用户。在上述一种实施方式中,用户与服务器进行交互,服务器获取用户需要监控的目标特征,并将该特征下发至采集设备;这种情况下,服务器将与监控目标相关的信息展示给用户的方式更合理。
采集设备向服务器发送提示信息、或者图像或者其他信息的格式可以为结构化信息,也可以为非结构化信息,具体信息格式不做限定。另外,采集设备可以在执行S304后实时向服务器发送这些信息;或者,采集设备可以在S303判断结果为是的情况下,实时向服务器发送这些信息(可能不包含采集设备确定出的监控目标的位置);或者,也可以实时发送提示信息,延时发送图像或者特征等其他信息,具体发送方式不作限定。
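所谓结构化信息,可以理解为字段含义固定的消息格式。下面给出一个假设的提示信息结构并序列化为 JSON(字段名仅为示意,并非本申请限定的消息格式):

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class PromptInfo:
    """采集设备发往服务器的提示信息(示意结构)。"""
    device_id: str                         # 发送提示信息的采集设备标识
    target_id: Optional[str] = None        # 多目标场景下携带的监控目标标识
    target_location: Optional[str] = None  # 设备侧已确定的监控目标位置(可选)

msg = json.dumps(asdict(PromptInfo(device_id="cam1", target_id="t1", target_location="A")))
```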
在一些方案中,服务器接收多台采集设备发送的图像,服务器对接收到的各路图像进行存储,然后对存储的各路图像进行分析,根据分析结果对监控目标进行定位;而在本方案中,如果采集设备在执行S304之后,实时向服务器发送提示信息,服务器便可以实时地对监控目标进行定位;可见,本方案中服务器不需要对多路图像进行存储及分析,提高了定位效率,实时性更佳,而且,本方案中服务器不对多路图像进行存储,节省了存储空间。
应用本申请图3所示实施例,采集设备对自身采集的图像进行特征提取,将提取的特征与监控目标特征进行匹配,当匹配成功时,基于自身位置确定监控目标的位置;可见,本方案中,采集设备对自身采集的图像进行分析处理,而不是将采集到的所有图像发送给服务器进行分析处理,这样,减少了网络带宽占用率。
下面介绍应用于服务器的目标定位方法,如图4所示,包括:
S401:获取监控目标特征。
作为一种实施方式,用户可以将包含监控目标的图像发送至服务器,或者输入至服务器,服务器对该图像进行特征提取,得到监控目标特征。如上所述,提取图像特征的方式有很多,这里不再赘述。需要说明的是,服务器提取得到监控目标特征的方式与采集设备提取得到待匹配特征的方式一致。
作为另一种实施方式,用户也可以直接将监控目标特征发送至或者输入至服务器,或者,服务器也可以从其他设备获取该监控目标特征。
S402:向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所述服务器发送提示信息。
采集设备可以有多台,如果有多台,则可以向这多台采集设备发送S401中获取到的监控目标特征。每台采集设备判断自身采集图像中的特征是否与所述监控目标特征匹配,如果是,则向服务器发送提示信息。
S403:接收采集设备发送的提示信息,并根据所述提示信息,确定所述监控目标的位置。
服务器根据提示信息确定监控目标的位置,具体可以有多种情况,包括但不限于如下几种,比如:
一、采集设备确定出监控目标的位置后,将“监控目标的位置”携带于提示信息中发送给服务器,服务器直接读取提示信息中携带的所述监控目标的位置。
二、采集设备将“自身位置、以及自身视场范围”携带于提示信息中发送给服务器,服务器根据提示信息中携带的采集设备的位置、及采集设备的视场范围,确定所述监控目标的位置。
举例来说,假设采集设备为球机,球机可以在水平方向、垂直方向转动,也就是说,球机可以对准不同方向进行图像采集,这种情况下,服务器结合球机位置及球机视场范围(对准哪个方向进行图像采集),确定出的监控目标的位置更准确。
三、采集设备将“自身视场范围”携带于提示信息中发送给服务器;服务器预先获取各台采集设备的位置信息,服务器接收到提示信息后,确定发送提示信息的采集设备的位置;服务器根据所确定的采集设备的位置、以及提示信息中携带的采集设备的视场范围,确定所述监控目标的位置。
第三种情况与第二种情况中,服务器都是根据采集设备的位置及视场范围,确定监控目标的位置;不同的是,第二种情况中,提示信息中包含采集设备的位置,第三种情况中,提示信息中不包含采集设备的位置,服务器预先获取采集设备的位置。
四、采集设备将“自身位置、以及监控目标在自身采集图像中的位置”携带于提示信息中发送给服务器;服务器接收到提示信息后,根据该采集设备的位置、以及监控目标在自身采集图像中的位置,确定监控目标的位置。
举例来说,采集设备可以为广角相机,采集图像的视场范围很大,比如,广角相机采集到的图像中包含一个小区、一个公园和一个广场,这种情况下,可以结合广角相机的位置、以及监控目标在图像中的位置,确定监控目标的位置。
如果广角相机的视场范围是固定的,可以在广角相机采集到的图像中进行区域划分,上述小区、公园、广场对应的图像区域划分为不同的图像区域。确定监控目标在图像中的位置后,便可以根据该位置所在的图像区域,确定该监控目标是在小区中,还是公园中,还是广场中。可见,结合广角相机的位置、以及监控目标在图像中的位置,确定出的监控目标的位置更准确。
五、采集设备将“监控目标在自身(采集设备)采集图像中的位置”携带于提示信息中发送给服务器;服务器预先获取各台采集设备的位置信息,服务器接收到提示信息后,确定发送提示信息的采集设备的位置;服务器根据所确定的采集设备的位置、以及提示信息中携带的“监控目标在自身(采集设备)采集图像中的位置”,确定所述监控目标的位置。
第五种情况与第四种情况中,服务器都是根据采集设备的位置、以及监控目标在采集设备所采集图像中的位置,确定监控目标的位置;不同的是,第五种情况中,提示信息中包含采集设备的位置,第四种情况中,提示信息中不包含采集设备的位置,服务器预先获取采集设备的位置。
六、采集设备将“自身位置”携带于提示信息中发送给服务器,服务器将提示信息中携带的采集设备的位置确定为所述监控目标的位置。
七、采集设备发送的提示信息中不包含上述任一种信息,服务器预先获取各台采集设备的位置;服务器接收到提示信息后,在预先获取的各台采集设备的位置中,查找发送提示信息的采集设备的位置;服务器将查找到的位置确定为所述监控目标的位置。
通过图4所示实施例,便实现了服务器对监控目标的定位。作为一种实施方式,服务器还可以对监控目标进行轨迹追踪:
比如图2所示的***中,每台采集设备都向服务器发送提示信息,服务器在接收到多条提示信息后,可以确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
举例来说,假设采集设备1对自身采集的图像进行特征值提取,得到待匹配特征,且判定该待匹配特征与监控目标特征匹配,采集设备1向服务器发送提示信息;服务器接收该条提示信息的时刻为7月20日上午9点,且服务器根据该提示信息确定的监控目标的位置为A;
类似的,服务器接收到采集设备2发送的提示信息,接收该提示信息的时刻为7月20日上午9点2分,且服务器根据该提示信息确定的监控目标的位置为B;服务器接收到采集设备3发送的提示信息,接收该提示信息的时刻为7月20日上午9点5分,且服务器根据该提示信息确定的监控目标的位置为C;服务器接收到采集设备4发送的提示信息,接收该提示信息的时刻为7月20日上午9点8分,且服务器根据该提示信息确定的监控目标的位置为D。
服务器根据上述各条提示信息对应的位置及时刻,可以生成监控目标的轨迹为:A→B→C→D。可见,利用图2所示实施例,可以利用服务器对监控目标进行轨迹追踪。
作为一种实施方式,服务器还可以根据生成的监控目标的轨迹,对该监控目标进行轨迹预测。具体的,可以根据生成的监控目标的轨迹,确定该监控目标的移动方向及移动速度,根据该移动方向及移动速度,对该监控目标进行轨迹预测。
举例来说,假设上述生成的监控目标的轨迹一直在向东移动,则可以预测该监控目标下一时刻的位置仍向东移动(也就是预测该监控目标的移动方向);另外,根据上述生成的监控目标的轨迹,可以计算该监控目标的移动速度;根据监控目标的移动方向及移动速度,便可以预测监控目标的后续轨迹。
再举一例,假设采集设备发送给服务器的提示信息中携带有该监控目标的图像;服务器接收到多台采集设备发送的提示信息后,对这些提示信息中携带的多张图像进行分析,分析结果表明该监控目标一直沿着一条路移动,而且该条路未出现岔道口,则可以预测该监控目标下一时刻的位置仍沿着该条路移动(也就是预测该监控目标的移动方向);另外,根据上述生成的监控目标的轨迹,可以计算该监控目标的移动速度;根据监控目标的移动方向及移动速度,便可以预测监控目标的后续轨迹。
作为一种实施方式,服务器还可以对多个监控目标进行轨迹追踪,如图5所示,方法包括:
S501:获取每份监控目标特征及对应的监控目标标识。
S502:向采集设备发送所述每份监控目标特征及对应的监控目标标识。
各台采集设备对自身采集的图像进行特征提取,得到待匹配特征,将该待匹配特征与每份监控目标特征进行匹配,如果匹配成功,则确定匹配成功的监控目标标识,并基于自身位置,确定该标识对应的监控目标的位置;向服务器发送提示信息,该提示信息中包含该标识。
S503:接收各台采集设备发送的提示信息,并根据接收到的每条提示信息,分别确定所述监控目标的位置。
S504:确定每条提示信息对应的位置及时刻、以及每条提示信息中包含的监控目标标识。其中,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻。
S505:针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
应用图5所示实施例,实现了服务器对多个监控目标的轨迹追踪。还可以利用上述一种实施方式,对这多个监控目标进行轨迹预测。
与上述方法实施例相对应,本申请实施例还提供了一种目标定位装置。
图6为本申请实施例提供的一种应用于采集设备的目标定位装置的结构示意图,包括:
第一获取模块601,用于获取监控目标特征;提取模块602,用于对自身采集的图像进行特征提取,得到待匹配特征;判断模块603,用于判断所述待匹配特征与所述监控目标特征是否匹配;如果是,触发第一确定模块;第一确定模块604,用于基于所述采集设备的位置,确定所述监控目标的位置。
作为一种实施方式,第一获取模块601,具体可以用于:接收服务器发送的监控目标特征;或者,获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。
作为一种实施方式,第一确定模块604,具体可以用于:将所述采集设备的位置确定为所述监控目标的位置;或者,根据所述采集设备的位置、以及所述采集设备的视场范围,确定所述监控目标的位置。
作为一种实施方式,所述装置还可以包括:第二确定模块(图中未示出),用于在判断模块603判断结果为是的情况下,确定所述待匹配特征对应到所述图像中的位置,作为所述监控目标在所述图像中的位置;第一确定模块604,具体可以用于:根据所述采集设备的位置、以及所述监控目标在所述图像中的位置,确定所述监控目标的位置。
作为一种实施方式,所述装置还可以包括:输出模块(图中未示出),用于输出所述监控目标的位置;或者,输出所述监控目标的位置及所述自身采集的图像;或者,向服务器发送提示信息,所述提示信息用于提示所述监控目标的位置。
图7为本申请实施例提供的一种应用于服务器的目标定位装置的结构示意图,包括:
第二获取模块701,用于获取监控目标特征;发送模块702,用于向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所述服务器发送提示信息;接收模块703,用于接收所述提示信息;第三确定模块704,用于根据所述提示信息,确定所述监控目标的位置。
作为一种实施方式,第三确定模块704,具体可以用于:读取所述提示信息中携带的所述监控目标的位置;或者,将发送所述提示信息的采集设备的位置确定为所述监控目标的位置;或者,根据发送所述提示信息的采集设备的位置以及视场范围,确定所述监控目标的位置;或者,读取所述提示信息中携带的所述监控目标在图像中的位置,根据所述监控目标在图像中的位置、以及发送所述提示信息的采集设备的位置,确定所述监控目标的位置。
作为一种实施方式,所述提示信息的数量大于1,所述装置还可以包括:第四确定模块和生成模块(图中未示出),其中,第四确定模块,用于确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;生成模块,用于根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
作为一种实施方式,所述装置还可以包括:预测模块(图中未示出),用于根据所述监控目标的轨迹,对所述监控目标进行轨迹预测。
作为一种实施方式,所述监控目标的数量大于1;第二获取模块701,具体可以用于:获取每份监控目标特征及对应的监控目标标识;发送模块702,具体可以用于:向采集设备发送所述每份监控目标特征及对应的监控目标标识;所述第四确定模块,还可以用于确定每条提示信息中包含的监控目标标识;所述生成模块,具体可以用于:针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
本申请实施例还提供一种电子设备,如图8所示,包括处理器801和存储器802,其中,存储器802,用于存放计算机程序;处理器801,用于执行存储器802上所存放的程序时,实现如下步骤:获取监控目标特征;对自身采集的图像进行特征提取,得到待匹配特征;判断所述待匹配特征与所述监控目标特征是否匹配;如果是,基于所述采集设备的位置,确定所述监控目标的位置。
作为一种实施方式,所述获取监控目标特征的步骤,包括:接收服务器发送的监控目标特征;或者,获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。
作为一种实施方式,所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,包括:将所述采集设备的位置确定为所述监控目标的位置;或者,根据所述采集设备的位置、以及所述采集设备的视场范围,确定所述监控目标的位置。
作为一种实施方式,处理器801还用于实现如下步骤:在判定所述待匹配特征与所述监控目标特征相匹配的情况下,确定所述待匹配特征对应到所述图像中的位置,作为所述监控目标在所述图像中的位置;所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,包括:根据所述采集设备的位置、以及所述监控目标在所述图像中的位置,确定所述监控目标的位置。
作为一种实施方式,处理器801还用于实现如下步骤:在所述基于所述采集设备的位置,确定所述监控目标的位置的步骤之后,输出所述监控目标的位置;或者,输出所述监控目标的位置及所述自身采集的图像;或者,向服务器发送提示信息,所述提示信息用于提示所述监控目标的位置。
本申请实施例还提供一种电子设备,如图9所示,包括处理器901和存储器902,其中,存储器902,用于存放计算机程序;处理器901,用于执行存储器902上所存放的程序时,实现如下步骤:获取监控目标特征;向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所述服务器发送提示信息;接收所述提示信息,并根据所述提示信息,确定所述监控目标的位置。
作为一种实施方式,所述根据所述提示信息,确定所述监控目标的位置的步骤,包括:读取所述提示信息中携带的所述监控目标的位置;或者,将发送所述提示信息的采集设备的位置确定为所述监控目标的位置;或者,根据发送所述提示信息的采集设备的位置以及视场范围,确定所述监控目标的位置;或者,读取所述提示信息中携带的所述监控目标在图像中的位置,根据所述监控目标在图像中的位置、以及发送所述提示信息的采集设备的位置,确定所述监控目标的位置。
作为一种实施方式,处理器901还用于实现如下步骤:在所述提示信息的数量大于1的情况下,在所述根据所述提示信息,确定所述监控目标的位置的步骤之后,确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
作为一种实施方式,处理器901还用于实现如下步骤:在所述根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹的步骤之后,根据所述监控目标的轨迹,对所述监控目标进行轨迹预测。
作为一种实施方式,所述监控目标的数量大于1;所述获取监控目标特征的步骤,包括:获取每份监控目标特征及对应的监控目标标识;所述向采集设备发送所述监控目标特征的步骤,包括:向采集设备发送所述每份监控目标特征及对应的监控目标标识;在所述接收所述提示信息的步骤之后,还包括:确定每条提示信息中包含的监控目标标识;所述根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹的步骤,包括:针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
上述电子设备提到的存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processing,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一种应用于采集设备的目标定位方法。
本申请实施例还提供另一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一种应用于服务器的目标定位方法。
本申请实施例还提供一种可执行程序代码,所述可执行程序代码用于被运行以执行上述任一种应用于采集设备的目标定位方法。
本申请实施例还提供一种可执行程序代码,所述可执行程序代码用于被运行以执行上述任一种应用于服务器的目标定位方法。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于图6所示的应用于采集设备的目标定位装置实施例、图8所示的电子设备实施例、上述一种计算机可读存储介质实施例、上述一种可执行程序代码实施例而言,由于其基本相似于图3所示的应用于采集设备的目标定位方法实施例,所以描述的比较简单,相关之处参见图3所示的应用于采集设备的目标定位方法实施例的部分说明即可。
对于图7所示的应用于服务器的目标定位装置实施例、图9所示的电子设备实施例、上述另一种计算机可读存储介质实施例、上述一种可执行程序代码实施例而言,由于其基本相似于图4-5所示的应用于服务器的目标定位方法实施例,所以描述的比较简单,相关之处参见图4-5所示的应用于服务器的目标定位方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (33)

  1. 一种目标定位系统,其特征在于,包括采集设备及服务器;其中,
    所述服务器,用于获取监控目标特征,并将所述监控目标特征发送给所述采集设备;
    所述采集设备,用于接收所述监控目标特征;还用于采集图像,并对所述图像进行特征提取,得到待匹配特征;判断所述待匹配特征与所述监控目标特征是否匹配;如果是,基于所述采集设备的位置,确定所述监控目标的位置。
  2. 根据权利要求1所述的系统,其特征在于,
    所述采集设备,还用于在所述基于所述采集设备的位置,确定所述监控目标的位置之后,向所述服务器发送提示信息;
    所述服务器,还用于接收所述提示信息,根据所述提示信息,确定所述监控目标的位置。
  3. 根据权利要求1所述的系统,其特征在于,所述系统中包含多台采集设备;
    所述采集设备,还用于在所述基于所述采集设备的位置,确定所述监控目标的位置之后,向所述服务器发送提示信息;
    所述服务器,还用于接收各台采集设备发送的提示信息;确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
  4. 一种目标定位方法,其特征在于,应用于采集设备,包括:
    获取监控目标特征;
    对自身采集的图像进行特征提取,得到待匹配特征;
    判断所述待匹配特征与所述监控目标特征是否匹配;
    如果是,基于所述采集设备的位置,确定所述监控目标的位置。
  5. 根据权利要求4所述的方法,其特征在于,所述获取监控目标特征的步骤,包括:
    接收服务器发送的监控目标特征;
    或者,获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。
  6. 根据权利要求4所述的方法,其特征在于,所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,包括:
    将所述采集设备的位置确定为所述监控目标的位置;
    或者,根据所述采集设备的位置、以及所述采集设备的视场范围,确定所述监控目标的位置。
  7. 根据权利要求4所述的方法,其特征在于,在判定所述待匹配特征与所述监控目标特征相匹配的情况下,所述方法还包括:
    确定所述待匹配特征对应到所述图像中的位置,作为所述监控目标在所述图像中的位置;
    所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,包括:
    根据所述采集设备的位置、以及所述监控目标在所述图像中的位置,确定所述监控目标的位置。
  8. 根据权利要求4所述的方法,其特征在于,在所述基于所述采集设备的位置,确定所述监控目标的位置的步骤之后,还包括:
    输出所述监控目标的位置;
    或者,输出所述监控目标的位置及所述自身采集的图像;
    或者,向服务器发送提示信息,所述提示信息用于提示所述监控目标的位置。
  9. 一种目标定位方法,其特征在于,应用于服务器,包括:
    获取监控目标特征;
    向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所述服务器发送提示信息;
    接收所述提示信息,并根据所述提示信息,确定所述监控目标的位置。
  10. 根据权利要求9所述的方法,其特征在于,所述根据所述提示信息,确定所述监控目标的位置的步骤,包括:
    读取所述提示信息中携带的所述监控目标的位置;
    或者,将发送所述提示信息的采集设备的位置确定为所述监控目标的位置;
    或者,根据发送所述提示信息的采集设备的位置以及视场范围,确定所述监控目标的位置;
    或者,读取所述提示信息中携带的所述监控目标在图像中的位置,根据所述监控目标在图像中的位置、以及发送所述提示信息的采集设备的位置,确定所述监控目标的位置。
  11. 根据权利要求9所述的方法,其特征在于,在所述提示信息的数量大于1的情况下,在所述根据所述提示信息,确定所述监控目标的位置的步骤之后,还包括:
    确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;
    根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
  12. 根据权利要求11所述的方法,其特征在于,在所述根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹的步骤之后,还包括:
    根据所述监控目标的轨迹,对所述监控目标进行轨迹预测。
  13. 根据权利要求11所述的方法,其特征在于,所述监控目标的数量大于1;
    所述获取监控目标特征的步骤,包括:
    获取每份监控目标特征及对应的监控目标标识;
    所述向采集设备发送所述监控目标特征的步骤,包括:
    向采集设备发送所述每份监控目标特征及对应的监控目标标识;
    在所述接收所述提示信息的步骤之后,还包括:
    确定每条提示信息中包含的监控目标标识;
    所述根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹的步骤,包括:
    针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
  14. 一种目标定位装置,其特征在于,应用于采集设备,包括:
    第一获取模块,用于获取监控目标特征;
    提取模块,用于对自身采集的图像进行特征提取,得到待匹配特征;
    判断模块,用于判断所述待匹配特征与所述监控目标特征是否匹配;如果是,触发第一确定模块;
    第一确定模块,用于基于所述采集设备的位置,确定所述监控目标的位置。
  15. 根据权利要求14所述的装置,其特征在于,所述第一获取模块,具体用于:
    接收服务器发送的监控目标特征;
    或者,获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。
  16. 根据权利要求14所述的装置,其特征在于,所述第一确定模块,具体用于:
    将所述采集设备的位置确定为所述监控目标的位置;
    或者,根据所述采集设备的位置、以及所述采集设备的视场范围,确定所述监控目标的位置。
  17. 根据权利要求14所述的装置,其特征在于,所述装置还包括:
    第二确定模块,用于在所述判断模块判断结果为是的情况下,确定所述待匹配特征对应到所述图像中的位置,作为所述监控目标在所述图像中的位置;
    所述第一确定模块,具体用于:
    根据所述采集设备的位置、以及所述监控目标在所述图像中的位置,确定所述监控目标的位置。
  18. 根据权利要求14所述的装置,其特征在于,所述装置还包括:
    输出模块,用于输出所述监控目标的位置;
    或者,输出所述监控目标的位置及所述自身采集的图像;
    或者,向服务器发送提示信息,所述提示信息用于提示所述监控目标的位置。
  19. 一种目标定位装置,其特征在于,应用于服务器,包括:
    第二获取模块,用于获取监控目标特征;
    发送模块,用于向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所述服务器发送提示信息;
    接收模块,用于接收所述提示信息;
    第三确定模块,用于根据所述提示信息,确定所述监控目标的位置。
  20. 根据权利要求19所述的装置,其特征在于,所述第三确定模块,具体用于:
    读取所述提示信息中携带的所述监控目标的位置;
    或者,将发送所述提示信息的采集设备的位置确定为所述监控目标的位置;
    或者,根据发送所述提示信息的采集设备的位置以及视场范围,确定所述监控目标的位置;
    或者,读取所述提示信息中携带的所述监控目标在图像中的位置,根据所述监控目标在图像中的位置、以及发送所述提示信息的采集设备的位置,确定所述监控目标的位置。
  21. 根据权利要求19所述的装置,其特征在于,所述提示信息的数量大于1,所述装置还包括:
    第四确定模块,用于确定每条提示信息对应的位置及时刻,所述位置为:根据该提示信息确定的所述监控目标的位置,所述时刻为:接收该提示信息的时刻;
    生成模块,用于根据每条提示信息对应的位置及时刻,生成一条轨迹,作为所述监控目标的轨迹。
  22. 根据权利要求21所述的装置,其特征在于,所述装置还包括:
    预测模块,用于根据所述监控目标的轨迹,对所述监控目标进行轨迹预测。
  23. 根据权利要求21所述的装置,其特征在于,所述监控目标的数量大于1;
    所述第二获取模块,具体用于:
    获取每份监控目标特征及对应的监控目标标识;
    所述发送模块,具体用于:
    向采集设备发送所述每份监控目标特征及对应的监控目标标识;
    所述第四确定模块,还用于确定每条提示信息中包含的监控目标标识;
    所述生成模块,具体用于:
    针对每个监控目标,根据包含该监控目标标识的每条提示信息对应的位置及时刻,生成一条轨迹,作为该监控目标的轨迹。
  24. 一种电子设备,其特征在于,包括处理器和存储器,其中,存储器,用于存放计算机程序;处理器,用于执行存储器上所存放的程序时,实现如下步骤:
    获取监控目标特征;
    对自身采集的图像进行特征提取,得到待匹配特征;
    判断所述待匹配特征与所述监控目标特征是否匹配;
    如果是,基于所述采集设备的位置,确定所述监控目标的位置。
  25. 根据权利要求24所述的设备,其特征在于,所述获取监控目标特征的步骤,包括:
    接收服务器发送的监控目标特征;
    或者,获取包含监控目标的图像,对所述图像进行特征提取,得到监控目标特征。
  26. 根据权利要求24所述的设备,其特征在于,所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,包括:
    将所述采集设备的位置确定为所述监控目标的位置;
    或者,根据所述采集设备的位置、以及所述采集设备的视场范围,确定所述监控目标的位置。
  27. 根据权利要求24所述的设备,其特征在于,所述处理器还用于实现如下步骤:在判定所述待匹配特征与所述监控目标特征相匹配的情况下,确定所述待匹配特征对应到所述图像中的位置,作为所述监控目标在所述图像中的位置;
    所述基于所述采集设备的位置,确定所述监控目标的位置的步骤,包括:
    根据所述采集设备的位置、以及所述监控目标在所述图像中的位置,确定所述监控目标的位置。
  28. 根据权利要求24所述的设备,其特征在于,所述处理器还用于实现如下步骤:
    在所述基于所述采集设备的位置,确定所述监控目标的位置的步骤之后,输出所述监控目标的位置;或者,输出所述监控目标的位置及所述自身采集的图像;或者,向服务器发送提示信息,所述提示信息用于提示所述监控目标的位置。
  29. 一种电子设备,其特征在于,包括处理器和存储器,其中,存储器,用于存放计算机程序;处理器,用于执行存储器上所存放的程序时,实现如下步骤:
    获取监控目标特征;
    向采集设备发送所述监控目标特征,以使所述采集设备在判定自身采集图像中的特征与所述监控目标特征匹配的情况下,向所述服务器发送提示信息;
    接收所述提示信息,并根据所述提示信息,确定所述监控目标的位置。
  30. The device according to claim 29, wherein the step of determining the location of the monitoring target according to the prompt information comprises:
    reading the location of the monitoring target carried in the prompt information;
    or, determining the location of the capture device that sent the prompt information as the location of the monitoring target;
    or, determining the location of the monitoring target according to the location and field of view of the capture device that sent the prompt information;
    or, reading the position of the monitoring target in an image carried in the prompt information, and determining the location of the monitoring target according to the position of the monitoring target in the image and the location of the capture device that sent the prompt information.
  31. The device according to claim 29, wherein the processor is further configured to implement the following steps:
    when the number of prompt information messages is greater than one, after the step of determining the location of the monitoring target according to the prompt information, determining a location and a time corresponding to each prompt information message, the location being the location of the monitoring target determined according to that prompt information message, and the time being the time at which that prompt information message was received; and generating a trajectory, as the trajectory of the monitoring target, according to the location and time corresponding to each prompt information message.
  32. The device according to claim 31, wherein the processor is further configured to implement the following step:
    after the step of generating a trajectory, as the trajectory of the monitoring target, according to the location and time corresponding to each prompt information message, performing trajectory prediction for the monitoring target according to the trajectory of the monitoring target.
  33. The device according to claim 31, wherein the number of monitoring targets is greater than one; the step of obtaining a monitoring target feature comprises:
    obtaining each monitoring target feature and a corresponding monitoring target identifier;
    the step of sending the monitoring target feature to the capture device comprises:
    sending each monitoring target feature and the corresponding monitoring target identifier to the capture device;
    after the step of receiving the prompt information, the steps further comprise:
    determining the monitoring target identifier contained in each prompt information message;
    the step of generating a trajectory, as the trajectory of the monitoring target, according to the location and time corresponding to each prompt information message comprises:
    for each monitoring target, generating a trajectory, as the trajectory of that monitoring target, according to the location and time corresponding to each prompt information message containing that monitoring target's identifier.
PCT/CN2018/100459 2017-08-15 2018-08-14 Target positioning method, device and system WO2019034053A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710697867.0A CN109410278B (zh) 2017-08-15 2017-08-15 Target positioning method, device and system
CN201710697867.0 2017-08-15

Publications (1)

Publication Number Publication Date
WO2019034053A1 true WO2019034053A1 (zh) 2019-02-21

Family

ID=65361790

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100459 WO2019034053A1 (zh) 2017-08-15 2018-08-14 Target positioning method, device and system

Country Status (2)

Country Link
CN (1) CN109410278B (zh)
WO (1) WO2019034053A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275745A (zh) * 2020-03-23 2020-06-12 中国建设银行股份有限公司 Method and apparatus for generating trajectory images of customers within bank branches
CN111403021A (zh) * 2020-03-11 2020-07-10 中国电子工程设计院有限公司 Monitoring method and apparatus
CN112835947A (zh) * 2019-11-22 2021-05-25 杭州海康威视***技术有限公司 Target recognition method and apparatus, electronic device, and storage medium

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN110188691A (zh) * 2019-05-30 2019-08-30 银河水滴科技(北京)有限公司 Movement trajectory determination method and apparatus
CN110245268A (zh) * 2019-06-26 2019-09-17 银河水滴科技(北京)有限公司 Route determination and display method and apparatus
CN110717386A (zh) * 2019-08-30 2020-01-21 深圳壹账通智能科技有限公司 Method and apparatus for tracking involved objects, electronic device, and non-transitory storage medium
CN110935079A (zh) * 2019-11-27 2020-03-31 上海市普陀区长风街道长风社区卫生服务中心 Infusion monitoring method and system with scene recognition based on image recognition
CN112616023A (zh) * 2020-12-22 2021-04-06 荆门汇易佳信息科技有限公司 Multi-camera video target tracking method in complex environments

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104574415A (zh) * 2015-01-26 2015-04-29 南京邮电大学 Target spatial positioning method based on a single camera
US20160187477A1 (en) * 2014-12-29 2016-06-30 Sony Corporation Surveillance apparatus having a radar sensor
CN106529497A (zh) * 2016-11-25 2017-03-22 浙江大华技术股份有限公司 Positioning method and apparatus for an image capture device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP3586761B2 (ja) * 1998-01-12 2004-11-10 三菱電機株式会社 Image binarization processing method and image processing apparatus equipped with the method
CN103942811B (zh) * 2013-01-21 2017-08-15 中国电信股份有限公司 Method and system for distributed parallel determination of feature-target motion trajectories
CN103985230B (zh) * 2014-05-14 2016-06-01 深圳市大疆创新科技有限公司 Image-based notification method, apparatus and notification system
CN104023212B (zh) * 2014-06-23 2017-08-11 太原理工大学 Multi-terminal-based remote intelligent video surveillance system
CN104284150A (zh) * 2014-09-23 2015-01-14 同济大学 Autonomous cooperative tracking method for intelligent cameras based on road traffic surveillance, and surveillance system thereof
CN105741261B (zh) * 2014-12-11 2020-06-09 北京大唐高鸿数据网络技术有限公司 Planar multi-target positioning method based on four cameras
CN104776832B (zh) * 2015-04-16 2017-02-22 浪潮软件集团有限公司 Method for locating an object within a space, set-top box, and system

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20160187477A1 (en) * 2014-12-29 2016-06-30 Sony Corporation Surveillance apparatus having a radar sensor
CN104574415A (zh) * 2015-01-26 2015-04-29 南京邮电大学 Target spatial positioning method based on a single camera
CN106529497A (zh) * 2016-11-25 2017-03-22 浙江大华技术股份有限公司 Positioning method and apparatus for an image capture device

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN112835947A (zh) * 2019-11-22 2021-05-25 杭州海康威视***技术有限公司 Target recognition method and apparatus, electronic device, and storage medium
CN112835947B (zh) * 2019-11-22 2024-04-02 杭州海康威视***技术有限公司 Target recognition method and apparatus, electronic device, and storage medium
CN111403021A (zh) * 2020-03-11 2020-07-10 中国电子工程设计院有限公司 Monitoring method and apparatus
CN111403021B (zh) * 2020-03-11 2023-12-05 中国电子工程设计院有限公司 Monitoring method and apparatus
CN111275745A (zh) * 2020-03-23 2020-06-12 中国建设银行股份有限公司 Method and apparatus for generating trajectory images of customers within bank branches
CN111275745B (zh) * 2020-03-23 2023-07-11 中国建设银行股份有限公司 Method and apparatus for generating trajectory images of customers within bank branches

Also Published As

Publication number Publication date
CN109410278B (zh) 2021-12-10
CN109410278A (zh) 2019-03-01

Similar Documents

Publication Publication Date Title
WO2019034053A1 (zh) Target positioning method, device and system
US11354901B2 (en) Activity recognition method and system
TWI786313B (zh) Target tracking method, apparatus, medium and device
US9251588B2 (en) Methods, apparatuses and computer program products for performing accurate pose estimation of objects
US8938092B2 (en) Image processing system, image capture apparatus, image processing apparatus, control method therefor, and program
WO2020094091A1 (zh) Image capture method, surveillance camera and surveillance system
WO2022227490A1 (zh) Behavior recognition method, apparatus, device, storage medium, computer program and program product
US20160062456A1 (en) Method and apparatus for live user recognition
KR102296088B1 (ko) Pedestrian tracking method and electronic device
CN108073890A (zh) Action recognition in video sequences
WO2020094088A1 (zh) Image capture method, surveillance camera and surveillance system
US11748896B2 (en) Object tracking method and apparatus, storage medium, and electronic device
CN109426785B (zh) Human target identity recognition method and apparatus
WO2017071086A1 (zh) Method and apparatus for video playback
JP6588413B2 (ja) Monitoring apparatus and monitoring method
CN111091098A (zh) Detection model training method, detection method, and related apparatus
WO2021180004A1 (zh) Video analysis method, video analysis management method, and related device
US11170512B2 (en) Image processing apparatus and method, and image processing system
CN110930434A (zh) Target object tracking method and apparatus, storage medium, and computer device
US20210295550A1 (en) Information processing device, information processing method, and program
WO2020114102A1 (zh) Video tracking method and system, and storage medium
US20170060255A1 (en) Object detection apparatus and object detection method thereof
Li et al. Loitering detection based on trajectory analysis
JP7314959B2 (ja) Person authentication device, control method, and program
CN114067390A (zh) Fall detection method, system, device and medium for the elderly based on video images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18846890

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18846890

Country of ref document: EP

Kind code of ref document: A1