CN109284707B - Moving target detection method and device - Google Patents


Info

Publication number
CN109284707B
CN109284707B (granted); application CN201811062836.9A (CN109284707A)
Authority
CN
China
Prior art keywords
image
detected
image frame
registration
region
Prior art date
Legal status
Active
Application number
CN201811062836.9A
Other languages
Chinese (zh)
Other versions
CN109284707A (en)
Inventor
周春平
宫辉力
李小娟
李想
杨灿坤
孟冠嘉
钟若飞
张可
郭姣
Current Assignee
Chinamap Hi Tech Beijing Information Technology Co ltd
Capital Normal University
Original Assignee
Chinamap Hi Tech Beijing Information Technology Co ltd
Capital Normal University
Priority date
Filing date
Publication date
Application filed by Chinamap Hi Tech Beijing Information Technology Co ltd, Capital Normal University filed Critical Chinamap Hi Tech Beijing Information Technology Co ltd
Priority to CN201811062836.9A
Publication of CN109284707A
Application granted
Publication of CN109284707B

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06V — Image or Video Recognition or Understanding
    • G06V 20/00 — Scenes; scene-specific elements
    • G06V 20/40 — Scenes; scene-specific elements in video content
    • G06V 20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 — Higher-level, semantic clustering, classification or understanding of sport video content
    • G06V 20/48 — Matching video sequences
    • G06V 2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 — Target detection


Abstract

The invention relates to the technical field of moving target detection and provides a moving target detection method and device. The method comprises: acquiring a first image frame to be detected and a second image frame to be detected collected by an area array image sensor, the two frames being captured at adjacent moments; taking the first image frame to be detected as the reference image, performing image registration on the second image frame to be detected to obtain a registration image, and obtaining from the registration image a first region in the first image frame to be detected and a second region in the registration image that overlap each other; and performing a difference operation on the first region and the second region to determine the moving target to be detected. The invention acquires the images to be detected with frame-push imaging and detects the moving target by computing the frame difference over the overlapping region of images captured at adjacent moments, so the attitude parameters of the image acquisition device need not be adjusted and the processing procedure is simplified.

Description

Moving target detection method and device
Technical Field
The invention relates to the technical field of moving target detection, in particular to a moving target detection method and a moving target detection device.
Background
Moving target detection in images extracts, from a time series of images, moving targets whose form, position, size, and the like change over time. Moving target detection supports target classification, tracking, and behavior studies; it is an important means of dynamic information acquisition and is widely applied in the fields of public safety and traffic. Existing moving target detection methods are usually based on linear-array push-broom imagery. Such methods place high demands on push-broom precision: the attitude parameters of the image acquisition device must be accurately measured and controlled to serve later geometric correction, and a two-dimensional image can be obtained only by stitching, so the processing procedure is complex.
Disclosure of Invention
The invention aims to provide a moving target detection method, a moving target detection device, an image processing device and a storage medium.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a moving target detection method applied to an image processing device equipped with an area array image sensor, the area array image sensor being in communication connection with the image processing device. The method includes: acquiring a first image frame to be detected and a second image frame to be detected collected by the area array image sensor, the two frames being captured at adjacent moments; taking the first image frame to be detected as the reference image, performing image registration on the second image frame to be detected to obtain a registration image, and obtaining from the registration image a first region in the first image frame to be detected and a second region in the registration image that overlap each other; and performing a difference operation on the first region and the second region to determine the moving target to be detected.
In a second aspect, an embodiment of the present invention further provides a moving target detection apparatus comprising an obtaining module, a registration module, and a difference module. The obtaining module acquires a first image frame to be detected and a second image frame to be detected collected by the area array image sensor, the two frames being captured at adjacent moments. The registration module, taking the first image frame to be detected as the reference image, performs image registration on the second image frame to be detected to obtain a registration image, and obtains from the registration image a first region in the first image frame to be detected and a second region in the registration image that overlap each other. The difference module performs a difference operation on the first region and the second region to determine the moving target to be detected.
Compared with the prior art, in the moving target detection method and apparatus provided by the embodiments of the invention, the area array image sensor first collects the image frames to be detected. The image processing device then acquires from the sensor a first image frame to be detected and a second image frame to be detected captured at adjacent moments. Next, taking the first image frame to be detected as the reference image, image registration is performed on the second image frame to be detected to obtain a registration image, and a first region and a second region that overlap each other in the first image frame to be detected and the registration image are obtained from the registration image. Finally, a difference operation is performed on the first region and the second region to determine the moving target to be detected. Because the embodiments acquire the images to be detected with frame-push imaging and detect the moving target by computing the frame difference over the overlapping region of images captured at adjacent moments, the attitude parameters of the image acquisition device need not be adjusted and the processing procedure is simplified.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and therefore should not be considered limiting of its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 shows a block schematic diagram of an image processing apparatus provided by an embodiment of the present invention.
Fig. 2 shows a flow chart of a moving target detection method provided by the embodiment of the invention.
Fig. 3 is a flowchart illustrating the sub-steps of step S102 shown in Fig. 2.
Fig. 4 is a flowchart illustrating the sub-steps of step S103 shown in Fig. 2.
Fig. 5 is a block diagram illustrating a moving object detection apparatus according to an embodiment of the present invention.
Fig. 6 is a schematic block diagram of the units of the registration module shown in fig. 5.
Reference numerals: 100-image processing device; 101-memory; 102-communication interface; 103-processor; 104-bus; 200-moving object detection apparatus; 201-obtaining module; 202-registration module; 2021-feature extraction unit; 2022-feature matching unit; 2023-registration image generation unit; 2024-overlap region determination unit; 203-difference module; 204-calculation module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to Fig. 1, Fig. 1 is a block diagram illustrating an image processing apparatus 100 according to an embodiment of the present invention. The image processing apparatus 100 may be, but is not limited to, a host, a physical server, or a virtual machine on a physical server, i.e., anything capable of providing the same functions as a physical or virtual server. The operating system of the image processing apparatus 100 may be, but is not limited to, a Windows system, a Linux system, and the like. The image processing apparatus 100 comprises a memory 101, a communication interface 102, a processor 103, and a bus 104; the memory 101, the communication interface 102, and the processor 103 are connected via the bus 104, and the processor 103 is adapted to execute executable modules, such as computer programs, stored in the memory 101.
The memory 101 may comprise a high-speed Random Access Memory (RAM) and may further comprise non-volatile memory, such as at least one disk memory. Communication between the image processing apparatus 100, at least one other image processing apparatus 100, and an external storage device is realized through at least one communication interface 102, which may be wired or wireless.
The bus 104 may be an ISA bus, a PCI bus, an EISA bus, or the like. Only one bi-directional arrow is shown in Fig. 1, but this does not mean there is only one bus or one type of bus.
The memory 101 is used for storing a program, such as the moving object detection apparatus 200 shown in Fig. 5. The moving object detection apparatus 200 includes at least one software functional module, which may be stored in the memory 101 in the form of software or firmware, or solidified in the operating system (OS) of the image processing device 100. After receiving an execution instruction, the processor 103 executes the program to implement the moving object detection method disclosed in the embodiments of the present invention.
First embodiment
Referring to Fig. 2, Fig. 2 is a flowchart illustrating a moving object detection method according to an embodiment of the present invention. The method comprises the following steps:
step S101, a first image frame to be detected and a second image frame to be detected which are acquired by an area array image sensor are acquired, wherein the acquisition moments of the first image frame to be detected and the second image frame to be detected are adjacent.
In the embodiment of the invention, the area array image sensor acquires images in a frame-push imaging mode: it images the target area at successive moments, obtaining one image frame per moment. The size of the overlapping area of two adjacent image frames, and the displacement of the target to be detected between the two adjacent acquisition moments, are sufficient for moving target detection.
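As a rough numerical illustration (with hypothetical parameter names and values, not taken from the patent), the along-track overlap of two adjacent frame-push frames can be sketched as:

```python
def overlap_fraction(ground_speed, frame_interval, frame_extent):
    """Along-track overlap fraction between two adjacent frame-push frames.

    Hypothetical parameters: ground_speed is the platform speed over
    ground (m/s), frame_interval the time between frames (s), and
    frame_extent the along-track ground extent of one frame (m). The
    overlap must leave enough common area for the frame difference, while
    the interval must be long enough for a target's displacement to show.
    """
    return max(0.0, 1.0 - ground_speed * frame_interval / frame_extent)
```

For instance, a platform covering 7 km of ground per frame interval with a 28 km frame extent retains 75% overlap between adjacent frames.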
Step S102, taking the first image frame to be detected as a reference image, carrying out image registration on the second image frame to be detected to obtain a registration image, and obtaining a first area in the first image frame to be detected and a second area in the registration image which are overlapped with each other according to the registration image.
In the embodiment of the present invention, the acquisition moments of the first and second image frames to be detected are adjacent, and the first image frame to be detected may be captured either earlier or later than the second. The frame serving as the reference image is taken as the first image frame to be detected, and the other frame, which requires registration, as the second. With the first image frame to be detected as the reference image, registering the second image frame to be detected means finding the coordinate transformation parameters that maximize the degree of match between the two frames. The registration image is obtained by transforming the second image frame to be detected, according to these coordinate transformation parameters, into the coordinate system of the first image frame to be detected; that is, the registration image and the first image frame to be detected share the same coordinate system, and once the coordinate systems coincide, their mutually overlapping area can be determined. The specific process may be: first, extract feature points from the first and second image frames to be detected respectively; second, perform feature matching on these feature points to obtain the coordinate transformation parameters between the two frames; then, obtain the registration image of the second image frame to be detected in the coordinate system of the first image frame to be detected according to the coordinate transformation parameters; and finally, determine the mutually overlapping area of the first image frame to be detected and the registration image.
Referring to Fig. 3, step S102 may further include the following sub-steps:
and a substep S1021, extracting a first characteristic point of the first image frame to be detected and a second characteristic point of the second image frame to be detected.
In this embodiment of the present invention, the first feature points may be pixels representing salient characteristics of the first image frame to be detected, and the second feature points pixels representing salient characteristics of the second image frame to be detected. The feature points adopted for image registration may be, but are not limited to, Harris, SURF, SIFT, and other feature points.
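As a hedged illustration of feature-point extraction, the sketch below computes a minimal Harris corner response in pure NumPy; a practical system would use a library implementation of Harris, SURF, or SIFT rather than this simplified version (box smoothing instead of Gaussian, no non-maximum suppression).

```python
import numpy as np

def harris_response(img, k=0.04):
    """Minimal Harris corner response: gradients by central differences,
    structure tensor smoothed with a 3x3 box filter, R = det - k*trace^2.
    Corners give R > 0, edges R < 0, flat regions R ~ 0."""
    img = img.astype(np.float64)
    Ix = np.gradient(img, axis=1)      # horizontal gradient
    Iy = np.gradient(img, axis=0)      # vertical gradient

    def box(a):
        # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

Candidate feature points are then the local maxima of the response above some quality threshold.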
In the substep S1022, the first feature point and the second feature point are subjected to feature matching to obtain a coordinate transformation parameter between the first image frame to be detected and the second image frame to be detected.
In the embodiment of the invention, feature matching measures the similarity between the first feature points and the second feature points with a feature matching algorithm, eliminates erroneous matches, and then selects the coordinate transformation parameters that best fit the change between the first and second image frames to be detected. The coordinate transformation parameters represent the coordinate transformation relation between the two frames; that is, the first feature point corresponding to any second feature point can be obtained from them. For example, let image A be the first image frame to be detected, and let image B be obtained by moving image A 2 pixels to the right, then 3 pixels up, and rotating it 60 degrees clockwise. If image B is registered against image A as the second image frame to be detected, the coordinate transformation parameters are determined to be (2, 3, 60); from these three parameters, the pixel in image A corresponding to any pixel in image B can be obtained. In the embodiment of the present invention, the algorithm for feature matching may be, but is not limited to, the FLANN algorithm, the FREAK algorithm, and the like.
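The parameter-fitting step can be sketched as follows, assuming erroneous matches have already been eliminated. This least-squares (Kabsch) estimate of a rotation and translation from matched point pairs is one possible realization, not the patent's prescribed algorithm; robust outlier rejection such as RANSAC is omitted.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) such that
    dst_i ~ R @ src_i + t, for matched 2-D points src, dst (N x 2)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # SVD of the cross-covariance gives the optimal rotation (Kabsch)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

With noiseless matches related by the (2, 3, 60°) example above, the estimator recovers the rotation and translation exactly.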
And a substep S1023, taking the first image frame to be detected as a reference image, and generating a registration image of the second image frame to be detected in the coordinate system of the first image frame to be detected according to the coordinate transformation parameters.
In the embodiment of the present invention, the registration image is the image, obtained from the coordinate transformation parameters, that shares the coordinate system of the first image frame to be detected. The registration image has the same size as the second image frame to be detected but has undergone transformations such as translation and rotation. When the registration image is generated, the region for which no matching first feature points exist is filled with the value 0; this zero-filled region appears in the registration image as a black region, so the black region can be regarded as the region where the first image frame to be detected and the registration image do not overlap.
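A minimal sketch of generating the registration image for the translation-only case, with the uncovered area zero-filled as described above (rotation and sub-pixel interpolation omitted for brevity):

```python
import numpy as np

def warp_translation(img, dx, dy):
    """Resample img under an integer-pixel translation (dx right, dy down),
    filling pixels with no source data with 0 -- the black non-overlap
    region of the registration image."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y, src_x = ys - dy, xs - dx                      # inverse mapping
    valid = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out[ys[valid], xs[valid]] = img[src_y[valid], src_x[valid]]
    return out
```

Inverse mapping (looking up each output pixel's source) is used so every output pixel is defined exactly once.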
And a substep S1024 of taking a region corresponding to the feature of the registration image in the first image frame to be detected as a first region and taking a region corresponding to the feature of the first image frame to be detected in the registration image as a second region.
In the embodiment of the invention, the first region is the region of the first image frame to be detected onto which the features of the registration image map, i.e., the region of the first image frame to be detected that overlaps the registration image; the second region is the region of the registration image onto which the features of the first image frame to be detected map, i.e., the region of the registration image that overlaps the first image frame to be detected. The second region can be obtained by cutting off the black region of the registration image, and the part of the first image frame to be detected corresponding to the second region is then taken as the first region.
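The cropping described above can be sketched as follows, under the simplifying assumption that scene pixels in the registration image are nonzero, so the black border is exactly the zero region:

```python
import numpy as np

def overlap_regions(first_frame, registered):
    """Cut away the zero-filled (black) border of the registration image
    and take the corresponding crop of the first image frame; the two
    returned arrays are the mutually overlapping first and second regions."""
    ys, xs = np.nonzero(registered)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return first_frame[y0:y1, x0:x1], registered[y0:y1, x0:x1]
```

The two crops are the same shape, so their pixels correspond one-to-one as required by the difference operation in step S103.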
And step S103, carrying out differential operation on the first area and the second area to determine the moving target to be detected.
In the embodiment of the invention, the first region and the second region comprise pixels in one-to-one correspondence. The difference operation subtracts the pixels of the second region from the corresponding pixels of the first region to obtain a difference result, generates a difference image from the difference result and a preset threshold, and determines the moving target to be detected from the difference image.
Referring to Fig. 4, step S103 may further include the following sub-steps:
and a substep S1031, performing difference operation on the pixel in the first region and the corresponding pixel in the second region to obtain a difference result.
In the embodiment of the present invention, the difference operation subtracts each pixel of the second region from the corresponding pixel of the first region, and the difference result is the absolute value of that subtraction.
And a substep S1032 of marking corresponding pixels in the difference image when the difference result is greater than or equal to a preset threshold value, wherein the pixels in the difference image correspond to the pixels in the first region one by one.
In the embodiment of the present invention, the difference image is obtained according to the difference-operation formula. Pixels in the difference image correspond one-to-one with pixels in the first region; because pixels in the first region correspond one-to-one with pixels in the second region, pixels in the difference image also correspond one-to-one with pixels in the second region. The difference-operation formula may be:
$$D(x,y)=\begin{cases}1, & \left|f_{N}(x,y)-f_{N+1}(x,y)\right|\ge T_{a}\\0, & \left|f_{N}(x,y)-f_{N+1}(x,y)\right|< T_{a}\end{cases}$$
where $D(x,y)$ is the pixel value at coordinate $(x,y)$ in the difference image, $f_{N}(x,y)$ is the pixel value at $(x,y)$ in the first region, $f_{N+1}(x,y)$ is the pixel value at $(x,y)$ in the second region, and $T_{a}$ is the preset threshold. The preset threshold strongly influences moving target detection: too large a threshold loses part of the moving target and causes missed detections, while too small a threshold detects a large number of non-moving targets and reduces detection accuracy. The preset threshold can therefore be determined by an adaptive threshold-selection algorithm, which yields the threshold best suited to the image frames to be detected, reducing missed detections while achieving higher accuracy. When the difference result is greater than or equal to the preset threshold, marking the corresponding pixel in the difference image may consist of setting its value to 1; when the difference result is less than the preset threshold, the corresponding pixel is set to 0, so the difference image obtained from the formula is a binary image.
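A direct implementation sketch of the thresholded frame difference above (the threshold is taken as a caller-supplied value; the adaptive selection discussed above is omitted):

```python
import numpy as np

def difference_image(first_region, second_region, threshold):
    """Binary difference image D: a pixel is 1 where
    |f_N - f_{N+1}| >= T_a, and 0 otherwise."""
    # cast to a signed type so the subtraction cannot wrap around
    diff = np.abs(first_region.astype(np.int32)
                  - second_region.astype(np.int32))
    return (diff >= threshold).astype(np.uint8)
```

The result is the binary image from which the marked (value-1) pixels are read off as the moving target in sub-step S1033.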
And a substep S1033, using the area corresponding to the marked pixel in the differential image as the moving target to be detected.
In the embodiment of the present invention, the marked pixels in the difference image are those whose difference result is greater than or equal to the preset threshold, i.e., the pixels whose value is 1, and the moving target to be detected is the area formed by these value-1 pixels.
In the embodiment of the present invention, after the moving target to be detected is determined, the motion parameter of the moving target to be detected may be estimated according to the positions of the moving target in the first region and the second region and the corresponding acquisition time, so as to estimate the approximate motion trajectory of the moving target to be detected, and therefore, the embodiment of the present invention further includes step S104.
And step S104, calculating the motion parameters of the moving target to be detected according to the positions of the moving target to be detected in the first area and the second area.
In the embodiment of the present invention, the motion parameters may be motion information such as speed and direction. After the moving target to be detected is determined, its corresponding positions in the first region and the second region are obtained from its position in the difference image. The displacement between the target's position in the first region and its position in the second region is then calculated. Finally, the speed, direction, and other motion parameters of the moving target to be detected are calculated from this displacement and the acquisition times corresponding to the two regions, where the acquisition time of the first region is that of the first image frame to be detected and the acquisition time of the second region is that of the second image frame to be detected.
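A sketch of the motion-parameter calculation, assuming the target's positions (for example, region centroids) and the two acquisition times are known; the ground-sample-distance factor `gsd` is a hypothetical parameter converting pixel displacement to ground distance, not something the patent specifies:

```python
import math

def motion_parameters(pos_first, pos_second, t_first, t_second, gsd=1.0):
    """Speed and heading of a target from its (x, y) positions in the
    first and second regions and the two frame acquisition times."""
    dx = (pos_second[0] - pos_first[0]) * gsd
    dy = (pos_second[1] - pos_first[1]) * gsd
    dt = t_second - t_first
    speed = math.hypot(dx, dy) / dt                  # distance / time
    direction_deg = math.degrees(math.atan2(dy, dx)) # heading vs. +x axis
    return speed, direction_deg
```

For a target moving from (0, 0) to (3, 4) pixels in one second at unit GSD, this gives a speed of 5 units/s along the corresponding heading.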
It should be noted that, when the same difference image contains multiple moving targets to be detected, the corresponding positions in the first and second regions are obtained from each target's position in the difference image; the targets in the first region are matched with those in the second region using characteristic information such as area and shape, so as to determine the corresponding positions of the same target in both regions; the motion parameters of each moving target to be detected are then calculated by the method of step S104.
In the embodiment of the invention, the frame-pushing type imaging technology is adopted to obtain the image to be detected, and the moving target is detected by calculating the frame difference of the overlapping area of the images acquired at adjacent moments, so that compared with the prior art, the method has the following beneficial effects:
first, because the image to be detected is acquired based on the frame-push type imaging technology, the attitude parameters of the image acquisition equipment do not need to be changed, and the processing process is simplified.
Secondly, because only the overlapping area of the image frames to be detected needs to be processed by a frame difference method, the realization process is simple and the false alarm rate is low.
Thirdly, the moving target detection result can be presented in the form of a binary image, so that the transmission pressure caused by transmitting the detection result data is reduced.
Second embodiment
Referring to Fig. 5, Fig. 5 is a block diagram illustrating a moving object detecting device 200 according to an embodiment of the present invention. The moving object detecting device 200 is applied to the image processing apparatus 100 and includes an obtaining module 201, a registration module 202, a difference module 203, and a calculation module 204.
The acquiring module 201 is configured to acquire a first image frame to be detected and a second image frame to be detected, which are acquired by an area array image sensor, where acquisition moments of the first image frame to be detected and the second image frame to be detected are adjacent to each other.
In this embodiment of the present invention, the obtaining module 201 is configured to execute step S101.
The registration module 202 is configured to perform image registration on a second image frame to be detected to obtain a registration image by using the first image frame to be detected as a reference image, and obtain a first region in the first image frame to be detected and a second region in the registration image, which are overlapped with each other, according to the registration image.
In an embodiment of the present invention, the registration module 202 is configured to perform step S102 and its sub-steps S1021 to S1024.
Referring to Fig. 6, Fig. 6 is a block diagram illustrating the registration module 202 in the moving object detecting apparatus 200 shown in Fig. 5. The registration module 202 includes a feature extraction unit 2021, a feature matching unit 2022, a registration image generation unit 2023, and an overlap region determination unit 2024.
The feature extraction unit 2021 is configured to extract a first feature point of the first image frame to be detected and a second feature point of the second image frame to be detected.
In the embodiment of the present invention, the feature extraction unit 2021 is configured to perform the sub-step S1021.
The feature matching unit 2022 is configured to perform feature matching on the first feature points and the second feature points to obtain coordinate transformation parameters between the first image frame to be detected and the second image frame to be detected.
In the embodiment of the present invention, the feature matching unit 2022 is configured to perform the sub-step S1022.
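The coordinate transformation parameters produced by the feature matching unit 2022 can be estimated from the matched feature point pairs by linear least squares. The sketch below assumes a 2D affine model; the function name `estimate_affine` and the synthetic matched points are illustrative and not taken from the patent, which does not specify the transformation model.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Estimate a 2x3 affine transform mapping src_pts to dst_pts by
    linear least squares.  Inputs are (N, 2) arrays of matched feature
    coordinates from the two image frames."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = src.shape[0]
    # Design matrix rows [x, y, 1]; solve separately for x' and y'.
    A = np.hstack([src, np.ones((n, 1))])
    params_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    params_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return np.vstack([params_x, params_y])

# Synthetic matched points related by a pure translation of (+5, -3):
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])
dst = src + np.array([5, -3])
M = estimate_affine(src, dst)
```

With four or more well-distributed matches the least-squares solution is overdetermined and averages out small localization errors in the feature points.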
The registered image generating unit 2023 is configured to generate a registered image of the second image frame to be detected in the coordinate system of the first image frame to be detected according to the coordinate transformation parameter, with the first image frame to be detected as a reference image.
In an embodiment of the present invention, the registration image generation unit 2023 is configured to perform sub-step S1023.
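For the simplest case of a pure integer-pixel translation between the two frames, generating the registration image (sub-step S1023) amounts to resampling the second frame onto the first frame's pixel grid. The helper `warp_translation` below is an illustrative sketch under that assumption; the patent's coordinate transformation may be a full affine or projective warp requiring interpolation.

```python
import numpy as np

def warp_translation(image, dx, dy, fill=0):
    """Resample `image` into the reference frame's coordinate system
    for an integer translation (dx pixels right, dy pixels down);
    pixels with no source data are set to `fill`."""
    h, w = image.shape
    out = np.full((h, w), fill, dtype=image.dtype)
    ys_dst = slice(max(dy, 0), min(h, h + dy))
    xs_dst = slice(max(dx, 0), min(w, w + dx))
    ys_src = slice(max(-dy, 0), min(h, h - dy))
    xs_src = slice(max(-dx, 0), min(w, w - dx))
    out[ys_dst, xs_dst] = image[ys_src, xs_src]
    return out

img = np.arange(9).reshape(3, 3)
shifted = warp_translation(img, 1, 0)   # content moves one pixel right
```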
The overlap region determining unit 2024 is configured to take the region in the first image frame to be detected corresponding to the ground objects of the registration image as the first region, and the region in the registration image corresponding to the ground objects of the first image frame to be detected as the second region.
In this embodiment of the present invention, the overlap region determining unit 2024 is configured to perform the sub-step S1024.
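Under the same integer-translation assumption as above, determining the mutually overlapping regions (sub-step S1024) reduces to taking the pixel ranges where the registered frame carries valid data; the same index ranges then address one-to-one corresponding pixels in both images. All names in this sketch are illustrative.

```python
import numpy as np

def overlap_regions(frame1, registered, dx, dy):
    """Return the first region (cut from the reference frame) and the
    second region (cut from the registered frame): the rectangle where
    the registered frame has valid data for an integer shift (dx, dy)
    of frame 2 relative to frame 1."""
    h, w = frame1.shape
    ys = slice(max(dy, 0), min(h, h + dy))
    xs = slice(max(dx, 0), min(w, w + dx))
    return frame1[ys, xs], registered[ys, xs]

f1 = np.zeros((5, 5))
reg = np.zeros((5, 5))
r1, r2 = overlap_regions(f1, reg, 2, 1)   # overlap is 4 rows x 3 cols
```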
The difference module 203 is configured to perform a difference operation on the first region and the second region to determine the moving target to be detected.
In this embodiment of the present invention, the difference module 203 is configured to execute step S103.
In this embodiment of the present invention, the difference module 203 is specifically configured to:
perform a difference operation on the pixels in the first region and the corresponding pixels in the second region to obtain a difference result;
when the difference result is greater than or equal to a preset threshold, mark the corresponding pixel in the difference image;
and take the region corresponding to the marked pixels in the difference image as the moving target to be detected.
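The three steps of the difference module can be sketched in NumPy as follows. The fallback threshold (mean plus two standard deviations of the differences) is only a stand-in for the adaptive threshold selection algorithm, which the patent names but does not specify; the function and array names are illustrative.

```python
import numpy as np

def difference_detect(region1, region2, threshold=None):
    """Pixel-wise absolute difference of the two overlapping regions;
    pixels at or above the threshold are marked 1 in the returned
    difference image."""
    diff = np.abs(region1.astype(float) - region2.astype(float))
    if threshold is None:
        # Placeholder for an adaptive threshold selection algorithm.
        threshold = diff.mean() + 2.0 * diff.std()
    return (diff >= threshold).astype(np.uint8)

# Static background with one bright 2x2 target that moved two pixels
# to the right between the two acquisitions:
a = np.zeros((8, 8))
b = np.zeros((8, 8))
a[2:4, 2:4] = 200      # target position in the first region
b[2:4, 4:6] = 200      # target position in the second region
mask = difference_detect(a, b, threshold=50)
```

Note that a moving target marks pixels at both its old and its new position; connected-component grouping of the marked pixels then yields the target region.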
The calculating module 204 is configured to calculate a motion parameter of the moving target to be detected according to positions of the moving target to be detected in the first region and the second region.
In this embodiment of the present invention, the calculating module 204 is configured to execute step S104.
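The patent does not detail step S104, but given the target's centroid position in the first and second regions, the interval between the two acquisitions, and the sensor's ground sampling distance, displacement, speed, and heading follow directly. The sketch below is built on those assumed inputs; all names and the 0° = image-up heading convention are illustrative.

```python
import numpy as np

def motion_parameters(pos1, pos2, frame_interval, gsd=1.0):
    """Displacement, speed and heading of the target from its centroid
    positions (row, col) in the first and second regions.
    `frame_interval` is the time between acquisitions in seconds;
    `gsd` is the ground sampling distance in metres per pixel."""
    p1 = np.asarray(pos1, dtype=float)
    p2 = np.asarray(pos2, dtype=float)
    disp = (p2 - p1) * gsd                  # (down, right) in metres
    distance = float(np.hypot(*disp))
    speed = distance / frame_interval       # metres per second
    # Heading measured clockwise from image-up (0 degrees = up).
    heading = float(np.degrees(np.arctan2(disp[1], -disp[0])))
    return distance, speed, heading

# Target centroid moved 3 px right and 4 px down between frames
# 0.5 s apart, at 2 m per pixel ground resolution:
dist, speed, heading = motion_parameters((10, 10), (14, 13), 0.5, gsd=2.0)
```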
In summary, the present invention provides a moving target detection method and device. The method is applied to an image processing apparatus on which an area array image sensor is mounted, the area array image sensor being in communication connection with the image processing apparatus. The method includes: acquiring a first image frame to be detected and a second image frame to be detected captured by the area array image sensor, the acquisition moments of the two frames being adjacent; performing image registration on the second image frame to be detected, with the first image frame to be detected as a reference image, to obtain a registration image, and obtaining, from the registration image, a first region in the first image frame to be detected and a second region in the registration image that overlap each other; and performing a difference operation on the first region and the second region to determine the moving target to be detected. Compared with the prior art, the embodiment of the present invention acquires the images to be detected with a frame-push imaging technique and detects the moving target by computing the frame difference over the overlapping area of images acquired at adjacent moments, so the attitude parameters of the image acquisition device need not be changed and the processing is simplified.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention. It should be noted that like reference numbers and letters refer to like items in the figures; thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (6)

1. A moving object detection method is applied to an image processing device provided with an area array image sensor, wherein the area array image sensor is in communication connection with the image processing device, and the method comprises the following steps:
acquiring a first image frame to be detected and a second image frame to be detected, which are acquired by the area array image sensor, wherein the acquisition moments of the first image frame to be detected and the second image frame to be detected are adjacent;
taking the first image frame to be detected as a reference image, performing image registration on the second image frame to be detected to obtain a registration image, and obtaining a first region in the first image frame to be detected and a second region in the registration image which are overlapped with each other according to the registration image, wherein the first region is a region corresponding to a ground object of the registration image in the first image frame to be detected, the second region is a region corresponding to the ground object of the first image frame to be detected in the registration image, and the first region and the second region comprise pixels which correspond to each other one by one;
carrying out differential operation on the pixels in the first area and the corresponding pixels in the second area to obtain a differential result;
when the difference result is greater than or equal to a preset threshold value, marking corresponding pixels in the difference image, wherein the pixels in the difference image correspond to the pixels of the first area one by one, and the preset threshold value is determined through a self-adaptive threshold value selection algorithm;
and taking the area corresponding to the marked pixel in the differential image as a moving target to be detected.
2. The moving object detection method according to claim 1, wherein the step of performing image registration on the second image frame to be detected to obtain a registered image by using the first image frame to be detected as a reference image comprises:
extracting a first characteristic point of the first image frame to be detected and a second characteristic point of the second image frame to be detected;
performing feature matching on the first feature points and the second feature points to obtain coordinate transformation parameters between the first image frame to be detected and the second image frame to be detected;
and generating a registration image of the second image frame to be detected in the coordinate system of the first image frame to be detected according to the coordinate transformation parameter by taking the first image frame to be detected as a reference image.
3. The moving object detection method of claim 1, further comprising:
and calculating the motion parameters of the moving target to be detected according to the positions of the moving target to be detected in the first region and the second region.
4. A moving object detection device, applied to an image processing apparatus mounted with an area array image sensor, the area array image sensor being in communication connection with the image processing apparatus, the device comprising:
the acquisition module is used for acquiring a first image frame to be detected and a second image frame to be detected, which are acquired by the area array image sensor, wherein the acquisition moments of the first image frame to be detected and the second image frame to be detected are adjacent;
the registration module is configured to perform image registration on the second image frame to be detected to obtain a registration image by using the first image frame to be detected as a reference image, and obtain a first region in the first image frame to be detected and a second region in the registration image, which are overlapped with each other, according to the registration image, where the first region is a region in the first image frame to be detected corresponding to a ground object of the registration image, the second region is a region in the registration image corresponding to the ground object of the first image frame to be detected, and the first region and the second region include pixels in one-to-one correspondence;
a difference module to: carrying out differential operation on the pixels in the first area and the corresponding pixels in the second area to obtain a differential result; when the difference result is greater than or equal to a preset threshold value, marking corresponding pixels in the difference image, wherein the pixels in the difference image correspond to the pixels of the first area one by one, and the preset threshold value is determined through a self-adaptive threshold value selection algorithm; and taking the area corresponding to the marked pixel in the differential image as a moving target to be detected.
5. The moving object detecting device according to claim 4, wherein the registration module comprises:
the characteristic extraction unit is used for extracting a first characteristic point of the first image frame to be detected and a second characteristic point of the second image frame to be detected;
the characteristic matching unit is used for carrying out characteristic matching on the first characteristic points and the second characteristic points to obtain coordinate transformation parameters between the first image frame to be detected and the second image frame to be detected;
and the registration image generating unit is used for generating a registration image of the second image frame to be detected in the coordinate system of the first image frame to be detected according to the coordinate transformation parameter by taking the first image frame to be detected as a reference image.
6. The moving object detecting device according to claim 4, wherein said device further comprises:
and the calculation module is used for calculating the motion parameters of the moving target to be detected according to the positions of the moving target to be detected in the first area and the second area.
CN201811062836.9A 2018-09-12 2018-09-12 Moving target detection method and device Active CN109284707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811062836.9A CN109284707B (en) 2018-09-12 2018-09-12 Moving target detection method and device

Publications (2)

Publication Number Publication Date
CN109284707A CN109284707A (en) 2019-01-29
CN109284707B true CN109284707B (en) 2021-07-20

Family

ID=65178343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811062836.9A Active CN109284707B (en) 2018-09-12 2018-09-12 Moving target detection method and device

Country Status (1)

Country Link
CN (1) CN109284707B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183173B (en) * 2019-07-05 2024-04-09 北京字节跳动网络技术有限公司 Image processing method, device and storage medium
CN111951313B (en) * 2020-08-06 2024-04-26 北京灵汐科技有限公司 Image registration method, device, equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593353A (en) * 2008-05-28 2009-12-02 日电(中国)有限公司 Image processing method and equipment and video system
CN102254147A (en) * 2011-04-18 2011-11-23 哈尔滨工业大学 Method for identifying long-distance space motion target based on stellar map matching
WO2016009811A1 (en) * 2014-07-14 2016-01-21 Mitsubishi Electric Corporation Method for calibrating one or more cameras
CN106056625A (en) * 2016-05-25 2016-10-26 中国民航大学 Airborne infrared moving target detection method based on geographical homologous point registration
JP2017016333A (en) * 2015-06-30 2017-01-19 Kddi株式会社 Mobile object extraction device, method, and program
CN107292910A (en) * 2016-04-12 2017-10-24 南京理工大学 Moving target detecting method under a kind of mobile camera based on pixel modeling
CN107563961A (en) * 2017-09-01 2018-01-09 首都师范大学 A kind of system and method for the moving-target detection based on camera sensor
CN108399627A (en) * 2018-03-23 2018-08-14 云南大学 Video interframe target method for estimating, device and realization device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077520B (en) * 2013-01-04 2016-02-24 浙江大学 A kind of background subtraction method for mobile camera
US9911197B1 (en) * 2013-03-14 2018-03-06 Hrl Laboratories, Llc Moving object spotting by forward-backward motion history accumulation
CN103456026B (en) * 2013-07-29 2016-11-16 华中科技大学 A kind of Ground moving target detection method under highway terrestrial reference constraint
CN104021575A (en) * 2014-06-18 2014-09-03 国家电网公司 Moving object detection method, device and system
CN105447888B (en) * 2015-11-16 2018-06-29 中国航天时代电子公司 A kind of UAV Maneuver object detection method judged based on effective target
CN106127801A (en) * 2016-06-16 2016-11-16 乐视控股(北京)有限公司 A kind of method and apparatus of moving region detection
CN108154520B (en) * 2017-12-25 2019-01-08 北京航空航天大学 A kind of moving target detecting method based on light stream and frame matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Detection and Tracking of Large Number of Targets in Wide Area Surveillance; Vladimir Reilly et al.; ECCV 2010; 2010; pp. 186-199 *
A Shadow Detection Method for Moving Targets in VideoSAR; Zhang Ying; Journal of Electronics & Information Technology; Sept. 2017; Vol. 39, No. 9; pp. 2197-2202 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant