CN113011272B - Track image generation method, device, equipment and storage medium


Info

Publication number
CN113011272B
Authority
CN
China
Prior art keywords
image
motion
target
moving
track
Prior art date
Legal status
Active
Application number
CN202110206056.2A
Other languages
Chinese (zh)
Other versions
CN113011272A (en)
Inventor
钱扬
Current Assignee
Beijing Aibee Technology Co Ltd
Original Assignee
Beijing Aibee Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Aibee Technology Co Ltd
Priority to CN202110206056.2A
Publication of CN113011272A
Application granted
Publication of CN113011272B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a track image generation method, device, equipment and storage medium, wherein the method comprises the following steps: acquiring a target image in a target video; converting the target image into a corresponding moving image, the moving image comprising a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from that of the scene area; and determining a target track image according to the time sequence of the multiple frames of target images in the target video and the motion areas in the motion images corresponding to those target images. Each motion region in the target track image, together with the time sequence in the target video of the target image corresponding to the motion image to which that region belongs, represents the motion track of the moving target corresponding to that region. The method can accurately generate the track image corresponding to a moving object.

Description

Track image generation method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a track image generating method, apparatus, device, and storage medium.
Background
In recent years, how to determine a motion trajectory for a moving object and generate a corresponding trajectory image has been one of the research hotspots in the field of computer vision.
Generating the track image corresponding to a moving target has high application value in security monitoring and similar fields. For example, abnormal throwing behavior in public places is extremely dangerous: throwing contraband into the isolation area of an airport terminal may, in minor cases, disrupt the operational order of the airport and, in serious cases, cause major security incidents. Monitoring and managing such throwing behavior manually rarely achieves full coverage, so whether throwing behavior exists in a public place generally needs to be monitored based on images captured by the surveillance cameras deployed there; the basis of such monitoring is to determine the moving target in the images and generate the track image corresponding to that moving target.
Therefore, accurately generating the track image corresponding to a moving object is very important for detecting certain movement behaviors in a specific place.
Disclosure of Invention
The embodiment of the application provides a track image generation method, a track image generation device, track image generation equipment and a storage medium, which can accurately generate a track image corresponding to a moving object.
In view of this, a first aspect of the present application provides a trajectory image generation method, the method comprising:
acquiring a target image in a target video;
Converting the target image into a corresponding moving image; the moving image comprises a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from the pixel value of the scene area;
determining a target track image according to the time sequence of multiple frames of target images in the target video and the motion areas in the motion images corresponding to the multiple frames of target images; and the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region.
A second aspect of the present application provides a trajectory image generation device, the device comprising:
The image acquisition module is used for acquiring a target image in the target video;
A moving image conversion module for converting the target image into a corresponding moving image; the moving image comprises a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from the pixel value of the scene area;
the track image generation module is used for determining a target track image according to the time sequence of a plurality of frames of target images in the target video and the motion areas in the motion images corresponding to the frames of target images; and the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region.
A third aspect of the application provides an apparatus comprising: a processor and a memory;
the memory is used for storing a computer program;
The processor is configured to invoke the computer program to execute the track image generating method described in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program for executing the trajectory image generation method of the first aspect.
From the above technical solutions, the embodiment of the present application has the following advantages:
the embodiment of the application provides a track image generation method, which comprises the following steps: acquiring a target image in a target video; then, converting the target image into a corresponding moving image including a moving region corresponding to the moving target and a scene region corresponding to the scene, the pixel value of the moving region being different from the pixel value of the scene region; furthermore, determining the target track image according to the time sequence of the multiple frames of target images in the target video and the motion areas in the motion images corresponding to those target images, where each motion area in the target track image, together with the time sequence in the target video of the target image corresponding to the motion image to which that motion area belongs, can represent the motion track of the moving target corresponding to that motion area. Thus, the target track image is generated based on the moving areas in the moving images and the time sequence of the target images corresponding to the moving images in the target video, which ensures that the generated target track image has higher accuracy, namely, that the generated target track image can accurately represent the motion track of the moving target.
Drawings
Fig. 1 is a schematic flow chart of a track image generating method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a target image and a moving image provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a moving image obtained after filtering a moving region in the moving image according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a target track image obtained after screening a motion area in the target track image according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a track image generating device according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to determine a motion trail of a moving target based on a target video shot by a target camera and generate a trail image capable of reflecting the motion trail, the embodiment of the application provides a trail image generation method.
In the track image generation method, a target image in a target video is acquired first; then, the target image is converted into a corresponding moving image including a moving region corresponding to the moving target and a scene region corresponding to the scene, the pixel value of the moving region being different from the pixel value of the scene region; furthermore, the target track image is determined according to the time sequence of the multiple frames of target images in the target video and the motion areas in the motion images corresponding to those target images, where each motion area in the target track image, together with the time sequence in the target video of the target image corresponding to the motion image to which that motion area belongs, can represent the motion track of the moving target corresponding to that motion area.
In this way, the video frame images in the target video are converted into the moving images which clearly distinguish the moving areas from the scene areas, and then the target track images are determined based on the moving areas in the moving images and the time sequence of the video frame images corresponding to the moving images in the target video, wherein the target track images can reflect the moving tracks of the moving targets corresponding to the moving areas in the moving images; the generated target track image has higher real-time performance and accuracy, can be updated in real time along with the updating of the video frame image in the target video, and can correspondingly reflect the motion track of the moving target in the target video in real time.
It should be noted that the track image generating method provided by the embodiment of the present application may be applied to various devices having data processing capabilities, such as a terminal device, a server, and the like. The terminal device may be a computer, a tablet computer, a Personal Digital Assistant (PDA), a smart phone, etc.; the server can be an application server or a Web server, and can be an independent server or a cluster server in specific deployment.
The track image generating method provided by the application is described in detail by a method embodiment.
Referring to fig. 1, fig. 1 is a flowchart of a track image generating method according to an embodiment of the present application. For convenience of description, the following embodiments will be described taking a server as an execution subject. As shown in fig. 1, the trajectory image generation method includes the steps of:
step 101: and acquiring a target image in the target video.
In practical application, when the server needs to track and detect a moving target aiming at a target video shot by a target camera and determine a moving track of the moving target, the server needs to acquire the target video shot by the target camera and further acquire a target image in the target video. In particular, the target camera may transmit the target video shot by the target camera to the server in real time, or the server may retrieve the target video from a database dedicated to storing the target video shot by the target camera.
In the method provided by the embodiment of the application, after the server acquires the target video shot by the target camera, the server can perform preprocessing on the image in the target video to obtain a corresponding target image, wherein the preprocessing can comprise at least one of the following operations: downsampling the image to a preset image size, and performing Gaussian blur processing on the image.
Specifically, the server may adjust the image size of each frame of image in the target video to reach the preset image size. It should be understood that if the resolution of each frame of image in the target video is too high, it will consume a large amount of calculation for the server to detect the moving region based on each frame of image, which will reduce the real-time performance of moving target detection; in order to avoid this, the server may perform downsampling processing for each frame of image in the target video after it acquires the target video so that each frame of image reaches a preset image size; because the resolution ratio of the image under the preset image size is lower, the calculation amount required by the subsequent detection of the moving area of the server can be reduced, and the real-time performance of the detection of the moving object is improved.
In addition, given that a certain amount of noise generally exists in the image, the server can also use a Gaussian kernel of a certain size to perform global blurring on the downsampled image, so as to remove noise, especially salt-and-pepper noise, from the image, thereby helping to improve the accuracy of subsequent motion region detection.
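As an illustration only, the preprocessing described above might look like the following sketch, assuming OpenCV is available; the preset image size and Gaussian kernel size are illustrative assumptions, not values taken from this application.

```python
import cv2

PRESET_SIZE = (640, 360)   # assumed preset image size (width, height); illustrative only
GAUSSIAN_KERNEL = (5, 5)   # assumed Gaussian kernel size; illustrative only

def preprocess_frame(frame):
    """Downsample a video frame to the preset image size and apply a global
    Gaussian blur to suppress noise (e.g. salt-and-pepper noise) before
    motion region detection."""
    downsampled = cv2.resize(frame, PRESET_SIZE, interpolation=cv2.INTER_AREA)
    return cv2.GaussianBlur(downsampled, GAUSSIAN_KERNEL, 0)
```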
Step 102: converting the target image into a corresponding moving image; the moving image includes a moving region corresponding to a moving object and a scene region corresponding to a scene, and a pixel value of the moving region is different from a pixel value of the scene region.
After the server acquires the target image in the target video, the target image can be converted into a moving image by a motion detection algorithm, such as a background modeling algorithm based on a Gaussian mixture model; the moving image comprises a moving area corresponding to the moving target and a scene area corresponding to the scene, wherein the pixel value of the moving area is different from the pixel value of the scene area. Fig. 2 is a schematic diagram of a target image and a moving image. As shown in fig. 2, (a) is a target image, i.e., an RGB (Red Green Blue) image, and (b) is the moving image corresponding to the target image; the white area (i.e., the area with a pixel value of 255) in the moving image is the moving area corresponding to the moving target, and the black area (i.e., the area with a pixel value of 0) is the scene area corresponding to the scene.
In specific implementation, the server may perform motion region detection on the target image obtained after preprocessing by using a motion detection algorithm, so as to obtain the motion image corresponding to the target image. For example, the server may detect, for each pixel in the target image, whether it corresponds to the moving target by using a background modeling algorithm based on a Gaussian mixture model; if so, the pixel value of the pixel is set to 255, and if not, the pixel value of the pixel is set to 0. In this way, the pixel value of each pixel corresponding to the moving object in the target image is set to 255 so that each pixel corresponding to the moving object appears white, and the pixel value of each pixel corresponding to a non-moving object (i.e., the scene) in the target image is set to 0 so that each pixel corresponding to the scene appears black, thereby obtaining the moving image corresponding to the target image. Further, the pixel points with the same pixel value in the moving image are connected using a region-connectivity algorithm, so as to obtain a moving area formed by connected white pixel points and a scene area formed by connected black pixel points.
It should be understood that, in practical application, besides the background modeling algorithm based on the Gaussian mixture model, other motion detection algorithms, such as a pixel difference detection algorithm, may also be used to convert the target image into the moving image; the present application does not limit the motion detection algorithm used here.
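A minimal sketch of this conversion, assuming OpenCV's MOG2 background subtractor as one possible Gaussian-mixture implementation; the parameters are illustrative, and other motion detection algorithms could be substituted.

```python
import cv2

# One possible Gaussian-mixture background model (assumed choice, not mandated by the application).
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

def to_moving_image(target_image):
    """Binary moving image: pixels judged to belong to a moving object are set to 255
    (white), scene pixels are set to 0 (black)."""
    fg_mask = bg_model.apply(target_image)
    _, moving = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
    return moving

def connect_regions(moving_image):
    """Region-connectivity step: connect white pixels into motion regions.
    Label 0 is the scene area; labels 1..num-1 are motion regions with per-region stats."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(moving_image, connectivity=8)
    return num, labels, stats
```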
It is contemplated that, in some cases, the method provided by the embodiments of the present application is specifically used to detect a particular moving object, such as an object thrown by a throwing action; in such a scenario, the server does not need to perform track detection on every moving object in the target video, so as to avoid wasting computing resources on moving objects that do not need to be tracked and detected.
Specifically, the server may determine, for each motion region included in the motion image, whether the motion region satisfies a first preset area constraint condition; if yes, reserving the motion area in the motion image; if not, the pixel value of the motion area in the motion image is adjusted to the pixel value of the scene area.
Taking the case where the method provided by the embodiment of the application is specially used for detecting thrown objects as an example, the screening of motion areas is described below. Because a thrown object is generally characterized by being far away from the camera, small in volume and the like, a first preset area threshold interval can be set as the first preset area constraint condition based on the thrown object. Correspondingly, after a target image in the target video is converted into a corresponding moving image, whether the area of each moving area in the moving image is within the first preset area threshold interval is judged; if so, it is determined that the corresponding moving object may be a thrown object and the moving area is retained in the moving image; if not, it is determined that the corresponding moving object is unlikely to be a thrown object and the pixel values of that moving area are set to the pixel value of the scene area. In this way, the motion areas of the moving image that do not correspond to a thrown object are filtered out. The moving image shown in fig. 3 is obtained by performing this moving-region screening process on the moving image shown in fig. 2 (b); only the moving regions that may correspond to a thrown object are retained.
It should be understood that, in practical application, the first preset area constraint condition may be set according to practical application requirements, and the present application is not limited to this first preset area constraint condition.
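A sketch of this first screening step, reusing the connected-component statistics from the previous sketch; the first preset area threshold interval used here is an illustrative assumption for small, distant thrown objects.

```python
import cv2

FIRST_AREA_INTERVAL = (20, 400)   # assumed first preset area threshold interval, in pixels

def screen_motion_regions(moving_image, num, labels, stats):
    """Keep only motion regions whose area lies inside the first preset area threshold
    interval; repaint all other motion regions with the scene pixel value (0)."""
    screened = moving_image.copy()
    for region_id in range(1, num):                     # label 0 is the scene area
        area = stats[region_id, cv2.CC_STAT_AREA]
        if not (FIRST_AREA_INTERVAL[0] <= area <= FIRST_AREA_INTERVAL[1]):
            screened[labels == region_id] = 0           # unlikely to be a thrown object
    return screened
```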
Step 103: determining a target track image according to the time sequence of multiple frames of target images in the target video and the motion areas in the motion images corresponding to the multiple frames of target images; and the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region.
After the server converts the multiple frames of target images in the target video into corresponding moving images respectively, namely multiple frames of moving images are obtained, the target track images can be determined according to the moving areas in the multiple frames of moving images and the time sequence of the target images corresponding to the multiple frames in the target video. For example, the server may configure corresponding numbers for the motion areas in the multiple frames of motion images according to the time sequence of the target images corresponding to the multiple frames of motion images in the target video; the corresponding numbers of the motion areas in the same motion image are the same; further, the server may generate a target track image from the motion regions in the multiple frames of motion images, where the motion regions and corresponding numbers in the target track image can reflect the motion track of the motion target corresponding to the motion regions.
In one possible implementation, the server may take a moving image corresponding to the start target image as the initial track image. Then, a track image discriminating operation is performed for a moving image corresponding to an i-th (i is an integer greater than or equal to 1) frame target image located after the start target image; the trajectory image discrimination operation includes: and judging whether the motion area in the motion image meets the preset motion trail condition according to the motion area in the motion image and the motion area added to the trail image. If the motion area in the motion image meets the preset motion trail condition, adding the motion area in the motion image into the trail image; otherwise, if the motion area in the motion image does not meet the preset motion trail condition, deleting the motion area.
Specifically, the server may select a start target image from among the target images of each frame included in the target video, take the moving image corresponding to the start target image as the start moving image and the initial track image, and configure a start number for the moving region in the start moving image. Further, the server may perform, one by one, the following operations for each moving image corresponding to each frame of target image located after the start target image, that is, for each frame of moving image located after the start moving image, until the number corresponding to a moving region in the track image reaches a preset number value. The operations performed include: determining the ith (i is an integer greater than or equal to 1) frame moving image located after the starting moving image as a candidate moving image, and performing a track image discrimination operation on the candidate moving image. The track image discrimination operation here includes: judging whether the motion area in the candidate motion image meets the preset motion track condition according to the motion area in the candidate motion image and the motion area with the largest number in the track image (namely, the motion area last added to the track image); if the motion area in the candidate motion image meets the preset motion track condition, configuring a corresponding number for the motion area in the candidate motion image and adding the motion area in the candidate motion image to the track image, wherein the number corresponds to the order in which the motion area was added to the track image; and if the motion area in the candidate motion image does not meet the preset motion track condition, deleting the candidate motion image.
For example, a moving image corresponding to a first frame target image in a target video may be taken as a starting moving image, and a starting number 1 may be configured for a moving region in the starting moving image, taking the starting moving image as a track image. Then, regarding a moving image corresponding to a second frame of target image in the target video as a candidate moving image, and judging whether the moving region in the candidate moving image meets the preset moving track condition according to the moving region with the largest number in the track image (namely the moving region in the initial moving image) and the moving region in the candidate moving image; for example, it may be determined whether or not there is an overlap of a motion region in a candidate motion image and a motion region in a starting motion image, and if there is no overlap, the motion region in the candidate motion image is considered to satisfy a preset motion trajectory condition, and if there is an overlap, the motion region in the candidate motion image is considered to not satisfy the preset motion trajectory condition. If the motion area in the candidate motion image meets the preset motion trail condition, the server may configure number 2 for the motion area in the candidate motion image, and add the motion area in the candidate motion image to the trail image, that is, add the motion area in the candidate motion image at the corresponding position in the trail image according to the position of the motion area in the candidate motion image; further, a moving image corresponding to a third frame target image in the target video is regarded as a new candidate moving image, and the above-described operation is performed for the candidate moving image. If the motion area in the candidate motion image does not satisfy the preset motion trajectory condition, the server may delete the candidate motion image, that is, delete the motion image corresponding to the second frame of the target image in the target video, and then treat the motion image corresponding to the third frame of the target image in the target video as a new candidate motion image, and re-perform the above operation for the candidate motion image.
In this way, the above operations are performed one by one for the moving image located after the start moving image until the number corresponding to the moving region included in the track image reaches a preset number value. The preset number value may be set according to actual requirements, and, for example, assuming that the method provided by the embodiment of the present application is specifically used for detecting a motion track of a throwing object generated by a throwing behavior, the preset number value may be set according to a relevant feature of the throwing behavior, for example, assuming that a motion time of the throwing object lasts for 1s at most, and a frame rate of a target camera is T, a maximum number value (i.e., the preset number value) corresponding to a motion area included in a track image may be set as T. Of course, in practical application, the preset number value may be set to other values, and the present application is not limited to the preset number value.
In addition, the preset motion trajectory condition may be set according to actual requirements, for example, the overlapping area of the motion region in the candidate motion image and the motion region added to the trajectory image in the trajectory image is lower than a preset overlapping area threshold, and the present application is not limited in this regard.
When the number of moving images to which the moving regions included in the track image belong reaches the preset number, the server may delete the initial moving region in the track image, where the target image corresponding to the moving image to which the initial moving region belongs is earlier in time sequence in the target video than the target images corresponding to the moving images to which the other moving regions in the track image belong. Further, the above-described track image discriminating operation is performed for the moving images located after the moving image to which the last-added moving region in the track image belongs.
That is, after the number corresponding to the motion region in the track image reaches the preset number value, the server may delete the motion region corresponding to the start number in the track image, and reduce the number corresponding to each motion region in the track image by 1; further, a first frame moving image after a moving image to which a moving region having the largest number belongs is determined as a candidate moving image, and the above-described track image discriminating operation is performed for the candidate moving image.
That is, in the method provided by the embodiment of the application, the server can update the track image in real time. If the number of a certain motion area in the track image reaches a preset number value, indicating that the motion area included in the track image belongs to a preset number of frame motion images, wherein the track image can embody the motion track of a motion target corresponding to the motion area in the preset number of frame motion images; at this time, the server may continuously update the track image by using the motion region in the subsequent motion image, that is, delete the motion region with the number of 1 in the track image, and subtract 1 from the number corresponding to each motion region in the track image, so as to perform the track image discriminating operation for the next frame motion image (that is, the next frame motion image of the motion image to which the motion region with the largest number in the track image belongs), so as to update the track image in real time by using the motion region in the subsequent motion image, thereby achieving the purpose of updating the motion track in real time.
In this way, when the number of moving images to which the moving region included in the track image belongs reaches the preset number, the server may directly take the track image at that time as the target track image; each motion region and the corresponding number thereof in the target track image can reflect the motion track of the motion target corresponding to the motion region. And, the target track image may be updated in real time following the update of the moving image to reflect the moving track of the moving target in the scene photographed by the target camera in real time.
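A simplified sketch of this first implementation (initial track image, per-frame discrimination, rolling deletion of the earliest region), assuming binary moving images produced as in the earlier sketches; the overlap test and the preset number value are illustrative simplifications, not the only way the preset motion trajectory condition could be defined.

```python
import numpy as np

PRESET_NUMBER = 25   # assumed preset number value, e.g. the camera frame rate T for a throw lasting at most 1 s

class TrackImageBuilder:
    """Maintains the track image as an array of numbers: 0 marks the scene, and
    k > 0 marks a motion region that was added k-th (its number)."""

    def __init__(self, start_moving_image):
        # The moving image of the start target image is the initial track image;
        # its motion region is given the start number 1.
        self.numbers = np.where(start_moving_image > 0, 1, 0).astype(np.int32)
        self.last_number = 1

    def discriminate(self, moving_image):
        """Track image discrimination operation for one subsequent moving image.
        Returns True if the motion region was added, False if the moving image was deleted."""
        region = moving_image > 0
        last_region = self.numbers == self.last_number
        if np.logical_and(region, last_region).any():
            # Overlap with the last-added motion region: the preset motion trajectory
            # condition is not met, so the candidate moving image is discarded.
            return False
        self.last_number += 1
        self.numbers[region] = self.last_number
        return True

    def is_complete(self):
        """The track image covers the preset number of moving images."""
        return self.last_number >= PRESET_NUMBER

    def roll(self):
        """Real-time update: delete the initial motion region (number 1) and
        decrease every remaining number by 1."""
        self.numbers[self.numbers == 1] = 0
        self.numbers[self.numbers > 0] -= 1
        self.last_number -= 1

    def track_image(self):
        """Binary track image: motion regions are 255, the scene is 0."""
        return np.where(self.numbers > 0, 255, 0).astype(np.uint8)
```

A caller would feed the screened moving images frame by frame, take the numbered array (or `track_image()`) as the target track image once `is_complete()` returns True, and call `roll()` before processing the next moving image so that the track image keeps being updated in real time.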
In another possible implementation manner, in the method provided by the embodiment of the present application, the server may further acquire a motion time image, where a pixel value of the initial motion time image is the same as a pixel value of a scene area in the motion image. After adding the motion region in the motion image to the track image, the server may determine a reference pixel value corresponding to the motion region according to the order in which the motion region is added to the track image; and displaying the moving region in the moving image with a reference pixel value at a corresponding position in the moving time image according to the position of the moving region in the moving image. After deleting the initial motion region in the track image, the server may delete the initial motion region in the motion time image, and adjust the reference pixel values corresponding to each motion region in the motion time image to obtain corresponding updated pixel values, so as to display each motion region in the motion time image with the updated pixel value corresponding to each motion region. The target image corresponding to the moving image to which the initial moving region belongs is positioned in the time sequence of the target video earlier than the target images corresponding to the moving images to which the other moving regions belong in the track image.
Specifically, the server may use the number corresponding to the motion region as a reference pixel value corresponding to the motion region, and add and display the motion region in the candidate motion image at the reference pixel value at the corresponding position in the motion time image according to the position of the motion region in the candidate motion image. After deleting the moving region corresponding to the start number in the track image, the server may delete the moving region belonging to the start moving image in the moving time image at the same time, subtract 1 from the reference pixel value corresponding to each moving region in the moving time image to obtain a corresponding updated reference pixel value, and display each moving region in the moving time image with the updated reference pixel value corresponding to each moving region.
That is, in addition to taking the initial moving image as the initial track image, the method provided by the embodiment of the application additionally obtains an initial motion time image, wherein the pixel value of each pixel point in the initial motion time image is the pixel value of the scene area in the moving image; that is, each pixel value in the initial motion time image is 0, and the initial motion time image is black.
After adding a motion region in a candidate motion image to a trajectory image, the server may determine a number corresponding to the motion region in the candidate motion image as a reference pixel value corresponding to the motion region, and accordingly display the motion region with the reference pixel value in a motion time image. After deleting the moving region corresponding to the start number in the track image, the server may correspondingly delete the moving region with the smallest number in the moving time image, and simultaneously subtract 1 from the reference pixel value corresponding to each moving region in the moving time image to obtain a corresponding updated reference pixel value, and further, display each moving region in the moving time image with the updated reference pixel value corresponding to each moving region.
In this way, when the number of moving images to which the moving region included in the track image belongs reaches the preset number, the server can take the moving time image at that time as the target track image. That is, when the number corresponding to the motion area in the track image reaches a preset number value, the server can directly take the motion time image at the moment as a target track image; each motion region in the target track image and the reference pixel value corresponding to each motion region can reflect the motion track of the motion target corresponding to the motion region. And, the target track image may be updated in real time following the update of the moving image to reflect the moving track of the moving target in the scene photographed by the target camera in real time.
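A minimal sketch of maintaining the motion time image alongside the track image, assuming the same numbering as the previous sketch; the reference pixel value of a motion region is simply taken to be its number.

```python
import numpy as np

def init_motion_time_image(shape):
    """Initial motion time image: every pixel has the scene pixel value 0."""
    return np.zeros(shape, dtype=np.int32)

def add_region(time_image, moving_image, number):
    """Display the motion region of `moving_image` in the motion time image, using its
    number (the order it was added to the track image) as the reference pixel value."""
    time_image[moving_image > 0] = number
    return time_image

def roll_time_image(time_image):
    """After the initial motion region is deleted from the track image: delete it here
    too (reference value 1) and subtract 1 from the reference pixel value of every
    remaining motion region."""
    time_image[time_image == 1] = 0
    time_image[time_image > 0] -= 1
    return time_image
```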
In yet another possible implementation, the server may generate the track image and the motion time image simultaneously, with the motion time image providing the track image with the relevant reference time information.
For example, the server may select a start moving image from moving images corresponding to each frame of the target image included in the target video, where a moving region in the start moving image corresponds to a start number, and at this time, the server may use the start moving image as a track image, and use the start number as a reference pixel value corresponding to the moving region in the start moving image, and display the moving region in the start moving image in the moving time image with the reference pixel value according to a position of the moving region in the start moving image.
For each frame of moving image located after the starting moving image, the server may configure a corresponding number for the moving region in the moving image and determine that number as the reference pixel value corresponding to the moving region; further, the moving region is displayed with its corresponding reference pixel value at the corresponding position in the motion time image according to the position of the moving region in the moving image to which it belongs. For example, for the first frame moving image after the starting moving image, the server may configure the number 2 for the moving region in that moving image, determine that the reference pixel value corresponding to the moving region is 2, and display the moving region with the pixel value 2 at the corresponding position in the motion time image according to the position of the moving region in the moving image to which it belongs; and so on, until the reference pixel value corresponding to a motion area in the motion time image reaches the preset pixel value.
It should be understood that if there is an overlapping region between the moving regions belonging to different moving images in the moving time image, the server may use the reference pixel value corresponding to the moving region with the larger number as the reference pixel value corresponding to the overlapping region, and display the overlapping region with the reference pixel value in the moving time image.
While displaying a moving area of a moving image in the motion time image with its corresponding reference pixel value, the server may determine whether an overlapping area exists between that moving area and the moving area with the largest number in the track image. For example, when the moving area in the first frame moving image after the initial moving image is displayed in the motion time image with the reference pixel value 2, the server may determine whether an overlapping area exists between that moving area and the initial moving area in the initial moving image; if so, the moving area in the first frame moving image after the initial moving image is not displayed in the track image, otherwise it is displayed in the track image.
Thus, the above operation is repeatedly performed until the reference pixel value corresponding to the motion region in the motion time image reaches the preset pixel value n (n is an integer greater than or equal to 1). At this time, n moving regions are included in the moving time image, and reference pixel values corresponding to the n moving regions are from 1 to n; the trajectory image includes m (m is less than or equal to n) motion regions, each of which corresponds to a pixel value of the motion region. When the target track image is determined, the server can determine the motion track of the moving target based on m motion areas in the track image, and in the process of determining the motion track, the server can acquire relevant time information according to reference pixel values corresponding to the m motion areas in the motion time image.
In addition, after the reference pixel value corresponding to a motion region in the motion time image reaches the preset pixel value n, the server may delete the motion region whose reference pixel value equals the start number in the motion time image, and subtract 1 from the reference pixel value corresponding to each remaining motion region in the motion time image; at the same time, the server also needs to delete the motion area corresponding to the start number in the track image. Further, for the moving image of the frame subsequent to the moving image to which the moving region with the largest number belongs, the number n is configured for the moving region in that moving image, the number n is used as the reference pixel value corresponding to that moving region, and the moving region is displayed in the motion time image with that reference pixel value; when the motion area does not overlap with the motion area with the largest number in the track image, the motion area is also displayed in the track image.
It is contemplated that in some cases, the methods provided by embodiments of the present application are specific to detecting a particular moving object, such as an object being thrown by a throwing action; in this scenario, after the server obtains the target track image through the above method, the moving area in the target track image may be screened to reserve the moving area corresponding to the moving object to be detected, so that the target track image is dedicated to characterizing the moving track of the moving object.
In one possible implementation manner, the server may determine, for each motion region in the target track image, whether the motion region meets a second preset area constraint condition; if yes, reserving the motion area in the target track image; if not, deleting the motion area in the target track image.
When the moving areas in the moving images are screened, the moving areas corresponding to non-thrown objects may not be completely filtered out. For example, a moving area corresponding to a small part of a human body may be included in a certain moving image but not filtered out because its area satisfies the first preset area constraint condition; after the server generates the target track image, all the moving areas corresponding to the human body in the multiple frames of moving images become connected, and the area of the connected moving region is larger, so it is easier to filter out accurately. Based on this, a second preset area threshold interval may be further set as the second preset area constraint condition; the moving regions whose area exceeds the second preset area threshold interval are filtered out of the target track image, and only the moving regions whose area is within the second preset area threshold interval are retained in the target track image. Typically, the second preset area threshold interval is greater than the first preset area threshold interval.
It should be understood that, in practical applications, the second preset area constraint condition may be set according to practical application requirements, and the present application is not limited to this second preset area constraint condition.
In another possible implementation manner, the server may configure corresponding numbers for the motion areas in the motion images corresponding to the multiple frames of target images according to the time sequence of the multiple frames of target images in the target video; correspondingly, the number corresponding to a pixel point in a motion area of the target track image is the same as the number of that motion area. In this case, the server may perform region-connectivity processing on the target track image to obtain reference motion regions; further, for each reference motion region in the target track image, the pixel point duty ratio under each number is determined according to the numbers corresponding to the pixel points in the reference motion region, as the pixel point duty ratio corresponding to the reference motion region; whether the pixel point duty ratio corresponding to each reference motion area in the target track image satisfies a preset duty ratio condition is then judged; if yes, the reference motion area is retained in the target track image; if not, the reference motion area is deleted from the target track image.
Still taking the case where the method provided by the embodiment of the application is used for detecting the motion track of a thrown object as an example: because the motion speed of a thrown object is generally fast, in the target track image the pixels in a motion area corresponding to the thrown object should generally correspond to the same number, whereas the pixels in a motion area corresponding to a non-thrown object (such as a human body) may come from multiple frames of different motion images, so pixels corresponding to multiple numbers may be mixed together.
In specific implementation, the server can use a region-connectivity algorithm to connect the pixel points with non-zero pixel values in the target track image so as to obtain each reference motion region; then, for each reference motion region in the target track image, the number corresponding to each pixel point in the reference motion region is determined, and the pixel point duty ratio under each number is calculated. If all the pixel points in the reference motion area correspond to the same number, or most of the pixel points in the reference motion area correspond to the same number, that is, the duty ratio of the pixel points corresponding to a certain number exceeds a preset duty ratio threshold, the reference motion area can be considered to satisfy the preset duty ratio condition, and the reference motion area is retained in the target track image; otherwise, if the numbers corresponding to the pixel points in the reference motion area are scattered, that is, no number has a pixel point duty ratio exceeding the preset duty ratio threshold, the reference motion area can be considered not to satisfy the preset duty ratio condition, and the reference motion region is deleted from the target track image. Fig. 4 shows the track image corresponding to the thrown object obtained by performing this motion region screening on the target track image.
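A sketch of this duty-ratio screening, assuming a numbered target track image (for example, the number array maintained in the sketches above, where scene pixels are 0) and OpenCV connected components; the preset duty ratio threshold is an illustrative assumption.

```python
import cv2
import numpy as np

DUTY_RATIO_THRESHOLD = 0.8   # assumed preset duty ratio threshold; illustrative only

def screen_reference_regions(numbered_track):
    """Connect the non-zero pixels of the numbered target track image into reference
    motion regions and keep a region only if a single number dominates its pixels."""
    binary = (numbered_track > 0).astype(np.uint8)
    num, labels = cv2.connectedComponents(binary, connectivity=8)
    kept = numbered_track.copy()
    for region_id in range(1, num):                      # label 0 is the scene area
        region_numbers = numbered_track[labels == region_id]
        counts = np.bincount(region_numbers)
        duty_ratio = counts.max() / region_numbers.size  # share of the most frequent number
        if duty_ratio < DUTY_RATIO_THRESHOLD:
            kept[labels == region_id] = 0                # delete this reference motion region
    return kept
```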
In the track image generation method, a target image in a target video is acquired first; then, the target image is converted into a corresponding moving image including a moving region corresponding to the moving target and a scene region corresponding to the scene, the pixel value of the moving region being different from the pixel value of the scene region; furthermore, the target track image is determined according to the time sequence of the multiple frames of target images in the target video and the motion areas in the motion images corresponding to those target images, where each motion area in the target track image, together with the time sequence in the target video of the target image corresponding to the motion image to which that motion area belongs, can represent the motion track of the moving target corresponding to that motion area. In this way, the video frame images in the target video are converted into moving images that clearly distinguish the moving areas from the scene areas, and the target track image is then determined based on the moving areas in the moving images and the time sequence of the video frame images corresponding to the moving images in the target video; the target track image can reflect the motion tracks of the moving targets corresponding to the moving areas in the moving images. The generated target track image has high real-time performance and accuracy, can be updated in real time following updates of the video frame images in the target video, and can correspondingly reflect the motion track of the moving target in the target video in real time.
The embodiment of the application also provides a track image generation device. Referring to fig. 5, fig. 5 is a schematic structural view of the trajectory image generating device, as shown in fig. 5, the device includes:
an image acquisition module 501, configured to acquire a target image in a target video;
A moving image conversion module 502 for converting the target image into a corresponding moving image; the moving image comprises a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from the pixel value of the scene area;
A track image generating module 503, configured to determine a target track image according to a time sequence of multiple frames of the target images in the target video and the motion areas in the motion images corresponding to the multiple frames of the target images; and the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region.
Optionally, the track image generating module 503 is specifically configured to:
taking the moving image corresponding to the initial target image as an initial track image;
Performing a track image discrimination operation on the moving image corresponding to an i-th frame target image positioned behind the initial target image; the i is an integer greater than or equal to 1; the track image discrimination operation includes: judging whether the motion area in the motion image meets the preset motion trail condition according to the motion area in the motion image and the motion area added to the trail image;
If the motion area in the motion image meets the preset motion trail condition, adding the motion area in the motion image into the trail image;
and deleting the moving image if the moving area in the moving image does not meet the preset moving track condition.
Optionally, the track image generating module 503 is further configured to:
Deleting an initial motion region in the track image when the number of the motion images to which the motion region included in the track image belongs reaches a preset number; the target image corresponding to the moving image to which the initial moving region belongs is more advanced in time sequence in the target video than the target images corresponding to the moving images to which the other moving regions belong in the track image;
The track image discriminating operation is performed for a moving image located after a moving image to which a moving region finally added in the track image belongs.
Optionally, the track image generating module 503 is specifically configured to:
and when the number of the moving images included in the track image and to which the moving region belongs reaches a preset number, the track image is taken as the target track image.
Optionally, the track image generating module 503 is further configured to:
acquiring a motion time image; the pixel value of the initial motion time image is the same as the pixel value of the scene area;
After a motion region in the motion image is added to the track image, determining a reference pixel value corresponding to the motion region according to the sequence in which the motion region is added to the track image; displaying the moving region in the moving image at the reference pixel value at a corresponding position in the moving time image according to the position of the moving region in the moving image;
Deleting the initial motion region in the track image, deleting the initial motion region in the motion time image, adjusting the reference pixel value corresponding to each motion region in the motion time image to obtain a corresponding updated reference pixel value, and displaying each motion region in the motion time image by using the updated reference pixel value corresponding to each motion region; and the time sequence of the target image corresponding to the moving image to which the initial moving area belongs is earlier in the target video than the time sequence of the target images corresponding to the moving images to which the other moving areas belong in the track image.
Optionally, the track image generating module 503 is specifically configured to:
And when the number of the moving images to which the moving region included in the track image belongs reaches a preset number, the moving time image is taken as the target track image.
Optionally, the apparatus further includes:
The image preprocessing module is used for preprocessing the images in the target video to obtain the target images; the pretreatment includes at least one of the following operations: and downsampling the image to a preset image size, and performing Gaussian blur processing on the image.
Optionally, the apparatus further includes:
A first region screening module, configured to determine, for each of the motion regions included in the motion image, whether the motion region meets a first preset area constraint condition; if yes, reserving the motion area in the motion image; and if not, adjusting the pixel value of the motion area in the motion image to be the pixel value of the scene area.
Optionally, the apparatus further includes:
The second region screening module is used for judging whether the motion region meets a second preset area constraint condition or not according to each motion region in the target track image; if yes, reserving the motion area in the target track image; and if not, deleting the motion area in the target track image.
Optionally, the track image generating module 503 is specifically configured to:
Configuring corresponding numbers for the motion areas in the motion images corresponding to the multiple frames of target images according to the time sequence of the multiple frames of target images in the target video; the number corresponding to the pixel point in the motion area in the target track image is the same as the number corresponding to the motion area;
the apparatus further comprises:
A third region screening module, configured to determine, for each reference motion region in the target track image, a pixel point duty ratio under each number according to the number corresponding to each pixel point in the reference motion region, as a pixel point duty ratio corresponding to the reference motion region; judging whether the pixel point duty ratio corresponding to the reference motion area meets a preset duty ratio condition or not according to each reference motion area in the target track image; if yes, reserving the reference motion area in the target track image; and if not, deleting the reference motion area in the target track image.
The track image generating device first acquires a target image in a target video; then, the target image is converted into a corresponding moving image including a moving region corresponding to the moving target and a scene region corresponding to the scene, the pixel value of the moving region being different from the pixel value of the scene region; furthermore, the target track image is determined according to the time sequence of the multiple frames of target images in the target video and the motion areas in the motion images corresponding to those target images, where each motion area in the target track image, together with the time sequence in the target video of the target image corresponding to the motion image to which that motion area belongs, can represent the motion track of the moving target corresponding to that motion area. In this way, the video frame images in the target video are converted into moving images that clearly distinguish the moving areas from the scene areas, and the target track image is then determined based on the moving areas in the moving images and the time sequence of the video frame images corresponding to the moving images in the target video; the target track image can reflect the motion tracks of the moving targets corresponding to the moving areas in the moving images. The generated target track image has high real-time performance and accuracy, can be updated in real time following updates of the video frame images in the target video, and can correspondingly reflect the motion track of the moving target in the target video in real time.
The embodiment of the application also provides a device for generating the track image, which may be a server or a terminal device. The server and the terminal device provided by the embodiment of the application are described below from the perspective of hardware implementation.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a server 600 according to an embodiment of the present application. The server 600 may vary considerably in configuration or performance and may include one or more central processing units (CPUs) 622 (e.g., one or more processors), memory 632, and one or more storage media 630 (e.g., one or more mass storage devices) storing applications 642 or data 644. The memory 632 and the storage media 630 may provide transitory or persistent storage. The program stored on a storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Still further, the central processing unit 622 may be configured to communicate with the storage medium 630 and execute, on the server 600, the series of instruction operations in the storage medium 630.
The server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input/output interfaces 658, and/or one or more operating systems 641, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 6.
The CPU 622 is configured to perform the following steps:
acquiring a target image in a target video;
Converting the target image into a corresponding moving image; the moving image comprises a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from the pixel value of the scene area;
determining a target track image according to the time sequence of multiple frames of target images in the target video and the motion areas in the motion images corresponding to the multiple frames of target images; and the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region.
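These three steps can be strung together in code. The sketch below reads frames from a video, applies preprocessing of the kind recited in claim 5 (downsampling and Gaussian blur), and converts each target image into a binary motion image with an OpenCV background subtractor; the choice of subtractor, the thresholds, and the target size are assumptions, since the embodiment does not prescribe a particular conversion method at this point.

```python
import cv2

TARGET_SIZE = (320, 180)      # assumed preset image size for downsampling

def preprocess(frame):
    """Downsample to the preset size and apply Gaussian blur (cf. claim 5)."""
    small = cv2.resize(frame, TARGET_SIZE, interpolation=cv2.INTER_AREA)
    return cv2.GaussianBlur(small, (5, 5), 0)

def motion_images(video_path):
    """Yield one binary motion image per target image in the target video.

    Background subtraction is only one possible way of separating motion
    regions from the scene region; the embodiment does not mandate it.
    """
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            target = preprocess(frame)
            fg = subtractor.apply(target)                 # 0 = scene, 255 = motion
            _, motion = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)
            yield motion > 0                              # boolean motion mask
    finally:
        cap.release()
```

Each mask yielded here would then go through the area screening, the track-condition check, and the numbering described earlier before contributing to the target track image.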
Optionally, the CPU 622 may be further configured to perform the steps of any implementation of the track image generating method provided by the embodiment of the present application.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method portions of the embodiments of the present application. The terminal may be any terminal device, including a computer, a tablet computer, a personal digital assistant (PDA), and the like; the following description takes a computer as an example:
Fig. 7 is a block diagram showing part of the structure of the computer related to the terminal provided by an embodiment of the present application. Referring to fig. 7, the computer includes: a radio frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a wireless fidelity (WiFi) module 770, a processor 780, and a power supply 790. Those skilled in the art will appreciate that the computer structure shown in fig. 7 is not limiting; more or fewer components than shown may be included, certain components may be combined, or the components may be arranged differently.
The memory 720 may be used to store software programs and modules, and the processor 780 performs various functional applications and data processing of the computer by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the computer (such as audio data, phone books, etc.), and the like. In addition, the memory 720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 780 is a control center of the computer, connects various parts of the entire computer using various interfaces and lines, and performs various functions of the computer and processes data by running or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby performing overall monitoring of the computer. Optionally, the processor 780 may include one or more processing units; preferably, the processor 780 may integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., with a modem processor that primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 780.
In the embodiment of the present application, the processor 780 included in the terminal further has the following functions:
acquiring a target image in a target video;
Converting the target image into a corresponding moving image; the moving image comprises a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from the pixel value of the scene area;
determining a target track image according to the time sequence of multiple frames of target images in the target video and the motion areas in the motion images corresponding to the multiple frames of target images; and the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region.
Optionally, the processor 780 is further configured to execute steps of any implementation manner of the track image generating method provided by the embodiment of the present application.
The embodiments of the present application also provide a computer-readable storage medium storing program code for executing any one of the implementations of the track image generating method described in the foregoing embodiments.
The embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform any one of the implementations of a trajectory image generation method described in the previous embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied, in essence or in whole or in part, in the form of a software product stored in a storage medium, which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" is used to describe the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" or similar expressions means any combination of these items, including any combination of a single item or plural items. For example, "at least one of a, b or c" may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may be singular or plural.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A track image generation method, the method comprising:
acquiring a target image in a target video;
Converting the target image into a corresponding moving image; the moving image comprises a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from the pixel value of the scene area;
Determining a target track image according to the time sequence of multiple frames of target images in the target video and the motion areas in the motion images corresponding to the multiple frames of target images; the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region;
The determining a target track image according to the time sequence of multiple frames of target images in the target video and the motion areas in the motion images corresponding to the multiple frames of target images respectively comprises the following steps: taking the moving image corresponding to the initial target image as an initial track image; performing a track image discrimination operation on the moving image corresponding to an i-th frame target image located after the initial target image; the i is an integer greater than or equal to 1; the track image discrimination operation includes: judging whether the motion area in the motion image meets a preset motion track condition according to the motion area in the motion image and the motion area added to the track image; if the motion area in the motion image meets the preset motion track condition, adding the motion area in the motion image into the track image; deleting the moving image if the moving area in the moving image does not meet the preset motion track condition;
acquiring a motion time image; the pixel value of the initial motion time image is the same as the pixel value of the scene area;
After a motion region in the motion image is added to the track image, determining a reference pixel value corresponding to the motion region according to the sequence in which the motion region is added to the track image; displaying the moving region in the moving image at the reference pixel value at a corresponding position in the moving time image according to the position of the moving region in the moving image;
Deleting the initial motion region in the track image, deleting the initial motion region in the motion time image, adjusting the reference pixel value corresponding to each motion region in the motion time image to obtain a corresponding updated reference pixel value, and displaying each motion region in the motion time image by using the updated reference pixel value corresponding to each motion region; and the time sequence of the target image corresponding to the moving image to which the initial moving area belongs is earlier in the target video than the time sequence of the target images corresponding to the moving images to which the other moving areas belong in the track image.
2. The method according to claim 1, wherein the method further comprises:
Deleting an initial motion region in the track image when the number of the motion images to which the motion region included in the track image belongs reaches a preset number; the target image corresponding to the moving image to which the initial moving region belongs is earlier in time sequence in the target video than the target images corresponding to the moving images to which the other moving regions belong in the track image;
The track image discrimination operation is performed on a moving image located after the moving image to which the moving region last added to the track image belongs.
3. The method according to claim 1 or 2, wherein the determining a target trajectory image from a time sequence of a plurality of frames of the target image in the target video and the moving region in the moving image to which the plurality of frames of the target image each correspond, comprises:
and when the number of the moving images to which the moving regions included in the track image belong reaches a preset number, the track image is taken as the target track image.
4. The method of claim 1, wherein determining the target trajectory image from a temporal sequence of a plurality of frames of the target image in the target video and the moving region in the moving image to which the plurality of frames of the target image each correspond comprises:
And when the number of the moving images to which the moving region included in the track image belongs reaches a preset number, the moving time image is taken as the target track image.
5. The method of claim 1, wherein prior to said converting the target image into a corresponding motion image, the method further comprises:
preprocessing an image in the target video to obtain the target image; the preprocessing includes at least one of the following operations: downsampling the image to a preset image size, and performing Gaussian blur processing on the image.
6. The method according to claim 1, wherein after said converting the target image into a corresponding moving image, the method further comprises:
Judging whether the motion area meets a first preset area constraint condition for each motion area included in the motion image; if yes, reserving the motion area in the motion image; and if not, adjusting the pixel value of the motion area in the motion image to be the pixel value of the scene area.
7. The method according to claim 1, wherein after the determination of a target trajectory image from a temporal sequence of a plurality of frames of the target image in the target video and the moving regions in the moving images to which the plurality of frames of the target image each correspond, the method further comprises:
Judging whether the motion area meets a second preset area constraint condition or not according to each motion area in the target track image; if yes, reserving the motion area in the target track image; and if not, deleting the motion area in the target track image.
8. The method of claim 1, wherein determining the target trajectory image from a temporal sequence of a plurality of frames of the target image in the target video and the moving region in the moving image to which the plurality of frames of the target image each correspond comprises:
Configuring corresponding numbers for the motion areas in the motion images corresponding to the multiple frames of target images according to the time sequence of the multiple frames of target images in the target video; the number corresponding to the pixel point in the motion area in the target track image is the same as the number corresponding to the motion area; after the determining of the target track image according to the time sequence of the multiple frames of the target images in the target video and the motion areas in the motion images corresponding to the multiple frames of the target images, the method further comprises:
performing region communication processing on the target track image to obtain a reference motion region;
For each reference motion region in the target track image, determining the pixel point proportion under each number according to the numbers corresponding to the pixel points in the reference motion region, and taking the result as the pixel point proportion corresponding to the reference motion region;
Judging, for each reference motion region in the target track image, whether the pixel point proportion corresponding to the reference motion region meets a preset proportion condition; if yes, reserving the reference motion region in the target track image; and if not, deleting the reference motion region in the target track image.
9. A trajectory image generation device, characterized in that the device comprises:
The image acquisition module is used for acquiring a target image in the target video;
A moving image conversion module for converting the target image into a corresponding moving image; the moving image comprises a moving area corresponding to a moving object and a scene area corresponding to a scene, wherein the pixel value of the moving area is different from the pixel value of the scene area;
The track image generation module is used for determining a target track image according to the time sequence of a plurality of frames of target images in the target video and the motion areas in the motion images corresponding to the frames of target images; the time sequence of each motion region in the target track image and the target image corresponding to the motion image to which the motion region belongs in the target video is used for representing the motion track of the motion target corresponding to the motion region;
The determining a target track image according to the time sequence of multiple frames of target images in the target video and the motion areas in the motion images corresponding to the multiple frames of target images respectively comprises the following steps: taking the moving image corresponding to the initial target image as an initial track image; performing a track image discrimination operation on the moving image corresponding to an i-th frame target image located after the initial target image; the i is an integer greater than or equal to 1; the track image discrimination operation includes: judging whether the motion area in the motion image meets a preset motion track condition according to the motion area in the motion image and the motion area added to the track image; if the motion area in the motion image meets the preset motion track condition, adding the motion area in the motion image into the track image; deleting the moving image if the moving area in the moving image does not meet the preset motion track condition;
The track image generation module is also used for acquiring a motion time image; the pixel value of the initial motion time image is the same as the pixel value of the scene area; after a motion region in the motion image is added to the track image, determining a reference pixel value corresponding to the motion region according to the sequence in which the motion region is added to the track image; displaying the moving region in the moving image at the reference pixel value at a corresponding position in the moving time image according to the position of the moving region in the moving image; deleting the initial motion region in the track image, deleting the initial motion region in the motion time image, adjusting the reference pixel value corresponding to each motion region in the motion time image to obtain a corresponding updated reference pixel value, and displaying each motion region in the motion time image by using the updated reference pixel value corresponding to each motion region; and the time sequence of the target image corresponding to the moving image to which the initial moving area belongs is earlier in the target video than the time sequence of the target images corresponding to the moving images to which the other moving areas belong in the track image.
10. An apparatus, the apparatus comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor for invoking the computer program to perform the trajectory image generation method of any one of claims 1 to 8.
11. A computer-readable storage medium storing a computer program for executing the trajectory image generation method according to any one of claims 1 to 8.
CN202110206056.2A 2021-02-24 2021-02-24 Track image generation method, device, equipment and storage medium Active CN113011272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110206056.2A CN113011272B (en) 2021-02-24 2021-02-24 Track image generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113011272A CN113011272A (en) 2021-06-22
CN113011272B true CN113011272B (en) 2024-05-31

Family

ID=76385591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110206056.2A Active CN113011272B (en) 2021-02-24 2021-02-24 Track image generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113011272B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117975071B (en) * 2024-03-28 2024-06-18 浙江大华技术股份有限公司 Image clustering method, computer device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019218824A1 (en) * 2018-05-15 2019-11-21 腾讯科技(深圳)有限公司 Method for acquiring motion track and device thereof, storage medium, and terminal
CN111667508A (en) * 2020-06-10 2020-09-15 北京爱笔科技有限公司 Detection method and related device
CN111784729A (en) * 2020-07-01 2020-10-16 杭州海康威视数字技术股份有限公司 Object tracking method and device, electronic equipment and storage medium
CN112258573A (en) * 2020-10-16 2021-01-22 腾讯科技(深圳)有限公司 Method and device for acquiring throwing position, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Surveillance video summarization algorithm based on motion trajectory clustering; Li Daxiang; Zhu Zhiyu; Liu Ying; Computer Engineering and Design (06); full text *

Also Published As

Publication number Publication date
CN113011272A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
US20180048894A1 (en) Methods and systems of performing lighting condition change compensation in video analytics
US20200193609A1 (en) Motion-assisted image segmentation and object detection
US20190130189A1 (en) Suppressing duplicated bounding boxes from object detection in a video analytics system
US10269135B2 (en) Methods and systems for performing sleeping object detection in video analytics
CN108665476B (en) Pedestrian tracking method and electronic equipment
US10269123B2 (en) Methods and apparatus for video background subtraction
US20180144476A1 (en) Cascaded-time-scale background modeling
US10223590B2 (en) Methods and systems of performing adaptive morphology operations in video analytics
US10152630B2 (en) Methods and systems of performing blob filtering in video analytics
US10229503B2 (en) Methods and systems for splitting merged objects in detected blobs for video analytics
CA2910965A1 (en) Tracker assisted image capture
CN110798592B (en) Object movement detection method, device and equipment based on video image and storage medium
WO2018031104A1 (en) Methods and systems of maintaining object trackers in video analytics
US20180046877A1 (en) Methods and systems of determining a minimum blob size in video analytics
US10115005B2 (en) Methods and systems of updating motion models for object trackers in video analytics
WO2019089441A1 (en) Exclusion zone in video analytics
CN105635554B (en) Auto-focusing control method and device
CN113011272B (en) Track image generation method, device, equipment and storage medium
CN106612385A (en) Video detection method and video detection device
KR101366198B1 (en) Image processing method for automatic early smoke signature of forest fire detection based on the gaussian background mixture models and hsl color space analysis
JP2020504383A (en) Image foreground detection device, detection method, and electronic apparatus
CN113989531A (en) Image processing method and device, computer equipment and storage medium
CN116055894B (en) Image stroboscopic removing method and device based on neural network
EP3543902A1 (en) Image processing apparatus and method and storage medium storing instructions
US20220346855A1 (en) Electronic device and method for smoke level estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant