CN113792622A - Frame rate adjusting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113792622A
Authority
CN
China
Prior art keywords
target, frame, detection, speed, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110996537.8A
Other languages
Chinese (zh)
Inventor
曾志颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority application: CN202110996537.8A
Publication: CN113792622A
Related PCT application: PCT/CN2022/107700 (WO2023024791A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a frame rate adjustment method and apparatus, an electronic device, and a storage medium, wherein the method includes: acquiring a first image frame captured of a target scene; detecting the first image frame, and determining the number of detection targets in the first image frame; and determining a target frame skipping number based on the number of the detection targets, and adjusting the frame rate at which image frames of the target scene are acquired according to the target frame skipping number. The embodiments of the disclosure can adaptively adjust the frame rate at which image frames are acquired.

Description

Frame rate adjusting method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a frame rate adjustment method and apparatus, an electronic device, and a storage medium.
Background
In the field of video analysis, a video analysis frame rate is usually set in order to save computing resources and improve analysis performance. Setting an appropriate frame rate reduces the consumption of computing resources as much as possible while still meeting the accuracy requirements of the video analysis. The frame rate, however, is highly coupled to the application scenario, and the optimal analysis frame rate may differ from one scenario to another.
In the related art, a fixed video analysis frame rate is set manually after the relevant information of a scene has been analyzed by hand. This approach, however, is only suitable for application scenarios that change little over time.
Disclosure of Invention
The present disclosure provides a frame rate adjustment technical solution.
According to an aspect of the present disclosure, there is provided a frame rate adjustment method, including: acquiring a first image frame captured of a target scene; detecting the first image frame, and determining the number of detection targets in the first image frame; and determining a target frame skipping number based on the number of the detection targets, and adjusting the frame rate at which image frames of the target scene are acquired according to the target frame skipping number.
In some possible implementations, the method further includes: acquiring a number change value of the detection targets in a second image frame relative to the detection targets in the first image frame, where the second image frame is the next frame image after the first image frame; and the determining the target frame skipping number based on the number of the detection targets includes: determining the target frame skipping number according to the number of the detection targets in the first image frame and the number change value.
In some possible implementations, the acquiring a number change value of the detection targets in the second image frame relative to the detection targets in the first image frame includes: determining a movement speed of the detection targets in the first image frame; and predicting a number change value of the detection targets based on the movement speed of the detection targets.
In some possible implementations, the determining a motion speed of a detection target in the first image frame includes: determining a first target in the first image frame and a first speed of the first target according to a preset first motion detection algorithm; determining a second target in the first image frame and a second speed of the second target according to a preset second motion detection algorithm; matching at least one first target with at least one second target, and determining at least one detection target with successful matching and at least one detection target with failed matching; and determining the movement speed of the detection target which is successfully matched according to the first speed and the second speed of the detection target which is successfully matched, and determining the movement speed of the detection target which is unsuccessfully matched according to the first speed or the second speed of the detection target which is unsuccessfully matched.
In some possible implementations, the determining the moving speed of the successfully matched detection target according to the first speed and the second speed of the successfully matched detection target includes: and fusing the first speed and the second speed to obtain the moving speed of the detection target successfully matched.
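As an illustration of the fusion step above, here is a minimal Python sketch assuming simple averaging as the fusion operator; the disclosure does not fix a particular fusion method, so the averaging is an assumption:

```python
def fuse_speeds(first_speed: float, second_speed: float) -> float:
    """Fuse the speeds reported by two motion detection algorithms for a
    successfully matched detection target. Plain averaging is assumed
    here; a weighted or confidence-based fusion would also fit the claim."""
    return (first_speed + second_speed) / 2.0
```

A matched target reported at 2.0 px/frame by the first algorithm and 4.0 px/frame by the second would thus be assigned a fused speed of 3.0 px/frame.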
In some possible implementations, the predicting a number change value of the detection targets based on the movement speed of the detection targets includes: summing the movement speeds of a plurality of detection targets in the first image frame to obtain an accumulated speed; and determining the number change value based on the accumulated speed.
In some possible implementations, the summing the operating speeds of the plurality of detection targets in the first image frame to obtain an accumulated speed includes: and carrying out weighted summation on the movement speed of at least one detection target which is successfully matched and the movement speed of at least one detection target which is failed to be matched to obtain the accumulated speed, wherein the weight coefficients corresponding to the detection target which is successfully matched and the detection target which is failed to be matched are different.
In some possible implementations, the determining the target frame skipping number based on the number of the detection targets includes: determining an initial frame skipping number based on the number of the detection targets; comparing the initial frame skipping number with a preset frame skipping threshold value to obtain a comparison result; and determining the target frame skipping number according to the comparison result.
In some possible implementation manners, the determining the target frame skipping number according to the comparison result includes: determining the initial frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is smaller than the maximum frame skipping number and larger than the minimum frame skipping number; determining the maximum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is greater than or equal to the maximum frame skipping number; and determining the minimum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is less than or equal to the minimum frame skipping number.
In some possible implementations, the maximum frame skipping number is determined based on accuracy requirements for image frame analysis, and the minimum frame skipping number is determined based on available computing resources.
In some possible implementations, the method further includes: and acquiring an image frame to be analyzed in the video of the target scene according to the target frame skipping number.
According to an aspect of the present disclosure, there is provided a frame rate adjustment apparatus including:
the acquisition module is used for acquiring a first image frame acquired aiming at a target scene;
the detection module is used for detecting the first image frame and determining the number of detection targets in the first image frame;
and the determining module is used for determining the target frame skipping number based on the number of the detection targets so as to adjust the frame rate of the image frames of the acquired target scene according to the target frame skipping number.
In some possible implementations, the detection module is further configured to obtain a number change value of a detection target in a second image frame relative to a detection target in the first image frame, where the second image frame is a next frame image of the first image frame; the determining module is configured to determine the target frame skipping number according to the number of the detection targets in the first image frame and the number change value.
In some possible implementations, the detection module is configured to determine a movement speed of the detection targets in the first image frame, and predict a number change value of the detection targets based on the movement speed of the detection targets.
In some possible implementations, the detection module is configured to determine a first target in the first image frame and a first speed of the first target according to a preset first motion detection algorithm; determining a second target in the first image frame and a second speed of the second target according to a preset second motion detection algorithm; matching at least one first target with at least one second target, and determining at least one detection target with successful matching and at least one detection target with failed matching; and determining the movement speed of the detection target which is successfully matched according to the first speed and the second speed of the detection target which is successfully matched, and determining the movement speed of the detection target which is unsuccessfully matched according to the first speed or the second speed of the detection target which is unsuccessfully matched.
In some possible implementation manners, the detection module is configured to fuse the first speed and the second speed to obtain a moving speed of the detection target successfully matched.
In some possible implementations, the detection module is configured to sum the movement speeds of a plurality of detection targets in the first image frame to obtain an accumulated speed, and determine the number change value based on the accumulated speed.
In some possible implementation manners, the determining module is configured to perform weighted summation on the motion speed of at least one detection target that is successfully matched and the motion speed of at least one detection target that is failed to be matched, so as to obtain the accumulated speed, where the weight coefficients corresponding to the detection target that is successfully matched and the detection target that is failed to be matched are different.
In some possible implementations, the determining module is configured to determine an initial number of frame hops based on the number of the detection targets; comparing the initial frame skipping number with a preset frame skipping threshold value to obtain a comparison result; and determining the target frame skipping number according to the comparison result.
In some possible implementations, the determining module is configured to determine the initial frame skipping number as the target frame skipping number when the initial frame skipping number is smaller than the maximum frame skipping number and larger than the minimum frame skipping number; determining the maximum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is greater than or equal to the maximum frame skipping number; and determining the minimum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is less than or equal to the minimum frame skipping number.
In some possible implementations, the maximum frame skipping number is determined based on accuracy requirements for image frame analysis, and the minimum frame skipping number is determined based on available computing resources.
In some possible implementations, the apparatus further includes: and the analysis module is used for acquiring an image frame to be analyzed in the video of the target scene according to the target frame skipping number.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, a first image frame captured of a target scene may be acquired, the first image frame may be detected, and the number of detection targets in the first image frame may be determined. Based on the number of detection targets, a target frame skipping number may be determined, so that the frame rate at which image frames of the target scene are acquired can be adjusted according to the target frame skipping number. In this way, the target frame skipping number can be determined in real time according to the specific situation of the target scene, so that the frame rate of image frame analysis adapts to the target scene. For example, when there are many pedestrians in the target scene, the frame rate of image frame acquisition may be increased; when there are few pedestrians in the target scene, the frame rate of image frame acquisition may be decreased, reducing the waste of computing resources.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a frame rate adjustment method according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of an example of a frame rate adjustment method according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a frame rate adjustment apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 5 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, a fixed number of skip frames or frame rate may be set to analyze an image frame during video analysis. The method is simple to operate and is suitable for application scenes which do not change much along with time. However, in an application scene with a large change over time, when there are few objects in the scene, the computational resources may be wasted due to the set frame rate being too high, and when there are many objects in the scene, some image frames including the object with fast motion may be lost due to the set frame rate being too low.
The frame rate adjustment scheme provided by the embodiment of the disclosure can determine the target frame skipping number based on the number of detection targets in the first image frame, so that the frame rate at which image frames of the target scene are acquired can be dynamically adjusted according to the target frame skipping number, meeting the video analysis frame rate requirements of application scenes that change greatly over time. The scheme can be applied to scenarios such as video analysis, image sampling, and video frame extraction. For example, in a pedestrian analysis scenario at a subway station, the number of pedestrians may vary greatly over time. During commuting peak hours, the number of pedestrians is large, and a small target frame skipping number can be obtained from the pedestrian count; during off-peak hours, the number of pedestrians is small, and a large target frame skipping number can be obtained. In this way, the frame rate of image frame analysis can be adjusted in real time according to the actual situation of the target scene, meeting the video analysis frame rate requirements of application scenes that change greatly over time.
The frame rate adjustment method provided by the embodiment of the present disclosure may be executed by a terminal device, a server, or another type of electronic device, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the frame rate adjustment method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server. For convenience of description, the execution subject of the frame rate adjustment method is hereinafter collectively referred to as the electronic device.
Fig. 1 shows a flowchart of a frame rate adjustment method according to an embodiment of the present disclosure, as shown in fig. 1, the frame rate adjustment method includes:
in step S11, a first image frame acquired for a target scene is acquired.
In the embodiment of the present disclosure, the target scene may be an application scene whose content changes over time. The first image frame may be the current image frame captured of the target scene. The electronic device may have a shooting function, shoot the target scene in real time, and acquire the first image frame from the captured video. Alternatively, the electronic device may acquire the first image frame transmitted by another device in a wired or wireless manner; for example, the electronic device may acquire a video stream transmitted by a camera and extract the first image frame from the video stream.
Step S12, detecting the first image frame, and determining the number of detection targets in the first image frame.
In the embodiment of the disclosure, after the first image frame is acquired, the first image frame may be detected, for example, the first image frame may be detected by using some target detection algorithms, such as a target detection algorithm of R-CNN, fast R-CNN, YOLO, or SSD, to obtain detection targets in the first image frame, and further count the number of the detection targets in the first image frame. The detection target may be determined according to an actual application scenario or a requirement, for example, in a target scenario in which pedestrians who enter or exit a subway station are analyzed, the detection target may be a pedestrian, and for example, in a target scenario in which vehicles on a traffic road are analyzed, the detection target may be a vehicle.
Step S13, determining a target frame skipping number based on the number of the detection targets, so as to adjust the frame rate of the image frame of the acquired target scene according to the target frame skipping number.
In the embodiment of the disclosure, the number of detection targets may reflect the actual situation of the target scene, so the target frame skipping number may be determined based on the number of detection targets in the first image frame. For example, the target frame skipping number may be obtained according to a preset linear relationship and the number of the detection targets, where the linear relationship may be a correspondence between the number of detection targets and the target frame skipping number, so that the target frame skipping number corresponding to a given number of detection targets can be determined using the preset linear relationship. The number of detection targets may be inversely related to the target frame skipping number: the larger the number of detection targets, the smaller the target frame skipping number, and the smaller the number of detection targets, the larger the target frame skipping number.
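The preset linear relationship described above can be sketched in Python as follows; the coefficients `a` and `b` are illustrative assumptions, not values given by the disclosure:

```python
def target_skip_frames(num_targets: int, a: float = 0.5, b: int = 10) -> int:
    """Map the number of detection targets to a target frame skipping
    number via a preset linear relationship: more targets means fewer
    skipped frames (a higher analysis frame rate). The slope `a` and
    intercept `b` are hypothetical tuning coefficients."""
    skip = b - a * num_targets
    return max(0, round(skip))  # a frame skipping number cannot be negative
```

With these assumed coefficients, a crowded frame (20 targets) yields a skip count of 0, while a nearly empty frame (2 targets) yields 9, matching the inverse relationship stated above.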
In some implementations, after the target frame skipping number is determined, the frame rate at which image frames of the target scene are acquired may be adjusted according to the target frame skipping number. For example, the target frame skipping number may be converted into a target frame rate, and the frame rate of current image frame acquisition may then be adjusted to the target frame rate, so that the frame rate of the video analysis can be adjusted in real time to meet the target scene's requirement on the frame rate of image frame acquisition.
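The conversion from a frame skipping number to a target frame rate can be sketched as below, assuming that one frame is analyzed out of every (skip + 1) captured frames; the base capture rate of 30 fps is a hypothetical value, not one given by the disclosure:

```python
def skip_to_frame_rate(skip: int, base_fps: float = 30.0) -> float:
    """Convert a target frame skipping number into an analysis frame rate:
    analyzing one frame out of every (skip + 1) captured frames."""
    return base_fps / (skip + 1)
```

For example, at an assumed 30 fps capture rate, a frame skipping number of 2 corresponds to analyzing 10 frames per second.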
In some implementations, after the target frame skipping number is determined, the image frame to be analyzed may be acquired from the video of the target scene according to the target frame skipping number. For example, when the target frame skipping number is 2, the image frame separated from the first image frame by 2 frames may be acquired; that is, assuming the first image frame is the 1st frame of the video, the 4th frame of the video is acquired. In this way, the image frames for subsequent analysis can be extracted from the video according to the target frame skipping number, meeting both the analysis accuracy and the computing capacity requirements.
In some implementation manners, when the electronic device and the video analysis device are not the same device, a target frame skipping number or a target frame rate determined according to the target frame skipping number may be sent to the video analysis device, and the video analysis device may obtain a next frame image of the first image frame according to the target frame skipping number or the target frame rate.
The embodiment of the disclosure can determine the target frame skipping number based on the number of the detection targets, so as to adjust the frame rate at which image frames of the target scene are acquired according to the target frame skipping number, thereby adaptively adjusting the frame rate and meeting the frame rate requirements of application scenes in which the number of detection targets changes greatly over time.
In some implementations, in order to make the target frame skipping number better fit the actual situation of the target scene, the determination based on the number of detection targets may proceed as follows: first determine an initial frame skipping number based on the number of detection targets, then compare the initial frame skipping number with a preset frame skipping threshold to obtain a comparison result, and finally determine the target frame skipping number according to the comparison result. For example, when the initial frame skipping number is smaller than the preset frame skipping threshold, the initial frame skipping number is set as the target frame skipping number; when the initial frame skipping number is greater than or equal to the preset frame skipping threshold, the preset frame skipping threshold is set as the target frame skipping number. Here, the preset frame skipping threshold may be set according to the actual application scenario or requirement; in some implementations, it may be set according to the accuracy requirements for image frame analysis. In this way, the magnitude of the target frame skipping number can be bounded by the preset frame skipping threshold, so that frame skipping according to the target frame skipping number still satisfies the requirements of the practical application scenario, for example the accuracy requirements of image frame analysis.
In an example, the preset frame skipping threshold includes a maximum frame skipping number and a minimum frame skipping number, and in the case that the initial frame skipping number is smaller than the maximum frame skipping number and larger than the minimum frame skipping number, the initial frame skipping number may be determined as the target frame skipping number. In the case where the initial number of frame hops is greater than or equal to the maximum number of frame hops, the maximum number of frame hops may be determined as the target number of frame hops. In the case where the initial number of frame hops is less than or equal to the minimum number of frame hops, the minimum number of frame hops may be determined as the target number of frame hops. Therefore, the initial frame skipping number can be subjected to saturation processing, and the target frame skipping number is limited through the maximum frame skipping number and the minimum frame skipping number, so that the target frame skipping number can better meet the requirement of an actual application scene.
Here, when determining the target frame skipping number, it may be calculated as in the following formula (1):

    j = j_max,  if j_0 ≥ j_max
    j = j_0,    if j_min < j_0 < j_max        formula (1);
    j = j_min,  if j_0 ≤ j_min

where j_max is the maximum frame skipping number, j_min is the minimum frame skipping number, j_0 is the initial frame skipping number, and j is the target frame skipping number.
In some examples, the maximum number of skipped frames may be determined based on the accuracy requirement for image frame analysis, such that the target number of skipped frames may meet the accuracy requirement for image frame analysis even when the maximum value is reached. The minimum number of hops frames may be determined based on available computing resources, such that the target number of hops frames may satisfy the limitations of the computing resources currently available to the electronic device even when the minimum value is reached. By the method, on the premise of meeting the requirement of video analysis precision, the computing resources of the electronic equipment are saved as much as possible, and the target frame skipping number can meet the actual situation of a target scene.
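The saturation processing of formula (1) amounts to clamping the initial frame skipping number. A minimal sketch, with the bounds passed in as parameters since the disclosure derives the maximum from accuracy requirements and the minimum from available computing resources:

```python
def clamp_skip(initial_skip: int, min_skip: int, max_skip: int) -> int:
    """Saturate the initial frame skipping number j_0 to the range
    [min_skip, max_skip], as in formula (1)."""
    if initial_skip >= max_skip:
        return max_skip
    if initial_skip <= min_skip:
        return min_skip
    return initial_skip
```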
The frame rate adjustment scheme provided by the embodiment of the present disclosure may also be used in an application scenario in which computing resources are dynamically allocated, for example, when the obtained number of target frame hops is large, computing resources allocated to a target scenario may be reduced, and when the obtained number of target frame hops is small, computing resources allocated to the target scenario may be increased, thereby implementing dynamic adjustment of computing resources of the target scenario, and improving the utilization rate of the computing resources.
In some implementations, when the target frame skipping number is determined based on the number of detection targets, a number change value of the detection targets in a second image frame relative to those in the first image frame may also be obtained, and the target frame skipping number may be determined according to the number of detection targets in the first image frame and the number change value. Here, the second image frame may be the next frame image after the first image frame in the video, and the number change value may be understood as the number of detection targets that may change over the time interval between the first image frame and the second image frame.
For example, the number change value of the detection targets at the current time may be predicted from statistics on the historical number changes of the detection targets. The historical number change of the detection targets may be the real-time change in the number of detection targets in the target scene counted over a preset historical statistics period; for example, the real-time number of detection targets in the target scene over a historical week may be recorded to form a correspondence between the number of detection targets and time, from which the historical number change can be determined. The number change value may, to some extent, reflect the number of detection targets in the second image frame, i.e., the future number of detection targets, so that the number of detection targets in the second image frame may be determined based on the number of detection targets in the first image frame and the number change value. For example, the two may be added to obtain the number of detection targets in the second image frame. Further, the target frame skipping number is determined according to the number of detection targets in the second image frame; for example, the target frame skipping number corresponding to that number may be determined according to the preset linear relationship.
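The history-based prediction can be sketched as a simple lookup; the hour-of-day granularity and the table values are purely illustrative stand-ins for the disclosure's historical statistics:

```python
def predict_change_from_history(history: dict, hour: int) -> int:
    """Predict the number change value of detection targets at the current
    time from a statistical history, simplified here to a mapping from
    hour-of-day to the typical count change observed at that hour."""
    return history.get(hour, 0)  # default to no change for unseen hours
```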
When the target frame skipping number is determined based on the number of detection targets and the change value of the number of detection targets, the calculation can be performed using the following formula (2):
j = γ * (n + Δn)    formula (2);
where j may be the target frame skipping number, γ may be a conversion coefficient, n may be the number of detection targets in the first image frame, and Δn may be the change value of the number of detection targets.
By determining the target frame skipping number based on the number of detection targets in the first image frame and the change value of that number, the short-term change in the number of detection targets in the target scene is taken into account, so the target frame skipping number thus determined can be more accurate.
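As a minimal sketch, formula (2) may be expressed in code as follows; the coefficient value 0.25 is an illustrative assumption, and the result is rounded to a whole number of frames:

```python
def target_skip_frames(n, delta_n, gamma=0.25):
    """Formula (2): j = gamma * (n + delta_n). gamma is a preset
    conversion coefficient (0.25 is illustrative); the result is
    rounded and floored at zero to give a usable frame count."""
    return max(0, round(gamma * (n + delta_n)))

j = target_skip_frames(n=10, delta_n=2)  # 0.25 * 12 = 3
```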
In some implementations, when acquiring the change value of the number of detection targets in the second image frame relative to the first image frame, the motion speed of the detection targets in the first image frame may first be determined. For example, the detection objects of the frame image preceding the first image frame may be acquired and matched against the detection objects of the first image frame; detection objects matched across the two frames may be regarded as the same detection object. The motion speed of each detection object can then be determined from its image positions in the previous frame image and the first image frame, that is, by performing object tracking on each detection object. Further, the change value of the number of detection targets may be predicted based on the motion speed of each detection target. For example, the change value may be positively correlated with the motion speed: the greater the speed of the detection targets, the greater the change value, and the change value may be determined based on the mean or median of the motion speeds of the detection targets. Predicting the change value from the motion speed of the detection targets yields a more accurate change value and hence a more accurate target frame skipping number, so that the motion speed of the detection targets is taken into account in frame rate adjustment, the frame rate changes more promptly and smoothly, and the accuracy of video analysis can be improved.
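The per-target speed estimation by frame-to-frame matching described above may be sketched as follows. Detections are modeled as hypothetical (id, x, y) tuples, and matching by shared id stands in for a real association step:

```python
import math

# Hedged sketch: estimate each detection target's motion speed from its
# image positions in the previous frame and the first image frame.
def estimate_speeds(prev_dets, curr_dets, dt):
    """prev_dets/curr_dets: lists of (id, x, y); dt: frame interval.
    Returns {id: speed} for targets matched across both frames."""
    prev_by_id = {d[0]: d for d in prev_dets}
    speeds = {}
    for obj_id, x, y in curr_dets:
        if obj_id in prev_by_id:
            _, px, py = prev_by_id[obj_id]
            # Speed = Euclidean displacement over the frame interval.
            speeds[obj_id] = math.hypot(x - px, y - py) / dt
    return speeds

prev = [(1, 0.0, 0.0), (2, 5.0, 5.0)]
curr = [(1, 3.0, 4.0), (2, 5.0, 5.0)]
v = estimate_speeds(prev, curr, dt=1.0)  # {1: 5.0, 2: 0.0}
```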
In some implementations, in order to make the determined motion speed more accurate, a first speed of the detection target in the first image frame may be determined according to a preset first motion detection algorithm, a second speed of the detection target in the first image frame may be determined according to a preset second motion detection algorithm, and the motion speed of the detection target in the first image frame may be further determined according to the first speed and the second speed.
Here, the preset first motion detection algorithm and the preset second motion detection algorithm may correspond to different target tracking algorithms; for example, the first motion detection algorithm may correspond to the Sort multi-target tracking algorithm, and the second motion detection algorithm may correspond to an optical flow tracking method. Because the target tracking principles of the two algorithms differ, the detection targets they determine, and the motion speeds obtained by tracking them, may also differ. For example, the Sort multi-target tracking algorithm may only detect and track complete detection targets in the first image frame, while in some cases an incomplete detection target may exist in the first image frame; for instance, for a pedestrian entering or leaving the frame, only one foot of the pedestrian may appear in the first image frame. It may be difficult to track such an incomplete detection target using the first motion detection algorithm, or the first speed determined by it may be inaccurate. Therefore, in addition to determining the first speed of a detection target in the first image frame according to the preset first motion detection algorithm, the preset second motion detection algorithm may also be used to determine the second speed of the detection target in the first image frame, and the motion speed of the detection target in the first image frame may then be determined from the first speed and the second speed, for example, as the average of the two.
Determining the motion speed of the detection targets in the first image frame using different motion detection algorithms can further improve the accuracy of the motion speed, making the change value of the number of detection targets determined from that motion speed more accurate.
In some implementations, in the case that the moving speed of the detection target in the first image frame is determined according to the first speed and the second speed, the first speed and the second speed may be fused, for example, for any one detection target, the first speed and the second speed of the detection target may be fused by using a preset fusion coefficient, so as to obtain a fusion speed of the detection target. The fusion coefficients corresponding to the first speed and the second speed may be different. Here, the fusion coefficient may be set according to an actual application scenario or a requirement, or may also be set according to the accuracy corresponding to the first speed and the second speed, for example, in a case that the accuracy of the first speed is greater than the accuracy of the second speed, the fusion coefficient of the first speed may be set to be greater than the fusion coefficient of the second speed, so that the more accurate first speed may occupy a larger proportion of the motion speed obtained after the fusion. By fusing the first speed and the second speed, the more accurate movement speed of the detected target can be obtained.
In one example, the speed of movement of the detection target may be as shown in equation (3):
s = α * s_track + β * s_LK    formula (3);
where s may represent the motion speed of the detection target, s_track may represent the first speed of the detection target, s_LK may represent the second speed of the detection target, α may represent the fusion coefficient corresponding to the first speed, and β may represent the fusion coefficient corresponding to the second speed.
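Formula (3) may be sketched in code as follows; the coefficient values 0.6 and 0.4 are illustrative assumptions reflecting a case where the tracker-derived speed is considered the more accurate of the two:

```python
def fuse_speeds(s_track, s_lk, alpha=0.6, beta=0.4):
    """Formula (3): s = alpha * s_track + beta * s_LK.
    alpha/beta are preset fusion coefficients; 0.6/0.4 illustrate
    giving the (assumed) more accurate first speed a larger share."""
    return alpha * s_track + beta * s_lk

s = fuse_speeds(10.0, 5.0)  # 0.6*10 + 0.4*5 = 8.0
```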
In some implementations, the detection targets detected and tracked by different motion detection algorithms may be inconsistent, such as the case mentioned above where some target tracking algorithms only detect and track complete detection targets in the first image frame. In this case, a suitable second motion detection algorithm may be selected to compensate for the deficiency of the first motion detection algorithm. For example, an optical flow tracking method may be selected as the second motion detection algorithm: it establishes a correspondence between pixel points corresponding to the same projection point (an object in the real scene) in the first image frame and the previous frame image through the gray values of the pixel points, and can therefore obtain not only complete detection targets in the first image frame but also incomplete ones. Accordingly, a detection target determined by the first motion detection algorithm may be called a first target, and a detection target determined by the second motion detection algorithm a second target, and the numbers of first targets and second targets may differ; that is, a detection target may be detected by both algorithms, or by only one of them. When a detection target is detected by only the first or only the second motion detection algorithm, it corresponds to only a first speed or only a second speed, which may then be taken directly as the motion speed of that detection target.
In the above implementation, the change value of the number of detection targets in the second image frame relative to the first image frame may be predicted based on the motion speed of the detection targets. In some implementations, when determining the change value based on motion speed, the motion speeds of the plurality of detection targets in the first image frame may be summed, that is, added together to obtain an accumulated speed, and the change value determined from the accumulated speed. The change value may be positively correlated with the accumulated speed: the larger the accumulated speed, the larger the change value. Since the change value can be measured by the accumulated speed of the plurality of detection targets in the first image frame, it can be determined quickly and accurately from the accumulated speed.
In some examples, the detection targets in the first image frame may be detected by different algorithms, for example, by the first motion detection algorithm and the second motion detection algorithm respectively: according to the preset first motion detection algorithm, at least one first target in the first image frame and a first speed of each first target are determined, and according to the preset second motion detection algorithm, at least one second target in the first image frame and a second speed of each second target are determined. The at least one first target may then be matched against the at least one second target to determine, in the first image frame, at least one successfully matched detection target and at least one detection target that fails to match; for example, a matching algorithm may be used for this step. A successfully matched detection target may be understood as a first target and a second target that represent the same detection target, while a detection target that fails to match corresponds to a single detection target found by only one algorithm.
Then, the motion speed of each successfully matched detection target may be determined from its first speed and second speed; for example, the two speeds may be fused using formula (3). The motion speed of each detection target that fails to match may be determined from its first speed or second speed; for example, the single corresponding speed may be used directly as its motion speed. In this way, the detection targets in the first image frame and the motion speed of each detection target can be determined more accurately.
Further, the motion speeds of the plurality of detection targets may be summed to obtain the accumulated speed, that is, the motion speeds of the at least one successfully matched detection target and the at least one detection target that fails to match may be weighted and summed. For example, different weight coefficients may be set for successfully matched detection targets and for detection targets that fail to match, and the motion speeds then weighted and summed to obtain the accumulated speed. In this way, different weights can be assigned to detection targets obtained under the different detection outcomes, so that the weighted accumulated speed is more accurate, and a more accurate change value of the number of detection targets can be obtained from it.
In one example, the value of the number change of the detection target may be determined according to the following formula (4):
Δn = λ * Σ s_matched + μ * Σ s_not_matched    formula (4);
where s_matched may be the motion speed of a successfully matched detection target, s_not_matched may be the motion speed of a detection target that fails to match, λ may be the weight coefficient corresponding to the successfully matched detection targets, μ may be the weight coefficient corresponding to the detection targets that fail to match, and Δn may represent the predicted change value of the number of detection targets.
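Formula (4) may be sketched in code as follows; the weight values 0.1 and 0.05 are illustrative assumptions, with matched targets weighted more heavily on the assumption that their fused speeds are more reliable:

```python
def predict_delta_n(matched_speeds, unmatched_speeds, lam=0.1, mu=0.05):
    """Formula (4): delta_n = lam * sum(matched) + mu * sum(unmatched).
    lam/mu are illustrative weight coefficients for successfully matched
    and unmatched detection targets respectively."""
    return lam * sum(matched_speeds) + mu * sum(unmatched_speeds)

dn = predict_delta_n([5.0, 3.0], [2.0])  # 0.1*8 + 0.05*2 = 0.9
```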
The frame rate adjustment method provided by the embodiments of the present disclosure is explained below with an example. Fig. 2 is a flowchart illustrating an example of a frame rate adjustment method according to an embodiment of the present disclosure, including:
in step S201, a first image frame acquired for a target scene is acquired.
Step S202, detecting the first image frame by using a preset first motion detection algorithm, and determining a first target in the first image frame and a first speed of the first target.
Step S203, detecting the first image frame by using a preset second motion detection algorithm to obtain a second target in the first image frame and a second speed of the second target.
Step S204, the first target and the second target are matched, and a detection target which is successfully matched and a detection target which is unsuccessfully matched in the first image frame are determined.
Step S205, determining the moving speed of the successfully matched detection target according to the first speed and the second speed corresponding to the successfully matched detection target, and determining the moving speed of the unsuccessfully matched detection target according to the first speed or the second speed corresponding to the unsuccessfully matched detection target.
In step S206, a number variation value of the detection target is predicted from the movement speed of the detection target.
Here, the detection target may include a detection target whose matching is successful and a detection target whose matching is failed.
Step S207, determining the number of target jump frames according to the number change value of the detection targets and the number of the detection targets in the first image frame.
The embodiments of the present disclosure can be applied to scenarios where the number of targets varies greatly over time, for example a target scene with many fast-moving pedestrians during commute peak hours. By dynamically adjusting the target frame skipping number, a lower value is obtained during the peak period, raising the analysis frame rate of the image frames and meeting the required video analysis accuracy. After the peak period, when the number of pedestrians decreases, a higher target frame skipping number is obtained, lowering the analysis frame rate and reducing the number of image frames analyzed while still meeting the accuracy requirement, thereby saving computing resources and reducing system power consumption.
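The flow of steps S201 to S207 may be sketched end to end as follows. All coefficient values are illustrative assumptions, per-target speeds from the two hypothetical detectors are given as {id: speed} dicts, and matching by shared id stands in for a real matching algorithm:

```python
# Hedged sketch of steps S201-S207: two detectors produce per-target
# speeds; matched targets get fused speeds (formula (3)), unmatched
# targets keep their single speed; the count-change prediction follows
# formula (4) and the frame-skip count follows formula (2).
def adjust_frame_skip(track_speeds, lk_speeds,
                      alpha=0.6, beta=0.4, lam=0.1, mu=0.05, gamma=0.25):
    matched_ids = track_speeds.keys() & lk_speeds.keys()
    # Formula (3) for targets seen by both algorithms.
    matched = [alpha * track_speeds[i] + beta * lk_speeds[i]
               for i in matched_ids]
    # Targets seen by only one algorithm keep their single speed.
    unmatched = ([track_speeds[i] for i in track_speeds.keys() - matched_ids] +
                 [lk_speeds[i] for i in lk_speeds.keys() - matched_ids])
    # Formula (4): predicted change in the target count.
    delta_n = lam * sum(matched) + mu * sum(unmatched)
    n = len(matched) + len(unmatched)  # targets in the first image frame
    # Formula (2): target frame skipping number.
    return max(0, round(gamma * (n + delta_n)))

j = adjust_frame_skip({1: 10.0, 2: 4.0}, {1: 5.0, 3: 2.0})  # 1
```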
It is understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; for brevity, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a frame rate adjustment apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any frame rate adjustment method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method sections, which are not repeated here.
Fig. 3 is a block diagram illustrating a frame rate adjustment apparatus according to an embodiment of the present disclosure, and as shown in fig. 3, the apparatus includes:
an obtaining module 31, configured to obtain a first image frame acquired for a target scene;
a detection module 32, configured to detect the first image frame and determine the number of detection targets in the first image frame;
the determining module 33 is configured to determine a target frame skipping number based on the number of the detection targets, so as to adjust the frame rate of the image frame of the acquired target scene according to the target frame skipping number.
In some possible implementations, the detection module 32 is further configured to obtain a number variation value of a detection target in a second image frame relative to a detection target in the first image frame, where the second image frame is a next frame image of the first image frame;
the determining module 33 is configured to determine the target frame skipping number according to the number of the detection targets in the first image frame and the number change value.
In some possible implementations, the detection module 32 is configured to determine a motion speed of a detection target in the first image frame; predicting a number variation value of the detection target based on a movement speed of the detection target.
In some possible implementations, the detecting module 32 is configured to determine a first target in the first image frame and a first speed of the first target according to a preset first motion detection algorithm; determining a second target in the first image frame and a second speed of the second target according to a preset second motion detection algorithm; matching at least one first target with at least one second target, and determining at least one detection target with successful matching and at least one detection target with failed matching; and determining the movement speed of the detection target which is successfully matched according to the first speed and the second speed of the detection target which is successfully matched, and determining the movement speed of the detection target which is unsuccessfully matched according to the first speed or the second speed of the detection target which is unsuccessfully matched.
In some possible implementations, the detection module 32 is configured to fuse the first speed and the second speed to obtain a moving speed of the detection target successfully matched.
In some possible implementations, the detection module 32 is configured to sum the motion speeds of the plurality of detection targets in the first image frame to obtain an accumulated speed, and to determine the number change value based on the accumulated speed.
In some possible implementations, the determining module 33 is configured to perform a weighted summation of the motion speed of at least one successfully matched detection target and the motion speed of at least one detection target that fails to match, to obtain the accumulated speed, where the weight coefficients corresponding to successfully matched detection targets and detection targets that fail to match are different.
In some possible implementations, the determining module 33 is configured to determine an initial number of skip frames based on the number of the detection targets; comparing the initial frame skipping number with a preset frame skipping threshold value to obtain a comparison result; and determining the target frame skipping number according to the comparison result.
In some possible implementations, the determining module 33 is configured to determine the initial frame skipping number as the target frame skipping number when the initial frame skipping number is smaller than the maximum frame skipping number and larger than the minimum frame skipping number; determining the maximum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is greater than or equal to the maximum frame skipping number; and determining the minimum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is less than or equal to the minimum frame skipping number.
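The clamping logic above may be sketched as follows (the threshold values in the example call are illustrative):

```python
def clamp_skip_frames(initial_skip, min_skip, max_skip):
    """Clamp the initial frame skipping number to [min_skip, max_skip]:
    the maximum bound protects video analysis accuracy, and the minimum
    bound keeps the analysis load within available computing resources."""
    if initial_skip >= max_skip:
        return max_skip
    if initial_skip <= min_skip:
        return min_skip
    return initial_skip

clamp_skip_frames(7, 1, 5)  # 5 (capped at the maximum)
clamp_skip_frames(0, 1, 5)  # 1 (raised to the minimum)
clamp_skip_frames(3, 1, 5)  # 3 (within bounds, kept as-is)
```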
In some possible implementations, the maximum frame skipping number is determined based on the accuracy requirement for image frame analysis, and the minimum frame skipping number is determined based on available computing resources.
In some possible implementations, the apparatus further includes: and the analysis module is used for acquiring an image frame to be analyzed in the video of the target scene according to the target frame skipping number.
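Selecting image frames to analyze according to the target frame skipping number may be sketched as follows. The interpretation that a skip count of j means analyzing every (j + 1)-th frame is an assumption for illustration:

```python
def frames_to_analyze(frame_indices, skip):
    """Select every (skip + 1)-th frame for analysis; with skip == 2,
    frames 0, 3, 6, ... are analyzed and two frames are skipped
    between each analyzed pair."""
    return [f for i, f in enumerate(frame_indices) if i % (skip + 1) == 0]

frames_to_analyze(list(range(10)), skip=2)  # [0, 3, 6, 9]
```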
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable code, and when the computer readable code is run on an apparatus, a processor in the apparatus executes instructions for implementing the frame rate adjustment method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed, cause a computer to perform the operations of the frame rate adjustment method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 4, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 5 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 5, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows ServerTM), the graphical-user-interface-based operating system from Apple Inc. (Mac OS XTM), the multi-user multi-process computer operating system (Unix), the free and open-source Unix-like operating system (LinuxTM), the open-source Unix-like operating system (FreeBSDTM), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions and implement aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. A method for frame rate adjustment, comprising:
acquiring a first image frame captured for a target scene;
detecting the first image frame, and determining the number of detection targets in the first image frame;
and determining a target frame skipping number based on the number of the detection targets, and adjusting, according to the target frame skipping number, a frame rate at which image frames of the target scene are acquired.
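The core idea of claim 1 can be illustrated with a minimal sketch; the detector interface and the inverse mapping from target count to skip count below are hypothetical choices and not part of the patent text:

```python
# Illustrative sketch (not from the patent): map the number of detection
# targets in a frame to a frame-skipping number. A busier scene is
# analyzed more often (fewer skipped frames); an idle scene less often.
# The base_skip value and the inverse-proportional mapping are
# hypothetical choices for illustration only.

def target_skip_count(num_targets: int, base_skip: int = 8) -> int:
    """More targets -> smaller skip; always skip at least 1 frame."""
    return max(1, base_skip // (num_targets + 1))

# An empty scene allows the largest skip; a crowded scene forces
# near-frame-by-frame analysis.
assert target_skip_count(0) == 8   # idle scene: skip many frames
assert target_skip_count(7) == 1   # crowded scene: skip almost none
```

The acquisition frame rate is then adjusted so that only one frame in every `skip + 1` is captured or analyzed.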
2. The method of claim 1, further comprising:
acquiring a number change value of detection targets in a second image frame relative to the detection targets in the first image frame, wherein the second image frame is the frame next to the first image frame;
wherein the determining a target frame skipping number based on the number of the detection targets comprises: determining the target frame skipping number according to the number of the detection targets in the first image frame and the number change value.
3. The method of claim 2, wherein the acquiring a number change value of detection targets in the second image frame relative to the detection targets in the first image frame comprises:
determining a movement speed of a detection target in the first image frame;
and predicting the number change value of the detection targets based on the movement speed of the detection target.
4. The method of claim 3, wherein the determining a movement speed of a detection target in the first image frame comprises:
determining a first target in the first image frame and a first speed of the first target according to a preset first motion detection algorithm;
determining a second target in the first image frame and a second speed of the second target according to a preset second motion detection algorithm;
matching at least one first target with at least one second target, and determining at least one successfully matched detection target and at least one unsuccessfully matched detection target;
and determining the movement speed of a successfully matched detection target according to the first speed and the second speed of that detection target, and determining the movement speed of an unsuccessfully matched detection target according to the first speed or the second speed of that detection target.
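One way to implement the matching step of claim 4 is to pair the detections of the two algorithms by bounding-box overlap; the box format, the greedy strategy, and the IoU threshold below are hypothetical choices, not from the patent:

```python
# Sketch of claim 4's matching step (not from the patent): pair targets
# found by two motion-detection algorithms by intersection-over-union,
# then split them into matched and unmatched sets. Boxes are
# (x1, y1, x2, y2); the 0.5 threshold is a hypothetical choice.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_targets(first, second, thresh=0.5):
    """Greedily match first-algorithm boxes to second-algorithm boxes."""
    matched, unmatched_first = [], []
    used = set()
    for i, box_a in enumerate(first):
        best_j, best = None, thresh
        for j, box_b in enumerate(second):
            if j in used:
                continue
            score = iou(box_a, box_b)
            if score >= best:
                best_j, best = j, score
        if best_j is not None:
            used.add(best_j)
            matched.append((i, best_j))
        else:
            unmatched_first.append(i)
    unmatched_second = [j for j in range(len(second)) if j not in used]
    return matched, unmatched_first, unmatched_second
```

A production system might replace the greedy loop with optimal bipartite assignment, but the matched/unmatched split is the same.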
5. The method of claim 4, wherein the determining the movement speed of a successfully matched detection target according to the first speed and the second speed of that detection target comprises:
fusing the first speed and the second speed to obtain the movement speed of the successfully matched detection target.
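The fusion in claim 5 can be as simple as a weighted average of the two estimates; the equal default weights below are a hypothetical choice (e.g. reflecting equal confidence in the two detection algorithms), not specified by the patent:

```python
# Sketch of claim 5's speed fusion (not from the patent): combine the
# first-algorithm and second-algorithm speed estimates for a matched
# target. The weights are hypothetical; equal weighting is the default.

def fuse_speeds(first_speed: float, second_speed: float,
                w1: float = 0.5, w2: float = 0.5) -> float:
    """Weighted fusion of two speed estimates for one matched target."""
    return w1 * first_speed + w2 * second_speed

assert fuse_speeds(2.0, 4.0) == 3.0  # equal-weight average
```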
6. The method of claim 4 or 5, wherein the predicting the number change value of the detection targets based on the movement speed of the detection target comprises:
summing the movement speeds of a plurality of detection targets in the first image frame to obtain an accumulated speed;
and determining the number change value based on the accumulated speed.
7. The method of claim 6, wherein the summing the movement speeds of the plurality of detection targets in the first image frame to obtain an accumulated speed comprises:
performing a weighted summation of the movement speed of at least one successfully matched detection target and the movement speed of at least one unsuccessfully matched detection target to obtain the accumulated speed, wherein different weight coefficients correspond to the successfully matched detection targets and the unsuccessfully matched detection targets.
8. The method according to any one of claims 1 to 7, wherein the determining a target frame skipping number based on the number of the detection targets comprises:
determining an initial frame skipping number based on the number of the detection targets;
comparing the initial frame skipping number with a preset frame skipping threshold value to obtain a comparison result;
and determining the target frame skipping number according to the comparison result.
9. The method of claim 8, wherein the preset frame skipping threshold comprises a maximum frame skipping number and a minimum frame skipping number, and wherein the determining the target frame skipping number according to the comparison result comprises:
determining the initial frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is smaller than the maximum frame skipping number and larger than the minimum frame skipping number;
determining the maximum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is greater than or equal to the maximum frame skipping number;
and determining the minimum frame skipping number as the target frame skipping number under the condition that the initial frame skipping number is less than or equal to the minimum frame skipping number.
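The three cases of claim 9 amount to clamping the initial frame skipping number to the preset range; this sketch is illustrative and the example bounds are hypothetical:

```python
# Sketch of claim 9 (not from the patent): clamp the initial frame
# skipping number to [min_skip, max_skip].

def clamp_skip(initial: int, min_skip: int, max_skip: int) -> int:
    """Return the target frame skipping number per claim 9's three cases."""
    if initial >= max_skip:
        return max_skip      # cap at the maximum frame skipping number
    if initial <= min_skip:
        return min_skip      # floor at the minimum frame skipping number
    return initial           # otherwise use the initial value as-is

assert clamp_skip(5, 1, 10) == 5    # within range: unchanged
assert clamp_skip(20, 1, 10) == 10  # too large: capped
assert clamp_skip(0, 1, 10) == 1    # too small: floored
```

Per claim 10, the maximum bound would come from accuracy requirements and the minimum bound from available computing resources.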
10. The method of claim 9, wherein the maximum frame skipping number is determined based on accuracy requirements for image frame analysis, and the minimum frame skipping number is determined based on available computing resources.
11. The method according to any one of claims 1 to 10, further comprising:
and acquiring, according to the target frame skipping number, an image frame to be analyzed from a video of the target scene.
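Claim 11's sampling step can be sketched by selecting every (skip + 1)-th frame index from the video; the index-based interface is a hypothetical illustration, not the patent's implementation:

```python
# Sketch of claim 11 (not from the patent): given the target frame
# skipping number, pick which frame indices of the video to analyze
# (analyze one frame, skip `skip` frames, repeat).

def frames_to_analyze(total_frames: int, skip: int) -> list:
    """Indices of frames to analyze when skipping `skip` frames each time."""
    return list(range(0, total_frames, skip + 1))

assert frames_to_analyze(10, 2) == [0, 3, 6, 9]
```

With a skip of 0 every frame is analyzed; as the skip grows, the effective analysis frame rate drops proportionally.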
12. A frame rate adjustment apparatus, comprising:
the acquisition module is used for acquiring a first image frame acquired aiming at a target scene;
the detection module is used for detecting the first image frame and determining the number of detection targets in the first image frame;
and the determining module is used for determining a target frame skipping number based on the number of the detection targets, so as to adjust, according to the target frame skipping number, a frame rate at which image frames of the target scene are acquired.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 11.
14. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 11.
CN202110996537.8A 2021-08-27 2021-08-27 Frame rate adjusting method and device, electronic equipment and storage medium Pending CN113792622A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110996537.8A CN113792622A (en) 2021-08-27 2021-08-27 Frame rate adjusting method and device, electronic equipment and storage medium
PCT/CN2022/107700 WO2023024791A1 (en) 2021-08-27 2022-07-25 Frame rate adjustment method and apparatus, electronic device, storage medium, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110996537.8A CN113792622A (en) 2021-08-27 2021-08-27 Frame rate adjusting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113792622A true CN113792622A (en) 2021-12-14

Family

ID=79182248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110996537.8A Pending CN113792622A (en) 2021-08-27 2021-08-27 Frame rate adjusting method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113792622A (en)
WO (1) WO2023024791A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114679607A (en) * 2022-03-22 2022-06-28 深圳云天励飞技术股份有限公司 Video frame rate control method and device, electronic equipment and storage medium
CN114913471A (en) * 2022-07-18 2022-08-16 深圳比特微电子科技有限公司 Image processing method and device and readable storage medium
WO2023024791A1 (en) * 2021-08-27 2023-03-02 上海商汤智能科技有限公司 Frame rate adjustment method and apparatus, electronic device, storage medium, and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101448157A (en) * 2008-12-30 2009-06-03 杭州华三通信技术有限公司 Video encoding method and video encoder
CN102457728B (en) * 2010-10-27 2013-12-25 杭州华三通信技术有限公司 Method and device for video image encoding
JP7015183B2 (en) * 2018-02-13 2022-02-02 キヤノン株式会社 Image coding device and its control method and program
CN111415347B (en) * 2020-03-25 2024-04-16 上海商汤临港智能科技有限公司 Method and device for detecting legacy object and vehicle
CN112040090A (en) * 2020-08-10 2020-12-04 浙江大华技术股份有限公司 Video stream processing method and device, electronic equipment and storage medium
CN111741305B (en) * 2020-08-27 2020-11-24 科大讯飞(苏州)科技有限公司 Video coding method and device, electronic equipment and readable storage medium
CN112351337B (en) * 2021-01-04 2022-02-01 腾讯科技(深圳)有限公司 Video quality inspection method and device, computer equipment and storage medium
CN113792622A (en) * 2021-08-27 2021-12-14 深圳市商汤科技有限公司 Frame rate adjusting method and device, electronic equipment and storage medium
CN114741185A (en) * 2022-03-25 2022-07-12 南京大学 Edge computing system for multi-target video monitoring and working method thereof

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024791A1 (en) * 2021-08-27 2023-03-02 上海商汤智能科技有限公司 Frame rate adjustment method and apparatus, electronic device, storage medium, and program
CN114679607A (en) * 2022-03-22 2022-06-28 深圳云天励飞技术股份有限公司 Video frame rate control method and device, electronic equipment and storage medium
CN114679607B (en) * 2022-03-22 2024-03-05 深圳云天励飞技术股份有限公司 Video frame rate control method and device, electronic equipment and storage medium
CN114913471A (en) * 2022-07-18 2022-08-16 深圳比特微电子科技有限公司 Image processing method and device and readable storage medium
CN114913471B (en) * 2022-07-18 2023-09-12 深圳比特微电子科技有限公司 Image processing method, device and readable storage medium

Also Published As

Publication number Publication date
WO2023024791A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
US20210326587A1 (en) Human face and hand association detecting method and a device, and storage medium
CN109801270B (en) Anchor point determining method and device, electronic equipment and storage medium
CN108010060B (en) Target detection method and device
CN113792622A (en) Frame rate adjusting method and device, electronic equipment and storage medium
CN107480665B (en) Character detection method and device and computer readable storage medium
CN112001321A (en) Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium
CN111881956A (en) Network training method and device, target detection method and device and electronic equipment
CN112465843A (en) Image segmentation method and device, electronic equipment and storage medium
CN111881827B (en) Target detection method and device, electronic equipment and storage medium
CN111104920A (en) Video processing method and device, electronic equipment and storage medium
CN109344703B (en) Object detection method and device, electronic equipment and storage medium
CN113762169A (en) People flow statistical method and device, electronic equipment and storage medium
CN111523599B (en) Target detection method and device, electronic equipment and storage medium
US20220383517A1 (en) Method and device for target tracking, and storage medium
CN108171222B (en) Real-time video classification method and device based on multi-stream neural network
CN112884809A (en) Target tracking method and device, electronic equipment and storage medium
CN111523346A (en) Image recognition method and device, electronic equipment and storage medium
CN111680646A (en) Motion detection method and device, electronic device and storage medium
CN112330717A (en) Target tracking method and device, electronic equipment and storage medium
CN109919126B (en) Method and device for detecting moving object and storage medium
CN110121115B (en) Method and device for determining wonderful video clip
CN113506325B (en) Image processing method and device, electronic equipment and storage medium
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN113506324B (en) Image processing method and device, electronic equipment and storage medium
CN112330721B (en) Three-dimensional coordinate recovery method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40056761

Country of ref document: HK