CN113766175A - Target monitoring method, device, equipment and storage medium

Info

Publication number: CN113766175A
Application number: CN202010500284.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 刘干
Assignee: Hangzhou Ezviz Network Co Ltd
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Abstract

The application discloses a target monitoring method, apparatus, device and storage medium, belonging to the technical field of video surveillance. The method comprises the following steps: determining first position information of a target in a current video frame; if it is determined based on the first position information that the target frame jitters, determining second position information of the target; and determining the distance between the target and the center of the current video frame based on the second position information. Because the second position information is obtained by correcting the position deviation produced by the jitter, the distance determined from it is more accurate. If the distance is greater than a specified distance threshold, the target may be near the edge of the monitoring picture; the pose of the camera can then be adjusted so that the target falls within the central range of the adjusted monitoring picture, making it convenient for the user to view the target. In this way, erroneous control of the camera can be avoided as far as possible.

Description

Target monitoring method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of video surveillance technologies, and in particular, to a method, an apparatus, a device, and a storage medium for monitoring a target.
Background
At present, video surveillance technology is widely used in daily life. When monitoring a target, the target in a video frame is usually first located by image detection and then followed with a tracking algorithm, so that the target is continuously tracked and monitored. In some scenes, if the target is found to be at the edge of the monitoring picture, the user may not be able to see it clearly; the camera therefore generally needs to be controlled according to the monitored situation.
However, due to defects of the tracking algorithm itself or unstable installation of the hardware device, the tracking frame of the target may jitter, which makes the determined position of the target inaccurate and may in turn cause the camera to be controlled incorrectly.
Disclosure of Invention
The application provides a target monitoring method, apparatus, device and storage medium, which can solve the problem in the related art that inaccurate positioning leads to erroneous control of the camera. The technical solution is as follows:
in one aspect, a target monitoring method is provided, and the method includes:
determining position information of a target frame where a target in a current video frame is located to obtain first position information of the target;
if the target frame is determined to have jitter based on the first position information, determining second position information of the target, wherein the second position information is obtained by correcting position deviation generated by jitter;
determining the distance between the target and the center of the current video frame according to the second position information;
and if the distance between the target and the center of the current video frame is greater than a specified distance threshold, adjusting the pose of the camera so that the target is in the central range of the adjusted monitoring picture.
In one possible implementation manner of the present application, the method further includes:
acquiring position information of the target in a video frame before the current video frame to obtain third position information;
acquiring size information of the current video frame to obtain first size information;
determining a rate of change of position of the target based on the first position information, the third position information, and the first size information;
determining whether jitter is present in the target box based on the rate of change of position.
In one possible implementation manner of the present application, the first position information includes a first abscissa and a first ordinate, the third position information includes a second abscissa and a second ordinate, and the first size information includes a first width and a first height;
the determining a rate of change of position of the target based on the first location information, the third location information, and the first size information comprises:
determining an abscissa change amount based on the first abscissa and the second abscissa, and determining an ordinate change amount based on the first ordinate and the second ordinate;
dividing the abscissa variation by the first width to obtain an abscissa variation rate, and dividing the ordinate variation by the first height to obtain an ordinate variation rate;
and determining the abscissa change rate and the ordinate change rate as the position change rate of the target.
In one possible implementation manner of the present application, the determining whether jitter exists in the target frame based on the position change rate includes:
and if the abscissa change rate is greater than a specified change rate threshold value and/or the ordinate change rate is greater than the specified change rate threshold value, determining that the target frame has jitter.
In a possible implementation manner of the present application, the determining the second location information of the target includes:
obtaining position information of the target in a plurality of video frames before the current video frame to obtain a plurality of fourth position information;
determining average position information of the plurality of fourth position information;
determining the average location information as the second location information.
In one possible implementation manner of the present application, before determining whether there is jitter in the target frame based on the position change rate, the method further includes:
determining a correction factor based on the first position information;
multiplying the correction coefficient by the position change rate to obtain a corrected position change rate;
the determining whether jitter is present in the target frame based on the rate of change of position comprises:
and determining whether the target frame has jitter or not based on the corrected position change rate.
In one possible implementation manner of the present application, the determining a correction coefficient based on the first position information includes:
if the distance between the position of the target and the center of the current video frame is determined to be within a first distance range based on the first position information, determining the correction coefficient according to the first position information and a first reference model, wherein the correction coefficient output by the first reference model is negatively correlated with the position information;
if the distance between the position of the target and the center of the current video frame is determined to be within a second distance range based on the first position information, determining the area occupied by the target, and determining the correction coefficient based on the area and a second reference model, wherein the correction coefficient output by the second reference model is negatively correlated with the area;
wherein the distance within the first distance range is greater than the distance within the second distance range.
In one possible implementation manner of the present application, in a case that at least a specified threshold number of video frames precede the current video frame, the method further includes:
acquiring a position information distribution model, wherein the position information distribution model is determined based on the position information of the target in a plurality of video frames;
inputting the first position information into the position information distribution model to obtain an output result;
and if the output result does not accord with the distribution rule of the position information distribution model, determining that the target frame shakes.
In another aspect, an object monitoring apparatus is provided, the apparatus comprising:
the first determining module is used for determining the position information of a target frame where a target in a current video frame is located to obtain first position information of the target;
a second determining module, configured to determine second position information of the target if it is determined that the target frame shakes based on the first position information, where the second position information is obtained by correcting a position deviation caused by shaking;
a third determining module, configured to determine, according to the second location information, a distance between the target and a center of the current video frame;
and the adjusting module is used for adjusting the pose of the camera if the distance between the target and the center of the current video frame is greater than a specified distance threshold value, so that the target is in the central range of the adjusted monitoring picture.
In one possible implementation manner of the present application, the second determining module is further configured to:
acquiring position information of the target in a video frame before the current video frame to obtain third position information;
acquiring size information of the current video frame to obtain first size information;
determining a rate of change of position of the target based on the first position information, the third position information, and the first size information;
determining whether jitter is present in the target box based on the rate of change of position.
In one possible implementation manner of the present application, the first position information includes a first abscissa and a first ordinate, the third position information includes a second abscissa and a second ordinate, and the first size information includes a first width and a first height; the second determination module is to:
determining an abscissa change amount based on the first abscissa and the second abscissa, and determining an ordinate change amount based on the first ordinate and the second ordinate;
dividing the abscissa variation by the first width to obtain an abscissa variation rate, and dividing the ordinate variation by the first height to obtain an ordinate variation rate;
and determining the abscissa change rate and the ordinate change rate as the position change rate of the target.
In one possible implementation manner of the present application, the second determining module is configured to:
and if the abscissa change rate is greater than a specified change rate threshold value and/or the ordinate change rate is greater than the specified change rate threshold value, determining that the target frame has jitter.
In one possible implementation manner of the present application, the second determining module is configured to:
obtaining position information of the target in a plurality of video frames before the current video frame to obtain a plurality of fourth position information;
determining average position information of the plurality of fourth position information;
determining the average location information as the second location information.
In one possible implementation manner of the present application, the second determining module is configured to:
determining a correction factor based on the first position information;
multiplying the correction coefficient by the position change rate to obtain a corrected position change rate;
and determining whether the target frame has jitter or not based on the corrected position change rate.
In one possible implementation manner of the present application, the second determining module is configured to:
if the distance between the position of the target and the center of the current video frame is determined to be within a first distance range based on the first position information, determining the correction coefficient according to the first position information and a first reference model, wherein the correction coefficient output by the first reference model is negatively correlated with the position information;
if the distance between the position of the target and the center of the current video frame is determined to be within a second distance range based on the first position information, determining the area occupied by the target, and determining the correction coefficient based on the area and a second reference model, wherein the correction coefficient output by the second reference model is negatively correlated with the area;
wherein the distance within the first distance range is greater than the distance within the second distance range.
In one possible implementation manner of the present application, in a case that at least a specified threshold number of video frames precede the current video frame, the second determining module is configured to:
acquiring a position information distribution model, wherein the position information distribution model is determined based on the position information of the target in a plurality of video frames;
inputting the first position information into the position information distribution model to obtain an output result;
and if the output result does not accord with the distribution rule of the position information distribution model, determining that the target frame shakes.
In another aspect, an object monitoring apparatus configured in a monitoring device is provided, the apparatus including:
the camera is used for collecting video frames;
a processor configured to perform target monitoring according to the method of any one of the above aspects based on the video frame acquired by the camera.
In another aspect, a monitoring device is provided, where the monitoring device includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus, the memory is used to store a computer program, and the processor is used to execute the program stored in the memory to implement the steps of the target monitoring method.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned object monitoring method.
In another aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform the steps of the object monitoring method described above.
The technical scheme provided by the application can bring at least the following beneficial effects:
first position information of a target in a current video frame is determined; if it is determined based on the first position information that the target frame jitters, the first position information deviates to some extent from the actual position of the target. In this case, second position information of the target may be determined, and the distance between the target and the center of the current video frame may be determined based on the second position information. Because the second position information is obtained by correcting the position deviation produced by the jitter, the distance determined from it is more accurate. Then, whether this distance is greater than a specified distance threshold is judged; if so, the target may be near the edge of the monitoring picture, and the pose of the camera can be adjusted so that the target falls within the central range of the adjusted monitoring picture, making it convenient for the user to view the target. In this way, erroneous control of the camera can be avoided as far as possible.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a flowchart of a method for monitoring an object according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating a relationship between a correction coefficient and position information according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a relationship between a correction factor and an area according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of another method for target monitoring provided by embodiments of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for monitoring an object according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a monitoring device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the target monitoring method provided by the embodiment of the present application in detail, an implementation environment provided by the embodiment of the present application is introduced.
The target monitoring method provided by the embodiment of the application can be executed by a monitoring device. As an example, the monitoring device may be connected with a camera, or the monitoring device may itself be equipped with a camera, so as to monitor the target through the camera. In addition, the monitoring device can be connected with the camera through a driving apparatus, so that the monitoring device can drive the camera to rotate by controlling the driving apparatus. For example, in some embodiments, the monitoring device may be an imaging device.
As an example, the monitoring device may be configured in an automatic driving device, which may be an AGV (Automated Guided Vehicle), a robot, or the like, for example.
Having introduced the implementation environment of the embodiments of the present application, the target monitoring method provided by the embodiments is explained in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a target monitoring method according to an embodiment of the present application, where the method is applied to the monitoring device. The method can comprise the following implementation steps:
step 101: and determining the position information of a target frame where the target in the current video frame is located to obtain the first position information of the target.
The current video frame refers to a currently acquired video frame, and the current video frame may be acquired by a camera.
The first position information indicates the position of the target in the current video frame, and may be determined by performing image detection on the current video frame or by tracking the target. As an example, the first position information may include the coordinates of the coordinate point at which the target is located, for example a first abscissa and a first ordinate. The coordinate point may be the center point of the target frame where the target is located, or some other point of the target frame, for example any one of its four vertices, which is not limited in the embodiments of the present application.
The coordinate system for the coordinate point may be established based on a specified point in the current video frame, and the specified point may be set according to actual requirements; for example, it may be the top-left vertex of the current video frame.
After the monitoring device acquires the current video frame, the position information of the target can be detected. As an example, if the current video frame is the first video frame, the monitoring device may detect a target frame of a target in the current video frame through a detection model, so as to determine first position information of the target. In one possible implementation, the current video frame includes a plurality of targets, in which case the monitoring device may select one target from the plurality of targets, illustratively, the monitoring device may select the one target with the largest area, or may select the one target with the fastest moving speed, or may select the one target with a specified gesture and/or posture, and so on.
The detection model may be a model that is trained in advance and can be used for image detection of the target, for example, the detection model may be a neural network model, and the embodiment of the present application is not limited thereto.
As another example, if the current video frame is not the first video frame, the monitoring device may determine first position information of a target in the current video frame through a target tracking model, where the target tracking model may be a model that is trained in advance and can be used for tracking the target. That is, after a certain target is determined in the first video frame, the target may be tracked subsequently to determine the position information of the target frame corresponding to the target in the subsequent video frames.
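As an illustrative aid (not part of the patent itself), the detect-or-track decision of step 101 can be sketched as follows in Python; the detector, tracker and largest-area selection rule are placeholders standing in for the pre-trained models described above:
    # Sketch of step 101: detect on the first frame, track afterwards.
    # `detector` and `tracker` are hypothetical stand-ins for the pre-trained
    # detection model and target tracking model mentioned in the text.
    def first_position(frame, detector, tracker, is_first_frame):
        if is_first_frame:
            boxes = detector(frame)  # target frames of all detected targets
            # One selection rule named in the text: pick the largest target.
            target = max(boxes, key=lambda b: b["w"] * b["h"])
            tracker.init(frame, target)
        else:
            target = tracker.update(frame)  # tracked target frame
        # First position information: here, the center point of the target frame.
        return (target["x"] + target["w"] / 2, target["y"] + target["h"] / 2)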
Step 102: and if the target frame is determined to have jitter based on the first position information, determining second position information of the target, wherein the second position information is obtained by correcting position deviation generated by the jitter.
Jitter of the target frame means that the position of the target frame does not match the actual position of the target.
In some embodiments, the target frame may jitter because of defects of the detection model or the target tracking model itself, because of the hardware device, or because of a large scale change, shape change, motion blur, occlusion or the like of the target. For example, if the monitoring device is configured in an automatic driving device and the camera shakes due to the movement of the automatic driving device while the target is being photographed, the position of the target frame in the current video frame may deviate from the actual position; for another example, the target frame may also deviate from the actual position while the camera itself is in motion.
Once the target frame is unstable, the control commands subsequently issued to the camera may also be unstable: for example, when the camera should tilt up, an unstable target frame may instead trigger a tilt-down command, which degrades the user experience. It is therefore necessary to determine whether the target frame jitters. As an example, whether the target frame jitters may be determined based on the first position information, and the specific implementation may include the following ways:
the first implementation mode comprises the following steps: and acquiring the position information of the target in the previous video frame of the current video frame to obtain third position information, and acquiring the size information of the current video frame to obtain first size information. Determining a position change rate of the object based on the first position information, the third position information and the first size information, and determining whether there is jitter in the object frame based on the position change rate.
Wherein the third position information may include a second abscissa and a second ordinate.
The first size information may include a first width and a first height. It is understood that, as long as the imaging parameters are unchanged, the size information of each video frame does not change.
As an example, the position change rate may be used to indicate a change in the position of the target in a current video frame relative to the position of the target in a previous video frame.
As an example, the monitoring device may record the position information of the object in each video frame, so that when it is necessary to determine whether there is jitter in the object frame, the position information of the object in the previous video frame of the current video frame may be obtained, which is referred to as third position information for convenience of description. In addition, the monitoring device also acquires the size information of the current video frame, namely acquires the first size information. Then, based on the first position information, the third position information and the first size information, a position change rate of the object is determined, and since the position change rate can be used to indicate a change of the position of the object in the current video frame relative to the position of the object in the previous video frame, it can be determined whether jitter exists in the object frame according to the position change rate.
As an example, determining a particular implementation of the rate of change of position of the target based on the first position information, the third position information, and the first size information may include: an abscissa change amount is determined based on the first abscissa and the second abscissa, and an ordinate change amount is determined based on the first ordinate and the second ordinate. The abscissa variation is divided by the first width to obtain an abscissa variation rate, and the ordinate variation is divided by the first height to obtain an ordinate variation rate. The abscissa change rate and the ordinate change rate are determined as the position change rate of the object.
For example, the abscissa variation may be determined as the absolute value of the difference between the first abscissa and the second abscissa, and similarly, the ordinate variation may be determined as the absolute value of the difference between the first ordinate and the second ordinate.
As an example, assuming that the first abscissa is denoted as cur _ coorX, the first ordinate is denoted as cur _ coorY, the second abscissa is denoted as his _ coorX, the second ordinate is denoted as his _ coorY, the first width is denoted as W, and the first height is denoted as H, the monitoring apparatus can determine the abscissa and ordinate change rates by the following equations (1) to (4):
deltaX=||cur_coorX-his_coorX|| (1)
deltaY=||cur_coorY-his_coorY|| (2)
ratioX=deltaX/W (3)
ratioY=deltaY/H (4)
where deltaX denotes the abscissa variation, deltaY the ordinate variation, ratioX the abscissa change rate, ratioY the ordinate change rate, and || · || the absolute value.
As an example, the specific implementation of determining whether jitter is present in the target frame based on the position change rate may include: and if the abscissa change rate is greater than a specified change rate threshold value and/or the ordinate change rate is greater than the specified change rate threshold value, determining that the target frame has jitter.
The specified change rate threshold may be set by a user according to actual needs, or may also be set by default by the monitoring device, which is not limited in the embodiment of the present application.
If the abscissa change rate is greater than the specified change rate threshold, the position of the target has changed greatly in the horizontal-axis direction, that is, the target frame is unstable in that direction; in this case it can be determined that the target frame jitters. Likewise, if the ordinate change rate is greater than the specified change rate threshold, the position of the target has changed greatly in the vertical-axis direction, and it can also be determined that the target frame jitters. Of course, if both rates exceed the threshold, the position of the target has changed greatly in both directions.
Conversely, if the abscissa change rate is less than or equal to the specified change rate threshold and the ordinate change rate is less than or equal to the specified change rate threshold, it can be determined that the target frame does not jitter.
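To make the above concrete, the following minimal Python sketch combines equations (1) to (4) with the threshold test; the variable names follow the text, while the threshold value itself is an illustrative assumption:
    # Jitter test from equations (1)-(4). The 0.05 threshold is an assumed
    # example; the patent leaves the specified change rate threshold open.
    def has_jitter(cur_coorX, cur_coorY, his_coorX, his_coorY, W, H,
                   rate_threshold=0.05):
        deltaX = abs(cur_coorX - his_coorX)  # equation (1)
        deltaY = abs(cur_coorY - his_coorY)  # equation (2)
        ratioX = deltaX / W                  # equation (3)
        ratioY = deltaY / H                  # equation (4)
        # Jitter if either change rate exceeds the specified threshold.
        return ratioX > rate_threshold or ratioY > rate_threshold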
Further, when the target is close to the monitoring device its area is large, so even if the target frame changes normally and stably during tracking, the position change rate may still be large; judging jitter directly from the position change rate could then produce a misjudgment. For this reason, the position change rate may also be corrected. The implementation may include: determining a correction coefficient based on the first position information, and multiplying the correction coefficient by the position change rate to obtain a corrected position change rate. In that case, determining whether the target frame jitters based on the position change rate means determining whether the target frame jitters based on the corrected position change rate.
That is, a correction coefficient may be determined from the first position information and used to correct the position change rate. The subsequent judgment, namely whether the target frame jitters, is then made on the corrected position change rate, which ensures the accuracy of the judgment.
As an example, determining a specific implementation of the correction coefficient based on the first position information may include: and if the distance between the position of the target and the center of the current video frame is determined to be within a first distance range based on the first position information, determining the correction coefficient according to the first position information and a first reference model, wherein the correction coefficient output by the first reference model is negatively correlated with the position information. Or if the distance between the position of the target and the center of the current video frame is determined to be within a second distance range based on the first position information, determining the area occupied by the target, and determining the correction coefficient based on the area and a second reference model, wherein the correction coefficient output by the second reference model is negatively correlated with the area. Wherein the distance in the first distance range is greater than the distance in the second distance range.
Wherein, this first distance scope can set up according to actual demand, and in the same way, this second distance scope also can set up according to actual demand.
In addition, the first reference model can be set according to actual requirements, and the second reference model can also be set according to actual requirements.
If it is determined based on the first position information that the distance between the position of the target and the center of the current video frame is within the first distance range, the target is relatively far from the center of the current video frame; in this case, the correction coefficient may be determined from the first position information of the target. For example, referring to fig. 2, assume that the coordinate system is established with the top-left vertex of the current video frame as the origin and that the value range corresponding to the first distance range is [0, 0.8H]. If the first ordinate included in the first position information falls within [0, 0.8H], the correction coefficient may be determined by the first reference model; the relationship between the correction coefficient and the coordinate given by the first reference model is shown in fig. 2.
If it is determined based on the first position information that the distance between the position of the target and the center of the current video frame is within the second distance range, the target is relatively close to the center of the current video frame; in this case, the correction coefficient may be determined from the area occupied by the target. For example, referring to fig. 3, with the same coordinate system, assume that the value range corresponding to the second distance range is [0.8H, H]. If the first ordinate included in the first position information falls within this range, the correction coefficient may be determined by the second reference model; the relationship between the correction coefficient and the area given by the second reference model is shown in fig. 3.
As an example, when the area of the target is large, a slight physical movement of the target produces a large displacement in the video frame; therefore, when the area of the target is large, the correction coefficients may all be smaller than 1.
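The piecewise selection of the correction coefficient can be sketched as below; since the exact curves of the first and second reference models are given only by figs. 2 and 3, the linear forms here are assumptions that merely preserve the stated properties (negative correlation, and coefficients below 1 for large areas):
    # Hedged sketch of the correction coefficient. The linear models below are
    # assumptions; only their monotonicity matches the text and figures.
    def correction_coefficient(cur_coorY, area, H, frame_area):
        if cur_coorY <= 0.8 * H:
            # First distance range: coefficient falls as the ordinate grows
            # (first reference model, negatively correlated with position).
            return 1.0 - 0.5 * cur_coorY / (0.8 * H)
        # Second distance range: coefficient falls as the target area grows
        # (second reference model, negatively correlated with area), and stays
        # below 1 so a large nearby target is not misjudged as jittering.
        return max(0.1, 0.5 * (1.0 - area / frame_area))

    # Example: corrected rate for a target low in a 1920x1080 frame.
    coeff = correction_coefficient(cur_coorY=950, area=200000, H=1080,
                                   frame_area=1920 * 1080)
    corrected_ratioY = coeff * 0.08  # 0.08 is an example raw change rate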
It should be noted that the above only illustrates using the first implementation to judge, for every video frame, whether the target frame jitters. In one possible implementation, it may first be determined whether the current video frame is among the first specified threshold number of video frames; if so, whether the target frame jitters is judged according to the first implementation. The specified number threshold can be set according to actual requirements. That is, at the initial stage of tracking the target, relatively few video frames have been processed and relatively little target frame data can be collected, so the first implementation can be used to judge whether the target frame jitters. If the current video frame is not among the first specified threshold number of video frames, the second implementation below may be used instead.
The second implementation: when at least the specified threshold number of video frames precede the current video frame, a position information distribution model is acquired, where the position information distribution model is determined based on the position information of the target in a plurality of video frames. The first position information is input into the position information distribution model to obtain an output result, and if the output result does not conform to the distribution rule of the position information distribution model, it is determined that the target frame jitters.
That is, after the monitoring device tracks the target for a period of time, a large amount of data may be collected, in which case, the location information distribution model may be established based on the collected data (i.e., the location information of the target in a plurality of video frames), and may be, for example, a normal distribution model or a linear model. In this way, for the subsequent video frame, whether the target frame of the target has jitter can be determined through the position information distribution model.
In implementation, when at least the specified threshold number of video frames precede the current video frame, the first position information of the target may be input into the position information distribution model to obtain an output result. The monitoring device may then judge whether the output result conforms to the distribution rule of the position information distribution model; if not, the first position information was determined while the target frame was jittering, that is, it can be determined that the target frame jitters. Of course, if the output result conforms to the distribution rule of the position information distribution model, it can be determined that the target frame does not jitter.
As an example, if the position information distribution model is a normal distribution model, judging whether the output result conforms to the distribution rule of the model amounts to checking whether the output result conforms to the normal distribution. As another example, if the position information distribution model is a linear model, the judgment amounts to checking whether the output result conforms to a linear distribution.
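For the normal-distribution case, the conformity check can be sketched as an n-sigma test; whether the patent uses such a rule is not stated, so the tolerance here is an assumption:
    # Sketch of the second implementation for a normal distribution model.
    # The 3-sigma band is an assumed criterion for "conforms to the
    # distribution rule"; the patent does not fix the criterion.
    import statistics

    def jitter_by_distribution(history_x, cur_coorX, n_sigma=3.0):
        mean = statistics.fmean(history_x)   # fitted from earlier frames
        sigma = statistics.stdev(history_x)
        return abs(cur_coorX - mean) > n_sigma * sigma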
When it is determined that the target frame jitters, there is a certain position deviation between the first position information and the actual position of the target. The monitoring device may then correct the position deviation caused by the jitter to obtain second position information of the target, which indicates the actual position of the target more accurately.
As an example, determining the second location information of the target may include: and obtaining the position information of the target in a plurality of video frames before the current video frame to obtain a plurality of fourth position information, determining the average position information of the plurality of fourth position information, and determining the average position information as the second position information.
The video frames before the current video frame may be partial video frames before the current video frame, or may refer to all video frames before the current video frame.
It is understood that the average position information may reflect a steady position change trend of the object, so when it is determined that the object frame of the object has jitter, an average value of the position information of the object in a plurality of video frames before the current video frame may be determined to obtain average position information, and then, the monitoring apparatus may determine the average position information as the second position information, thereby correcting the position information of the object.
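A minimal sketch of this correction, averaging the fourth position information over earlier frames to obtain the second position information:
    # De-jitter correction: the second position information is the mean of
    # the target's positions in frames before the current one.
    def corrected_position(history):
        # history: list of (x, y) fourth position information tuples
        xs, ys = zip(*history)
        return sum(xs) / len(xs), sum(ys) / len(ys)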
Step 103: and determining the distance between the target and the center of the current video frame according to the second position information.
As an example, the distance between the target and the center of the current video frame may be determined according to the second location information and the center coordinate of the center, for example, if the center coordinate includes a center abscissa and a center ordinate, the distance between the target and the center of the current video frame may be determined by a distance formula based on the third abscissa, the third ordinate, the center abscissa and the center ordinate in the second location information.
It should be noted that, in another embodiment, if there is no jitter in the target frame, the distance between the target and the center of the current video frame may be determined directly based on the first position information, and the determination method is the same, and is not described herein again.
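Assuming the ordinary Euclidean distance formula (the text names a "distance formula" without fixing it), step 103 can be sketched as:
    # Distance between the target position and the frame center.
    import math

    def distance_to_center(x, y, W, H):
        center_x, center_y = W / 2, H / 2
        return math.hypot(x - center_x, y - center_y)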
Step 104: and if the distance between the target and the center of the current video frame is greater than a specified distance threshold, adjusting the pose of the camera so that the target is in the central range of the adjusted monitoring picture.
The specified distance threshold may be set by a user according to actual needs, or may be set by default by the monitoring device, which is not limited in the embodiment of the present application.
If the distance between the target and the center of the current video frame is greater than the specified distance threshold, the target is currently near the edge of the camera's monitoring picture.
It should be noted that the way the pose of the camera is adjusted can be set according to the actual situation; for example, the operating speed and operating time of the camera can be set so as to adjust its pose. For instance, if the target is above the center of the monitoring picture, the camera can be controlled to tilt up at a certain speed for a certain period of time until the target is monitored to be within the central range of the monitoring picture.
It should also be noted that, since the second position information is determined after correction, the distance determined from it is more accurate, which prevents erroneous commands from being issued to the camera on the basis of abnormal data.
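An illustrative control loop for step 104 follows; the PTZ interface (ptz.tilt, ptz.pan) is hypothetical, since real cameras expose vendor-specific APIs, and it reuses the distance_to_center helper sketched above:
    # Step 104 sketch: adjust the camera pose when the target drifts past the
    # specified distance threshold. `ptz` and its methods are hypothetical.
    def keep_target_centered(ptz, x, y, W, H, distance_threshold):
        if distance_to_center(x, y, W, H) > distance_threshold:
            # Tilt toward the target's vertical offset, pan toward its
            # horizontal offset; speed and duration are tunable, per the text.
            ptz.tilt("up" if y < H / 2 else "down", speed=0.5, seconds=1.0)
            ptz.pan("left" if x < W / 2 else "right", speed=0.5, seconds=1.0)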
For ease of understanding, the implementation flow of the present application is briefly described below with reference to fig. 4. At the initial stage of monitoring, the monitoring device detects the current video frame and determines whether a target exists in it; if not, it continues to acquire and detect the next video frame. If a target exists, one target is selected for tracking, and whether its target frame jitters is determined. If it jitters, a de-jitter operation is performed, the distance between the target and the center of the current video frame is judged based on the second position information obtained after de-jittering, and the camera is adjusted according to the distance. If it does not jitter, the distance is judged based on the first position information and the camera is adjusted accordingly.
Further, in some embodiments, if the monitoring device is configured in an automatic driving device, the width and height of the target may also be monitored in order to determine the area of the target. If the area of the target is detected to be growing larger, the target may currently be approaching the monitoring device; the monitoring device can then control the automatic driving device to move away from the target and, once it is far enough away, continue to shoot the target with the camera.
Since this implementation needs to monitor the width and height of the target, whether the target frame jitters can likewise be determined, so as to avoid using abnormal data caused by jitter of the target frame. To this end, the monitoring device may further determine the width change rate and the height change rate of the target and, when these rates are found to have changed greatly, re-determine the width and the height, so that the area of the target is computed with the re-determined values.
The width change rate and the height change rate of the target can be determined as follows: determine the width variation of the target based on its width in the current video frame and its width in the previous video frame, and determine the height variation based on its height in the current video frame and its height in the previous video frame. Divide the width variation by the first width to obtain the width change rate, and divide the height variation by the first height to obtain the height change rate.
For example, the monitoring device may determine the width change rate and the height change rate of the target by the following equations (5) to (8):
deltaW=||cur_coorW-his_coorW|| (5)
deltaH=||cur_coorH-his_coorH|| (6)
ratioW=deltaW/W (7)
ratioH=deltaH/H (8)
where deltaW denotes the width variation, cur_coorW the width of the target in the current video frame, cur_coorH its height in the current video frame, his_coorW its width in the previous video frame, his_coorH its height in the previous video frame, W the first width, H the first height, ratioW the width change rate, ratioH the height change rate, and || · || the absolute value.
The width change rate is then compared with a specified change rate threshold, and the height change rate is compared with the same threshold. If the width change rate is greater than the specified change rate threshold and/or the height change rate is greater than the specified change rate threshold, it can be determined that the target frame jitters.
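Equations (5) to (8) and the threshold test can be sketched in the same way as the position test, again with an assumed threshold value:
    # Size jitter test from equations (5)-(8); the threshold is illustrative.
    def size_jitter(cur_coorW, cur_coorH, his_coorW, his_coorH, W, H,
                    rate_threshold=0.05):
        deltaW = abs(cur_coorW - his_coorW)  # equation (5)
        deltaH = abs(cur_coorH - his_coorH)  # equation (6)
        ratioW = deltaW / W                  # equation (7)
        ratioH = deltaH / H                  # equation (8)
        return ratioW > rate_threshold or ratioH > rate_threshold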
As another example, if at least the specified threshold number of video frames precede the current video frame, the monitoring device may obtain a width distribution model and a height distribution model, and determine whether the target frame jitters from the width and the height of the target in the current video frame through the width distribution model and the height distribution model, respectively. For example, if the width of the target in the current video frame does not conform to the distribution rule of the width distribution model and/or the height of the target in the current video frame does not conform to the distribution rule of the height distribution model, it is determined that the target frame jitters.
Wherein the width distribution model is determined based on a width of the object in a plurality of video frames preceding the current video frame, and the height distribution model is determined based on a height of the object in a plurality of video frames preceding the current video frame.
Further, in the case where it is determined that there is jitter, a width average value and a height average value of the target in a plurality of video frames preceding the current video frame may be determined, and the width average value and the height average value are determined as a width and a height of the target in the current video frame.
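Mirroring the position correction above, a minimal sketch of this size fallback:
    # When size jitter is detected, replace the current width and height with
    # their averages over frames before the current one.
    def corrected_size(width_history, height_history):
        return (sum(width_history) / len(width_history),
                sum(height_history) / len(height_history))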
In the embodiment of the application, first position information of a target in a current video frame is determined; if it is determined based on the first position information that the target frame jitters, the first position information deviates to some extent from the actual position of the target. In this case, second position information of the target can be determined, and the distance between the target and the center of the current video frame is determined based on the second position information. Because the second position information is obtained by correcting the position deviation produced by the jitter, the distance determined from it is more accurate. Then, whether this distance is greater than a specified distance threshold is judged; if so, the target may be near the edge of the monitoring picture, and the pose of the camera can be adjusted so that the target falls within the central range of the adjusted monitoring picture, making it convenient for the user to view the target. In this way, erroneous control of the camera can be avoided as far as possible.
Fig. 5 is a schematic structural diagram of an object monitoring apparatus provided in an embodiment of the present application, where the object monitoring apparatus may be implemented as part or all of a monitoring device by software, hardware, or a combination of the two. Referring to fig. 5, the apparatus includes:
a first determining module 510, configured to determine position information of a target frame where a target in a current video frame is located, to obtain first position information of the target;
a second determining module 520, configured to determine second position information of the target if it is determined that the target frame shakes based on the first position information, where the second position information is obtained by correcting a position deviation caused by shaking;
a third determining module 530, configured to determine, according to the second location information, a distance between the target and the center of the current video frame;
and an adjusting module 540, configured to adjust a pose of the camera if a distance between the target and the center of the current video frame is greater than a specified distance threshold, so that the target is within a center range of the adjusted monitoring picture.
In a possible implementation manner of the present application, the second determining module 520 is further configured to:
acquiring position information of the target in a video frame before the current video frame to obtain third position information;
acquiring size information of the current video frame to obtain first size information;
determining a rate of change of position of the target based on the first position information, the third position information, and the first size information;
determining whether jitter is present in the target box based on the rate of change of position.
In one possible implementation manner of the present application, the first position information includes a first abscissa and a first ordinate, the third position information includes a second abscissa and a second ordinate, and the first size information includes a first width and a first height; the second determining module 520 is configured to:
determining an abscissa change amount based on the first abscissa and the second abscissa, and determining an ordinate change amount based on the first ordinate and the second ordinate;
dividing the abscissa variation by the first width to obtain an abscissa variation rate, and dividing the ordinate variation by the first height to obtain an ordinate variation rate;
and determining the abscissa change rate and the ordinate change rate as the position change rate of the target.
In one possible implementation manner of the present application, the second determining module 520 is configured to:
and if the abscissa change rate is greater than a specified change rate threshold value and/or the ordinate change rate is greater than the specified change rate threshold value, determining that the target frame has jitter.
In one possible implementation manner of the present application, the second determining module 520 is configured to:
obtaining position information of the target in a plurality of video frames before the current video frame to obtain a plurality of fourth position information;
determining average position information of the plurality of fourth position information;
determining the average location information as the second location information.
In one possible implementation manner of the present application, the second determining module 520 is configured to:
determining a correction factor based on the first position information;
multiplying the correction coefficient by the position change rate to obtain a corrected position change rate;
and determining whether the target frame has jitter or not based on the corrected position change rate.
In one possible implementation manner of the present application, the second determining module 520 is configured to:
if the distance between the position of the target and the center of the current video frame is determined to be within a first distance range based on the first position information, determining the correction coefficient according to the first position information and a first reference model, wherein the correction coefficient output by the first reference model is negatively correlated with the position information;
if the distance between the position of the target and the center of the current video frame is determined to be within a second distance range based on the first position information, determining the area occupied by the target, and determining the correction coefficient based on the area and a second reference model, wherein the correction coefficient output by the second reference model is negatively correlated with the area;
wherein the distance within the first distance range is greater than the distance within the second distance range.
In a possible implementation manner of the present application, in a case that at least a specified threshold number of video frames precede the current video frame, the second determining module 520 is configured to:
acquiring a position information distribution model, wherein the position information distribution model is determined based on the position information of the target in a plurality of video frames;
inputting the first position information into the position information distribution model to obtain an output result;
and if the output result does not accord with the distribution rule of the position information distribution model, determining that the target frame shakes.
In the embodiment of the application, first position information of a target in a current video frame is determined; if it is determined based on the first position information that the target frame jitters, the first position information deviates to some extent from the actual position of the target. In this case, second position information of the target can be determined, and the distance between the target and the center of the current video frame is determined based on the second position information. Because the second position information is obtained by correcting the position deviation produced by the jitter, the distance determined from it is more accurate. Then, whether this distance is greater than a specified distance threshold is judged; if so, the target may be near the edge of the monitoring picture, and the pose of the camera can be adjusted so that the target falls within the central range of the adjusted monitoring picture, making it convenient for the user to view the target. In this way, erroneous control of the camera can be avoided as far as possible.
It should be noted that, when the target monitoring apparatus provided in the foregoing embodiment performs target monitoring, the division into the above functional modules is merely used as an example for illustration. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the target monitoring apparatus and the target monitoring method provided by the foregoing embodiments belong to the same concept; for the specific implementation process, reference may be made to the method embodiments, and details are not described herein again.
Fig. 6 is a schematic structural diagram of a monitoring device according to an embodiment of the present application. The monitoring device 600 includes a Central Processing Unit (CPU)601, a system memory 604 including a Random Access Memory (RAM)602 and a Read Only Memory (ROM)603, and a system bus 605 connecting the system memory 604 and the central processing unit 601. The monitoring device 600 also includes a basic input/output system (I/O system) 606 to facilitate information transfer between various components within the computer, and a mass storage device 607 for storing an operating system 613, application programs 614, and other program modules 615.
The basic input/output system 606 includes a display 608 for displaying information and an input device 609, such as a mouse or a keyboard, for a user to input information. The display 608 and the input device 609 are both connected to the central processing unit 601 through an input/output controller 610 that is connected to the system bus 605. The basic input/output system 606 may further include the input/output controller 610 for receiving and processing input from a number of other devices, such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 610 may also provide output to a display screen, a printer, or another type of output device.
The mass storage device 607 is connected to the central processing unit 601 through a mass storage controller (not shown) connected to the system bus 605. The mass storage device 607 and its associated computer-readable media provide non-volatile storage for the monitoring device 600. That is, mass storage device 607 may include a computer-readable medium (not shown), such as a hard disk or CD-ROM drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 604 and mass storage device 607 described above may be collectively referred to as memory.
According to various embodiments of the present application, the monitoring device 600 may also operate through a remote computer connected to a network, such as the Internet. That is, the monitoring device 600 may be connected to the network 612 through the network interface unit 611 connected to the system bus 605, or the network interface unit 611 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU.
In some embodiments, a computer-readable storage medium is also provided, in which a computer program is stored, which, when being executed by a processor, implements the steps of the target monitoring method in the above embodiments. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is noted that the computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the target monitoring method described above.
The above embodiments are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (11)

1. A target monitoring method, applied to a monitoring device, the method comprising the following steps:
determining position information of a target frame where a target in a current video frame is located to obtain first position information of the target;
if the target frame is determined to have jitter based on the first position information, determining second position information of the target, wherein the second position information is obtained by correcting position deviation generated by jitter;
determining the distance between the target and the center of the current video frame according to the second position information;
and if the distance between the target and the center of the current video frame is greater than a specified distance threshold, adjusting the pose of the camera so that the target is in the central range of the adjusted monitoring picture.
2. The method of claim 1, wherein the method further comprises:
acquiring position information of the target in a video frame before the current video frame to obtain third position information;
acquiring size information of the current video frame to obtain first size information;
determining a rate of change of position of the target based on the first position information, the third position information, and the first size information;
determining whether jitter is present in the target frame based on the rate of change of position.
3. The method of claim 2, wherein the first location information includes a first abscissa and a first ordinate, the third location information includes a second abscissa and a second ordinate, and the first size information includes a first width and a first height;
the determining a rate of change of position of the target based on the first location information, the third location information, and the first size information comprises:
determining an abscissa change amount based on the first abscissa and the second abscissa, and determining an ordinate change amount based on the first ordinate and the second ordinate;
dividing the abscissa variation by the first width to obtain an abscissa variation rate, and dividing the ordinate variation by the first height to obtain an ordinate variation rate;
and determining the abscissa change rate and the ordinate change rate as the position change rate of the target.
4. The method of claim 3, wherein said determining whether jitter is present in the target frame based on the rate of change of position comprises:
and if the abscissa change rate is greater than a specified change rate threshold value and/or the ordinate change rate is greater than the specified change rate threshold value, determining that the target frame has jitter.
5. The method of any of claims 1-4, wherein the determining second location information for the target comprises:
obtaining position information of the target in a plurality of video frames before the current video frame to obtain a plurality of fourth position information;
determining average position information of the plurality of fourth position information;
determining the average location information as the second location information.
6. The method of claim 2, wherein prior to determining whether jitter is present in the target frame based on the rate of change of position, the method further comprises:
determining a correction factor based on the first position information;
multiplying the correction coefficient by the position change rate to obtain a corrected position change rate;
the determining whether jitter is present in the target frame based on the rate of change of position comprises:
and determining whether the target frame has jitter or not based on the corrected position change rate.
7. The method of claim 6, wherein determining a correction factor based on the first location information comprises:
if the distance between the position of the target and the center of the current video frame is determined to be within a first distance range based on the first position information, determining the correction coefficient according to the first position information and a first reference model, wherein the correction coefficient output by the first reference model is negatively correlated with the position information;
if the distance between the position of the target and the center of the current video frame is determined to be within a second distance range based on the first position information, determining the area occupied by the target, and determining the correction coefficient based on the area and a second reference model, wherein the correction coefficient output by the second reference model is negatively correlated with the area;
wherein the distance within the first distance range is greater than the distance within the second distance range.
8. The method of claim 1, wherein in a case that the current video frame is preceded by at least a specified threshold number of video frames, the method further comprises:
acquiring a position information distribution model, wherein the position information distribution model is determined based on the position information of the target in a plurality of video frames;
inputting the first position information into the position information distribution model to obtain an output result;
and if the output result does not conform to the distribution rule of the position information distribution model, determining that the target frame has jitter.
9. A target monitoring apparatus, configured in a monitoring device, the apparatus comprising:
the camera is used for collecting video frames;
a processor for performing object monitoring according to the method of any one of claims 1-8 based on the video frames captured by the camera.
10. A monitoring device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus, the memory is used for storing computer programs, and the processor is used for executing the programs stored in the memory to implement the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202010500284.6A 2020-06-04 2020-06-04 Target monitoring method, device, equipment and storage medium Pending CN113766175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010500284.6A CN113766175A (en) 2020-06-04 2020-06-04 Target monitoring method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113766175A (en) 2021-12-07

Family

ID=78783647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010500284.6A Pending CN113766175A (en) 2020-06-04 2020-06-04 Target monitoring method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113766175A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101035273A (en) * 2007-04-24 2007-09-12 北京中星微电子有限公司 Automatically tracking and controlling method and control device in the video monitoring
KR100871833B1 (en) * 2008-04-28 2008-12-03 재 훈 장 Camera apparatus for auto tracking
CN106228112A (en) * 2016-07-08 2016-12-14 深圳市优必选科技有限公司 Face detection tracking method, robot head rotation control method and robot
CN106204653A (en) * 2016-07-13 2016-12-07 浙江宇视科技有限公司 A kind of monitoring tracking and device
CN109391762A (en) * 2017-08-03 2019-02-26 杭州海康威视数字技术股份有限公司 A kind of method and apparatus of track up
CN108038417A (en) * 2017-11-14 2018-05-15 上海歌尔泰克机器人有限公司 Cloud platform control method, apparatus and system
CN110830846A (en) * 2018-08-07 2020-02-21 北京优酷科技有限公司 Video clipping method and server
CN109379533A (en) * 2018-10-15 2019-02-22 Oppo广东移动通信有限公司 A kind of photographic method, camera arrangement and terminal device
CN109886998A (en) * 2019-01-23 2019-06-14 平安科技(深圳)有限公司 Multi-object tracking method, device, computer installation and computer storage medium
CN110086988A (en) * 2019-04-24 2019-08-02 薄涛 Shooting angle method of adjustment, device, equipment and its storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240214509A1 (en) * 2022-12-22 2024-06-27 Microsoft Technology Licensing, Llc Location-Based Frame Skipping

Similar Documents

Publication Publication Date Title
CN109345593B (en) Camera posture detection method and device
US8755562B2 (en) Estimation apparatus, control method thereof, and program
US10853950B2 (en) Moving object detection apparatus, moving object detection method and program
US10659676B2 (en) Method and apparatus for tracking a moving subject image based on reliability of the tracking state
US20130121597A1 (en) Image stabilization method and image stabilization device
CN111787232B (en) Image processing method, device and storage medium based on pan-tilt-zoom camera
US20140037212A1 (en) Image processing method and device
CN110401796B (en) Jitter compensation method and device of image acquisition device
JP6098873B2 (en) Imaging apparatus and image processing apparatus
US20110304730A1 (en) Pan, tilt, and zoom camera and method for aiming ptz camera
CN111508272A (en) Method and apparatus for providing robust camera-based object distance prediction
CN113766175A (en) Target monitoring method, device, equipment and storage medium
CN105791703A (en) Shooting method and terminal
US8737686B2 (en) 3D object tracking method and apparatus
CN114815912A (en) Cloud deck control method and device and robot equipment
EP3718302B1 (en) Method and system for handling 360 degree image content
CN111414012A (en) Region retrieval and holder correction method for inspection robot
US10198084B2 (en) Gesture control device and method
CN115601271B (en) Visual information anti-shake method, storage warehouse location state management method and system
JP2009217912A (en) Device and method of adjusting stop position of accessor mechanism
US20210185230A1 (en) Image capturing method and image capturing apparatus
CN113824939B (en) Projection image adjusting method, device, projection equipment and storage medium
CN116630374B (en) Visual tracking method, device, storage medium and equipment for target object
CN112785519A (en) Positioning error calibration method, device and equipment based on panoramic image and storage medium
US11800230B2 (en) Image processing apparatus of reducing whole operation time of arrival inspection, image processing method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination