CN116309685A - Multi-camera collaborative swimming movement speed measurement method and system based on video stitching - Google Patents


Publication number
CN116309685A
CN116309685A
Authority
CN
China
Prior art keywords
tracking
frame
target
image
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310565836.5A
Other languages
Chinese (zh)
Inventor
张超超
孟祥涛
赵合
向政
黄磊
褚天琪
葛宏升
付尧顺
Current Assignee
Beijing Aerospace Times Optical Electronic Technology Co Ltd
Original Assignee
Beijing Aerospace Times Optical Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Aerospace Times Optical Electronic Technology Co Ltd filed Critical Beijing Aerospace Times Optical Electronic Technology Co Ltd
Priority to CN202310565836.5A
Publication of CN116309685A
Legal status: Pending

Classifications

    • G01P3/38: Measuring linear or angular speed by devices using optical means, e.g. using infrared, visible, or ultraviolet light, using photographic means
    • G06T3/4038: Scaling of whole images or parts thereof; image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/20: Image analysis; analysis of motion
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V20/42: Higher-level, semantic clustering, classification or understanding of video scenes, of sport video content
    • G06T2207/10016: Image acquisition modality: video; image sequence
    • G06T2207/20024: Special algorithmic details: filtering details
    • G06T2207/20081: Special algorithmic details: training; learning
    • G06T2207/30221: Subject of image: sports video; sports image
    • G06T2207/30241: Subject of image: trajectory
    • G06V2201/07: Target detection


Abstract

The invention discloses a multi-camera collaborative swimming movement speed measuring method and system based on video stitching, belonging to the field of movement measurement. The method triggers a plurality of cameras through multiple threads to synchronously acquire natatorium picture information, splices the images acquired by the multiple cameras in real time using a video splicing method to obtain a single wide-angle picture of the natatorium, performs target tracking and positioning using the YOLOv5 target detection and ISORT target tracking algorithms, and finally obtains the world coordinates of the athletes according to the internal reference correction and external reference calibration modules, thereby realizing collaborative positioning and speed measurement across the multiple cameras. The invention places few constraints on venues and acquisition equipment, achieves high picture splicing accuracy, and is easy to deploy and engineer; it effectively solves the problem that conventional swimming speed quantization methods cannot feed back high-precision test results, and greatly improves matching accuracy.

Description

Multi-camera collaborative swimming movement speed measurement method and system based on video stitching
Technical Field
The invention belongs to the field of motion measurement, and relates to a multi-camera collaborative swimming motion speed measurement method and system based on video stitching.
Background
In recent years, along with the proposal of the "science and technology assisted sports" slogan, realizing accurate quantification of sports indexes by technological means has become important for improving the level of athletes. In swimming athletics, speed quantification is an intuitive manifestation of an athlete's level and ability. Because of the specific limitations of testing in an underwater environment, speed measurement has long been a problem for industry personnel. The accuracy of current traditional swimming speed measuring methods tends to drift, and results cannot be fed back in real time with high precision.
Disclosure of Invention
The technical solution of the invention is as follows: a multi-camera collaborative swimming movement speed measurement method and system based on video stitching are provided, which solve the problem that existing swimming speed quantification methods cannot feed back high-precision test results, and greatly improve matching accuracy.
The technical scheme of the invention is as follows:
A multi-camera collaborative swimming movement speed measurement method based on video stitching comprises the following steps:
sequentially arranging a plurality of cameras along the lane direction on the top of a wall at one side of the natatorium, with the image acquisition areas of adjacent cameras overlapping at their junctions;
uniformly arranging calibration plates on both sides of the swimming pool, recording the world coordinates of the center position of each plate, and ensuring that the overlapping area of any two cameras contains at least 1 calibration plate on each of the upper and lower sides;
performing camera internal reference correction and external reference calibration for each camera to obtain the projection relationship from the two-dimensional image coordinate system to the world coordinate system;
completing image registration and splicing using the images containing the calibration plates, and using the result as the video splicing template;
triggering the acquisition of each camera in the natatorium through multithreading, obtaining the images captured by the cameras at the same moment, correcting their internal references to obtain the images to be spliced, and completing splicing according to the video splicing template to obtain a single wide-angle picture;
constructing a target detection YOLOv5 model and using it to detect the target in the obtained single wide-angle picture to obtain the target position, the target being the swimming cap of the athlete;
tracking the detected athletes in real time through the ISORT tracking algorithm, matching the recognition result of the current frame with the tracking track of the previous frame to obtain the position difference and the time difference between adjacent frames;
and converting the natatorium real-time image coordinates into world coordinates, and realizing the positioning and speed measurement of the athlete according to the position difference and the time difference between adjacent frames.
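The final step, projecting image coordinates to world coordinates through the calibrated homography and dividing the displacement by the frame interval, can be sketched in a few lines. This is a minimal numpy illustration; the identity homography and the 25 fps frame rate below are assumptions chosen for the example, not values from the patent.

```python
import numpy as np

def image_to_world(H, pt):
    """Project an image point to world (lane-plane) coordinates with homography H."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return v[:2] / v[2]  # divide by the homogeneous scale

def speed_between_frames(H, pt_prev, pt_curr, fps):
    """Speed (m/s) from the world-coordinate displacement between adjacent frames."""
    p0 = image_to_world(H, pt_prev)
    p1 = image_to_world(H, pt_curr)
    dt = 1.0 / fps                      # time difference between adjacent frames
    return float(np.linalg.norm(p1 - p0) / dt)

# Toy check with an identity homography: image coordinates already equal metres.
H = np.eye(3)
v = speed_between_frames(H, (0.0, 0.0), (0.05, 0.0), fps=25)  # 0.05 m per frame
```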
Preferably, the video stitching template acquisition process is as follows:
Step 41: according to the homography matrices obtained from the external reference calibration for projective transformation to world coordinates, perspective-projecting the images shot by the multiple cameras into the same coordinates to complete the image registration work;
Step 42: taking the vertical line passing through the center point of the calibration plate in the image overlapping area as the splicing line, and splicing the multiple images after removing the overlapping area to obtain the video splicing template.
Preferably, the homography matrix solving process is as follows:
the homography matrix

$$H=\begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33}\end{bmatrix}$$

satisfies

$$s\begin{bmatrix} x' \\ y' \\ 1\end{bmatrix}=H\begin{bmatrix} x \\ y \\ 1\end{bmatrix}$$

wherein $(x, y)$ represents any point in the image taken by the camera, $(x', y')$ is the real-world coordinate point of that point, and $h_{11}$ to $h_{33}$ are the elements of the homography matrix;
selecting 4 calibration plate center coordinate points from the image shot by a certain camera, together with the corresponding actual lane world coordinate points, and substituting them into the above equation yields the homography matrix $H$.
Preferably, the images shot by the multiple cameras are perspective-projected into the same coordinates, in which the lane scale is consistent with the actual lane and the lanes are aligned horizontally.
Preferably, the detected athlete is tracked in real time by using an ISORT tracking algorithm to obtain the position difference and the time difference between adjacent frames, and the specific implementation mode is as follows:
initializing and creating a new tracking track for all targets detected by the first frame image through a target detection YOLOv5 model, and labeling an ID;
after any frame after the first frame passes through a target detection YOLOv5 model, real-time tracking is carried out on detected athletes by using an ISORT tracking algorithm, and the specific process is as follows:
obtaining the position predictions of all targets of the previous track through Kalman filtering;
calculating a matching loss matrix of the target detection frame of the current frame and the position prediction;
obtaining the unique matching of the track with the maximum similarity of the target detection frame of the current frame through a Hungary algorithm, and updating the target position of the track; initializing and creating a new tracking track for a target detection frame which is not matched with the track, and labeling an ID;
and updating the tracking tracks of all the targets and updating the target states according to the matching result to obtain the position difference and the time difference between the adjacent frames.
Preferably, the matching loss matrix is calculated as follows:
the ISORT tracking algorithm uses the Euclidean distance between the target detection frame and the prediction frame to describe the degree of motion correlation:

$$d_{ij}=\sqrt{(x_j-x_i)^2+(y_j-y_i)^2}$$

wherein $x_j$ and $y_j$ represent the abscissa and ordinate of the center point of the jth target detection frame, $x_i$ and $y_i$ represent the abscissa and ordinate of the center point of the ith track prediction frame, and $d_{ij}$ is the Euclidean distance between the jth target detection frame and the ith track prediction frame;

$$C_{ij}=\begin{cases} d_{ij}, & d_{ij}\le gate\\ +\infty, & d_{ij}> gate\end{cases}$$

wherein the element $C_{ij}$ in the ith row and jth column of the matching loss matrix is the loss value of matching the ith track prediction frame with the jth target detection frame, and $gate$ represents the threshold of the gating matrix: pairs whose distance exceeds the gate are treated as non-matchable.
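The gated loss matrix above can be sketched in a few lines. This is a minimal numpy illustration; the helper name `matching_loss_matrix` and the centre coordinates and gate value in the example are hypothetical, not taken from the patent.

```python
import numpy as np

def matching_loss_matrix(tracks, detections, gate):
    """C[i, j] = Euclidean distance between the i-th track prediction centre
    and the j-th detection centre; entries beyond the gate become +inf."""
    t = np.asarray(tracks, dtype=float)      # shape (M, 2): predicted centres
    d = np.asarray(detections, dtype=float)  # shape (N, 2): detected centres
    diff = t[:, None, :] - d[None, :, :]     # pairwise differences, (M, N, 2)
    C = np.linalg.norm(diff, axis=2)
    C[C > gate] = np.inf                     # gating: forbid distant matches
    return C

# Hypothetical example: two track predictions, two detections, gate of 5 px.
C = matching_loss_matrix([(0, 0), (10, 0)], [(1, 0), (50, 0)], gate=5.0)
```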
A multi-camera collaborative swimming motion speed measurement system based on video stitching comprises a multi-camera acquisition module, an internal reference correction module, a video splicing module, a target identification module, a target tracking module and an external reference calibration module;
the multi-camera acquisition module: a plurality of cameras sequentially arranged along the lane direction on the top of a wall at one side of the swimming pool, used for acquiring swimming pool images in real time;
the internal reference correction module: performing internal reference correction on the images acquired by the multi-camera acquisition module to correct radial distortion and tangential distortion of the images;
the video splicing module: registering and splicing the images containing the calibration plates through 4 calibration points to obtain a video splicing template; taking the internally corrected images captured at the same time as the images to be spliced, and completing splicing according to the video splicing template to obtain a single wide-angle picture;
the target identification module: detecting athletes frame by frame in the obtained single wide-angle picture by using the target detection YOLOv5 model to obtain the target position;
the target tracking module: tracking the detected athletes in real time through the ISORT tracking algorithm, matching the recognition result of the current frame with the tracking track of the previous frame to obtain the position difference and the time difference between adjacent frames;
the external reference calibration module: converting the natatorium real-time image coordinates into world coordinates, and realizing the positioning and speed measurement of the athlete according to the position difference and the time difference between adjacent frames.
Preferably, the process of obtaining the video splicing template by the video splicing module is as follows:
Step 41: according to the homography matrices obtained from the external reference calibration for projective transformation to world coordinates, perspective-projecting the images shot by the multiple cameras into the same coordinates to complete the image registration work;
Step 42: taking the vertical line passing through the center point of the calibration plate in the image overlapping area as the splicing line, and splicing the multiple images after removing the overlapping area to obtain the video splicing template.
Preferably, the homography matrix solving process is as follows:
the homography matrix

$$H=\begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33}\end{bmatrix}$$

satisfies

$$s\begin{bmatrix} x' \\ y' \\ 1\end{bmatrix}=H\begin{bmatrix} x \\ y \\ 1\end{bmatrix}$$

wherein $(x, y)$ represents any point in the image taken by the camera, $(x', y')$ is the real-world lane coordinate point corresponding to that point, and $h_{11}$ to $h_{33}$ are the elements of the homography matrix;
selecting 4 calibration plate center coordinate points from the image containing the calibration plates, together with the corresponding actual lane world coordinate points, and substituting them into the above equation yields the homography matrix.
Preferably, the specific implementation manner of the target tracking module is as follows:
initializing and creating a new tracking track for all targets detected by the first frame image through a target detection YOLOv5 model, and labeling an ID;
after any frame after the first frame passes through a target detection YOLOv5 model, real-time tracking is carried out on detected athletes by using an ISORT tracking algorithm, and the specific process is as follows:
obtaining the position predictions of all targets of the previous track through Kalman filtering;
calculating a matching loss matrix of the target detection frame of the current frame and the position prediction;
obtaining the unique matching of the track with the maximum similarity of the target detection frame through a Hungary algorithm, and updating the target position of the track;
initializing and creating a new tracking track for a target detection frame which is not matched with the track, and labeling an ID;
and updating the tracking tracks of all the targets and updating the target states according to the matching result to obtain the position difference and the time difference between the adjacent frames.
Compared with the prior art, the invention has the following beneficial effects:
(1) According to the invention, pictures of different areas of the natatorium collected by the multiple cameras are spliced into a single wide-angle picture, and target tracking is carried out on the single wide-angle picture, so that the problem of collaborative distribution of the multiple cameras is effectively solved, and the problems of confirmation and handover of the same target under different cameras are solved.
(2) The invention measures the speed of all athletes in the natatorium by the target tracking method, with high accuracy and high tracking speed, effectively solving the problems that the accuracy of traditional swimming speed measuring methods tends to drift and that results cannot be fed back in real time with high precision.
(3) The invention adopts the ISORT target tracking algorithm, improves the accuracy of data association, effectively reduces the interference of different track athletes in the tracking process, and improves the tracking accuracy.
Drawings
FIG. 1 is a flow chart of image stitching;
FIG. 2 is a flowchart of an ISORT multi-target tracking algorithm;
fig. 3 is a flow chart of the present invention.
Detailed Description
The following describes specific embodiments of the present invention with reference to the drawings.
Aiming at the difficult problem of realizing underwater target tracking from image information, deep learning image processing methods can extract abstract features of images through deep convolution computation and make full use of pixel information to improve detector performance. YOLOv5 is a multi-target, multi-scale deep learning detector with the advantages of a small amount of computation, high recognition speed, low delay and high precision, and compared with single-target tracking algorithms it is better suited to detecting and tracking multiple athletes in a natatorium. The SORT tracker, which is often used in combination with the YOLOv5 detector, can effectively associate targets and improve real-time tracking performance. The core of SORT is a combination of Kalman filtering and the Hungarian algorithm, and it can achieve a good tracking effect.
From the tracking viewpoint, video tracking can be classified into tracking under a single camera and tracking under multiple cameras, depending on the number of cameras employed. The two research directions are interrelated: tracking under multiple cameras is based on tracking under a single camera, and both depend on the underlying processing of motion detection, adaptive tracking of moving objects under a single camera, and the like.
In multi-camera collaborative tracking, a major difficulty is how to establish a correct correspondence between multiple cameras for the same target, i.e., target handover. For example: how to confirm the common targets detected between the cameras and how to effectively distribute the plurality of cameras to cover all athletes in the natatorium.
The invention splices the collaborative images of the multiple cameras into one complete image through an image splicing and fusion method based on image registration, and successfully solves the target handover problem.
The method for measuring the speed of swimming sports by cooperation of multiple cameras based on video stitching uses multiple cameras to cover the whole natatorium and realizes track tracking of all athletes in the natatorium. As shown in figure 3, the method comprises the following steps:
Step 1: arranging all the cameras sequentially on the top of the right-side wall of the natatorium, with the image acquisition areas of adjacent cameras overlapping at their junctions;
Step 2: uniformly arranging calibration plates on both sides of the swimming pool, recording the world coordinates of the calibration plates, and ensuring that the overlapping area of any two cameras contains at least 1 calibration plate on each of the upper and lower sides;
Step 3: each camera obtains the projection relationship between its two-dimensional image coordinate system and the world coordinate system according to camera internal reference correction and external reference calibration;
Step 4: completing image registration and splicing on the pictures collected by the plurality of cameras, each containing a calibration plate, and using the result as the video splicing template;
Step 5: triggering the acquisition of each camera in the natatorium through multithreading, obtaining the images captured by the cameras at the same moment, using them as images to be spliced after internal reference correction, and completing splicing according to the splicing template of Step 4 to obtain a single wide-angle picture;
Step 6: detecting the target (the swimming cap of the swimming player) in the obtained single wide-angle picture with the constructed target detection YOLOv5 model, and obtaining the target position;
Step 7: improving the SORT loss function according to the characteristics of swimming movement to obtain the ISORT tracking algorithm, and tracking the athletes detected in Step 6 in real time through ISORT;
Step 8: converting the target image coordinates into world coordinates through the external reference calibration module, so as to realize the positioning and speed measurement of the swimmer.
1. Video stitching step
For multi-camera coordinated target tracking, each frame sequence of the subsequently shot video can be spliced through image splicing. The overall steps are as follows:
(1) Firstly, splicing pictures containing calibration plates according to a projection conversion method based on calibration to obtain a video splicing template;
(2) Splicing the real-time sub-sequence frames through the template to generate a wide-angle video consisting of wide-angle pictures;
(3) And (5) utilizing a target tracking algorithm to complete real-time tracking of the athlete.
1.1 Video stitching template acquisition
The splicing process of the video splicing template is the core content. Fig. 1 is a flowchart for acquiring the video splicing template; the steps are as follows:
Step 41: according to the homography matrices obtained from the external reference calibration for projective transformation to world coordinates, perspective-projecting the pictures shot by the multiple cameras into the same coordinates to complete the image registration work;
Step 42: taking the vertical line passing through the center point of the calibration plate in the overlapping area as the splicing line, and splicing the multiple pictures after removing the overlapping area to obtain the video splicing template.
The homography matrix solving process, which is related to the image registration process, is as follows:
4 calibration plate coordinate points and the corresponding coordinate points of the actual lane plane are selected from the picture shot by each natatorium camera, and the conversion relationship between the two planes is described with a homography matrix. The homography matrix H is obtained as follows:

$$s\begin{bmatrix} x' \\ y' \\ 1\end{bmatrix}=\begin{bmatrix} h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33}\end{bmatrix}\begin{bmatrix} x \\ y \\ 1\end{bmatrix} \tag{1}$$

wherein $(x, y)$ represents any point in the image taken by the camera and $(x', y')$ is the real-world lane coordinate point corresponding to that point. The matrix $H$ has 8 degrees of freedom, so only 4 pairs of points are needed to calculate it.

First, expand equation (1):

$$\begin{cases} s x' = h_{11}x + h_{12}y + h_{13}\\ s y' = h_{21}x + h_{22}y + h_{23}\\ s = h_{31}x + h_{32}y + h_{33}\end{cases} \tag{2}$$

Dividing the first two equations by the third eliminates the scale $s$ and gives equation (3); moving everything to the left so that the right-hand side equals 0 gives equation (4):

$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}},\qquad y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \tag{3}$$

$$\begin{cases} h_{11}x + h_{12}y + h_{13} - h_{31}xx' - h_{32}yx' - h_{33}x' = 0\\ h_{21}x + h_{22}y + h_{23} - h_{31}xy' - h_{32}yy' - h_{33}y' = 0\end{cases} \tag{4}$$

Expanded into matrix form $A\,h = 0$ with the unknown vector

$$h = (h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32}, h_{33})^{T} \tag{5}$$

each point pair contributes two rows to the coefficient matrix:

$$\begin{bmatrix} x & y & 1 & 0 & 0 & 0 & -xx' & -yx' & -x'\\ 0 & 0 & 0 & x & y & 1 & -xy' & -yy' & -y'\end{bmatrix} h = 0 \tag{6}$$

Substituting the 4 corresponding feature points gives the homogeneous system

$$A\,h = 0 \tag{7}$$

whose non-trivial solution (the null-space direction of $A$) gives the matrix $H$.

Therefore, 4 calibration plate center coordinate points and the corresponding actual lane world coordinate points are selected from the image containing the calibration plates and substituted into equation (1) to obtain the homography matrix $H$.
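The derivation above maps directly onto a few lines of numpy: stack the two rows of equation (6) for each correspondence and take the null-space direction of $A$ by SVD (the standard DLT solve). A minimal sketch; the corner correspondences below are an illustrative pure-scaling example, not calibration data from the patent.

```python
import numpy as np

def find_homography(src_pts, dst_pts):
    """Solve A h = 0 from 4+ point correspondences, taking h as the
    right singular vector of A with the smallest singular value."""
    A = []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp, -xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp, -yp])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space direction of A
    return H / H[2, 2]             # normalise so h33 = 1

# Sanity check: a scaling by 2 recovered from 4 corner correspondences.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]
H = find_homography(src, dst)
```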
The projective perspective involved in the image registration process is as follows: perspective transformation projects a plane (a picture) through a projection matrix onto a new viewing plane, and is also called projective mapping.
In the natatorium of the present project, the lanes are distributed in a 2D plane. Because the multiple cameras differ in parameters such as focal length and shooting angle, the lanes they capture appear at different scales and angles in the pictures. Therefore, the pictures shot by the multiple cameras are transformed through the homography matrices into the same plane coordinates, in which the lane scale is consistent with the actual lanes and the lanes are uniformly aligned horizontally.
2. YOLOv5 target detection
After the images are spliced into a single wide-angle picture, the picture can be directly input into the target detection model YOLOv5 to detect all the swimmers covered by the multiple cameras.
YOLOv5 is one of the best single-stage target recognition algorithms at present; owing to its fast detection speed, high accuracy and adaptability to various complex scenes, it is widely applied in the field of target detection. The swimmer is recognized in an end-to-end manner: the target swimming cap and its corresponding envelope frame are detected directly from the input picture. YOLOv5 mainly comprises four parts, namely the input end, the backbone network, the neck network and the detection head, which are respectively responsible for image preprocessing, image feature extraction, feature diversity processing, and predicting the class and envelope frame of the target.
The input end receives the athlete swimming scene picture; the input image size of the network is 608×608. At this stage, preprocessing scales the input image to the size required by the network, using adaptive anchor frame computation and adaptive picture scaling to 608. The backbone network extracts multi-scale deep features from the picture. The neck network fuses the multi-scale features; using it further improves the diversity of the features and the robustness of the model. The detection head outputs the target detection result, predicting the athlete's swimming cap and its corresponding envelope frame.
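The adaptive scaling to the 608×608 network input can be sketched as a letterbox resize: scale the longer side to 608 while keeping the aspect ratio, then pad the shorter side. A minimal numpy illustration in the style of YOLOv5-family preprocessing; the nearest-neighbour resize stands in for a library resize, and the pad value 114 and the 1080p frame are assumptions for the example.

```python
import numpy as np

def letterbox(img, size=608, pad_value=114):
    """Scale the longer side to `size` keeping aspect ratio, then pad the
    shorter side symmetrically to produce a square network input."""
    h, w = img.shape[:2]
    r = size / max(h, w)                       # single scale factor
    nh, nw = int(round(h * r)), int(round(w * r))
    # nearest-neighbour resize (stand-in for a library resize call)
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.full((size, size) + img.shape[2:], pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # hypothetical camera frame
inp = letterbox(frame)                             # square network input
```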
3. ISORT target tracking
After target detection is completed, the YOLOv5 target detection frames are input into the ISORT matching algorithm to realize persistent target tracking. Fig. 2 is a flowchart of the ISORT multi-target tracking algorithm; the steps are as follows:
(1) For every target detected by YOLOv5 in the first frame image, initialize a new tracking track and label an ID;
(2) For any frame after the first, after the YOLOv5 target detection model, track the detected athletes in real time with the ISORT tracking algorithm; the specific process is as follows:
obtain the position predictions of all targets on existing tracks through Kalman filtering;
calculate the matching loss matrix between the target detection frames of the current frame and the position predictions;
obtain the maximum-similarity one-to-one matching between tracks and target frames through the Hungarian algorithm, and update the track target positions;
initialize a new tracking track, with a labeled ID, for each unmatched target frame;
(3) Update the tracking tracks of all targets and the target frame states according to the matching result, obtaining the position difference and the time difference between adjacent frames.
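The per-frame track-management loop of steps (1)-(3) can be sketched as follows. This is a deliberately simplified stand-in: greedy nearest-neighbor matching on cap centers replaces the Kalman prediction and Hungarian assignment detailed below, and the `Track`/`step` names and the 50-pixel gate are illustrative assumptions.

```python
import math
from itertools import count

_ids = count(1)  # monotonically increasing track IDs

class Track:
    """Minimal track record: an ID and the last matched cap position."""
    def __init__(self, pos):
        self.id = next(_ids)
        self.pos = pos  # (x, y) image coordinates of the cap center

def step(tracks, detections, gate=50.0):
    """One simplified update pass: match each track to its closest detection
    within `gate` pixels, then spawn new tracks for the unmatched detections."""
    unmatched = list(detections)
    for trk in tracks:
        if not unmatched:
            break
        best = min(unmatched, key=lambda d: math.dist(trk.pos, d))
        if math.dist(trk.pos, best) < gate:
            trk.pos = best                 # update the track with the matched box
            unmatched.remove(best)
    tracks.extend(Track(d) for d in unmatched)  # new IDs for unmatched boxes
    return tracks
```

Calling `step` once per frame with the YOLOv5 cap centers yields persistent IDs, from which per-frame position differences follow directly.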
The Kalman filtering target prediction process in the ISORT algorithm flow is as follows:
Kalman filtering, which resists noise interference well, is used to predict the position of the athlete's swimming cap. Let the state of the tracked swimming cap in the previous frame be

$$x_{k-1} = [p_{k-1}, v_{k-1}]^T,$$

where $p_{k-1}$ and $v_{k-1}$ respectively represent the position state and the velocity state in the previous frame image. Considering disturbances such as uneven athlete speed and water-surface motion, a noise covariance matrix Q is given, and the state prediction equations are shown in formulas (8) and (9):

$$\hat{x}_k^- = A \hat{x}_{k-1} + B u_k \tag{8}$$

$$P_k^- = A P_{k-1} A^T + Q \tag{9}$$

To ensure the real-time performance and accuracy of tracking, some parameters in the model need to be updated:

$$K_k = P_k^- H^T \left( H P_k^- H^T + R \right)^{-1} \tag{10}$$

$$\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H \hat{x}_k^- \right) \tag{11}$$

$$P_k = \left( I - K_k H \right) P_k^- \tag{12}$$

wherein A is the state transition matrix, $u_k$ is the state control vector, and B is the control variable matrix; $\hat{x}_{k-1}$ and $\hat{x}_k$ are the posterior state estimates, i.e. the updated results, at times k-1 and k respectively; $\hat{x}_k^-$ represents the prior state estimate at time k, i.e. the prediction of the state at time k from the optimal estimate at time k-1; $P_{k-1}$ and $P_k$ are the posterior estimate covariances at times k-1 and k; $P_k^-$ is the prior estimate covariance at time k; $z_k$ is the measurement state; $K_k$ is the Kalman filter gain; H is the transformation matrix from the state variables to the predicted measurements. R and Q are the covariance matrices of the observation noise and the system noise, respectively.
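A minimal constant-velocity Kalman filter implementing equations (8)-(12) for the cap's lane-direction coordinate might look as follows; the class name and the dt, Q, R values are illustrative assumptions, and the control term of equation (8) is taken as zero.

```python
import numpy as np

class CapKalman:
    """1-D constant-velocity Kalman filter for the cap's lane-direction
    position, following equations (8)-(12)."""
    def __init__(self, x0, dt=1.0, q=1e-2, r=1.0):
        self.A = np.array([[1.0, dt], [0.0, 1.0]])    # state transition A
        self.H = np.array([[1.0, 0.0]])               # position-only measurement H
        self.Q = q * np.eye(2)                        # system noise covariance Q
        self.R = np.array([[r]])                      # observation noise covariance R
        self.x = np.array([x0, 0.0])                  # state [position, velocity]
        self.P = np.eye(2)                            # estimate covariance P

    def predict(self):
        self.x = self.A @ self.x                      # eq. (8), no control input
        self.P = self.A @ self.P @ self.A.T + self.Q  # eq. (9)
        return self.x[0]

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # eq. (10), Kalman gain
        self.x = self.x + (K @ (z - self.H @ self.x)).ravel()  # eq. (11)
        self.P = (np.eye(2) - K @ self.H) @ self.P    # eq. (12)
        return self.x[0]
```

Fed with the cap's x-coordinate once per frame, the velocity component of the state converges toward the swimmer's pixel speed.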
The loss matrix calculation method in the ISORT algorithm flow is as follows:
Swimming has specific motion characteristics: different athletes are distributed in different lanes (with the lane length direction as the abscissa), an athlete's position does not change abruptly during the race, and athletes do not change lanes.
Thus, ISORT uses the Euclidean distance between the detection and prediction frames to describe the degree of motion correlation:

$$c_{i,j} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2} \tag{13}$$

wherein $x_j$ and $y_j$ represent the abscissa and ordinate of the center point of the j-th target detection frame, $x_i$ and $y_i$ represent the abscissa and ordinate of the center point of the i-th track prediction frame, and $c_{i,j}$ is the Euclidean distance between the j-th target detection frame and the i-th track prediction frame;
then, in order to further guarantee the quality of matching, a gating matrix is used to correct the loss matrix. Combining the facts that the lanes are distributed longitudinally in the actual application scene and that image pixels within the same lane have similar vertical coordinate values, the vertical pixel coordinate of the center point of the athlete's swimming cap is used as the gating quantity to forbid matches between tracks and targets in different lanes.
$$c_{i,j} = \begin{cases} c_{i,j}, & |y_j - y_i| < gate \\ \infty, & \text{otherwise} \end{cases} \tag{14}$$

wherein $c_{i,j}$ represents the loss value of matching the i-th track prediction frame with the j-th target detection frame; gate represents the threshold of the gating matrix.
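Equations (13) and (14) plus the Hungarian assignment can be sketched with NumPy and SciPy; the `match` name, the gate value, and the large finite constant standing in for the infinite gated cost are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e6  # stand-in for the "infinite" cost of a gated-out pair

def match(pred_centers, det_centers, gate=30.0):
    """Build the Euclidean loss matrix of eq. (13), gate it on the vertical
    pixel distance as in eq. (14) (same lane => similar y), and solve the
    assignment with the Hungarian algorithm. Returns (track_idx, det_idx) pairs."""
    P = np.asarray(pred_centers, float)   # (n_tracks, 2) predicted cap centers
    D = np.asarray(det_centers, float)    # (n_dets, 2) detected cap centers
    diff = P[:, None, :] - D[None, :, :]
    cost = np.hypot(diff[..., 0], diff[..., 1])    # eq. (13)
    cost[np.abs(diff[..., 1]) >= gate] = BIG       # eq. (14), lane gating
    rows, cols = linear_sum_assignment(cost)       # Hungarian algorithm
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] < BIG]
```

Pairs left unmatched after gating correspond to the detection frames for which new tracks are initialized.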
4. External parameter calibration
Finally, according to the track positions obtained by the target tracking algorithm, the athlete's real-time image coordinates are converted into world coordinates by the external parameter calibration method: the image coordinates of the center points of the 6 calibration plates placed on the two sides of the swimming pool and the corresponding world coordinates are obtained, and the calibration transformation matrix is calculated directly, as shown in formula (15):

$$X = a u + b v + c, \qquad Y = d u + e v + f \tag{15}$$

wherein a, b, c and d, e, f are the parameters to be solved, (u, v) are image coordinates, and (X, Y) are the corresponding world coordinates. Substituting the known image coordinates and world coordinates of the 6 calibration-plate center points and solving the above yields the transformation matrix T of formula (16):

$$T = \begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \tag{16}$$

The world coordinates of the athlete are then calculated from the transformation matrix T and the pixel coordinates of the swimming cap in the picture.
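A least-squares solve for the six parameters of formulas (15) and (16) from the 6 calibration-plate correspondences might look like this; the function names are illustrative, and least squares is used because 6 point pairs overdetermine the 6 parameters.

```python
import numpy as np

def fit_calibration(img_pts, world_pts):
    """Least-squares fit of the calibration of eqs. (15)/(16):
    X = a*u + b*v + c, Y = d*u + e*v + f, from the calibration-plate
    correspondences. Returns the 2x3 transform matrix T = [[a,b,c],[d,e,f]]."""
    img = np.asarray(img_pts, float)
    A = np.column_stack([img, np.ones(len(img))])     # design rows [u, v, 1]
    T, *_ = np.linalg.lstsq(A, np.asarray(world_pts, float), rcond=None)
    return T.T                                        # shape (2, 3)

def to_world(T, uv):
    """Map an image pixel (u, v) to world coordinates with T."""
    u, v = uv
    return T @ np.array([u, v, 1.0])
```

With the per-frame world positions from `to_world`, speed is the world-coordinate position difference divided by the frame time difference.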
The invention also provides a multi-camera collaborative swimming movement speed measurement system based on video stitching, which comprises a multi-camera acquisition module, an internal parameter correction module, a video stitching module, a target identification module, a target tracking module and an external parameter calibration module.
A multi-camera acquisition module: comprises a plurality of cameras arranged in sequence along the lane direction on the top of one side wall of the swimming pool, used for acquiring swimming pool images in real time. An internal reference correction module: corrects the camera intrinsics by the Zhang Zhengyou calibration method, correcting the radial distortion and tangential distortion of the picture. A video stitching module: completes image registration and stitching of the images containing the calibration plates through 4 calibration points to obtain a video stitching template; triggers the acquisition of each camera in the natatorium through multithreading, obtains the images captured by each camera at the same moment, performs internal reference correction to obtain the images to be stitched, and completes stitching according to the video stitching template to obtain a single wide-angle picture. A target identification module: detects athletes frame by frame in the obtained single wide-angle picture using the target detection YOLOv5 model, and acquires the target positions. A target tracking module: tracks the detected athletes in real time through the ISORT tracking algorithm, matching the identification result of the current frame with the tracking track of the previous frame to obtain the position difference and the time difference between adjacent frames. An external parameter calibration module: converts the real-time natatorium image coordinates into world coordinates and, according to the position difference and the time difference between adjacent frames, realizes the positioning and speed measurement of the swimmers.
The process of the video splicing module obtaining the video splicing template is as follows:
Step 41: according to the homography matrix of the projective transformation to world coordinates obtained from external parameter calibration, perspective-project the images shot by the multiple cameras into the same coordinate system, completing the image registration;
Step 42: take the vertical line passing through the center point of the calibration plate within the image overlapping area as the stitching seam, remove the overlapping area, and complete the stitching of the multiple images to obtain the video stitching template.
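Step 42 amounts to cropping each registered image at the seam columns and concatenating. The sketch below assumes all warped images share the same height and a common x-axis after registration; the function name and its inputs are illustrative.

```python
import numpy as np

def stitch_with_seams(warped, seams):
    """Compose a single wide frame from already-registered (warped) camera
    images of equal height. `seams[k]` is the x-column, in the shared warped
    coordinates, of the vertical seam through the calibration plate between
    camera k and camera k+1; the overlap beyond each seam is dropped."""
    parts = []
    for k, img in enumerate(warped):
        left = seams[k - 1] if k > 0 else 0                   # previous seam
        right = seams[k] if k < len(seams) else img.shape[1]  # own seam
        parts.append(img[:, left:right])
    return np.hstack(parts)
```

Because the seam columns are fixed by the calibration plates, this crop-and-concatenate template can be reapplied to every frame without recomputing registration.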
The homography matrix solving flow is as follows:
the homography matrix

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$

satisfies $p_w = Hp$, wherein $p$ represents any point in the image taken by the camera and $p_w$ is the real-world lane coordinate point corresponding to that point;
selecting the 4 calibration-plate center coordinate points from the image containing the calibration plates and the corresponding actual lane world coordinate points, and substituting them respectively into $p_w = Hp$, yields the homography matrix H.
The specific implementation of the target tracking module is as follows:
for every target detected by the target detection YOLOv5 model in the first frame image, initializing a new tracking track and labeling an ID;
for any frame after the first, after the target detection YOLOv5 model, tracking the detected athletes in real time with the ISORT tracking algorithm; the specific process is as follows:
obtaining the position predictions of all targets on existing tracks through Kalman filtering;
calculating the matching loss matrix between the target detection frames of the current frame and the position predictions;
obtaining the maximum-similarity one-to-one matching between tracks and target frames through the Hungarian algorithm, and updating the track target positions;
initializing a new tracking track, with a labeled ID, for each target frame not matched to any track;
updating the tracking tracks of all targets and the target frame states according to the matching result to obtain the position difference and the time difference between adjacent frames.
According to the invention, a script triggers the multiple cameras via multithreading to synchronously acquire natatorium picture information; a video stitching algorithm stitches the images acquired by the multiple cameras in real time into a single wide-angle picture of the natatorium; the proposed YOLOv5 target detection and ISORT target tracking algorithms track and localize the targets; and finally the athletes' world coordinates are obtained via the internal reference correction and external parameter calibration modules, realizing multi-camera collaborative positioning and speed measurement. The invention has the advantages of imposing few constraints on the venue and acquisition equipment, high picture-stitching accuracy, easy troubleshooting, and easy engineering implementation.
Details not elaborated in the present description are within the common knowledge of a person skilled in the art.

Claims (10)

1. A multi-camera collaborative swimming movement speed measurement method based on video stitching is characterized by comprising the following steps:
sequentially arranging a plurality of cameras along the lane direction on the top of one side wall of the natatorium, the image acquisition areas of adjacent cameras overlapping where they adjoin;
uniformly arranging calibration plates on both sides of the swimming pool and recording the world coordinates of their center positions, ensuring that the overlapping area of any two cameras contains at least one calibration plate on each side;
each camera performs camera internal parameter correction and external parameter calibration to obtain a projection relationship from an image two-dimensional coordinate system to a world coordinate system;
image registration and splicing are completed by using the image containing the calibration plate, and the image registration and splicing is used as a video splicing template;
the method comprises the steps of triggering the collection of each camera in a natatorium through multithreading, obtaining images respectively captured by each camera at the same time, correcting internal parameters to obtain images to be spliced, and finishing splicing according to a video splicing template to obtain a single wide-angle picture;
constructing a target detection YOLOv5 model, detecting targets in the obtained single wide-angle picture by using the target detection YOLOv5 model, and obtaining the target positions, the target being the swimming cap of the athlete;
tracking the detected athletes in real time through an ISORT tracking algorithm, and matching the identification result of the current frame with the tracking track of the previous frame to obtain the position difference and the time difference between adjacent frames;
and converting the natatorium real-time image coordinates into world coordinates, and realizing the positioning and speed measurement of the swimmers according to the position difference and the time difference between adjacent frames.
2. The method for measuring the speed of swimming motion by cooperation of multiple cameras based on video stitching according to claim 1, wherein the process of acquiring the video stitching template is as follows:
step 41: according to the homography matrix of the projective transformation to world coordinates obtained from external parameter calibration, perspective-projecting the images shot by the multiple cameras into the same coordinate system, completing the image registration;
step 42: taking the vertical line passing through the center point of the calibration plate within the image overlapping area as the stitching seam, removing the overlapping area, and completing the stitching of the multiple images to obtain the video stitching template.
3. The method for measuring the speed of the swimming motion by cooperation of multiple cameras based on video stitching according to claim 2, wherein the homography matrix solving process is as follows:
the homography matrix

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$

satisfies $p_w = Hp$, wherein $p$ represents any point in the image taken by the camera, $p_w$ is the real-world coordinate point of that point, and $h_{11}$ to $h_{33}$ are the elements of the homography matrix;
selecting 4 calibration-plate center coordinate points and the corresponding actual lane world coordinate points from images shot by a certain camera, and substituting them respectively into $p_w = Hp$, yields the homography matrix H.
4. The video stitching-based multi-camera collaborative swimming motion speed measurement method according to claim 2, wherein the images shot by the multiple cameras are perspective-projected into the same coordinate system, in which the lane dimensions are consistent with the actual lanes and the lanes are distributed horizontally.
5. The method for measuring the speed of swimming motion by cooperation of multiple cameras based on video stitching according to claim 1, wherein the detected athletes are tracked in real time by the ISORT tracking algorithm to obtain the position difference and the time difference between adjacent frames, implemented as follows:
for every target detected by the target detection YOLOv5 model in the first frame image, initializing a new tracking track and labeling an ID;
for any frame after the first, after the target detection YOLOv5 model, tracking the detected athletes in real time with the ISORT tracking algorithm; the specific process is as follows:
obtaining the position predictions of all targets on existing tracks through Kalman filtering;
calculating the matching loss matrix between the target detection frames of the current frame and the position predictions;
obtaining the maximum-similarity one-to-one matching between tracks and the target detection frames of the current frame through the Hungarian algorithm, and updating the track target positions; initializing a new tracking track, with a labeled ID, for each target detection frame not matched to any track;
updating the tracking tracks of all targets and the target states according to the matching result to obtain the position difference and the time difference between adjacent frames.
6. The method for measuring the speed of the swimming motion by the cooperation of multiple cameras based on video stitching according to claim 5, wherein the matching loss matrix is calculated as follows:
the ISORT tracking algorithm uses the Euclidean distance between the target detection box and the prediction box to describe the degree of motion correlation:

$$c_{i,j} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}$$

wherein $x_j$ and $y_j$ represent the abscissa and ordinate of the center point of the j-th target detection frame, $x_i$ and $y_i$ represent the abscissa and ordinate of the center point of the i-th track prediction frame, and $c_{i,j}$ is the Euclidean distance between the j-th target detection frame and the i-th track prediction frame;

$$c_{i,j} = \begin{cases} c_{i,j}, & |y_j - y_i| < gate \\ \infty, & \text{otherwise} \end{cases}$$

wherein $c_{i,j}$, the element in row i and column j of the matching loss matrix, represents the loss value of matching the i-th track prediction frame with the j-th target detection frame; gate represents the threshold of the gating matrix.
7. The multi-camera collaborative swimming speed measurement system based on video stitching is characterized by comprising a multi-camera acquisition module, an internal reference correction module, a video stitching module, a target recognition module, a target tracking module and an external reference calibration module;
a multi-camera acquisition module: the swimming pool comprises a plurality of cameras which are sequentially arranged at the top of one side wall of the swimming pool along the direction of lanes and are used for acquiring swimming pool images in real time;
and an internal reference correction module: performing internal reference correction on the image acquired by the multi-camera acquisition module to correct radial distortion and tangential distortion of the image;
and the video splicing module is used for: the image containing the calibration plate is registered and spliced through 4 calibration points, and a video splicing template is obtained; taking the images subjected to internal reference correction at the same time as images to be spliced, and finishing splicing according to a video splicing template to obtain a single wide-angle picture;
and a target identification module: detecting athletes frame by frame in the obtained single wide-angle picture by using the target detection YOLOv5 model to obtain the target positions;
a target tracking module: tracking the detected athletes in real time through the ISORT tracking algorithm, and matching the identification result of the current frame with the tracking track of the previous frame to obtain the position difference and the time difference between adjacent frames;
the external parameter calibration module: converting the natatorium real-time image coordinates into world coordinates, and realizing the positioning and speed measurement of the swimmers according to the position difference and the time difference between adjacent frames.
8. The video stitching-based multi-camera collaborative swimming speed measurement system according to claim 7, wherein the video stitching module obtains a video stitching template by:
step 41: according to the homography matrix of the projective transformation to world coordinates obtained from external parameter calibration, perspective-projecting the images shot by the multiple cameras into the same coordinate system, completing the image registration;
step 42: taking the vertical line passing through the center point of the calibration plate within the image overlapping area as the stitching seam, removing the overlapping area, and completing the stitching of the multiple images to obtain the video stitching template.
9. The video stitching-based multi-camera collaborative swimming motion speed measurement system according to claim 8, wherein the homography matrix solving process is as follows:
the homography matrix

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$

satisfies $p_w = Hp$, wherein $p$ represents any point in the image taken by the camera, $p_w$ is the real-world coordinate point of that point, and $h_{11}$ to $h_{33}$ are the elements of the homography matrix;
selecting the 4 calibration-plate center coordinate points from the image containing the calibration plates and the corresponding actual lane world coordinate points, and substituting them respectively into $p_w = Hp$, yields the homography matrix H.
10. The video stitching-based multi-camera collaborative swimming motion speed measurement system according to claim 7, wherein the target tracking module is specifically implemented as follows:
initializing and creating a new tracking track for all targets detected by the first frame image through a target detection YOLOv5 model, and labeling an ID;
after any frame after the first frame passes through a target detection YOLOv5 model, real-time tracking is carried out on detected athletes by using an ISORT tracking algorithm, and the specific process is as follows:
obtaining the position predictions of all targets of the previous track through Kalman filtering;
calculating a matching loss matrix of the target detection frame of the current frame and the position prediction;
obtaining the maximum-similarity one-to-one matching between tracks and target detection frames through the Hungarian algorithm, and updating the track target positions;
initializing and creating a new tracking track for a target detection frame which is not matched with the track, and labeling an ID;
and updating the tracking tracks of all the targets and updating the target states according to the matching result to obtain the position difference and the time difference between the adjacent frames.
CN202310565836.5A 2023-05-19 2023-05-19 Multi-camera collaborative swimming movement speed measurement method and system based on video stitching Pending CN116309685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310565836.5A CN116309685A (en) 2023-05-19 2023-05-19 Multi-camera collaborative swimming movement speed measurement method and system based on video stitching


Publications (1)

Publication Number Publication Date
CN116309685A true CN116309685A (en) 2023-06-23

Family

ID=86818920


Country Status (1)

Country Link
CN (1) CN116309685A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689665A (en) * 2021-08-23 2021-11-23 上海林港人工智能科技有限公司 Swimming anti-drowning identification method based on AI vision
CN113688724A (en) * 2021-08-24 2021-11-23 桂林电子科技大学 Swimming pool drowning monitoring method based on binocular vision
CN115024715A (en) * 2022-05-20 2022-09-09 北京航天时代光电科技有限公司 Intelligent measurement and digital training system for human body movement
CN115131821A (en) * 2022-06-29 2022-09-30 大连理工大学 Improved YOLOv5+ Deepsort-based campus personnel crossing warning line detection method
CN115690913A (en) * 2022-11-04 2023-02-03 航天物联网技术有限公司 Swimming pool safety management device and method based on multi-camera vision
CN115761470A (en) * 2022-11-29 2023-03-07 每步科技(上海)有限公司 Method and system for tracking motion trail in swimming scene
CN115994930A (en) * 2023-01-12 2023-04-21 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Method and system for detecting and positioning moving target under camera based on artificial intelligence
CN115994911A (en) * 2023-03-24 2023-04-21 山东上水环境科技集团有限公司 Natatorium target detection method based on multi-mode visual information fusion


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116540252A (en) * 2023-07-06 2023-08-04 上海云骥跃动智能科技发展有限公司 Laser radar-based speed determination method, device, equipment and storage medium
CN116540252B (en) * 2023-07-06 2023-09-26 上海云骥跃动智能科技发展有限公司 Laser radar-based speed determination method, device, equipment and storage medium
CN117058331B (en) * 2023-10-13 2023-12-19 山东建筑大学 Indoor personnel three-dimensional track reconstruction method and system based on single monitoring camera
CN117169887A (en) * 2023-11-03 2023-12-05 武汉能钠智能装备技术股份有限公司 SAR ground moving target positioning method based on direction determination
CN117169887B (en) * 2023-11-03 2024-04-19 武汉能钠智能装备技术股份有限公司 SAR ground moving target positioning method based on direction determination
CN117197193A (en) * 2023-11-07 2023-12-08 杭州巨岩欣成科技有限公司 Swimming speed estimation method, swimming speed estimation device, computer equipment and storage medium
CN117197193B (en) * 2023-11-07 2024-05-28 杭州巨岩欣成科技有限公司 Swimming speed estimation method, swimming speed estimation device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN116309685A (en) Multi-camera collaborative swimming movement speed measurement method and system based on video stitching
CN106780620B (en) Table tennis motion trail identification, positioning and tracking system and method
CN111311666B (en) Monocular vision odometer method integrating edge features and deep learning
CN109190508B (en) Multi-camera data fusion method based on space coordinate system
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN109919975B (en) Wide-area monitoring moving target association method based on coordinate calibration
CN102819847A (en) Method for extracting movement track based on PTZ mobile camera
CN110827321B (en) Multi-camera collaborative active target tracking method based on three-dimensional information
CN113689331B (en) Panoramic image stitching method under complex background
Wu et al. A framework for fast and robust visual odometry
Yang et al. Multiple marker tracking in a single-camera system for gait analysis
CN116309686A (en) Video positioning and speed measuring method, device and equipment for swimmers and storage medium
CN109030854B (en) Pace measuring method based on RGB image
Jeges et al. Measuring human height using calibrated cameras
CN114120168A (en) Target running distance measuring and calculating method, system, equipment and storage medium
Bachmann et al. Motion capture from pan-tilt cameras with unknown orientation
CN115731266A (en) Cross-camera multi-target tracking method, device and equipment and readable storage medium
CN113379801A (en) High-altitude parabolic monitoring and positioning method based on machine vision
CN117036404A (en) Monocular thermal imaging simultaneous positioning and mapping method and system
CN115100744A (en) Badminton game human body posture estimation and ball path tracking method
CN116385496A (en) Swimming movement real-time speed measurement method and system based on image processing
Morais et al. Automatic tracking of indoor soccer players using videos from multiple cameras
CN104156933A (en) Image registering method based on optical flow field
Woinoski et al. Swimmer stroke rate estimation from overhead race video
CN112508998A (en) Visual target alignment method based on global motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230623