CN109146782A - Panoramic image stitching method and system - Google Patents

Panoramic image stitching method and system

Info

Publication number
CN109146782A
CN109146782A
Authority
CN
China
Prior art keywords
moving target
image
spliced
target
full
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810811294.4A
Other languages
Chinese (zh)
Inventor
姚剑
董颖青
常娟
涂静敏
朱吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Block Technology Technology Co Ltd
Original Assignee
Shenzhen Block Technology Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Block Technology Technology Co Ltd filed Critical Shenzhen Block Technology Technology Co Ltd
Priority to CN201810811294.4A priority Critical patent/CN109146782A/en
Publication of CN109146782A publication Critical patent/CN109146782A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to a panoramic image stitching method and system. The method includes: correcting the images to be stitched into the same coordinate system; segmenting each image to be stitched to obtain its moving targets; performing motion-state determination and saliency detection on the moving targets; assigning weights to the moving targets according to the determination and detection results; and performing seam detection with a graph-cut energy optimization algorithm combined with the assigned weights to obtain the optimal stitching seam. By segmenting moving targets with an instance segmentation network, the invention extracts them more completely; by combining the moving-target weights with graph-cut energy optimization, the stitching seam is routed around moving targets, producing a panoramic image with a clean background and no misalignment.

Description

Panoramic image stitching method and system
Technical field
The present invention relates to the fields of image processing and computer vision, and in particular to a panoramic image stitching method and system.
Background technique
Panoramic images are widely used in indoor and outdoor scenes because of their large angular field of view, for example in street-view maps, virtual tours of scenic spots, and navigation. Panoramic image stitching is a key technology and an active research topic. In a static scene, where the images to be stitched contain no moving targets, a satisfactory panoramic image is easy to obtain as long as the geometric registration is good. During image acquisition, however, moving targets such as pedestrians inevitably appear in the scene. In such dynamic scenes, obtaining a high-quality panoramic mosaic requires the stitching seam to detour around the moving targets, so that the seam does not cut through a moving target and cause image misalignment.
There are four main stitching methods for dynamic scenes in the prior art: first, methods based on Markov random fields; second, methods that strengthen the weighted gray-level difference through gradient disparity and edges; third, graph-cut energy optimization combined with depth information, a smooth-transition criterion, and a mask of the moving regions. These three methods all require several reference images, the segmentation of moving targets is strongly affected by background and illumination, and the results are poor when only two images are stitched. Fourth, methods that extract the moving targets with color-space analysis, frame differencing, morphology, and the like, match them with edge-detection operators, and detect the seam with graph-cut energy optimization; this approach, however, places high demands on illumination intensity and background color, segments moving targets incompletely when there are many of them, and may still let the seam pass through a moving target.
Summary of the invention
In order to obtain the optimal stitching seam when stitching two images, and to prevent the seam from passing through moving targets, the present invention provides a panoramic image stitching method and system.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
In a first aspect, an embodiment of the invention provides a panoramic image stitching method, the method comprising:
Step 1: correcting several sequentially arranged images to be stitched, adjacent ones of which share an overlapping region, into the same coordinate system.
Step 2: segmenting each corrected image to be stitched to obtain all moving targets in it.
Step 3: performing motion-state determination on the moving targets to obtain a motion-state determination result.
Step 4: performing saliency detection on the moving targets to determine their saliency relative to the image background, and, combined with the motion-state determination result, assigning a corresponding weight to each moving target to obtain a moving-target weight map.
Step 5: obtaining, from the moving-target weight maps, the optimal stitching seam between each two images to be stitched that share an overlapping region, and stitching along the optimal seams to obtain the panoramic image.
In a second aspect, the invention provides a panoramic image stitching system, the system comprising:
a correction module, which corrects several sequentially arranged images to be stitched, adjacent ones of which share an overlapping region, into the same coordinate system;
a segmentation module, which segments each corrected image to be stitched and obtains all moving targets in it;
a determination module, which performs motion-state determination on the moving targets and obtains a motion-state determination result;
a detection module, which performs saliency detection on the moving targets, determines their saliency relative to the image background, and, combined with the motion-state determination result, assigns a corresponding weight to each moving target to obtain a moving-target weight map;
a stitching module, which obtains, from the moving-target weight maps, the optimal stitching seam between each two images to be stitched that share an overlapping region, and stitches along the optimal seams to obtain the panoramic image.
The beneficial effects of the panoramic image stitching method and system of the invention are: the images to be stitched are segmented to obtain the moving targets; the motion state of each moving target is determined; and weights are assigned to the moving targets according to their motion state and their saliency relative to the image background. With these weights, the edges of the moving targets can be identified accurately when the seam is computed, so that the seam detours precisely around them and does not cut through a moving target and cause image misalignment. The invention can quickly and accurately obtain the optimal stitching seam between overlapping images to be stitched, and stitching along it produces a panoramic image with a clean background and no misalignment.
Detailed description of the invention
Fig. 1 is a flow diagram of a panoramic image stitching method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of moving-target instance segmentation according to an embodiment of the invention;
Fig. 3 is a schematic diagram of merging moving-target segmentations according to an embodiment of the invention;
Fig. 4 is a schematic diagram of moving-target motion-state determination according to an embodiment of the invention;
Fig. 5 is a schematic diagram of moving-target saliency detection according to an embodiment of the invention;
Fig. 6 is a schematic diagram of stitching two images to be stitched according to an embodiment of the invention;
Fig. 7 is a schematic diagram of seam search with graph-cut energy optimization according to an embodiment of the invention;
Fig. 8 is a schematic diagram of 360-degree panoramic image stitching according to an embodiment of the invention;
Fig. 9 is a structural diagram of a panoramic image stitching system according to an embodiment of the invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples given serve only to explain the invention and are not intended to limit its scope.
Fig. 1 shows a flow diagram of a panoramic image stitching method according to an embodiment of the invention. In this embodiment, pedestrians serve as the moving targets. The method comprises:
Step 1: correcting several sequentially arranged images to be stitched, adjacent ones of which share an overlapping region, into the same coordinate system.
Specifically, the images to be stitched are fisheye images; five end-to-end fisheye images are stitched to obtain a 360-degree panoramic image.
Step 2: segmenting each corrected image to be stitched to obtain all moving targets in it.
Step 3: performing motion-state determination on the moving targets to obtain a motion-state determination result.
Step 4: performing saliency detection on the moving targets to determine their saliency relative to the image background, and, combined with the motion-state determination result, assigning a corresponding weight to each moving target to obtain a moving-target weight map.
Step 5: obtaining, from the moving-target weight maps, the optimal stitching seam between each two images to be stitched that share an overlapping region, and stitching along the optimal seams to obtain the panoramic image.
In this embodiment, the images to be stitched are segmented to obtain the moving targets, the motion state of each moving target is determined, and weights are assigned according to the motion state and the saliency of each target relative to the image background. With these weights, the edges of the moving targets can be identified accurately when the seam is computed, so that the seam detours precisely around them and does not cut through a moving target and cause image misalignment. The invention can quickly and accurately obtain the optimal stitching seam between overlapping images to be stitched and stitch along it to produce a panoramic image with a clean background and no misalignment.
Preferably, in step 1 the PTGui software is used to correct the images to be stitched into the same coordinate system.
Preferably, step 2 specifically includes:
Step 2.1: cutting each image to be stitched vertically into several block images, adjacent ones of which share an overlapping region.
Specifically, the top and bottom of every corrected image are each trimmed by 1/6 of the image height, removing the upper and lower portions that contain no moving targets; the remaining image is then cut vertically into several adjacent, mutually overlapping block images, with a 40% overlap between neighbouring blocks.
In the moving-target instance segmentation diagram of Fig. 2, Fig. 2(a) is an image to be stitched, Fig. 2(b) is the image remaining after trimming 1/6 of the height from the top and bottom of Fig. 2(a), and Fig. 2(c) shows the adjacent, mutually overlapping block images obtained by cutting Fig. 2(b) vertically.
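As an illustration of step 2.1, a minimal NumPy sketch of the trimming and vertical slicing could look as follows. The block count n_blocks is a free parameter not fixed by the patent, and the last block is anchored to the right edge so that the full width is covered:

```python
import numpy as np

def crop_and_slice(img, n_blocks=4, overlap=0.4):
    """Trim 1/6 of the height off the top and bottom, then cut the
    remainder into n_blocks vertical blocks whose neighbours overlap
    by roughly the given ratio."""
    h, w = img.shape[:2]
    core = img[h // 6 : h - h // 6]
    # Block width bw such that n_blocks blocks at stride bw*(1-overlap)
    # span the full width: w ~= bw * (1 + (n_blocks-1)*(1-overlap))
    bw = int(np.ceil(w / (1 + (n_blocks - 1) * (1 - overlap))))
    step = int(bw * (1 - overlap))
    blocks = []
    for i in range(n_blocks):
        x0 = w - bw if i == n_blocks - 1 else i * step  # cover the right edge
        blocks.append(core[:, x0 : x0 + bw])
    return blocks
```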
Step 2.2: segmenting the block images with a deep-learning instance segmentation network to obtain the moving targets in the block images and the position coordinates of each moving target in the image to be stitched.
Specifically, the block images are fed into a pre-trained MNC (Multi-task Network Cascades) deep-learning instance segmentation network model. Through three sub-tasks, namely instance differentiation, mask estimation, and target classification, all moving targets in a block image and their position coordinates relative to the image to be stitched are obtained; the three sub-tasks form a cascade and share convolutional features.
Segmenting moving targets at the instance level with a deep-learning instance segmentation network addresses problems such as incomplete target extraction and the difficulty of fully separating multiple moving targets, and guarantees the completeness of the segmented moving targets.
Preferably, after the overlapping regions of the block images are segmented, duplicate moving targets are merged into a single moving target. This prevents false and repeated detections and improves segmentation precision.
In the segmentation-merging diagram of Fig. 3, Fig. 3(a) is a block image and Fig. 3(b) is the block image that overlaps it. The boxed moving targets in the overlapping region of Fig. 3(a) and Fig. 3(b) are the same target; in Fig. 3(c) this duplicate is merged into a single moving target.
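A minimal sketch of the duplicate merging, assuming the segmentation masks have already been mapped into full-image coordinates and duplicates are recognised by mask overlap (the IoU threshold is an illustrative choice, not stated in the patent):

```python
import numpy as np

def merge_duplicates(masks, iou_thresh=0.5):
    """Greedily merge binary masks whose intersection-over-union exceeds
    the threshold: the same target segmented in two overlapping block
    images becomes a single merged mask."""
    merged = []
    for m in masks:
        m = m.astype(bool)
        for i, kept in enumerate(merged):
            inter = np.logical_and(m, kept).sum()
            union = np.logical_or(m, kept).sum()
            if union and inter / union > iou_thresh:
                merged[i] = np.logical_or(m, kept)  # fuse into one target
                break
        else:
            merged.append(m)
    return merged
```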
Preferably, step 3 specifically includes:
Step 3.1: according to the moving targets and their position coordinates in the images to be stitched, matching, for any two images to be stitched that share an overlapping region, the moving targets in one image against the moving targets in the other.
Specifically, in this embodiment pedestrian re-identification is used to match the pedestrian targets between two overlapping images to be stitched. Based on the bilateral symmetry of the human body, the head, torso, and legs are first divided along the symmetry axis; cumulative color, texture statistics, and other features are extracted from each region and weighted by their distance to the symmetry axis (features closer to the axis receive higher weight, features farther away lower weight), and matching is performed with the weighted features.
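The symmetry-weighted features can be sketched as follows. This is an illustrative simplification under stated assumptions: grayscale crops instead of color plus texture statistics, and head/torso/leg splits at 1/5 and 1/2 of the crop height, which the patent does not specify:

```python
import numpy as np

def symmetry_weighted_descriptor(crop, bins=8):
    """Split a person crop (H, W grayscale) into head / torso / legs and
    accumulate one gray-level histogram per part, weighting each pixel's
    contribution by its closeness to the vertical symmetry axis."""
    h, w = crop.shape
    axis = (w - 1) / 2.0
    col_w = 1.0 - np.abs(np.arange(w) - axis) / (axis + 1e-9)
    parts = (crop[: h // 5], crop[h // 5 : h // 2], crop[h // 2 :])
    feats = []
    for part in parts:
        idx = part.astype(int) * bins // 256      # quantise gray levels
        hist = np.zeros(bins)
        np.add.at(hist, idx.ravel(),
                  np.broadcast_to(col_w, part.shape).ravel())
        feats.append(hist / (hist.sum() + 1e-9))  # normalise per part
    return np.concatenate(feats)

def match_score(f1, f2):
    """Histogram-intersection similarity; higher means a better match."""
    return float(np.sum(np.minimum(f1, f2)))
```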
Step 3.2: according to the matching result and a predetermined criterion, performing motion-state determination on the moving targets.
Preferably, the predetermined criterion is expressed by the first formula:
m(i, j) = p(i) * exp(r(i, j))
where i and j denote moving targets in the two images to be stitched that share an overlapping region, and m(i, j) represents the motion state of moving targets i and j. p(i) is the matching result of moving target i and moving target j: its value is 1 if the match succeeds and 0 otherwise. r(i, j) is the minimum-bounding-box overlap of the successfully matched moving targets i and j, expressed by the second formula:
r(i, j) = o(i, j) / (a(i) + a(j) - o(i, j))
where o(i, j) is the overlapping area between moving targets i and j, and a(i) and a(j) are the minimum-bounding-box areas of moving targets i and j respectively; the bounding box is the smallest rectangle that completely contains an object.
Specifically, when m(i, j) = 0, moving target i is determined to have disappeared; when m(i, j) = 1, moving targets i and j are determined to have large motion; when m(i, j) ∈ (1, e), moving targets i and j are determined to have small motion; otherwise, moving targets i and j are determined to be static.
In the motion-state determination diagram of Fig. 4, Fig. 4(a) is a left image to be stitched and Fig. 4(d) is the right image to be stitched that overlaps it; the boxes mark the overlapping region after correction. Fig. 4(b) shows all moving targets in Fig. 4(a) and Fig. 4(e) all moving targets in Fig. 4(d). The moving targets in Fig. 4(b) are matched against those in Fig. 4(e): Fig. 4(c) shows a successfully matched target whose motion state is determined as large motion, and Fig. 4(f) shows an unmatched target determined to have disappeared.
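The first and second formulas and the classification rules above can be sketched directly; boxes are (x0, y0, x1, y1) tuples, and the state names are descriptive labels rather than the patent's exact wording:

```python
import math

def bbox_iou(b1, b2):
    """Second formula: overlap of two minimum bounding boxes,
    r = o / (a(i) + a(j) - o)."""
    ix0, iy0 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix1, iy1 = min(b1[2], b2[2]), min(b1[3], b2[3])
    o = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return o / (a1 + a2 - o)

def motion_state(matched, box_i, box_j):
    """First formula m = p * exp(r), then the patent's thresholds:
    m = 0 disappeared, m = 1 large motion, m in (1, e) small motion,
    otherwise (boxes coincide, m = e) static."""
    p = 1 if matched else 0
    r = bbox_iou(box_i, box_j) if matched else 0.0
    m = p * math.exp(r)
    if m == 0:
        return "disappeared"
    if m == 1:
        return "large motion"
    if m < math.e:
        return "small motion"
    return "static"
```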
Preferably, step 4 specifically includes:
Step 4.1: determining the saliency of each moving target relative to the image background with the HC (Histogram-based Contrast) saliency detection method.
Specifically, the HC saliency detection method is a color-contrast algorithm based on a global color histogram: the saliency value of each pixel of a moving target or the image background is determined by its color-contrast difference to all other pixels; the larger the difference, the larger the saliency value.
In the saliency detection diagram of Fig. 5, Fig. 5(a) is an image to be stitched, the box marks a moving target in it, and Fig. 5(b) is the moving-target saliency map obtained by HC saliency detection.
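A simplified single-channel sketch of the HC idea described above (the original method works on a quantised color histogram; grayscale is an illustrative assumption):

```python
import numpy as np

def hc_saliency(gray):
    """Histogram-based contrast on a grayscale image: the saliency of a
    gray level is its summed distance to the levels of all other pixels,
    so rare, high-contrast targets score high. Output normalised to
    [0, 1]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    per_level = np.array([np.dot(hist, np.abs(levels - g))
                          for g in range(256)])
    out = per_level[gray]
    peak = out.max()
    return out / peak if peak > 0 else out
```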
Step 4.2: according to the saliency, and combined with the motion-state determination result obtained in step 3, assigning a weight to each moving target to obtain the moving-target weight map of the image to be stitched.
Preferably, the motion-state determination result comprises four states: disappeared, large motion, small motion, and static. Assigning weights to the moving targets in combination with the motion-state determination result of step 3 is specifically implemented as follows:
Let Img1 and Img2 be any two images to be stitched that share an overlapping region, and let i and j be moving targets in Img1 and Img2 respectively.
For moving target i in Img1, if the motion-state determination result is that i has disappeared, i.e. no moving target matching i exists in the overlapping image Img2, then moving target i is assigned the weight W1(i) = θ.
If a moving target j matching i exists in Img2, and moving targets i and j are determined to have large motion, the saliency of i and j is compared. H(Ok) denotes the final saliency measure of any moving target k in any image Img to be stitched, expressed by the third formula:
H(Ok) = ( Σx∈Ok HC(x) ) / cnt(Ok)
where Ok is the region of moving target k in Img, defined as the motion target region; HC(x) is the single-pixel saliency value of a pixel x in Ok obtained with the HC algorithm; and cnt(Ok) is the number of pixels in Ok. The third formula yields the final saliency measures H(Oi) and H(Oj) of moving targets i and j respectively, where Oi is the motion target region of i in Img1 and Oj is the motion target region of j in Img2.
H(Oi) and H(Oj) are normalized to obtain T(Oi) and T(Oj); the weight W1(Oi) of moving target i and the weight W2(Oj) of moving target j are then expressed by the fourth formula:
W1(Oi) = θ · T(Oi), W2(Oj) = θ · T(Oj)
where θ is the selected weight value, set to 255.
When the matched moving targets i and j are determined to have small motion, i and j are merged and assigned the weights W1(Oi∪Oj) = θ and W2(Oi∪Oj) = θ.
When the matched moving targets i and j are determined to be static, they are assigned the weights W1(Oi) = 0 and W2(Oj) = 0.
At the same time, the weights of the remaining non-motion regions of Img1 and Img2, that is, all areas other than the motion target regions, are set to 0.
From these weight assignments, a moving-target weight map, i.e. a weighted moving-target mask, is generated for every image to be stitched; it is used to extract the motion target regions from the image.
In the two-image stitching diagram of Fig. 6, Fig. 6(a) and Fig. 6(c) are the left and right images to be stitched that share an overlapping region, and the boxes mark the overlapping region. Fig. 6(b) and Fig. 6(d) are the moving-target weight maps of the overlapping regions of Fig. 6(a) and Fig. 6(c) respectively, and Fig. 6(e) is the image obtained by stitching Fig. 6(a) and Fig. 6(c).
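The case analysis above can be sketched as a weight-map builder. The large-motion weight W = θ · T is an assumed reading of the fourth formula (whose original rendering is an image not reproduced in the text); masks, states, and normalised saliencies t1, t2 are taken as given inputs:

```python
import numpy as np

THETA = 255  # weight value theta chosen in the embodiment

def build_weight_maps(shape, pairs):
    """Build the per-image weight maps W1, W2 of step 4.2. `pairs`
    lists (mask1, mask2, state, t1, t2): boolean target masks in Img1
    and Img2, the motion state, and the normalised saliencies
    T(Oi), T(Oj). Background and static targets keep weight 0."""
    w1, w2 = np.zeros(shape), np.zeros(shape)
    for m1, m2, state, t1, t2 in pairs:
        if state == "disappeared":       # no match in the other image
            w1[m1] = THETA
        elif state == "large motion":    # assumed form W = THETA * T
            w1[m1] = THETA * t1
            w2[m2] = THETA * t2
        elif state == "small motion":    # merge the two target regions
            w1[m1 | m2] = THETA
            w2[m1 | m2] = THETA
        # "static": weight stays 0
    return w1, w2
```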
Preferably, step 5 is specifically implemented as follows:
using a graph-cut energy optimization algorithm and the moving-target weight maps, the optimal stitching seam between the overlapping images to be stitched is obtained, and stitching along it yields the panoramic image.
Specifically, for two images Img1 and Img2 to be stitched that share an overlapping region, the graph-cut energy optimization defines the energy E(I) of the final stitched panoramic image I as the sum of a data energy term Edata(I) and a smoothness energy term Esmooth(I). E(I) expresses the total energy consumed by the stitching seam as it passes through the images; finding the optimal seam is the process of finding the minimum of E(I). E(I) is expressed by the fifth formula:
E(I) = Edata(I) + Esmooth(I)
The data energy term Edata(I) is expressed by the sixth formula:
Edata(I) = Σp∈I Edata(p)
where the per-pixel cost Edata(p) equals E1data(p) when pixel p of the panoramic image I is taken from Img1, and E2data(p) when it is taken from Img2. The motion target regions are extracted from the moving-target weight maps; E1data(p) and E2data(p) are computed differently for the motion target regions and for the non-motion regions, where a non-motion region means all areas of an image to be stitched other than the motion target regions.
For the motion target regions Oi and Oj of Img1 and Img2, E1data(p) and E2data(p) are determined by the weights W1(Oi) and W2(Oj) of the motion target region containing pixel p. The energy consumed by a pixel p from a motion target region is expressed by the seventh formula:
E1data(p) = T · W1(Oi), E2data(p) = T · W2(Oj)
where T is a penalty coefficient balancing the data energy term and the smoothness energy term; in this embodiment T is set to 100.
For the non-motion regions of Img1 and Img2: if pixel p lies only in Img1, E1data(p) is set to 0 and E2data(p) to infinity; if pixel p lies only in Img2, E1data(p) is set to infinity and E2data(p) to 0. The energy consumed by a pixel p from a non-motion region is thus expressed by the eighth formula:
E1data(p) = 0 if p ∈ Img1, ∞ otherwise; E2data(p) = 0 if p ∈ Img2, ∞ otherwise
By the eighth formula, the data energy is finite only where pixel p lies in both Img1 and Img2; elsewhere it is infinite. Minimizing E(I) therefore constrains the stitching seam to pass through the overlapping region of Img1 and Img2.
The smoothness energy term Esmooth(I) represents the energy consumed by all neighbouring pixels and is expressed by the ninth formula:
Esmooth(I) = Σ(p,q)∈N(I) σ(p, q) · Esmooth(p, q)
where N(I) is the set of all neighbouring pixel pairs in the stitched panoramic image I and (p, q) is a pair of adjacent pixels. The factor σ(p, q) indicates whether pixels p and q are identical: σ(p, q) is set to 0 if they are the same and to 1 otherwise. Esmooth(p, q) is the smoothness energy between p and q; its design sums gray-level difference, gradient difference, and texture complexity, so that in the non-motion regions the stitching seam passes, as far as possible, through areas of uniform gray level and low texture complexity.
Gray level, gradient difference, and texture complexity can be computed with prior-art methods.
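A minimal sketch of the pairwise smoothness cost for horizontal neighbour pairs in the overlap of two aligned grayscale images; it combines gray-level difference and gradient difference, while the texture-complexity term of the full design is omitted here for brevity:

```python
import numpy as np

def horizontal_smooth_cost(img1, img2):
    """Cost of cutting between columns x and x+1: gray-level difference
    of the two source images at both pixels of the pair, plus the
    difference of their horizontal gradients."""
    d = np.abs(img1.astype(float) - img2.astype(float))
    g1 = np.diff(img1.astype(float), axis=1)
    g2 = np.diff(img2.astype(float), axis=1)
    return d[:, :-1] + d[:, 1:] + np.abs(g1 - g2)
```

Where the two images agree the cost is zero, so a seam is free to pass; where they disagree the seam is penalised, matching the intent of the ninth formula.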
The graph-cuts energy optimization method is applied to the data energy term Edata(I), the smoothness energy term Esmooth(I), and the total energy E(I) of the overlapping-region image. The optimization methods include the α-expansion and α-β-swap algorithms; the whole algorithm iterates until E(I) reaches its minimum and no longer decreases, yielding the optimal stitching seam. The concrete implementation follows the prior art.
Fig. 7 is a simplified diagram of seam search with graph-cut energy optimization. The squares are pixels of the overlapping region of the two images to be stitched, and S and T are the terminal vertices. An edge exists between every two adjacent pixels; its weight is determined by the smoothness energy term above, and the thickness of an edge represents the magnitude of that energy (a thicker edge means a larger energy). The weight of an edge between a pixel and a terminal vertex is determined by the data energy term. The total energy represents the cost consumed by the seam as it passes between pixels; cutting the graph at minimum energy yields the optimal stitching seam, and the cut in the figure is that seam.
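The patent optimizes with graph cuts (α-expansion / α-β-swap). For the special case of a single seam between two images, the minimum cut through the overlap corresponds to a minimum-cost top-to-bottom path, which the following dynamic-programming sketch finds on a combined per-pixel cost map; this is a simplified stand-in for illustration, not the patent's optimizer:

```python
import numpy as np

def best_seam(cost):
    """Minimum-cost vertical seam through a per-pixel cost map
    (data plus smoothness cost folded into one map). Returns the
    seam's column index for each row."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(acc[y - 1], np.minimum(left, right))
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):       # backtrack among the 3 neighbours
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam
```

Pixels to the left of the seam would then be taken from one image and pixels to the right from the other, so the seam avoids high-weight (moving-target) regions.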
According to the optimal stitching seam, the image regions of the images to be stitched that contribute to the panorama are obtained, and these regions are stitched to produce the panoramic image.
In the 360-degree panoramic stitching diagram of Fig. 8, Fig. 8(a) shows the raw images to be stitched and Fig. 8(b) the final panoramic stitching result.
As shown in Fig. 9, the invention provides a panoramic image stitching system, the system comprising:
a correction module, which corrects several sequentially arranged images to be stitched, adjacent ones of which share an overlapping region, into the same coordinate system;
a segmentation module, which segments each corrected image to be stitched and obtains all moving targets in it;
a determination module, which performs motion-state determination on the moving targets and obtains a motion-state determination result;
a detection module, which performs saliency detection on the moving targets, determines their saliency relative to the image background, and, combined with the motion-state determination result, assigns a corresponding weight to each moving target to obtain a moving-target weight map;
a stitching module, which obtains, from the moving-target weight maps, the optimal stitching seam between each two images to be stitched that share an overlapping region, and stitches along the optimal seams to obtain the panoramic image.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in its scope of protection.

Claims (10)

1. A panoramic image stitching method, characterized in that the method comprises:
Step 1: correcting several sequentially arranged images to be stitched, adjacent ones of which share an overlapping region, into the same coordinate system;
Step 2: segmenting each corrected image to be stitched to obtain all moving targets in it;
Step 3: performing motion-state determination on the moving targets to obtain a motion-state determination result;
Step 4: performing saliency detection on the moving targets to determine their saliency relative to the image background, and, combined with the motion-state determination result, assigning a corresponding weight to each moving target to obtain a moving-target weight map;
Step 5: obtaining, from the moving-target weight maps, the optimal stitching seam between each two images to be stitched that share an overlapping region, and stitching along the optimal seams to obtain the panoramic image.
2. The panoramic image stitching method according to claim 1, characterized in that step 2 specifically comprises:
Step 2.1: cutting each image to be stitched vertically into several block images, adjacent ones of which share an overlapping region;
Step 2.2: segmenting the block images with a deep-learning instance segmentation network to obtain the moving targets in the block images and the position coordinates of each moving target in the image to be stitched.
3. The full-view image joining method according to claim 2, characterized in that segmenting the block images using the deep-learning-based instance segmentation network is specifically implemented as follows:
after the overlapping regions of the block images are segmented, merging each repeatedly detected moving target into a single moving target.
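The merging of repeated detections in the shared regions of adjacent block images could be approximated by a greedy IoU-based union merge such as the sketch below; the 0.5 threshold and the (x1, y1, x2, y2) box representation are assumptions for illustration, not details from the patent.

```python
def bbox_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_duplicates(boxes, thresh=0.5):
    """Greedily merge detections that overlap heavily, so a target
    detected twice across adjacent block images becomes one moving
    target (the union of its boxes)."""
    merged = []
    for box in boxes:
        for k, m in enumerate(merged):
            if bbox_iou(box, m) > thresh:
                merged[k] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged
```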
4. The full-view image joining method according to claim 3, characterized in that step 3 specifically comprises:
Step 3.1: according to the moving targets and the position coordinates of the moving targets in the images to be stitched, for any two images to be stitched that share an overlapping region, matching the moving targets in one image to be stitched with the moving targets in the other image to be stitched;
Step 3.2: performing motion-state judgment on the moving targets according to the matching results and a predetermined criterion.
5. The full-view image joining method according to claim 4, characterized in that the predetermined criterion is expressed by a first formula, the first formula being:

m(i, j) = p(i) * exp(r(i, j))

where i and j respectively denote moving targets in the two images to be stitched that share an overlapping region, m(i, j) denotes the motion state of the moving target i and the moving target j, and p(i) denotes the matching result of the moving target i and the moving target j, taking the value 1 on a successful match and 0 otherwise; r(i, j) denotes the minimum-bounding-box overlap degree of the successfully matched moving target i and moving target j, expressed by a second formula:

r(i, j) = o(i, j) / (a(i) + a(j) - o(i, j))

where o(i, j) denotes the overlapping area between the moving target i and the moving target j, and a(i) and a(j) respectively denote the minimum-bounding-box areas of the moving target i and the moving target j, a bounding box being the smallest box that can completely enclose an object.
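The first and second formulas of claim 5 transcribe directly into code; the only assumption is that the minimum bounding boxes are axis-aligned and given as (x1, y1, x2, y2).

```python
import math

def min_bbox_overlap(box_i, box_j):
    """Second formula: r(i, j) = o(i, j) / (a(i) + a(j) - o(i, j)),
    the minimum-bounding-box overlap degree (intersection over union)."""
    ox1, oy1 = max(box_i[0], box_j[0]), max(box_i[1], box_j[1])
    ox2, oy2 = min(box_i[2], box_j[2]), min(box_i[3], box_j[3])
    o = max(0, ox2 - ox1) * max(0, oy2 - oy1)   # overlapping area o(i, j)
    a_i = (box_i[2] - box_i[0]) * (box_i[3] - box_i[1])
    a_j = (box_j[2] - box_j[0]) * (box_j[3] - box_j[1])
    return o / (a_i + a_j - o)

def motion_state(p, box_i, box_j):
    """First formula: m(i, j) = p(i) * exp(r(i, j)), where p is 1 on a
    successful match and 0 otherwise."""
    return p * math.exp(min_bbox_overlap(box_i, box_j))
```

For identical boxes of a matched pair, r = 1 and m = e; an unmatched pair gives m = 0.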
6. The full-view image joining method according to claim 1, characterized in that step 4 specifically comprises:
Step 4.1: determining the saliency of the moving targets relative to the image background according to the HC saliency detection method;
Step 4.2: assigning weights to the moving targets according to the saliency, in combination with the motion-state judgment results obtained in step 3, to obtain the moving-target weight map of the image to be stitched.
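A much-reduced grayscale sketch of histogram-contrast (HC) saliency follows. The published HC method operates on quantized Lab colours with colour-space smoothing; working on a 256-bin grayscale histogram is a simplifying assumption for illustration only.

```python
import numpy as np

def hc_saliency(gray):
    """Simplified HC saliency for a uint8 grayscale image: the saliency
    of each pixel is the frequency-weighted distance of its intensity to
    every other intensity present in the image, so rare intensities
    (targets unlike the background) score high."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    levels = np.arange(256, dtype=float)
    # saliency per intensity level: sum_j hist[j] * |l_i - l_j|
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    return sal_per_level[gray]
```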
7. The full-view image joining method according to claim 6, characterized in that the motion-state judgment results comprise four states: disappeared, large-range motion, small-range motion, and no motion, and that assigning weights to the moving targets in combination with the motion-state judgment results obtained in step 3 is specifically implemented as follows:
let Img1 and Img2 be any two images to be stitched that share an overlapping region, and let i and j respectively be moving targets in the image to be stitched Img1 and the image to be stitched Img2;
for a moving target i in the image to be stitched Img1, judging according to the motion-state judgment results: if the moving target i is judged to have disappeared, that is, no moving target matching the moving target i exists in the image to be stitched Img2 sharing the overlapping region with the image to be stitched Img1, assigning the moving target i a weight W1(i) = θ;
if a moving target j matching the moving target i exists in the image to be stitched Img2, then, when the moving target i and the moving target j are judged to be in large-range motion, comparing the saliency of the moving target i and the moving target j, where H(Ok) denotes the final saliency measure of any moving target k in any image to be stitched Img, the final saliency measure being expressed by a third formula:

H(Ok) = ( Σ over x in Ok of HC(x) ) / cnt(Ok)

where Ok denotes the region belonging to the moving target k in the image to be stitched Img, defined as the moving-target region, HC(x) denotes the single-pixel saliency value obtained by the HC algorithm for a pixel x located in the moving-target region Ok, and cnt(Ok) denotes the number of pixels in the moving-target region Ok; the final saliency measures H(Oi) and H(Oj) of the moving target i and the moving target j are obtained respectively according to the third formula, Oi being the moving-target region to which the moving target i belongs in the image to be stitched Img1, and Oj being the moving-target region to which the moving target j belongs in the image to be stitched Img2;
normalizing H(Oi) and H(Oj) to obtain normalized values T(Oi) and T(Oj); the weight W1(Oi) of the moving target i and the weight W2(Oj) of the moving target j are expressed by a fourth formula in terms of T(Oi), T(Oj), and θ, where θ is a preset weight value;
when the matched moving target i and moving target j are judged to be in small-range motion, merging the moving target i and the moving target j and assigning weights W1(Oi ∪ Oj) = θ and W2(Oi ∪ Oj) = θ;
when the matched moving target i and moving target j are judged to be not moving, assigning the moving target i and the moving target j weights W1(Oi) = 0 and W2(Oj) = 0, respectively;
meanwhile, the weights of the remaining non-moving-target regions of the image to be stitched Img1 and the image to be stitched Img2 are set to 0.
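The four-state weight assignment of claim 7 could be assembled into a weight map as below. Since the fourth formula is not reproduced in this text, the large-range-motion branch simply uses the normalised saliency T(O) as the weight, which is an assumption for illustration only, and θ = 0.5 is an arbitrary placeholder value.

```python
import numpy as np

THETA = 0.5  # placeholder for the preset weight value θ (not given in the text)

def weight_map(shape, targets):
    """Assemble the moving-target weight map of one image to be stitched.
    `targets` is a list of (box, state, sal) triples, with box as
    (x1, y1, x2, y2), state one of "disappear" / "large" / "small" /
    "none", and sal the normalised saliency T(O) of the target."""
    w = np.zeros(shape, dtype=float)
    for (x1, y1, x2, y2), state, sal in targets:
        if state in ("disappear", "small"):
            w[y1:y2, x1:x2] = THETA   # W = θ, as in the claim
        elif state == "large":
            w[y1:y2, x1:x2] = sal     # hypothetical use of T(O) as the weight
        # "none": the region keeps weight 0, as in the claim
    return w                           # non-target regions also stay 0
```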
8. The full-view image joining method according to any one of claims 1-7, characterized in that step 5 is specifically implemented as follows:
using a graph-cut energy optimization algorithm, obtaining, according to the moving-target weight maps, the optimal seam line between the images to be stitched that share an overlapping region, and performing stitching along the optimal seam line to obtain the full-view image.
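To illustrate seam selection driven by a per-pixel cost such as the moving-target weight map: the claim itself uses a graph-cut energy optimization, but the dynamic-programming seam below is a simplified stand-in, not the patented method. High weights on moving targets make the seam route around them.

```python
import numpy as np

def seam_line(cost):
    """Find a top-to-bottom seam of minimum total cost through the
    overlap region by dynamic programming (seam-carving style). Each row
    the seam moves at most one column left or right."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    # backtrack from the cheapest bottom cell
    seam = [int(np.argmin(acc[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]  # column index of the seam in each row
```

With a zero-cost column in an otherwise uniform cost map, the seam follows that column exactly.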
9. The full-view image joining method according to any one of claims 1-7, characterized in that step 1 is specifically implemented as follows: adjusting the images to be stitched into the same coordinate system using the PTGui software.
10. A full-view image splicing system, characterized in that the system comprises:
Correction module: adjusts a plurality of sequentially arranged images to be stitched, adjacent ones of which share overlapping regions, into the same coordinate system;
Segmentation module: segments each corrected image to be stitched to obtain all moving targets in that image;
Judgment module: performs motion-state judgment on the moving targets to obtain motion-state judgment results;
Detection module: performs saliency detection on the moving targets to determine the saliency of each moving target relative to the image background, and assigns corresponding weights to the moving targets in combination with the motion-state judgment results to obtain moving-target weight maps;
Splicing module: obtains, according to the moving-target weight maps, the optimal seam line between two images to be stitched that share an overlapping region, and performs stitching along the optimal seam line to obtain the full-view image.
CN201810811294.4A 2018-07-23 2018-07-23 A kind of full-view image joining method and system Withdrawn CN109146782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810811294.4A CN109146782A (en) 2018-07-23 2018-07-23 A kind of full-view image joining method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810811294.4A CN109146782A (en) 2018-07-23 2018-07-23 A kind of full-view image joining method and system

Publications (1)

Publication Number Publication Date
CN109146782A true CN109146782A (en) 2019-01-04

Family

ID=64801349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810811294.4A Withdrawn CN109146782A (en) 2018-07-23 2018-07-23 A kind of full-view image joining method and system

Country Status (1)

Country Link
CN (1) CN109146782A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111312403A (en) * 2020-01-21 2020-06-19 山东师范大学 Disease prediction system, device and medium based on instance and feature sharing cascade
CN112184541A (en) * 2019-07-05 2021-01-05 杭州海康威视数字技术股份有限公司 Image splicing method, device and equipment and storage medium
CN113344957A (en) * 2021-07-19 2021-09-03 北京城市网邻信息技术有限公司 Image processing method, image processing apparatus, and non-transitory storage medium


Similar Documents

Publication Publication Date Title
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
CN111899334B (en) Visual synchronous positioning and map building method and device based on point-line characteristics
Shin et al. Vision-based navigation of an unmanned surface vehicle with object detection and tracking abilities
CN108304798B (en) Street level order event video detection method based on deep learning and motion consistency
Coughlan et al. The manhattan world assumption: Regularities in scene statistics which enable bayesian inference
US9245345B2 (en) Device for generating three dimensional feature data, method for generating three-dimensional feature data, and recording medium on which program for generating three-dimensional feature data is recorded
CN109961399B (en) Optimal suture line searching method based on image distance transformation
CN110473221B (en) Automatic target object scanning system and method
WO2010017255A2 (en) Cut-line steering methods for forming a mosaic image of a geographical area
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN109146782A (en) A kind of full-view image joining method and system
CN108681711A (en) A kind of natural landmark extracting method towards mobile robot
CN117036641A (en) Road scene three-dimensional reconstruction and defect detection method based on binocular vision
CN113744315B (en) Semi-direct vision odometer based on binocular vision
Cherian et al. Accurate 3D ground plane estimation from a single image
Xiao et al. Geo-spatial aerial video processing for scene understanding and object tracking
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
CN110717910B (en) CT image target detection method based on convolutional neural network and CT scanner
CN113096016A (en) Low-altitude aerial image splicing method and system
Haines et al. Estimating Planar Structure in Single Images by Learning from Examples.
CN116958837A (en) Municipal facilities fault detection system based on unmanned aerial vehicle
D’Amicantonio et al. Homography estimation for camera calibration in complex topological scenes
Mecocci et al. Outdoor scenes interpretation suitable for blind people navigation
Park et al. Line-based single view 3D reconstruction in Manhattan world for augmented reality
CN117223034A (en) Method and apparatus for generating an countermeasure patch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190104