CN109920055A - Construction method and apparatus for a 3D visual map, and electronic device - Google Patents

Construction method and apparatus for a 3D visual map, and electronic device

Info

Publication number
CN109920055A
CN109920055A (Application CN201910173949.4A)
Authority
CN
China
Prior art keywords
dimensional map
image
map
pixel region
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910173949.4A
Other languages
Chinese (zh)
Inventor
王强
张小军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EASYAR INFORMATION TECHNOLOGY (SHANGHAI) Co Ltd
Original Assignee
EASYAR INFORMATION TECHNOLOGY (SHANGHAI) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EASYAR INFORMATION TECHNOLOGY (SHANGHAI) Co Ltd
Priority to CN201910173949.4A
Publication of CN109920055A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present invention provides a construction method and apparatus for a 3D visual map, and an electronic device. The method comprises: obtaining a first image, the first image being collected by a first terminal; determining a target pixel region in the first image, the target pixel region being the pixel region of the first image other than dynamic objects, where a dynamic object denotes at least part of an object in the first image that is determined to be capable of changing position dynamically; extracting feature elements in the target pixel region; and constructing a first three-dimensional map according to the feature elements of multiple first images collected at different times. In addition, a server can fuse the three-dimensional maps sent by individual terminals to obtain a first fused three-dimensional map. The present invention can effectively reduce map construction error and improve map construction accuracy.

Description

Construction method and apparatus for a 3D visual map, and electronic device
Technical field
The present invention relates to the field of map construction, and in particular to a construction method and apparatus for a 3D visual map, and an electronic device.
Background technique
A 3D visual map is one of the basic capabilities in the field of augmented reality. Devices such as smartphones, image acquisition apparatuses, and computers can collect image information of the environment, and 3D vision techniques can then be used to construct a three-dimensional live map of the environment. 3D visual maps are key infrastructure for augmented reality applications such as visual navigation, tour guidance, and virtual tourism.
In the related art, existing 3D visual map construction is easily disturbed by moving objects. Moving objects such as pedestrians and vehicles change position in real time, which introduces map construction errors and degrades accuracy, so these dynamic objects need to be taken into account when building the map. On the other hand, the real environment changes continuously over time; if the map cannot be updated in a timely manner, map error grows and seriously affects map usability.
Summary of the invention
The present invention provides a construction method and apparatus for a 3D visual map, and an electronic device, to solve the problem of poor map construction accuracy.
According to an embodiment of the present invention, a construction method for a 3D visual map is provided, comprising:
obtaining a first image, the first image being collected by a first terminal;
determining a target pixel region in the first image, the target pixel region being the pixel region of the first image other than dynamic objects, where a dynamic object denotes at least part of an object in the first image that is determined to be capable of changing position dynamically;
extracting feature elements in the target pixel region; and
constructing a first three-dimensional map according to the feature elements of multiple first images collected at different times.
According to an embodiment of the present invention, a construction method for a 3D visual map applied to a server is further provided, comprising:
receiving three-dimensional maps sent by different terminals, each three-dimensional map being constructed by the respective terminal according to the feature elements of images collected at different times, the feature elements of an image being extracted in the target region of the image, the target region being the pixel region of the image other than dynamic objects, and a dynamic object denoting at least part of an object in the image that is determined to be capable of changing position dynamically; and
fusing the three-dimensional maps sent by the terminals to obtain a first fused three-dimensional map.
The embodiment of the invention also provides a kind of construction devices of 3D vision map, comprising:
Module is obtained, for obtaining the first image;The first image is that first terminal is collected;
Area-of-interest determining module, for determining target pixel region in the first image;The object pixel Region is the pixel region in the first image other than dynamic object;The dynamic object is for characterizing in the first image It is defined as that at least partly object of dynamic position variation can occur;
Element extraction module, for extracting characteristic element in the target pixel region;
Map structuring module, for the characteristic element according to different moments collected multiple first images, building first Three-dimensional map.
The embodiment of the invention also provides a kind of construction devices of 3D vision map, are applied to server, comprising:
Receiving module, for receiving the three-dimensional map of different terminals transmission, the three-dimensional map is the terminal according to not The characteristic element of acquired image constructs in the same time;The characteristic element of described image is in the target area of described image It extracts, the target area of described image is the pixel region in described image other than dynamic object;
First Fusion Module, the three-dimensional map sent for merging each terminal, obtains three-dimensional map after the first fusion.
An embodiment of the present invention further provides an electronic device, comprising a memory and a processor, wherein:
the memory is configured to store code and related data; and
the processor is configured to execute the code in the memory to implement any of the foregoing method steps.
The construction method and apparatus for a 3D visual map and the electronic device provided by the embodiments of the present invention achieve the following beneficial effects:
The method, apparatus, and electronic device provided by the present invention can determine, before extracting feature elements, the target pixel region of the image other than dynamic objects. By excluding the corresponding portions from the map, the construction of the map is not affected by these dynamic objects, which effectively reduces map construction error and improves map construction accuracy.
In the present invention, a mapping strategy in which locally robust maps are built on terminals and fused by a server avoids the need for expensive acquisition equipment and effectively saves cost.
In optional solutions of the present invention, corresponding map deletion and map update strategies can be carried out, guaranteeing the timeliness and accuracy of the 3D visual map information.
Detailed description of the invention
Fig. 1 is a schematic diagram of an application scenario of an embodiment of the present invention;
Fig. 2 is a first flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention;
Fig. 3 is a second flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention;
Fig. 4 is a third flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention;
Fig. 5 is a fourth flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention;
Fig. 6 is a fifth flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention;
Fig. 7 is a first flow diagram of the server-based construction method for a 3D visual map provided by an embodiment of the present invention;
Fig. 8 is a second flow diagram of the server-based construction method for a 3D visual map provided by an embodiment of the present invention;
Fig. 9 is a first structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention;
Fig. 10 is a second structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention;
Fig. 11 is a third structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention;
Fig. 12 is a fourth structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention;
Fig. 13 is a fifth structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention;
Fig. 14 is a first structural diagram of the server-based construction apparatus for a 3D visual map provided by an embodiment of the present invention;
Fig. 15 is a second structural diagram of the server-based construction apparatus for a 3D visual map provided by an embodiment of the present invention;
Fig. 16 is a structural diagram of the electronic device provided by an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. The terms "system" and "network" are often used interchangeably herein.
Fig. 1 is a schematic diagram of an application scenario of an embodiment of the present invention.
Referring to Fig. 1, the embodiments of the present invention are applicable to a system comprising at least one terminal 1 and a server 2. The terminal may be any device configured with a memory and a processor and capable of executing programs, such as a mobile phone, tablet computer, computer, in-vehicle device, or robot.
Furthermore, if terminal 1 is a mobile phone, the camera imaging and sensing capabilities of the phone can be fully exploited to obtain accurate pose information of the terminal, so that, unlike other scenarios, additional sensors such as lidar may be unnecessary. Using a mobile phone as terminal 1 also makes full use of the computing power of the phone's processor: a small visual map can be built on the phone itself, without external, complex, and expensive computing hardware such as a GPU. A mobile-phone-based solution therefore helps reduce cost and suits a wide variety of scenarios without being easily constrained.
Fig. 2 is a first flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 2, the terminal-based construction method for a 3D visual map comprises:
S101: obtaining a first image.
The first image is collected by a first terminal.
Since the method of this embodiment is implemented by a terminal, the first image may be the raw image captured by an image acquisition apparatus carried by the terminal, such as a camera, or an image that has been compressed or otherwise processed after capture.
Depending on the acquisition mode, equipment, and processing pipeline, the format and data structure of the obtained first image may also vary: it may, for example, have been compressed one or more times or not at all, and may have undergone preset processing or none.
S102: determining a target pixel region in the first image.
The target pixel region can be understood as the pixel region of the first image other than dynamic objects.
A dynamic object denotes at least part of an object in the first image that is determined to be capable of changing position dynamically. Dynamic objects may include at least one of: people, animals, and movable objects; movable objects may include, for example, at least one of: vehicles, movable toys, movable models, various tools capable of moving or flying, robots, and so on. Any entity capable of moving falls within the description of dynamic objects in this embodiment.
The determination may be implemented using any object recognition or object segmentation means, for example a trained neural network or recognition model.
Fig. 3 is a second flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention.
In one embodiment, step S102 may comprise:
S1021: segmenting the non-target pixel region and the target pixel region in the first image by means of semantic segmentation, so as to determine the target pixel region.
The non-target pixel region is understood by contrast with the target pixel region: it is the pixel region of the dynamic objects in the first image. Semantic segmentation may be implemented with a semantic segmentation model, for example a convolutional neural network whose inter-layer mappings are determined by training on various material.
Through object detection and semantic segmentation on the terminal, dynamic objects such as pedestrians and vehicles can be detected directly, and the corresponding portions can then be excluded from map construction, so that the map is not affected by these dynamic objects.
In a specific implementation, the first image can be fed into the semantic segmentation model, which outputs the positions and bounding boxes of dynamic objects such as pedestrians and vehicles; the target and non-target pixel regions can be divided accordingly.
If the feature elements are feature points: in the subsequent steps, feature elements such as feature points are extracted from the image, and the three-dimensional positions of these points are then calculated by matching and triangulation. Before that, semantic segmentation can yield a mask for each image frame, in which each pixel corresponds to a category of interest such as vehicle or pedestrian, i.e., a category of dynamic object. No feature points are extracted in the masked regions (see the sketch below), so those regions are never triangulated and do not appear in the three-dimensional map.
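The following is a minimal sketch of this masking step. It assumes OpenCV ORB features and a per-frame segmentation mask whose non-zero pixels mark dynamic-object categories; the function name and the choice of detector are illustrative, not prescribed by this embodiment.

```python
import cv2
import numpy as np

def extract_static_features(image, dynamic_mask):
    """Extract feature points only in the target pixel region.

    dynamic_mask: uint8 array, non-zero where semantic segmentation
    labeled a dynamic object (pedestrian, vehicle, ...).
    """
    # OpenCV detectors accept a mask of pixels that MAY be used,
    # so invert the dynamic-object mask to keep only the static region.
    static_mask = np.where(dynamic_mask > 0, 0, 255).astype(np.uint8)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(image, static_mask)
    return keypoints, descriptors
```

Because masked pixels never produce keypoints, the dynamic regions contribute nothing to later matching and triangulation, which is exactly the exclusion described above.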
S103: extracting feature elements in the target pixel region.
S104: constructing a first three-dimensional map according to the feature elements of multiple first images collected at different times.
A feature element can be understood as any reference element usable for three-dimensional map construction, and may specifically include at least one of: feature points, feature lines, and feature marks. Feature lines may be, for example, horizontal or vertical lines; feature marks may be, for example, any content that can be characterized in two-dimensional pixels.
In addition, steps S103 and S104 of this embodiment may use any map construction approach in the field.
Before extracting feature elements, this embodiment can determine the target pixel region of the image other than dynamic objects. By excluding the corresponding portions from the map, map construction is not affected by these dynamic objects, which effectively reduces map construction error and improves map construction accuracy.
Referring to Fig. 3, in one embodiment, the pose of the terminal may change as it moves; that is, the Nth pose information of the Nth frame image among the multiple images differs from the Mth pose information of the Mth frame image, where the Nth pose information characterizes the position and attitude of the first terminal when collecting the Nth frame image, and the Mth pose information characterizes the position and attitude of the first terminal when collecting the Mth frame image.
Then, after step S103 and before step S104, the method may further comprise:
S105: determining a same feature element in the Nth frame image and the Mth frame image.
The same feature element may refer to a single feature element or to multiple feature elements.
S106: calculating, by triangulation, the position of the same feature element in three-dimensional space according to the Nth pose information and the Mth pose information, to obtain the three-dimensional position information of the same feature element.
If the terminal is a mobile phone, the pose information may be obtained from information collected by the phone's camera, inertial sensors, and the like, for example the six-degree-of-freedom position and attitude of the phone in space, tracked in real time.
The triangulation may be any conventional triangulation used in SLAM to solve the coordinates of a location point, for example:
Given two image frames I1 and I2 with corresponding poses P1 and P2.
Extract key points and their descriptors from each frame. All resulting three-dimensional points together form the local small visual map; since the extracted key points are sparsely distributed in the image, this 3D visual map is commonly called a sparse map. The key points here can also be understood as the feature points mentioned above.
Obtain the corresponding point pairs between the two frames by matching.
For each corresponding point pair, the three-dimensional position Pos of a location point in space can be calculated by triangulation, as in the sketch below.
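A minimal sketch of linear (DLT) triangulation for one corresponding point pair follows. It assumes 3x4 projection matrices P1 and P2 (intrinsics already applied) for frames I1 and I2, and pixel observations x1 and x2; the function name and this particular solver are illustrative, since the embodiment allows any conventional SLAM triangulation.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """DLT triangulation of one corresponding point pair.

    P1, P2: 3x4 camera projection matrices for frames I1 and I2.
    x1, x2: (u, v) pixel observations of the same feature point.
    Returns the 3D position Pos as a length-3 array.
    """
    # Each observation contributes two rows to the homogeneous system A*X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to get Pos
```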
Correspondingly, step S104 may comprise:
S1041: constructing the first three-dimensional map according to the three-dimensional position information of each feature element.
Fig. 4 is a third flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 4, in one embodiment, after step S104 the method may comprise:
S107: sending the first three-dimensional map to a server, so that the server can fuse the first three-dimensional map with another, second three-dimensional map to obtain a first fused three-dimensional map.
The second three-dimensional map is determined according to second images collected by another, second terminal. The relationship between the second three-dimensional map and the second images can be understood by reference to the relationship between the first three-dimensional map and the first images; the second images are the basis on which the second terminal constructs the second three-dimensional map. That is, the relationship among the second terminal, the second images, and the second three-dimensional map mirrors that among the first terminal, the first images, and the first three-dimensional map.
The first and second three-dimensional maps may be any two different three-dimensional maps generated by two different terminals, so at fusion time it must also be determined whether the two maps can be merged. Therefore, in one embodiment, when sending the first three-dimensional map to the server, the method may further comprise: sending first location information of the first three-dimensional map to the server, so that the server fuses the first three-dimensional map and the second three-dimensional map when the first location information is associated with second location information of the second three-dimensional map.
The location information may characterize a position range, or one or more location points; any information that enables the judgment of whether two three-dimensional maps are associated falls within the description of this embodiment.
The location information may be sent by the terminal or obtained by the server's automatic scanning; it may be collected by the terminal itself or by other equipment, or calculated from other collected information, including other information collected by the terminal.
"Associated" may mean, for example, that the two three-dimensional maps have overlapping feature elements, that the position ranges they describe overlap, or that the two three-dimensional maps are sufficiently close to each other.
In one embodiment, the fusion of the first and second three-dimensional maps may proceed as follows:
The first fused three-dimensional map is obtained by applying a coordinate change, according to a mapping-conversion relationship, to at least part of the location points in the first three-dimensional map and converting them into the second three-dimensional map; alternatively, it is obtained by applying a coordinate change, according to the mapping-conversion relationship, to at least part of the location points in the second three-dimensional map and converting them into the first three-dimensional map. The mapping-conversion relationship is determined according to matched feature elements in the first and second three-dimensional maps.
In other words, fusion requires some reference basis, which can be the matched visual feature elements. One implementation finds the common region of the two small maps, obtains the relative pose relationship of the two maps through the common points (which can be characterized by the mapping-conversion relationship mentioned above), and finally unifies the two maps into the same coordinate system via a coordinate transform.
In a specific implementation, the process may be as follows:
Assume each map is composed of several key frames and several feature points. Maps that may share an overlap region must then be found, i.e., the first three-dimensional map must be associated with the second: location information such as the GPS or WiFi data corresponding to each map is used to judge whether the first and second three-dimensional maps are sufficiently close geographically; if so, fusion is attempted, otherwise no fusion is performed. If there are multiple maps, for example a first, a second, and a third three-dimensional map that are all sufficiently close, fusion proceeds iteratively: first fuse the first three-dimensional map with the second, then fuse the resulting map with the third, and so on. That is, the first fused three-dimensional map is further fused with another, third three-dimensional map to obtain a second fused three-dimensional map, where the third three-dimensional map is determined according to third images collected by another, third terminal. The relationship among the third terminal, the third images, and the third three-dimensional map can be understood by reference to that among the first terminal, the first images, and the first three-dimensional map.
After the fusable first and second three-dimensional maps are found, in one embodiment an image matching algorithm performs visual matching between each frame of the second three-dimensional map and the first three-dimensional map, finds corresponding point pairs, and verifies whether they satisfy the geometric relationship. If a rigid transformation T can be computed that maps the feature points of the second three-dimensional map into the first, the common overlap region is considered found; the rigid transformation T is the mapping from the second three-dimensional map to the first, and can be regarded as characterizing the mapping-conversion relationship mentioned above.
Using this relationship, all key frames and location points of the second three-dimensional map can be coordinate-transformed into the coordinate system of the first three-dimensional map. The converted key frames and key points, together with the key frames and map points of the first three-dimensional map, constitute the fused map, i.e., the first fused three-dimensional map (see the sketch below). A scheme that instead transforms the first three-dimensional map to achieve the merge can be derived similarly.
In addition, the result of the above steps may contain duplicate map points, and errors may be introduced. Joint optimization over the poses of the key frames of the two maps and the map-point relationships can therefore be performed to obtain a more accurate map.
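Below is a minimal sketch of applying the rigid transformation T to bring the second map's points and keyframe poses into the first map's coordinate system. It assumes T is a 4x4 homogeneous matrix estimated from the matched point pairs and that keyframe poses are stored as world-from-camera matrices; the data layout is an assumption for illustration only, and the joint optimization step is not shown.

```python
import numpy as np

def merge_maps(map1_points, map2_points, map2_kf_poses, T):
    """Transform map 2 into map 1's frame and concatenate.

    map1_points, map2_points: (N, 3) arrays of map (location) points.
    map2_kf_poses: list of 4x4 world-from-camera keyframe poses of map 2.
    T: 4x4 rigid transform mapping map-2 coordinates to map-1 coordinates.
    """
    # Transform map-2 points homogeneously: p' = T * [p; 1].
    pts_h = np.hstack([map2_points, np.ones((len(map2_points), 1))])
    pts_in_map1 = (T @ pts_h.T).T[:, :3]

    # Keyframe poses move by left-multiplication with T
    # (world1-from-world2 composed with world2-from-camera).
    kf_in_map1 = [T @ pose for pose in map2_kf_poses]

    merged_points = np.vstack([map1_points, pts_in_map1])
    return merged_points, kf_in_map1
```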
Thus the three-dimensional map constructed by a terminal can be regarded as a local small map that is uploaded to the server for storage, while the local small maps independently created by multiple phones are stitched on the server into a larger 3D visual map, i.e., the fused map.
Fig. 5 is a fourth flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 5, after step S104 the method may further comprise:
S108: after obtaining a new first image, determining the location points in the first three-dimensional map that need no processing, and the location points to be processed.
The location points that need no processing are those among the location points of the first three-dimensional map that are not visible in the new first image, together with those whose distance from the reference center of the first terminal is greater than a first threshold.
S109: updating or deleting the location points to be processed, to obtain a new first three-dimensional map.
The visibility judgment may be performed as follows:
Suppose the position of an existing map point in space is known to be Pos, and the spatial pose of the current image frame is P; it must be judged whether the map point is visible in the current frame, i.e., the new first image. Assuming the terminal's imaging model is a pinhole camera, the homogeneous coordinates of the current map point's projected position in the current frame can be obtained as P*Pos, recorded as px, py.
Assuming the width and height of the image are w and h respectively, if 0 < px < w and 0 < py < h hold, the current map point is visible in the current frame; otherwise the map point is considered not currently observable, and it is neither updated nor deleted.
The judgment of whether the distance exceeds the first threshold may be as follows:
The distance from the map point to the camera center is judged; if it exceeds a preset first threshold d, the map point is considered too far from the camera for a reliable observation, and it is likewise neither updated nor deleted. A sketch of both checks follows this passage.
The map points referred to above can be understood as the location points referred to above; some of these location points may be feature points, location points on feature lines, or location points in feature marks.
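The following is a minimal sketch of the two screening checks, assuming a pinhole model with intrinsics K and a world-to-camera pose (R, t). The normalization by depth before the bounds test, and the behind-camera check, are assumptions spelled out here, since the text abbreviates the projection as P*Pos.

```python
import numpy as np

def needs_no_processing(pos, K, R, t, w, h, d):
    """Return True if a map point is skipped (neither updated nor deleted).

    pos: 3D map point in world coordinates.
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation/translation.
    w, h: image width and height; d: first (distance) threshold.
    """
    p_cam = R @ pos + t                  # point in camera coordinates
    if p_cam[2] <= 0:                    # behind the camera: not visible
        return True
    proj = K @ p_cam
    px, py = proj[0] / proj[2], proj[1] / proj[2]
    if not (0 < px < w and 0 < py < h):  # outside the frame: not visible
        return True
    if np.linalg.norm(p_cam) > d:        # too far: observation unreliable
        return True
    return False
```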
Fig. 6 is a fifth flow diagram of the terminal-based construction method for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 6, after step S104 the method may further comprise:
S110: projecting the location points of the first three-dimensional map onto the plane of the new first image, and determining the projected position corresponding to each location point.
S111: determining at least one key point within a preset range around each projected position.
S112: calculating the Euclidean distance between the descriptor of each location point and the descriptor of each corresponding key point.
S113: if all the Euclidean distances corresponding to a location point are greater than a standard value, determining that the location point is a candidate location point to be deleted and accumulating its statistics count once; and determining, according to the statistics count, whether to delete the corresponding candidate location point.
A candidate location point to be deleted can be understood as a location point that should have been observed but was not. Each time it is so determined, it is counted once, for example by incrementing a counter by 1; if the statistics count exceeds a count threshold, it can be determined that the location point needs to be deleted.
Meanwhile, the frequency with which the candidate location point is "observed" must also be lower than a preset frequency threshold, where "observed" here means being determined as a candidate location point to be deleted, and the frequency can be understood as the number of times it is so determined per unit time within a certain period.
The above judgment can thus be stated as: if the count of a point that should be observed but is not exceeds a preset threshold N, and the observed frequency is lower than the preset frequency threshold, the map point is considered to have changed and needs to be deleted from the map. The sketch below illustrates this bookkeeping.
Steps S110 to S113 above can be carried out after step S109.
Through the above steps, this embodiment can, over multiple scans and according to new observation information, treat regions of the map that should be observed repeatedly but are not as having changed, and delete the corresponding information from the map.
In addition, this embodiment can compare against the existing map: portions that are observed repeatedly but are not contained in the existing map are considered new parts of the map and are added to it, completing the map update.
In summary, the method provided by this embodiment can determine, before extracting feature elements, the target pixel region of the image other than dynamic objects. By excluding the corresponding portions from the map, map construction is not affected by these dynamic objects, which effectively reduces map construction error and improves map construction accuracy.
In addition, the strategy of building locally robust maps on terminals and fusing them on a cloud server avoids expensive acquisition equipment. With the information from follow-up scans, corresponding map deletion and map update strategies are carried out, guaranteeing the timeliness of the 3D visual map information.
Fig. 7 is a first flow diagram of the server-based construction method for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 7, the server-based construction method for a 3D visual map comprises:
S201: receiving three-dimensional maps sent by different terminals.
Each three-dimensional map is constructed by the respective terminal according to the feature elements of images collected at different times; the feature elements of an image are extracted in the target region of the image, the target region being the pixel region of the image other than dynamic objects, where a dynamic object denotes at least part of an object in the image that is determined to be capable of changing position dynamically.
S202: fusing the three-dimensional maps sent by the terminals to obtain a first fused three-dimensional map.
Fig. 8 is a second flow diagram of the server-based construction method for a 3D visual map provided by an embodiment of the present invention.
Step S202 comprises:
S2021: determining a mapping-conversion relationship according to matched feature elements in a first three-dimensional map and a second three-dimensional map.
The first three-dimensional map is constructed according to the feature elements of first images collected by a first terminal, and the second three-dimensional map is constructed according to the feature elements of second images collected by a second terminal.
S2022: applying, according to the mapping-conversion relationship, a coordinate change to at least part of the location points in the first three-dimensional map, to convert them into the second three-dimensional map and obtain the first fused three-dimensional map.
Before step S202, the method may further comprise:
S203: obtaining first location information of the first three-dimensional map and second location information of the second three-dimensional map.
S204: determining that the first location information is associated with the second location information.
After step S202, the method may further comprise:
S205: fusing the first fused three-dimensional map with another, third three-dimensional map to obtain a second fused three-dimensional map, the third three-dimensional map being determined according to third images collected by another, third terminal.
Optionally, the target pixel region is determined after the non-target pixel region and the target pixel region in the image are segmented by means of semantic segmentation, the non-target pixel region being the pixel region of the dynamic objects in the image.
Optionally, the dynamic objects include at least one of: people, animals, and movable objects; the feature elements include at least one of: feature points, feature lines, and feature marks.
Optionally, the three-dimensional map is constructed according to the three-dimensional position information of feature elements, the three-dimensional position information being obtained by calculating, through triangulation, the position in three-dimensional space of a same feature element appearing in different images.
The embodiment above corresponds to the embodiments shown in Fig. 2 to Fig. 6; its technical terms, means, and effects are similar, so it can be understood by reference to those embodiments and is not repeated here.
The method provided by this embodiment can determine, before feature elements are extracted, the target pixel region of the image other than dynamic objects. By excluding the corresponding portions from the map, map construction is not affected by these dynamic objects, which effectively reduces map construction error and improves map construction accuracy.
In optional solutions of the present invention, the strategy of building locally robust maps on terminals and fusing them on a server avoids expensive acquisition equipment and effectively saves cost.
In optional solutions of the present invention, corresponding map deletion and map update strategies can be carried out, guaranteeing the timeliness of the 3D visual map information.
Fig. 9 is a first structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 9, the construction apparatus 300 for a 3D visual map comprises:
an obtaining module 301, configured to obtain a first image, the first image being collected by a first terminal;
a region-of-interest determining module 302, configured to determine a target pixel region in the first image, the target pixel region being the pixel region of the first image other than dynamic objects, where a dynamic object denotes at least part of an object in the first image that is determined to be capable of changing position dynamically;
an element extraction module 303, configured to extract feature elements in the target pixel region; and
a map construction module 304, configured to construct a first three-dimensional map according to the feature elements of multiple first images collected at different times.
Optionally, the region-of-interest determining module 302 is specifically configured to:
segment the non-target pixel region and the target pixel region in the first image by means of semantic segmentation, so as to determine the target pixel region, the non-target pixel region being the pixel region of the dynamic objects in the first image.
Optionally, the dynamic objects include at least one of: people, animals, and movable objects; the feature elements include at least one of: feature points, feature lines, and feature marks.
Fig. 10 is a second structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 10, the Nth pose information of the Nth frame image among the multiple images differs from the Mth pose information of the Mth frame image; the Nth pose information characterizes the position and attitude of the first terminal when collecting the Nth frame image, and the Mth pose information characterizes the position and attitude of the first terminal when collecting the Mth frame image.
The apparatus further comprises:
a same-feature determining module 305, configured to determine a same feature element in the Nth frame image and the Mth frame image; and
a three-dimensional position information calculation module 306, configured to calculate, by triangulation, the position of the same feature element in three-dimensional space according to the Nth pose information and the Mth pose information, to obtain the three-dimensional position information of the same feature element.
The map construction module 304 may specifically be configured to construct the first three-dimensional map according to the three-dimensional position information of each feature element.
Fig. 11 is a third structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 11, the apparatus further comprises:
a sending module 307, configured to send the first three-dimensional map to a server, so that the server can fuse the first three-dimensional map with another, second three-dimensional map to obtain a first fused three-dimensional map, the second three-dimensional map being determined according to second images collected by another, second terminal.
The sending module 307 is further configured to send first location information of the first three-dimensional map to the server, so that the server fuses the first three-dimensional map and the second three-dimensional map when the first location information is associated with second location information of the second three-dimensional map.
Optionally, the first fused three-dimensional map is further fused with another, third three-dimensional map to obtain a second fused three-dimensional map, the third three-dimensional map being determined according to third images collected by another, third terminal.
Optionally, the first fused three-dimensional map is obtained by applying a coordinate change, according to a mapping-conversion relationship, to at least part of the location points in the first three-dimensional map and converting them into the second three-dimensional map; alternatively, it is obtained by applying a coordinate change, according to the mapping-conversion relationship, to at least part of the location points in the second three-dimensional map and converting them into the first three-dimensional map. The mapping-conversion relationship is determined according to matched feature elements in the first and second three-dimensional maps.
Fig. 12 is a fourth structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 12, the apparatus further comprises:
a location point determining module 308, configured to determine, after a new first image is obtained, the location points in the first three-dimensional map that need no processing and the location points to be processed; the location points that need no processing are those among the location points of the first three-dimensional map that are not visible in the new first image, together with those whose distance from the reference center of the first terminal is greater than a first threshold; and
an update/deletion module 309, configured to update or delete the location points to be processed, to obtain a new first three-dimensional map.
Fig. 13 is a fifth structural diagram of the terminal-based construction apparatus for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 13, the apparatus further comprises:
a projection module 309, configured to project the location points of the first three-dimensional map onto the plane of the new first image and determine the projected position corresponding to each location point;
a key point determining module 310, configured to determine at least one key point within a preset range around each projected position;
a Euclidean distance calculation module 311, configured to calculate the Euclidean distance between the descriptor of each location point and the descriptor of each corresponding key point;
a deletion candidate determining module 312, configured to determine, if all the Euclidean distances corresponding to a location point are greater than a standard value, that the location point is a candidate location point to be deleted, and to accumulate its statistics count once; and
a deletion determining module 313, configured to determine, according to the statistics count, whether to delete the corresponding candidate location point.
The embodiment above corresponds to the embodiments shown in Fig. 2 to Fig. 6; its technical terms, means, and effects are similar, so it can be understood by reference to those embodiments and is not repeated here.
The apparatus provided by this embodiment can determine, before feature elements are extracted, the target pixel region of the image other than dynamic objects. By excluding the corresponding portions from the map, map construction is not affected by these dynamic objects, which effectively reduces map construction error and improves map construction accuracy.
In optional solutions of the present invention, the strategy of building locally robust maps on terminals and fusing them on a server avoids expensive acquisition equipment and effectively saves cost.
In optional solutions of the present invention, corresponding map deletion and map update strategies can be carried out, guaranteeing the timeliness of the 3D visual map information.
Fig. 14 is a first structural diagram of the server-based construction apparatus for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 14, the construction apparatus 400 for a 3D visual map comprises:
a receiving module 401, configured to receive three-dimensional maps sent by different terminals.
Each three-dimensional map is constructed by the respective terminal according to the feature elements of images collected at different times; the feature elements of an image are extracted in the target region of the image, the target region being the pixel region of the image other than dynamic objects.
The apparatus further comprises a first fusion module 402, configured to fuse the three-dimensional maps sent by the terminals to obtain a first fused three-dimensional map.
Optionally, the first fusion module 402 is specifically configured to:
determine a mapping-conversion relationship according to matched feature elements in a first three-dimensional map and a second three-dimensional map, the first three-dimensional map being constructed according to the feature elements of first images collected by a first terminal, and the second three-dimensional map being constructed according to the feature elements of second images collected by a second terminal; and
apply, according to the mapping-conversion relationship, a coordinate change to at least part of the location points in the first three-dimensional map, to convert them into the second three-dimensional map and obtain the first fused three-dimensional map.
Fig. 15 is a second structural diagram of the server-based construction apparatus for a 3D visual map provided by an embodiment of the present invention.
Referring to Fig. 15, the apparatus further comprises:
a position information obtaining module 403, configured to obtain first location information of the first three-dimensional map and second location information of the second three-dimensional map; and
an association determining module 404, configured to determine that the first location information is associated with the second location information.
Optionally, the apparatus further comprises:
a second fusion module 405, configured to fuse the first fused three-dimensional map with another, third three-dimensional map to obtain a second fused three-dimensional map, the third three-dimensional map being determined according to third images collected by another, third terminal.
Optionally, the target pixel region is determined after the non-target pixel region and the target pixel region in the image are segmented by means of semantic segmentation, the non-target pixel region being the pixel region of the dynamic objects in the image.
Optionally, the dynamic objects include at least one of: people, animals, and movable objects; the feature elements include at least one of: feature points, feature lines, and feature marks.
The embodiment above corresponds to the embodiments shown in Fig. 7 to Fig. 8; its technical terms, means, and effects are similar, so it can be understood by reference to those embodiments and is not repeated here.
The apparatus provided by this embodiment can determine, before feature elements are extracted, the target pixel region of the image other than dynamic objects. By excluding the corresponding portions from the map, map construction is not affected by these dynamic objects, which effectively reduces map construction error and improves map construction accuracy.
In optional solutions of the present invention, the strategy of building locally robust maps on terminals and fusing them on a server avoids expensive acquisition equipment and effectively saves cost.
In optional solutions of the present invention, corresponding map deletion and map update strategies can be carried out, guaranteeing the timeliness of the 3D visual map information.
Fig. 16 is a structural diagram of the electronic device provided by an embodiment of the present invention.
Referring to Fig. 16, the electronic device 50 comprises a memory 52 and a processor 51, wherein:
the memory 52 is configured to store code and related data; and
the processor 51 is configured to execute the code in the memory to implement any of the foregoing method steps.
The memory 52 may communicate with the processor 51 through a bus 53.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely exemplary; for example, the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Moreover, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be completed by program instructions and the related hardware. The program may be stored in a computer-readable storage medium and executed by a processor inside a communication apparatus; when executed, it can perform all or part of the steps of the above method embodiments. The processor may be implemented as one or more processor chips, or may be part of one or more application-specific integrated circuits (ASICs); the storage medium may include, but is not limited to, the following kinds of media capable of storing program code: flash memory, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, or optical disks.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (20)

1. A construction method for a 3D visual map, characterized by comprising:
obtaining a first image, the first image being collected by a first terminal;
determining a target pixel region in the first image, the target pixel region being the pixel region of the first image other than dynamic objects, where a dynamic object denotes at least part of an object in the first image that is determined to be capable of changing position dynamically;
extracting feature elements in the target pixel region; and
constructing a first three-dimensional map according to the feature elements of multiple first images collected at different times.
2. The method according to claim 1, characterized in that determining a target pixel region in the first image comprises:
segmenting the non-target pixel region and the target pixel region in the first image by means of semantic segmentation, so as to determine the target pixel region, the non-target pixel region being the pixel region of the dynamic objects in the first image.
3. The method according to claim 1, characterized in that the dynamic objects include at least one of: people, animals, and movable objects; and the feature elements include at least one of: feature points, feature lines, and feature marks.
4. The method according to any one of claims 1 to 3, characterized in that, after constructing the first three-dimensional map according to the feature elements of multiple images collected at different times, the method further comprises:
sending the first three-dimensional map to a server, so that the server can fuse the first three-dimensional map with another, second three-dimensional map to obtain a first fused three-dimensional map, the second three-dimensional map being determined according to second images collected by another, second terminal.
5. The method according to claim 4, characterized by further comprising:
sending first location information of the first three-dimensional map to the server, so that the server fuses the first three-dimensional map and the second three-dimensional map when the first location information is associated with second location information of the second three-dimensional map.
6. The method according to claim 4, characterized in that the first fused three-dimensional map is further fused with another, third three-dimensional map to obtain a second fused three-dimensional map, the third three-dimensional map being determined according to third images collected by another, third terminal.
7. The method according to claim 4, wherein the first fused three-dimensional map is obtained by applying a coordinate change to at least some location points in the first three-dimensional map according to a mapping conversion relationship and converting those location points into the second three-dimensional map, or by applying a coordinate change to at least some location points in the second three-dimensional map according to the mapping conversion relationship and converting those location points into the first three-dimensional map, the mapping conversion relationship being determined from feature elements matched between the first three-dimensional map and the second three-dimensional map.
8. The method according to any one of claims 1 to 3, wherein Nth pose information of an Nth frame image in the multiple images differs from Mth pose information of an Mth frame image, the Nth pose information characterizing the position and attitude of the first terminal when the Nth frame image was acquired, and the Mth pose information characterizing the position and attitude of the first terminal when the Mth frame image was acquired;
before constructing the first three-dimensional map according to the feature elements of the multiple first images acquired at different moments, the method further comprises:
determining a same feature element in the Nth frame image and the Mth frame image; and
calculating, by means of triangulation and according to the Nth pose information and the Mth pose information, the position of the same feature element in three-dimensional space, to obtain three-dimensional location information of the same feature element;
and constructing the first three-dimensional map according to the feature elements of the multiple first images acquired at different moments comprises:
constructing the first three-dimensional map according to the three-dimensional location information of each feature element.
9. The method according to any one of claims 1 to 3, wherein after constructing the first three-dimensional map according to the feature elements of the multiple first images acquired at different moments, the method further comprises:
after a new first image is acquired, determining unprocessed location points and to-be-processed location points in the first three-dimensional map, the unprocessed location points being those location points of the first three-dimensional map that are not visible in the new first image and whose distance from a reference center of the first terminal is greater than a first threshold; and
updating or deleting the to-be-processed location points to obtain a new first three-dimensional map.
10. The method according to any one of claims 1 to 3, wherein after constructing the first three-dimensional map according to the feature elements of the multiple first images acquired at different moments, the method further comprises:
projecting the location points of the first three-dimensional map onto the plane of a new first image, and determining the projected position corresponding to each location point;
determining at least one key point within a preset range around each projected position;
calculating Euclidean distances between the descriptor of each location point and the descriptors of the corresponding at least one key point; and
if the Euclidean distances corresponding to a location point are all greater than a standard value, determining that the location point is a candidate point to be deleted, and accumulating a statistics count; and determining, according to the statistics count, whether to delete the corresponding candidate point to be deleted.
11. A method for constructing a 3D vision map, applied to a server, characterized by comprising:
receiving three-dimensional maps sent by different terminals, each three-dimensional map being constructed by the respective terminal according to feature elements of images acquired at different moments, the feature elements of an image being extracted in a target region of the image, the target region of the image being the pixel region of the image other than dynamic objects, the dynamic objects characterizing at least those objects in the image that are defined as capable of undergoing dynamic position changes; and
fusing the three-dimensional maps sent by the terminals to obtain a first fused three-dimensional map.
12. The method according to claim 11, wherein fusing the three-dimensional maps sent by the terminals to obtain a fused three-dimensional map comprises:
determining a mapping conversion relationship according to feature elements matched between a first three-dimensional map and a second three-dimensional map, the first three-dimensional map being constructed according to feature elements of first images acquired by a first terminal, and the second three-dimensional map being constructed according to feature elements of second images acquired by a second terminal; and
applying, according to the mapping conversion relationship, a coordinate change to at least some location points in the first three-dimensional map so as to convert those location points into the second three-dimensional map, thereby obtaining the first fused three-dimensional map.
13. The method according to claim 12, wherein before fusing the three-dimensional maps sent by the terminals to obtain the first fused three-dimensional map, the method further comprises:
obtaining first location information of the first three-dimensional map and second location information of the second three-dimensional map; and
determining that the first location information is associated with the second location information.
14. The method according to claim 11, wherein after fusing the three-dimensional maps sent by the terminals to obtain the first fused three-dimensional map, the method further comprises:
fusing the first fused three-dimensional map with another, third three-dimensional map to obtain a second fused three-dimensional map, the third three-dimensional map being determined from third images acquired by another, third terminal.
15. The method according to any one of claims 11 to 14, wherein the target pixel region is determined after segmenting the image into a non-target pixel region and the target pixel region by means of semantic segmentation, the non-target pixel region being the pixel region of the dynamic objects in the image.
16. The method according to any one of claims 11 to 14, wherein the dynamic objects include at least one of: persons, animals, and movable objects; and the feature elements include at least one of: feature points, feature lines, and feature markers.
17. The method according to any one of claims 11 to 14, wherein the three-dimensional map is constructed according to three-dimensional location information of the feature elements, the three-dimensional location information being obtained by calculating, by means of triangulation, the position in three-dimensional space of a same feature element appearing in different images.
18. An apparatus for constructing a 3D vision map, characterized by comprising:
an acquisition module, configured to acquire a first image, the first image being acquired by a first terminal;
an area-of-interest determination module, configured to determine a target pixel region in the first image, the target pixel region being the pixel region of the first image other than dynamic objects, the dynamic objects characterizing at least those objects in the first image that are defined as capable of undergoing dynamic position changes;
an element extraction module, configured to extract feature elements in the target pixel region; and
a map construction module, configured to construct a first three-dimensional map according to the feature elements of multiple first images acquired at different moments.
19. An apparatus for constructing a 3D vision map, applied to a server, characterized by comprising:
a receiving module, configured to receive three-dimensional maps sent by different terminals, each three-dimensional map being constructed by the respective terminal according to feature elements of images acquired at different moments, the feature elements of an image being extracted in a target region of the image, the target region of the image being the pixel region of the image other than dynamic objects; and
a first fusion module, configured to fuse the three-dimensional maps sent by the terminals to obtain a first fused three-dimensional map.
20. An electronic device, characterized by comprising a memory and a processor, wherein:
the memory is configured to store code and related data; and
the processor is configured to execute the code in the memory to implement the method steps of any one of claims 1 to 10, or to implement the method steps of any one of claims 11 to 17.
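The masking step of claims 1 to 3 (and the semantic segmentation of claims 2 and 15) does not prescribe any particular model or library. A minimal Python sketch follows, assuming OpenCV for feature extraction; the semantic_mask helper is a hypothetical stand-in for whatever segmentation network a real system would run.

import cv2
import numpy as np

def semantic_mask(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a semantic segmentation model.

    Returns a uint8 mask in which 255 marks pixels of dynamic classes
    (persons, animals, movable objects) and 0 marks everything else.
    A real system would run a segmentation network here.
    """
    return np.zeros(image.shape[:2], dtype=np.uint8)  # placeholder: no dynamic pixels

def extract_features_in_target_region(image: np.ndarray):
    # Target pixel region = the image minus the dynamic-object pixels.
    dynamic = semantic_mask(image)
    target_region = cv2.bitwise_not(dynamic)  # 255 where extraction is allowed

    # Feature points restricted to the target region via the detector mask.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(gray, mask=target_region)
    return keypoints, descriptors

Passing the inverted mask to the detector is what keeps feature elements off pedestrians and moving vehicles, which is the stated source of map-construction error.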
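The triangulation of claim 8, which recovers the 3D position of a feature element seen in the Nth and Mth frames from the two pose informations, can be sketched with OpenCV's standard routine. The calibrated intrinsics K and the world-to-camera [R|t] convention are assumptions; the claim fixes neither.

import cv2
import numpy as np

def triangulate_feature(K, pose_n, pose_m, pix_n, pix_m):
    """Triangulate one feature element observed in frame N and frame M.

    K             : 3x3 camera intrinsics (assumed calibrated and known).
    pose_n/pose_m : 3x4 [R|t] world-to-camera matrices, i.e. the Nth/Mth
                    pose information (position and attitude) of the terminal.
    pix_n/pix_m   : (u, v) pixel coordinates of the same feature element.
    Returns the feature's three-dimensional location information.
    """
    P_n = K @ np.asarray(pose_n, dtype=np.float64)
    P_m = K @ np.asarray(pose_m, dtype=np.float64)
    pts_n = np.asarray(pix_n, dtype=np.float64).reshape(2, 1)
    pts_m = np.asarray(pix_m, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_n, P_m, pts_n, pts_m)  # 4x1 homogeneous
    return (X_h[:3] / X_h[3]).ravel()

Repeating this over every matched feature element yields the per-element 3D location information from which the first three-dimensional map is assembled.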
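Claims 5 and 13 gate fusion on the first and second location information being "associated" without defining the association. One plausible, non-authoritative reading is a proximity test on the maps' geographic anchors; the haversine formula and the 50 m radius below are assumptions.

from math import asin, cos, radians, sin, sqrt

def locations_associated(loc1, loc2, max_distance_m=50.0):
    """Proximity test between two (latitude, longitude) anchors.

    Both the haversine distance and the association radius are
    assumptions; the claims leave the association criterion open.
    """
    (lat1, lon1), (lat2, lon2) = loc1, loc2
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    distance_m = 6371000.0 * 2 * asin(sqrt(a))  # mean-Earth-radius haversine
    return distance_m <= max_distance_m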
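For the mapping conversion relationship of claims 7 and 12, the patent only requires that it be determined from matched feature elements and that it carry location points of one map into the other. A rigid Kabsch fit over matched 3D points is one textbook way to realize this; a production system might instead estimate a full similarity transform with RANSAC over the matches.

import numpy as np

def fuse_maps(points_map1: np.ndarray, points_map2: np.ndarray):
    """Estimate a rigid mapping conversion from matched feature locations
    and express map-1 points in the coordinate frame of map 2.

    points_map1, points_map2 : (N, 3) arrays of matched 3D location points,
    row i of one corresponding to row i of the other.
    """
    c1, c2 = points_map1.mean(axis=0), points_map2.mean(axis=0)
    H = (points_map1 - c1).T @ (points_map2 - c2)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c2 - R @ c1
    return points_map1 @ R.T + t  # converted location points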
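The bookkeeping of claims 9 and 10 — project map points into the new image, compare descriptors by Euclidean distance against a standard value, accumulate a statistics count, and delete persistent misses — can be sketched as below. The threshold values and the project callable are assumptions not fixed by the claims.

import numpy as np

STANDARD_VALUE = 64.0  # assumed descriptor-distance "standard value"
MAX_MISSES = 5         # assumed statistics count before deletion

def update_deletion_candidates(map_points, map_descs, keypoints, kp_descs,
                               project, miss_counts, preset_range=8.0):
    """One round of the claimed candidate-deletion bookkeeping.

    map_points  : (M, 3) location points of the first three-dimensional map.
    map_descs   : (M, D) descriptors stored with those points.
    keypoints   : (K, 2) key points detected in the new first image.
    kp_descs    : (K, D) their descriptors.
    project     : callable mapping a 3D point to its 2D projected position.
    miss_counts : dict point-index -> accumulated statistics count (mutated).
    Returns the indices whose statistics count reached the deletion limit.
    """
    to_delete = []
    for i, (pt, desc) in enumerate(zip(map_points, map_descs)):
        uv = project(pt)
        near = np.linalg.norm(keypoints - uv, axis=1) < preset_range
        if not near.any():
            continue  # no key points in the preset range around the projection
        dists = np.linalg.norm(kp_descs[near].astype(np.float64)
                               - desc.astype(np.float64), axis=1)
        if dists.min() > STANDARD_VALUE:  # all Euclidean distances exceed it
            miss_counts[i] = miss_counts.get(i, 0) + 1
            if miss_counts[i] >= MAX_MISSES:
                to_delete.append(i)
    return to_delete

Requiring several consecutive misses before deletion is what keeps a single occluded frame from evicting a valid map point, matching the claim's "according to the statistics count" wording.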
CN201910173949.4A 2019-03-08 2019-03-08 Construction method, device and the electronic equipment of 3D vision map Pending CN109920055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910173949.4A CN109920055A (en) 2019-03-08 2019-03-08 Construction method, device and the electronic equipment of 3D vision map

Publications (1)

Publication Number Publication Date
CN109920055A true 2019-06-21

Family

ID=66963867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910173949.4A Pending CN109920055A (en) 2019-03-08 2019-03-08 Construction method, device and the electronic equipment of 3D vision map

Country Status (1)

Country Link
CN (1) CN109920055A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105869136A (en) * 2015-01-22 2016-08-17 北京雷动云合智能技术有限公司 Collaborative visual SLAM method based on multiple cameras
CN107564012A (en) * 2017-08-01 2018-01-09 中国科学院自动化研究所 Augmented reality method and device for unknown environments
CN108318043A (en) * 2017-12-29 2018-07-24 百度在线网络技术(北京)有限公司 Method and apparatus for updating an electronic map, and computer-readable storage medium
CN108489482A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 Implementation method and system of visual-inertial odometry
CN109387204A (en) * 2018-09-26 2019-02-26 东北大学 Simultaneous localization and mapping method for mobile robots in indoor dynamic environments

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288710A (en) * 2019-06-26 2019-09-27 Oppo广东移动通信有限公司 Three-dimensional map processing method, processing device and terminal device
CN110288710B (en) * 2019-06-26 2023-04-07 Oppo广东移动通信有限公司 Three-dimensional map processing method and device and terminal equipment
CN112150538B (en) * 2019-06-27 2024-04-12 北京初速度科技有限公司 Method and device for determining vehicle pose in three-dimensional map construction process
CN112150538A (en) * 2019-06-27 2020-12-29 北京初速度科技有限公司 Method and device for determining vehicle pose in three-dimensional map construction process
CN110298320B (en) * 2019-07-01 2021-06-22 北京百度网讯科技有限公司 Visual positioning method, device and storage medium
CN110298320A (en) * 2019-07-01 2019-10-01 北京百度网讯科技有限公司 Visual positioning method, device and storage medium
CN110363179A (en) * 2019-07-23 2019-10-22 联想(北京)有限公司 Map acquisition method and device, electronic device and storage medium
CN110555901A (en) * 2019-09-05 2019-12-10 亮风台(上海)信息科技有限公司 Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN110555901B (en) * 2019-09-05 2022-10-28 亮风台(上海)信息科技有限公司 Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN110660134A (en) * 2019-09-25 2020-01-07 Oppo广东移动通信有限公司 Three-dimensional map construction method, three-dimensional map construction device and terminal equipment
CN110660134B (en) * 2019-09-25 2023-05-30 Oppo广东移动通信有限公司 Three-dimensional map construction method, three-dimensional map construction device and terminal equipment
CN111724439A (en) * 2019-11-29 2020-09-29 中国科学院上海微***与信息技术研究所 Visual positioning method and device in dynamic scene
CN111724439B (en) * 2019-11-29 2024-05-17 中国科学院上海微***与信息技术研究所 Visual positioning method and device under dynamic scene
CN111126172B (en) * 2019-12-04 2022-11-18 江西洪都航空工业集团有限责任公司 Grassland autonomous mapping method based on vision
CN111126172A (en) * 2019-12-04 2020-05-08 江西洪都航空工业集团有限责任公司 Grassland autonomous mapping method based on vision
CN111177167B (en) * 2019-12-25 2024-01-19 Oppo广东移动通信有限公司 Augmented reality map updating method, device, system, storage and equipment
CN111177167A (en) * 2019-12-25 2020-05-19 Oppo广东移动通信有限公司 Augmented reality map updating method, device, system, storage and equipment
US11761788B2 (en) 2020-06-05 2023-09-19 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for generating offline map, electronic device and storage medium
CN111829535A (en) * 2020-06-05 2020-10-27 北京百度网讯科技有限公司 Method and device for generating offline map, electronic equipment and storage medium
WO2022095473A1 (en) * 2020-11-05 2022-05-12 广州小鹏自动驾驶科技有限公司 Method and apparatus for map fusion, server and storage medium
CN112308810A (en) * 2020-11-05 2021-02-02 广州小鹏自动驾驶科技有限公司 Map fusion method and device, server and storage medium
CN112308810B (en) * 2020-11-05 2022-05-13 广州小鹏自动驾驶科技有限公司 Map fusion method and device, server and storage medium
CN112967228A (en) * 2021-02-02 2021-06-15 中国科学院上海微***与信息技术研究所 Method and device for determining target optical flow information, electronic equipment and storage medium
CN112967228B (en) * 2021-02-02 2024-04-26 中国科学院上海微***与信息技术研究所 Determination method and device of target optical flow information, electronic equipment and storage medium
CN113160270A (en) * 2021-02-24 2021-07-23 广州视源电子科技股份有限公司 Visual map generation method, device, terminal and storage medium
CN113610991B (en) * 2021-10-09 2022-02-22 创泽智能机器人集团股份有限公司 Method and equipment for determining observation position based on three-dimensional map
CN113610991A (en) * 2021-10-09 2021-11-05 创泽智能机器人集团股份有限公司 Method and equipment for determining observation position based on three-dimensional map
CN116486029A (en) * 2023-04-26 2023-07-25 深圳市喜悦智慧数据有限公司 Three-dimensional holographic geographic data establishment method and device
CN116486029B (en) * 2023-04-26 2023-11-17 深圳市喜悦智慧数据有限公司 Three-dimensional holographic geographic data establishment method and device

Similar Documents

Publication Publication Date Title
CN109920055A (en) Construction method, device and the electronic equipment of 3D vision map
JP7190842B2 (en) Information processing device, control method and program for information processing device
CN110400363B (en) Map construction method and device based on laser point cloud
CN109447121B (en) Multi-target tracking method, device and system for visual sensor network
CN103703758B (en) mobile augmented reality system
EP2614487B1 (en) Online reference generation and tracking for multi-user augmented reality
US20200364509A1 (en) System and method for training a neural network for visual localization based upon learning objects-of-interest dense match regression
US20160267326A1 (en) Image abstraction system
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
CN107690840B Unmanned aerial vehicle vision-assisted navigation method and system
CN110568447A (en) Visual positioning method, device and computer readable medium
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
EP3274964B1 (en) Automatic connection of images using visual features
CN102959946A (en) Augmenting image data based on related 3d point cloud data
CN111553247B (en) Video structuring system, method and medium based on improved backbone network
CN109711274A Vehicle detection method, device, equipment and storage medium
CN104484814B Advertising method and system based on video map
CN113436338A (en) Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
CN111833457A (en) Image processing method, apparatus and storage medium
WO2022088819A1 (en) Video processing method, video processing apparatus and storage medium
CN109063549A Moving object detection method for high-resolution aerial video based on deep neural networks
CN113920263A (en) Map construction method, map construction device, map construction equipment and storage medium
Yu et al. Intelligent visual-IoT-enabled real-time 3D visualization for autonomous crowd management
WO2023015938A1 (en) Three-dimensional point detection method and apparatus, electronic device, and storage medium
CN112270709A (en) Map construction method and device, computer readable storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190621