CN108597036A - Virtual reality environment danger perception method and device - Google Patents

Virtual reality environment danger perception method and device

Info

Publication number
CN108597036A
CN108597036A (application CN201810412419.6A)
Authority
CN
China
Prior art keywords
frame
world coordinates
image
key frame
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810412419.6A
Other languages
Chinese (zh)
Other versions
CN108597036B (en)
Inventor
凌霄
谢启宇
杨辰
马俊青
黄耀清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201810412419.6A priority Critical patent/CN108597036B/en
Publication of CN108597036A publication Critical patent/CN108597036A/en
Application granted granted Critical
Publication of CN108597036B publication Critical patent/CN108597036B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/006 — Mixed reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/20 — Analysis of motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a virtual reality (VR) environment danger perception method and device. The method includes: extracting the feature points in each frame captured in real time by a VR camera; for the second and each subsequent frame, matching the feature points of the current frame against those of the previous frame and, from each matched pair, computing the motion vector of the VR camera between capturing the previous frame and capturing the current frame; from the computed motion vector, calculating the world coordinates of each feature point matched between the current and previous frames; using all feature points of the current frame with calculated world coordinates, detecting whether an object of concern is present in the current frame and, if so, calculating the distance between the object of concern and the current VR camera; and, if that Euclidean distance is below a preset threshold, issuing a danger alert to the user. The present invention can perceive dangers in a VR environment and issue alerts.

Description

Virtual reality environment danger perception method and device
Technical field
The present invention relates to the field of VR (Virtual Reality) technology, and more particularly to a VR environment danger perception method and device.
Background art
An analysis of how VR products are used today shows that, because users cannot perceive the real environment around them while using a VR product, they do not know whether their movements will create danger; they therefore move cautiously and dare not move freely.
As computing chips grow ever more powerful and rendering algorithms are continually refined, VR imagery looks increasingly realistic; yet against this background users still cannot fully enjoy the immersion. A method for perceiving the real environment and alerting the user is urgently needed.
Current VR applications merely capture the real scene with a camera and map it into the display; they lack any accurate perception of the real environment itself.
Summary of the invention
The present invention provides a VR environment danger perception method and device, to perceive dangers in a VR environment and alert the user.
The technical solution of the invention is realized as follows:
A virtual reality (VR) environment danger perception method, the method comprising:
for each frame captured in real time by a VR camera, extracting the feature points in that frame;
for the second and each subsequent frame captured by the VR camera, matching the feature points of the current frame against those of the previous frame, and computing, from the positions of each matched pair in the two frames, the motion vector of the VR camera between capturing the previous frame and capturing the current frame; and calculating, from the computed motion vector, the world coordinates of each feature point matched between the current frame and the previous frame;
using all feature points of the current frame whose world coordinates have been calculated, detecting whether an object of concern is present in the current frame; if so, computing the current world coordinates of the VR camera from its initial world coordinates and the motion vector of each captured frame relative to its previous frame, and calculating the distance between the object of concern and the current VR camera from the object's world coordinates in the current frame and the camera's current world coordinates; and, if that distance is below a preset threshold, issuing a danger alert to the user.
After calculating the world coordinates of each feature point matched between the current frame and the previous frame, the method further comprises:
storing the calculated world coordinates of the feature points in a local map description library and, at the same time, recording in the library the frame identifiers corresponding to each feature point;
and, after matching the feature points of the current frame against the previous frame and before computing, from the positions of each matched pair in the two frames, the motion vector of the VR camera between capturing the previous frame and capturing the current frame, the method further comprises:
judging whether the current frame satisfies one of the following keyframe conditions:
(1) the total number of keyframes in the keyframe set is below a first threshold;
(2) the ratio of the number of feature points of the current frame matched with the previous frame to the total number of feature points extracted from the current frame is below a second threshold;
if either condition is satisfied, judging the current frame to be a keyframe, adding its frame identifier to the keyframe set, and then executing the step of computing the motion vector from the positions of each matched pair; otherwise, judging the current frame non-key, discarding it, and going directly to the next frame.
When the current frame is judged to be a keyframe, the method further comprises, after calculating the world coordinates of each feature point matched between the current frame and the previous frame:
matching all feature points of the current frame with calculated world coordinates against the feature points with calculated world coordinates of each keyframe in the keyframe set, wherein two feature points match if their world coordinates are identical; if the match rate exceeds a preset third threshold, the current frame is considered redundant: it is not added to the keyframe set, the local map description library is not updated with its feature points, and processing goes to the next frame.
After calculating the world coordinates of each feature point matched between the current frame and the previous frame, the method further comprises:
forming a bag-of-words (BOW) vector from all feature points of the current frame with calculated world coordinates, and matching the BOW vector of the current frame against the BOW vector of each keyframe in the keyframe set; if it matches a keyframe, relocalization of the current frame is considered successful, i.e. the VR camera is considered to have been at the same location when capturing the current frame as when capturing the matched keyframe; the feature points originally extracted from the current frame are then discarded, the world coordinates of all feature points corresponding to the matched keyframe are looked up in the local map description library, the frame identifier of the current frame is added to the frame identifier list of each feature point found, and the current frame is not added to the keyframe set.
The keyframe conditions further comprise:
(3) the most recent relocalization of the current frame took longer than a preset fifth threshold.
The method further comprises:
when a preset loop-closure detection period elapses, for the keyframes in the keyframe set, computing the distance between the BOW vector of the newest keyframe and the BOW vector of each keyframe associated with it, and taking the associated keyframe at minimum distance as the candidate loop-closure frame of the newest keyframe, wherein two keyframes are considered associated when at least one pair of feature points in the two keyframes has identical world coordinates;
computing, from the world coordinates of the feature points on the newest keyframe and on the candidate loop-closure frame, the motion vector of the VR camera between capturing the candidate loop-closure frame and capturing the newest keyframe; treating the candidate loop-closure frame as the previous frame of the newest keyframe, matching the feature points of the two frames and, from the computed motion vector, recalculating the world coordinates of all matched feature points; updating, with the recalculated world coordinates, the world coordinates of all feature points of the newest keyframe in the local map description library; and adding, in the library, the frame identifier of the newest keyframe to all matched feature points.
Extracting the feature points in the image comprises: extracting the FAST (Features from Accelerated Segment Test) feature points in the image.
Issuing a danger alert to the user comprises:
duplicating preset mono alert audio data into left and right channels, and applying an FFT to the alert audio data of each channel to obtain frequency-domain alert audio data for the left and right channels;
determining the sound-source position of the alert audio from the world coordinates of the object of concern and of the VR camera, wherein the sound-source position is expressed in HRTF standard spatial location parameters, lies on the straight line between the VR camera and the object of concern, and is at a preset distance from the VR camera;
reading, according to the sound-source position of the alert audio, the corresponding HRTF transform data from an HRTF standard database, applying an FFT to the HRTF transform data to obtain frequency-domain HRTF transform data, multiplying the frequency-domain alert audio data of the left and right channels respectively by the frequency-domain HRTF transform data to obtain frequency-domain spatial audio data for the left and right channels, applying an IFFT to the frequency-domain spatial audio data of each channel to obtain time-domain spatial audio data for the left and right channels, and playing the data to the user through the left and right channels respectively.
Issuing a danger alert to the user comprises:
overlaying, on the 3D VR image in which the object of concern was detected, the contour formed by all feature points of the object whose world coordinates are known; or
overlaying, on the 2D image in which the object of concern was detected, an alert text message that includes the distance from the VR camera to the object of concern; or
displaying the 3D VR image of the detected object with the background removed, i.e. deleting the background outside the contour formed by the feature points with known world coordinates, so as to show only the VR camera, the user and the object of concern, wherein, when the object is displayed, its colour is tinted progressively according to its distance from the VR camera, from far to near.
A virtual reality (VR) environment danger perception device, the device comprising:
a feature extraction and computation module, configured to extract, for each frame captured in real time by a VR camera, the feature points in that frame; and, for the second and each subsequent frame captured by the VR camera, to match the feature points of the current frame against those of the previous frame, compute, from the positions of each matched pair in the two frames, the motion vector of the VR camera between capturing the previous frame and capturing the current frame, and calculate, from the computed motion vector, the world coordinates of each feature point matched between the current frame and the previous frame;
a danger detection module, configured to detect, using all feature points of the current frame whose world coordinates have been calculated, whether an object of concern is present in the current frame; if so, to compute the current world coordinates of the VR camera from its initial world coordinates and the motion vector of each captured frame relative to its previous frame, calculate the distance between the object of concern and the current VR camera from the object's world coordinates in the current frame and the camera's current world coordinates, and, if that distance is below a preset threshold, issue a danger alert to the user.
The feature extraction and computation module is further configured, after calculating the world coordinates of each feature point matched between the current frame and the previous frame, to:
store the calculated world coordinates of the feature points in a local map description library and, at the same time, record in the library the frame identifiers corresponding to each feature point;
and, after matching the feature points of the current frame against the previous frame and before computing, from the positions of each matched pair in the two frames, the motion vector of the VR camera between capturing the previous frame and capturing the current frame, to:
judge whether the current frame satisfies one of the following keyframe conditions:
(1) the total number of keyframes in the keyframe set is below a first threshold;
(2) the ratio of the number of feature points of the current frame matched with the previous frame to the total number of feature points extracted from the current frame is below a second threshold;
if either condition is satisfied, judge the current frame to be a keyframe, add its frame identifier to the keyframe set, and then execute the step of computing the motion vector from the positions of each matched pair; otherwise, judge the current frame non-key, discard it, and go directly to the next frame.
When the current frame is judged to be a keyframe, the feature extraction and computation module is further configured, after calculating the world coordinates of each feature point matched between the current frame and the previous frame, to:
match all feature points of the current frame with calculated world coordinates against the feature points with calculated world coordinates of each keyframe in the keyframe set, wherein two feature points match if their world coordinates are identical; if the match rate exceeds a preset third threshold, consider the current frame redundant: do not add it to the keyframe set, do not update the local map description library with its feature points, and go to the next frame.
The feature extraction and computation module is further configured, after calculating the world coordinates of each feature point matched between the current frame and the previous frame, to:
form a bag-of-words (BOW) vector from all feature points of the current frame with calculated world coordinates and match it against the BOW vector of each keyframe in the keyframe set; if it matches a keyframe, consider relocalization of the current frame successful, i.e. consider the VR camera to have been at the same location when capturing the current frame as when capturing the matched keyframe; then discard the feature points originally extracted from the current frame, look up in the local map description library the world coordinates of all feature points corresponding to the matched keyframe, add the frame identifier of the current frame to the frame identifier list of each feature point found, and do not add the current frame to the keyframe set.
The keyframe conditions judged by the feature extraction and computation module further comprise:
(3) the most recent relocalization of the current frame took longer than a preset fifth threshold.
The feature extraction and computation module is further configured to:
when a preset loop-closure detection period elapses, for the keyframes in the keyframe set, compute the distance between the BOW vector of the newest keyframe and the BOW vector of each keyframe associated with it, and take the associated keyframe at minimum distance as the candidate loop-closure frame of the newest keyframe, wherein two keyframes are considered associated when at least one pair of feature points in the two keyframes has identical world coordinates;
compute, from the world coordinates of the feature points on the newest keyframe and on the candidate loop-closure frame, the motion vector of the VR camera between capturing the candidate loop-closure frame and capturing the newest keyframe; treat the candidate loop-closure frame as the previous frame of the newest keyframe, match the feature points of the two frames and, from the computed motion vector, recalculate the world coordinates of all matched feature points; update, with the recalculated world coordinates, the world coordinates of all feature points of the newest keyframe in the local map description library; and add, in the library, the frame identifier of the newest keyframe to all matched feature points.
The feature points extracted from the image by the feature extraction and computation module are FAST (Features from Accelerated Segment Test) feature points.
The danger detection module issues a danger alert to the user by:
duplicating preset mono alert audio data into left and right channels, and applying an FFT to the alert audio data of each channel to obtain frequency-domain alert audio data for the left and right channels;
determining the sound-source position of the alert audio from the world coordinates of the object of concern and of the VR camera, wherein the sound-source position is expressed in HRTF standard spatial location parameters, lies on the straight line between the VR camera and the object of concern, and is at a preset distance from the VR camera;
reading, according to the sound-source position of the alert audio, the corresponding HRTF transform data from an HRTF standard database, applying an FFT to the HRTF transform data to obtain frequency-domain HRTF transform data, multiplying the frequency-domain alert audio data of the left and right channels respectively by the frequency-domain HRTF transform data to obtain frequency-domain spatial audio data for the left and right channels, applying an IFFT to the frequency-domain spatial audio data of each channel to obtain time-domain spatial audio data for the left and right channels, and playing the data to the user through the left and right channels respectively.
The danger detection module issues a danger alert to the user by:
overlaying, on the 3D VR image in which the object of concern was detected, the contour formed by all feature points of the object whose world coordinates are known; or
overlaying, on the 2D image in which the object of concern was detected, an alert text message that includes the distance from the VR camera to the object of concern; or
displaying the 3D VR image of the detected object with the background removed, i.e. deleting the background outside the contour formed by the feature points with known world coordinates, so as to show only the VR camera, the user and the object of concern, wherein, when the object is displayed, its colour is tinted progressively according to its distance from the VR camera, from far to near.
The present invention thus realizes danger perception and alerting in a VR environment.
Description of the drawings
Fig. 1 is a flowchart of the VR environment danger perception method provided by one embodiment of the invention;
Fig. 2 is a flowchart of the VR environment danger perception method provided by another embodiment of the invention;
Fig. 3 is an example, provided by the invention, of a danger alert given in 3D-image form;
Fig. 4 is a structural diagram of the VR environment danger perception device provided by an embodiment of the invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of the VR environment danger perception method provided by one embodiment of the invention; the steps are as follows:
Step 101: for each frame captured in real time by the VR camera, extract the feature points in the frame.
Step 102: for the second and each subsequent frame captured by the VR camera, match the feature points of the current frame against those of the previous frame.
Step 103: from the positions of each matched pair in the two frames, compute the motion vector of the VR camera between capturing the previous frame and capturing the current frame.
Step 104: from the computed motion vector, calculate the world coordinates of each feature point matched between the current frame and the previous frame.
Step 105: using all feature points of the current frame whose world coordinates have been calculated, detect whether an object of concern is present in the current frame; if so, compute the current world coordinates of the VR camera from its initial world coordinates and the motion vector of each captured frame relative to its previous frame, calculate the distance between the object of concern and the current VR camera from the object's world coordinates and the camera's current world coordinates, and, if that distance is below a preset threshold, issue a danger alert to the user.
Fig. 2 is a flowchart of the VR environment danger perception method provided by another embodiment of the invention; the steps are as follows:
Step 201: the VR camera captures images in real time.
Step 202: for each frame captured by the VR camera, extract the FAST (Features from Accelerated Segment Test) feature points in the frame.
A FAST feature point is a pixel whose grey value is distinctly brighter or darker than that of the pixels around it. For example: preset a threshold, compare the grey value of each pixel in the image with the grey values of a preset number of surrounding pixels, and if the absolute grey difference with every one of those surrounding pixels exceeds the threshold, determine the pixel to be a FAST feature point.
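As an illustration of Step 202, the following minimal sketch uses OpenCV's built-in FAST detector; the threshold value and the use of OpenCV are assumptions of this sketch, not requirements of the patent.
```python
import cv2

def extract_fast_keypoints(frame_bgr, threshold=20):
    """Detect FAST corners in a BGR frame; returns a list of cv2.KeyPoint."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.FastFeatureDetector_create(threshold=threshold,
                                              nonmaxSuppression=True)
    return detector.detect(gray, None)
```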
Step 203: when the second frame is captured, match the FAST feature points of the second frame against those of the first frame and, from the positions of each matched pair in the two frames, compute by the principle of epipolar geometry the translation and rotation of the VR camera between capturing the first frame and capturing the second frame.
Matching FAST feature points is a mature technique and is not described further here.
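A hedged sketch of Step 203, assuming the matched pixel coordinates and the camera intrinsic matrix K are available; it recovers the rotation and translation by epipolar geometry with OpenCV.
```python
import cv2

def relative_pose(pts_prev, pts_curr, K):
    """Estimate the camera's R, t between two frames from matched points.

    pts_prev, pts_curr: Nx2 float arrays of matched pixel coordinates;
    K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K)
    return R, t  # rotation matrix and unit-scale translation vector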
Step 204: by the principle of triangulation, and using the computed translation and rotation, calculate the world coordinates of each successfully matched FAST feature point in the world coordinate system, and at the same time calculate and store the initial world coordinates of the VR camera.
The origin of the world coordinate system is in fact the initial optical centre of the VR camera; the X and Y axes are parallel to the horizontal and vertical edges of the VR camera lens respectively, and the Z axis is perpendicular to the lens.
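Continuing the sketch, Step 204's triangulation can be illustrated as follows, taking the first camera pose as the world origin and reusing the R, t produced by the previous sketch.
```python
import cv2
import numpy as np

def triangulate(pts_prev, pts_curr, K, R, t):
    """Triangulate matched points; the previous camera pose is the origin."""
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # previous frame pose
    P1 = K @ np.hstack([R, t])                         # current frame pose
    pts4d = cv2.triangulatePoints(P0, P1, pts_prev.T, pts_curr.T)
    return (pts4d[:3] / pts4d[3]).T                    # Nx3 world coordinates
```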
Step 205: store the world coordinates of the FAST feature points matched between the first and second frames in the local map description library and, at the same time, record in the library the frame identifiers corresponding to each matched FAST feature point: the identifiers of the first and second frames.
For example: m FAST feature points are extracted from the first frame, n from the second, and p pairs (p ≤ m, p ≤ n) match between the two frames. The two points of each matched pair share the same 3D world coordinates, so the 3D world coordinates of the p matched FAST feature points are stored in the local map description library, each of the p points being associated with both the first and the second frame.
Step 206: for the third and each subsequent frame, match the FAST feature points of the current frame against those of the previous frame and, from the positions of each matched pair in the two frames, compute by the principle of epipolar geometry the translation and rotation of the VR camera between capturing the previous frame and capturing the current frame.
Step 207: by triangulation, and using the computed translation and rotation, calculate the world coordinates of each FAST feature point that matched between the current frame and the previous frame but is not yet in the local map description library.
Some of the FAST feature points matched with the previous frame may already have had their world coordinates calculated when an earlier frame was processed, i.e. they are already present in the local map description library; their world coordinates need not be recalculated.
Step 208: store the world coordinates of the newly calculated FAST feature points in the local map description library and, at the same time, record the frame identifiers corresponding to each point: the identifiers of the current and previous frames.
For each FAST feature point that matched with the previous frame but already exists in the local map description library, the identifier of the current frame must also be appended to that point's frame identifier list in the library.
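The patent does not fix a data structure for the local map description library; the following is a hedged sketch, assuming a simple dictionary keyed by a feature-point identifier, of how Steps 205-208 store world coordinates and frame identifiers.
```python
# Local map description library: world coordinates plus observing frames.
local_map = {}  # point_id -> {"xyz": (x, y, z), "frames": [frame ids]}

def record_point(point_id, xyz, frame_id):
    """Store a point's world coordinates and append the observing frame's id."""
    entry = local_map.setdefault(point_id, {"xyz": xyz, "frames": []})
    if frame_id not in entry["frames"]:
        entry["frames"].append(frame_id)
```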
Step 209: using all FAST feature points of the current frame (including the first frame and subsequent frames) whose world coordinates are stored in the local map description library, detect whether an object of concern (for example an edge or an obstacle) is present in the current frame; if so, compute the current world coordinates of the VR camera from its initial world coordinates and the translation and rotation of each captured frame relative to its previous frame, calculate the Euclidean distance between the object of concern and the current VR camera from the object's world coordinates in the current frame and the camera's current world coordinates, and, if that Euclidean distance is below a preset threshold, confirm that an alert must be issued to the user.
When calculating the Euclidean distance between the object of concern and the current VR camera, the Euclidean distance between the camera and the point of the object nearest to it may be used.
Objects of concern such as edges and obstacles can be detected by existing image detection methods such as edge detection or obstacle detection.
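A minimal sketch of the distance check in Step 209, assuming the object of concern is represented by the world coordinates of its feature points.
```python
import numpy as np

def danger_check(camera_pos, object_points, threshold):
    """camera_pos: (3,) array; object_points: Nx3 array of world coordinates.

    Returns (alert_needed, nearest_distance)."""
    dists = np.linalg.norm(object_points - camera_pos, axis=1)
    nearest = float(dists.min())  # nearest point of the object to the camera
    return nearest < threshold, nearest
```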
In practical applications, as the VR camera captures more and more frames, the total number of extracted FAST feature points keeps growing, and with it the number of world coordinates stored in the local map description library. To save storage, the following scheme is provided:
judge whether the current frame satisfies one of the following two conditions; if so, judge the current frame to be a keyframe, add its frame identifier to the keyframe set, and continue the subsequent processing of the current frame; otherwise judge it non-key, discard it and go directly to the next frame (a minimal sketch of this decision follows the two conditions):
(1) the total number of keyframes in the keyframe set is below a first threshold;
(2) the ratio of the number of FAST feature points of the current frame matched with the previous frame to the total number of FAST feature points extracted from the current frame is below a second threshold.
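The sketch below illustrates the keyframe decision; the concrete threshold values are illustrative assumptions, since the patent leaves them as preset parameters.
```python
def is_keyframe(num_keyframes, num_matched, num_extracted,
                first_threshold=50, second_threshold=0.5):
    """Condition 1: keyframe set not yet full; condition 2: low match ratio."""
    if num_keyframes < first_threshold:
        return True
    return num_extracted > 0 and num_matched / num_extracted < second_threshold
```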
To save storage further, the invention also proposes the following optimization:
after the current frame is judged to be a keyframe, the method further comprises: matching, by world coordinates, the FAST feature points of the current frame against the FAST feature points of each keyframe in the keyframe set; if the match rate exceeds a preset third threshold, the current frame is considered redundant and receives no further processing (that is, it is not added to the keyframe set and the local map description library is not updated with its FAST feature points); processing goes directly to the next frame. Here, two FAST feature points match if their world coordinates are identical.
Furthermore, considering the estimation error of translation and rotation, the invention proposes the following relocalization process:
form a BOW (Bag-of-Words) vector from all FAST feature points of the current frame with calculated world coordinates, then match the BOW vector of the current frame against the BOW vector of each keyframe in the keyframe set; if it matches a keyframe, relocalization of the current frame is considered successful, i.e. the VR camera is considered to have been at the same location when capturing the current frame as when capturing the matched keyframe; the FAST feature points originally extracted from the current frame are then discarded and all FAST feature points of the matched keyframe are used directly as the feature points of the current frame, i.e. the world coordinates of all FAST feature points corresponding to the matched keyframe are looked up in the local map description library and the frame identifier of the current frame is added to the frame identifier list of each point found; because the current frame now matches the matched keyframe exactly, it is not added to the keyframe set. If the current frame matches no keyframe, no special handling is applied.
Here, two BOW vectors match if the distance between them is below a preset fourth threshold.
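A hedged sketch of the relocalization test; the patent leaves the BOW distance metric open, so plain Euclidean distance is assumed here.
```python
import numpy as np

def find_relocalization(bow_current, keyframe_bows, fourth_threshold):
    """Return the id of the first keyframe whose BOW vector lies within the
    fourth threshold of the current frame's BOW vector, else None."""
    for frame_id, bow in keyframe_bows.items():
        if np.linalg.norm(bow_current - bow) < fourth_threshold:
            return frame_id  # relocalized against this keyframe
    return None              # no match: no special handling
```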
In addition, the following condition may also be used when judging whether the current frame is a keyframe:
the most recent relocalization took longer than a preset fifth threshold.
To further eliminate the estimation error of translation and rotation, the invention also proposes the following scheme:
preset a loop-closure detection period; when the period elapses, for the keyframes in the keyframe set, compute the distance between the BOW vector of the newest keyframe and the BOW vector of each keyframe associated with it, and take the associated keyframe at minimum distance as the candidate loop-closure frame of the newest keyframe, wherein two keyframes are considered associated when they contain at least one pair of matched FAST feature points (i.e. a pair with identical world coordinates);
from the world coordinates of the FAST feature points on the newest keyframe and on the candidate loop-closure frame, compute the translation and rotation of the VR camera between capturing the candidate loop-closure frame and capturing the newest keyframe; treat the candidate loop-closure frame as the previous frame of the newest keyframe, match the FAST feature points of the two frames and, from the computed translation and rotation, recalculate the world coordinates of all matched FAST feature points; update, with the recalculated world coordinates, the world coordinates of all FAST feature points of the newest keyframe in the local map description library; and add, in the library, the frame identifier of the newest keyframe to all matched FAST feature points.
In the present invention, the user may be alerted by sound or/and by image.
When sound is used, the concrete scheme may be as follows:
Step 01: duplicate preset mono alert audio data into left and right channels and apply an FFT (Fast Fourier Transform) to the alert audio data of each channel, obtaining frequency-domain alert audio data for the left and right channels.
Step 02: from the world coordinates of the object of concern (the world coordinates of its point nearest the VR camera may be used) and the world coordinates of the VR camera, determine the sound-source position of the alert audio, expressed in HRTF (Head-Related Transfer Function) standard spatial location parameters.
The sound-source position of the alert audio lies on the straight line between the VR camera and the object of concern (represented by its point nearest the camera), and the distance between the sound-source position and the VR camera may be preset.
Step 03: according to the sound-source position of the alert audio, read the corresponding HRTF transform data from an HRTF standard database and apply an FFT to obtain frequency-domain HRTF transform data; multiply the frequency-domain alert audio data of the left and right channels respectively by the frequency-domain HRTF transform data to obtain frequency-domain spatial audio data for the left and right channels; apply an IFFT to the frequency-domain spatial audio data of each channel to obtain time-domain spatial audio data for the left and right channels; and play the data to the user through the left and right channels respectively.
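A minimal sketch of Steps 01-03, assuming the left- and right-ear HRTF impulse responses for the computed sound-source position have already been read from the database; multiplying in the frequency domain is equivalent to (circular) convolution with the HRTF.
```python
import numpy as np

def spatialize_alert(mono_alert, hrtf_left, hrtf_right):
    """All arguments are 1-D time-domain arrays; returns left/right channels."""
    n = len(mono_alert)
    spectrum = np.fft.fft(mono_alert)                # Step 01, both channels
    left = np.fft.ifft(spectrum * np.fft.fft(hrtf_left, n)).real   # Step 03
    right = np.fft.ifft(spectrum * np.fft.fft(hrtf_right, n)).real
    return left, right
```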
The user may also be alerted by a vibration motor.
When an image alert is used, the concrete scheme may be as follows:
overlay, on the VR (3D) image in which the object of concern was detected, the contour formed by all FAST feature points of the object whose world coordinates are known, as shown in Fig. 3, where the left image is the original VR image without an image alert and the right image is the VR image with the alert: contours are drawn at the corners and similar positions;
alternatively, overlay an alert text message on the 2D image in which the object of concern was detected; the message may be the distance from the VR camera to the object of concern;
alternatively, display the VR image of the detected object with the background removed, i.e. delete the background outside the contour formed by the FAST feature points with known world coordinates, so as to show only the VR camera, the user and the object of concern; when the object is displayed, its colour may be tinted progressively according to its distance from the VR camera, from far to near: the farther the object from the VR camera, the more transparent it is; the nearer, the less transparent (a sketch of this tinting rule follows).
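A minimal sketch of the progressive tinting rule in the third image-alert option; max_distance, the range at which the object becomes fully transparent, is an illustrative assumption.
```python
def object_alpha(distance, max_distance):
    """Opacity in [0, 1]: near objects opaque, far objects transparent."""
    return max(0.0, min(1.0, 1.0 - distance / max_distance))
```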
Fig. 4 is a structural diagram of the VR environment danger perception device provided by an embodiment of the invention. The device mainly comprises a feature extraction and computation module 41 and a danger detection module 42, wherein:
the feature extraction and computation module 41 extracts, for each frame captured in real time by the VR camera, the FAST feature points in that frame; for the second and each subsequent frame captured by the VR camera, it matches the FAST feature points of the current frame against those of the previous frame, computes, from the positions of each matched pair in the two frames, the translation and rotation of the VR camera between capturing the previous frame and capturing the current frame, and calculates, from the computed translation and rotation, the world coordinates of each matched FAST feature point;
the danger detection module 42 uses the world coordinates of the FAST feature points of the current frame, as calculated by the feature extraction and computation module 41, to detect whether an object of concern is present in the current frame; if so, it computes the current world coordinates of the VR camera from its initial world coordinates and the translation and rotation of each captured frame relative to its previous frame, calculates the Euclidean distance between the object of concern and the current VR camera from the object's world coordinates in the current frame and the camera's current world coordinates, and, if that Euclidean distance is below a preset threshold, issues a danger alert to the user.
In practical applications, the feature extraction and computation module 41 is further configured, after calculating the world coordinates of each FAST feature point matched between the current frame and the previous frame, to store the calculated world coordinates in the local map description library and, at the same time, record in the library the frame identifiers corresponding to each FAST feature point;
and, after matching the FAST feature points of the current frame against the previous frame and before computing, from the positions of each matched pair in the two frames, the translation and rotation of the VR camera between capturing the previous frame and capturing the current frame, to judge whether the current frame satisfies one of the following keyframe conditions:
(1) the total number of keyframes in the keyframe set is below a first threshold;
(2) the ratio of the number of FAST feature points of the current frame matched with the previous frame to the total number of FAST feature points extracted from the current frame is below a second threshold;
if either condition is satisfied, the module judges the current frame to be a keyframe, adds its frame identifier to the keyframe set and then executes the step of computing the translation and rotation from the positions of each matched pair; otherwise it judges the current frame non-key, discards it and goes directly to the next frame.
In practical applications, when the current frame is judged to be a keyframe, the feature extraction and computation module 41 is further configured, after calculating the world coordinates of each FAST feature point matched between the current frame and the previous frame, to match all FAST feature points of the current frame with calculated world coordinates against the FAST feature points with calculated world coordinates of each keyframe in the keyframe set; if the match rate exceeds a preset third threshold, the current frame is considered redundant: it is not added to the keyframe set, the local map description library is not updated with its FAST feature points, and processing goes to the next frame; here, two FAST feature points match if their world coordinates are identical.
In practical applications, the feature extraction and computation module 41 is further configured, after calculating the world coordinates of each FAST feature point matched between the current frame and the previous frame, to form a bag-of-words (BOW) vector from all FAST feature points of the current frame with calculated world coordinates and match it against the BOW vector of each keyframe in the keyframe set; if it matches a keyframe, relocalization of the current frame is considered successful, i.e. the VR camera is considered to have been at the same location when capturing the current frame as when capturing the matched keyframe; the FAST feature points originally extracted from the current frame are then discarded, the world coordinates of all FAST feature points corresponding to the matched keyframe are looked up in the local map description library, the frame identifier of the current frame is added to the frame identifier list of each point found, and the current frame is not added to the keyframe set.
In practical applications, the keyframe conditions judged by the feature extraction and computation module 41 further comprise: the most recent relocalization of the current frame took longer than a preset fifth threshold.
In practical applications, the feature extraction and computation module 41 is further configured, when a preset loop-closure detection period elapses, for the keyframes in the keyframe set, to compute the distance between the BOW vector of the newest keyframe and the BOW vector of each keyframe associated with it, and take the associated keyframe at minimum distance as the candidate loop-closure frame of the newest keyframe, wherein two keyframes are considered associated when at least one pair of FAST feature points in the two keyframes has identical world coordinates; to compute, from the world coordinates of the FAST feature points on the newest keyframe and on the candidate loop-closure frame, the translation and rotation of the VR camera between capturing the candidate loop-closure frame and capturing the newest keyframe; to treat the candidate loop-closure frame as the previous frame of the newest keyframe, match the FAST feature points of the two frames and, from the computed translation and rotation, recalculate the world coordinates of all matched FAST feature points; to update, with the recalculated world coordinates, the world coordinates of all FAST feature points of the newest keyframe in the local map description library; and to add, in the library, the frame identifier of the newest keyframe to all matched FAST feature points.
In practical applications, the danger detection module 42 issues a danger alert to the user by:
duplicating preset mono alert audio data into left and right channels, and applying an FFT to the alert audio data of each channel to obtain frequency-domain alert audio data for the left and right channels;
determining the sound-source position of the alert audio from the world coordinates of the object of concern and of the VR camera, wherein the sound-source position is expressed in HRTF standard spatial location parameters, lies on the straight line between the VR camera and the object of concern, and is at a preset distance from the VR camera;
reading, according to the sound-source position of the alert audio, the corresponding HRTF transform data from an HRTF standard database, applying an FFT to the HRTF transform data to obtain frequency-domain HRTF transform data, multiplying the frequency-domain alert audio data of the left and right channels respectively by the frequency-domain HRTF transform data to obtain frequency-domain spatial audio data for the left and right channels, applying an IFFT to the frequency-domain spatial audio data of each channel to obtain time-domain spatial audio data for the left and right channels, and playing the data to the user through the left and right channels respectively.
In practical applications, the danger detection module 42 issues a danger alert to the user by:
overlaying, on the 3D VR image in which the object of concern was detected, the contour formed by all FAST feature points of the object whose world coordinates are known; alternatively, overlaying, on the 2D image in which the object of concern was detected, an alert text message that includes the distance from the VR camera to the object of concern; alternatively, displaying the 3D VR image of the detected object with the background removed, i.e. deleting the background outside the contour formed by the FAST feature points with known world coordinates, so as to show only the VR camera, the user and the object of concern, wherein, when the object is displayed, its colour is tinted progressively according to its distance from the VR camera, from far to near.
The above device may be located in VR equipment.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (18)

1. A virtual reality (VR) environment danger perception method, characterized in that the method comprises:
for each frame captured in real time by a VR camera, extracting the feature points in that frame;
for the second and each subsequent frame captured by the VR camera, matching the feature points of the current frame against those of the previous frame, and computing, from the positions of each matched pair in the two frames, the motion vector of the VR camera between capturing the previous frame and capturing the current frame; and calculating, from the computed motion vector, the world coordinates of each feature point matched between the current frame and the previous frame;
using all feature points of the current frame whose world coordinates have been calculated, detecting whether an object of concern is present in the current frame; if so, computing the current world coordinates of the VR camera from its initial world coordinates and the motion vector of each captured frame relative to its previous frame, and calculating the distance between the object of concern and the current VR camera from the object's world coordinates in the current frame and the camera's current world coordinates; and, if that distance is below a preset threshold, issuing a danger alert to the user.
2. The method according to claim 1, characterized in that, after calculating the world coordinates of each feature point matched between the current frame and the previous frame, the method further comprises:
storing the calculated world coordinates of the feature points in a local map description library and, at the same time, recording in the library the frame identifiers corresponding to each feature point;
and, after matching the feature points of the current frame against the previous frame and before computing, from the positions of each matched pair in the two frames, the motion vector of the VR camera between capturing the previous frame and capturing the current frame, the method further comprises:
judging whether the current frame satisfies one of the following keyframe conditions:
(1) the total number of keyframes in the keyframe set is below a first threshold;
(2) the ratio of the number of feature points of the current frame matched with the previous frame to the total number of feature points extracted from the current frame is below a second threshold;
if either condition is satisfied, judging the current frame to be a keyframe, adding its frame identifier to the keyframe set, and then executing the step of computing the motion vector from the positions of each matched pair; otherwise, judging the current frame non-key, discarding it, and going directly to the next frame.
3. The method according to claim 2, characterized in that, when the current frame is judged to be a keyframe, the method further comprises, after calculating the world coordinates of each feature point matched between the current frame and the previous frame:
matching all feature points of the current frame with calculated world coordinates against the feature points with calculated world coordinates of each keyframe in the keyframe set; if the match rate exceeds a preset third threshold, considering the current frame redundant: not adding it to the keyframe set, not updating the local map description library with its feature points, and going to the next frame, wherein two feature points match if their world coordinates are identical.
4. The method according to claim 2, wherein, after calculating the world coordinates of each successfully matched feature point in the current frame image and the previous frame image, the method further comprises:
composing a bag-of-words (BOW) vector from all feature points of the current frame whose world coordinates have been calculated, and matching the BOW vector of the current frame against the BOW vector of each key frame in the key frame set; if the match with a key frame succeeds, considering the relocalization of the current frame successful, that is, the location of the VR camera when acquiring the current frame is taken to be identical to its location when it acquired the matched key frame; then discarding the feature points originally extracted from the current frame, finding in the local map description library the world coordinates of all feature points corresponding to the matched key frame, adding the frame identification of the current frame to the frame identification list of the world coordinates of each found feature point, and not putting the current frame into the key frame set.
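[Editor's illustration] A minimal BOW sketch, assuming a precomputed visual vocabulary (codebook) and a cosine-similarity threshold; production systems typically use a hierarchical vocabulary such as DBoW2, and binary descriptors would be quantized by Hamming rather than Euclidean distance as done here for simplicity.

    import numpy as np

    def bow_vector(descriptors, vocabulary):
        # Assign each descriptor to its nearest visual word, then histogram.
        d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
        hist = np.bincount(d.argmin(axis=1), minlength=len(vocabulary)).astype(float)
        n = np.linalg.norm(hist)
        return hist / n if n else hist

    def relocalize(cur_bow, keyframe_bows, sim_threshold=0.85):
        # Return the id of the first key frame whose BOW vector matches, else None.
        for kf_id, kf_bow in keyframe_bows.items():
            if float(cur_bow @ kf_bow) > sim_threshold:
                return kf_id
        return None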
5. The method according to claim 4, wherein the key frame decision conditions further comprise:
(3) the time elapsed since the current frame's last relocalization exceeds a preset fifth threshold.
6. The method according to claim 2, wherein the method further comprises:
when a preset closed-loop detection cycle arrives, for the key frames in the key frame set, calculating the distance between the BOW vector of the newest key frame and the BOW vector of each key frame associated with it, and taking the associated key frame with the minimum distance as the candidate loop-closure frame of the newest key frame, wherein two key frames are considered associated when the world coordinates of at least one pair of feature points in the two key frames are identical;
calculating, according to the world coordinates of the feature points on the newest key frame and the candidate loop-closure frame, the motion vector of the VR camera when acquiring the newest key frame relative to when it acquired the candidate loop-closure frame; taking the candidate loop-closure frame as the previous frame of the newest key frame, matching the feature points of the two frames, recalculating the world coordinates of all successfully matched feature points according to the calculated motion vector, updating, with the recalculated world coordinates, the world coordinates of all feature points of the newest key frame in the local map description library, and adding the frame identification of the newest key frame to the frame identifications corresponding to all successfully matched feature points in the local map description library.
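[Editor's illustration] The candidate loop-closure selection can be sketched as below; the data layout is an assumption, with world_points mapping each key frame id to a set of quantized world coordinates, so that "associated" means sharing at least one world point.

    import numpy as np

    def candidate_loop_frame(newest_id, bows, world_points):
        newest_pts = world_points[newest_id]
        best_id, best_dist = None, np.inf
        for kf_id, pts in world_points.items():
            if kf_id == newest_id or newest_pts.isdisjoint(pts):
                continue                          # not associated with the newest frame
            dist = np.linalg.norm(bows[newest_id] - bows[kf_id])
            if dist < best_dist:
                best_id, best_dist = kf_id, dist
        return best_id                            # None if no associated key frame exists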
7. The method according to claim 1, wherein extracting the feature points in the image comprises: extracting Features from Accelerated Segment Test (FAST) feature points from the image.
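[Editor's illustration] For reference, OpenCV exposes a FAST detector directly; a minimal call:

    import cv2

    def extract_fast(gray, threshold=25):
        detector = cv2.FastFeatureDetector_create(threshold=threshold,
                                                  nonmaxSuppression=True)
        return detector.detect(gray, None)   # list of cv2.KeyPoint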
8. The method according to claim 1, wherein issuing the danger alert to the user comprises:
copying the preset monophonic alert audio data into two copies for the left and right channels, and applying an FFT to the alert audio data of each channel to obtain frequency-domain alert audio data for the left and right channels;
determining, according to the world coordinates of the object of concern and the world coordinates of the VR camera, the sounding position of the alert audio, wherein the sounding position is expressed using HRTF standard spatial location parameters, the sounding position lies on the straight line connecting the VR camera and the object of concern, and the distance between the sounding position and the VR camera is preset;
reading, according to the sounding position of the alert audio, the corresponding HRTF transform data from an HRTF standard database; applying an FFT to the HRTF transform data to obtain frequency-domain HRTF transform data; multiplying the frequency-domain alert audio data of the left and right channels by the frequency-domain HRTF transform data respectively to obtain frequency-domain alert spatial audio data for the left and right channels; applying an IFFT to the frequency-domain alert spatial audio data of each channel to obtain time-domain alert spatial audio data for the left and right channels; and playing the data to the user through the left and right channels respectively.
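[Editor's illustration] The FFT-multiply-IFFT chain of claim 8 is frequency-domain convolution. A sketch, assuming the HRTF database yields a left/right impulse-response pair for the computed sounding position; long signals would use overlap-add in practice.

    import numpy as np

    def spatialize(mono, hrtf_left, hrtf_right):
        # Zero-pad to the linear-convolution length so the frequency-domain
        # product equals time-domain convolution.
        n = len(mono) + max(len(hrtf_left), len(hrtf_right)) - 1
        M = np.fft.rfft(mono, n)                   # FFT of the (copied) mono alert
        left = np.fft.irfft(M * np.fft.rfft(hrtf_left, n), n)   # multiply, then IFFT
        right = np.fft.irfft(M * np.fft.rfft(hrtf_right, n), n)
        return np.stack([left, right], axis=1)     # time-domain stereo alert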
9. The method according to claim 1, wherein issuing the danger alert to the user comprises:
superimposing, on the 3-dimensional VR image in which the object of concern is detected, the contour formed by all feature points of the object of concern with known world coordinates; or,
superimposing alert text information on the 2-dimensional image in which the object of concern is detected, the text information comprising: distance information of the VR camera from the object of concern; or,
displaying the 3-dimensional VR image in which the object of concern is detected with background information removed, that is, deleting the background image outside the contour formed by all feature points with known world coordinates on the VR image in which the object of concern is detected, so as to display only the VR camera, the user, and the object of concern, wherein, when the object of concern is displayed, its color is rendered as a progressive gradient according to its distance from the VR camera, from far to near.
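[Editor's illustration] The first two display options (contour overlay plus distance text) might look like the following, where pts2d, the pixel positions of the object's feature points, and the distance value are assumed to come from the detection step.

    import cv2
    import numpy as np

    def draw_concern_overlay(frame_bgr, pts2d, distance_m):
        hull = cv2.convexHull(np.int32(pts2d))     # contour of the feature points
        cv2.polylines(frame_bgr, [hull], True, (0, 0, 255), 2)
        x, y = hull[0][0]
        cv2.putText(frame_bgr, "%.1f m" % distance_m, (int(x), int(y) - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        return frame_bgr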
10. A virtual reality (VR) environment danger sensing device, wherein the device comprises:
a feature extraction and calculation module, configured to extract the feature points in each frame image acquired in real time by a VR camera; for each frame image from the second frame onward acquired by the VR camera, to match the feature points in the current frame image with the feature points in the previous frame image, and to calculate, according to the positions of each pair of matched points in the two frame images, the motion vector of the VR camera when acquiring the current frame image relative to when it acquired the previous frame image; and to calculate, according to the calculated motion vector, the world coordinates of each successfully matched feature point in the current frame image and the previous frame image;
a hazard detection module, configured to detect, according to all feature points in the current frame image whose world coordinates have been calculated, whether an object of concern is present in the current frame image; if so, to calculate the current world coordinates of the VR camera according to the initial world coordinates of the VR camera and the motion vector of each acquired frame relative to its previous frame, to calculate the distance between the object of concern and the current VR camera according to the world coordinates of the object of concern in the current frame and the current world coordinates of the VR camera, and to issue a danger alert to the user if that distance is less than a preset threshold.
11. The device according to claim 10, wherein the feature extraction and calculation module is further configured, after calculating the world coordinates of each successfully matched feature point in the current frame image and the previous frame image, to:
put the calculated world coordinates of the feature points into a local map description library, and record in the local map description library the frame identification corresponding to each feature point;
and the feature extraction and calculation module is further configured, after matching the current frame image with the feature points in the previous frame image and before calculating, according to the positions of each pair of matched points in the two frame images, the motion vector of the VR camera when acquiring the current frame image relative to when it acquired the previous frame image, to:
judge whether the current frame satisfies one of the following key frame decision conditions:
(1) the total number of key frames in the key frame set < the first threshold;
(2) (the number of feature points of the current frame successfully matched with the previous frame image) / (the total number of feature points extracted from the current frame) < the second threshold;
if satisfied, determine that the current frame is a key frame, put the frame identification of the current frame into the key frame set, and then execute the action of calculating, according to the positions of each pair of matched points in the two frame images, the motion vector of the VR camera when acquiring the current frame image relative to when it acquired the previous frame image; otherwise, determine that the current frame is a non-key frame, discard the current frame, and proceed directly to the next frame.
12. The device according to claim 11, wherein, when the current frame is determined to be a key frame, the feature extraction and calculation module is further configured, after calculating the world coordinates of each successfully matched feature point in the current frame image and the previous frame image, to:
match all feature points of the current frame whose world coordinates have been calculated against all feature points with calculated world coordinates of each key frame in the key frame set; if the matching rate is greater than the preset third threshold, consider the current frame redundant, neither add the current frame to the key frame set nor update the local map description library with the feature points of the current frame whose world coordinates have been calculated, and proceed to the next frame, wherein two feature points are considered matched if their world coordinates are identical.
13. The device according to claim 11, wherein the feature extraction and calculation module is further configured, after calculating the world coordinates of each successfully matched feature point in the current frame image and the previous frame image, to:
compose a bag-of-words (BOW) vector from all feature points of the current frame whose world coordinates have been calculated, and match the BOW vector of the current frame against the BOW vector of each key frame in the key frame set; if the match with a key frame succeeds, consider the relocalization of the current frame successful, that is, the location of the VR camera when acquiring the current frame is taken to be identical to its location when it acquired the matched key frame; then discard the feature points originally extracted from the current frame, find in the local map description library the world coordinates of all feature points corresponding to the matched key frame, add the frame identification of the current frame to the frame identification list of the world coordinates of each found feature point, and do not put the current frame into the key frame set.
14. The device according to claim 13, wherein the key frame decision conditions judged by the feature extraction and calculation module further comprise:
(3) the time elapsed since the current frame's last relocalization exceeds the preset fifth threshold.
15. The device according to claim 11, wherein the feature extraction and calculation module is further configured to:
when the preset closed-loop detection cycle arrives, for the key frames in the key frame set, calculate the distance between the BOW vector of the newest key frame and the BOW vector of each key frame associated with it, and take the associated key frame with the minimum distance as the candidate loop-closure frame of the newest key frame, wherein two key frames are considered associated when the world coordinates of at least one pair of feature points in the two key frames are identical;
calculate, according to the world coordinates of the feature points on the newest key frame and the candidate loop-closure frame, the motion vector of the VR camera when acquiring the newest key frame relative to when it acquired the candidate loop-closure frame; take the candidate loop-closure frame as the previous frame of the newest key frame, match the feature points of the two frames, recalculate the world coordinates of all successfully matched feature points according to the calculated motion vector, update, with the recalculated world coordinates, the world coordinates of all feature points of the newest key frame in the local map description library, and add the frame identification of the newest key frame to the frame identifications corresponding to all successfully matched feature points in the local map description library.
16. The device according to claim 10, wherein the feature points extracted in the image by the feature extraction and calculation module are: Features from Accelerated Segment Test (FAST) feature points extracted from the image.
17. The device according to claim 10, wherein, to issue the danger alert to the user, the hazard detection module is configured to:
copy the preset monophonic alert audio data into two copies for the left and right channels, and apply an FFT to the alert audio data of each channel to obtain frequency-domain alert audio data for the left and right channels;
determine, according to the world coordinates of the object of concern and the world coordinates of the VR camera, the sounding position of the alert audio, wherein the sounding position is expressed using HRTF standard spatial location parameters, the sounding position lies on the straight line connecting the VR camera and the object of concern, and the distance between the sounding position and the VR camera is preset;
read, according to the sounding position of the alert audio, the corresponding HRTF transform data from the HRTF standard database; apply an FFT to the HRTF transform data to obtain frequency-domain HRTF transform data; multiply the frequency-domain alert audio data of the left and right channels by the frequency-domain HRTF transform data respectively to obtain frequency-domain alert spatial audio data for the left and right channels; apply an IFFT to the frequency-domain alert spatial audio data of each channel to obtain time-domain alert spatial audio data for the left and right channels; and play the data to the user through the left and right channels respectively.
18. The device according to claim 10, wherein, to issue the danger alert to the user, the hazard detection module is configured to:
superimpose, on the 3-dimensional VR image in which the object of concern is detected, the contour formed by all feature points of the object of concern with known world coordinates; or,
superimpose alert text information on the 2-dimensional image in which the object of concern is detected, the text information comprising: distance information of the VR camera from the object of concern; or,
display the 3-dimensional VR image in which the object of concern is detected with background information removed, that is, delete the background image outside the contour formed by all feature points with known world coordinates on the VR image in which the object of concern is detected, so as to display only the VR camera, the user, and the object of concern, wherein, when the object of concern is displayed, its color is rendered as a progressive gradient according to its distance from the VR camera, from far to near.
CN201810412419.6A 2018-05-03 2018-05-03 Virtual reality environment danger sensing method and device Active CN108597036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810412419.6A CN108597036B (en) 2018-05-03 2018-05-03 Virtual reality environment danger sensing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810412419.6A CN108597036B (en) 2018-05-03 2018-05-03 Virtual reality environment danger sensing method and device

Publications (2)

Publication Number Publication Date
CN108597036A true CN108597036A (en) 2018-09-28
CN108597036B CN108597036B (en) 2022-04-12

Family

ID=63620609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810412419.6A Active CN108597036B (en) 2018-05-03 2018-05-03 Virtual reality environment danger sensing method and device

Country Status (1)

Country Link
CN (1) CN108597036B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060280244A1 (en) * 2005-06-10 2006-12-14 Sony Corporation Moving picture converting apparatus and method, and computer program
US20090046864A1 (en) * 2007-03-01 2009-02-19 Genaudio, Inc. Audio spatialization and environment simulation
CN101391589A (en) * 2008-10-30 2009-03-25 上海大学 Vehicle intelligent alarming method and device
US20150154456A1 (en) * 2012-07-11 2015-06-04 Rai Radiotelevisione Italiana S.P.A. Method and an apparatus for the extraction of descriptors from video content, preferably for search and retrieval purpose
US9754167B1 (en) * 2014-04-17 2017-09-05 Leap Motion, Inc. Safety for wearable virtual reality devices via object detection and tracking
US20150371444A1 (en) * 2014-06-18 2015-12-24 Canon Kabushiki Kaisha Image processing system and control method for the same
US20160063330A1 (en) * 2014-09-03 2016-03-03 Sharp Laboratories Of America, Inc. Methods and Systems for Vision-Based Motion Estimation
CN105574552A (en) * 2014-10-09 2016-05-11 东北大学 Vehicle ranging and collision early warning method based on monocular vision
US20170148223A1 (en) * 2014-10-31 2017-05-25 Fyusion, Inc. Real-time mobile device capture and generation of ar/vr content
US20190051051A1 (en) * 2016-04-14 2019-02-14 The Research Foundation For The State University Of New York System and Method for Generating a Progressive Representation Associated with Surjectively Mapped Virtual and Physical Reality Image Data
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAO Lu: "Video-based Moving Target Detection and Tracking", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110892354A (en) * 2018-11-30 2020-03-17 深圳市大疆创新科技有限公司 Image processing method and unmanned aerial vehicle
CN111105467A (en) * 2019-12-16 2020-05-05 北京超图软件股份有限公司 Image calibration method and device and electronic equipment
CN111105467B (en) * 2019-12-16 2023-08-29 北京超图软件股份有限公司 Image calibration method and device and electronic equipment
TWI754959B (en) * 2020-06-04 2022-02-11 宏達國際電子股份有限公司 Method for dynamically displaying real-world scene, electronic device, and computer readable medium
US11493764B2 (en) 2020-06-04 2022-11-08 Htc Corporation Method for dynamically displaying real-world scene, electronic device, and computer readable medium

Also Published As

Publication number Publication date
CN108597036B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
US20180189974A1 (en) Machine learning based model localization system
US20220012495A1 (en) Visual feature tagging in multi-view interactive digital media representations
EP2614487B1 (en) Online reference generation and tracking for multi-user augmented reality
JP2020535536A5 (en)
CN106033601B (en) The method and apparatus for detecting abnormal case
CN103530881B (en) Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal
JP2019075156A (en) Method, circuit, device, and system for registering and tracking multifactorial image characteristic and code executable by related computer
KR20180100180A (en) How to create a customized / personalized head transfer function
KR101893771B1 (en) Apparatus and method for processing 3d information
Paletta et al. 3D attention: measurement of visual saliency using eye tracking glasses
US20200258309A1 (en) Live in-camera overlays
US20140126769A1 (en) Fast initialization for monocular visual slam
JP2008535116A (en) Method and apparatus for three-dimensional rendering
CN109242950A (en) Multi-angle of view human body dynamic three-dimensional reconstruction method under more close interaction scenarios of people
CN108597036A Virtual reality environment danger sensing method and device
KR20150130483A (en) In situ creation of planar natural feature targets
CN103003843B (en) Create for following the tracks of the data set of the target with dynamic changing unit
US20150138193A1 (en) Method and device for panorama-based inter-viewpoint walkthrough, and machine readable medium
D'Apuzzo Surface measurement and tracking of human body parts from multi-image video sequences
CN104184938B (en) Image processing apparatus, image processing method and program
Porzi et al. Learning contours for automatic annotations of mountains pictures on a smartphone
CN109644280A (en) The method for generating the depth of seam division data of scene
TW201915952A (en) Method and apparatus for generating visualization object, and device
JP6950644B2 (en) Attention target estimation device and attention target estimation method
KR101253644B1 (en) Apparatus and method for displaying augmented reality content using geographic information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant