WO2019070869A1 - Combining synthetic imagery with real imagery for vehicular operations - Google Patents

Combining synthetic imagery with real imagery for vehicular operations

Info

Publication number
WO2019070869A1
WO2019070869A1 (PCT/US2018/054187)
Authority
WO
WIPO (PCT)
Prior art keywords
image
user
view
processor
video image
Prior art date
Application number
PCT/US2018/054187
Other languages
French (fr)
Inventor
Paul Albert Voisin
Original Assignee
L3 Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by L3 Technologies, Inc. filed Critical L3 Technologies, Inc.
Priority to CN201880065422.3A priority Critical patent/CN111183639A/en
Priority to CA3077430A priority patent/CA3077430A1/en
Priority to EP18796168.5A priority patent/EP3692714A1/en
Priority to AU2018345666A priority patent/AU2018345666A1/en
Priority to JP2020519324A priority patent/JP2020537390A/en
Publication of WO2019070869A1 publication Critical patent/WO2019070869A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30: Image reproducers
    • H04N 13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344: Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D 43/00: Arrangements or adaptations of instruments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/156: Mixing image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30: Image reproducers
    • H04N 13/366: Image reproducers using viewer tracking
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D 11/00: Passenger or crew accommodation; Flight-deck installations not otherwise provided for
    • B64D 2011/0061: Windows displaying outside view, artificially generated
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30204: Marker
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30268: Vehicle interior

Definitions

  • Various display systems may benefit from the combination of synthetic imagery from a plurality of sources.
  • display systems for vehicular operations may benefit from combining synthetic imagery with real imagery.
  • synthetic image displays show an outside view on the instrument panel.
  • a heads up display (HUD)
  • display glasses can provide HUD-like imagery to a user.
  • a method can include obtaining, by a processor, an interior video image based on a position of a user.
  • the method can also include obtaining, by the processor, an exterior video image based on the position of the user.
  • the method can further include combining the interior video image and the exterior video image to form a combined single view for the user.
  • the method can additionally include providing the combined single view to a display of the user.
  • an apparatus can include at least one processor and at least one memory including computer program code.
  • the at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to obtain an interior video image based on a position of a user.
  • the at least one memory and the computer program code can also be configured to, with the at least one processor, cause the apparatus at least to obtain an exterior video image based on the position of the user.
  • the at least one memory and the computer program code can further be configured to, with the at least one processor, cause the apparatus at least to combine the interior video image and the exterior video image to form a combined single view for the user.
  • the at least one memory and the computer program code can additionally be configured to, with the at least one processor, cause the apparatus at least to provide the combined single view to a display of the user.
  • An apparatus, in certain embodiments, can include means for obtaining, by a processor, an interior video image based on a position of a user.
  • the apparatus can also include means for obtaining, by the processor, an exterior video image based on the position of the user.
  • the apparatus can further include means for combining the interior video image and the exterior video image to form a combined single view for the user.
  • the apparatus can additionally include means for providing the combined single view to a display of the user.
  • a system can include a first camera configured to provide a near focus view of surroundings of a user.
  • the system can also include a second camera configured to provide a distance focus view of the surroundings of the user.
  • the system can further include a processor configured to provide a combined view of the surroundings based on the near focus view and the distance focus view.
  • the system can additionally include a display configured to display the combined view to the user.
  • Figure 1 illustrates markers according to certain embodiments of the present invention.
  • Figure 2 illustrates a mapping of mask areas according to certain embodiments of the present invention.
  • Figure 3 illustrates display glasses according to certain embodiments of the present invention.
  • Figure 4 illustrates a synthetic image mapped to a window, according to certain embodiments of the present invention.
  • Figure 5 illustrates a camera image mapped to a window, according to certain embodiments of the present invention.
  • Figure 6 illustrates a system according to certain embodiments of the present invention.
  • Figure 7 illustrates a method according to certain embodiments of the present invention.
  • Figure 8 illustrates a further system according to certain embodiments of the present invention.
  • Certain embodiments of the present invention provide mechanisms, systems, and methods for vehicle operators who encounter limited visibility due to obscuration to maintain reference to the outside environment and also vehicle instruments / interior.
  • This obscuration may be from, for example, clouds, smoke, fog, night, snow, or the like.
  • Certain embodiments may display a synthetic image in the windscreen area, not just on the instrument panel. This synthetic image may appear larger to the pilot than traditional synthetic images. Moreover, the pilot may be able to avoid or limit cross-checking between the instrument panels and the windscreen.
  • the synthetic image can be in full color and can contain all major features. Moreover, the instrument panel and the interior can still be visible. Furthermore, collimating optics can be avoided. All imagery can be presented at the same focal distance for the user.
  • Certain embodiments may align the synthetic image to the cockpit environment. Edge and/or object detection can be used to automatically update image alignment.
  • Certain embodiments can be applied to flying vehicles, such as airplanes. Nevertheless, other embodiments may be applied to other categories of vehicles, such as boats, amphibious vehicles, such as hovercraft, wheeled vehicles, such as cars and trucks, or treaded vehicles, such as snowmobiles.
  • Certain embodiments of the present invention can provide devices and methods for combining a real time synthetic image of the outside environment with real time video imagery.
  • some of the components of a system can include a system processor, markers, and display glasses.
  • FIG. 1 illustrates markers according to certain embodiments of the present invention.
  • markers can be installed at fixed locations within a cockpit. These markers can be selected to be any recognizable form of marker, such as a marker having a particular predefined geometry, color, pattern, or reflectivity.
  • a plurality of markers can be placed at predetermined locations throughout the cockpit. The example of a cockpit is used, but other locations such as the bridge of a ship or yacht or the driver's seat area of a car can be similarly equipped.
  • the markers can be located throughout a visual domain of the vehicle operator (for example, pilot). Thus, the position of markers can be distributed such that at least one marker will typically be visible within the field of vision of the operator during vehicle operation.
  • Figure 2 illustrates a mapping of mask areas according to certain embodiments of the present invention. As shown in Figure 2, the mask areas can correspond to the windscreen and other windows within the cockpit area.
  • the display glasses contain built-in video camera(s), an infra-red emitter, and 3-axis angular rate gyros. Typical applications are for vehicles such as aircraft or cars.
  • FIG. 3 illustrates display glasses according to certain embodiments of the present invention.
  • video camera(s) can be mounted on the display glasses facing forward and can provide focused imagery for both near (interior) and distance (exterior) processing.
  • the display glasses can also include an infrared (IR) emitter.
  • the IR emitter can be used to illuminate the markers, which may be designed to reflect IR light particularly well.
  • the display glasses can also include rate gyros or other movement sensing devices, such as microelectromechanical sensors (MEMS) or the like.
  • Figure 4 illustrates a synthetic image mapped to a window, according to certain embodiments of the present invention.
  • the synthetic image can be mapped only to the mask areas, such as those shown in Figure 2.
  • although a single image is shown, optionally a stereoscopic image can be presented, such that each eye sees a slightly different image.
  • Figure 5 illustrates a camera image mapped to a window, according to certain embodiments of the present invention. As shown in Figure 5, the camera image can be mapped only to the mask areas, such as those shown in Figure 2.
  • although a single image is shown, optionally a stereoscopic image can be presented, such that each eye sees a slightly different image.
  • Figure 6 illustrates a system according to certain embodiments of the present invention.
  • the system can include a near focus camera and a distance focus camera. Although only one of each camera is shown, a plurality of cameras can be provided, for example to provide a stereoscopic image or a telephoto option.
  • the distance focus camera can provide exterior video to an exterior image masking section.
  • the exterior image masking section can be implemented in a processor, such as a graphics processor.
  • the exterior video can refer to video corresponding to the exterior of the vehicle, such as the environment of the airplane.
  • the near focus camera can provide interior video to an interior image masking section.
  • the interior image masking section can be implemented in a processor, such as a graphics processor. This may be the same processor as for the exterior video masking section, or it may be a different processor.
  • the system may include a multicore processor, and the interior image masking section and exterior image masking section can be implemented in different threads on different cores of the multicore processor.
  • the interior video can refer to video corresponding to the interior of the vehicle, such as the cockpit of the airplane.
  • the interior video can also be provided to a marker detection and location section.
  • the exterior video can optionally also be provided to this same marker detection and location section. If the focus of the exterior video is set to be longer than the interior walls of the cockpit, the exterior video may not be as useful for marker detection and location, as the markers may be out of focus.
  • the marker detection and location section can be implemented in the same or different processor(s) as those discussed above.
  • each processing section of this system may be implemented in one or many processors, and in one or many threads on such processors.
  • each referenced "section" herein can be similarly embodied alone or in combination with any of the other identified sections, even when such is not explicitly stated in the following discussion.
  • Three-axis angular rate gyros or similar accelerometers can provide rate data to an integrated angular displacement section.
  • the integrated angular displacement section can also receive time data, from a clock source.
  • the clock source may be a local clock source, a radio clock source, or any other clock source, such as clock data from a global positioning system (GPS) source.
  • GPS and air data can be provided as inputs to a vehicle geo-reference data section.
  • the vehicle geo-reference data section can provide detailed information about the aircraft position and orientation, including such information as latitude, longitude, altitude, pitch, roll, and heading.
  • the information can include the current values of these, as well as rate or acceleration information regarding each of these.
  • the information from the vehicle geo-reference data section can be provided to an exterior synthetic image generator section.
  • the exterior synthetic image generator section can also receive data from a synthetic image database.
  • the synthetic image database may be local or remote.
  • a local synthetic image database can store data regarding the immediate vicinity of the aircraft or other vehicle. For example, all the synthetic image data for one hour or one fuel tank of range may be stored locally, while additional synthetic image data can be remotely stored and retrievable by the aircraft.
  • a vehicle map database can provide interior mask data to a frame interior mask transformation section.
  • the vehicle map database can also provide exterior mask data to a frame exterior mask transformation section.
  • the vehicle map database can additionally provide marker locations to the marker detection and location section and to a user direction of view section.
  • the vehicle map database and the synthetic image database can each or both be implemented using one or more memories.
  • the memory may be any form of computer storage device, including optical storage such as CD-ROM or DVD storage, magnetic storage, such as tape drive or floppy disk storage, or solid state storage, such as flash random access memory (RAM) or solid state drives (SSDs). Any non-transitory computer-readable medium may be used to store the databases. The same or any other non-transitory computer-readable medium may be used to store computer instructions, such as computer commands, to implement the various computing sections described herein.
  • the database storage can be separate from or integrated with the computer command storage. Memory safety techniques, such as redundant array of inexpensive disks (RAID), can be employed. Backup of the memory can be performed locally or in a cloud system.
  • the memory of the system can be in communication with a flight recorder and can provide details of the operational state(s) of the system to the flight recorder.
  • the marker detection and location section can provide information based on the near focus camera and marker locations to the user direction of view section.
  • the user direction of view section can also receive integrated angular displacement data from the integrated angular displacement section.
  • the user direction of view section can, in turn, provide information regarding the current direction a user is viewing to the frame interior mask transformation section, the frame exterior mask transformation section, and the exterior synthetic image generator.
  • the frame interior mask transformation section can provide interior mask transformation data based on the interior mask data and the user direction of view data.
  • the interior mask transformation data can be provided to an interior image masking section.
  • the interior image masking section can also receive the interior video from the near focus camera.
  • the interior image masking section can provide interior image masking data to an interior exterior image combiner section.
  • the exterior synthetic image generator section can, based on data from the vehicle geo-reference data section, the synthetic image database, and the user direction of view section, provide an exterior synthetic image to the synthetic image masking section.
  • the synthetic image masking section can, based on the exterior synthetic image and the frame exterior mask transformation, create masked synthetic image data and provide such data to an exterior image mixing section.
  • the exterior image masking section can receive the frame exterior mask transformation data and the exterior video and can create a masked exterior image.
  • the masked exterior image can be provided to the exterior image mixing section as well as to an edge / object detection section.
  • the edge / object detection section can provide output to an automatic transparency section, which can, in turn, provide transparency information to the exterior image mixing section.
  • An overlay symbology generator section can provide overlay symbology to the exterior image mixing section.
  • the exterior image mixing section can provide an exterior image to the interior exterior image combiner section.
  • the interior exterior image combiner section can combine the interior and exterior images and can provide them to display glasses.
  • a system processor in certain embodiments can include vehicle geo-reference data, a synthetic imagery database, a synthetic image generator, and components for manipulating and displaying video/image data.
  • Markers can be located within the user's normal field-of-view inside the vehicle's interior.
  • the markers may be natural features, such as support columns, or intentionally placed fiducials. These features can be provided in fixed positions relative to the visual obstacles of the interior.
  • Figure 1 provides an illustration of some example markers.
  • the processor can locate the markers in the video image and can use this information to determine the user's direction-of-view relative to the vehicle structure.
  • the user's direction-of-view may change due to head movement, seat change, and the like.
  • exterior mask(s) and interior mask(s) can be determined relative to the vehicle structure, by the use of fixed markers.
  • the exterior mask(s) can be the windscreen and windows; however, the exterior mask(s) can be arbitrarily defined, if desired.
  • Figure 2 provides an example of an exterior mask.
  • the interior mask(s) can be the inverse of the exterior mask(s).
  • the interior mask(s) may typically be everything except the window areas.
  • the interior mask(s) can also be arbitrarily defined.
  • the interior mask(s) may include the instrument panel, the controls and the remainder of the vehicle interior.
  • the exterior mask(s), interior mask(s) and marker locations can be stored in the vehicle map database.
  • Enhanced imagery can be selectively displayed only in the exterior mask(s) and can be aligned to the user's direction-of-view.
  • the level of image enhancement may vary from real time video, as illustrated in Figure 5, to fully synthetic imagery as illustrated in Figure 4, or any combination thereof. Additional information, such as vehicle parameters, obstacles, and traffic, may also be included as an overlay in the enhanced imagery.
  • the level of enhancement can be automatic or user selected.
  • Real time video imagery may always be displayed in the interior mask(s) and may be aligned to the user's direction-of-view.
  • the processor can maintain orientation and alignment of the mask(s) relative to the vehicle structure by locating the fixed marker(s) in the camera(s) image frame. As the user's head moves, the mask(s) can move in the opposite direction.
  • the user's direction-of-view, geo-reference data and synthetic image database can be used to generate the real time synthetic imagery.
  • the geo-reference data for the vehicle can include any of the following: latitude, longitude, attitude (pitch, roll), heading (yaw), and altitude.
  • Such data can be provided by, for example, GPS, attitude gyros, and air data sensors.
  • Long-term orientation of the user's direction-of-view can be based on locating the markers within the vehicle. This can be accomplished by numerous methods, such as reflection of IR emitter signal or object detection via image analysis. Short term stabilization of the direction-of-view can be provided by the 3-axis rate gyro (or similar) data.
  • Integration of the rate gyro data can provide total angular displacement. This can be useful for characterizing the marker location(s) during installation. Once known, the movement of the marker(s) can be correlated to the user's actual direction-of-view.
  • Data for marker characterization can be collected by wearing the display glasses and scanning the entire allowable range of direction-of-view from the operator's station.
  • the display glasses can be slewed fully left, right, up, and down.
  • the result can be a spherical or semi-spherical panoramic image.
  • the exterior mask(s) and interior mask(s) can be determined. These mask(s) can be arbitrary and can be defined by several methods. For example, software tools can be used to edit the panoramic image. Another option is to use chroma key by applying green fabric to the windows or other areas and automatically detecting the green areas as mask areas. A further option is to detect and filter bright areas when the vehicle is in bright daylight.
  • Frame mask transformation can be variously accomplished. A transformation vector can be computed as the vector that will best move the marker(s) in the vehicle map database to the detected marker location(s) based on the user's direction of view.
  • the frame exterior mask(s) and frame interior mask(s) can be computed using the transformation vector, exterior mask(s) and interior mask(s).
  • the frame exterior mask(s) can be used to crop the exterior video and synthetic image.
  • the frame interior mask(s) can be used to crop interior video.
  • the vehicle exterior mask(s) and interior mask(s) do not need to be altered.
  • the system can dither the boundary between the exterior and interior masks, such that the boundary may not be pronounced or distracting.
  • Variable transparency can permit the generation of an enhanced image by mixing or combining exterior masked video and synthetic masked video.
  • the transparency ratio, which can be an analog value, can be determined by the user or by an automatic algorithm.
  • the automatic algorithm can process the masked exterior video data for edge detection. Higher definition of edges can cause the exterior masked video to become dominant. Conversely, lower edge detection can result in synthetic masked video becoming dominant.
  • the interior mask(s) can be the inverse of the exterior mask(s), as mentioned above. Therefore, the frame interior masked image can be combined with an enhanced image using a simple maximum value operation for each pixel. This can provide the user with imagery (real and enhanced) that is coherent with both the vehicle interior and the outside environment.
  • the alignment of the synthetic image to the outside environment can be accomplished via edge / object detection of visible features. This can happen on a continuous basis without user input.
  • the position of the sun relative to the direction of view may be known. Therefore, the sun may be tracked within the image and reduced in intensity, which may reduce and/or eliminate sun glare.
  • Figure 7 illustrates a method according to certain embodiments of the present invention.
  • a method can include, at 710, obtaining, by a processor, an interior video image based on a position of a user.
  • the interior video image can be a live camera feed, for example a live video image of the interior of a cockpit as in the previous examples.
  • the method can also include, at 720, obtaining, by the processor, an exterior video image based on the position of the user.
  • the obtaining the exterior video image can include, at 724, selecting from a live camera feed, a synthetic image, or a combination of the live camera feed and the synthetic image.
  • the method can include, at 726, selecting a transparency for the combination of the live camera feed and the synthetic image.
  • the method can also include, at 722, generating the synthetic image based on the position of the user.
  • an alignment of the synthetic image can be determined based on at least one of edge detection or object detection from the interior video image. Edge detection and/or object detection can also be used to help decide whether to select the synthetic image, the live video image, or some combination thereof.
  • the method can further include, at 730, combining the interior video image and the exterior video image to form a combined single view for the user.
  • the combined single view can be a live video image of a cockpit including the instrument panel view and window view, as described above.
  • the method can additionally include, at 740, providing the combined single view to a display of the user; a minimal end-to-end sketch of steps 710 through 740 appears at the end of this list.
  • the display can be glasses worn by the pilot of an aircraft.
  • the display can be further configured to superimpose additional information similar to the way information is provided on a heads-up display.
  • Figure 8 illustrates an exemplary system, according to certain embodiments of the present invention. It should be understood that each block of the exemplary method of Figure 7 may be implemented by various means or their combinations, such as hardware, software, firmware, one or more processors and/or circuitry.
  • a system may include several devices, such as, for example, device 810 and display device 820.
  • the system may include more than one display device 820 and more than one device 810, although only one of each is shown for the purposes of illustration.
  • the device 810 may be any suitable piece of avionics hardware, such as a line replaceable unit of an avionics system.
  • the display device 820 may be any desired display device, such as display glasses, which may provide a single image or a pair of coordinated stereoscopic images.
  • the device 810 may include at least one processor or control unit or module, indicated as 814. At least one memory may be provided in the device 810, indicated as 815. The memory 815 may include computer program instructions or computer code contained therein, for example, for carrying out the embodiments of the present invention, as described above.
  • One or more transceivers 816 may be provided, and the device 810 may also include an antenna, illustrated as 817. Although only one antenna is shown, many antennas and multiple antenna elements may be provided for the device 810. Other configurations of the device 810, for example, may be provided.
  • device 810 may be configured for wired communication (as shown to connect to display device 820), in addition to or instead of wireless communication, and in such a case, antenna 817 may illustrate any form of communication hardware, without being limited to merely an antenna.
  • Transceiver 816 may be a transmitter, a receiver, or both a transmitter and a receiver, or a unit or a device that may be configured both for transmission and reception.
  • Processor 814 may be embodied by any computational or data processing device, such as a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), a digitally enhanced circuit, or a comparable device or a combination thereof.
  • the processor 814 may be implemented as a single controller, or a plurality of controllers or processors. Additionally, the processor 814 may be implemented as a pool of processors in a local configuration, in a cloud configuration, or in a combination thereof.
  • the term “circuitry” may refer to one or more electric or electronic circuits.
  • processor may refer to circuitry, such as logic circuitry, that responds to and processes instructions that drive a computer.
  • Memory 815 may be any suitable storage device, such as a non-transitory computer-readable medium.
  • a hard disk drive (HDD), random access memory (RAM), flash memory, or other suitable memory may be used.
  • the memory 815 may be combined on a single integrated circuit as the processor, or may be separate therefrom.
  • the computer program instructions which may be stored in the memory 815 and processed by the processor 814 can be any suitable form of computer program code, for example, a compiled or interpreted computer program written in any suitable programming language.
  • the memory 815 or data storage entity is typically internal but may also be external or a combination thereof, such as in the case when additional memory capacity is obtained from a service provider.
  • the memory may be fixed or removable.
  • the memory 815 and the computer program instructions may be configured, with the processor 814 for the particular device, to cause a hardware apparatus, such as device 810, to perform any of the processes described above (see, for example, Figures 1 and 2). Therefore, in certain embodiments of the present invention, a non-transitory computer-readable medium may be encoded with computer instructions or one or more computer programs (such as added or updated software routines, applets or macros) that, when executed in hardware, may perform a process, such as one or more of the processes described herein.
  • Computer programs may be coded by any programming language, which may be a high-level programming language, such as Objective-C, C, C++, C#, Java, etc., or a low-level programming language, such as a machine language, or an assembler. Alternatively, certain embodiments of the invention may be performed entirely in hardware.
  • a left eye view may have a different combination of images than the right eye view.
  • the right eye view may be purely live video images, whereas the left eye view may have a synthetic exterior video image.
  • one eye view may simply pass through the glasses transparently.
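To tie the method steps of Figure 7 together, here is a minimal per-frame sketch of steps 710 through 740 under the assumptions used throughout this page: frames are same-sized NumPy images, the frame masks are 8-bit and complementary, and alpha is the transparency ratio chosen by the user or by the automatic algorithm. All function and parameter names are illustrative, not from the patent.

```python
# Minimal end-to-end sketch of the Figure 7 method (steps 710-740).
import numpy as np

def render_frame(interior_frame, exterior_frame, synthetic_frame,
                 frame_int_mask, frame_ext_mask, alpha):
    # 710: interior video image, cropped by the frame interior mask.
    interior = interior_frame * (frame_int_mask[..., None] / 255.0)
    # 720/722/724/726: exterior image as a transparency-weighted mix of the
    # live camera feed and the synthetic image, cropped to the windows.
    exterior = alpha * exterior_frame + (1.0 - alpha) * synthetic_frame
    exterior = exterior * (frame_ext_mask[..., None] / 255.0)
    # 730: combine into a single view; complementary masks permit a maximum.
    combined = np.maximum(interior, exterior)
    # 740: the combined view would then be provided to the display glasses.
    return combined.astype(np.uint8)
```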

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Various display systems may benefit from the combination of synthetic imagery from a plurality of sources. For example, display systems for vehicular operations may benefit from combining synthetic imagery with real imagery. A method can include obtaining, by a processor, an interior video image based on a position of a user. The method can also include obtaining, by the processor, an exterior video image based on the position of the user. The method can further include combining the interior video image and the exterior video image to form a combined single view for the user. The method can additionally include providing the combined single view to a display of the user.

Description

TITLE:
Combining Synthetic Imagery with Real Imagery for Vehicular Operations
BACKGROUND:
Field:
[0001 J Various display systems may benefit from the combination of synthetic imagery from a plurality of sources. For example, display systems for vehicular operations may benefit from combining synthetic imagery with real imagery.
Description of the Related Art:
[0002] Since the 1920s, aircraft makers have incorporated instruments into planes, in order to permit operation of planes in limited or zero visibility conditions. Traditionally, these instruments were located on an instrument panel. Thus, it was necessary for the pilot to look away from the windows of the aircraft in order to verify the flight conditions using the instruments.
[0003] More recently, synthetic image displays show an outside view on the instrument panel. Also, in the case of certain military aircraft such as F18s, a heads up display (HUD) can provide a visual display of certain aircraft parameters, such as attitude, altitude, and the like. Furthermore, in some cases display glasses can provide HUD-like imagery to a user.
[0004] Major aircraft modifications may be required to install a HUD. Certain installations must typically be boresighted, and the viewing box can be very limited. Synthetic image displays require the pilot to look down at the instruments while on approach and cross check the windscreen to find the runway environment. The image is limited in size, and the focal distance of the pilot must change from near to far, back to near, and so on. Display glasses may have to collimate the image to create the same focal distance as the outside environment, otherwise the image may be blurry.
[0005] According to certain embodiments, a method can include obtaining, by a processor, an interior video image based on a position of a user. The method can also include obtaining, by the processor, an exterior video image based on the position of the user. The method can further include combining the interior video image and the exterior video image to form a combined single view for the user. The method can additionally include providing the combined single view to a display of the user.
[0006] In certain embodiments, an apparatus can include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to obtain an interior video image based on a position of a user. The at least one memory and the computer program code can also be configured to, with the at least one processor, cause the apparatus at least to obtain an exterior video image based on the position of the user. The at least one memory and the computer program code can further be configured to, with the at least one processor, cause the apparatus at least to combine the interior video image and the exterior video image to form a combined single view for the user. The at least one memory and the computer program code can additionally be configured to, with the at least one processor, cause the apparatus at least to provide the combined single view to a display of the user.
[0007] An apparatus, in certain embodiments, can include means for obtaining, by a processor, an interior video image based on a position of a user. The apparatus can also include means for obtaining, by the processor, an exterior video image based on the position of the user. The apparatus can further include means for combining the interior video image and the exterior video image to form a combined single view for the user. The apparatus can additionally include means for providing the combined single view to a display of the user.
[0008] A system, according to certain embodiments, can include a first camera configured to provide a near focus view of surroundings of a user. The system can also include a second camera configured to provide a distance focus view of the surroundings of the user. The system can further include a processor configured to provide a combined view of the surroundings based on the near focus view and the distance focus view. The system can additionally include a display configured to display the combined view to the user.
BRIEF DESCRIPTION OF THE DRAWINGS:
[0009] For proper understanding of the invention, reference should be made to the accompanying drawings, wherein:
[0010] Figure 1 illustrates markers according to certain embodiments of the present invention.
[0011] Figure 2 illustrates a mapping of mask areas according to certain embodiments of the present invention.
[0012] Figure 3 illustrates display glasses according to certain embodiments of the present invention.
[0013] Figure 4 illustrates a synthetic image mapped to a window, according to certain embodiments of the present invention.
[0014] Figure 5 illustrates a camera image mapped to a window, according to certain embodiments of the present invention.
[0015] Figure 6 illustrates a system according to certain embodiments of the present invention.
[0016] Figure 7 illustrates a method according to certain embodiments of the present invention.
[0017] Figure 8 illustrates a further system according to certain embodiments of the present invention.
DETAILED DESCRIPTION:
[0018] Certain embodiments of the present invention provide mechanisms, systems, and methods for vehicle operators who encounter limited visibility due to obscuration to maintain reference to the outside environment and also vehicle instruments / interior. This obscuration may be from, for example, clouds, smoke, fog, night, snow, or the like.
[0019] Certain embodiments may display a synthetic image in the windscreen area, not just on the instrument panel. This synthetic image may appear larger to the pilot than traditional synthetic images. Moreover, the pilot may be able to avoid or limit cross-checking between the instrument panels and the windscreen.
[0020] The synthetic image can be in full color and can contain all major features. Moreover, the instrument panel and the interior can still be visible. Furthermore, collimating optics can be avoided. All imagery can be presented at the same focal distance for the user.
[0021] Certain embodiments may align the synthetic image to the cockpit environment. Edge and/or object detection can be used to automatically update image alignment.
[0022] Certain embodiments can be applied to flying vehicles, such as airplanes. Nevertheless, other embodiments may be applied to other categories of vehicles, such as boats, amphibious vehicles, such as hovercraft, wheeled vehicles, such as cars and trucks, or treaded vehicles, such as snowmobiles.
[0023] Certain embodiments of the present invention can provide devices and methods for combining a real time synthetic image of the outside environment with real time video imagery. As will be described below, some of the components of a system can include a system processor, markers, and display glasses.
[0024] Figure 1 illustrates markers according to certain embodiments of the present invention. As shown in Figure 1, markers can be installed at fixed locations within a cockpit. These markers can be selected to be any recognizable form of marker, such as a marker having a particular predefined geometry, color, pattern, or reflectivity. As shown, a plurality of markers can be placed at predetermined locations throughout the cockpit. The example of a cockpit is used, but other locations such as the bridge of a ship or yacht or the driver's seat area of a car can be similarly equipped. The markers can be located throughout a visual domain of the vehicle operator (for example, pilot). Thus, the position of markers can be distributed such that at least one marker will typically be visible within the field of vision of the operator during vehicle operation.
[0025] Figure 2 illustrates a mapping of mask areas according to certain embodiments of the present invention. As shown in Figure 2, the mask areas can correspond to the windscreen and other windows within the cockpit area.
[0026] The display glasses contain built-in video camera(s), an infra-red emitter, and 3-axis angular rate gyros. Typical applications are for vehicles such as aircraft or cars.
[0027] Figure 3 illustrates display glasses according to certain embodiments of the present invention. As shown in Figure 3, video camera(s) can be mounted on the display glasses facing forward and can provide focused imagery for both near (interior) and distance (exterior) processing.
[0028] The display glasses can also include an infrared (IR) emitter. The IR emitter can be used to illuminate the markers, which may be designed to reflect IR light particularly well. The display glasses can also include rate gyros or other movement sensing devices, such as microelectromechanical sensors (MEMS) or the like.
[0029] Figure 4 illustrates a synthetic image mapped to a window, according to certain embodiments of the present invention. As shown in Figure 4, the synthetic image can be mapped only to the mask areas, such as those shown in Figure 2. Although a single image is shown, optionally a stereoscopic image can be presented, such that each eye sees a slightly different image.
[0030] Figure 5 illustrates a camera image mapped to a window, according to certain embodiments of the present invention. As shown in Figure 5, the camera image can be mapped only to the mask areas, such as those shown in Figure 2. Although a single image is shown, optionally a stereoscopic image can be presented, such that each eye sees a slightly different image.
[0031] Figure 6 illustrates a system according to certain embodiments of the present invention. As shown in Figure 6, the system can include a near focus camera and a distance focus camera. Although only one of each camera is shown, a plurality of cameras can be provided, for example to provide a stereoscopic image or a telephoto option.
[0032] The distance focus camera can provide exterior video to an exterior image masking section. The exterior image masking section can be implemented in a processor, such as a graphics processor. The exterior video can refer to video corresponding to the exterior of the vehicle, such as the environment of the airplane.
[0033] The near focus camera can provide interior video to an interior image masking section. The interior image masking section can be implemented in a processor, such as a graphics processor. This may be the same processor as for the exterior video masking section, or it may be a different processor. In certain cases, the system may include a multicore processor, and the interior image masking section and exterior image masking section can be implemented in different threads on different cores of the multicore processor.
[0034] The interior video can refer to video corresponding to the interior of the vehicle, such as the cockpit of the airplane. The interior video can also be provided to a marker detection and location section. Although not shown, the exterior video can optionally also be provided to this same marker detection and location section. If the focus of the exterior video is set to be longer than the interior walls of the cockpit, the exterior video may not be as useful for marker detection and location, as the markers may be out of focus. The marker detection and location section can be implemented in the same or different processor(s) as those discussed above. Optionally each processing section of this system may be implemented in one or many processors, and in one or many threads on such processors. For ease of reading, each referenced "section" herein can be similarly embodied alone or in combination with any of the other identified sections, even when such is not explicitly stated in the following discussion.
[0035] Three-axis angular rate gyros or similar accelerometers, such as MEMS devices, can provide rate data to an integrated angular displacement section. The integrated angular displacement section can also receive time data, from a clock source. The clock source may be a local clock source, a radio clock source, or any other clock source, such as clock data from a global positioning system (GPS) source.
[0036] GPS and air data can be provided as inputs to a vehicle geo-reference data section. The vehicle geo-reference data section can provide detailed information about the aircraft position and orientation, including such information as latitude, longitude, altitude, pitch, roll, and heading. The information can include the current values of these, as well as rate or acceleration information regarding each of these.
[0037] The information from the vehicle geo-reference data section can be provided to an exterior synthetic image generator section. The exterior synthetic image generator section can also receive data from a synthetic image database. The synthetic image database may be local or remote. Optionally, a local synthetic image database can store data regarding the immediate vicinity of the aircraft or other vehicle. For example, all the synthetic image data for one hour or one fuel tank of range may be stored locally, while additional synthetic image data can be remotely stored and retrievable by the aircraft.
[0038] A vehicle map database can provide interior mask data to a frame interior mask transformation section. The vehicle map database can also provide exterior mask data to a frame exterior mask transformation section. The vehicle map database can additionally provide marker locations to the marker detection and location section and to a user direction of view section.
[0039] The vehicle map database and the synthetic image database can each or both be implemented using one or more memories. The memory may be any form of computer storage device, including optical storage such as CD-ROM or DVD storage, magnetic storage, such as tape drive or floppy disk storage, or solid state storage, such as flash random access memory (RAM) or solid state drives (SSDs). Any non-transitory computer-readable medium may be used to store the databases. The same or any other non-transitory computer-readable medium may be used to store computer instructions, such as computer commands, to implement the various computing sections described herein. The database storage can be separate from or integrated with the computer command storage. Memory safety techniques, such as redundant array of inexpensive disks (RAID), can be employed. Backup of the memory can be performed locally or in a cloud system. Although not shown, the memory of the system can be in communication with a flight recorder and can provide details of the operational state(s) of the system to the flight recorder.
[0040] The marker detection and location section can provide information based on the near focus camera and marker locations to the user direction of view section. The user direction of view section can also receive integrated angular displacement data from the integrated angular displacement section. The user direction of view section can, in turn, provide information regarding the current direction a user is viewing to the frame interior mask transformation section, the frame exterior mask transformation section, and the exterior synthetic image generator.
[0041] The frame interior mask transformation section can provide interior mask transformation data based on the interior mask data and the user direction of view data. The interior mask transformation data can be provided to an interior image masking section. The interior image masking section can also receive the interior video from the near focus camera. The interior image masking section can provide interior image masking data to an interior exterior image combiner section.
[0042] The exterior synthetic image generator section can, based on data from the vehicle geo-reference data section, the synthetic image database, and the user direction of view section, provide an exterior synthetic image to the synthetic image masking section.
[0043] The synthetic image masking section can, based on the exterior synthetic image and the frame exterior mask transformation, create masked synthetic image data and provide such data to an exterior image mixing section.
[0044] The exterior image masking section can receive the frame exterior mask transformation data and the exterior video and can create a masked exterior image. The masked exterior image can be provided to the exterior image mixing section as well as to an edge / object detection section. The edge / object detection section can provide output to an automatic transparency section, which can, in turn, provide transparency information to the exterior image mixing section. An overlay symbology generator section can provide overlay symbology to the exterior image mixing section.
[0045] Based on its many inputs, the exterior image mixing section can provide an exterior image to the interior exterior image combiner section. The interior exterior image combiner section can combine the interior and exterior images and can provide them to display glasses.
[0046] Thus, as can be seen from Figure 6 and the above discussion, a system processor in certain embodiments can include vehicle geo-reference data, a synthetic imagery database, a synthetic image generator, and components for manipulating and displaying video/image data.
[0047] Markers can be located within the user's normal field-of-view inside the vehicle's interior. The markers may be natural features, such as support columns, or intentionally placed fiducials. These features can be provided in fixed positions relative to the visual obstacles of the interior. Figure 1 provides an illustration of some example markers.
[0048] The processor can locate the markers in the video image and can use this information to determine the user's direction-of-view relative to the vehicle structure. The user's direction-of-view may change due to head movement, seat change, and the like.
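As a concrete illustration of this step, the sketch below shows one way a marker-based direction-of-view could be recovered with standard computer-vision tools. It is a minimal sketch under stated assumptions, not the patent's implementation: the marker coordinates (VEHICLE_MARKERS_3D), camera intrinsics (K), blob-size threshold, and the use of OpenCV's solvePnP as the pose solver are all illustrative choices, and associating each detected blob with its database marker is a separate correspondence step assumed solved here.

```python
# Sketch: estimate the user's direction-of-view from markers detected in the
# near-focus camera image. Assumes an 8-bit grayscale IR frame in which the
# reflective markers appear as bright blobs.
import cv2
import numpy as np

VEHICLE_MARKERS_3D = np.array([   # marker positions in the vehicle frame (m)
    [0.40, 0.10, 1.20],
    [-0.35, 0.15, 1.25],
    [0.00, 0.45, 1.10],
    [0.20, -0.30, 1.30],
], dtype=np.float32)

K = np.array([[800.0, 0.0, 640.0],   # assumed pinhole camera intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def detect_marker_centroids(ir_frame):
    """Threshold bright IR-reflective markers and return blob centroids."""
    _, mask = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    pts = [centroids[i] for i in range(1, n)          # label 0 is background
           if stats[i, cv2.CC_STAT_AREA] > 5]
    return np.array(pts, dtype=np.float32)

def direction_of_view(image_points):
    """Estimate head pose relative to the vehicle from 2D-3D matches."""
    ok, rvec, tvec = cv2.solvePnP(VEHICLE_MARKERS_3D, image_points, K, None)
    if not ok:
        return None                      # fall back to rate-gyro tracking
    rotation, _ = cv2.Rodrigues(rvec)    # 3x3 rotation: direction-of-view
    return rotation, tvec
```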
[0049] During installation, exterior mask(s) and interior mask(s) can be determined relative to the vehicle structure, by the use of fixed markers. Typically, the exterior mask(s) can be the windscreen and windows; however, the exterior mask(s) can be arbitrarily defined, if desired. Figure 2 provides an example of an exterior mask. The interior mask(s) can be the inverse of the exterior mask(s).
[0050] Thus, the interior mask(s) may typically be everything except the window areas. The interior mask(s) can also be arbitrarily defined. Typically, the interior mask(s) may include the instrument panel, the controls and the remainder of the vehicle interior. The exterior mask(s), interior mask(s) and marker locations can be stored in the vehicle map database.
[0051] Enhanced imagery can be selectively displayed only in the exterior mask(s) and can be aligned to the user's direction-of-view. The level of image enhancement may vary from real time video, as illustrated in Figure 5, to fully synthetic imagery as illustrated in Figure 4, or any combination thereof. Additional information, such as vehicle parameters, obstacles, and traffic, may also be included as an overlay in the enhanced imagery. The level of enhancement can be automatic or user selected.
[0052] Real time video imagery may always be displayed in the interior mask(s) and may be aligned to the user's direction-of-view.
[0053] The processor can maintain orientation and alignment of the mask(s) relative to the vehicle structure by locating the fixed marker(s) in the camera(s) image frame. As the user's head moves, the mask(s) can move in the opposite direction.
[0054] The user's direction-of-view, geo-reference data and synthetic image database can be used to generate the real time synthetic imagery.
[0055] The geo-reference data for the vehicle can include any of the following: latitude, longitude, attitude (pitch, roll), heading (yaw), and altitude. Such data can be provided by, for example, GPS, attitude gyros, and air data sensors.
[0056] Long-term orientation of the user's direction-of-view can be based on locating the markers within the vehicle. This can be accomplished by numerous methods, such as reflection of IR emitter signal or object detection via image analysis. Short term stabilization of the direction-of-view can be provided by the 3-axis rate gyro (or similar) data.
[0057] Integration of the rate gyro data can provide total angular displacement. This can be useful for characterizing the marker location(s) during installation. Once known, the movement of the marker(s) can be correlated to the user's actual direction-of-view.
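A minimal sketch of that integration, assuming uniformly sampled 3-axis rates and simple rectangular integration (inter-axis coupling ignored):

```python
import numpy as np

def integrate_rates(rates, dt):
    """Accumulate 3-axis rate-gyro samples (rad/s) over time step dt (s)
    into total angular displacement about each axis; rates is (N, 3)."""
    return np.cumsum(np.asarray(rates) * dt, axis=0)

# Example: 100 samples at 100 Hz of a slow yaw produce ~0.05 rad total.
samples = np.tile([0.0, 0.0, 0.05], (100, 1))
total_displacement = integrate_rates(samples, dt=0.01)[-1]
```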
[0058] Data for marker characterization can be collected by wearing the display glasses and scanning the entire allowable range of direction-of-view from the operator's station. For example, the display glasses can be slewed fully left, right, up, and down. The result can be a spherical or semi-spherical panoramic image.
[0059] Once the markers have been characterized, the exterior mask(s) and interior mask(s) can be determined. These mask(s) can be arbitrary and can be defined by several methods. For example, software tools can be used to edit the panoramic image. Another option is to use chroma key, by applying green fabric to the windows or other areas and automatically detecting the green areas as mask areas. A further option is to detect and filter bright areas when the vehicle is in bright daylight.

[0060] Frame mask transformation can be variously accomplished. A transformation vector can be computed as the vector that will best move the marker(s) in the vehicle map database to the detected marker location(s) based on the user's direction of view. The frame exterior mask(s) and frame interior mask(s) can be computed using the transformation vector, exterior mask(s), and interior mask(s). The frame exterior mask(s) can be used to crop the exterior video and synthetic image. The frame interior mask(s) can be used to crop interior video. The vehicle exterior mask(s) and interior mask(s) do not need to be altered. The system can dither the boundary between the exterior and interior masks, such that the boundary may not be pronounced or distracting.
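An illustrative sketch of the chroma-key mask definition and the per-frame mask shift, assuming OpenCV-style routines; the HSV bounds, image sizes, and pure-translation model are assumptions, and a full implementation might fit a homography instead:

```python
import numpy as np
import cv2

# Mask definition via chroma key (installation time). Stand-in panorama;
# in practice this is the stitched scan with green fabric on the windows.
panorama_bgr = np.zeros((480, 640, 3), np.uint8)
panorama_bgr[60:260, 80:560] = (40, 200, 40)  # green fabric region (BGR)

hsv = cv2.cvtColor(panorama_bgr, cv2.COLOR_BGR2HSV)
exterior_mask = cv2.inRange(hsv, (40, 60, 60), (85, 255, 255))  # green areas
interior_mask = cv2.bitwise_not(exterior_mask)

# Per-frame mask transformation (run time).
def frame_masks(exterior_mask, interior_mask, dx, dy):
    """Shift the stored masks by the transformation vector (dx, dy) that
    best moves the database marker location(s) onto the detected one(s);
    the Gaussian blur feathers (dithers) the mask boundary."""
    h, w = exterior_mask.shape
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    frame_ext = cv2.warpAffine(exterior_mask, shift, (w, h))
    frame_int = cv2.warpAffine(interior_mask, shift, (w, h))
    frame_ext = cv2.GaussianBlur(frame_ext, (15, 15), 0)
    frame_int = cv2.GaussianBlur(frame_int, (15, 15), 0)
    return frame_ext, frame_int

frame_ext, frame_int = frame_masks(exterior_mask, interior_mask, dx=-12, dy=4)
```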
[0061] Variable transparency can permit the generation of an enhanced image by mixing or combining exterior masked video and synthetic masked video. The transparency ratio, which can be an analog value, can be determined by the user or by an automatic algorithm. The automatic algorithm can process the masked exterior video data for edge detection. Higher definition of edges can cause the exterior masked video to become dominant. Conversely, lower edge detection can result in synthetic masked video becoming dominant.
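One possible automatic algorithm, sketched with Canny edge density as the measure of edge definition; the thresholds and the gain that maps edge density to the analog transparency ratio are illustrative:

```python
import numpy as np
import cv2

def auto_transparency(masked_exterior_video, masked_synthetic, mask):
    """Blend real and synthetic exterior imagery; many detected edges make
    the live video dominant, few edges make the synthetic image dominant."""
    gray = cv2.cvtColor(masked_exterior_video, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    area = max(cv2.countNonZero(mask), 1)
    edge_density = cv2.countNonZero(edges) / area
    alpha = float(np.clip(edge_density * 20.0, 0.0, 1.0))  # gain of 20 is illustrative
    return cv2.addWeighted(masked_exterior_video, alpha,
                           masked_synthetic, 1.0 - alpha, 0.0)
```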
[0062] The interior mask(s) can be the inverse of the exterior mask(s), as mentioned above. Therefore, the frame interior masked image can be combined with an enhanced image using a simple maximum value operation for each pixel. This can provide the user with imagery (real and enhanced) that is coherent with both the vehicle interior and the outside environment.
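Since each masked image is black wherever the other has content, the recombination can be sketched as a single per-pixel maximum:

```python
import numpy as np

# Placeholders standing in for the frame-interior-masked video and the
# enhanced exterior image from the sketches above (same shape and dtype).
interior_masked = np.zeros((480, 640, 3), np.uint8)
enhanced_exterior = np.zeros((480, 640, 3), np.uint8)

combined = np.maximum(interior_masked, enhanced_exterior)  # coherent single view
```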
[0063] The alignment of the synthetic image to the outside environment can be accomplished via edge / object detection of visible features. This can happen on a continuous basis without user input.
[0064] The position of the sun relative to the direction of view may be known. Therefore, the sun may be tracked within the image and reduced in intensity, which may reduce and/or eliminate sun glare.
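One way such glare reduction might be sketched, assuming the sun's pixel location has already been predicted from ephemeris data and the user's direction-of-view; the radius and gain values are illustrative:

```python
import numpy as np
import cv2

def attenuate_sun(frame_bgr, sun_xy, radius=60, gain=0.35):
    """Smoothly reduce intensity around the predicted sun position
    sun_xy (integer pixel coordinates)."""
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.float32)
    cv2.circle(mask, sun_xy, radius, 1.0, thickness=-1)
    mask = cv2.GaussianBlur(mask, (0, 0), radius / 2)
    attenuation = 1.0 - (1.0 - gain) * mask[..., None]
    return (frame_bgr.astype(np.float32) * attenuation).astype(np.uint8)

frame = np.full((480, 640, 3), 180, np.uint8)  # placeholder frame
dimmed = attenuate_sun(frame, sun_xy=(320, 120))
```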
[0065] Figure 7 illustrates a method according to certain embodiments of the present invention. As shown in Figure 7, a method can include, at 710, obtaining, by a processor, an interior video image based on a position of a user. The interior video image can be a live camera feed, for example a live video image of the interior of a cockpit as in the previous examples.
[0066] The method can also include, at 720, obtaining, by the processor, an exterior video image based on the position of the user. The obtaining the exterior video image can include, at 724, selecting from a live camera feed, a synthetic image, or a combination of the live camera feed and the synthetic image. The method can include, at 726, selecting a transparency for the combination of the live camera feed and the synthetic image. The method can also include, at 722, generating the synthetic image based on the position of the user. As described above, an alignment of the synthetic image can be determined based on at least one of edge detection or image detection from the interior video image. Edge detection and/or object detection can also be used to help decide whether to select the synthetic image, the live video image, or some combination thereof.
[0067] The method can further include, at 730, combining the interior video image and the exterior video image to form a combined single view for the user. The combined single view can be a live video image of a cockpit including the instrument panel view and window view, as described above. The method can additionally include, at 740, providing the combined single view to a display of the user. The display can be glasses worn by the pilot of an aircraft. The display can be further configured to superimpose additional information similar to the way information is provided on a heads-up display.
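Pulling steps 710 through 740 together, a compact sketch under the same assumptions as the fragments above (a fixed 50% exterior transparency is used for brevity; all frames are placeholders):

```python
import numpy as np
import cv2

def render_combined_view(interior_frame, exterior_frame, synthetic_frame,
                         exterior_mask, interior_mask):
    masked_int = cv2.bitwise_and(interior_frame, interior_frame,
                                 mask=interior_mask)                   # 710
    masked_ext = cv2.bitwise_and(exterior_frame, exterior_frame,
                                 mask=exterior_mask)                   # 720/724
    masked_syn = cv2.bitwise_and(synthetic_frame, synthetic_frame,
                                 mask=exterior_mask)                   # 722
    enhanced = cv2.addWeighted(masked_ext, 0.5, masked_syn, 0.5, 0.0)  # 726
    return np.maximum(masked_int, enhanced)                            # 730

h, w = 480, 640
ext_mask = np.zeros((h, w), np.uint8)
ext_mask[:240, :] = 255  # illustrative window region
view = render_combined_view(np.zeros((h, w, 3), np.uint8),
                            np.zeros((h, w, 3), np.uint8),
                            np.zeros((h, w, 3), np.uint8),
                            ext_mask, cv2.bitwise_not(ext_mask))
# 740: 'view' would then be provided to the display glasses.
```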
[0068] Figure 8 illustrates an exemplary system, according to certain embodiments of the present invention. It should be understood that each block of the exemplary method of Figure 7 may be implemented by various means or their combinations, such as hardware, software, firmware, one or more processors and/or circuitry. In one embodiment of the present invention, a system may include several devices, such as, for example, device 810 and display device 820. The system may include more than one display device 820 and more than one device 810, although only one of each is shown for the purposes of illustration. The device 810 may be any suitable piece of avionics hardware, such as a line replaceable unit of an avionics system. The display device 820 may be any desired display device, such as display glasses, which may provide a single image or a pair of coordinated stereoscopic images.
[0069] The device 810 may include at least one processor or control unit or module, indicated as 814. At least one memory may be provided in the device 810, indicated as 815. The memory 815 may include computer program instructions or computer code contained therein, for example, for carrying out the embodiments of the present invention, as described above. One or more transceivers 816 may be provided, and the device 810 may also include an antenna, illustrated as 817. Although only one antenna is shown, many antennas and multiple antenna elements may be provided for the device 810. Other configurations of the device 810 may be provided. For example, device 810 may be configured for wired communication (as shown to connect to display device 820), in addition to or instead of wireless communication, and in such a case, antenna 817 may illustrate any form of communication hardware, without being limited to merely an antenna.
[0070] Transceiver 816 may be a transmitter, a receiver, or both a transmitter and a receiver, or a unit or a device that may be configured both for transmission and reception.
[0071] Processor 814 may be embodied by any computational or data processing device, such as a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), a digitally enhanced circuit, or a comparable device or a combination thereof. The processor 814 may be implemented as a single controller, or a plurality of controllers or processors. Additionally, the processor 814 may be implemented as a pool of processors in a local configuration, in a cloud configuration, or in a combination thereof. The term "circuitry" may refer to one or more electric or electronic circuits. The term "processor" may refer to circuitry, such as logic circuitry, that responds to and processes instructions that drive a computer.
[0072] For firmware or software, the implementation may include modules or units of at least one chip set (e.g., procedures, functions, and so on). Memory 815 may be any suitable storage device, such as a non-transitory computer-readable medium. A hard disk drive (HDD), random access memory (RAM), flash memory, or other suitable memory may be used. The memory 815 may be combined on a single integrated circuit with the processor, or may be separate therefrom. Furthermore, the computer program instructions which may be stored in the memory 815 and processed by the processor 814 can be any suitable form of computer program code, for example, a compiled or interpreted computer program written in any suitable programming language. The memory 815 or data storage entity is typically internal but may also be external or a combination thereof, such as in the case when additional memory capacity is obtained from a service provider. The memory may be fixed or removable.
[0073] The memory 815 and the computer program instructions may be configured, with the processor 814 for the particular device, to cause a hardware apparatus, such as device 810, to perform any of the processes described above (see, for example, Figures 1 and 2). Therefore, in certain embodiments of the present invention, a non-transitory computer-readable medium may be encoded with computer instructions or one or more computer programs (such as added or updated software routines, applets or macros) that, when executed in hardware, may perform a process, such as one or more of the processes described herein. Computer programs may be coded in any programming language, which may be a high-level programming language, such as Objective-C, C, C++, C#, Java, etc., or a low-level programming language, such as a machine language, or an assembler. Alternatively, certain embodiments of the invention may be performed entirely in hardware.
[0074] Further modifications to the above embodiments are possible. For example, various filters may be applied to both real and synthetic imagery, for example to provide balance or contrast enhancement, to highlight objects of interest, or to suppress visual distractions. In certain embodiments, a left eye view may have a different combination of images than the right eye view. For example, the right eye view may be purely live video images, whereas the left eye view may have a synthetic exterior video image. Alternatively, one eye view may simply pass through the glasses transparently.
[0075] One having ordinary skill in the art will readily understand that the invention as discussed above may be practiced with steps in a different order, and/or with hardware elements in configurations which are different than those which are disclosed. Therefore, although the invention has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent, while remaining within the spirit and scope of the invention.

Claims

WE CLAIM:
1. A method, comprising:
obtaining, by a processor, an interior video image based on a position of a user;
obtaining, by the processor, an exterior video image based on the position of the user;
combining the interior video image and the exterior video image to form a combined single view for the user; and
providing the combined single view to a display of the user.
2. The method of claim 1, wherein the interior video image comprises a live camera feed.
3. The method of claim 1, wherein the obtaining the exterior video image comprises selecting from a live camera feed, a synthetic image, or a combination of the live camera feed and the synthetic image.
4. The method of claim 3, further comprising:
selecting a transparency for the combination of the live camera feed and the synthetic image.
5. The method of claim 3, further comprising:
generating the synthetic image based on the position of the user.
6. The method of claim 5, wherein an alignment of the synthetic image is determined based on at least one of edge detection or image detection from the interior video image.
7. The method of claim 1, wherein the combined single view comprises a live video image of a cockpit including the instrument panel view and window view.
8. An apparatus, comprising:
at least one processor; and
at least one memory including computer program code,
wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to obtain an interior video image based on a position of a user;
obtain an exterior video image based on the position of the user;
combine the interior video image and the exterior video image to form a combined single view for the user; and
provide the combined single view to a display of the user.
9. The apparatus of claim 8, wherein the interior video image comprises a live camera feed.
10. The apparatus of claim 8, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to obtain the exterior video image by selecting from a live camera feed, a synthetic image, or a combination of the live camera feed and the synthetic image.
11. The apparatus of claim 10, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to select a transparency for the combination of the live camera feed and the synthetic image.
12. The apparatus of claim 10, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to generate the synthetic image based on the position of the user.
13. The apparatus of claim 12, wherein an alignment of the synthetic image is determined based on at least one of edge detection or image detection from the interior video image.
14. The apparatus of claim 8, wherein the combined single view comprises a live video image of a cockpit including the instrument panel view and window view.
15. A system, comprising:
a first camera configured to provide a near focus view of surroundings of a user;
a second camera configured to provide a distance focus view of the surroundings of the user;
a processor configured to provide a combined view of the surroundings based on the near focus view and the distance focus view; and
a display configured to display the combined view to the user.
16. The system of claim 15, wherein the near focus view comprises a live camera feed.
17. The system of claim 15, wherein providing the combined view comprises selecting from a live camera feed, a synthetic image, or a combination of the live camera feed and the synthetic image.
18. The system of claim 17, wherein the processor is configured to select a transparency for the combination of the live camera feed and the synthetic image.
19. The system of claim 17, wherein the processor is configured to generate the synthetic image based on the position of the user.
20. The system of claim 17, wherein the processor is configured to align the synthetic image based on at least one of edge detection or image detection from the near focus view.
PCT/US2018/054187 2017-10-04 2018-10-03 Combining synthetic imagery with real imagery for vehicular operations WO2019070869A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201880065422.3A CN111183639A (en) 2017-10-04 2018-10-03 Combining the composite image with the real image for vehicle operation
CA3077430A CA3077430A1 (en) 2017-10-04 2018-10-03 Combining synthetic imagery with real imagery for vehicular operations
EP18796168.5A EP3692714A1 (en) 2017-10-04 2018-10-03 Combining synthetic imagery with real imagery for vehicular operations
AU2018345666A AU2018345666A1 (en) 2017-10-04 2018-10-03 Combining synthetic imagery with real imagery for vehicular operations
JP2020519324A JP2020537390A (en) 2017-10-04 2018-10-03 Combining composite and real images for vehicle manipulation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/724,667 US20190102923A1 (en) 2017-10-04 2017-10-04 Combining synthetic imagery with real imagery for vehicular operations
US15/724,667 2017-10-04

Publications (1)

Publication Number Publication Date
WO2019070869A1 2019-04-11

Family

ID=64051674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/054187 WO2019070869A1 (en) 2017-10-04 2018-10-03 Combining synthetic imagery with real imagery for vehicular operations

Country Status (7)

Country Link
US (1) US20190102923A1 (en)
EP (1) EP3692714A1 (en)
JP (1) JP2020537390A (en)
CN (1) CN111183639A (en)
AU (1) AU2018345666A1 (en)
CA (1) CA3077430A1 (en)
WO (1) WO2019070869A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021109082A1 (en) * 2021-04-12 2022-10-13 Bayerische Motoren Werke Aktiengesellschaft Method and device for determining a pose in data glasses


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0321083D0 (en) * 2003-09-09 2003-10-08 British Telecomm Video communications method and system
JP6091866B2 (en) * 2012-11-30 2017-03-08 株式会社キーエンス Measurement microscope apparatus, image generation method, measurement microscope apparatus operation program, and computer-readable recording medium
US20150151838A1 (en) * 2013-12-03 2015-06-04 Federal Express Corporation System and method for enhancing vision inside an aircraft cockpit
CN105139451B (en) * 2015-08-10 2018-06-26 中国商用飞机有限责任公司北京民用飞机技术研究中心 A kind of Synthetic vision based on HUD guides display system
WO2017145645A1 (en) * 2016-02-25 2017-08-31 富士フイルム株式会社 Driving assistance apparatus, driving assistance method, and driving assistance program
US20170291716A1 (en) * 2016-04-07 2017-10-12 Gulfstream Aerospace Corporation Cockpit augmented vision system for aircraft
JP6877115B2 (en) * 2016-09-27 2021-05-26 株式会社東海理化電機製作所 Vehicle visibility device
CN110419063A (en) * 2017-03-17 2019-11-05 麦克赛尔株式会社 AR display device and AR display methods

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151060A (en) * 1995-12-14 2000-11-21 Olympus Optical Co., Ltd. Stereoscopic video display apparatus which fuses real space image at finite distance
GB2532464A (en) * 2014-11-19 2016-05-25 Bae Systems Plc Apparatus and method for selectively displaying an operational environment

Also Published As

Publication number Publication date
US20190102923A1 (en) 2019-04-04
JP2020537390A (en) 2020-12-17
EP3692714A1 (en) 2020-08-12
CN111183639A (en) 2020-05-19
AU2018345666A1 (en) 2020-04-23
CA3077430A1 (en) 2019-04-11


Legal Events

Code Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18796168; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 3077430; Country of ref document: CA)
ENP Entry into the national phase (Ref document number: 2020519324; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2018345666; Country of ref document: AU; Date of ref document: 20181003; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2018796168; Country of ref document: EP; Effective date: 20200504)