WO2023247606A1 - Method and system to provide an image to be displayed by an output device - Google Patents

Publication number: WO2023247606A1
Application number: PCT/EP2023/066762
Authority: WIPO (PCT)
Other languages: French (fr)
Inventor: Pierre ESCRIEUT
Original Assignee: Valeo Comfort And Driving Assistance

Classifications

    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N21/2187 Live feed
    • H04N21/234345 Reformatting operations of video signals performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N21/4728 End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection
    • H04N23/62 Control of parameters via user interfaces

Definitions

  • the invention relates to a method to provide an image to be displayed by an output device.
  • the invention furthermore relates to a system to carry out such a method, a camera to provide an image to be displayed by an output device and an output device to display an image.
  • the invention relates to a computer program product to carry out such a method.
  • a 360-degree image captured by a camera is provided for display on a screen of an output device.
  • the output device may be a smartphone, a tablet, a computer or any other mobile or stationary device comprising a screen.
  • the 360-degree image is typically sent to the output device in a compressed form.
  • this typically results in reduction of image quality of the image displayed by the output device compared to the image quality of the original image captured by the camera.
  • US 2019/0149731 A1 discloses a method and system for live sharing 360-degree video streams on a mobile device by tethering the mobile device to a 360-degree camera source to host live video streams from various venues.
  • a sharing platform is configured to ingest live video stream from one or more host devices coupled to the 360-degree camera.
  • One or more image or video processing techniques are applied and the processed image or video is transmitted to one or more viewing devices via the network.
  • the 360-degree image is transmitted via the network.
  • US 2021/0243418 A1 discloses a method comprising receiving metadata indicating which portion of a 360-degree video corresponds to a first viewport of the 360-degree video displayed on a first display. An orientation of a second display relative to the first display is tracked, and a second viewport of the immersive video, synchronized in time and orientation relative to the immersive video displayed on the first display, is determined based on the metadata and the orientation of the second display relative to the first display. The second viewport is then displayed on the second display.
  • the object of the invention is to increase quality of an image displayed on an output device while live streaming image data by a camera.
  • a first aspect of the invention relates to a method to provide an image to be displayed by an output device.
  • the output device comprises at least one screen, on which the image can be displayed.
  • the image is described by an image information.
  • the image information may alternatively be referred to as image data or may at least comprise image data.
  • the output device may be a smartphone, a tablet and/or a computer, such as a personal computer or a laptop.
  • the output device may be any mobile or stationary device comprising the at least one screen.
  • the method comprises providing a raw image information by a camera.
  • the raw image information describes a 360-degree image.
  • the raw image information can be referred to as raw image data or may be described by raw image data.
  • the camera used is a 360-degree camera.
  • the camera captures the raw image information.
  • the camera comprises a communication interface configured to send or receive information, particularly to send the raw image information to another device, such as the output device.
  • the camera may be positioned in a room or in another environment. It is configured to, for example, constantly provide the raw image information of the room or the other environment.
  • the raw image information provided by the camera can be provided in real time, meaning that the camera can live stream the raw image information.
  • alternatively, the provided raw image information may be stored information, meaning that it was captured by the camera at a previous point in time and has since been stored in a memory unit of a control unit or in another storage device.
  • the method furthermore comprises providing a point of view information by the output device.
  • the point of view information is determined by the output device.
  • the point of view information describes a point of view out of which a cutout image of the 360-degree image is taken.
  • the cutout image is intended for display on the screen of the output device.
  • the point of view information can alternatively be referred to as point of view data or may be described by point of view data.
  • the point of view information defines a perspective from which the 360-degree image is supposed to be viewed when it is displayed on the screen of the output device. Due to choosing a point of view, only a section of the 360-degree image is intended for display on the screen.
  • the section can alternatively be referred to as area or partial area of the 360-degree image.
  • the image displayed is hence not the whole 360-degree image and is therefore referred to as cutout image of the 360-degree image.
  • the point of view information defines, for example, only a position and a direction from which the 360-degree image is supposedly viewed by a user when he or she looks at the screen of the output device.
  • the point of view information can hence, for example, comprise a position information describing a position of the point of view relative to a point of the 360-degree image.
  • a further step of the method comprises determining pixel information by converting the provided point of view information into corresponding pixel coordinates.
  • the pixel information is determined by the output device.
  • the pixel information can alternatively be referred to as pixel data or can be described by pixel data.
  • the pixel information is determined in a way that it describes a pixel area of the 360-degree image corresponding to the provided point of view.
  • the cutout image intended for display on the screen of the output device is defined as pixel area of the 360-degree image.
  • the pixel information comprises the information which part of the 360-degree image corresponds to the wanted cutout image.
  • the pixel information is given only in the form of pixel coordinates, for example, since at this point the raw image information has not necessarily been shared with the output device.
  • the pixel information is hence preferably independent from the raw image information and only depends on the point of view information.
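How the point of view information could be converted into pixel coordinates depends on the projection of the 360-degree image, which the description leaves open. The following is a minimal sketch assuming an equirectangular projection; the function name and angle conventions are illustrative, not taken from the patent:

```python
import math

def pov_to_pixel(yaw_rad, pitch_rad, image_width, image_height):
    """Map a viewing direction (yaw, pitch, in radians) to pixel
    coordinates in an equirectangular 360-degree image.
    yaw is in [-pi, pi), pitch in [-pi/2, pi/2]."""
    # Horizontal: yaw spans the full 360 degrees of image width.
    x = int((yaw_rad / (2 * math.pi) + 0.5) * image_width) % image_width
    # Vertical: pitch spans the 180 degrees of image height.
    y = int((0.5 - pitch_rad / math.pi) * image_height)
    y = max(0, min(image_height - 1, y))  # clamp to valid rows
    return x, y
```

The resulting coordinates are independent of the raw image content, as the bullet above notes: only the point of view and the image dimensions enter the conversion.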
  • the method further comprises providing the determined pixel information to the camera.
  • the determined pixel information is hence provided for the camera by the output device.
  • the determined pixel information can be sent to the camera via a cable-free and in particular a wireless communication connection between the camera and the output device.
  • the output device, like the camera, thus comprises a communication interface via which it may provide the determined pixel information to the camera.
  • the wireless communication connection may be based on a wireless local area network (WLAN), a Bluetooth connection and/or a mobile data network, for example based on the Long Term Evolution (LTE), Long Term Evolution Advanced (LTE-A), Fifth Generation (5G) or Sixth Generation (6G) mobile radio standard.
  • alternatively, the communication connection may be wired. All information exchange between the camera and the output device preferably takes place via the communication connection.
  • the communication connection may comprise an intermediate interface, for example, a cloud server, a backend or a server.
  • the method comprises determining a cutout image information.
  • the cutout image information is determined by the camera. Determination of the cutout image information comprises reducing the provided raw image information to the pixel area according to the provided pixel information.
  • the cutout image information describes the cutout image.
  • the cutout image information may be referred to as cutout image data or may be described by cutout image data.
  • the camera crops the 360-degree image to the cutout image, which, when displayed on a screen, comprises only a section of the 360-degree image.
  • the section of the 360-degree image comprised by the cutout image is not chosen randomly, but is predetermined by the provided pixel information.
  • the cutout image information describes the section or detail of the 360-degree image that is visible when the user views a scene shown by the 360-degree image from the point of view according to the point of view information.
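The camera-side reduction of the raw image information to the pixel area can be sketched as a simple crop. In this illustrative example the image is represented as a list of pixel rows and the function name is an assumption; the crop wraps horizontally, since a 360-degree image has no left or right edge:

```python
def crop_to_pixel_area(raw_image, left, top, width, height):
    """Reduce the raw 360-degree image (a list of pixel rows) to the
    rectangular pixel area requested by the output device. Horizontal
    coordinates wrap around, since the image spans a full 360 degrees."""
    full_width = len(raw_image[0])
    return [[row[(left + dx) % full_width] for dx in range(width)]
            for row in raw_image[top:top + height]]
```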
  • the method comprises providing the determined cutout image information to the output device.
  • the determined cutout image information is hence provided for the output device. This can be achieved by sending the determined cutout image information from the camera to the output device via the communication connection between the camera and the output device. Compared to the raw image information, the cutout image information is a smaller data package, since the cutout image information does not comprise the whole 360-degree image but only the cutout image.
  • the method furthermore comprises displaying the cutout image according to the provided cutout image information on the screen of the output device. This means that the camera does not share the whole raw image information with the output device but only the cutout image information, which is chosen depending on the point of view information determined by the output device.
  • the described method allows precise requesting of specific parts of the 360-degree image which correspond to the point of view, reducing the data size of the image information that has to be sent to the output device compared to a method that sends the whole 360-degree image. Therefore, live streaming of image information provided by the camera on the output device is possible without reducing the quality of the provided image information. This is achieved by cropping the 360-degree image to the cutout image, which avoids sharing the entire 360-degree image with the output device. The quality of an image displayed on an output device is hence increased while live streaming image data by a camera, compared to methods which provide the entire 360-degree image to the output device.
  • the method comprises determining a position information and/or an angle information of the output device. It further comprises determining the point of view information under consideration of the determined position information and/or angle information. In other words, the point of view information is determined dependent on the determined position information and/or angle information.
  • the position information can alternatively be referred to as position data or may be described by position data.
  • the angle information can alternatively be referred to as angle data or may be described by angle data.
  • the position information describes a position of the point of view.
  • the position may be an absolute position or a relative position of the output device in relation to, for example, a position of the camera.
  • the position can be described by coordinates.
  • the angle information describes a viewing angle under which the 360-degree image is viewed.
  • the angle information hence describes an angle, wherein a starting point of the angle is the position of the point of view.
  • the angle describes an orientation towards an image surface of the 360-degree image, wherein the 360-degree image is viewed according to this orientation by the user.
  • the viewing angle can be defined depending on the position of the output device.
  • the point of view information may comprise an arrangement of a viewer relative to the scene viewed in the 360-degree image.
  • the viewer is the user who views the displayed cutout image on the screen of the output device.
  • Performing the determination of the position and/or angle information can rely on known methods to determine the position and/or orientation of the output device, which are, for example, typically implemented in a smartphone.
  • the position information and/or angle information allow precise determination of the point of view information and thus of the cutout image information.
  • Another embodiment comprises that the position information and/or the angle information are determined depending on an arrangement of the output device to the camera.
  • the arrangement describes position and orientation of the output device relative to the camera.
  • the camera can be positioned in a center of a room capturing hence the surrounding room as scene viewed in the 360-degree image.
  • the user can be present who holds a smartphone as output device in his or her hands.
  • the position information can be, for example, determined as the relative position of the output device to the camera.
  • the orientation of the output device can be described as relative angle information of the output device in relation to the camera. Based on this position information and/or angle information, the output device may calculate the point of view in relation to the scene, meaning in relation to the room. It is possible that the camera provides its own position as camera position information to the output device.
  • the cutout image information changes according to the movement of the output device. It is hence necessary to continually provide the position and/or angle information to continually update the point of view information. As a result, the provided cutout image information may follow the movement of the output device and hence adapts to the current arrangement of the output device in relation to the camera. This results in a comfortable way for the user to choose the cutout image that is intended for display on the screen.
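One way to derive a point of view from the arrangement of the output device relative to the camera is to compute a viewing direction from the two positions. This is a hedged sketch under the assumption of a shared (x, y, z) coordinate frame; the patent does not prescribe this computation:

```python
import math

def point_of_view_from_arrangement(device_pos, camera_pos):
    """Derive a viewing direction (yaw, pitch) from the arrangement of
    the output device relative to the camera, both given as (x, y, z)
    coordinates in a common frame of reference."""
    dx = device_pos[0] - camera_pos[0]
    dy = device_pos[1] - camera_pos[1]
    dz = device_pos[2] - camera_pos[2]
    yaw = math.atan2(dy, dx)                    # horizontal direction
    pitch = math.atan2(dz, math.hypot(dx, dy))  # elevation above the horizon
    return yaw, pitch
```

Re-evaluating this direction whenever the device moves would make the cutout follow the movement of the output device, as described above.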
  • an embodiment comprises receiving the raw image information by the output device. It further comprises displaying the 360-degree image according to the received raw image information on the output device.
  • the image shown on the screen is of low quality compared to the 360-degree image according to the raw image data if the raw image information has been reduced in image quality to decrease the amount of image information to be sent to the output device.
  • the embodiment further comprises providing a manual positioning of the point of view and/or a manual adjusting of the viewing angle by means of an operating element of the output device. The user can hence, for example, virtually choose the point of view within the displayed 360-degree image and thereby decide his or her view on the 360-degree image by means of a manual input.
  • the operating element can be an element displayed on a touchscreen of the output device.
  • the operating element can be a knob, a button, a switch or another control means provided by or for the output device.
  • the method comprises determining the position information and/or the angle information under consideration of the manually provided position and/or viewing angle, respectively. It is hence not required that the output device is placed in proximity to or at least in the same environment as the camera, since a purely virtual choosing of the point of view is possible. This makes it possible to provide previously captured raw image information, which can be stored in the memory unit, as the raw image information. A live stream of image information captured live by the camera is hence not necessary. However, even in a purely virtual environment the user can still set or at least influence the point of view according to the point of view information.
  • determining the pixel information comprises considering a screen size information.
  • the screen size information describes a screen size of the screen of the output device so that a size of the cutout image corresponds to the screen size.
  • the screen size information can be referred to as screen size data or can be described by screen size data. It is in other words considered whether the output device comprises a large screen or a comparatively smaller screen.
  • the screen size information hence differs, for example, between a tablet (large screen) and a smartphone (smaller screen) as output devices. On the large screen it is possible to show a cutout image which comprises a larger part of the 360-degree image compared to a smaller screen, which only provides screen space for a comparably smaller cutout image.
  • the provided cutout image information is hence not only dependent on the provided point of view information but also on the screen size information which depends on the used output device. This results in further user friendliness because the cutout image is determined under consideration of the screen on which it is viewed. Therefore, the cutout image can be adapted to the used output device.
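How the screen size information could enter the determination of the pixel area can be sketched as follows. The horizontal field of view parameter and the equirectangular assumption are illustrative choices, not specified by the description:

```python
def cutout_size_for_screen(screen_w_px, screen_h_px, fov_deg, image_width):
    """Choose the size of the pixel area of an equirectangular
    360-degree image so that the cutout matches the screen's aspect
    ratio, given a desired horizontal field of view in degrees."""
    cutout_w = int(image_width * fov_deg / 360.0)
    # Preserve the screen's aspect ratio for the cutout height.
    cutout_h = int(cutout_w * screen_h_px / screen_w_px)
    return cutout_w, cutout_h
```

A tablet and a smartphone with different resolutions and aspect ratios would thus request differently sized pixel areas from the same 360-degree image.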
  • determining the cutout image information comprises scaling down the cutout image by a factor.
  • the factor depends on the screen size information.
  • the cutout image views just a section of the 360-degree image, wherein the viewed section is further reduced in scale.
  • a specific distance in the 360-degree image, for example a distance between two objects shown in the 360-degree image, is thus smaller in the cutout image compared to the 360-degree image due to the scaling down. This means that the scene viewed in the cutout image can be shrunk in its proportions compared to the respective proportions in the 360-degree image.
  • since the screen size of the screen of the output device is typically small compared to the size of the scene shown in the 360-degree image, it is possible to show a relatively big area of the 360-degree image on the display as cutout image due to the additional downscaling of the scene shown in the cutout image.
  • the factor can be a natural number, for example 2, 3 or more. It is alternatively or additionally possible to choose a non-integer factor, for example 2.1 or 2.15.
  • Downsizing the cutout image can alternatively be referred to as shrinking the section of the 360-degree image that is viewed by the cutout image. Due to the scaling down, the method is even more comfortable for the user due to the increased adaptation to the used output device.
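For an integer factor, the scaling down can be sketched as nearest-neighbour subsampling; non-integer factors such as 2.1 would require interpolation, which is omitted here for brevity. The function name and list-of-rows image representation are assumptions:

```python
def scale_down(image, factor):
    """Scale the cutout image (a list of pixel rows) down by an integer
    factor using nearest-neighbour subsampling: keep every factor-th
    pixel in each direction."""
    return [row[::factor] for row in image[::factor]]
```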
  • Another embodiment comprises converting the provided pixel information into a viewport information and determining the cutout image information under consideration of the viewport information.
  • the viewport information describes a size and position of the cutout image within the 360-degree image.
  • the viewport information can alternatively be referred to as viewport data or can be described by viewport data.
  • the converting is preferably performed by means of the camera.
  • the viewport information can be understood as an equivalent to the determined pixel information.
  • the difference between the two pieces of information is at least that the viewport information is defined in a way that it is directly usable by the camera to provide the wanted cutout image.
  • the viewport is defined by the point of view according to the point of view information. Considering the viewport information results in a particularly precise determination of the cutout image information by the camera.
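The conversion of pixel information into viewport information could, under the assumption that the pixel information consists of two opposite corner coordinates, look like this (all names are illustrative):

```python
def pixels_to_viewport(x0, y0, x1, y1):
    """Convert pixel coordinates of two opposite corners of the cutout
    into a viewport description (top-left position plus width and
    height), which describes size and position of the cutout within
    the 360-degree image."""
    return {"left": min(x0, x1), "top": min(y0, y1),
            "width": abs(x1 - x0), "height": abs(y1 - y0)}
```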
  • determining the cutout image information comprises blackening of all regions of the 360-degree image which are outside the cutout image.
  • the blackening is done under consideration of the viewport information.
  • the part of the 360-degree image which corresponds to the cutout image is hence kept while all other parts of the 360-degree image are blackened.
  • the image pixels in all regions of the 360-degree image which are outside the cutout image are thus automatically set to a pixel value corresponding to the color black.
  • alternatively, they can be set to a pixel value representing another color, for example white or grey.
  • the cutout image information can represent an image which has the same size regarding a length and height of the image as the 360-degree image.
  • Length and height are here dimensions of the image in x- and y-direction, wherein an x-y-plane represents a surface plane of the image.
  • the 360-degree image and the cutout image can hence have the same dimension regarding length and height of the respective image.
  • the blackened regions require less memory storage compared to parts of the respective image that actually comprise varying pixel values according to the section of the 360-degree image.
  • the blackening hence results in a reduced data size of the cutout image information compared to the raw image information, which facilitates providing live image data to the output device without having to reduce image quality.
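The blackening step can be sketched as follows; the result keeps the dimensions of the 360-degree image while every pixel outside the viewport is set to a uniform fill value, which standard compression handles far more efficiently than real image content. The function name and image representation are illustrative:

```python
def blacken_outside(image, left, top, width, height, fill=0):
    """Keep the viewport region of the 360-degree image and set every
    pixel outside it to a uniform fill value (black by default). The
    result has the same dimensions as the input, but the uniform
    regions compress far better than real image content."""
    return [[px if (top <= r < top + height and left <= c < left + width)
             else fill
             for c, px in enumerate(row)]
            for r, row in enumerate(image)]
```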
  • the method comprises verifying whether a mode of the output device has been activated in which the output device shares the determined pixel information with the camera. Only in an activated mode of the output device the camera determines the cutout image information. It is hence possible to activate or deactivate the described method so that it is still possible to, for example, provide image data from the camera to the output device without first providing the point of view information.
  • the mode of the output device can be activated or deactivated by means of the operating element of the output device. Preferably, the mode is activated or deactivated manually by the user of the output device. This makes the described method particularly customer friendly.
  • an embodiment comprises that in an inactivated mode of the output device the camera provides the raw image information to the output device. This means that unless the mode is actually activated, no cutout image is provided and displayed by the output device. It is still possible that the raw image information is, for example, compressed, meaning that it is reduced in image size and hence image quality. Thus, it is possible to decrease the size of the image information the camera sends to the output device without providing the cutout image information.
  • an embodiment comprises that the output device provides a mode information for the camera.
  • the mode information can alternatively be referred to as mode data or can be described by mode data.
  • the mode information describes the mode of the output device that has been activated.
  • the mode that has been activated is in particular a shared point of view mode of the output device.
  • the mode can be intended for a situation in which multiple output devices each receive image information from the camera. The users of the multiple output devices can then, for example, decide on one common point of view by activating the shared point of view mode of at least one of the output devices. It is hence possible that an application run on the output device comprises the specific shared point of view mode, which can be activated or deactivated manually.
  • the mode furthermore allows for a quick check by means of the camera to determine whether point of view information is to be expected from the output device or not. It is possible that determining and providing of the point of view information and the pixel information by the output device depends on whether the mode is activated or not. This results in less calculation efforts required by the output device when the mode is inactivated.
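The camera-side check of the mode might be sketched as a simple dispatch: only when the shared point of view mode is active and pixel information is present is the cutout determined, otherwise the raw image is returned (any compression of the raw image is omitted here). All names are illustrative assumptions:

```python
def handle_request(mode_active, raw_image, pixel_info=None):
    """Camera-side dispatch: only in the activated shared point of view
    mode (and with pixel information at hand) is the cutout determined;
    otherwise the raw image is returned unchanged."""
    if mode_active and pixel_info is not None:
        left, top, width, height = pixel_info
        return [row[left:left + width] for row in raw_image[top:top + height]]
    return raw_image
```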
  • the method comprises providing the determined cutout image information to the output device in real time.
  • this can be achieved by a low-latency transmission system which, for example, is configured to provide image information from the camera to the output device via the communication connection in set time intervals.
  • the time intervals can be set to, for example, 100 milliseconds. This means that, for example, every 100 milliseconds a new image information, particularly a cutout image information, is sent to the output device for display on the screen. This allows for actual real-time and hence live streaming of image information provided by the camera.
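A fixed-interval sender, as an illustration of the set time intervals mentioned above, could be sketched like this; the camera and communication interfaces are stand-in callables, and the frame bound exists only to keep the sketch finite:

```python
import time

def stream_cutouts(capture_cutout, send_to_output, interval_s=0.1, frames=3):
    """Send a freshly determined cutout image to the output device at a
    fixed interval (e.g. every 100 ms) to approximate a live stream.
    capture_cutout and send_to_output are stand-ins for the camera and
    the communication interface; frames bounds the loop for this sketch."""
    sent = 0
    deadline = time.monotonic()
    while sent < frames:
        send_to_output(capture_cutout())
        sent += 1
        deadline += interval_s
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)  # hold the fixed sending cadence
    return sent
```

Advancing the deadline rather than sleeping a flat interval keeps the cadence stable even when capturing and sending take a variable amount of time.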
  • a further aspect of the invention relates to a system to provide an image to be displayed by an output device. The system comprises the camera and the output device, wherein the camera is configured to provide a raw image information describing a 360-degree image.
  • the output device is configured to provide a point of view information describing a point of view out of which a cutout image of the 360-degree image is taken, wherein the cutout image is intended for display on a screen of the output device; to determine a pixel information by converting the provided point of view information into corresponding pixel coordinates so that the pixel information describes a pixel area of the 360-degree image corresponding to the provided point of view; and to provide the determined pixel information to the camera.
  • the camera is configured to determine a cutout image information by reducing the provided raw image information to the pixel area according to the provided pixel information, wherein the cutout image information describes the cutout image; and to provide the determined cutout image information to the output device.
  • the output device is configured to display the cutout image according to the provided cutout image information on the screen of the output device.
  • the invention also refers to a camera to provide an image to be displayed by an output device.
  • the camera is configured to provide a raw image information describing a 360- degree image; to receive a pixel information from an output device, wherein the pixel information describes a pixel area of the 360-degree image; to determine a cutout image information by reducing the provided raw image information to the pixel area according to the received pixel information, wherein the cutout image information describes a cutout image of the 360-degree image; and to provide the determined cutout image information to the output device.
  • a further aspect of the invention relates to an output device to display an image.
  • the output device is configured to provide a point of view information describing a point of view out of which a cutout image of a 360-degree image is taken, wherein the cutout image is intended for display on a screen of the output device; to determine a pixel information by converting the provided point of view information into corresponding pixel coordinates so that the pixel information describes a pixel area of the 360-degree image corresponding to the provided point of view; to provide the determined pixel information to a camera; to receive a cutout image information describing a cutout image of the 360-degree image from the camera; and to display the cutout image according to the provided cutout image information on the screen of the output device.
  • an aspect of the invention relates to a computer program product.
  • the computer program product comprises instructions which, when the program is executed by a camera as described above and an output device as described above, cause the camera and the output device, respectively, to carry out the steps of the above-described method.
  • the camera carries out the steps intended for the camera and the output device carries out the steps intended for the output device.
  • the computer program product can be a computer program.
  • the invention also comprises a control unit to execute the inventive computer program product.
  • the control unit is configured to at least perform the above-mentioned steps involving determining a specific information, operating the screen to display a specific image or operating the camera to provide the raw image information, which is in particular captured by the camera.
  • the respective control unit may be referred to as processing unit.
  • the control unit may comprise one or more microprocessors and/or one or more microcontrollers and/or one or more ASIC (application specific integrated circuit).
  • the control unit may comprise program code that is designed to perform the method when executed by the control unit.
  • the program code may be stored in a data storage of the control unit.
  • the control units perform the steps of the inventive method.
  • the program code can be understood as the inventive computer program product.
  • the invention also includes further embodiments of the system, the camera, the output device, the control unit and/or the computer program product which have features as already described in connection with the embodiments of the inventive method.
  • the invention also includes combinations of the features of the embodiments described.
  • Fig. 1 a schematic representation of the system to provide an image to be displayed by an output device
  • Fig. 2 a schematic representation of a method to provide an image to be displayed by an output device
  • Fig. 3 a schematic representation of further steps of the method shown in Fig. 2.
  • Fig. 1 shows a room 1 in which a system 2 is located.
  • the system 2 comprises a camera 3, which is exemplarily mounted on a tripod 4.
  • the camera 3 is a 360-degree camera, meaning that it can capture 360-degree images of the room 1.
  • the camera 3 can provide the 360-degree image for another device of the system 2.
  • a user 5 is present.
  • the user 5 is a person who holds a smartphone in his or her hands.
  • the smartphone is an example for an output device 6.
  • the system 2 comprises the output device 6.
  • the output device 6 can be a computer, a tablet or any other electronic device which comprises a screen 7.
  • the user 5 holds the output device 6 at a specific position 8 as well as at a specific viewing angle 9 in relation to the room 1 or the camera 3.
  • the output device 6 is positioned in a specific arrangement within the room 1.
  • the arrangement can be described by the position 8 and an orientation of the output device 6, which is here described by the viewing angle 9.
  • the position 8 and/or viewing angle 9 can define a point of view from which the user 5 may view the room 1 if an image captured by the camera 3 was displayed on the screen 7 of the output device 6.
  • Fig. 2 shows steps of a method to provide an image to be displayed by the output device 6.
  • the camera 3 provides raw image information 10.
  • the raw image information 10 describes a 360-degree image taken by the camera 3.
  • the 360-degree image shows the interior of the room 1, which is here exemplarily a church.
  • the 360-degree image can show any kind of environment. It is hence not limited to the interior of the room 1 .
  • Providing the raw image information 10 by the camera 3 does not necessarily comprise sending the raw image information 10 to the output device 6.
  • a step S2 comprises determining a position information and/or an angle information of the output device 6.
  • the position information describes the position 8 of the point of view.
  • the angle information describes the viewing angle 9 under which the 360-degree image is viewed.
  • the position information and/or angle information can be determined depending on the arrangement of the output device 6 relative to the camera 3. Alternatively or additionally, it is possible that the position information and/or the angle information are determined in the following way:
  • the output device 6 receives the raw image information 10 from the camera 3. Afterwards, the 360-degree image according to the raw image information 10 is displayed on the screen 7 of the output device 6.
  • the output device 6 then provides a manual positioning of the point of view and/or a manual adjusting of the viewing angle 9 by means of an operating element of the output device 6.
  • a point of view information 11 is provided by the output device 6.
  • the point of view information 11 describes the point of view out of which a cutout image of the 360-degree image is taken.
  • the cutout image is intended for display on the screen 7 of the output device 6.
  • a step S4 comprises determining a pixel information 12.
  • the pixel information 12 is determined by converting the provided point of view information 11 into corresponding pixel coordinates, so that the pixel information 12 describes a pixel area of the 360-degree image corresponding to the provided point of view.
  • the pixel information 12 can hence depend on the position information and/or the angle information as determined before. It is possible that the determination of the pixel information 12 comprises considering a screen size information 13.
  • the screen size information 13 describes a screen size of the screen 7 of the output device 6. As a result, the size of the cutout image corresponds to the screen size according to the screen size information 13.
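Assuming an equirectangular panorama, the conversion of the point of view information 11 and the screen size information 13 into the pixel information 12 in step S4 might be sketched as follows; the field of view parameter and the default panorama size are hypothetical choices, not taken from the application:

```python
def point_of_view_to_pixel_area(yaw_deg, pitch_deg, screen_w, screen_h,
                                pano_w=3840, pano_h=1920, fov_deg=90.0):
    """Convert a viewing angle (point of view information) plus the screen
    size information into a pixel area (x0, y0, width, height)."""
    # Center of the viewport in panorama pixel coordinates.
    cx = (yaw_deg % 360.0) / 360.0 * pano_w
    cy = (90.0 - pitch_deg) / 180.0 * pano_h
    # Horizontal extent from the field of view; the vertical extent follows
    # the screen's aspect ratio so the cutout size matches the screen size.
    area_w = fov_deg / 360.0 * pano_w
    area_h = area_w * screen_h / screen_w
    x0 = int(cx - area_w / 2) % pano_w          # wrap around the panorama seam
    y0 = int(min(max(0.0, cy - area_h / 2), pano_h - area_h))
    return x0, y0, int(area_w), int(area_h)
```

A straight-ahead view (yaw 0, pitch 0) on a 1920x1080 screen then maps to a 960x540 pixel area centered on the panorama's horizon.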
  • the output device 6 provides the determined pixel information 12 for the camera 3, meaning that the determined pixel information 12 is sent or transmitted from the output device 6 to the camera 3. Therefore, there is a communication connection between a communication interface of the camera 3 and a communication interface of the output device 6.
  • the communication connection may be cable-free or cable-bound.
  • a cutout image information 15 is determined by means of the camera 3. This is done by reducing the provided raw image information 10 to the pixel area according to the provided pixel information 12.
  • the cutout image information 15 describes the cutout image that is supposed to be displayed on the screen 7 by the output device 6.
  • the provided pixel information 12 is converted into a viewport information 14 and afterwards the viewport information 14 is considered for determining the cutout image information 15.
  • the viewport information 14 describes the size and position of the cutout image within the 360-degree image.
  • Determining the cutout image information 15 can also comprise scaling down the cutout image by a factor.
  • the factor depends on the screen size information 13.
  • the scale of the cutout image can hence vary according to the screen size of the screen 7 of the output device 6.
  • the cutout image can hence show a bigger part of the 360-degree image if the screen 7 is relatively large compared to a smaller screen 7.
  • If the screen 7 is relatively small, as can be the case for the screen 7 of a smartphone as output device 6, the cutout image can still show a relatively large part of the 360-degree image due to scaling down the cutout image by the factor.
  • By determining the cutout image information 15, all regions of the 360-degree image which are outside the cutout image may be blackened. To do so, the viewport information 14 is considered. This means that the image described by the cutout image information 15 can have the same size in length and height, meaning in x- and y-direction, as the 360-degree image, because the parts surrounding the cutout image are blackened but not cut out.
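A camera-side sketch of this blackening, under the assumption that the raw image is held as a NumPy array, could look like this:

```python
import numpy as np

def blacken_outside_cutout(raw, pixel_area):
    """Keep only the pixel area of the panorama and blacken everything else,
    so the result keeps the 360-degree image's size in x- and y-direction."""
    x0, y0, w, h = pixel_area
    out = np.zeros_like(raw)                               # fully blackened frame
    out[y0:y0 + h, x0:x0 + w] = raw[y0:y0 + h, x0:x0 + w]  # restore the cutout
    return out
```

Because the blackened regions are uniform, they compress very well, which is one way the described approach can trade bandwidth for higher quality in the region of interest.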
  • a further step S8 comprises providing the determined cutout image information 15 to the output device 6. Therefore, the determined cutout image information 15 can be sent to the output device 6 via the communication connection.
  • a step S9 comprises displaying the cutout image according to the provided cutout image information 15 on the screen 7 of the output device 6. It is hereby intended that the blackened regions of the 360-degree image are not displayed on the screen 7. Preferably, the only image visible on the screen 7 is the pixel area of the 360-degree image that was chosen as cutout image of the 360-degree image.
  • Fig. 3 shows additional steps of the method.
  • a step S11 comprises verifying whether or not a mode of the output device 6 has been activated in which the output device 6 shares the determined pixel information 12 with the camera 3.
  • a preceding step S10 comprises providing a mode information 16.
  • the mode information 16 describes that the mode of the output device 6 has been activated.
  • the mode is in particular a shared point of view mode.
  • the output device 6 provides the mode information 16 for the camera 3 by, for example, sending it to the camera 3.
  • If the verification in step S11 is positive, meaning that the mode has been activated, the method continues with step S9, meaning displaying the cutout image on the screen 7 of the output device 6.
  • If, however, the verification in step S11 is negative, meaning that the mode has not been activated and is thus inactive, the camera 3 provides the raw image information 10 in step S12 to the output device 6. In this case, it is possible that the raw image information 10 is reduced in quality compared to the originally captured raw image information 10 of the camera 3 in order to provide fast transmission of the raw image information 10 to the output device 6 via the communication connection.
  • In step S9, only the cutout image is shown on the screen 7, whereas in step S12 the whole 360-degree image is displayed on the screen 7.
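The branch on the mode information 16 can be sketched as follows; the 2x subsampling standing in for the quality reduction is an assumption for illustration:

```python
import numpy as np

def frame_to_send(raw, mode_active, cutout_image=None):
    """Camera-side decision after step S11: send the cutout in the shared
    point of view mode, otherwise the whole panorama at reduced quality."""
    if mode_active and cutout_image is not None:
        return "cutout", cutout_image        # step S8: cutout image information
    return "raw_reduced", raw[::2, ::2]      # step S12: naive 2x subsampling
```

In the inactive case the output device receives the full 360-degree image and renders the viewport locally; in the active case the rendering choice has already been made on the camera side.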
  • the method can provide a real-time or live display of the cutout image. This means that determining and providing the cutout image information 15 to the output device 6 takes place in real time. It is also intended that the point of view information 11 is provided from the output device 6 to the camera 3 in real time, so that changes in position 8 and/or viewing angle 9 can be considered in real time.
  • the 360-degree image of the camera 3 is used for live streaming.
  • a mode can be provided that forces users 5 to see a common point of view of the 360-degree image or to see the same part of the 360-degree image. This mode is called shared point of view mode.
  • the point of view information 11 as determined by one output device 6 is shared with other output devices 6, which then receive the cutout image information 15 from the camera 3. It is the idea of the invention to share the point of view information 11 of the output device 6 responsible for the 360-degree camera 3, to convert the point of view into pixel coordinates, meaning into the pixel information 12, and to send it to the 360-degree camera 3 so that only the part of the image displayed in the point of view is kept. This is feasible because of the use of low-latency transmission systems. It results in increased image quality of the cutout image due to the blackening of all regions which are not regions of interest because they surround the cutout image.
  • the camera 3 may provide dynamic images such as a video.
  • the raw image information 10 can hence be raw video data and the cutout image information 15 cutout video data.

PCT/EP2023/066762 2022-06-24 2023-06-21 Method and system to provide an image to be displayed by an output device WO2023247606A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022115806.3 2022-06-24
DE102022115806.3A DE102022115806A1 (de) 2022-06-24 2022-06-24 Verfahren und System zum Bereitstellen eines durch eine Ausgabevorrichtung anzuzeigenden Bildes

Publications (1)

Publication Number Publication Date
WO2023247606A1 true WO2023247606A1 (en) 2023-12-28

Family

ID=87060021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/066762 WO2023247606A1 (en) 2022-06-24 2023-06-21 Method and system to provide an image to be displayed by an output device

Country Status (2)

Country Link
DE (1) DE102022115806A1 (de)
WO (1) WO2023247606A1 (de)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2568020A (en) * 2017-09-05 2019-05-08 Nokia Technologies Oy Transmission of video content based on feedback
US20190141311A1 (en) * 2016-04-26 2019-05-09 Lg Electronics Inc. Method for transmitting 360-degree video, method for receiving 360-degree video, apparatus for transmitting 360-degree video, apparatus for receiving 360-degree video
US20190149731A1 (en) 2016-05-25 2019-05-16 Livit Media Inc. Methods and systems for live sharing 360-degree video streams on a mobile device
EP3672251A1 (de) * 2018-12-20 2020-06-24 Koninklijke KPN N.V. Verarbeitung von videodaten für ein videospielgerät
WO2021105552A1 (en) * 2019-11-29 2021-06-03 Nokia Technologies Oy A method, an apparatus and a computer program product for video encoding and video decoding
US20210243418A1 (en) 2018-04-27 2021-08-05 Pcms Holdings, Inc. 360 degree multi-viewport system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6331869B1 (en) 1998-08-07 2001-12-18 Be Here Corporation Method and apparatus for electronically distributing motion panoramic images
US20130141526A1 (en) 2011-12-02 2013-06-06 Stealth HD Corp. Apparatus and Method for Video Image Stitching
US10291910B2 (en) 2016-02-12 2019-05-14 Gopro, Inc. Systems and methods for spatially adaptive video encoding


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NOKIA CORPORATION (ITT4RT RAPPORTEUR): "ITT4RT Permanent Document - Requirements, Working Assumptions and Potential Solutions", vol. SA WG4, no. Online Meeting; 20210818 - 20210827, 27 August 2021 (2021-08-27), XP052064495, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_sa/WG4_CODEC/TSGS4_115-e/Docs/S4-211265.zip S4-211265 ITT4RT Permanent Document v.0.13.0.doc> [retrieved on 20210827] *

Also Published As

Publication number Publication date
DE102022115806A1 (de) 2024-01-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23735615

Country of ref document: EP

Kind code of ref document: A1