WO2021176991A1 - Information processing device, video display system, and video display method - Google Patents

Information processing device, video display system, and video display method

Info

Publication number
WO2021176991A1
WO2021176991A1 (PCT/JP2021/005194)
Authority
WO
WIPO (PCT)
Application number
PCT/JP2021/005194
Other languages
French (fr)
Japanese (ja)
Inventor
Kentaro Doba
Atsushi Izumihara
Original Assignee
Sony Group Corporation
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2021176991A1 publication Critical patent/WO2021176991A1/en

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00 Special procedures for taking photographs; Apparatus therefor
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/18 Timing circuits for raster scan displays
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39 Control of the bit-mapped memory
    • G09G5/391 Resolution modifying circuits, e.g. variable screen formats
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/66 Transforming electric information into light information

Definitions

  • This disclosure relates to an information processing device, a video display system, and a video display method.
  • the present disclosure provides a device that determines the display speed of a video in order to reduce the sickness (visually induced motion sickness) felt by the viewer of the video.
  • the information processing device on one aspect of the present disclosure includes an event information generation unit and a determination unit.
  • the event information generation unit generates first information regarding the occurrence of an event related to the movement of the device that captures the image.
  • the determination unit determines a time point at which the display speed of the image is reduced based on the first information.
  • the display device that displays the video can then lower the display speed from that time point, which reduces the sickness felt by the viewer of the video.
  • the device that displays the video may be the same as the information processing device, or may be a different device. That is, the device that determines the time point at which the display speed is lowered and the device that displays the video while lowering the speed at that time point may be the same device or separate devices.
  • the event information generation unit may include a prediction unit that predicts the occurrence time of the event and includes the predicted occurrence time in the first information, and the determination unit may determine the time point for lowering the display speed based on the predicted occurrence time.
  • the determination unit may set the predicted occurrence time point as a time point for lowering the display speed.
  • the event information generation unit may include a detection unit.
  • the detection unit may detect the occurrence of the event and include the detection time point of the occurrence of the event in the first information. Then, the determination unit may set the detection time point as the time point for lowering the display speed.
  • the event information generation unit may further include a detection unit that detects the occurrence of the event, or of an event other than the event, and the determination unit may modify the time point at which the display speed is reduced based on that detection.
  • the detection unit may detect at least one of vibration, rotation, and acceleration of the device, or of an object to which the device is attached, and the determination unit may correct the time point at which the display speed is lowered based on that detection.
  • the prediction unit may have a configuration in which the time point of occurrence of the event is predicted based on the second information regarding the planned movement route of the device or the object to which the device is attached.
  • the event may be a vibration received by the device or by an object to which the device is attached, and the prediction unit may predict the occurrence time of the event based on the road surface condition of the planned movement route included in the second information.
  • the prediction unit may further predict the end time of the event, and the determination unit may determine the period from the predicted occurrence time to the predicted end time as the period during which the display speed is reduced, and may also determine a period during which the display speed is increased.
  • the absolute value of the rate of increase/decrease of the display speed with respect to the standard speed in the increase period may be smaller than the absolute value of the rate of increase/decrease of the display speed with respect to the standard speed in the decrease period.
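  • as an illustrative sketch (not part of the patent; the time values and speeds are hypothetical), the decrease and increase periods described above can be expressed as a piecewise display-speed schedule in which the deviation from the standard speed during the increase period is kept smaller in magnitude than during the decrease period:

```python
def display_speed(t, standard=1.0,
                  decrease=(10.0, 14.0, 0.8),    # (start, end, speed): slow-down during the event
                  increase=(14.0, 30.0, 1.05)):  # longer period, smaller deviation from 1x
    """Return the display speed at video time t (seconds)."""
    d_start, d_end, d_speed = decrease
    i_start, i_end, i_speed = increase
    if d_start <= t < d_end:
        return d_speed
    if i_start <= t < i_end:
        return i_speed
    return standard

# The slow-down deviation (|0.8 - 1.0| = 0.2) exceeds the
# speed-up deviation (|1.05 - 1.0| = 0.05), as required above.
assert abs(display_speed(12.0) - 1.0) > abs(display_speed(20.0) - 1.0)
```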
  • the event information generation unit may further include a detection unit that detects the occurrence of the event or of a different event, and the determination unit may modify at least one of the decrease period and the increase period based on that detection.
  • the detection unit may further detect voice, and the determination unit may determine, based on the detection time of the voice, a time point at which the display speed, or the output speed of the sound corresponding to the video, is increased.
  • the determination unit may further determine a time point at which at least one of the settings related to the display of the video is changed in a direction in which visibility deteriorates.
  • a receiving unit may further be provided for receiving fourth information regarding the deviation between the actual position of the device, or of the object to which the device is attached, and the planned movement route, and the time point at which the display speed is reduced may be further determined based on the fourth information.
  • the device may be further provided, and the device may increase the frame rate at which the video is captured based on the first information.
  • a route determination unit that determines a planned movement route to a given destination based on fifth information regarding the device's own position and surrounding conditions, and a movement control unit that moves the device along the planned movement route, may further be provided.
  • a video display system is also provided that includes a mobile device to which an image sensor for capturing video is attached, an information processing device, and first and second display devices for displaying the recorded video.
  • the information processing device has an event information generation unit that generates first information regarding the occurrence of an event related to the movement of the device that captures the video, a determination unit that determines a time point at which the display speed of the video is reduced based on the first information, and a transmission unit that transmits at least information about the determined time point to the first display device.
  • the first display device lowers the display speed of the video at that time point during display.
  • the second display device keeps the display speed of the video constant before and after that time point during display. Such a configuration can be taken.
  • a video display method is also provided that includes a step of generating the first information regarding the occurrence of an event related to the movement of the device that captures the video, a step of determining a time point for reducing the display speed of the video based on the first information, a step of starting the display of the video, and a step of lowering the display speed of the video at that time point.
  • FIG. 1 is a diagram showing a configuration example of a video display system according to the first embodiment.
  • the video display system of the example of FIG. 1 includes an information processing device 100 and a display device 200.
  • the information processing device 100 includes a receiving unit 110, a sensor 120, a movement information generation unit 130, a movement control unit 140, an event information generation unit 150, a display information generation unit (determination unit) 160, and a transmission unit 170.
  • the display device 200 includes an input receiving unit 210, a transmitting unit 220, a receiving unit 230, and a display control unit 240.
  • the video display system in the present disclosure displays the video captured by the sensor 120.
  • the sensor 120 is assumed to be an image sensor or the like, and can be said to be a device that captures an image. It is assumed that the sensor 120 is built in or externally attached to the moving body, and the imaging is performed while the moving body is moving. Therefore, the displayed image may include blurring caused by movement.
  • this video display system suppresses the viewer's sickness by adjusting the video display speed (which can also be said to be the playback speed).
  • the first embodiment shows an example in which the video display system is composed of an information processing device 100 and a display device 200.
  • the information processing device 100 is a mobile body, includes a sensor 120 that captures an image, and moves while capturing the image. Therefore, the information processing device 100 of the first embodiment can be said to be a mobile device and a photographing device.
  • the display device 200 is separated from the information processing device 100, and receives and displays an image from the information processing device 100 via wireless communication or the like.
  • a system that displays an image taken by a movable robot on a monitor at a remote location, a head-mounted display, or the like corresponds to the image display system according to the first embodiment.
  • FIG. 2 is a diagram for explaining the video display according to the first embodiment. It is shown that the information processing device 100, which is a movable robot, moves while avoiding the obstacle 300. The planned movement route of the information processing device 100 is indicated by an arrow 401. Further, the display device 200 is shown as a head-mounted display. A plurality of bars 500 arranged on the right side of the display device 200 conceptually represent a frame of an image displayed on the display device 200.
  • the information processing device 100 moves in a large bend in order to avoid the obstacle 300.
  • the images taken during this period of large bending movement include distortion, which is a cause of sickness. Therefore, by reducing the display speed when displaying the video captured during that period, the viewer of the video can be prevented from getting sick. In the example of FIG. 2, the interval of the frames corresponding to the period (the frames from the first time point to the second time point in FIG. 2) is therefore wider than the initial interval.
  • the lowered display speed may be lower than the display speed before the time point, or may be lower than a predetermined standard speed. For example, when the display speed before the time point is 1.5x, the display speed may be set to 1.2x from the time point. Alternatively, from that time point, the speed may be set to 0.8x, which is slower than the standard speed of 1x (1.0x). The amount by which the display speed is reduced may thus be appropriately determined.
  • after avoiding the obstacle, the information processing device 100 moves without making a large bend, so the display speed is returned so that the viewer of the captured video does not feel uncomfortable with it. In the example of FIG. 2, the interval of the frames after the period of large bending movement (the frames after the second time point) is accordingly returned to the initial interval.
  • the period during which the display speed is reduced is represented as the display delay period.
  • the display speed was described as being returned, but it does not have to be returned all at once; it may be returned gradually. That is, the display speed may be increased step by step back to the standard speed. The amount by which the display speed is increased at each step may be appropriately determined.
  • the delay until the captured video is displayed will gradually increase, which can cause problems when viewing in real time. For example, when a captured video is viewed in real time and an instruction is given based on it, the instruction may arrive too late and the desired action may not be realized. The information processing device 100 may be instructed to stop but overshoot the position where it should have stopped, or an instruction to change the direction of the sensor 120 in order to photograph an object along the path may only take effect after the object has been passed. Therefore, in order to eliminate the delay caused by the decrease in the display speed, a period for increasing the display speed may be determined.
  • the display speed is therefore increased shortly after it is returned, so that the delay accumulated during the display delay period is eliminated.
  • the period from the time when the display speed is increased to the time when the display speed is returned is described as the recovery period.
  • the recovery period can be said to be the period during which the display speed increases. Since the display speed is increased, the interval between frames (frames after the third time point) in the recovery period is narrower than the initial interval. In this case as well, the amount of increase in display speed may be appropriately determined.
  • the amount by which the display speed is increased is smaller than the amount by which it was decreased.
  • that is, the absolute value of the rate of increase/decrease of the display speed with respect to the standard speed during the recovery period is smaller than the absolute value of the rate of increase/decrease with respect to the standard speed during the display delay period. For example, when the display speed is set to 0.8x in the display delay period, the rate of increase/decrease in the display delay period with respect to the standard speed is -0.2.
  • in that case, the rate of increase/decrease with respect to the standard speed during the recovery period is kept below 0.2, and it is preferable to set the speed between 1x and 1.2x.
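  • the arithmetic behind the recovery period can be made concrete with a hedged sketch (the function names are illustrative, not from the patent): showing D seconds of video at speed s < 1 takes D/s wall-clock seconds and accumulates D/s - D seconds of delay, while showing C seconds at speed r > 1 recovers C - C/r seconds, so C = delay × r / (r - 1):

```python
def accumulated_delay(content_seconds, slow_speed):
    """Extra wall-clock delay from showing `content_seconds` of video
    at `slow_speed` (< 1.0) instead of the 1x standard speed."""
    return content_seconds / slow_speed - content_seconds

def recovery_duration(delay, fast_speed):
    """Content duration that must be shown at `fast_speed` (> 1.0)
    to cancel `delay` seconds of accumulated display delay."""
    return delay * fast_speed / (fast_speed - 1.0)
```

with the 0.8x and 1.2x figures of the example above, a display delay period covering 4 s of video accumulates 1 s of delay and requires a recovery period covering 6 s of video, which illustrates why a smaller speed-up rate makes the recovery period longer than the display delay period.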
  • a plurality of recovery periods may be provided.
  • here, a single robot performs both shooting and movement, but these may be handled by different devices, and instead of a robot, a car, a bicycle, a motorcycle, or a piece of play equipment fitted with an image sensor may be used. Further, as long as the movement of the information processing device 100 can be predicted and detected, a person may carry the information processing device 100 in charge of shooting and move with it.
  • the information processing device 100 determines the time point, and the display device 200 adjusts the display speed when displaying the video at that time point. Specifically, the receiving unit 230 of the display device 200 receives, together with the video, information indicating the determined time point, and the display control unit 240 of the display device 200 adjusts the display speed of the video based on that information when displaying it.
  • suppose the information processing device 100 decides to lower the display speed at the point 10 seconds after the start of shooting.
  • the receiving unit 230 then receives, together with the video, the information that the display speed is to be lowered 10 seconds after the start of the video.
  • the display control unit 240 starts displaying the video and lowers the display speed 10 seconds after the start. In this way, the display speed of the video is changed.
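  • the display-side behavior in the 10-second example above might be sketched as follows (a toy model assuming a fixed frame rate; the function name is hypothetical), where lowering the display speed corresponds to widening the per-frame display interval:

```python
def frame_intervals(n_frames, slowdown_at=10.0, slow_speed=0.8, fps=30):
    """Return the display interval for each of `n_frames` frames,
    widening the interval (i.e. lowering the display speed) once
    `slowdown_at` seconds of video have been displayed."""
    intervals = []
    for i in range(n_frames):
        shown = i / fps                      # video time of this frame
        speed = slow_speed if shown >= slowdown_at else 1.0
        intervals.append(1.0 / (fps * speed))  # wider interval = slower display
    return intervals
```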
  • the display of the video may be started before the information regarding the time when the display speed is lowered is received.
  • the video display system is not limited to the configuration shown in FIG.
  • part of the processing of the information processing device 100 may be performed by the display device 200, or may be distributed to devices not shown.
  • the information processing device 100 is also a mobile device; in the example of FIG. 1, it is configured as an autonomous mobile device capable of receiving a destination and moving to it autonomously.
  • the components in the information processing device 100 and the display device 200 may be aggregated or further dispersed. In addition, components not shown or described may also be present in the information processing device 100 and the display device 200. For example, one or more memories or storages for storing information necessary for processing may exist in the information processing device 100 and the display device 200.
  • the receiving unit 110 receives information necessary for the processing of the information processing device 100 from a device external to it. For example, it can receive information necessary for the information processing device 100 to move, such as a map of the area where the information processing device 100 is located and the coordinates of the destination.
  • the information received is not particularly limited.
  • a user who uses the display device 200 performs voice communication with a person in the vicinity of the information processing device 100 via the information processing device 100.
  • the input receiving unit 210 of the display device 200 receives the voice
  • the transmitting unit 220 of the display device 200 transmits a packet related to the voice.
  • the receiving unit 110 may receive the packet.
  • the actual position of the information processing device 100 may be received from an external device.
  • the sensor 120 is mounted on the information processing device 100.
  • the sensor 120 includes at least an image sensor for photographing such as an RGB (Red, Green, and Blue) sensor.
  • a sensor other than the image sensor may be included in the sensor 120 in order to collect information for movement or to detect the movement of the body of the information processing device 100.
  • for example, sensors such as a ToF (Time of Flight) sensor, LiDAR (Light Detection and Ranging), radar, sonar, an IMU (inertial measurement unit), GPS (Global Positioning System), and an odometer may be included.
  • the movement information generation unit 130 generates information for the information processing device 100 to move.
  • the information is referred to as movement information.
  • the movement information includes information on objects around the moving device, the state of the surrounding environment, and the like. In this disclosure, the information is referred to as an environmental map.
  • the movement information also includes information on the route on which the information processing apparatus 100 is scheduled to move.
  • the movement information generation unit 130 estimates the environment around the mobile device based on the information from the sensor 120, and creates an environment map around the mobile device by combining the estimation results in chronological order.
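  • one minimal, hypothetical way to picture "combining the estimation results in chronological order" is an occupancy-style map that accumulates per-step obstacle detections (this is a sketch, not the patent's method):

```python
def update_map(grid, detections):
    """Merge one time step of obstacle detections into a simple
    occupancy map (dict of (x, y) -> hit count), so that results
    accumulated over time form the environment map."""
    for cell in detections:
        grid[cell] = grid.get(cell, 0) + 1
    return grid

def obstacles(grid, min_hits=2):
    """Cells seen as occupied at least `min_hits` times are treated
    as confirmed obstacles on the map."""
    return {c for c, n in grid.items() if n >= min_hits}
```

requiring more than one hit before confirming an obstacle is one simple way a map built over time can be more robust than any single estimation result.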
  • the environmental map may be created from scratch, or the environmental map may be created based on the map around the mobile device received via the receiving unit. For example, the detected obstacle may be placed on a map around the mobile device.
  • the movement information generation unit 130 includes an object detection unit 131, an environmental state estimation unit 132, a self-position estimation unit 133, and an environmental map creation unit 134.
  • the object detection unit 131 detects objects around the mobile device, that is, obstacles for the mobile device, based on information from the sensor 120 or the like. For example, an obstacle may be detected by acquiring a distance image from the sensor 120. The environmental map shows these obstacles and is used to determine the travel route. In this embodiment, it is assumed that the environmental map is updated even while the mobile device is moving, so that moving obstacles can be dealt with.
  • the environmental state estimation unit 132 estimates the environment around the mobile device based on an image from the sensor 120 or the like. For example, unevenness of the road surface captured in the image may be detected. Alternatively, the inclination of the road surface may be detected.
  • the movement information is used not only for the movement of the information processing device 100, but also for predicting the occurrence and end of image blur. For example, if the road surface is in poor condition, the moving device vibrates, leading to blur in the image. It is therefore preferable to reduce the display speed when displaying the video captured while moving on a road surface in poor condition, and it is preferable that the movement information includes the state of the road surface and the like so that the period of movement on a poor road surface can be predicted.
  • FIG. 3 is a diagram for explaining the estimation of the path state.
  • the figure shows an image of the road surface.
  • the dotted square indicates that the position surrounded by the square has unevenness of a predetermined value or more.
  • techniques for discriminating the degree of unevenness on the surface of an object shown in an image are known, and such a technique may be used.
  • the environmental state estimation unit 132 may estimate the road surface state from the image by using such a technique.
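  • as a crude, hypothetical stand-in for such a technique (not the patent's model), local height variance over small patches can flag uneven positions like the dotted squares in FIG. 3:

```python
def rough_patches(heightmap, patch=2, threshold=0.5):
    """Flag patch positions whose local height variance exceeds a
    threshold: a simple proxy for estimating road-surface
    unevenness from image-derived height data."""
    rows, cols = len(heightmap), len(heightmap[0])
    flagged = []
    for r in range(rows - patch + 1):
        for c in range(cols - patch + 1):
            vals = [heightmap[r + i][c + j]
                    for i in range(patch) for j in range(patch)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var > threshold:
                flagged.append((r, c))  # uneven patch, like a dotted square
    return flagged
```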
  • the self-position estimation unit 133 estimates the device's own position based on the information from the sensor 120. For example, techniques are known for determining a position on a map by using an image of one's surroundings. Using such a technique, the position at which the image from the sensor 120 is expected to have been obtained may be determined on the environment map.
  • the environmental map creation unit 134 creates an environmental map based on this information.
  • FIG. 4 is a diagram showing an example of an environmental map.
  • the dotted line in FIG. 4 shows an actual map, and the solid line shows a part determined by information processing.
  • the double circle indicates the position of the self.
  • the environmental map may initially show only a part of the actual map, and the determined part may grow as the device moves.
  • the road surface condition shown in FIG. 3 is also included in the environmental map.
  • the route determination unit 135 determines the planned travel route based on the environmental map and the destination.
  • the planned movement route 402 is indicated by an arrow.
  • as the environment map is updated by movement and the newly calculated part grows, the planned movement route may also be updated.
  • the destination may be specified from the outside via the receiving unit 110, or may be predetermined. For example, when a surveillance robot travels back and forth over a fixed area, the destination is fixed, but the positions of obstacles, the state of the surrounding environment, and so on can differ each time, so the planned movement route can also differ each time.
  • the created environmental map may be transmitted to the display device 200 via the transmission unit 170, and the display device 200 may display the environmental map. Thereby, the user can also specify the destination on the environment map via the display device 200.
  • the movement control unit 140 controls the actual movement based on the movement information. Specifically, the action determination unit 141 determines the device's own action content based on the environment map showing its own position and on the route information. The body control unit 142 controls the body of the device and executes the determined action content. Since processing related to autonomous movement is known, a known technique may be used, and since it involves no processing related to adjusting the display of the video, a detailed description of the movement control unit 140 is omitted.
  • the event information generation unit 150 generates information regarding the occurrence of an event related to the movement of the device that captures the image. It also generates information about the end of the event.
  • the display information generation unit 160 determines the time when the display speed is changed based on the information. Information indicating the determined time point is described as display information.
  • the event information generation unit 150 includes a prediction unit 151 and a detection unit 152.
  • the prediction unit 151 predicts the registered event based on the movement information.
  • the prediction unit 151 predicts an event that may occur due to the movement of the sensor 120 used for shooting, or of the object to which the sensor 120 is attached, based on the environment map, the planned movement route, and the like. For example, in the region enclosed by the dash-dot frame surrounding a part of the planned movement route 402 in FIG. 4, the curve is steep, so it is predicted that the image sensor will make a sharp turn; that is, the prediction unit 151 predicts that an event will occur while traveling through that region. In this way, the prediction unit 151 predicts, based on the planned movement route, the occurrence of events related to the movement of the sensor 120 that cause sickness, such as a sharp turn or the occurrence of vibration.
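  • this kind of prediction can be sketched, under the assumption of constant travel speed along (x, y) waypoints (names and thresholds are illustrative, not from the patent), by flagging route sections whose heading change is steep and reporting the entry and exit times as the predicted occurrence and end of the event:

```python
import math

def turn_events(waypoints, speed=1.0, angle_threshold=math.pi / 4):
    """Predict (start, end) time intervals of sharp turns on a planned
    route given as (x, y) waypoints traversed at constant `speed`."""
    # cumulative arrival time at each waypoint
    times = [0.0]
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        times.append(times[-1] + math.hypot(x1 - x0, y1 - y0) / speed)
    events = []
    for i in range(1, len(waypoints) - 1):
        (x0, y0), (x1, y1), (x2, y2) = waypoints[i - 1], waypoints[i], waypoints[i + 1]
        h1 = math.atan2(y1 - y0, x1 - x0)          # heading into the waypoint
        h2 = math.atan2(y2 - y1, x2 - x1)          # heading out of the waypoint
        turn = abs(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1)))
        if turn > angle_threshold:                  # steep bend: predicted event
            events.append((times[i - 1], times[i + 1]))  # enter / exit the bend
    return events
```

the same skeleton could predict vibration or acceleration events by substituting road-surface or inclination values for the heading change.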
  • the detection unit 152 detects a registered event based on information obtained from the sensor 120 or the like while the device is moving. In other words, the detection unit 152 detects events that actually occur during movement.
  • whether the prediction unit 151 and the detection unit 152 are both provided may be decided depending on the performance of the device, the installation environment, the cost, safety, the application, and the like.
  • the event predicted by the prediction unit 151 and the event detected by the detection unit 152 may be different. That is, there may be an event that is predicted by the prediction unit 151 but not detected by the detection unit 152. Although not predicted by the prediction unit 151, there may be an event detected by the detection unit 152. Further, there may be an event that is predicted by the prediction unit 151 and detected by the detection unit 152.
  • Events that can be predicted or detected and are the cause of sickness are registered as targets for prediction or detection.
  • the sensor 120 when the sensor 120 is vibrated by movement and the image is blurred, sickness is induced. Therefore, vibration is a cause of sickness and can be registered for prediction or detection.
  • when the sensor 120 turns sharply, the image shakes, which induces sickness. Therefore, turning is also a cause of sickness and can be registered as a target for prediction or detection.
  • images during sudden acceleration and deceleration also cause sickness. Therefore, acceleration and deceleration above a certain value are also factors of sickness and can be registered as targets for prediction or detection.
  • information on the occurrence and termination of the event is generated for the event related to the movement of the sensor 120 in this way.
  • when the object to which the sensor 120 is attached moves, the sensor 120 also moves. Therefore, when the movement of the object to which the sensor 120 is attached is predicted or detected, the movement of the sensor 120 can be said to be predicted or detected. An object with a built-in sensor 120 and an object equipped with the sensor 120 can likewise be said to be objects to which the sensor 120 is attached.
  • the time of occurrence and the time of end of vibration can be predicted based on the road surface condition of the planned movement route. Since vibration occurs in route sections where the road surface condition is bad, it is sufficient to predict the time at which such a section is entered and the time at which it is exited.
  • the time of occurrence and the time of end of turning can be predicted based on the shape of the planned movement route. If the bending of the planned movement route is steep, the turn becomes sharp, so it is sufficient to predict the time at which a section where the value indicating the degree of bending (for example, curvature) is large is entered and the time at which it is exited.
  • The time of occurrence and the time of termination of an event in which the absolute value of acceleration is equal to or higher than a certain value can also be predicted based on the planned movement route. For example, if the inclination of the planned movement path with respect to the traveling direction is large, the acceleration suddenly increases or decreases, so it is sufficient to predict the time when a position where the inclination is large is reached.
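The route-based predictions described in the preceding bullets can be sketched as follows. This is a minimal illustration, not part of the disclosure: the `RouteSegment` fields and all threshold values are hypothetical stand-ins for the route information and the "certain values" mentioned above.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the description only says "above a certain value".
ROUGHNESS_THRESHOLD = 0.5   # road surface condition considered "bad"
CURVATURE_THRESHOLD = 0.2   # degree of bending considered "sharp"
INCLINE_THRESHOLD = 0.1     # slope implying a sudden acceleration change

@dataclass
class RouteSegment:
    enter_time: float   # predicted time of entering the segment
    exit_time: float    # predicted time of exiting the segment
    roughness: float    # road surface condition of the segment
    curvature: float    # value indicating the degree of bending
    incline: float      # inclination with respect to the traveling direction

def predict_events(route):
    """Return (event_name, occurrence_time, end_time) tuples for route
    segments expected to induce sickness."""
    events = []
    for seg in route:
        if seg.roughness > ROUGHNESS_THRESHOLD:
            events.append(("vibration", seg.enter_time, seg.exit_time))
        if seg.curvature > CURVATURE_THRESHOLD:
            events.append(("turning", seg.enter_time, seg.exit_time))
        if abs(seg.incline) > INCLINE_THRESHOLD:
            events.append(("acceleration", seg.enter_time, seg.exit_time))
    return events
```

The entry and exit times of each segment play the role of the predicted occurrence and termination time points of the corresponding event.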
  • The detection of these events may be performed by image analysis, or by an acceleration sensor.
  • Events other than those described above may also be targets of prediction or detection by the detection unit 152.
  • That is, the detection target is not limited to events that occur due to the movement of the sensor 120.
  • For example, the detection unit 152 may detect a voice.
  • In that case, the determination unit determines the time when the voice is detected as the time when the display speed of the image is increased. Here as well, the amount of increase may be determined appropriately: the speed may be returned to the standard speed, or it may be raised to a speed that is still slower than the standard speed.
  • the video may include audio. If the video display speed and the audio output speed can be controlled separately, the video display speed may be kept low and the audio output speed may be raised. That is, in that case, the determination unit determines the time when the voice is detected as the time when the output speed of the voice is increased.
  • It is known that the frequency components contained in human voice are distinctive, unlike those of noise. It is also known that the ratio of the periodic component to the aperiodic component of voice is distinctive, unlike noise. By using these characteristics, it is possible to distinguish between voice and noise.
  • By object detection, it is possible to detect a person within a predetermined distance from the device. Further, if a plurality of microphones are provided, the direction from which the voice arrives can be estimated from the time differences at which the voice arrives at each microphone. By combining these, it is possible to determine whether or not the sound acquired by the microphones is a voice from a human within a predetermined distance. In this way, it may be detected that a human is talking to the information processing device 100.
  • Alternatively, a sound may be received from the display device 200 or the like via the receiving unit, and it may be detected whether or not the sound includes a human voice. Further, not only the voice but also words may be detected. When a detected word is a predetermined keyword, the time when the keyword is detected may be set as the time when the display speed is changed.
  • Keyword detection methods are used in smartphones, portable automatic translators, and the like, and such methods may be used here.
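As a rough sketch of the voice-triggered speed increase described above, the following combines a periodic-to-aperiodic ratio check with a distance check. The function names, the ratio threshold, and the distance value are illustrative assumptions, not part of the disclosure.

```python
def is_voice(periodic_power: float, aperiodic_power: float,
             ratio_threshold: float = 1.0) -> bool:
    """Voice has a characteristic ratio of periodic to aperiodic
    components, unlike noise (the threshold here is illustrative)."""
    if aperiodic_power == 0:
        return periodic_power > 0
    return periodic_power / aperiodic_power >= ratio_threshold

def should_raise_speed(periodic_power: float, aperiodic_power: float,
                       person_distance: float,
                       max_distance: float = 2.0) -> bool:
    """Trigger a display (or audio output) speed increase only when the
    sound is a human voice and a person is within the predetermined
    distance detected by object detection."""
    return (is_voice(periodic_power, aperiodic_power)
            and person_distance <= max_distance)
```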
  • the display information generation unit (determination unit) 160 determines the time point at which the video display speed is reduced based on the information regarding the occurrence of the event targeted for prediction or detection. For example, the display information generation unit 160 may set the predicted occurrence time or occurrence detection time point indicated by the information as a time point for reducing the display speed of the image. It does not necessarily have to be the same time point, and may be, for example, a time point immediately before the prediction time point or the detection time point.
  • The display information generation unit 160 determines a time point for increasing the display speed of the image based on the information regarding the end of the event. For example, the display information generation unit 160 may set the predicted end time or the detected end time of the event indicated by the information as the time point for returning the display speed of the image to the standard speed. It should be noted that the time points do not necessarily have to coincide, and the chosen time point may be, for example, slightly after the predicted or detected end time.
  • the time point related to the change of the display speed determined by the display information generation unit 160 is transmitted to the display device 200 via the transmission unit 170 as display information. This allows the display device 200 to change the display speed.
  • the display information generation unit 160 includes a change determination unit 161 and a change update unit 162. Further, in the present embodiment, the change determination unit 161 determines the time point of change based on the prediction result of the prediction unit 151. That is, although the event has not actually occurred, it is predicted that it will occur, and the time to change it is determined in advance. The change update unit 162 determines the time point of change based on the detection result of the detection unit 152. That is, after detecting that the event actually occurred, the time to change is determined.
  • the time point determined by the change determination unit 161 is modified by the change update unit 162.
  • the time point determined by the change determination unit 161 may be deleted by the change update unit 162.
  • The time point determined by the change determination unit 161 based on the information may be modified by the change update unit 162. Further, a time point that was not determined as a change time point by the change determination unit 161 may be determined as a change time point by the change update unit 162.
  • the change time point determined based on the prediction result is updated (corrected) based on the detection result, and therefore is named the change update unit 162.
  • the prediction unit 151 may perform re-prediction based on the detection result. Further, when only one of prediction and detection is executed, the change update unit 162 may be omitted.
  • The change determination unit 161 may also determine the period for increasing the display speed described above. However, if an event occurs while the display speed is raised and the speed must then be lowered, the difference in display speed becomes large, which may itself induce sickness. Therefore, in order to avoid such a risk as much as possible, it is preferable to increase the display speed during a period in which no event is predicted to occur.
  • Specifically, it is preferable that the prediction unit 151 predicts the time of occurrence and the time of termination of the event and notifies the change determination unit 161, and that the change determination unit 161 identifies, from the predicted occurrence and termination times, the period during which the event does not occur and decides to increase the display speed within the identified period. In other words, it is preferable to place the period for increasing the display speed outside the period from the predicted occurrence time to the predicted end time.
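The idea of placing the speed-increase period only in gaps where no event is predicted can be sketched as follows; `free_intervals`, `pick_recovery_period`, and the minimum-gap condition are hypothetical names for the behavior described above.

```python
def free_intervals(event_intervals, horizon):
    """Return (start, end) intervals within [0, horizon] during which
    no event is predicted; the display speed may be raised there."""
    busy = sorted(event_intervals)
    free, cursor = [], 0.0
    for start, end in busy:
        if start > cursor:
            free.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < horizon:
        free.append((cursor, horizon))
    return free

def pick_recovery_period(event_intervals, horizon, min_length):
    """Choose a speed-increase period only from gaps long enough to
    avoid rapid back-and-forth speed changes (one of the recovery
    conditions mentioned later in the description)."""
    for start, end in free_intervals(event_intervals, horizon):
        if end - start >= min_length:
            return (start, end)
    return None
```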
  • the display information generation unit 160 may decide to change the setting related to the display of the video in addition to the display speed in order to suppress the sickness of the viewer of the video. For example, it is known that sickness is less likely to occur when the visibility of an image is poor. Therefore, the display information generation unit 160 may decide to change at least one of the settings related to the display of the image, which is related to the visibility, in a direction in which the visibility is deteriorated. For example, it may be decided to reduce the brightness, contrast, etc. of the image at the same time when the display speed is reduced.
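The simultaneous visibility change can be sketched as follows; the specific settings and the 0.7 attenuation factor are illustrative assumptions — the description only says that brightness, contrast, and the like may be reduced when the display speed is reduced.

```python
def apply_sickness_mitigation(settings: dict, lower_speed: bool) -> dict:
    """When the display speed is lowered, degrade visibility-related
    display settings at the same time (factors are illustrative)."""
    if lower_speed:
        return {**settings,
                "brightness": settings["brightness"] * 0.7,
                "contrast": settings["contrast"] * 0.7}
    return settings
```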
  • the sensor 120 may increase the shooting rate from the predicted or detected occurrence point.
  • FIG. 5 is a flowchart of the display speed determination process based on the prediction. It is assumed that this flow is executed at regular intervals during shooting.
  • the movement information generation unit 130 acquires new sensor 120 information and updates the environment map based on the sensor 120 information (S101). In addition, the movement information generation unit 130 updates the planned movement route based on the updated environment map (S102). If the environment map and the planned movement route have not been created yet, the environment map and the planned movement route are generated.
  • the prediction unit 151 predicts the occurrence and termination of an event within a predetermined time based on the environment map and the planned movement route (S103).
  • the predetermined time may be adjusted as appropriate.
  • When the occurrence of an event is predicted, the change determination unit 161 decides to reduce the display speed during the event occurrence period: the predicted occurrence time is set as the time point for lowering the display speed, and the predicted end time is set as the time point for restoring it (S105). Further, the change determination unit 161 adds the time length of the event occurrence period to the total delay time (S106).
  • Here, the total delay time means the total delay in displaying the image caused by reductions in the display speed.
  • Next, the recovery conditions for determining whether to increase the display speed are checked (S107). There may be multiple recovery conditions. If there is no delay, there is no need to increase the display speed, so at least one of the recovery conditions is that the total delay time is not zero. Further, the recovery conditions may include a condition that the time length of a period during which no event occurs is equal to or longer than a predetermined length. This prevents the display speed from being changed repeatedly within a short period, which would cause discomfort to the viewer.
  • the change determination unit 161 determines the scheduled recovery period within the period in which the event does not occur (S108).
  • The planned recovery period means the period from the time when the display speed is scheduled to be increased to the time when the increased display speed is scheduled to be returned to the standard speed. The change determination unit 161 then subtracts the time length of the planned recovery period from the total delay time (S109). After the processing of S109, or when the recovery condition is not satisfied (NO in S107), this flow ends.
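The bookkeeping of FIG. 5 (S105-S109) might look like the following sketch; the class name, the schedule representation, and the one-to-one subtraction of the recovery period length follow the description above but are otherwise illustrative.

```python
class ChangeDecider:
    """Sketch of the prediction-based flow of FIG. 5 (S105-S109)."""

    def __init__(self):
        self.schedule = []      # (time, "lower"|"restore"|"raise"|"end_raise")
        self.total_delay = 0.0  # total display delay caused by slowdowns

    def on_prediction(self, occur_t, end_t):
        # S105: lower at the predicted occurrence, restore at the end.
        self.schedule.append((occur_t, "lower"))
        self.schedule.append((end_t, "restore"))
        # S106: the event occurrence period lengthens the total delay.
        self.total_delay += end_t - occur_t

    def try_schedule_recovery(self, gap_start, gap_end, min_gap):
        # S107: recovery conditions -- some delay exists and the
        # event-free gap is long enough.
        if self.total_delay <= 0 or gap_end - gap_start < min_gap:
            return False
        # S108: plan the recovery period within the event-free gap;
        # S109: subtract its time length from the total delay.
        length = min(self.total_delay, gap_end - gap_start)
        self.schedule.append((gap_start, "raise"))
        self.schedule.append((gap_start + length, "end_raise"))
        self.total_delay -= length
        return True
```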
  • FIG. 6 is a flowchart of the display speed update process based on the detection. This flow is performed independently of the flow of FIG. Therefore, this flow may be performed in parallel with the flow of FIG.
  • the detection unit 152 detects whether or not an event has occurred based on the sensor 120 data (S201). That is, it may be detected that it has occurred, or it may be detected that it has not occurred.
  • the change update unit 162 determines whether the current prediction is correct based on the detection result (S202). The flow branches based on the determination result.
  • When an unpredicted event is detected (branch 2 of S203), the change update unit 162 sets the detected occurrence time as the time point for lowering the display speed (S204). Then, the addition of the total delay time is started (S205).
  • Next, it is checked whether the recovery review conditions are satisfied (S206). For example, if an unexpected event occurs during the scheduled recovery period or immediately before it, recovery may not be performed normally.
  • the conditions for reviewing the recovery are set in advance so that the recovery will not be performed in such a case.
  • If the recovery review condition is satisfied (YES in S206), the change update unit 162 deletes the change time points determined based on the planned recovery period (S207). Further, since the change update unit 162 subtracted the time length of the planned recovery period from the total delay time when the planned recovery period was determined, it now adds the time length of the deleted planned recovery period back to the total delay time (S208). If the recovery review condition is not satisfied (NO in S206), the processes of S207 and S208 are not executed.
  • When a predicted event is not detected, the change update unit 162 deletes the change time points determined based on the prediction (S209). That is, the display delay period defined by the prediction is treated as nonexistent.
  • Note that the prediction time and the detection time do not always match, and a time lag may occur. Therefore, if the deviation between the prediction time and the detection time is within a predetermined threshold, the event may be regarded as having occurred as predicted.
  • When the end of the event is detected, the change update unit 162 sets the detected end time as the time point for increasing the display speed (S210). Further, since the addition of the total delay time was started in branch 2 of S203, the addition is stopped (S211). As a result, the time length of the delay caused by the unpredicted event is added to the total delay time.
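The detection-based update of FIG. 6 can be sketched as follows; the tuple-based schedule, the tolerance value, and the branch structure are illustrative assumptions covering the cases described above (detection matching the prediction, a predicted event that does not occur, and an unpredicted event).

```python
TOLERANCE = 0.5  # deviation within which detection counts as predicted

def update_on_detection(schedule, total_delay, prediction, detection):
    """Sketch of the detection-based update of FIG. 6. `prediction` and
    `detection` are (occur_t, end_t) tuples or None; returns the
    updated (schedule, total_delay)."""
    if prediction and detection:
        if abs(prediction[0] - detection[0]) <= TOLERANCE:
            return schedule, total_delay      # as predicted: keep the plan
    if prediction and not detection:
        # S209: a predicted event did not occur -- drop its change
        # points and the delay period the prediction implied.
        occur_t, end_t = prediction
        schedule = [s for s in schedule if s[0] not in (occur_t, end_t)]
        total_delay -= end_t - occur_t
        return schedule, total_delay
    if detection and not prediction:
        # S204/S210: an unpredicted event -- lower at the detected
        # occurrence and restore at the detected end, adding the
        # delay it causes (S205/S211).
        occur_t, end_t = detection
        schedule = schedule + [(occur_t, "lower"), (end_t, "restore")]
        total_delay += end_t - occur_t
        return schedule, total_delay
    return schedule, total_delay
```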
  • the occurrence of an event that is a predetermined cause of sickness is predicted, detected, or both. Then, the time point at which the display speed of the image is reduced is determined based on the time point at which the event occurs. By lowering the display speed of the image at the determined time point by the display device 200, it is possible to suppress the sickness received by the viewer of the image.
  • In addition to the time point for lowering the video display speed, a time point for increasing the video display speed is determined within a period during which no event occurs. As a result, the delay caused by slowing down the display speed can be recovered, preventing the delay from gradually becoming enormous. Thereby, for example, when the captured video is viewed in real time and instructions are transmitted based on the viewed video, the instructions can be prevented from being delayed.
  • the video display system is not limited to the configuration shown in FIG.
  • the mobile information generation unit 130, the event information generation unit 150, and the display information generation unit 160 of the information processing device 100 may be configured to exist on the display device 200 or a communication network such as the Internet.
  • the information necessary for the moving device to move may be transmitted from the management system of the building or the like to the moving device.
  • FIG. 7 is a diagram showing a configuration example of the video display system according to the second embodiment.
  • the information processing device 100 of FIG. 1 is divided into a sensor 100A for capturing an image, a server 100B existing on a communication network, and a mobile device 100C.
  • The name sensor 100A is used in order to clarify that the functions of the information processing apparatus 100 of the first embodiment are separated into the above three components; the sensor 100A of the second embodiment is the same as the sensor 120 of the first embodiment.
  • the sensor 100A is not built in the moving device 100C, but is mounted in the moving device 100C and moves together with the moving device 100C.
  • the sensor information from the sensor 100A is sent to the server 100B.
  • the server 100B includes a movement information generation unit 130, an event information generation unit 150, and a display information generation unit 160 shown in the first embodiment. Therefore, the server 100B can generate movement information, event information, and display information based on the sensor information.
  • the mobile device 100C includes a movement control unit 140, and can move by receiving movement information from the server 100B.
  • the first display device 200A is for a general viewer
  • the second display device 200B is for a driver who operates the mobile device 100C.
  • the first display device 200A receives an image from the sensor 100A and receives display information from the server 100B.
  • the first display device 200A can change the display of the image at the time determined by the display information generation unit 160 at a speed slower than the standard speed, as in the example of FIG.
  • the second display device 200B receives the image from the sensor 100A, but does not receive the display information from the server 100B. Therefore, the second display device 200B displays the image at the time determined by the display information generation unit 160 at the standard speed.
  • the display speed can be changed according to the viewer of the video.
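The per-viewer behavior of the two display devices can be sketched as a simple speed selection; the speed values and function name are illustrative, not part of the disclosure.

```python
def display_speed(receives_display_info: bool, in_slowdown: bool,
                  standard_speed: float = 1.0,
                  reduced_speed: float = 0.5) -> float:
    """Per-viewer speed selection of the second embodiment: the first
    display device 200A (general viewer) follows the display
    information from the server, while the second display device 200B
    (driver) does not receive it and keeps the standard speed."""
    if receives_display_info and in_slowdown:
        return reduced_speed
    return standard_speed
```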
  • Although the subject that performs each process differs between the second embodiment and the first embodiment, the description of those processes is omitted because their content and flow are the same as in the first embodiment.
  • the processing of the device according to the embodiment of the present disclosure can be realized by software (program) executed by a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or the like. It should be noted that, instead of executing all the processes of the device by software, some processes may be executed by hardware such as a dedicated circuit.
  • the present disclosure may also have the following structure.
  • An event information generator that generates the first information about the occurrence of an event related to the movement of the device that captures the image
  • An information processing device including a determination unit that determines a time point at which the display speed of the image is reduced based on the first information.
  • the event information generation unit includes a prediction unit that predicts the occurrence time of the event and includes the predicted occurrence time in the first information.
  • the information processing apparatus according to [2], wherein the determination unit sets the predicted occurrence time point as a time point at which the display speed is reduced.
  • the event information generation unit includes a detection unit that detects the occurrence of the event and includes the detection time point of the occurrence of the event in the first information.
  • the information processing apparatus according to [1], wherein the determination unit sets the detection time point as a time point at which the display speed is reduced.
  • the event information generation unit further includes a detection unit that detects the occurrence of the event or an event other than the event.
  • the information processing apparatus according to any one of [2] and [3], wherein the determination unit corrects a time point at which the display speed is reduced based on the detection.
  • the detector detects the occurrence of at least one of vibration, turning, and acceleration of the device or an object to which the device is attached.
  • the information processing apparatus wherein the determination unit corrects a time point at which the display speed is lowered based on the detection of at least one of the vibration, the turning, and the acceleration.
  • The prediction unit predicts the time point of occurrence of the event based on the second information regarding the planned movement path of the device or the object to which the device is attached. The information processing apparatus according to any one of [2], [3], [5], and [6].
  • [8] The event is a vibration received by the device or an object to which the device is attached.
  • the information processing device according to [7], wherein the prediction unit predicts the time when the event occurs based on the road surface condition of the planned movement route included in the second information.
  • the event is a swirl of the device or an object to which the device is attached.
  • The information processing apparatus according to [7], wherein the prediction unit predicts the time when the event occurs based on the shape of the planned movement route included in the second information. [10] The prediction unit further predicts the end time of the event, and the determination unit determines the period from the predicted occurrence time to the predicted end time as the display speed reduction period.
  • the information processing apparatus according to any one of [2] to [9], which determines an increase period of the display speed other than the decrease period.
  • the information processing apparatus wherein the rate of increase / decrease of the display speed in the ascending period with respect to the standard speed is smaller than the absolute value of the rate of increase / decrease of the display speed in the decreasing period with respect to the standard speed.
  • the event information generation unit further includes a detection unit that detects the occurrence of the event or an event other than the event.
  • the information processing apparatus according to [10], wherein the determination unit corrects at least one of the decrease period and the increase period based on the detection.
  • The detection unit further detects a voice, and the determination unit determines, based on the detection time of the voice, a time point for increasing the display speed or the output speed of the audio corresponding to the video. The information processing apparatus according to any one of [4] to [12] that includes the detection unit.
  • Based on the first information, the determination unit further determines a time point at which at least one of the visibility-related settings among the settings related to the display of the image is changed in a direction in which the visibility is deteriorated. The information processing apparatus according to any one of [1] to [13]. [15] Further provided with a receiving unit for receiving fourth information regarding the deviation between the actual position of the device or the object to which the device is attached and the planned movement path of the device or the object. The information processing apparatus according to any one of [1] to [14], wherein the determination unit further determines a time point for lowering the display speed based on the fourth information.
  • An image display system including a mobile device equipped with a device for capturing an image, an information processing device, and first and second display devices for displaying the image.
  • The information processing device includes: an event information generation unit that generates first information regarding the occurrence of an event related to the movement of the mobile device; a determination unit that determines, based on the first information, the time point for reducing the display speed of the image; and a transmission unit that transmits information about the determined time point to at least the first display device. While the image is being displayed, the first display device reduces the display speed of the image at the time point.
  • the second display device is an image display system that keeps the display speed of the image constant before and after the time point during the display of the image.
  • Information processing device 100A Sensor of the second embodiment 100B Server 100C Mobile device 110 Information processing device receiving unit 120 Sensor of the first embodiment 130 Moving information generation unit 131 Object detection unit 132 Environmental state estimation unit 133 Self-position estimation Unit 134 Environmental map creation unit 135 Route determination unit 140 Movement control unit 141 Action determination unit 142 Aircraft control unit 150 Event information generation unit 151 Prediction unit 152 Detection unit 160 Display information generation unit 161 Change determination unit 162 Change update unit 170 Information processing device Transmission unit 200 Display device 200A 1st display device 200B 2nd display device 210 Input reception unit 220 Transmission unit 230 Reception unit 240 Display control unit 300 Obstacles 401, 402 Arrows (planned movement route) 500 bars (video frame)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides a device, for example, for making a determination concerning the display speed of video so as to mitigate motion sickness felt by a viewer of the video. An information processing device according to an aspect of the present disclosure is provided with a generating unit and a determination unit. The generating unit generates first information pertaining to the occurrence of an event pertaining to motion of equipment for capturing a video. The determination unit determines when to lower the display speed of the video on the basis of the first information.

Description

情報処理装置、映像表示システム、および映像表示方法Information processing equipment, video display system, and video display method
 本開示は、情報処理装置、映像表示システム、および映像表示方法に関する。 This disclosure relates to an information processing device, a video display system, and a video display method.
 遠隔地にあるロボットなどの移動体を操縦し、当該移動体を介して現地の映像および音声を取得する技術の開発が盛んに行われている。さらに、近年の自動運転に関する技術の発展に伴い、タッチパネルなどを用いて目的地を指定することにより、移動体を当該目的地に自動で移動させることが可能となっている。これにより、移動体の操縦者は、操縦以外の行為に注意を割くことが可能となった。例えば、移動体からの映像を注視する、移動体を介したコミュニケーションに集中するといった行為が行えるようになった。 Technology is being actively developed to control a moving object such as a robot in a remote location and acquire local video and audio via the moving object. Further, with the development of technology related to automatic driving in recent years, it has become possible to automatically move a moving body to the destination by designating a destination using a touch panel or the like. This allowed the operator of the mobile to pay attention to actions other than maneuvering. For example, it has become possible to perform actions such as watching an image from a moving body and concentrating on communication through the moving body.
 しかしながら、自動運転によって操縦者の意思と無関係に移動体が移動すると、移動体からの映像が、操縦者以外はもちろんのこと、操縦者にも酔いを誘発する。特に、ヘッドマウントディスプレイを介して映像を視聴した場合は、当該映像以外の視覚情報が遮断されているため、その傾向が顕著である。ゆえに、酔いの誘発を抑える方法が求められている。 However, when the moving body moves regardless of the driver's intention due to automatic driving, the image from the moving body induces sickness not only to the driver but also to the driver. In particular, when an image is viewed via a head-mounted display, the tendency is remarkable because visual information other than the image is blocked. Therefore, there is a need for a method of suppressing the induction of sickness.
 酔いを誘発する要因の一つは、映像のぶれである。そのため、例えば、移動体に取り付けられた撮影のためのセンサの振動を、ダンパなどを用いて抑制することが行われている。しかし、ダンパでは回転運動による映像のブレを抑えることはできない。また、映像をそのまま表示せず、複数の時点の映像に基づいてブレのない映像を合成する方法もあるが、合成された映像に欠損が生じるなどの問題が生じ、期待されるほどの効果を得ることができていない。そのため、新たな手法が求められている。 One of the factors that induces sickness is blurring of images. Therefore, for example, the vibration of the sensor for photographing attached to the moving body is suppressed by using a damper or the like. However, the damper cannot suppress the blurring of the image due to the rotational movement. There is also a method of synthesizing a blur-free image based on images at multiple time points without displaying the image as it is, but problems such as defects in the combined image occur, and the expected effect is achieved. I haven't been able to get it. Therefore, a new method is required.
 本開示は、映像の視聴者が感じる酔いを軽減するために、当該映像の表示速度に関する決定を行う装置などを提供する。 The present disclosure provides a device for determining the display speed of the video in order to reduce the sickness felt by the viewer of the video.
 本開示の一側面の情報処理装置は、事象情報生成部と、決定部と、を備える。前記事象情報生成部は、映像を撮影する機器の動きに関する事象の発生に関する第1情報を生成する。前記決定部は、前記第1情報に基づき、前記映像の表示速度を下げる時点を決定する。 The information processing device on one aspect of the present disclosure includes an event information generation unit and a determination unit. The event information generation unit generates first information regarding the occurrence of an event related to the movement of the device that captures the image. The determination unit determines a time point at which the display speed of the image is reduced based on the first information.
 これにより、前記映像を表示する表示装置が、前記時点における映像から、表示速度を下げることが可能となり、映像の視聴者が感じる酔いが軽減される。 As a result, the display device that displays the image can reduce the display speed from the image at the time point, and the sickness felt by the viewer of the image is reduced.
 なお、映像を表示する装置は、前記情報処理装置と同一でもよいし、前記情報処理装置とは別の装置でもよい。すなわち、映像を下げる時点を決定する装置と、決定された時点において映像速度を遅くしつつ映像を表示する装置と、が同じであってもよいし、別々でであってもよい。 The device for displaying images may be the same as the information processing device, or may be a device different from the information processing device. That is, the device that determines the time point at which the image is lowered and the device that displays the image while slowing down the image speed at the determined time point may be the same or may be separate.
 また、前記事象情報生成部は、前記事象の発生時点を予測して予測された発生時点を前記第1情報に含める予測部を備え、前記決定部は、前記予測された発生時点に基づき、前記表示速度を下げる時点を決定する、といった構成も取り得る。なお、前記決定部は、前記予測された発生時点を、前記表示速度を下げる時点としてもうよい。 Further, the event information generation unit includes a prediction unit that predicts the occurrence time of the event and includes the predicted occurrence time in the first information, and the determination unit is based on the predicted occurrence time. , The time point for lowering the display speed may be determined. In addition, the determination unit may set the predicted occurrence time point as a time point for lowering the display speed.
 また、前記事象情報生成部は、検知部を備えていてもよい。前記検知部は、前記事象の発生を検知して前記事象の発生の検知時点を前記第1情報に含めてもよい。そして、前記決定部は、前記検知時点を、前記表示速度を下げる時点としてもよい。 Further, the event information generation unit may include a detection unit. The detection unit may detect the occurrence of the event and include the detection time point of the occurrence of the event in the first information. Then, the determination unit may set the detection time point as the time point for lowering the display speed.
 また、前記事象情報生成部は、前記事象の発生を検知して前記事象の発生の検知時点を前記第1情報に含める検知部を備え、前記決定部は、前記検知時点を、前記表示速度を下げる時点とする、といった構成も取り得る。 Further, the event information generation unit includes a detection unit that detects the occurrence of the event and includes the detection time of the occurrence of the event in the first information, and the determination unit includes the detection time. It is possible to take a configuration such as setting the time when the display speed is lowered.
 また、前記事象情報生成部は、前記事象または前記事象とは別の事象の発生を検知する検知部をさらに備え、前記決定部は、前記検知に基づき、前記表示速度を下げる時点を修正する、といった構成も取り得る。 Further, the event information generation unit further includes a detection unit that detects the occurrence of the event or an event other than the event, and the determination unit determines a time point at which the display speed is reduced based on the detection. It is possible to take a configuration such as modifying it.
 また、前記検知部は、前記機器または前記機器が取り付けられた物体の振動、旋回、および加速度の少なくともいずれかの発生を検知し、前記決定部は、前記振動、旋回、および加速度の少なくともいずれかの検知に基づき、前記表示速度を下げる時点を修正する、といった構成も取り得る。 Further, the detection unit detects at least one of vibration, rotation, and acceleration of the device or an object to which the device is attached, and the determination unit detects at least one of the vibration, rotation, and acceleration. Based on the detection of, the time point at which the display speed is lowered can be corrected.
 また、前記予測部は、前記機器または前記機器が取り付けられた物体の移動予定経路に関する第2情報に基づき、前記事象の発生時点を予測する、といった構成も取り得る。 Further, the prediction unit may have a configuration in which the time point of occurrence of the event is predicted based on the second information regarding the planned movement route of the device or the object to which the device is attached.
 また、前記事象は、前記機器または前記機器が取り付けられた物体が受ける振動であり、前記予測部は、前記第2情報に含まれる前記移動予定経路の路面状態に基づき、前記事象の発生時点を予測する、といった構成も取り得る。 Further, the event is a vibration received by the device or an object to which the device is attached, and the prediction unit generates the event based on the road surface condition of the planned movement route included in the second information. It is possible to take a configuration such as predicting the time point.
 Further, a configuration may be adopted in which the prediction unit further predicts the time at which the event ends, and the determination unit determines the period from the predicted time of occurrence to the predicted time of ending as a period during which the display speed is lowered, and determines a period during which the display speed is raised outside that lowered period.
 Further, a configuration may be adopted in which the rate of change of the display speed relative to the standard speed during the raised period is smaller than the absolute value of the rate of change of the display speed relative to the standard speed during the lowered period.
 Further, a configuration may be adopted in which the event information generation unit further includes a detection unit that detects the occurrence of the event or of an event other than the event, and the determination unit corrects, based on the detection, at least one of the lowered period and the raised period.
 Further, a configuration may be adopted in which the detection unit further detects a voice, and the determination unit determines, based on the time at which the voice is detected, a time point at which the display speed, or the output speed of the audio corresponding to the video, is raised.
 Further, a configuration may be adopted in which the determination unit further determines, based on the first information, a time point at which at least one visibility-related setting among the settings for displaying the video is changed in a direction that degrades the visibility.
 Further, a configuration may be adopted that further includes a receiving unit that receives fourth information regarding a deviation between the actual position of the device or of the object to which the device is attached and the planned movement route of the device or the object, and in which the determination unit further determines, based on the fourth information, the time point at which the display speed is lowered.
 Further, a configuration may be adopted that further includes the device, in which the device raises the rate at which the video is captured based on the first information.
 Further, a configuration may be adopted that further includes the device; a route determination unit that determines a planned movement route to a given destination based on fifth information regarding the device's own position and surrounding conditions; and a movement control unit that moves the device along the planned movement route.
 In another aspect of the present disclosure, a video display system is provided that includes a mobile device to which an image sensor for capturing video is attached, an information processing device, and first and second display devices that display the video. The information processing device includes an event information generation unit that generates first information regarding the occurrence of an event related to the movement of the device capturing the video, a determination unit that determines, based on the first information, a time point at which the display speed of the video is lowered, and a transmission unit that transmits information about the determined time point to at least the first display device. While displaying the video, the first display device lowers the display speed of the video at that time point, and the second display device keeps the display speed of the video constant before and after that time point.
 In another aspect of the present disclosure, a video display method is provided that includes the steps of generating first information regarding the occurrence of an event related to the movement of a device capturing a video; determining, based on the first information, a time point at which the display speed of the video is lowered; starting display of the video; and lowering the display speed of the video at that time point.
 FIG. 1 shows a configuration example of the video display system according to the first embodiment.
 FIG. 2 is a diagram explaining video display according to the first embodiment.
 FIG. 3 is a diagram explaining estimation of the route condition.
 FIG. 4 shows an example of an environment map.
 FIG. 5 is a flowchart of display-speed determination processing based on prediction.
 FIG. 6 is a flowchart of display-speed update processing based on detection.
 FIG. 7 shows a configuration example of the video display system according to the second embodiment.
 Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
(First Embodiment)
 FIG. 1 is a diagram showing a configuration example of the video display system according to the first embodiment. The video display system in the example of FIG. 1 includes an information processing device 100 and a display device 200. The information processing device 100 includes a receiving unit 110, a sensor 120, a movement information generation unit 130, a movement control unit 140, an event information generation unit 150, a display information generation unit (determination unit) 160, and a transmission unit 170. The display device 200 includes an input receiving unit 210, a transmission unit 220, a receiving unit 230, and a display control unit 240.
 The video display system of the present disclosure displays video captured by the sensor 120. The sensor 120 is assumed to be an image sensor or the like, and can also be regarded as a device that captures video. The sensor 120 is built into, or externally attached to, a moving body, and capture is assumed to continue while the moving body is moving. The displayed video may therefore contain blur and other disturbances caused by the movement.
 If such video is displayed as-is, it tends to induce motion sickness in the viewer. The video display system therefore mitigates viewer sickness by adjusting the display speed (in other words, the playback speed) of the video.
 The first embodiment shows an example in which the video display system is composed of the information processing device 100 and the display device 200. In the first embodiment, the information processing device 100 is a moving body that includes the sensor 120 and moves while capturing video; it can therefore also be regarded as a mobile device and an imaging device. The display device 200 is located away from the information processing device 100 and receives and displays the video from the information processing device 100 via wireless communication or the like.
 For example, a system that displays video captured by a mobile robot on a remote monitor, head-mounted display, or the like corresponds to the video display system according to the first embodiment.
 FIG. 2 is a diagram explaining video display according to the first embodiment. The information processing device 100, shown as a mobile robot, moves while avoiding an obstacle 300. The planned movement route of the information processing device 100 is indicated by an arrow 401, and the display device 200 is shown as a head-mounted display. The bars 500 arranged to the right of the display device 200 conceptually represent the frames of the video displayed on the display device 200.
 As the arrow 401 shows, the information processing device 100 makes a sharp turn to avoid the obstacle 300. Video captured during this sharp turn contains distortion, which is a cause of motion sickness. Lowering the display speed when displaying the video captured during this period therefore prevents the viewer from becoming sick. Accordingly, in the example of FIG. 2, the interval between the frames corresponding to this period (the frames from the first time point to the second time point) is wider than the initial interval.
 The display speed of the video may be made lower than the display speed in effect before the time point at which it is lowered, or lower than a predetermined standard speed. For example, if the display speed was 1.5x before that time point, it may be reduced to 1.2x from that time point, or to 0.8x, which is slower than the standard 1.0x speed. The amount by which the display speed is lowered may be set as appropriate.
 After avoiding the obstacle 300, the information processing device 100 moves without turning sharply. The display speed is therefore restored so that the viewer does not find the display speed unnatural. Accordingly, in the example of FIG. 2, the interval between the frames after the sharp-turn period (the frames after the second time point) is returned to the initial interval. In FIG. 2, the period during which the display speed is lowered is denoted the display delay period.
 Although the display speed is simply restored in the example of FIG. 2, it need not be restored all at once; it may instead be raised gradually back to the standard speed. The amount of each incremental increase may also be set as appropriate.
 Note that if the display speed is lowered many times, the delay until captured video is displayed gradually becomes large, which can cause problems during real-time viewing. For example, when the captured video is viewed in real time and instructions are issued based on it, the instructions may arrive too late for the desired action to be realized: a pause instruction issued to the information processing device 100 may take effect only after the device has passed the position where it was supposed to stop, or an instruction to turn the sensor 120 toward an object along the route may be executed only after the sensor 120 has passed the object. A period during which the display speed is raised may therefore be determined in order to eliminate the delay caused by lowering the display speed.
 The display speed of the video is raised some time after it has been restored, so that the delay incurred during the display delay period is eliminated. In the example of FIG. 2, the span from the time the display speed is raised to the time it is returned is denoted the recovery period; it can also be called the display speed increase period. Because the display speed is higher, the interval between frames in the recovery period (the frames after the third time point) is narrower than the initial interval. Here too, the amount by which the display speed is raised may be set as appropriate.
 However, raising the display speed too much may cause the user discomfort. It is therefore preferable that the amount by which the display speed is raised be smaller than the amount by which it was lowered; in other words, the rate of change of the display speed relative to the standard speed during the recovery period should be smaller than the absolute value of the rate of change relative to the standard speed during the display delay period. For example, if the display speed is set to 0.8x during the display delay period, the rate of change relative to the standard speed is -0.2, so the rate of change during the recovery period should be kept below 0.2, preferably setting the speed between 1.0x and 1.2x. Thus, to recover the video delay caused by the period of reduced display speed, it is preferable to avoid raising the display speed abruptly and to make the recovery period longer than the display delay period. Alternatively, multiple recovery periods may be provided.
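As an illustrative numerical sketch of the relationship above (not part of the disclosure; the function name and its defaults are hypothetical), the following computes how long a recovery period must last to absorb the delay accumulated during a display delay period, while keeping the increase rate relative to the standard speed below the absolute value of the decrease rate:

```python
def recovery_period(delay_period: float, slow_speed: float,
                    standard_speed: float = 1.0,
                    max_increase_rate: float = 0.2) -> float:
    """Return the length (in wall-clock seconds) of a recovery period that
    makes up the delay accumulated while playing at slow_speed for
    delay_period seconds.

    The increase rate over the standard speed is capped at max_increase_rate
    and kept strictly below the decrease rate, per the preference above.
    """
    decrease_rate = standard_speed - slow_speed          # e.g. 1.0 - 0.8 = 0.2
    # Choose an increase rate below both the cap and the decrease rate.
    increase_rate = min(max_increase_rate, decrease_rate) * 0.5
    # Seconds of video left behind during the slowdown.
    accumulated_delay = delay_period * decrease_rate
    # At the raised speed we regain increase_rate seconds of video per
    # wall-clock second, so:
    return accumulated_delay / increase_rate
```

With a 10-second delay period at 0.8x, the deficit is 2 seconds of video; catching up at 1.1x takes 20 seconds, so the recovery period comes out longer than the display delay period, as preferred.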
 Although a robot is used in the example of FIG. 2, capture and movement may be handled by separate devices, and an automobile, bicycle, motorcycle, or amusement ride equipped with an image sensor may be used instead of the robot. Further, as long as the motion of the information processing device 100 can be predicted and detected, a person may carry the information processing device 100 responsible for capture while moving.
 To adjust the display speed of the video, the time point at which the display speed changes must be determined. In the present embodiment, the information processing device 100 determines that time point, and the display device 200 adjusts the display speed when displaying the video at that point. Specifically, the receiving unit 230 of the display device 200 receives, along with the video, information indicating the determined time point, and the display control unit 240 of the display device 200 adjusts the display speed of the video based on that information when displaying it.
 For example, suppose the information processing device 100 decides to lower the display speed 10 seconds after the start of capture. In this case, the receiving unit 230 receives, together with the video, information that the display speed is to be lowered 10 seconds from the start of the video. The display control unit 240 starts displaying the video and lowers the display speed 10 seconds in. In this way, the display speed of the video is changed. Note that display of the video may begin before the information about the time point at which to lower the display speed is received.
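The handling by the display control unit 240 described above can be sketched as follows (an illustrative example only; the function name and arguments are hypothetical). Each frame's video timestamp is mapped to the wall-clock time at which it should be shown, with playback slowed from the designated time point:

```python
def schedule_frames(frame_times, change_point,
                    normal_speed=1.0, slow_speed=0.8):
    """Map each frame's video timestamp (seconds from the start of the
    video) to the wall-clock time at which it should be displayed,
    lowering the playback speed at change_point."""
    schedule = []
    for t in frame_times:
        if t <= change_point:
            wall = t / normal_speed
        else:
            # Frames after the change point play at the slower speed,
            # so their display times stretch out.
            wall = change_point / normal_speed + (t - change_point) / slow_speed
        schedule.append(wall)
    return schedule
```

For frames at 0, 5, 10, 15, and 20 seconds with the change at 10 seconds and a 0.8x slow speed, the scheduled display times become 0, 5, 10, 16.25, and 22.5 seconds: the frame intervals after the change point widen, like the bars 500 after the first time point in FIG. 2.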
 Note that the video display system is not limited to the configuration of FIG. 1. Depending on device performance, installation environment, cost, safety, intended use, and so on, the processing of the information processing device 100 may be carried out by the display device 200 or distributed across devices not shown.
 The processing of the information processing device 100 will be described together with the internal configuration shown in FIG. 1. As noted above, the information processing device 100 of the present embodiment is also a mobile device; in the example of FIG. 1, it is configured as an autonomous mobile device capable of receiving a destination and moving to it autonomously.
 The components within the information processing device 100 and the display device 200 may be consolidated or further distributed, and components not shown or described may also be present. For example, the information processing device 100 and the display device 200 may each include one or more memories or storages for holding the information needed for processing.
 The receiving unit 110 receives, from devices external to the information processing device 100, information needed for the processing of the information processing device 100, for example information needed for the information processing device 100 to move, such as a map of its surroundings and the coordinates of a movement destination.
 The information received is not particularly limited. For example, a user of the display device 200 may hold a voice conversation, via the information processing device 100, with a person near it. In such a case, the input receiving unit 210 of the display device 200 accepts the voice and the transmission unit 220 of the display device 200 transmits packets carrying it; the receiving unit 110 receives those packets. The actual position of the information processing device 100 may also be received from an external device.
 The sensor 120 is a sensor mounted on the information processing device 100. The sensor 120 includes at least an image sensor for capture, such as an RGB (red, green, and blue) sensor. Sensors other than the image sensor may also be included in order to collect information for movement or to detect the motion of the body of the information processing device 100, for example a ToF (Time of Flight) sensor, LiDAR (Light Detection and Ranging), radar, sonar, an IMU (inertial measurement unit), GPS (Global Positioning System), or an odometer.
 The movement information generation unit 130 generates the information the information processing device 100 needs in order to move, referred to in this disclosure as movement information. The movement information includes information about objects around the mobile device and the state of the surrounding environment, referred to in this disclosure as the environment map, as well as information about the route the information processing device 100 plans to travel.
 The movement information generation unit 130 estimates the environment around the mobile device based on information from the sensor 120, and builds an environment map of the surroundings by combining the estimation results over time. The environment map may be built from scratch, or based on a map of the mobile device's surroundings received via the receiving unit; for example, detected obstacles may be placed on such a map.
 In the example of FIG. 1, to create the environment map, the movement information generation unit 130 includes an object detection unit 131, an environment state estimation unit 132, a self-position estimation unit 133, and an environment map creation unit 134.
 The object detection unit 131 detects objects around the mobile device, that is, obstacles for the mobile device, based on information from the sensor 120 and the like; for example, obstacles may be detected by acquiring a depth image from the sensor 120. These obstacles appear on the environment map and are used to determine the movement route. In the present embodiment, the environment map is assumed to be updated even while the mobile device is moving, so that moving obstacles can also be handled.
 The environment state estimation unit 132 estimates the environment around the mobile device from the video and other outputs of the sensor 120; for example, it may detect unevenness of the road surface shown in the video, or the inclination of the road surface.
 The movement information is used not only for the information processing device 100 to move, but also to predict when video disturbance will begin and end. For example, a poor road surface makes the mobile device vibrate, which disturbs the video, so it is preferable to lower the display speed when displaying video captured while traveling over a poor road surface. To predict the period spent traveling over such a surface, the movement information preferably includes the road surface condition and the like.
 FIG. 3 is a diagram explaining estimation of the route condition. The figure shows an image of a road surface; each dotted square indicates that the area it encloses has unevenness above a predetermined value. Techniques for judging the degree of unevenness of an object's surface from an image are known, and such techniques may be used.
 For example, by performing machine learning with road-surface images as input data and the vibration measured while traveling over those surfaces as ground-truth data, a neural-network-based model can be generated that predicts vibration from a road-surface image. The environment state estimation unit 132 may use such a model to estimate the condition from road-surface images.
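The paragraph above assumes a learned neural-network model. As a crude stand-in for illustration only (the function names and threshold are hypothetical, and this is not the disclosed method), local image-gradient magnitude can serve as a roughness proxy for flagging the dotted-square regions of FIG. 3:

```python
import numpy as np

def predict_vibration_level(road_patch: np.ndarray) -> float:
    """Crude stand-in for a learned model: use local brightness variation
    as a roughness proxy. A higher score suggests more vibration."""
    gray = road_patch.astype(float)
    # Gradient magnitude approximates surface unevenness in the image.
    gy, gx = np.gradient(gray)
    return float(np.mean(np.hypot(gx, gy)))

def mark_rough_regions(patches, threshold=10.0):
    """Return indices of patches whose predicted vibration exceeds the
    threshold, corresponding to the dotted squares in FIG. 3."""
    return [i for i, p in enumerate(patches)
            if predict_vibration_level(p) > threshold]
```

A perfectly flat patch scores zero, while a noisy (uneven-looking) patch scores high and is flagged.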
 The self-position estimation unit 133 estimates the device's own position based on information from the sensor 120. For example, techniques are known for determining a position on a map from images of one's surroundings; such a technique may be used to locate, on the environment map, the position from which the image from the sensor 120 would be obtained.
 The environment map creation unit 134 creates the environment map based on this information. FIG. 4 shows an example of an environment map. The dotted lines in FIG. 4 represent the actual map, the solid lines represent the portion derived by the information processing, and the double circle indicates the device's own position. The environment map may thus represent only part of the actual map, with the derived portion growing as the device moves. Although not shown in FIG. 4, the road surface condition illustrated in FIG. 3 is also included in the environment map.
 The route determination unit 135 determines the planned movement route based on the environment map and the destination. In the example of FIG. 4, the planned movement route 402 is indicated by an arrow. When the environment map is updated through movement, the newly derived portion grows, so the planned movement route may be updated as well.
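Route determination over an occupancy-grid environment map can be sketched, for illustration, with a plain breadth-first search (a simplification with hypothetical names; the disclosure does not specify the planning algorithm, and a real planner would also weight cells by road surface condition):

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

When the environment map is updated (an obstacle appears or a new area is derived), re-running the planner on the updated grid yields the updated planned route.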
 The destination may be specified externally via the receiving unit 110, or may be predetermined. For example, when a surveillance robot patrols back and forth over a fixed course, the destination is fixed, but because the positions of obstacles and the state of the surrounding environment can differ each time, the planned movement route can also differ each time.
 The created environment map may be transmitted to the display device 200 via the transmission unit 170 and displayed by the display device 200, allowing the user to specify the destination on the environment map via the display device 200.
 The movement control unit 140 performs control for actually moving, based on the movement information. Specifically, the action determination unit 141 determines the device's actions based on the environment map showing its own position and the route information, and the body control unit 142 controls the body to carry out the determined actions. Since processing for autonomous movement is well known, known techniques may be used, and because it is unrelated to adjusting the display of the video, a detailed description of the movement control unit 140 is omitted.
 The event information generation unit 150 generates information about the occurrence of events related to the movement of the device capturing the video, as well as information about the end of such events. Based on this information, the display information generation unit 160 determines the time points at which the display speed is changed; the information indicating the determined time points is referred to as display information.
 In the present embodiment, the event information generation unit 150 includes a prediction unit 151 and a detection unit 152. The prediction unit 151 predicts registered events based on the movement information. In other words, the prediction unit 151 predicts, based on the environment map, the planned movement route, and so on, events that may arise from the movement of the capturing sensor 120 or of the object to which it is attached. For example, in the region enclosed by the dash-dot frame around part of the planned movement route 402 in FIG. 4, the curve is sharp, so the image sensor is expected to turn sharply; the prediction unit 151 therefore predicts that an event will occur while traveling through that region. In this way, the prediction unit 151 predicts, based on the planned movement route, the occurrence of events related to the motion of the sensor 120 that produce causes of motion sickness, such as sharp turns and vibration.
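Prediction of a sharp-turn event from the planned movement route can be sketched, for illustration, by thresholding the heading change between consecutive route segments (the function name and threshold are hypothetical; the disclosure does not specify the criterion):

```python
import math

def predict_turn_events(route, heading_threshold_deg=45.0):
    """Scan a planned route (a list of (x, y) waypoints) and flag the
    waypoint indices where the heading change between consecutive segments
    exceeds the threshold, i.e. where a sharp turn is expected."""
    events = []
    for i in range(1, len(route) - 1):
        (x0, y0), (x1, y1), (x2, y2) = route[i - 1], route[i], route[i + 1]
        h1 = math.atan2(y1 - y0, x1 - x0)   # heading of incoming segment
        h2 = math.atan2(y2 - y1, x2 - x1)   # heading of outgoing segment
        change = abs(math.degrees(h2 - h1))
        change = min(change, 360.0 - change)  # wrap to [0, 180]
        if change > heading_threshold_deg:
            events.append(i)
    return events
```

Combined with an estimate of travel speed along the route, each flagged waypoint can be converted into a predicted time of occurrence for the event.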
 The detection unit 152 detects registered events based on information obtained from the sensor 120 while the sensor 120 or the object is moving. In other words, the detection unit 152 detects events that actually occur during movement.
 Note that, depending on the performance of the device, the installation environment, cost, safety, intended use, and the like, only one of the prediction unit 151 and the detection unit 152 may be provided.
 Note also that the events predicted by the prediction unit 151 and the events detected by the detection unit 152 may differ. That is, there may be events that are predicted by the prediction unit 151 but not detected by the detection unit 152, events that are not predicted by the prediction unit 151 but are detected by the detection unit 152, and events that are both predicted by the prediction unit 151 and detected by the detection unit 152.
 Events that can be predicted or detected and that cause motion sickness are registered as targets of prediction or detection. For example, if the sensor 120 vibrates due to movement and the video blurs, motion sickness is induced; vibration is therefore a cause of sickness and can be registered as a target of prediction or detection. Likewise, when the sensor 120 turns as it moves, the video sways, which induces sickness; turning is therefore also a cause of sickness and can be registered as a target of prediction or detection. It is also known that video captured during sudden acceleration or deceleration causes sickness, so acceleration or deceleration whose magnitude is equal to or greater than a certain value is also a cause of sickness and can be registered as a target of prediction or detection. In the present embodiment, information on the occurrence and end of events is generated in this way for events related to the movement of the sensor 120.
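As an illustration only, registering such events as prediction or detection targets could be sketched as a registry of threshold predicates over sensor samples. The event names, sample field names, and threshold values below are assumptions for illustration and do not appear in the disclosure.

```python
# Hypothetical sketch of a registry of sickness-inducing events.
# Field names and threshold values are illustrative assumptions only.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RegisteredEvent:
    name: str                              # e.g., "vibration", "turn"
    is_occurring: Callable[[dict], bool]   # predicate over one sensor sample

def build_default_registry() -> Dict[str, RegisteredEvent]:
    """Register the sickness-inducing events named in the text."""
    return {
        "vibration": RegisteredEvent(
            "vibration",
            lambda s: s.get("vibration_amplitude", 0.0) > 0.5),
        "turn": RegisteredEvent(
            "turn",
            lambda s: abs(s.get("yaw_rate", 0.0)) > 0.8),
        "acceleration": RegisteredEvent(
            "acceleration",
            lambda s: abs(s.get("acceleration", 0.0)) > 2.0),
    }

def detect_events(sample: dict,
                  registry: Dict[str, RegisteredEvent]) -> List[str]:
    """Return the names of registered events occurring in one sample."""
    return sorted(n for n, ev in registry.items() if ev.is_occurring(sample))
```

A detection unit built this way simply evaluates every registered predicate against each incoming sample.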
 Note that when the object to which the sensor 120 is attached moves, the sensor 120 also moves. Therefore, when the movement of the object to which the sensor 120 is attached is predicted or detected, the movement of the sensor 120 can be said to be predicted or detected. An object in which the sensor 120 is built and an object on which the sensor 120 is mounted can also be regarded as objects to which the sensor 120 is attached.
 For example, the time points at which vibration starts and ends can be predicted based on the road surface condition of the planned movement route. Since vibration occurs in route sections where the road surface is poor, it suffices to predict the time points at which such a section is entered and exited.
 For example, the time points at which a turn starts and ends can be predicted based on the shape of the planned movement route. Since turning becomes sharper as the bend in the planned movement route becomes sharper, it suffices to predict the time points at which a section where a value indicating the degree of bending of the route (for example, the curvature) is large is entered and exited.
 For example, the start and end time points of an event in which the absolute value of the acceleration is equal to or greater than a certain value can be predicted based on the planned movement route. For example, when the gradient of the planned movement route along the traveling direction is large, the acceleration also changes abruptly, so it suffices to predict the time point at which a position with a large gradient is reached.
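The three route-based predictions above share one pattern: flag contiguous route sections where some geometric quantity exceeds a threshold, and take each section's entry and exit points as the predicted occurrence and end time points. A minimal sketch for the turning case follows; the three-point discrete-curvature estimate and the threshold value are assumptions for illustration, not taken from the disclosure.

```python
# Illustrative sketch: predict turn-event intervals along a planned route
# by flagging contiguous sections whose discrete curvature exceeds a
# threshold. The curvature estimate and threshold are assumptions.

import math
from typing import List, Tuple

def discrete_curvature(p0, p1, p2) -> float:
    """Curvature of the circle through three 2-D points (0 if collinear)."""
    ax, ay = p0; bx, by = p1; cx, cy = p2
    area2 = abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
    a = math.dist(p1, p2); b = math.dist(p0, p2); c = math.dist(p0, p1)
    if a * b * c == 0:
        return 0.0
    return 2.0 * area2 / (a * b * c)

def predict_turn_intervals(path: List[Tuple[float, float]],
                           threshold: float = 0.5) -> List[Tuple[int, int]]:
    """Return (entry_index, exit_index) pairs of sharp-curve sections."""
    sharp = [discrete_curvature(path[i - 1], path[i], path[i + 1]) > threshold
             for i in range(1, len(path) - 1)]
    intervals, start = [], None
    for i, s in enumerate(sharp, start=1):
        if s and start is None:
            start = i
        elif not s and start is not None:
            intervals.append((start, i - 1)); start = None
    if start is not None:
        intervals.append((start, len(path) - 2))
    return intervals
```

The same interval-extraction loop would apply to a road-roughness map (vibration) or a gradient profile (acceleration), with only the per-point quantity changed.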
 Detection of these events may be performed by video analysis, or they may be detected by a sensor 120 such as an acceleration sensor.
 Note that it is preferable to adjust the display speed so that the viewer of the video feels as little discomfort as possible. For this reason, events other than those that cause motion sickness may also be targets of prediction or detection. For example, since the detection unit 152 detects events that occur while the sensor 120 is moving, the detection targets are not limited to events caused by the movement of the sensor 120. For example, the detection unit 152 may be a device that detects voices.
 If the video remains delayed even after a voice is detected, the voice is difficult to follow. Therefore, it may be decided to raise the display speed of the video when a voice is detected. That is, in that case, the determination unit determines the time point at which the voice is detected as the time point at which the display speed of the video is raised. In this case as well, the amount of the increase may be determined as appropriate: the speed may be returned to the standard speed, or raised to a speed still slower than the standard speed.
 Note that the video may include audio. When the display speed of the video and the output speed of the audio can be controlled separately, the display speed of the video may be kept lowered while the output speed of the audio is raised. That is, in that case, the determination unit determines the time point at which the voice is detected as the time point at which the output speed of the audio is raised.
 It is known that the frequency components contained in a voice are distinctive, unlike those of noise. It is also known that the ratio of the periodic component to the aperiodic component of a voice is distinctive, unlike that of noise. Using these properties, a voice can be distinguished from noise. In addition, a person within a predetermined distance of the machine body can be detected by object detection. Furthermore, if a plurality of microphones are provided, the direction from which a voice arrives can be estimated from the differences in the times at which the voice reaches each microphone. By combining these, it can be determined whether a sound acquired by the microphones is a voice from a person within the predetermined distance. In this way, it may be detected that a person is speaking to the information processing device 100.
 Alternatively, sound may be received from the display device 200 or the like via the receiving unit, and it may be detected whether the sound contains a human voice. Moreover, not only voices but also words may be detected. When a detected word is a predetermined keyword, the time point at which the keyword is detected may be used as the time point at which the display speed is changed. Techniques for detecting keywords are used in smartphones, portable automatic translators, and the like, and such a technique may be used.
 The display information generation unit (determination unit) 160 determines the time point at which the display speed of the video is lowered based on the information regarding the occurrence of the events targeted for prediction or detection. For example, the display information generation unit 160 may use the predicted occurrence time point or the detected occurrence time point indicated by that information as the time point at which the display speed of the video is lowered. The two time points need not coincide; for example, a time point immediately before the predicted or detected time point may be used.
 The display information generation unit 160 also determines the time point at which the display speed of the video is raised based on the information regarding the end of the event. For example, the display information generation unit 160 may use the predicted end time point or the detected end time point of the event indicated by that information as the time point at which the display speed of the video is returned to the standard speed. Again, the time points need not coincide; for example, a time point immediately after the predicted or detected end time point may be used.
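The mapping described in the two preceding paragraphs, from an event's occurrence and end time points to display-speed change points, could be sketched as follows. The record shapes and the reduced speed value are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch of the determination unit's mapping from event
# intervals to display-speed change points. Speed values are assumed.

from typing import List, Tuple

STANDARD_SPEED = 1.0
REDUCED_SPEED = 0.5  # assumed reduced playback rate

def display_change_points(events: List[Tuple[float, float]]
                          ) -> List[Tuple[float, float]]:
    """For each (occurrence_time, end_time) event, emit two change points:
    lower the display speed at the occurrence and restore it at the end."""
    changes = []
    for start, end in sorted(events):
        changes.append((start, REDUCED_SPEED))   # lower at occurrence
        changes.append((end, STANDARD_SPEED))    # restore at end
    return changes
```

Shifting each change point slightly earlier or later, as the text permits, would be a one-line offset on `start` and `end`.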
 The time points related to the change of the display speed determined by the display information generation unit 160 are transmitted, as display information, to the display device 200 via the transmission unit 170. This enables the display device 200 to change the display speed.
 Note that, in the present embodiment, the display information generation unit 160 includes a change determination unit 161 and a change update unit 162. In the present embodiment, the change determination unit 161 determines the change time points based on the prediction result of the prediction unit 151; that is, although an event has not actually occurred, its occurrence is predicted and the change time points are determined in advance. The change update unit 162 determines the change time points based on the detection result of the detection unit 152; that is, the change time points are determined after it is detected that an event has actually occurred.
 In the present embodiment, it is assumed that the time points determined by the change determination unit 161 are revised by the change update unit 162. For example, there may be a case where the prediction unit 151 predicts that vibration will occur but the magnitude of the vibration detected by the detection unit 152 is smaller than a predetermined reference value. In such a case, the time points determined by the change determination unit 161 may be deleted by the change update unit 162. Further, when information regarding the actual position of the sensor 120, or the deviation between the actual position and the planned movement route, is received, the time points determined by the change determination unit 161 may be revised by the change update unit 162 based on that information. Conversely, a time point that the change determination unit 161 did not determine to be a change time point may be determined to be one by the change update unit 162. Because the change time points determined based on the prediction result are thus updated (revised) based on the detection result, this unit is named the change update unit 162. Alternatively, the prediction unit 151 may perform re-prediction based on the detection result. When only one of prediction and detection is performed, the change update unit 162 may be omitted.
 The change determination unit 161 may also determine the above-described period during which the display speed is raised. However, if an event occurs while the playback speed is being raised and the display speed must then be lowered, the difference in display speed becomes large, which may induce motion sickness. Therefore, to avoid such a risk as much as possible, it is preferable to raise the display speed during a period in which no event is predicted to occur.
 Accordingly, it is preferable that the prediction unit 151 predict the occurrence and end time points of events and notify the change determination unit 161 of them, and that the change determination unit 161 identify, based on the predicted occurrence and end time points, periods in which no event occurs and decide to raise the display speed within such an identified period. In other words, it is preferable to determine the period of increased display speed outside the period from a predicted occurrence time point to the corresponding predicted end time point.
 Note that the display information generation unit 160 may decide to change display settings of the video other than the display speed in order to suppress motion sickness in the viewer. For example, it is known that sickness is less likely to occur when the visibility of the video is poorer. The display information generation unit 160 may therefore decide to change at least one visibility-related setting among the display settings of the video in a direction that degrades visibility. For example, it may decide to lower the brightness, contrast, and so on of the video at the same time the display speed is lowered.
 Further, in order to reduce the viewer's sense of incongruity caused by lowering the display speed of the video, it is preferable to raise the capture rate during the period in which the display speed is lowered. For this purpose, the sensor 120 may raise the capture rate from the predicted or detected occurrence time point.
 Next, the flow of the processing for determining the time points at which the display speed of the video is changed will be described. FIG. 5 is a flowchart of the prediction-based display speed determination processing. This flow is assumed to be executed at regular intervals during capture.
 The movement information generation unit 130 acquires new sensor 120 information and updates the environment map based on that information (S101). The movement information generation unit 130 also updates the planned movement route based on the updated environment map (S102). If the environment map and the planned movement route have not yet been created, they are generated.
 The prediction unit 151 predicts the occurrence and end of events within a predetermined time based on the environment map and the planned movement route (S103). The predetermined time may be adjusted as appropriate. When a period in which an event occurs (hereinafter referred to as an event occurrence period) exists (YES in S104), the change determination unit 161 decides to lower the display speed during the event occurrence period, setting the predicted occurrence time point as the time point for lowering the speed and the predicted end time point as the time point for restoring it (S105). The change determination unit 161 also adds the length of the event occurrence period to the total delay time (S106). The total delay time means the total delay of the video display caused by lowering the display speed.
 After the processing of S106, or when no event occurrence period exists (NO in S104), the recovery conditions for determining whether to raise the display speed are checked. There may be a plurality of recovery conditions. Since there is no need to raise the display speed when no delay exists, the condition that the total delay time is not zero is included as at least one of the recovery conditions. The recovery conditions may also include the condition that the length of a period in which no event occurs is equal to or greater than a predetermined length. This prevents the display speed from fluctuating repeatedly within a short period and causing the viewer discomfort.
 When the recovery conditions are satisfied (YES in S107), the change determination unit 161 determines a planned recovery period within a period in which no event occurs (S108). The planned recovery period means the period from the time point at which the display speed is scheduled to be raised to the time point at which the raised display speed is scheduled to be restored. The change determination unit 161 then subtracts the length of the planned recovery period from the total delay time (S109). After the processing of S109, or when the recovery conditions are not satisfied (NO in S107), this flow ends.
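One iteration of the flow of FIG. 5 (S103 to S109) might be sketched as below. The two recovery conditions (nonzero total delay and a long-enough event-free gap) follow the text, but the data shapes, the minimum-gap value, and the placement of the recovery period right after the last predicted event are assumptions for illustration.

```python
# Illustrative sketch of one iteration of the FIG. 5 flow (S103-S109).
# Data shapes and the minimum-gap value are assumptions, not the
# disclosure's implementation.

from typing import List, Optional, Tuple

MIN_EVENT_FREE_GAP = 5.0  # assumed minimum event-free length for recovery (s)

def predict_iteration(predicted_events: List[Tuple[float, float]],
                      horizon: Tuple[float, float],
                      total_delay: float):
    """Return (change_points, planned_recovery, new_total_delay)."""
    change_points = []
    for start, end in predicted_events:            # S104-S105
        change_points.append((start, "lower"))
        change_points.append((end, "restore"))
        total_delay += end - start                 # S106

    # S107: recovery conditions - nonzero delay and a long-enough free gap.
    recovery: Optional[Tuple[float, float]] = None
    free_start = max([e for _, e in predicted_events], default=horizon[0])
    gap = horizon[1] - free_start
    if total_delay > 0 and gap >= MIN_EVENT_FREE_GAP:
        length = min(total_delay, gap)             # S108
        recovery = (free_start, free_start + length)
        total_delay -= length                      # S109
    return change_points, recovery, total_delay
```

Repeating this iteration at regular intervals during capture, as the text describes, incrementally builds up the display information.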
 By repeatedly executing this flow during the capture period in this way, the time points at which the display speed of the video is changed are determined.
 FIG. 6 is a flowchart of the detection-based display speed update processing. This flow is performed independently of the flow of FIG. 5 and may therefore be performed in parallel with it.
 The detection unit 152 detects, based on the sensor 120 data, whether an event is occurring (S201). That is, it may detect that an event is occurring, or it may detect that none is occurring. The change update unit 162 determines, based on the detection result, whether the current prediction is correct (S202). The flow branches according to the determination result.
 When the determination result matches the prediction (conditional branch 1 of S203), this flow ends. That is, the display speed is not updated either when both the prediction and the detection indicate that an event is currently occurring, or when both indicate that no event is currently occurring.
 When the determination result does not match the prediction and the occurrence of an unpredicted event is detected (conditional branch 2 of S203), the change update unit 162 determines the detected occurrence time point as a time point for lowering the display speed (S204) and starts adding to the total delay time (S205).
 It is also checked whether the recovery reconsideration conditions are satisfied. For example, if an unpredicted event occurs during recovery or immediately before a planned recovery period, the recovery may not be carried out normally. Recovery reconsideration conditions are defined in advance so that recovery is not carried out in such cases. When the recovery reconsideration conditions are satisfied (YES in S206), the change update unit 162 deletes the change time points determined based on the planned recovery period (S207). In addition, since the length of the planned recovery period was subtracted from the total delay time when the planned recovery period was determined, the change update unit 162 adds the length of the deleted planned recovery period back to the total delay time (S208). When the recovery reconsideration conditions are not satisfied (NO in S206), the processing of S207 and S208 is not performed.
 When the determination result does not match the prediction, and either an unpredicted state continues at the predicted occurrence time point or the occurrence of a predicted event is not detected (conditional branch 3 of S203), the change update unit 162 deletes the change time points determined based on the prediction (S209). That is, the display delay period established by the prediction is canceled.
 Note that the prediction time point and the detection time point do not necessarily coincide, and a time lag may arise between them. Therefore, when the deviation between the prediction time point and the detection time point is within a predetermined threshold, the detection may be regarded as matching the prediction.
 When the display delay period established by the prediction is canceled, it may no longer be necessary to carry out recovery. Therefore, after the processing of S209, as in conditional branch 2 of S203, whether the recovery reconsideration conditions are satisfied is checked, and when they are satisfied (YES in S206), the above-described processing of S207 and S208 is performed.
 When the determination result does not match the prediction and the end of an unpredicted event is detected (conditional branch 4 of S203), the change update unit 162 determines the end time point of the event as a time point for raising the display speed (S210). In addition, since the addition to the total delay time was started in conditional branch 2 of S203, the addition to the total delay time is stopped (S211). As a result, the length of the delay caused by the unpredicted event is added to the total delay time.
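The branch structure of FIG. 6 (S201 to S211) might be sketched as a small state machine. Tracking a single unpredicted event at a time and the bookkeeping below are simplifying assumptions, and the recovery reconsideration of S206 to S208 is omitted from this sketch.

```python
# Illustrative state-machine sketch of the change update unit 162 in the
# FIG. 6 flow. The bookkeeping is an assumption, not the disclosure's
# implementation; S206-S208 (recovery reconsideration) is omitted.

class ChangeUpdater:
    def __init__(self):
        self.total_delay = 0.0          # accumulated display delay (s)
        self.unpredicted_since = None   # start time of an unpredicted event
        self.detected_changes = []      # (time, "lower"/"raise") pairs
        self.predicted_changes = []     # change points set by prediction

    def update(self, now, predicted_occurring, detected_occurring):
        """One S201-S211 cycle. Returns the S203 branch number taken."""
        if self.unpredicted_since is not None:
            if detected_occurring:
                return 2                 # unpredicted event still ongoing
            # Branch 4 (S210-S211): the unpredicted event has ended.
            self.detected_changes.append((now, "raise"))
            self.total_delay += now - self.unpredicted_since
            self.unpredicted_since = None
            return 4
        if predicted_occurring == detected_occurring:
            return 1                     # branch 1: as predicted, no update
        if detected_occurring:
            # Branch 2 (S204-S205): an unpredicted event has started.
            self.detected_changes.append((now, "lower"))
            self.unpredicted_since = now
            return 2
        # Branch 3 (S209): the predicted event did not materialise.
        self.predicted_changes.clear()
        return 3
```

Each detection cycle calls `update` once; the accumulated `total_delay` is what the prediction-side flow later works off through recovery periods.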
 By repeatedly executing this flow during the capture period in this way, the time points at which the display speed of the video is changed are revised.
 Note that the flowcharts of the present disclosure are examples, and each process need not necessarily be performed exactly as in the above flows. For example, the addition to the total delay time may be performed only in the detection processing rather than in the prediction processing.
 As described above, according to the present embodiment, the occurrence of predetermined events that cause motion sickness is predicted, detected, or both. The time point at which the display speed of the video is lowered is then determined based on the occurrence time point of the event. By the display device 200 lowering the display speed of the video at the determined time point, the motion sickness experienced by the viewer of the video can be suppressed.
 Further, when a time point for lowering the display speed of the video has been determined, a time point for raising the display speed is determined within a period in which no event occurs. This makes it possible to recover the delay caused by slowing the display of the video and to prevent the delay from gradually becoming enormous. As a result, for example, when the captured video is viewed in real time and instructions are transmitted based on the viewed video, the instructions can be prevented from being delayed.
 It is also possible to return the display speed to the standard speed in cases where it is better not to adjust the display speed, such as when a human voice is detected.
 As described above, the video display system is not limited to the configuration of FIG. 1. The movement information generation unit 130, the event information generation unit 150, and the display information generation unit 160 of the information processing device 100 may instead reside in the display device 200 or on a communication network such as the Internet. For example, when the inside of a building equipped with monitoring sensors or the like is traversed and filmed, the information necessary for the mobile device to move may be transmitted to the mobile device from the building's management system or the like.
(Second Embodiment)
 FIG. 7 is a diagram showing a configuration example of the video display system according to the second embodiment. In the example of FIG. 7, the information processing device 100 of FIG. 1 is divided into a sensor 100A that captures video, a server 100B residing on a communication network, and a mobile device 100C. In the second embodiment, the sensor is denoted 100A to make clear that the functions of the information processing device 100 of the first embodiment are separated into these three components, but the sensor 100A of the second embodiment may be the same as the sensor 120 of the first embodiment.
 The sensor 100A is not built into the mobile device 100C but is mounted on it and moves together with it. The sensor information from the sensor 100A is sent to the server 100B. The server 100B includes the movement information generation unit 130, the event information generation unit 150, and the display information generation unit 160 described in the first embodiment, and can therefore generate the movement information, the event information, and the display information based on the sensor information. The mobile device 100C includes the movement control unit 140 and can move by receiving the movement information from the server 100B.
 In the example of FIG. 7, there are two display devices 200: a first display device 200A and a second display device 200B. The first display device 200A is for general viewers, and the second display device 200B is for the operator who pilots the mobile device 100C. The first display device 200A receives the video from the sensor 100A and the display information from the server 100B. As in the example of FIG. 1, the first display device 200A can thus change the display so that the video is shown at a speed slower than the standard speed at the time points determined by the display information generation unit 160. The second display device 200B receives the video from the sensor 100A but does not receive the display information from the server 100B. Therefore, the second display device 200B displays the video at the standard speed even at the time points determined by the display information generation unit 160.
 In this way, the video display system according to the second embodiment can change the display speed according to the viewer of the video. Although the entities performing each process in the second embodiment differ from those in the first embodiment, the content and flow of the processes are the same as in the first embodiment, and their description is therefore omitted.
 The processing of the devices in the embodiments of the present disclosure can be realized by software (a program) executed by a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or the like. Instead of executing all of a device's processing in software, some of the processing may be executed by hardware such as a dedicated circuit.
 The above-described embodiments are examples for embodying the present disclosure, and the present disclosure can be implemented in various other forms. Various modifications, substitutions, and omissions, or combinations thereof, are possible without departing from the gist of the present disclosure. Forms incorporating such modifications, substitutions, and omissions are included in the scope of the present disclosure and, likewise, in the invention described in the claims and its equivalents.
 The present disclosure may also take the following configurations.
[1]
 An information processing device comprising:
 an event information generation unit that generates first information concerning the occurrence of an event related to the movement of a device that captures video; and
 a determination unit that determines, based on the first information, a time point at which a display speed of the video is to be reduced.
[2]
 The information processing device according to [1], wherein
 the event information generation unit includes a prediction unit that predicts an occurrence time of the event and includes the predicted occurrence time in the first information, and
 the determination unit determines the time point at which the display speed is to be reduced based on the predicted occurrence time.
[3]
 The information processing device according to [2], wherein the determination unit sets the predicted occurrence time as the time point at which the display speed is to be reduced.
[4]
 The information processing device according to [1], wherein
 the event information generation unit includes a detection unit that detects the occurrence of the event and includes the detection time of the occurrence of the event in the first information, and
 the determination unit sets the detection time as the time point at which the display speed is to be reduced.
[5]
 The information processing device according to [2] or [3], wherein
 the event information generation unit further includes a detection unit that detects the occurrence of the event or of an event other than the event, and
 the determination unit corrects the time point at which the display speed is to be reduced based on the detection.
[6]
 The information processing device according to [5], wherein
 the detection unit detects the occurrence of at least one of vibration, turning, and acceleration of the device or of an object to which the device is attached, and
 the determination unit corrects the time point at which the display speed is to be reduced based on the detection of at least one of the vibration, the turning, and the acceleration.
[7]
 The information processing device according to any one of [2], [3], [5], and [6], wherein the prediction unit predicts the occurrence time of the event based on second information concerning a planned movement route of the device or of an object to which the device is attached.
[8]
 The information processing device according to [7], wherein
 the event is vibration experienced by the device or the object to which the device is attached, and
 the prediction unit predicts the occurrence time of the event based on a road surface condition of the planned movement route included in the second information.
[9]
 The information processing device according to [7], wherein
 the event is a turn of the device or of the object to which the device is attached, and
 the prediction unit predicts the occurrence time of the event based on a shape of the planned movement route included in the second information.
[10]
 The information processing device according to any one of [2] to [9], wherein
 the prediction unit further predicts an end time of the event, and
 the determination unit determines the period from the predicted occurrence time to the predicted end time as a reduction period of the display speed, and determines an increase period of the display speed outside the reduction period.
[11]
 The information processing device according to [10], wherein the rate of increase or decrease of the display speed relative to a standard speed in the increase period is smaller than the absolute value of the rate of increase or decrease of the display speed relative to the standard speed in the reduction period.
[12]
 The information processing device according to [10], wherein
 the event information generation unit further includes a detection unit that detects the occurrence of the event or of an event other than the event, and
 the determination unit corrects at least one of the reduction period and the increase period based on the detection.
[13]
 The information processing device according to the one of [4] to [12] that includes the detection unit, wherein
 the detection unit further detects a voice, and
 the determination unit determines, based on the time point at which the voice is detected, a time point at which the display speed or an output speed of audio corresponding to the video is to be increased.
[14]
 The information processing device according to any one of [1] to [13], wherein the determination unit further determines, based on the first information, a time point at which at least one visibility-related setting among the settings concerning the display of the video is to be changed in a direction that degrades the visibility.
[15]
 The information processing device according to any one of [1] to [14], further comprising
 a reception unit that receives fourth information concerning a deviation between the actual position of the device or of an object to which the device is attached and the planned movement route of the device or the object, wherein
 the determination unit further determines the time point at which the display speed is to be reduced based on the fourth information.
[16]
 The information processing device according to any one of [1] to [15], further comprising the device, wherein the device increases, based on the first information, the rate at which it captures the video.
[17]
 The information processing device according to [16], further comprising:
 the device;
 a route determination unit that determines a planned movement route to a given destination based on fifth information concerning the device's own position and surrounding conditions; and
 a movement control unit that moves along the planned movement route.
[18]
 The information processing device according to any one of [1] to [17], further comprising a transmission unit that transmits, to a display device that is to display the video, information indicating the time point at which the display speed is to be reduced.
[19]
 A video display system comprising: a mobile device to which a device that captures video is attached; an information processing device; and first and second display devices that display the video, wherein
 the information processing device includes:
  an event information generation unit that generates first information concerning the occurrence of an event related to the movement of the mobile device;
  a determination unit that determines, based on the first information, a time point at which a display speed of the video is to be reduced; and
  a transmission unit that transmits information on the determined time point to at least the first display device,
 the first display device reduces the display speed of the video at the time point while the video is being displayed, and
 the second display device keeps the display speed of the video constant before and after the time point while the video is being displayed.
[20]
 A video display method comprising:
 generating first information concerning the occurrence of an event related to the movement of a device that captures video;
 determining, based on the first information, a time point at which a display speed of the video is to be reduced;
 starting display of the video; and
 reducing the display speed of the video at the time point.
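 Purely as an illustration, and not as part of the claimed subject matter, the reduction-period and increase-period scheme of configurations [10] and [11] could be sketched as follows; every name and value below is a hypothetical assumption:

```python
# Illustration only, not part of the claimed subject matter: a possible
# playback-speed schedule following configurations [10] and [11]. During
# the predicted reduction period the video plays slower than the standard
# speed; outside it, a gentler increase lets the display catch back up,
# with the increase magnitude kept below the decrease magnitude. All
# names and values are hypothetical.

def speed_schedule(t, occur, end, standard=1.0, decrease=0.5, increase=0.1):
    """Playback rate at time t, given predicted occurrence/end times."""
    # Configuration [11]: the magnitude of the increase rate must be
    # smaller than the magnitude of the decrease rate.
    assert abs(increase) < abs(decrease)
    if occur <= t < end:
        return standard - decrease        # reduction period
    return standard + increase            # increase period (catch-up)

print(speed_schedule(3.0, occur=2.0, end=4.0))   # 0.5 during the event
print(speed_schedule(5.0, occur=2.0, end=4.0))   # 1.1 afterwards
```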
 100 Information processing device
 100A Sensor of the second embodiment
 100B Server
 100C Mobile device
 110 Reception unit of the information processing device
 120 Sensor of the first embodiment
 130 Movement information generation unit
 131 Object detection unit
 132 Environmental state estimation unit
 133 Self-position estimation unit
 134 Environmental map creation unit
 135 Route determination unit
 140 Movement control unit
 141 Action determination unit
 142 Body control unit
 150 Event information generation unit
 151 Prediction unit
 152 Detection unit
 160 Display information generation unit
 161 Change determination unit
 162 Change update unit
 170 Transmission unit of the information processing device
 200 Display device
 200A First display device
 200B Second display device
 210 Input reception unit
 220 Transmission unit
 230 Reception unit
 240 Display control unit
 300 Obstacle
 401, 402 Arrows (planned movement route)
 500 Bar (video frame)

Claims (20)

  1.  An information processing device comprising:
     an event information generation unit that generates first information concerning the occurrence of an event related to the movement of a device that captures video; and
     a determination unit that determines, based on the first information, a time point at which a display speed of the video is to be reduced.
  2.  The information processing device according to claim 1, wherein
     the event information generation unit includes a prediction unit that predicts an occurrence time of the event and includes the predicted occurrence time in the first information, and
     the determination unit determines the time point at which the display speed is to be reduced based on the predicted occurrence time.
  3.  The information processing device according to claim 2, wherein the determination unit sets the predicted occurrence time as the time point at which the display speed is to be reduced.
  4.  The information processing device according to claim 1, wherein
     the event information generation unit includes a detection unit that detects the occurrence of the event and includes the detection time of the occurrence of the event in the first information, and
     the determination unit sets the detection time as the time point at which the display speed is to be reduced.
  5.  The information processing device according to claim 2, wherein
     the event information generation unit further includes a detection unit that detects the occurrence of the event or of an event other than the event, and
     the determination unit corrects the time point at which the display speed is to be reduced based on the detection.
  6.  The information processing device according to claim 5, wherein
     the detection unit detects the occurrence of at least one of vibration, turning, and acceleration of the device or of an object to which the device is attached, and
     the determination unit corrects the time point at which the display speed is to be reduced based on the detection of at least one of the vibration, the turning, and the acceleration.
  7.  The information processing device according to claim 2, wherein the prediction unit predicts the occurrence time of the event based on second information concerning a planned movement route of the device or of an object to which the device is attached.
  8.  The information processing device according to claim 7, wherein
     the event is vibration experienced by the device or the object to which the device is attached, and
     the prediction unit predicts the occurrence time of the event based on a road surface condition of the planned movement route included in the second information.
  9.  The information processing device according to claim 7, wherein
     the event is a turn of the device or of the object to which the device is attached, and
     the prediction unit predicts the occurrence time of the event based on a shape of the planned movement route included in the second information.
  10.  The information processing device according to claim 2, wherein
     the prediction unit further predicts an end time of the event, and
     the determination unit determines the period from the predicted occurrence time to the predicted end time as a reduction period of the display speed, and determines an increase period of the display speed outside the reduction period.
  11.  The information processing device according to claim 10, wherein the rate of increase or decrease of the display speed relative to a standard speed in the increase period is smaller than the absolute value of the rate of increase or decrease of the display speed relative to the standard speed in the reduction period.
  12.  The information processing device according to claim 10, wherein
     the event information generation unit further includes a detection unit that detects the occurrence of the event or of an event other than the event, and
     the determination unit corrects at least one of the reduction period and the increase period based on the detection.
  13.  The information processing device according to claim 5, wherein
     the detection unit further detects a voice, and
     the determination unit determines, based on the time point at which the voice is detected, a time point at which the display speed or an output speed of audio corresponding to the video is to be increased.
  14.  The information processing device according to claim 1, wherein the determination unit further determines, based on the first information, a time point at which at least one visibility-related setting among the settings concerning the display of the video is to be changed in a direction that degrades the visibility.
  15.  The information processing device according to claim 1, further comprising
     a reception unit that receives fourth information concerning a deviation between the actual position of the device or of an object to which the device is attached and the planned movement route of the device or the object, wherein
     the determination unit further determines the time point at which the display speed is to be reduced based on the fourth information.
  16.  The information processing device according to claim 1, further comprising the device, wherein the device increases, based on the first information, the rate at which it captures the video.
  17.  The information processing device according to claim 16, further comprising:
     the device;
     a route determination unit that determines a planned movement route to a given destination based on fifth information concerning the device's own position and surrounding conditions; and
     a movement control unit that moves along the planned movement route.
  18.  The information processing device according to claim 1, further comprising a transmission unit that transmits, to a display device that is to display the video, information indicating the time point at which the display speed is to be reduced.
  19.  A video display system comprising: a mobile device to which a device that captures video is attached; an information processing device; and first and second display devices that display the video, wherein
     the information processing device includes:
      an event information generation unit that generates first information concerning the occurrence of an event related to the movement of the mobile device;
      a determination unit that determines, based on the first information, a time point at which a display speed of the video is to be reduced; and
      a transmission unit that transmits information on the determined time point to at least the first display device,
     the first display device reduces the display speed of the video at the time point while the video is being displayed, and
     the second display device keeps the display speed of the video constant before and after the time point while the video is being displayed.
  20.  A video display method comprising:
     generating first information concerning the occurrence of an event related to the movement of a device that captures video;
     determining, based on the first information, a time point at which a display speed of the video is to be reduced;
     starting display of the video; and
     reducing the display speed of the video at the time point.
PCT/JP2021/005194 2020-03-05 2021-02-12 Information processing device, video display system, and video display method WO2021176991A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020038037A JP2023044735A (en) 2020-03-05 2020-03-05 Information processing device, image display system, and image display method
JP2020-038037 2020-03-05

Publications (1)

Publication Number Publication Date
WO2021176991A1 true WO2021176991A1 (en) 2021-09-10

Family

ID=77613373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/005194 WO2021176991A1 (en) 2020-03-05 2021-02-12 Information processing device, video display system, and video display method

Country Status (2)

Country Link
JP (1) JP2023044735A (en)
WO (1) WO2021176991A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006262135A (en) * 2005-03-17 2006-09-28 Casio Comput Co Ltd Photographing device, moving picture reproducing device, and moving picture recording/reproduction program
JP2007218655A (en) * 2006-02-15 2007-08-30 Matsushita Electric Ind Co Ltd Navigation device
JP2008041161A (en) * 2006-08-04 2008-02-21 Denso Corp Protection device for storage device
JP2009260442A (en) * 2008-04-11 2009-11-05 Panasonic Corp On-vehicle imager
JP2011101300A (en) * 2009-11-09 2011-05-19 Konica Minolta Opto Inc Photographing device and remote operation support system
JP2012009984A (en) * 2010-06-23 2012-01-12 Nikon Corp Imaging apparatus
JP2013005423A (en) * 2011-06-22 2013-01-07 Nec Casio Mobile Communications Ltd Video reproducer, video reproduction method and program
WO2018211672A1 (en) * 2017-05-18 2018-11-22 株式会社ソニー・インタラクティブエンタテインメント Image generation device, image display system, and image generation method


Also Published As

Publication number Publication date
JP2023044735A (en) 2023-04-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21764390

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21764390

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP