CN116170694A - Method, device and storage medium for displaying content - Google Patents

Publication number
CN116170694A
Authority
CN
China
Prior art keywords
automobile
vehicle
obstacle
display
model
Legal status
Pending
Application number
CN202310003404.5A
Other languages
Chinese (zh)
Inventor
张琳
赵小伟
徐达学
姜灏
Current Assignee
Chery Automobile Co Ltd
Original Assignee
Chery Automobile Co Ltd
Application filed by Chery Automobile Co Ltd
Priority to CN202310003404.5A
Publication of CN116170694A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • B60R16/0231Circuits relating to the driving or the functioning of the vehicle
    • B60R16/0232Circuits relating to the driving or the functioning of the vehicle for measuring vehicle parameters and indicating critical, abnormal or dangerous conditions
    • B60R16/0234Circuits relating to the driving or the functioning of the vehicle for measuring vehicle parameters and indicating critical, abnormal or dangerous conditions related to maintenance or repairing of vehicles


Abstract

The application relates to a method, an apparatus, and a storage medium for displaying content, belonging to the technical field of driving-assistance panoramic imaging. The method comprises: acquiring the position of a first camera, the first camera being a failed camera among a plurality of cameras included in a first automobile; displaying a first vehicle model corresponding to the first automobile on a panoramic display of the first automobile; acquiring, based on the position of the first camera, a target position corresponding to the first camera on the first vehicle model; and displaying a fault marker at the target position on the first vehicle model, the fault marker indicating that the first camera has failed. The method and apparatus can enrich the content displayed on the panoramic display.

Description

Method, device and storage medium for displaying content
Technical Field
The present disclosure relates to the field of driving assistance panoramic imaging technologies, and in particular, to a method and apparatus for displaying content, and a storage medium.
Background
Cameras are arranged around an automobile, and a panoramic display is arranged in its cab. The automobile can capture images through the surrounding cameras, stitch the captured images into a panoramic image, and display the panoramic image on the panoramic display. However, panoramic displays are currently used only to display panoramic images, so the content they display is too limited.
Disclosure of Invention
In order to enrich the content displayed on a panoramic display, embodiments of the present application provide a method, an apparatus, and a storage medium for displaying content. The technical solution is as follows:
according to a first aspect of embodiments of the present application, there is provided a method of displaying content, the method comprising:
acquiring the position of a first camera, the first camera being a failed camera among a plurality of cameras included in a first automobile;
displaying a first vehicle model corresponding to the first automobile on a panoramic display of the first automobile;
acquiring, based on the position of the first camera, a target position corresponding to the first camera on the first vehicle model; and
displaying a fault marker at the target position on the first vehicle model, the fault marker indicating that the first camera has failed.
Optionally, the first automobile further comprises at least one ultrasonic radar, and the method further comprises:
detecting, by the at least one ultrasonic radar, object information of an obstacle near the first automobile, the object information including the type, position, and size of the obstacle;
determining a first display position on the panoramic display of the first automobile based on the position of the obstacle; and
displaying, at the first display position and based on the type and size of the obstacle, an obstacle model corresponding to the obstacle, wherein the relative position between the first vehicle model and the obstacle model is the same as the relative position between the first automobile and the obstacle.
Optionally, the detecting, by the at least one ultrasonic radar, of object information of an obstacle near the first automobile includes:
detecting, by the at least one ultrasonic radar, object information of an obstacle near the first automobile when the speed of the first automobile is lower than a speed threshold.
Optionally, the first automobile further comprises at least one millimeter wave radar, and the method further comprises:
detecting, by the at least one millimeter wave radar, vehicle information of a second automobile located behind the first automobile, the first automobile and the second automobile traveling in the same direction, the vehicle information including the vehicle type and position of the second automobile;
determining a second display position on the panoramic display of the first automobile based on the position of the second automobile; and
displaying, at the second display position and based on the vehicle type of the second automobile, a second vehicle model corresponding to the second automobile, wherein the relative position between the first vehicle model and the second vehicle model is the same as the relative position between the first automobile and the second automobile.
According to a second aspect of embodiments of the present application, there is provided an apparatus for displaying content, the apparatus comprising:
an acquisition module configured to acquire the position of a first camera, the first camera being a failed camera among a plurality of cameras included in a first automobile;
a display module configured to display a first vehicle model corresponding to the first automobile on a panoramic display of the first automobile;
the acquisition module further configured to acquire, based on the position of the first camera, a target position corresponding to the first camera on the first vehicle model; and
the display module further configured to display a fault marker at the target position on the first vehicle model, the fault marker indicating that the first camera has failed.
Optionally, the first automobile further comprises at least one ultrasonic radar;
the acquisition module is further configured to detect, by the at least one ultrasonic radar, object information of an obstacle near the first automobile, the object information including the type, position, and size of the obstacle;
the acquisition module is further configured to determine a first display position on the panoramic display of the first automobile based on the position of the obstacle;
the display module is further configured to display, at the first display position and based on the type and size of the obstacle, an obstacle model corresponding to the obstacle, wherein the relative position between the first vehicle model and the obstacle model is the same as the relative position between the first automobile and the obstacle.
Optionally, the acquisition module is configured to:
detect, by the at least one ultrasonic radar, object information of an obstacle near the first automobile when the speed of the first automobile is lower than a speed threshold.
Optionally, the first automobile further comprises at least one millimeter wave radar;
the acquisition module is further configured to detect, by the at least one millimeter wave radar, vehicle information of a second automobile located behind the first automobile, the first automobile and the second automobile traveling in the same direction, the vehicle information including the vehicle type and position of the second automobile;
the acquisition module is further configured to determine a second display position on the panoramic display of the first automobile based on the position of the second automobile, wherein the relative position between the first vehicle model and the second vehicle model displayed on the panoramic display is proportional to the relative position between the first automobile and the second automobile;
the display module is further configured to display, at the second display position and based on the vehicle type of the second automobile, a second vehicle model corresponding to the second automobile.
According to a third aspect of embodiments of the present application, there is provided an apparatus for displaying content, the apparatus comprising:
a memory; and
at least one processor coupled to the memory and configured to read and execute instructions in the memory to implement the method of the first aspect.
According to a fourth aspect of embodiments of the present application, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
The technical solution provided by the embodiments of the present application can bring the following beneficial effects:
by acquiring the position of the failed first camera on the first automobile, a first vehicle model corresponding to the first automobile is displayed on the panoramic display of the first automobile. Based on the position of the first camera, a target position corresponding to the first camera on the first vehicle model is acquired, and a fault marker is displayed at that target position to indicate that the first camera has failed. A user can thus determine where the failed first camera is located on the first automobile by viewing the fault marker on the first vehicle model shown on the panoramic display, which facilitates maintenance. At the same time, the range of content that the panoramic display can show is expanded, enriching what it displays.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic illustration of an automobile provided in an embodiment of the present application;
FIG. 2 is a schematic illustration of another automobile provided in an embodiment of the present application;
FIG. 3 is a schematic illustration of another automobile provided in an embodiment of the present application;
FIG. 4 is a flowchart of a method for displaying content according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for displaying content according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a panoramic display for displaying content according to an embodiment of the present application;
FIG. 7 is a flowchart of another method for displaying content according to an embodiment of the present application;
FIG. 8 is a schematic diagram of display content of a panoramic display according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an apparatus for displaying content according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
Referring to fig. 1, an embodiment of the present application provides a first automobile 100, the first automobile 100 including a controller 101, a plurality of cameras 102, at least one ultrasonic radar 103, at least one millimeter wave radar 104, and a panoramic display 105. The controller 101 communicates with each camera 102, each ultrasonic radar 103, each millimeter wave radar 104, and the panoramic display 105, respectively.
The plurality of cameras 102 are distributed around the body of the first automobile 100, the at least one ultrasonic radar 103 is arranged at the head and/or tail of the first automobile 100, and the at least one millimeter wave radar 104 is arranged at the rear left corner and/or rear right corner of the first automobile 100.
When the panoramic display switch of the first automobile 100 is turned on, or the first automobile 100 is shifted into reverse gear, the controller 101 may capture video images around the first automobile 100 through the plurality of cameras 102, stitch the images captured by the cameras 102 into one panoramic image, and display the panoramic image on the panoramic display 105.
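The trigger condition above (panorama switch on, or reverse gear engaged) can be sketched as follows. This is an illustrative assumption, not code from the patent; the names `Gear` and `should_show_panorama` are invented for clarity:

```python
# Illustrative sketch (not from the patent) of the display trigger:
# the stitched panoramic image is shown when the panorama switch is on
# OR the car is shifted into reverse gear.
from enum import Enum


class Gear(Enum):
    PARK = "P"
    REVERSE = "R"
    NEUTRAL = "N"
    DRIVE = "D"


def should_show_panorama(panorama_switch_on: bool, gear: Gear) -> bool:
    """Return True when the panoramic image should be displayed."""
    return panorama_switch_on or gear is Gear.REVERSE
```

Either condition alone suffices, so the display also appears when reversing with the switch off.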
When any one of the plurality of cameras 102 fails, the failed camera 102 is referred to as the first camera for convenience of explanation. The controller 101 acquires the position of the first camera, displays a first vehicle model corresponding to the first automobile 100 on the panoramic display 105, acquires a target position corresponding to the first camera on the first vehicle model based on the position of the first camera, and displays a fault marker at the target position on the first vehicle model, the fault marker indicating that the first camera has failed.
In some embodiments, the controller 101 detects, through the at least one ultrasonic radar 103, object information of an obstacle near the first automobile 100, including the type, position, and size of the obstacle. Based on the position of the obstacle, a first display position is determined on the panoramic display 105 of the first automobile 100. Based on the type and size of the obstacle, an obstacle model corresponding to the obstacle is displayed at the first display position, wherein the relative position between the first vehicle model and the obstacle model is the same as the relative position between the first automobile 100 and the obstacle.
In some embodiments, the controller 101 detects, through the at least one millimeter wave radar 104, vehicle information of a second automobile located behind the first automobile 100, the two automobiles traveling in the same direction, the vehicle information including the vehicle type and position of the second automobile. A second display position is determined on the panoramic display 105 of the first automobile 100 based on the position of the second automobile, such that the relative position between the first vehicle model and the second vehicle model displayed on the panoramic display 105 is proportional to the relative position between the first automobile 100 and the second automobile. Based on the vehicle type of the second automobile, a second vehicle model corresponding to the second automobile is displayed at the second display position.
In some embodiments, there are four cameras 102: one arranged on each side of the first automobile 100, and one each at the head and tail of the first automobile 100, so that the four cameras 102 can capture the entire periphery of the first automobile 100.
In some embodiments, there are two ultrasonic radars 103: a front ultrasonic radar 103 arranged at the head of the first automobile 100 and a rear ultrasonic radar 103 arranged at the tail of the first automobile 100.
In some embodiments, there are two millimeter wave radars 104: one arranged at the rear left corner of the first automobile 100 and the other at the rear right corner of the first automobile 100.
Referring to fig. 2, the controller 101 includes an ultrasonic radar controller 1011, a millimeter wave radar controller 1012, a panorama controller 1013, and a head unit 1014. The panorama controller 1013 communicates with the ultrasonic radar controller 1011, the millimeter wave radar controller 1012, and the head unit 1014, and the head unit 1014 also communicates with the panoramic display 105.
The ultrasonic radar controller 1011 also communicates with at least one ultrasonic radar 103. After the ultrasonic radar controller 1011 detects the position of the obstacle in the vicinity of the first automobile 100 through the at least one ultrasonic radar 103, the position of the obstacle may be transmitted to the panorama controller 1013.
The panorama controller 1013 controls a camera 102 to photograph the obstacle based on the position of the obstacle, obtaining a first video image. The type, size, and other attributes of the obstacle are identified from the first video image to obtain the object information of the obstacle. Based on this object information, the head unit 1014 is controlled to display the first vehicle model and an obstacle model corresponding to the obstacle on the panoramic display 105.
The millimeter wave radar controller 1012 also communicates with the at least one millimeter wave radar 104. The millimeter wave radar controller 1012 detects, through the at least one millimeter wave radar 104, whether a second automobile traveling in the same direction as the first automobile 100 is behind the first automobile 100. If a second automobile is detected, its position is detected by the at least one millimeter wave radar 104 and sent to the panorama controller 1013.
The panorama controller 1013 controls a camera 102 to photograph the second automobile based on the position of the second automobile, obtaining a second video image. The vehicle type and other attributes of the second automobile are identified from the second video image to obtain the vehicle information of the second automobile. Based on this vehicle information, the head unit 1014 is controlled to display a second vehicle model corresponding to the second automobile on the panoramic display 105.
In some embodiments, the panorama controller 1013 is coupled to each camera 102 via low-voltage differential signaling (LVDS).
Referring to fig. 3, the first automobile 100 further includes a body control module (BCM) 106 and a gateway 107. The panorama controller 1013 communicates with the millimeter wave radar controller 1012 through the gateway 107, and with the ultrasonic radar controller 1011 through the BCM 106.
Referring to fig. 4, an embodiment of the present application provides a method 400 for displaying content, where the method 400 is applied to the first automobile shown in fig. 1, fig. 2, or fig. 3, and the method 400 includes the following steps 401 to 404.
Step 401: obtain the position of a first camera, the first camera being a failed camera among a plurality of cameras included in a first automobile.
The panorama controller of the first automobile communicates with each camera of the first automobile; when a camera (the first camera) fails, the panorama controller detects the failed first camera and obtains its position, that is, the position of the first camera on the first automobile.
The position of the first camera is expressed in the coordinate system of the first automobile. The origin of this coordinate system may be the center point of the first automobile, and its lateral axis may be parallel or perpendicular to the body of the first automobile.
Step 402: display the first vehicle model corresponding to the first automobile on the panoramic display of the first automobile.
In some embodiments, the appearance of the first vehicle model may be the same as that of the first automobile.
In some embodiments, the first vehicle model is displayed in the middle of the panoramic display, that is, the center point of the first vehicle model is located at the middle of the panoramic display of the first automobile.
In some embodiments, the first vehicle model also has a coordinate system; its origin may be the center point of the first vehicle model, and its lateral axis may be parallel or perpendicular to the body of the first vehicle model. Optionally, the lateral axis of the first automobile's coordinate system is parallel to the lateral axis of the first vehicle model's coordinate system.
In step 402, the panorama controller of the first automobile sends the first vehicle model to the head unit of the first automobile. The head unit receives the first vehicle model and displays it on the panoramic display of the first automobile.
Step 403: acquire, based on the position of the first camera, a target position corresponding to the first camera on the first vehicle model.
In step 403, the panorama controller of the first automobile converts the position of the first camera into a position in the coordinate system of the first vehicle model based on a preset conversion relationship, obtaining the target position. The conversion relationship maps positions in the coordinate system of the first automobile to positions in the coordinate system of the first vehicle model.
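Because both coordinate systems are centered on the respective vehicle and can have parallel axes, the preset conversion can reduce to a uniform scaling. The following is a minimal sketch under that assumption; the function name and the scale factor are hypothetical, not from the patent:

```python
# Hypothetical conversion from the first automobile's coordinate system
# (meters, origin at the car's center) to the first vehicle model's
# coordinate system (pixels, origin at the model's center). With shared
# origins and parallel axes, the mapping is a uniform scaling.
def car_to_model(x_m: float, y_m: float, px_per_meter: float) -> tuple:
    """Map a position in the car frame to the model frame."""
    return (x_m * px_per_meter, y_m * px_per_meter)


# Example: a camera mounted 2.0 m ahead of and 0.75 m to the side of the
# car's center, with the model drawn at 40 pixels per meter.
target_position = car_to_model(2.0, -0.75, px_per_meter=40.0)
```

The same mapping can serve for any position expressed in the car frame, such as an obstacle or a following vehicle.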
Step 404: display a fault marker at the target position on the first vehicle model, the fault marker indicating that the first camera has failed.
In this way, a user viewing the fault marker on the first vehicle model shown on the panoramic display can directly determine the position of the failed first camera on the first automobile, making it easier to maintain the first camera.
In the embodiment of the present application, when a first camera on a first automobile fails, a first vehicle model corresponding to the first automobile is displayed on the panoramic display of the first automobile. A target position corresponding to the first camera on the first vehicle model is acquired based on the position of the first camera, and a fault marker is displayed at that target position to show the user where the first camera is. This makes it easier for the user to maintain the first camera, and also expands the content that the panoramic display of the first automobile can show.
Referring to fig. 5, an embodiment of the present application provides a method 500 for displaying content, where the method 500 is applied to the first automobile shown in fig. 1, fig. 2, or fig. 3, and the method 500 includes the following steps 501 to 504.
Step 501: detect, by at least one ultrasonic radar, object information of an obstacle near the first automobile, the object information including the type, position, and size of the obstacle.
The detection distance of the ultrasonic radar of the first automobile is short; it generally does not exceed a first threshold, which may be 1.5 meters, 2 meters, 3 meters, etc.
When the speed of the first automobile is high, the first automobile passes any obstacle detected by its ultrasonic radar so quickly that the obstacle does not need to be presented to the user.
Accordingly, object information of an obstacle near the first automobile is detected through the at least one ultrasonic radar only when the speed of the first automobile is lower than a speed threshold, for example, when the first automobile is traveling at low speed or backing up, so that nearby obstacles are presented to the user.
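The speed gate can be sketched as a single comparison. The 20 km/h default below is an assumed value, since the patent leaves the speed threshold unspecified, and the function name is illustrative:

```python
# Sketch of the speed gate: ultrasonic obstacle reporting is enabled only
# below a speed threshold. The 20 km/h default is an assumption; the
# patent does not specify the threshold value.
def obstacles_should_be_presented(speed_kmh: float,
                                  threshold_kmh: float = 20.0) -> bool:
    """Return True when nearby obstacles should be shown to the user."""
    return speed_kmh < threshold_kmh
```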
In step 501, the object information of an obstacle near the first automobile can be detected through the following operations 5011 to 5013.
5011: the panorama controller of the first automobile receives the position of the obstacle detected by a first ultrasonic radar, the at least one ultrasonic radar including the first ultrasonic radar.
When the first ultrasonic radar detects the position of an obstacle near the first automobile, it sends the position of the obstacle to the ultrasonic radar controller of the first automobile, which in turn sends it to the panorama controller of the first automobile.
5012: the panorama controller of the first automobile controls a second camera to photograph the obstacle, obtaining a first video image; the second camera is a camera whose shooting range includes the position of the obstacle.
If the first ultrasonic radar is the front ultrasonic radar at the head of the first automobile, the second camera is the camera arranged at the head; if the first ultrasonic radar is the rear ultrasonic radar at the tail, the second camera is the camera arranged at the tail.
The panorama controller of the first automobile controls the second camera to focus on the obstacle based on the obstacle's position and to photograph it, so that the first video image includes a relatively clear image of the obstacle.
5013: the panorama controller of the first automobile identifies the type of the obstacle from the first video image through an object recognition model, and acquires the size of the obstacle based on the obstacle image in the first video image.
The object recognition model is a pre-trained model, or a model obtained from a third party, used to recognize the obstacle image in the first video image; from the obstacle image, information such as the type and size of the obstacle can be identified, yielding the object information of the obstacle.
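Operations 5011 to 5013 can be sketched as the following pipeline. The recognition model is passed in as a caller-supplied callable because the patent does not name a specific model; all identifiers here are illustrative assumptions:

```python
# Illustrative pipeline for operations 5011-5013: the ultrasonic radar
# supplies an obstacle position, a camera frame is captured, and a
# recognition model returns the obstacle's type and size. The model is a
# caller-supplied callable, since the patent does not name a specific one.
from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass
class ObstacleInfo:
    obstacle_type: str               # e.g. "road stake"
    size_m: Tuple[float, float]      # (width, height) in meters
    position_m: Tuple[float, float]  # position in the car's coordinate frame


def build_object_info(
    frame: object,
    radar_position_m: Tuple[float, float],
    recognize: Callable[[object], Tuple[str, Tuple[float, float]]],
) -> ObstacleInfo:
    """Combine the radar-detected position with camera-based recognition."""
    obstacle_type, size_m = recognize(frame)
    return ObstacleInfo(obstacle_type, size_m, radar_position_m)
```

The position comes from the radar (5011), the frame from the second camera (5012), and the type and size from the recognition model (5013).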
Step 502: display the first vehicle model corresponding to the first automobile on the panoramic display of the first automobile.
In step 502, the first vehicle model is displayed in the same manner as in step 402 of the method 400 shown in fig. 4, which is not described in detail again here.
Step 503: determine a first display position on the panoramic display of the first automobile based on the position of the obstacle.
The first display position is determined such that the relative position between the first vehicle model and the obstacle model displayed on the panoramic display corresponds, at scale, to the relative position between the first automobile and the obstacle.
In step 503, the panorama controller of the first automobile converts the position of the obstacle into a position in the coordinate system of the first vehicle model based on the preset conversion relationship, obtaining the first display position.
Step 504: display, at the first display position and based on the type and size of the obstacle, an obstacle model corresponding to the obstacle.
In some embodiments, the relative position between the first vehicle model and the obstacle model is the same as the relative position between the first automobile and the obstacle.
In step 504, the panorama controller of the first automobile obtains an obstacle model corresponding to the obstacle from a first correspondence table based on the type of the obstacle, where the first correspondence table stores the correspondence between obstacle types and obstacle models. The obstacle model is then scaled based on the size of the obstacle, such that the ratio between the scaled obstacle model's size and the obstacle's size equals the ratio between the first vehicle model's size and the first automobile's size.
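The scaling rule in step 504 (scaled model size : obstacle size = first vehicle model size : first automobile size) can be expressed as a one-line computation; the function and parameter names are illustrative:

```python
# The ratio constraint from step 504:
#   scaled_model_size / obstacle_size == model_size / car_size
# so the on-screen size follows directly from the shared scale factor.
def scale_obstacle_size(obstacle_size_m: float,
                        model_size_px: float,
                        car_size_m: float) -> float:
    """Return the on-screen size (pixels) for the obstacle model."""
    return obstacle_size_m * (model_size_px / car_size_m)


# Example: a 4.0 m car drawn 200 px long gives 50 px/m, so a 0.5 m road
# stake is drawn at 25 px.
```

Using one scale factor for both the vehicle model and the obstacle model keeps their displayed proportions consistent with reality.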
The panoramic controller of the first car sends the scaled obstacle model and the first display location to the head unit of the first car. The first car head unit receives the scaled obstacle model and a first display location at which the scaled obstacle model is displayed.
The obstacle may be, for example, a stone or a road pile. Referring to fig. 6, assuming that the obstacle is a road pile, the first model corresponding to the first automobile and a model of the road pile are displayed on the panoramic display of the first automobile.
In this way, a user can directly view the content displayed on the panoramic display of the first automobile, see the obstacles near the first automobile, and avoid them more easily.
In some embodiments, information such as a distance from the first vehicle to the obstacle, a type of the obstacle, and/or a location of the obstacle may also be displayed near an obstacle model corresponding to the obstacle.
In the embodiment of the application, when an obstacle near the first automobile is detected, the first model corresponding to the first automobile and the obstacle model corresponding to the obstacle are displayed on the panoramic display of the first automobile to prompt the user with the position of the obstacle. This makes it easier for the user to avoid the obstacle and also expands the content that the panoramic display of the first automobile can show.
Referring to fig. 7, an embodiment of the present application provides a method 700 for displaying content, where the method 700 is applied to a first automobile shown in fig. 1, fig. 2 or fig. 3, and the method 700 includes the following steps 701 to 704.
Step 701: detecting, by at least one millimeter wave radar, vehicle information of a second automobile located behind the first automobile, where the driving direction of the first automobile is the same as that of the second automobile, and the vehicle information includes the vehicle type of the second automobile and the position of the second automobile.
The millimeter wave radar of the first automobile has a relatively long detection range, which generally exceeds a second threshold. The second threshold may be, for example, 50 meters, 60 meters, 70 meters, 80 meters, or 90 meters.
In step 701, the vehicle information of the second automobile located behind the first automobile may be detected through the following operations 7011 to 7013.
7011: the panoramic controller of the first car receives a location of the second car detected by a first millimeter wave radar, the at least one millimeter wave radar comprising the first millimeter wave radar.
When the first millimeter wave radar detects information such as the position and/or speed of a second automobile driving toward the first automobile, it sends this information to the millimeter wave radar controller of the first automobile, which in turn sends the information to the panorama controller of the first automobile.
7012: the panoramic controller of the first automobile controls a third camera to shoot the second automobile to obtain a second video picture, and the third camera is a camera with a shooting range including the position of the second automobile.
If the first millimeter wave radar is located at the rear left corner of the first automobile, the third camera includes a camera disposed at the rear of the first automobile and/or a camera disposed on the left side of the body of the first automobile. If the first millimeter wave radar is located at the rear right corner of the first automobile, the third camera includes a camera disposed at the rear of the first automobile and/or a camera disposed on the right side of the body of the first automobile.
Based on the position of the second automobile, the panorama controller of the first automobile controls the third camera to focus on the second automobile and to shoot the second automobile, obtaining a second video picture, so that the second video picture includes a higher-definition image of the second automobile.
7013: the panorama controller of the first automobile identifies the vehicle type of the second automobile from the second video picture through the vehicle identification model.
The vehicle identification model is a pre-trained intelligent model or an intelligent model downloaded from a third party. It is used to recognize the image of the second automobile in the second video picture; based on that image, the vehicle type of the second automobile can be identified, thereby obtaining the vehicle information of the second automobile.
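Operation 7013 can be sketched as running a classification model on the captured frame. The `predict`-style callable interface and the label set below are assumptions; the patent does not specify the model's architecture or API:

```python
# Hedged sketch of operation 7013: feed a video frame to the vehicle
# identification model and take the most confident vehicle-type label.
# The model interface (frame -> {label: confidence}) is an assumption.

def identify_vehicle_type(frame, model):
    """Return the most likely vehicle type for the frame."""
    scores = model(frame)              # e.g. {"sedan": 0.2, "suv": 0.7, ...}
    return max(scores, key=scores.get)

# Stub standing in for the pre-trained or downloaded intelligent model.
fake_model = lambda frame: {"sedan": 0.2, "suv": 0.7, "truck": 0.1}
print(identify_vehicle_type(None, fake_model))  # suv
```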
Step 702: and displaying the first model corresponding to the first automobile in the panoramic display of the first automobile.
In step 702, the detailed implementation of displaying the first model is the same as in step 402 of the method 400 shown in fig. 4, and is not described again here.
Step 703: a second display position is determined on the panoramic display of the first automobile based on the position of the second automobile, such that the relative position between the first model and the second model displayed on the panoramic display is proportional to the relative position between the first automobile and the second automobile.
In step 703, the panorama controller of the first automobile converts the position of the second automobile into a position in the coordinate system of the first model based on the preset conversion relationship, thereby obtaining the second display position.
Step 704: displaying a second model corresponding to the second automobile at the second display position based on the vehicle type of the second automobile.
In some embodiments, the relative position between the first model and the second model is the same as the relative position between the first automobile and the second automobile.
In step 704, the panorama controller of the first automobile obtains the second model corresponding to the second automobile from a second correspondence based on the vehicle type of the second automobile, where the second correspondence stores the correspondence between vehicle types and models. The panorama controller of the first automobile sends the second model and the second display position to the head unit of the first automobile. The head unit of the first automobile receives the second model and the second display position, and displays the second model at the second display position.
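The control flow of step 704 can be sketched as follows: the panorama controller resolves the model from the second correspondence and hands it, with the display position, to the head unit. The class, table contents, and model names are illustrative assumptions:

```python
# Hedged sketch of step 704's message flow: vehicle-type lookup in the
# second correspondence, then handing the model and position to the head
# unit for display. All names here are assumptions for illustration.

from dataclasses import dataclass, field

VEHICLE_MODELS = {            # second correspondence: vehicle type -> model
    "sedan": "sedan_3d_model",
    "suv": "suv_3d_model",
}

@dataclass
class HeadUnit:
    shown: list = field(default_factory=list)
    def display(self, model, position):
        self.shown.append((model, position))  # render model at position

def show_second_car(vehicle_type, display_pos, head_unit):
    model = VEHICLE_MODELS[vehicle_type]   # look up model by vehicle type
    head_unit.display(model, display_pos)  # head unit displays the model
    return model

hu = HeadUnit()
print(show_second_car("suv", (440, 120), hu))  # suv_3d_model
```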
For example, referring to fig. 8, the first model corresponding to the first automobile and the second model corresponding to the second automobile are displayed on the panoramic display of the first automobile. In this way, the user can directly view the content displayed on the panoramic display, see the second automobile approaching from behind the first automobile, and avoid it more easily.
In some embodiments, information such as the distance from the first automobile to the second automobile, the vehicle type of the second automobile, and/or the speed of the second automobile may also be displayed near the second model.
In the embodiment of the application, when a second automobile behind the first automobile is detected, the first model corresponding to the first automobile and the second model corresponding to the second automobile are displayed on the panoramic display of the first automobile to prompt the user with the position of the second automobile. This makes it easier for the user to avoid the second automobile and also expands the content that the panoramic display of the first automobile can show.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 9, an embodiment of the present application provides an apparatus 900 for displaying content, where the apparatus 900 is disposed on a first automobile as shown in fig. 1, fig. 2, or fig. 3, or where the apparatus 900 is disposed on a first automobile as shown in the method 400 of fig. 4, the method 500 of fig. 5, or the method 700 of fig. 7. The apparatus 900 includes:
an obtaining module 901, configured to obtain a position of a first camera, where the first camera is a camera that has a fault in a plurality of cameras included in a first automobile;
A display module 902, configured to display, in a panoramic display of the first automobile, a first model corresponding to the first automobile;
the acquiring module 901 is further configured to acquire a target position corresponding to the first camera on the first model vehicle based on the position of the first camera;
the display module 902 is further configured to display a fault flag at the target location of the first model vehicle, where the fault flag is used to indicate that the first camera is faulty.
Optionally, the first car further comprises at least one ultrasonic radar,
the acquiring module 901 is further configured to detect object information of an obstacle near the first automobile through the at least one ultrasonic radar, where the object information includes a type of the obstacle, a position of the obstacle, and a size of the obstacle;
the acquiring module 901 is further configured to determine a first display position on a panoramic display of the first automobile based on a position of the obstacle;
the display module 902 is further configured to display, at the first display position, an obstacle model corresponding to the obstacle, where a relative position between the first model and the obstacle model is the same as a relative position between the first automobile and the obstacle, based on the type and the size of the obstacle.
Optionally, the display module 902 is configured to:
and detecting object information of an obstacle near the first automobile through the at least one ultrasonic radar when the speed of the first automobile is lower than a speed threshold value.
Optionally, the first car further comprises at least one millimeter wave radar,
the acquiring module 901 is further configured to detect, by using the at least one millimeter wave radar, vehicle information of a second vehicle located behind the first vehicle, where a driving direction of the first vehicle is the same as a driving direction of the second vehicle, and the vehicle information includes a vehicle type of the second vehicle and a position of the second vehicle;
the acquiring module 901 is further configured to determine a second display position on the panoramic display of the first automobile based on the position of the second automobile;
the display module 902 is further configured to display, at the second display position, a second model corresponding to the second automobile based on a model of the second automobile, where a relative position between the first model and the second model is the same as a relative position between the first automobile and the second automobile.
In the embodiment of the application, the acquiring module acquires the position of the first camera that has failed in the first automobile, and the display module displays the first model corresponding to the first automobile on the panoramic display of the first automobile. The acquiring module acquires a target position corresponding to the first camera on the first model based on the position of the first camera. The display module displays a fault flag at the target position of the first model, the fault flag indicating that the first camera has failed. In this way, by viewing the fault flag on the first model shown by the panoramic display, the user can learn the position of the failed first camera on the first automobile, which facilitates repair. At the same time, the content that can be shown on the panoramic display is expanded, enriching what the panoramic display presents.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the method embodiments and is not described again here.
Fig. 10 shows a block diagram of an electronic device 1000 according to an exemplary embodiment of the present application. The electronic device 1000 may be the controller on the first vehicle described above with respect to fig. 1, 2, or 3, or may be the controller on the first vehicle of the method 400 of fig. 4, the method 500 of fig. 5, or the method 700 of fig. 7, for example, may be a panoramic controller on the first vehicle. Generally, the electronic device 1000 includes: a processor 1001 and a memory 1002.
The processor 1001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor. The main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1001 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1002 is used to store at least one instruction for execution by processor 1001 to implement the method for displaying content provided by the method embodiments of the present application.
In some embodiments, the electronic device 1000 may further optionally include: a peripheral interface 1003, and at least one peripheral. The processor 1001, the memory 1002, and the peripheral interface 1003 may be connected by a bus or signal line. The various peripheral devices may be connected to the peripheral device interface 1003 via a bus, signal wire, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, a display 1005, a camera assembly 1006, audio circuitry 1007, a positioning assembly 1008, and a power supply 1009.
Peripheral interface 1003 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1001 and the memory 1002. In some embodiments, the processor 1001, the memory 1002, and the peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1004 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1005 is a touch screen, the display 1005 also has the ability to capture touch signals at or above the surface of the display 1005. The touch signal may be input to the processor 1001 as a control signal for processing. At this time, the display 1005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 1005 may be one, disposed on the front panel of the electronic device 1000; in other embodiments, the display 1005 may be at least two, respectively disposed on different surfaces of the electronic device 1000 or in a folded design; in other embodiments, the display 1005 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 1000. Even more, the display 1005 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 1005 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1006 is used to capture images or video. Optionally, the camera assembly 1006 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1006 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1001 for processing, or inputting the electric signals to the radio frequency circuit 1004 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple and separately disposed at different locations of the electronic device 1000. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 1007 may also include a headphone jack.
The location component 1008 is used to locate a current geographic location of the electronic device 1000 to enable navigation or LBS (Location Based Service, location-based services). The positioning component 1008 may be a GPS (Global Positioning System ), beidou system or galileo system based positioning component.
The power supply 1009 is used to power the various components in the electronic device 1000. The power source 1009 may be alternating current, direct current, disposable battery or rechargeable battery. When the power source 1009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 1000 also includes one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyroscope sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
The acceleration sensor 1011 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the electronic apparatus 1000. For example, the acceleration sensor 1011 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1001 may control the display screen 1005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1011. The acceleration sensor 1011 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1012 may detect a body direction and a rotation angle of the electronic apparatus 1000, and the gyro sensor 1012 may collect a 3D motion of the user on the electronic apparatus 1000 in cooperation with the acceleration sensor 1011. The processor 1001 may implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1013 may be disposed at a side frame of the electronic device 1000 and/or at an underlying layer of the display 1005. When the pressure sensor 1013 is provided at a side frame of the electronic apparatus 1000, a grip signal of the electronic apparatus 1000 by a user can be detected, and the processor 1001 performs right-and-left hand recognition or quick operation according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is provided at the lower layer of the display screen 1005, the processor 1001 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect a fingerprint of the user, and the processor 1001 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1014 may be disposed on the front, back, or side of the electronic device 1000. When a physical key or vendor Logo is provided on the electronic device 1000, the fingerprint sensor 1014 may be integrated with the physical key or vendor Logo.
The optical sensor 1015 is used to collect ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the display screen 1005 based on the ambient light intensity collected by the optical sensor 1015. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1005 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1005 is turned down. In another embodiment, the processor 1001 may dynamically adjust the shooting parameters of the camera module 1006 according to the ambient light intensity collected by the optical sensor 1015.
A proximity sensor 1016, also referred to as a distance sensor, is typically provided on the front panel of the electronic device 1000. The proximity sensor 1016 is used to capture the distance between the user and the front of the electronic device 1000. In one embodiment, when the proximity sensor 1016 detects a gradual decrease in the distance between the user and the front of the electronic device 1000, the processor 1001 controls the display 1005 to switch from the bright screen state to the off screen state; when the proximity sensor 1016 detects that the distance between the user and the front surface of the electronic apparatus 1000 gradually increases, the processor 1001 controls the display screen 1005 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 10 is not limiting of the electronic device 1000 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of displaying content, the method comprising:
acquiring the position of a first camera, wherein the first camera is a camera with faults in a plurality of cameras included in a first automobile;
displaying a first model corresponding to the first automobile in a panoramic display of the first automobile;
acquiring a target position corresponding to the first camera on the first vehicle model based on the position of the first camera;
And displaying a fault mark at the target position of the first vehicle model, wherein the fault mark is used for indicating that the first camera breaks down.
2. The method of claim 1, wherein the first car further comprises at least one ultrasonic radar, the method further comprising:
detecting object information of an obstacle in the vicinity of the first automobile by the at least one ultrasonic radar, the object information including a type of the obstacle, a position of the obstacle, and a size of the obstacle;
determining a first display location on a panoramic display of the first automobile based on the location of the obstacle;
and displaying an obstacle model corresponding to the obstacle at the first display position based on the type and the size of the obstacle, wherein the relative position between the first vehicle model and the obstacle model is the same as the relative position between the first vehicle and the obstacle.
3. The method of claim 2, wherein the detecting object information of an obstacle in the vicinity of the first car by the at least one ultrasonic radar comprises:
and detecting object information of an obstacle near the first automobile through the at least one ultrasonic radar when the speed of the first automobile is lower than a speed threshold value.
4. A method according to any one of claims 1-3, wherein the first car further comprises at least one millimeter wave radar, the method further comprising:
detecting vehicle information of a second vehicle positioned behind the first vehicle by the at least one millimeter wave radar, wherein the running direction of the first vehicle is the same as that of the second vehicle, and the vehicle information comprises the vehicle type of the second vehicle and the position of the second vehicle;
determining a second display location on a panoramic display of the first car based on the location of the second car;
and displaying a second model corresponding to the second automobile at the second display position based on the automobile type of the second automobile, wherein the relative position between the first model and the second model is the same as the relative position between the first automobile and the second automobile.
5. An apparatus for displaying content, the apparatus comprising:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring the position of a first camera, and the first camera is a camera with faults in a plurality of cameras included in a first automobile;
the display module is used for displaying a first vehicle model corresponding to the first vehicle in the panoramic display of the first vehicle;
The acquisition module is further used for acquiring a target position corresponding to the first camera on the first vehicle model based on the position of the first camera;
the display module is further used for displaying a fault mark at the target position of the first vehicle model, and the fault mark is used for indicating that the first camera breaks down.
6. The apparatus of claim 5, wherein the first vehicle further comprises at least one ultrasonic radar,
the acquisition module is further used for detecting object information of an obstacle near the first automobile through the at least one ultrasonic radar, wherein the object information comprises the type of the obstacle, the position of the obstacle and the size of the obstacle;
the acquisition module is further used for determining a first display position on a panoramic display of the first automobile based on the position of the obstacle;
the display module is further configured to display an obstacle model corresponding to the obstacle at the first display position based on the type and the size of the obstacle, where a relative position between the first vehicle model and the obstacle model is the same as a relative position between the first vehicle and the obstacle.
7. The apparatus of claim 6, wherein the display module is to:
and detecting object information of an obstacle near the first automobile through the at least one ultrasonic radar when the speed of the first automobile is lower than a speed threshold value.
8. The apparatus of any one of claims 5-7, wherein the first car further comprises at least one millimeter wave radar,
the acquiring module is further configured to detect, by using the at least one millimeter wave radar, vehicle information of a second vehicle located behind the first vehicle, where a driving direction of the first vehicle is the same as a driving direction of the second vehicle, and the vehicle information includes a vehicle type of the second vehicle and a position of the second vehicle;
the acquisition module is further used for determining a second display position on the panoramic display of the first automobile based on the position of the second automobile;
the display module is further configured to display a second model corresponding to the second automobile at the second display position based on a model of the second automobile, where a relative position between the first model and the second model is the same as a relative position between the first automobile and the second automobile.
9. An apparatus for displaying content, the apparatus comprising:
at least one processor configured to be coupled with a memory, and to read and execute instructions in the memory to implement the method of any one of claims 1-4.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-4.
CN202310003404.5A 2023-01-03 2023-01-03 Method, device and storage medium for displaying content Pending CN116170694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310003404.5A CN116170694A (en) 2023-01-03 2023-01-03 Method, device and storage medium for displaying content

Publications (1)

Publication Number Publication Date
CN116170694A true CN116170694A (en) 2023-05-26

Family

ID=86417597

Country Status (1)

Country Link
CN (1) CN116170694A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination