WO2022143181A1 - Information processing method and apparatus, and information processing system - Google Patents

Information processing method and apparatus, and information processing system

Info

Publication number
WO2022143181A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
information
data
real
predetermined
Prior art date
Application number
PCT/CN2021/138526
Other languages
English (en)
Chinese (zh)
Inventor
聂兰龙
Original Assignee
青岛千眼飞凤信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛千眼飞凤信息技术有限公司 filed Critical 青岛千眼飞凤信息技术有限公司
Publication of WO2022143181A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior

Definitions

  • the present application relates to the technical field of information processing, and in particular, to an information processing method and device, and an information processing system.
  • the technologies that provide positioning services for consumers mainly include satellite positioning and wireless base-station positioning; their positioning accuracy is about 10 m, and they can only provide positioning services to service recipients without offering any further information.
  • the embodiments of the present application provide an information processing method and apparatus, and an information processing system, to at least solve the technical problem in the related art that systems providing positioning services for service recipients can only provide positioning and thus have a relatively single function.
  • an information processing method is provided, comprising: determining that a target has entered a predetermined area; in response to a relay tracking thread for the target, acquiring real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area and to generate the real-time position data based on frame image information collected by the at least one sampling device; determining specific information based on the real-time location information; and sending the specific information to a terminal device or to the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
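  • By way of illustration, the claimed flow (S102 to S108) can be sketched in a few lines of Python. Everything here, the frame format, the detection stub, and the push step, is an invented stand-in rather than the disclosed implementation; it only shows the order of operations.

```python
def detect(frame, target_id):
    """Stand-in for image recognition on one camera frame."""
    return target_id in frame["visible"]

def derive_specific_info(position):
    """S106: direction, risk, media, or path data would be derived here."""
    return {"position": position}

def run(camera_streams, target_id):
    for frames in zip(*camera_streams):              # one frame per camera per tick
        seen = [f for f in frames if detect(f, target_id)]
        if not seen:
            continue                                 # S102: not (yet) in the area
        position = seen[0]["positions"][target_id]   # S104: real-time position data
        specific = derive_specific_info(position)    # S106: specific information
        print("S108: push to terminal ->", specific)

cam_a = [{"visible": [], "positions": {}},
         {"visible": ["t1"], "positions": {"t1": (5.0, 4.0)}}]
cam_b = [{"visible": ["t1"], "positions": {"t1": (3.0, 4.0)}},
         {"visible": [], "positions": {}}]
run([cam_a, cam_b], "t1")   # cam_b sees the target first, then hands off to cam_a
```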
  • determining that the target has entered the predetermined area includes: acquiring frame image data collected by the at least one sampling device; performing image recognition on the frame image data to obtain a recognition result; and, if identification information of the target is present in the recognition result, determining that the target has entered the predetermined area.
  • the information processing method further includes: identifying, from the target, identification information for identifying the target, which includes: identifying biological information and/or non-biological information of the target from the collected images, and using the biological information and/or non-biological information as the identification information. The non-biological information includes at least one of the following: the outline of the target, the color of the target, text on the target, and an identification code of the target; the biological features of the target include one of the following: facial features and body features.
  • the information processing method further includes: determining initial position information of the target, which includes at least one of the following: acquiring sampling information and generating the target's initial position information based on the sampling information, wherein the sampling information is obtained from the at least one sampling device and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; or acquiring predetermined terminal information of the terminal device and determining the target's initial position information based on the predetermined terminal information.
  • the specific information includes at least one of the following: direction data of the target at the position corresponding to the real-time location information, risk prediction data of the target in the predetermined area, media resources, navigation path data, and driving instructions, wherein the media resources are associated with predetermined location data.
  • sending the specific information to the terminal device includes: sending the direction data and media resources in the specific information to the terminal device, wherein the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing the voice navigation information; and playing the media resource.
  • sending the specific information to the target includes: determining that the target has activated an automatic driving mode or a controlled driving mode; and sending the specific information to the target. When the target activates the automatic driving mode, the target generates a driving instruction based on at least one of the following: direction data in the specific information, risk prediction data in the specific information, navigation path data in the specific information, sensing data, and state data, and runs based on the driving instruction, the sensing data and state data being data sensed by the target; when the target starts the controlled driving mode, the specific information carries a driving instruction, and the target operates based on that instruction.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • an information processing device is also provided, including: a first determination unit configured to determine that a target enters a predetermined area; an acquisition unit configured to, in response to a relay tracking thread for the target, acquire the real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area and to generate the real-time position data based on the frame image information collected by the at least one sampling device; a second determining unit configured to determine specific information based on the real-time position information; and a sending unit configured to send the specific information to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • the first determination unit includes: a first acquisition module configured to acquire frame image data collected by the at least one sampling device; a first identification module configured to perform image recognition on the frame image data to obtain an identification result; and a first determination module configured to determine that the target has entered the predetermined area when the identification information of the target is present in the identification result.
  • the information processing device further includes: an identification unit configured to identify, from the target, identification information for identifying the target before it is determined that the target enters the predetermined area; wherein the identification unit includes: a second identification module configured to identify biological information and/or non-biological information of the target from the collected images; and a second determination module configured to use the biological information and/or non-biological information as the identification information. The non-biological information includes at least one of the following: the outline of the target, the color of the target, text on the target, and an identification code of the target; the biological features of the target include one of the following: facial features and body features.
  • the information processing apparatus further includes: a third determination unit configured to determine initial position information of the target; wherein the third determination unit includes at least one of the following: a second acquisition module configured to acquire sampling information and generate initial position information of the target based on the sampling information, wherein the sampling information is obtained from the at least one sampling device and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; and a third determining module configured to acquire predetermined terminal information of the terminal device and determine the initial position information of the target based on the predetermined terminal information.
  • the specific information includes at least one of the following: direction data of the target at the position corresponding to the real-time location information, risk prediction data of the target in the predetermined area, media resources, navigation path data, and driving instructions, wherein the media resources are associated with predetermined location data.
  • the sending unit includes: a sending module configured to send the direction data and media resources in the specific information to a terminal device, wherein the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing the voice navigation information; and playing the media resource.
  • the sending unit includes: a fourth determining module configured to determine that the target activates an automatic driving mode or a controlled driving mode; and a sending module configured to send the specific information to the target. When the target activates the automatic driving mode, the target generates a driving instruction based on at least one of the following: direction data in the specific information, risk prediction data in the specific information, navigation path data in the specific information, sensing data, and state data, and runs based on the driving instruction, the sensing data and state data being data sensed by the target; when the target starts the controlled driving mode, the specific information carries a driving instruction, and the target operates based on that instruction.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • a server is also provided, applied to any one of the information processing methods described above, including: an identification and positioning unit for identifying a target, determining its initial position data, and generating the target's initial position information based on that data; a relay tracking unit for relay-tracking the target between at least one sampling device in a predetermined area and generating the target's real-time position information; a direction feature unit for generating the target's direction information from the real-time position information; a risk prediction unit for judging, through predetermined rules, whether the target is at risk based on the real-time position information, and generating the target's risk prediction data in the predetermined area; a location navigation unit for generating navigation path data based on the real-time location information; and a media association unit for retrieving media resources corresponding to the target's real-time location information; wherein one or more of the target's direction information, the target's risk prediction data in the predetermined area, the navigation path data, and the media resources constitute the specific information.
  • a terminal device is also provided, applied to any one of the information processing methods described above, including: a receiving module for receiving specific information; a processing unit for generating predetermined information based on the specific information; and an execution unit configured to respond to the specific information and/or the predetermined information.
  • an information processing system is also provided, applied to any one of the information processing methods described above, including: at least one sampling device for collecting frame image information of a target;
  • and a server for generating real-time position information of the target based on the frame image information, determining specific information based on the real-time position information, and sending the specific information to a terminal device or the target, where the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • a computer-readable storage medium is also provided, which includes a stored computer program, wherein when the computer program is run by a processor, the device where the storage medium is located is controlled to execute any one of the information processing methods described above.
  • a processor is also provided, and the processor is configured to run a computer program, wherein, when the computer program runs, any one of the information processing methods described above is executed.
  • in the foregoing method, it is determined that the target enters the predetermined area; in response to the relay tracking thread for the target, the real-time location information generated by the relay tracking thread is obtained, the relay tracking thread being used to relay-track the target between at least one sampling device in the predetermined area and to generate the real-time position data from the collected frame image information; specific information is determined from the real-time location information and sent to the terminal device or the target, which generates predetermined information from it and responds to that information. The information processing method provided by the embodiments of the present application thereby achieves the purpose of pushing the information the target needs in its current environment to the target, or to a device related to the target, according to the target's real-time location data.
  • FIG. 1 is a flowchart of an information processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an information processing method according to an embodiment of the present application.
  • FIG. 3(a) is a control schematic diagram of a controlled traveling device according to an embodiment of the present application.
  • FIG. 3(b) is a control schematic diagram of an automatic driving device according to an embodiment of the present application.
  • FIG. 4 is a control schematic diagram of a navigation device according to an embodiment of the present application.
  • FIG. 5 is a control schematic diagram of a playback device according to an embodiment of the present application.
  • FIG. 6 is a control schematic diagram of an intelligent portable device according to an embodiment of the present application.
  • FIG. 7 is a control schematic diagram of an automatic driving vehicle according to an embodiment of the present application.
  • FIG. 8 is a control schematic diagram of a controlled driving vehicle according to an embodiment of the present application.
  • FIG. 9(a) is a schematic diagram of a museum exhibition hall according to an embodiment of the present application.
  • FIG. 9(b) is a schematic diagram of an information processing method based on a smartphone according to an embodiment of the present application.
  • FIG. 9(c) is a schematic diagram of an information processing method based on a navigator according to an embodiment of the present application.
  • FIG. 10(a) is a schematic diagram of a scene of an autonomous driving vehicle according to an embodiment of the present application.
  • FIG. 10(b) is a control schematic diagram of an autonomous driving vehicle according to an embodiment of the present application.
  • FIG. 10(c) is a control schematic diagram of a controlled driving vehicle according to an embodiment of the present application.
  • FIG. 11(a) is a schematic diagram of a public access area of a living community according to an embodiment of the present application.
  • FIG. 11(b) is a control schematic diagram of an automatic aircraft according to an embodiment of the present application.
  • FIG. 11(c) is a control schematic diagram of a controlled robot according to an embodiment of the present application.
  • FIG. 12(a) is a schematic diagram of an operation scenario of a controlled traveling manipulator according to an embodiment of the present application.
  • FIG. 12(b) is a control schematic diagram of a controlled traveling manipulator according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an information processing apparatus according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a server according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • FIG. 16 is a schematic diagram of an information processing system according to an embodiment of the present application.
  • FIG. 1 is a flowchart of an information processing method according to an embodiment of the present application. As shown in FIG. 1, the information processing method comprises the following steps:
  • Step S102: it is determined that the target enters the predetermined area.
  • the above-mentioned predetermined area may be a public area of a living community, a construction site, a road, or a museum exhibition hall in which at least one sampling device is installed, and so on.
  • the above-mentioned target may be a vehicle entering the above-mentioned road where at least one sampling device is installed, a visitor entering the above-mentioned museum exhibition hall where at least one sampling device is installed, or a manipulator working on the above-mentioned construction site where at least one sampling device is installed.
  • determining that the target enters the predetermined area includes: acquiring frame image data collected by at least one sampling device; performing image recognition on the frame image data to obtain a recognition result; and, if identification information of the target is present in the recognition result, determining that the target has entered the predetermined area.
  • Step S104: in response to the relay tracking thread for the target, the real-time position information of the target generated by the relay tracking thread is obtained, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area and to generate real-time position data based on the frame image information captured by the at least one sampling device.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • in an embodiment in which the above sampling device is a camera, each camera has a fixed position and shooting angle; the imaging quality of the cameras and their arrangement density determine the accuracy of target positioning. The required positioning accuracy can be achieved by methods such as improving the cameras' imaging resolution, increasing their arrangement density, and reducing the distance between the cameras and the target.
  • relay tracking tasks can use single-camera positioning with single-line relay tracking or multi-camera positioning with multi-line relay tracking, according to the requirements of the application scenario: in scenarios with low precision requirements, a single camera can locate the target and map it to an approximate position in a two-dimensional coordinate system, while in scenarios with high precision requirements, multiple cameras can locate the target and map it to an accurate position in a three-dimensional coordinate system.
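  • As a concrete illustration of the single-camera, two-dimensional case: since each camera's position and shooting angle are fixed, a one-time calibration can map pixel coordinates to floor coordinates with a homography. The sketch below uses OpenCV for this; the calibration points and the assumption that the target's feet are detected in the image are illustrative, not taken from the disclosure.

```python
import numpy as np
import cv2

# Four reference points measured once per fixed camera:
# pixel coordinates -> floor coordinates in metres (invented values).
pixel_pts = np.array([[100, 700], [1180, 690], [900, 300], [350, 310]], dtype=np.float32)
floor_pts = np.array([[0, 0], [8, 0], [8, 12], [0, 12]], dtype=np.float32)
H, _ = cv2.findHomography(pixel_pts, floor_pts)

def pixel_to_floor(u, v):
    """Project an image detection (e.g. a person's feet) onto the floor plane."""
    point = np.array([[[u, v]]], dtype=np.float32)
    x, y = cv2.perspectiveTransform(point, H)[0, 0]
    return float(x), float(y)

print(pixel_to_floor(640, 500))   # approximate 2-D position in metres
```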
  • Step S106: specific information is determined based on the real-time location information.
  • the specific information is determined from the target's real-time location information; it can be, for example, a media resource for the exhibit currently viewed by a tourist in a museum exhibition hall, direction data or navigation data sent to a tourist visiting the exhibition hall, or risk prediction data sent to an autonomous vehicle.
  • Step S108: the specific information is sent to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • the above-mentioned specific information can be sent to terminal devices carried by the user (e.g., mobile phones, iPads, autonomous navigation devices), automatic driving devices, manipulators, and the like, so that these targets or terminal devices can perform predetermined operations.
  • FIG. 2 is a schematic diagram of an information processing method according to an embodiment of the present application.
  • the sampling devices can collect frame image information containing the identification features, frame image information containing the target, and frame image information from the cameras in the target's near-field area.
  • the service system can confirm the identification method through the identification and positioning unit and present the identification to the human-computer interaction device, after which the interactive service mode is started. The identification and positioning unit of the service system identifies the characteristic information of the target and determines the initial coordinate position of the served target; the relay tracking unit starts the relay tracking thread to generate real-time target position data; the direction feature unit starts the target direction feature recognition task and generates target direction data in real time; the risk prediction unit starts live analysis of the target's near-field cameras to generate target risk prediction data; the navigation path unit starts live analysis of the target's far-field cameras and the cameras along the route to generate navigation path data; and the media resource unit extracts the corresponding media resources in response to the target position data. After the service system sends the information collected above to the human-computer interaction device, the user interaction unit receives the real-time target position data, real-time target direction data, risk prediction data, navigation path data, and corresponding media resources, aggregates them to generate interactive information, and displays or plays the interactive information to the user.
  • the service system can obtain real-time sampling information, and the real-time sampling information can be information collected by a sampling device.
  • the sampling device includes at least a plurality of cameras, and the service system determines the initial position data of the target according to the real-time sampling information;
  • a relay tracking task thread for the target is then started, which relay-tracks the target among the multiple sampling devices and generates the target's real-time position data; the real-time position data is then sent to the mobile terminal through the wireless network.
  • according to the target's real-time position data, the service system can extract the target direction data at the target's location, and/or target risk prediction data, and/or target navigation path data, and/or media resources, and send them to the mobile terminal through the wireless network.
  • in the information processing method, it is determined that the target enters the predetermined area; in response to the relay tracking thread for the target, the real-time position information generated by the relay tracking thread is obtained, wherein the relay tracking thread relay-tracks the target between at least one sampling device in the predetermined area and generates real-time position data based on the frame image information collected by the at least one sampling device; specific information is determined based on the real-time position information and sent to the terminal device or target, which generates predetermined information based on the specific information and responds to it. This realizes the purpose of pushing the information the target needs in its current environment to the target, or to a device related to the target, according to the target's real-time location data, achieving the technical effect of improving the flexibility of the positioning service system and also improving its applicability.
  • the information processing method provided by the embodiments of the present application solves the technical problem that the system for providing positioning services for service recipients in the related art can only provide positioning services for users and has a relatively single function.
  • before it is determined that the target enters the predetermined area, the information processing method further includes: identifying, from the target, identification information for identifying the target, which includes: identifying the biological information and/or non-biological information of the target from the collected images, and using the biological information and/or non-biological information as the identification information. The non-biological information includes at least one of the following: the outline of the target, the color of the target, text on the target, and an identification code of the target; the biological features of the target include one of the following: facial features and body features.
  • in this way, the identification information used to represent the target can be recognized; when push information is subsequently sent to the target, this identification can be used as matching information.
  • the above-mentioned image features can be facial features, object outline features, color features, text features, two-dimensional codes, barcodes, etc. For example, color features can be identified to determine an initial position, digital badge features can be used to determine the initial position of a target person, and the shape features of a target device can be used to determine its initial position.
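  • For the identification-code case, a detector for a standard code can be run on the collected frames. The sketch below reads a two-dimensional code with OpenCV's QR detector, purely as one plausible realization; the disclosure does not mandate a specific code format, and the file name is hypothetical.

```python
import cv2

def read_identification_code(frame_bgr):
    """Return the decoded code text and its pixel location, if present."""
    detector = cv2.QRCodeDetector()
    text, points, _ = detector.detectAndDecode(frame_bgr)
    if text:
        # Centroid of the code's corners approximates the target's image position.
        center = points.reshape(-1, 2).mean(axis=0)
        return text, (float(center[0]), float(center[1]))
    return None, None

frame = cv2.imread("hall_camera_frame.png")   # hypothetical captured frame
if frame is not None:
    print(read_identification_code(frame))
```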
  • the information processing method further includes: determining initial position information of the target, which includes at least one of the following: acquiring sampling information and generating the target's initial position information based on the sampling information, wherein the sampling information is obtained from at least one sampling device and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; or acquiring predetermined terminal information of the terminal device and determining the target's initial position information based on the predetermined terminal information.
  • in some embodiments, the initial position data of the target can be determined and a tracking task thread for the target started; the tracking task thread performs relay tracking among multiple sampling devices and generates the target's real-time position data. The real-time position data, and/or the target direction data, and/or the target risk prediction data, and/or the target navigation path data are then aggregated to generate a driving instruction and/or a work instruction, and the driving instruction and/or work instruction is sent to the mobile terminal through a wireless network.
  • Fig. 3(a) is a schematic control diagram of a controlled traveling device according to an embodiment of the present application.
  • the information processing method may include the following steps in addition to some of the steps shown in Fig. 2:
  • the controlled traveling equipment requests the controlled service from the service system, and the identification and positioning unit of the service system determines the identification method and displays the identification features; the identification and positioning unit identifies the characteristic information of the target, determines the initial coordinate position of the served target, and sends information to the controlled traveling equipment to trigger it to start the controlled mode; the controlled traveling equipment then sends its status data and real-time sensing data to the server.
  • the planning control unit of the service system can aggregate the real-time target position data, real-time target direction data, risk prediction data, navigation path data, and the controlled device's status and sensing data to generate driving instructions and work instructions, and sends the generated instructions to the controlled traveling equipment.
  • the initial position data of the target can be determined by the identification feature of the target; the identification feature is the feature information used to determine the target, and can be one of image features, visible light stroboscopic features, target action features, and predetermined position features.
  • the visible light stroboscopic feature is an identification code information feature, and the stroboscopic signal contains a coded representation.
  • the visible light stroboscopic feature is generated by a mobile terminal and can be separated and identified in video frames, since the stroboscopic signal contains a coded representation.
  • the stroboscopic signal can be an alternating light and dark signal, or a color change signal; the stroboscopic signal can be generated by a signal light or a display screen.
  • for example, a smartphone can be identified and located by the service system through the flashing of its fill light or a color change of its display screen; a car can be identified and located through the flashing of its headlights; and a UAV can be identified and located through the strobing of its signal lights.
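  • A minimal sketch of how such a strobe code might be recovered from video, assuming the candidate region's mean brightness has already been extracted per frame; the bit timing and threshold are invented, and a real system would also need synchronization and error handling.

```python
def decode_strobe(brightness, frames_per_bit=3):
    """Turn per-frame brightness of a flashing region into code bits."""
    mid = (max(brightness) + min(brightness)) / 2   # crude light/dark threshold
    bits = []
    for i in range(0, len(brightness) - frames_per_bit + 1, frames_per_bit):
        window = brightness[i:i + frames_per_bit]
        bits.append(1 if sum(window) / len(window) > mid else 0)
    return bits

issued_code = [1, 0, 1, 1, 0, 1, 0, 0]     # code the service system expects
samples = [210, 205, 208, 40, 42, 39, 206, 210, 207, 209, 208, 211,
           38, 41, 40, 207, 206, 209, 43, 40, 41, 39, 42, 40]
if decode_strobe(samples) == issued_code:
    print("strobe matches: this detection is the served target")
```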
  • the target action feature is an identification code information feature, wherein the action feature includes a predetermined rule representation, and the service system determines the initial position of the target by identifying the target's predetermined action. For example, the waving action of the target person, the finger action of the target person, the nodding action of the target person, the forward and backward movement of the target device, the swinging rocker arm action of the target device, and so on.
  • the above-mentioned predetermined position feature is also a kind of identification code information: the predetermined position corresponds to a preset code in the system, and the initial position of the target is determined by moving the target to the predetermined identification position. For example, when a target person passes through a ticket gate, the service system identifies and initially locates the person according to the known location coordinates of the gate; when a target car passes through a predetermined vehicle passage, the service system identifies and initially locates the vehicle according to the known coordinates of the passage; and when a robot is at a predetermined charging position, the service system identifies and initially locates the robot according to the known coordinates of the charging position.
  • the mobile terminal may be a human-computer interaction device with a wireless connection function, or an automatic driving device with a wireless connection function, or a controlled driving device with a wireless connection function.
  • the wireless connection function may be wireless local area network communication, cellular network communication or visible light stroboscopic communication, and the service system and the mobile terminal may be connected internally through the local area network or externally connected through the Internet.
  • the above-mentioned human-computer interaction device can be a smart portable device, a wearable smart device, a VR smart device, an AR smart device, or a car-mounted smart device, such as a smart phone, smart watch, smart glasses, smart headset, or car navigation.
  • Figure 3(b) is a schematic diagram of the control of an automatic driving device according to an embodiment of the present application.
  • the information processing method may include the following steps in addition to some of the steps shown in Figure 2:
  • the automatic driving equipment requests service from the service system, and the identification and positioning unit of the service system determines the identification method and displays the identification features; the identification and positioning unit identifies the characteristic information of the target, determines the initial coordinate position of the served target, and sends information to the automatic driving equipment to trigger it to start the automatic driving mode; the automatic driving equipment then sends its state data and real-time sensing data to the server.
  • the above-mentioned automatic driving equipment and controlled driving equipment can be vehicles such as passenger cars, trucks, forklifts, turnover transport vehicles, agricultural machinery, sanitation machinery, automatic wheelchairs, and balance vehicles; mechanical equipment such as walking robots and mobile robots; or flying equipment such as helicopters and unmanned aerial vehicles.
  • the specific information includes at least one of the following: direction data of the target at the position corresponding to the real-time location information, risk prediction data of the target in the predetermined area, media resources, navigation path data, and driving instructions, wherein the media resources are associated with predetermined location data.
  • the target direction data is the current direction data of the tracked target, determined and generated by the direction feature unit in the service system through analysis of the real-time sampling information; examples include the direction of the target person's body or face, the directions of the target person's left and right hands, the heading of the target car, the direction of the target robot's front end, and the working direction of the target manipulator.
  • by sending the target direction data to the mobile terminal, the target's direction and posture can be adjusted precisely. For example, after a walking robot falls, sending the falling-direction data to the robot can help it correct its posture.
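  • One simple way a direction feature unit could derive travel-direction data when no pose model is available is from two consecutive tracked positions; this is an illustrative assumption, not the disclosed method.

```python
import math

def heading_degrees(prev_pos, cur_pos):
    """Travel heading from consecutive floor positions: 0 = +x, counter-clockwise."""
    dx, dy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360

print(heading_degrees((5.0, 4.0), (5.0, 6.0)))   # 90.0: moving along +y
```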
  • the target risk prediction data is generated by the risk prediction unit, which aggregates the real-time sampling information of the cameras in the moving target's near field and judges, according to predetermined rules, whether there is an accident risk.
  • for example, multiple cameras are installed in a UAV flight area, and obstacles in the flight area are predicted in advance through the sampling information of the multiple cameras to generate risk prediction data, which can increase flight speed while avoiding collision accidents.
  • as another example, multiple cameras are installed on a road, and the relay tracking unit relay-tracks multiple moving targets through the sampling information of the cameras, generating tracking data; the moving targets can be vehicles, pedestrians, animals, or unknown moving objects. Through real-time tracking, the near-field data of the served target vehicle's area is analyzed and the risk prediction data of the served target vehicle is generated.
  • the service provider can also arrange sampling cameras in the area surrounding a highway to give early warning of moving objects that may enter the highway and affect traffic safety. By sending the target risk prediction data to the mobile terminal, the blind spots of the driving equipment can be eliminated and risk accidents avoided.
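  • A rule-based risk pre-judgment of the kind described can be sketched as follows: extrapolate every tracked object a few seconds ahead and flag any that comes within a safety radius of the served vehicle. The constant-velocity model, horizon, and radius are illustrative assumptions rather than the disclosed rules.

```python
def position_at(track, t):
    """Constant-velocity extrapolation; track = (x, y, vx, vy)."""
    x, y, vx, vy = track
    return x + vx * t, y + vy * t

def risk_predicted(vehicle, others, horizon=3.0, radius=2.0, step=0.5):
    t = 0.0
    while t <= horizon:
        vx, vy = position_at(vehicle, t)
        for other in others:
            ox, oy = position_at(other, t)
            if (vx - ox) ** 2 + (vy - oy) ** 2 < radius ** 2:
                return True          # predicted conflict -> emit risk data
        t += step
    return False

vehicle = (0.0, 0.0, 5.0, 0.0)        # served vehicle heading +x at 5 m/s
pedestrian = (12.0, -4.0, 0.0, 2.0)   # tracked object crossing toward the road
print(risk_predicted(vehicle, [pedestrian]))   # True: warn the vehicle
```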
  • the navigation path information is generated by the navigation path unit by analyzing live information from the far-field cameras and the cameras along the moving target's route. Analyzing this real-time sampling information overcomes the limitations mobile terminals face in acquiring real-time information and makes the navigation path better reflect real-time changes in the environment.
  • media resources are resources such as images, audio, graphics, and text preset in the service system, and each media resource is associated with fixed location data; for example, a first media resource is associated with a first set of location data, and a second media resource is associated with a second set of location data.
  • the media resource may be introduction information associated with the fixed location, advertisement information associated with the fixed location, music information associated with the fixed location, or VR or AR image information associated with the fixed location.
  • for example, in response to the real-time location data of the relay-tracked target, the service system pushes to the human-computer interaction device a hyperlink to the audio and graphic introduction information associated with the corresponding location data; or it pushes the advertisement information of a store product associated with the location data; or, in response to the tracked target's position, it pushes the product function introduction information associated with the location data. By pushing media resources in this way, preset media resources associated with the corresponding locations can be accurately pushed to the service objects.
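  • The location-to-media association can be as simple as a table of fixed coordinates with a matching threshold per entry, as sketched below; the entries and radii are invented values.

```python
# Media resources keyed by fixed location data (invented example entries).
MEDIA = [
    {"pos": (3.0, 4.0), "radius": 1.5, "resource": "exhibit_12_intro.mp3"},
    {"pos": (9.0, 2.0), "radius": 2.0, "resource": "store_ad_07.mp4"},
]

def media_for(position):
    """Return the media resource whose location matches within its threshold."""
    x, y = position
    for entry in MEDIA:
        ex, ey = entry["pos"]
        if (x - ex) ** 2 + (y - ey) ** 2 <= entry["radius"] ** 2:
            return entry["resource"]   # push this to the terminal device
    return None

print(media_for((3.5, 4.2)))   # -> exhibit_12_intro.mp3
```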
  • the driving instruction is generated by the planning control unit by aggregating, in real time, the target's real-time position data, target direction data, target risk prediction data, target navigation path data, mobile terminal status data, and mobile terminal sensing data.
  • the mobile terminal status data may be device parameters, energy value, load value, wireless signal strength value, fault status, and the like.
  • the planning control unit can generate different driving commands according to different equipment parameters. For example, different brands of cars use different driving command interfaces, and the driving parameters corresponding to cars of the same brand with different configurations will also be different.
  • the planning control unit can generate different driving instructions according to different energy values. For example, a fuel vehicle needs to drive in a fuel-saving mode when its tank is low, while an unmanned aerial vehicle can use a high-performance mode when its power is sufficient.
  • the planning control unit can generate different driving instructions according to different load values. For example, a truck requires different driving modes when empty and when heavily loaded, and a dangerous-goods transport vehicle such as a tanker needs a special driving mode.
  • the planning control unit can generate different driving instructions according to different wireless signal strength values. For example, when the wireless signal strength value is low, a conservative driving mode needs to be adopted.
  • the mobile terminal sensing data is the data collected by the sensors configured on the mobile terminal equipment and can serve as supplementary data in the planning control unit's aggregation; the sensors can be image sensors, radar sensors, acceleration sensors, GPS receivers, electronic compass sensors, and the like.
  • when the planning control unit is configured in the service system, the service system wirelessly transmits driving instructions or work instructions to the mobile terminal, and the mobile terminal executes the instructions issued by the service system; when the planning control unit is configured in the mobile terminal, the service system wirelessly transmits the target's real-time location data, direction data, risk prediction data, or navigation path information to the mobile terminal, and the mobile terminal aggregates these data and then generates and executes the driving instructions and work instructions.
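  • A planning control unit aggregating these inputs might look like the sketch below. The priority ordering (risk, then signal strength, then energy, then navigation) and all thresholds are illustrative assumptions; the disclosure only specifies which data are aggregated, not how.

```python
def plan(position, heading, risk, path, status, obstacle_sensed):
    """Aggregate tracked state and terminal data into one driving instruction."""
    if risk or obstacle_sensed:
        return {"cmd": "stop", "reason": "risk prediction / on-board sensing"}
    if status["signal_strength"] < 0.2:
        return {"cmd": "slow", "speed": 1.0}         # conservative driving mode
    speed = 2.0 if status["energy"] < 0.15 else 5.0  # energy-saving mode if low
    waypoint = path[0] if path else position
    return {"cmd": "goto", "waypoint": waypoint, "heading": heading, "speed": speed}

status = {"signal_strength": 0.8, "energy": 0.6}
print(plan((5.0, 4.0), 90.0, False, [(5.0, 10.0)], status, False))
```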
  • sending the specific information to the terminal device includes: sending the direction data and media resources in the specific information to the terminal device, wherein the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing the voice navigation information; and playing the media resources.
  • in an embodiment, real-time sampling information can be obtained and the initial position data of the target determined from the real-time frame image information; the relay tracking task thread for the target is started and generates the target's real-time position data. According to the target's real-time location data, the audio resources associated with that location are extracted; the audio resources are arranged in the media resource unit, which is set in the service system and/or in the playback device, and the extracted associated audio resources are used for the playback task of the playback device.
  • FIG. 4 is a schematic control diagram of a navigation device according to an embodiment of the present application.
  • real-time sampling information is obtained, namely the frame image information collected by a plurality of cameras; the initial position data of the target is determined from the real-time frame image information; a relay tracking task thread for the target is started, which relay-tracks the target among the multiple cameras and generates the target's real-time position data; the real-time position data is sent to the navigation device through the wireless network;
  • the navigation device uses the real-time position data to generate navigation information, which it then presents; in this way the navigation device can be controlled.
  • FIG. 5 is a schematic control diagram of a playback device according to an embodiment of the present application.
  • real-time sampling information is obtained, namely frame image information collected by multiple cameras; the initial position data of the target is determined from the real-time frame image information; a relay tracking task thread for the target is started, which relay-tracks the target among the multiple cameras and generates the target's real-time position data; according to the real-time position data, the associated audio resources are extracted; the audio resources are preset in the media resource unit, which is set in the service system and/or in the playback device; the extracted associated audio resources are used for the playback task of the playback device.
  • FIG. 6 is a schematic control diagram of an intelligent portable device according to an embodiment of the present application.
  • the camera group can collect frame image information containing the identification features and frame image information containing the target; after the service system receives the interactive service request from the intelligent portable device, the identification and positioning unit determines the identification method and the intelligent portable device displays the identification features; the identification and positioning unit then identifies the characteristic information of the target, determines the initial coordinate position of the served target, and sends it to the intelligent portable device.
  • the service system can use the relay tracking unit to generate real-time target position data; the media resource unit extracts the advertising media associated with the target position data, and the advertising media is displayed through the smart portable device.
  • specifically, real-time sampling information can be obtained, namely frame image information collected by multiple cameras; the initial position data of the target is determined from the real-time frame image information; a relay tracking task thread for the target is started, which relay-tracks the target among the multiple cameras and generates the target's real-time position data; the advertising media associated with the real-time position data is extracted and sent to the smart portable device through the wireless network for display. In this way, advertisements can be pushed.
  • sending the specific information to the target includes: determining that the target activates the automatic driving mode or the controlled driving mode; and sending the specific information to the target. When the target activates the automatic driving mode, the target generates a driving instruction based on at least one of the following: direction data in the specific information, risk prediction data in the specific information, navigation route data in the specific information, sensing data, and status data (the data sensed by the target itself), and operates based on the driving instruction; when the target starts the controlled driving mode, the specific information carries a driving instruction, and the above-mentioned target operates based on the driving instruction.
  • FIG. 7 is a schematic diagram of the control of an autonomous vehicle according to an embodiment of the present application.
  • the camera group can collect frame image information containing the recognition features as well as the frame image information collected by the cameras in the target's near-field area; the identification and positioning unit is triggered to confirm the identification method, and the autonomous vehicle is then triggered to display the identification features; the identification and positioning unit identifies the characteristic information of the target, determines the initial coordinate position of the served target, and triggers the autonomous vehicle to start the automatic driving mode; the risk prediction unit then starts real-time analysis of the target's near-field cameras to generate target risk prediction data, and the planning control unit aggregates the risk prediction data to generate driving instructions, which the vehicle executes.
  • real-time sampling information is acquired, namely frame image information collected by a plurality of cameras; the initial position data of the autonomous vehicle is determined from the real-time frame image information; a relay tracking task thread for the autonomous vehicle is started, which relay-tracks the target between multiple cameras; the real-time sampling information of the autonomous vehicle's near-field cameras is analyzed, whether there is an accident risk is judged according to predetermined rules, and target risk prediction data is generated and sent to the autonomous vehicle; the risk prediction data is one of the inputs the autonomous vehicle aggregates to generate the driving instruction; the autonomous vehicle executes the driving instruction, and in this way the autonomous vehicle can be controlled.
  • FIG. 8 is a schematic diagram of the control of a controlled traveling vehicle according to an embodiment of the present application.
  • the controlled traveling vehicle sends a controlled-service request to the service system; the identification and positioning unit confirms the identification method and triggers the controlled traveling vehicle to display the identification features; the identification and positioning unit of the service system identifies the target's feature information, determines the initial coordinate position of the served target, and triggers the controlled traveling vehicle to start the controlled mode; the relay tracking unit starts the relay tracking thread and generates real-time target position data; and the planning control unit aggregates the real-time target position data, generates driving instructions, and triggers the controlled traveling vehicle to execute them.
  • real-time sampling information is acquired, namely frame image information collected by a plurality of cameras; the initial position data of the controlled traveling vehicle is determined from the real-time frame image information; a relay tracking task thread for the controlled traveling vehicle is started, which relay-tracks the target among the multiple cameras and generates the target's real-time position data; at least the real-time position data is aggregated to generate driving instructions, which are sent to the controlled traveling vehicle; the controlled traveling vehicle executes the driving instructions, and in this way the controlled object can be controlled.
  • Figure 9(a) is a schematic diagram of a museum exhibition hall according to an embodiment of the present application.
  • the museum exhibition hall provides indoor navigation services and guided-tour services for tourists;
  • a group of cameras for relay tracking is arranged in the public area of the exhibition hall, and the service system can relay-track the tourists in the hall according to the camera group's sampling information and locate them in real time.
  • the service system sends real-time location data to the tourist's smartphone client through the wireless network, and the smartphone's navigation client provides indoor navigation for the tourist according to the received real-time location data;
  • based on the real-time location data, the service system also sends to the tourist's smartphone client, in real time over the wireless network, the audio guide information corresponding to the tourist's location or audio advertisements for stores, together with hyperlinks to detailed introductions of the exhibits at that location. Tourists can listen to the navigation voice and the guide voice of the smartphone client by wearing headphones.
  • the tourist guide service system is configured with an identification and positioning unit, a relay tracking unit, a direction feature unit and a media resource unit.
  • the identification and positioning unit is configured to identify and confirm the tourist's initial position in cooperation with the tourist's smartphone client; it can identify the tourist's facial information, smartphone electronic-ticket information, or smartphone stroboscopic information, and determine the tourist's initial location data by locating that characteristic information.
  • this embodiment uses the stroboscopic feature as the identification method to determine the initial position of the tourist. After the service system receives the interactive service request sent by the smartphone client, the identification and positioning unit sends a stroboscopic feature code to the smartphone client, or receives a stroboscopic feature code preset on the smartphone client; the smartphone displays the feature code information through the flickering of its screen or fill light. After receiving the stroboscopic video information containing the feature code collected by the camera group, the identification and positioning unit recognizes the stroboscopic feature information in the video; if the identified stroboscopic feature information matches the sent stroboscopic feature code, it is determined that the tourist carrying the smartphone is the target tourist, and the initial coordinate position of the target tourist is determined.
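  • As an illustrative aid (not part of the original disclosure), the following minimal Python sketch shows how a service system might match an observed blink sequence against the stroboscopic feature code it sent; the brightness threshold, bit encoding, and repeat count are all assumptions:

```python
# Illustrative sketch only: match a detected blink sequence against the
# stroboscopic feature code sent to a client. Threshold and encoding are
# assumptions, not part of the original disclosure.
from typing import List

def extract_blink_sequence(brightness: List[float], threshold: float = 0.5) -> str:
    """Convert per-frame screen/fill-light brightness into a bit string."""
    return "".join("1" if b >= threshold else "0" for b in brightness)

def matches_feature_code(blinks: str, code: str, min_repeats: int = 2) -> bool:
    """The client flashes the code cyclically, so require the code to appear
    at least `min_repeats` times in the observed sequence."""
    return blinks.count(code) >= min_repeats

# Example: the service system sent the code "1011" to the smartphone client.
observed = extract_blink_sequence([0.9, 0.1, 0.8, 0.9, 0.95, 0.0, 0.9, 0.85])
print(matches_feature_code(observed, "1011"))  # True -> this is the target tourist
```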
  • the relay tracking unit is configured to perform real-time relay tracking of tourists among multiple cameras, generating real-time location data of the tourists. After the identification and positioning unit of the service system confirms the initial position data of the tourist, the relay tracking unit starts the relay tracking thread of the tourist.
  • the direction feature unit is configured to analyze the real-time sampling information of the cameras, determine the real-time direction data of the tourists being relay-tracked, and provide accurate orientation data for the navigation service and the guide service.
  • Accurate orientation data can be the body direction data of the tourists, the head direction data of the tourists, the hand pointing data of the tourists, the travel direction data of the tourists, etc.
  • the media resource unit is configured to preset media resources such as exhibit introduction audio, exhibit detailed introduction text, and advertisement videos of some stores.
  • the media resources are associated with fixed location data. When the coordinate location data of a tourist matches the coordinate location data of an exhibit or store within a predetermined threshold, the media resource unit extracts the media resource and pushes it to the smartphone client; in this way, exhibit introduction audio or advertising video can be pushed accurately to service objects, as sketched below.
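  • A minimal sketch of this proximity-triggered push, under assumed coordinates, resource names, and a 2 m threshold (none of which are specified in the original):

```python
# Sketch of proximity-based media push; the exhibit table, coordinates,
# and the 2 m threshold are illustrative assumptions.
import math

MEDIA = {
    "exhibit_1": {"pos": (12.0, 3.5), "resource": "exhibit_1_intro.mp3"},
    "store_1":   {"pos": (40.0, 8.0), "resource": "store_1_ad.mp4"},
}

def resources_to_push(tourist_pos, threshold_m=2.0):
    """Return media whose fixed location matches the tourist's real-time
    coordinates within the predetermined threshold."""
    x, y = tourist_pos
    return [
        item["resource"]
        for item in MEDIA.values()
        if math.hypot(item["pos"][0] - x, item["pos"][1] - y) <= threshold_m
    ]

print(resources_to_push((11.2, 4.1)))  # ['exhibit_1_intro.mp3']
```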
  • a guide client is installed on the tourist's smartphone; the client can be an application, a client applet installed under an application platform, or a browser through which a web client is accessed.
  • FIG. 9(b) is a schematic diagram of an information processing method based on a smartphone according to an embodiment of the present application.
  • a client APP is preset in the smartphone, and the client APP includes a navigation module and a guide module; both modules can run simultaneously.
  • the client APP performs data connection with the guide service system through the wireless network. After the guide service system identifies and confirms the initial position of the tourists, the guide service is started.
  • the service system receives the interactive service request sent by the client APP through wireless network transmission and the preset stroboscopic feature code sent by the client APP.
  • the client APP cyclically displays the preset stroboscopic feature code through the flashing method of the display screen of the smartphone, or cyclically displays the preset stroboscopic feature code through the flashing method of the smartphone camera fill light. After the stroboscopic feature code starts to flash on the display screen of the smartphone or the fill light, the visitor lifts the mobile phone so that the camera in the exhibition hall can collect the flashing signal of the smartphone.
  • after the cameras in the exhibition hall camera group collect the flickering signal, the identification and positioning unit starts to identify and locate the flickering signal and determines the location data of the tourist holding the smartphone with the flickering signal. After the identification and positioning unit confirms the initial coordinate position of the tourist, the relay tracking unit starts the relay tracking thread of the tourist, generates the real-time location data of the tourist, and sends it to the smartphone client APP through the wireless network.
  • the direction feature unit in the tour service system starts the task of identifying the direction features of tourists, and generates real-time direction data of tourists according to the frame image information containing tourists collected in real time by the camera group, and sends them to the client APP through the wireless network.
  • the navigation module of the smartphone client APP aggregates the real-time position data and real-time direction data of tourists, and provides accurate navigation voice broadcasts for tourists.
  • the precise navigation voice broadcast can include precise steering, step counts, viewing direction, etc., for example: "Please go forward 10 steps and turn left 90 degrees", "Please take 5 steps forward and turn right 45 degrees", "Please turn around 180 degrees and continue for about 20 steps; the men's bathroom is on the right side", "Please enter through the third gate on the right", "Please look at this exhibit on the left", "Please look at the third exhibit from the right", "Please look back at the exhibit just introduced and then at this exhibit", and so on.
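  • For illustration only, a hedged Python sketch of how step-and-turn announcements like those above could be derived from real-time position and direction data; the step length and phrasing are assumptions:

```python
# Sketch: turn real-time position and body heading into a step/turn
# announcement. Step length and wording are illustrative assumptions.
import math

def navigation_announcement(pos, heading_deg, waypoint, step_len_m=0.7):
    """Compute the signed turn angle and step count from the tourist's
    position and heading to the next waypoint."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    turn = (bearing - heading_deg + 180) % 360 - 180  # signed turn in (-180, 180]
    steps = round(math.hypot(dx, dy) / step_len_m)
    side = "left" if turn > 0 else "right"
    return f"Please turn {side} {abs(round(turn))} degrees and go forward {steps} steps."

print(navigation_announcement((0, 0), 0.0, (5, 5)))
# Please turn left 45 degrees and go forward 10 steps.
```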
  • the navigation module of the smartphone client APP receives and plays the first background music audio pushed by the media resource unit of the guide service system after the visitor enters the first exhibition hall; after the visitor enters the second exhibition hall, it receives and plays the second background music audio pushed by the media resource unit.
  • the background music audio is preset in the media resource unit, and the background music is associated with the location data of each exhibition hall.
  • the media resource unit extracts the background music audio resources associated with the location data in response to the real-time location data of the tourists, and pushes them to the smart phone client APP.
  • after the visitor enters the booth area of the first exhibit, the client APP receives the first exhibit introduction audio, and the navigation module plays it; after the visitor enters the booth area of the second exhibit, the client APP receives the second exhibit's detailed introduction graphic information and hyperlink information, and displays them to the visitor through the smartphone screen.
  • Exhibit introduction audio, detailed introduction graphic information and hyperlink information are preset in the media resource unit.
  • the media resource unit responds to the real-time location data of tourists, extracts the associated media resources, and pushes them to the smartphone navigation APP.
  • after the visitor enters the area of the first store, the navigation module of the smartphone client APP receives the video advertisement of the first store's hot-selling products pushed by the media resource unit and plays it through the smartphone screen; after entering the area of the second store, the visitor receives the video advertisement of the second store's promotion pushed by the media resource unit, which is likewise played through the smartphone screen.
  • the video advertisement is preset in the media resource unit, and the media resource unit extracts the corresponding advertisement video and pushes it to the smart phone client APP in response to the real-time location data of the tourists.
  • the navigation or guide method in the embodiments of the present application can also be applied to public places such as shopping malls, stations, hospitals, and scenic spots, to provide users with accurate navigation, shopping guidance, tour guidance, and escort services.
  • Fig. 9(c) is a schematic diagram of an information processing method based on a navigation device according to an embodiment of the present application.
  • the museum pavilion also provides a navigation device for some tourists to use, and the navigation device can only provide a simple audio guide service.
  • the navigation device is preset with the introduction audio information of the exhibits in the exhibition hall, and the navigation device extracts the related introduction audio information in response to the received real-time location data, and plays it to the tourists.
  • the real-time location data is the real-time location coordinate data of the tourist, generated by the relay tracking unit in the local service system through analysis of the frame images collected by the camera group, and sent to the navigation device carried by the visitor; here, the exhibit introduction audio and the location data associated with the audio are stored in the navigation device.
  • Figure 10(a) is a schematic diagram of a scene of an autonomous driving vehicle according to an embodiment of the present application.
  • urban roads are densely covered with cameras; through the real-time frame image information collected by the road cameras, the road condition data service company tracks vehicles, pedestrians, animals, and abnormal objects on the road throughout the whole process, generates in real time the position data, driving direction data, and risk prediction data of the corresponding vehicles on the road, and pushes the data to the corresponding vehicles.
  • An autonomous vehicle is usually equipped with a variety of sensors to sense road conditions.
  • the planning control unit of the autonomous vehicle can generate driving instructions according to the road condition data obtained by the sensors to drive the car to drive automatically.
  • the self-driving vehicle can automatically apply for the real-time road condition data service, and receive the vehicle location data, driving direction data and risk prediction data pushed by the road condition data service company in real time through the wireless network.
  • the self-driving vehicle thereby gains a God's-eye view: it can receive in real time road data that its own sensors cannot perceive, which solves the problem of blind spots for self-driving vehicles.
  • Complete real-time road condition data can avoid the occurrence of traffic accidents, and can also maximize the safe driving speed and improve travel efficiency.
  • Figure 10(b) is a schematic diagram of the control of the autonomous vehicle according to the embodiment of the present application.
  • the autonomous vehicle is connected to the cloud server of the road condition data service company through the mobile wireless network, and the autonomous vehicle requests the real-time road condition data service.
  • the identification and positioning unit sends the stroboscopic feature code.
  • after the autonomous vehicle receives the stroboscopic feature code, it displays the feature code information by flashing the headlights of the car.
  • after the road camera group collects the flashing signal sent by the car, the identification and positioning unit starts to identify and locate the flashing signal, and completes, with the autonomous driving vehicle, the confirmation of the serviced vehicle and the confirmation of the vehicle's initial coordinate position.
  • after the identification and positioning unit confirms the initial coordinate position of the vehicle, the relay tracking unit in the service system of the road condition data service company starts the relay tracking thread of the vehicle, generates the position data of the vehicle in real time by analyzing the collected frame image information containing the target vehicle, and sends it to the self-driving vehicle through the wireless network.
  • the direction feature unit starts the task of identifying the direction features of the vehicle, and generates the driving direction data of the vehicle in real time by analyzing the collected frame image information containing the target vehicle, and sends it to the autonomous vehicle through the wireless network.
  • the risk prediction unit generates driving risk prediction data for the target vehicle by analyzing the live feeds of the cameras in the near-field area of the target vehicle's road, and sends the data to the autonomous driving vehicle through the wireless network.
  • the navigation path unit generates navigation path data through real-time analysis of the cameras in the far-field area and along the route of the target vehicle, and sends it to the autonomous vehicle through the wireless network.
  • after receiving the real-time location data, real-time direction data, real-time risk prediction data, and reference navigation path data pushed by the service system of the road condition data service company, the planning control unit of the autonomous driving vehicle aggregates these data with the road state data acquired by the vehicle's own sensors, generates driving instructions, and drives the vehicle to drive autonomously (a simplified sketch follows).
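  • The following Python sketch illustrates, under assumed field names and a deliberately simple slow-down-on-risk policy, how pushed road-condition data and onboard sensor data might be aggregated into a driving instruction; it is a sketch, not the patented implementation:

```python
# Illustrative aggregation of pushed road-condition data with onboard
# sensor data; field names and the policy are assumptions.
from dataclasses import dataclass

@dataclass
class PushedData:
    position: tuple       # real-time location from the relay tracking unit
    heading_deg: float    # real-time direction data
    risk_level: int       # 0 = clear, higher = more risk
    suggested_path: list  # reference navigation path

@dataclass
class SensorData:
    obstacle_ahead_m: float
    speed_mps: float

def plan(pushed: PushedData, sensed: SensorData):
    """Aggregate both sources; the more conservative view wins."""
    target_speed = 15.0
    if pushed.risk_level >= 2 or sensed.obstacle_ahead_m < 20.0:
        target_speed = min(target_speed, 5.0)
    return {"target_speed_mps": target_speed, "follow_path": pushed.suggested_path}

cmd = plan(PushedData((100.0, 50.0), 90.0, 2, [(110.0, 50.0)]),
           SensorData(obstacle_ahead_m=80.0, speed_mps=12.0))
print(cmd)  # {'target_speed_mps': 5.0, 'follow_path': [(110.0, 50.0)]}
```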
  • Figure 10(c) is a schematic diagram of the control of a controlled driving vehicle according to an embodiment of the present application.
  • the road condition data service company can also provide a driving instruction service: the planning control unit in the service system aggregates the vehicle's position data, driving direction data, risk prediction data, and navigation path data, generates driving instructions that can directly drive the controlled driving vehicle, and pushes the driving instructions to the corresponding vehicle through the wireless network.
  • the controlled driving car itself does not need a planning control unit with high computing performance, or does not need to activate the planning control unit of the car itself, and only needs to receive driving instructions to achieve automatic driving.
  • a car with automatic parking and adaptive cruise functions can thus realize automatic driving without adding a planning control unit with high computing performance; only minor upgrades are needed, which reduces the difficulty of application.
  • after the controlled driving car enters the service area of the road condition data service company, it connects with the company's cloud server through the mobile wireless network and requests the controlled service.
  • after the service system of the road condition data service company receives the service request of the controlled driving vehicle, the identification and positioning unit sends the stroboscopic feature code; after receiving the stroboscopic feature code, the controlled driving car displays the feature code information by flashing its headlights.
  • after the road camera group collects the flashing signal sent by the car, the identification and positioning unit starts to identify and locate the flashing signal, and completes, with the controlled driving car, the vehicle confirmation and the confirmation of the vehicle's initial coordinate position.
  • after the identification and positioning unit confirms the initial coordinate position of the vehicle, the relay tracking unit in the service system of the road condition data service company starts the relay tracking thread of the vehicle and generates the position data of the vehicle in real time by analyzing the collected frame image information containing the target vehicle.
  • the direction feature unit generates the driving direction data of the vehicle in real time by analyzing the collected frame image information containing the target vehicle.
  • the risk prediction unit generates driving risk prediction data for the target vehicle by analyzing the live feeds of the cameras in the near-field area of the target vehicle's road.
  • the navigation path unit generates driving path reference information by analyzing the live feeds of the cameras in the far-field area and along the route of the target vehicle.
  • the controlled driving car has a variety of built-in sensors, which can sense and obtain sensing data including radar signals, image information, altitude information, acceleration information, and GPS positioning information in real time.
  • the controlled driving car can send sensing data and car status data to the service system of the road condition data service company through the wireless network in real time.
  • the vehicle state data may be vehicle parameters, energy load values, running load values, wireless signal strength values, fault state values, and the like.
  • the car parameters can be the car brand, model and configuration, or the drive interface parameters of steering and accelerator control.
  • the planning control unit in the service system can generate different driving commands according to different configurations of different cars.
  • the service system of the road condition data service company also includes a planning control unit, which is configured to aggregate the real-time location data, driving direction data, real-time risk prediction data, navigation route data, and the sensing data and vehicle status data sent by the car, generate a driving instruction, and send it to the controlled driving vehicle through the wireless network; the controlled driving vehicle executes the driving instruction and drives accordingly.
  • the controlled driving vehicle also has a built-in emergency automatic control unit, which is configured to extract preset emergency driving program instructions and temporarily drive the vehicle when the wireless network connection is interrupted and no driving instructions can be received.
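  • A minimal watchdog sketch of such an emergency fallback, assuming a timeout threshold and a pull-over action that the original does not specify:

```python
# Sketch of the emergency automatic control unit: if no driving instruction
# arrives within a timeout, fall back to a preset emergency program.
# Timeout value and the stop action are assumptions.
import time

class EmergencyControlUnit:
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_instruction_at = time.monotonic()

    def on_instruction(self, instruction):
        self.last_instruction_at = time.monotonic()
        return instruction  # normal controlled driving

    def tick(self):
        """Called periodically; returns the preset emergency instruction
        when the wireless link appears to be down."""
        if time.monotonic() - self.last_instruction_at > self.timeout_s:
            return {"action": "pull_over_and_stop"}  # preset emergency program
        return None

unit = EmergencyControlUnit(timeout_s=0.1)
time.sleep(0.2)     # simulate a lost connection
print(unit.tick())  # {'action': 'pull_over_and_stop'}
```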
  • Figure 11(a) is a schematic diagram of a public activity area of a living community according to an embodiment of the present application. As shown in the figure, a group of cameras that can be used for relay tracking is arranged in the public activity area of the living community; according to the relay tracking data, the community property company can provide services such as drone patrols, robotic cleaning, robotic luggage handling, and delivery.
  • Figure 11(b) is a schematic diagram of the control of the automatic aircraft according to an embodiment of the present application.
  • when the property company needs to perform a patrol mission, it activates the automatic aircraft; the aircraft is connected through a wireless network to the service system on the property company's local server, and the planning control unit of the automatic aircraft requests the real-time environmental data service.
  • the identification and positioning unit sends the stroboscopic feature code.
  • after the automatic aircraft receives the stroboscopic feature code, it displays the feature code information through the blinking of the aircraft's signal lights.
  • the identification and positioning unit starts to identify and locate the flickering signal, and completes the confirmation of the initial coordinate position with the automatic aircraft.
  • after the identification and positioning unit confirms the initial coordinate position of the aircraft, the relay tracking unit in the property service system starts the relay tracking thread of the aircraft, generates the position data of the aircraft in real time by analyzing the collected frame image information containing the target aircraft, and sends it to the automatic aircraft through the wireless network.
  • the direction feature unit generates the flight direction data of the aircraft in real time by analyzing the collected frame image information containing the target aircraft, and sends it to the automatic aircraft through the wireless network.
  • the risk prediction unit generates flight risk prediction data for the target aircraft by analyzing the live feeds of the cameras in the near-field area of the target aircraft's flight route, and sends it to the automatic aircraft through the wireless network.
  • the navigation path unit generates navigation path data by performing live analysis on the far-field area of the target aircraft's flight path and cameras in the area along the flight path, and sends it to the automatic aircraft through the wireless network.
  • after receiving the real-time position data, real-time direction data, real-time risk prediction data, and navigation path data pushed by the property service system, the planning control unit of the automatic aircraft fuses these data with the environmental state data obtained by its own sensors and generates flight instructions to drive the aircraft automatically; in addition, by receiving the complete live environment data provided by the cameras in the community, the automatic aircraft can effectively avoid obstacles, reasonably plan flight routes and speeds, and quickly and effectively complete predetermined tasks.
  • the local service system of the property company can also preset the planning control unit to directly generate the flight instructions of the aircraft to drive the aircraft, thereby reducing the integration of intelligent hardware of the aircraft and saving the purchase cost.
  • Figure 11(c) is a schematic diagram of the control of a controlled robot according to an embodiment of the present application.
  • the controlled robot is in the community and is connected to the local server of the property through a wireless network, and the controlled robot requests a controlled service.
  • the property service system receives the service request from the controlled robot and receives the number information preset by the controlled robot.
  • the controlled robot displays the serial number and graphic code information printed on the fuselage in different directions by twisting the fuselage.
  • the camera group collects frame image information containing the robot number; the identification and positioning unit identifies the body number or graphic code information and completes the confirmation of the initial coordinate position with the controlled robot.
  • after the identification and positioning unit confirms the initial coordinate position of the controlled robot, the relay tracking unit in the service system starts the relay tracking thread of the controlled robot and generates the position data of the target robot in real time by analyzing the collected frame image information containing the target robot.
  • the direction feature unit generates real-time direction data of the controlled robot by analyzing the collected frame image information containing the target controlled robot.
  • the risk prediction unit generates driving risk prediction data for the controlled robot by analyzing the live feeds of the cameras in the robot's near-field area.
  • the navigation path unit generates driving path reference data by analyzing the live feeds of the cameras in the far-field area and along the target robot's route.
  • the property company service system further includes a planning control unit, which is configured to aggregate real-time position data, real-time direction data, risk prediction data, navigation path data, and sensing data and robot status data sent by the controlled robot, Generate travel instructions and work instructions, and send them to the controlled robot through the wireless network.
  • the controlled robot executes the travel instructions and work instructions, drives the robot to travel and completes work tasks.
  • a controlled robot may have no planning control unit and no operating system; it can be a "brainless" executive machine. Just like a thin client in a computer network, the terminal does not need to be loaded with full system hardware and software and only needs a basic functional configuration to operate (see the sketch below). Compared with a conventional robot, the brainless controlled robot of the present application is simpler to design and manufacture, easier to manage, maintain, and upgrade, and can be applied to various scenarios such as welcome and shopping guidance, transportation and delivery, sanitation, security inspection, patrol, assembly production, and harvesting and picking.
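  • As a rough illustration of the thin-client idea, the sketch below models a "brainless" robot loop that only executes pushed travel and work instructions; the queue stands in for the wireless link, and all instruction names are assumptions:

```python
# Sketch of a thin-client controlled robot: it holds no planning logic and
# only executes instructions pushed by the service system.
from queue import Queue, Empty

def controlled_robot_loop(link: Queue, max_idle_polls: int = 3):
    idle = 0
    while idle < max_idle_polls:
        try:
            instruction = link.get(timeout=0.1)
        except Empty:
            idle += 1
            continue
        idle = 0
        kind, args = instruction
        if kind == "travel":
            print(f"driving to {args}")      # drive actuators toward waypoint
        elif kind == "work":
            print(f"executing task {args}")  # run the pushed work task

link = Queue()
link.put(("travel", (3.0, 4.0)))
link.put(("work", "sweep_zone_7"))
controlled_robot_loop(link)
```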
  • FIG. 12( a ) is a schematic diagram of an operation scenario of a controlled traveling manipulator according to an embodiment of the present application.
  • a construction company uses a controlled traveling manipulator to perform construction operations.
  • the construction company installs, at the construction site, chain image acquisition device columns with a high-density arrangement of cameras.
  • the chain image acquisition device is characterized in that multiple cameras are distributed in a chain on the same data transmission bus.
  • initialization modeling is performed in advance to construct the mapping relationship of each camera in three-dimensional space and determine the coordinate position and viewing angle of each camera on the construction site.
  • the service system can calculate the exact coordinate position of the object in the viewing area through the frame image information obtained by multiple cameras, so as to realize the accurate positioning and relay tracking of the target object.
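  • One way to picture this multi-camera positioning (a simplified 2D assumption, not the disclosed algorithm) is intersecting bearing rays from two calibrated cameras:

```python
# Illustrative 2D reconstruction: once each camera's position and viewing
# angle are known from the initial modeling, a target seen by two cameras
# can be located by intersecting bearing lines. Geometry is an assumption.
import math

def locate(cam_a, bearing_a_deg, cam_b, bearing_b_deg):
    """Intersect two bearing rays (camera position + absolute bearing)."""
    ax, ay = cam_a
    bx, by = cam_b
    dax, day = math.cos(math.radians(bearing_a_deg)), math.sin(math.radians(bearing_a_deg))
    dbx, dby = math.cos(math.radians(bearing_b_deg)), math.sin(math.radians(bearing_b_deg))
    # Solve ax + t*dax = bx + s*dbx and ay + t*day = by + s*dby for t.
    denom = dax * dby - day * dbx
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)

print(locate((0, 0), 45.0, (10, 0), 135.0))  # ~(5.0, 5.0)
```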
  • Figure 12(b) is a schematic diagram of the control of the controlled traveling manipulator according to the embodiment of the present application.
  • on the construction site, the controlled traveling manipulator is connected to the local server of the site through a wireless network, and the controlled traveling manipulator requests the controlled service.
  • after receiving the service request from the controlled traveling manipulator, the construction site service system sends a characteristic action instruction to the traveling manipulator, and the controlled traveling manipulator executes the characteristic action instruction and swings the manipulator.
  • the identification and positioning unit identifies the motion feature information of the traveling manipulator, and completes the confirmation of the initial coordinate position with the controlled traveling manipulator.
  • after the identification and positioning unit confirms the initial coordinate position of the traveling manipulator, the relay tracking unit in the service system starts the relay tracking thread of the traveling manipulator and generates its position data in real time by analyzing the collected frame image information containing the target traveling manipulator.
  • the direction feature unit generates real-time direction data of the traveling manipulator by analyzing the collected frame image information containing the target traveling manipulator.
  • the planning control unit in the service system aggregates the real-time position data and real-time direction data, generates travel instructions, and sends them to the controlled traveling manipulator through the wireless network.
  • the controlled traveling manipulator executes the travel instructions and travels accordingly.
  • the supporting claws on the chassis of the traveling manipulator are driven to drop down to support the ground, so as to ensure the stability and reliability of the working state of the manipulator.
  • the identification and positioning unit analyzes the frame image information containing the manipulator collected by the camera group and accurately verifies the manipulator's position in the coordinate system. If the coordinate position of the traveling manipulator does not match the predetermined working position, the planning control unit generates a position-correction travel instruction according to the coordinate deviation and drives the traveling manipulator to the predetermined working position. If the coordinate position matches the predetermined working position, the service system starts the preset work instruction, and the traveling manipulator executes it to complete the job task, as sketched below.
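  • A hedged sketch of this verify-and-correct loop, with an assumed 5 cm tolerance:

```python
# Sketch: compare the verified manipulator coordinates with the predetermined
# working position and emit a correction travel instruction until they match
# within tolerance. The tolerance value is an assumption.
import math

def correction_instruction(actual, target, tolerance_m=0.05):
    dx, dy = target[0] - actual[0], target[1] - actual[1]
    if math.hypot(dx, dy) <= tolerance_m:
        return None  # in position: the service system may start the work instruction
    return {"type": "correct_position", "move": (dx, dy)}

print(correction_instruction((9.93, 20.01), (10.0, 20.0)))  # small correction
print(correction_instruction((10.0, 20.0), (10.0, 20.0)))   # None -> start work
```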
  • the preset work instruction may be a work instruction unit stored at the service system end, or a work instruction unit stored at the controlled manipulator end.
  • a work instruction can be a program file written in a high-level programming language and stored in the service system, ready to be invoked and transmitted at any time; it can be a dynamic instruction generated by a neural network unit in the service system based on real-time data; or it can be static sequential instructions preset in a PLC programmable controller on the robot side.
  • the current large-scale 3D printing additive manufacturing equipment needs to be equipped with large-scale cantilevers or traveling guide rails, and the handling, installation and debugging of the equipment are cumbersome, which is not conducive to large-scale use.
  • the controlled traveling manipulator of the present application can solve the problem that construction 3D printing equipment is too large and complex, and can complete precise manufacturing tasks in large-scale work scenes through a plurality of independently working traveling manipulators; for example, it can collaboratively complete construction tasks such as material distribution, bricklaying, pouring, spraying, painting, leveling, and welding.
  • the service system can decompose the 3D model in a 3D printing file into multiple work modules, each corresponding to a standard robot automation program; the service system drives the traveling manipulator to the working position of the first work module and starts the automation program once, and after the program finishes, drives the traveling manipulator to the working position of the second work module and starts the same automation program again, as sketched below.
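  • A schematic Python sketch of that module-by-module loop, with stubbed drive and program callbacks (all names and positions are assumptions):

```python
# Sketch of decomposing a 3D-printing job into work modules, each bound to
# a working position and one standard automation program.
WORK_MODULES = [
    {"id": 1, "position": (0.0, 0.0)},
    {"id": 2, "position": (2.5, 0.0)},
    {"id": 3, "position": (5.0, 0.0)},
]

def run_job(drive_to, run_program):
    for module in WORK_MODULES:
        drive_to(module["position"])  # relay tracking guides the travel
        run_program(module["id"])     # same standard automation program each time

run_job(lambda p: print(f"travel to {p}"),
        lambda m: print(f"run automation program for module {m}"))
```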
  • FIG. 13 is a schematic diagram of the information processing apparatus according to the embodiment of the present application.
  • the information processing apparatus may include: a first determining unit 1301, an obtaining unit 1303, a second determining unit 1305, and a sending unit 1307.
  • the information processing apparatus will be described below.
  • the first determining unit 1301 is configured to determine that the target enters the predetermined area.
  • the acquisition unit 1303 is configured to acquire the real-time position information of the target generated by the relay tracking thread in response to the relay tracking thread of the target, wherein the relay tracking thread is used for relay tracking the target between at least one sampling device in the predetermined area, And generate real-time position data based on frame image information collected by at least one sampling device.
  • the second determining unit 1305 is configured to determine specific information based on the real-time location information.
  • the sending unit 1307 is configured to send the specific information to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information, and responds to the predetermined information.
  • the first determining unit 1301, the obtaining unit 1303, the second determining unit 1305, and the sending unit 1307 correspond to steps S102 to S108 in the embodiment; the examples and application scenarios implemented by the above units and the corresponding steps are the same, but are not limited to the contents disclosed in the above embodiments. It should be noted that the above units may be executed, as part of an apparatus, in a computer system such as a set of computer-executable instructions.
  • the first determining unit can be used to determine that the target enters the predetermined area; the obtaining unit is then used to obtain, in response to the relay tracking thread for the target, the real-time position information of the target generated by the relay tracking thread, where the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area and to generate real-time position data based on the frame image information collected by the at least one sampling device; the second determining unit is used to determine the specific information; and the sending unit is used to send the specific information to the terminal device or the target, where the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • in this way, the purpose of pushing, according to the real-time position data of the target, the information the target needs in the current environment to the target or to a device related to the target is achieved, which improves the flexibility and applicability of the positioning service system, and solves the technical problem that the related-art system for providing positioning services can only provide positioning services for users and has a relatively single function.
  • the first determining unit includes: a first acquisition module configured to acquire frame image data collected by the at least one sampling device; a first identification module configured to perform image recognition on the frame image data to obtain an identification result; and a first determination module configured to determine that the identification information of the target exists in the identification result and, if so, determine that the target has entered the predetermined area.
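  • For illustration, a minimal sketch of the first determination unit's entry check, with the recognizer stubbed out as an assumption:

```python
# Sketch: run recognition on collected frame images and report entry when
# the target's identification information appears. The recognizer is a stub.
def target_entered(frames, target_id, recognize):
    """`recognize(frame)` returns the set of identifiers seen in a frame."""
    return any(target_id in recognize(frame) for frame in frames)

frames = ["frame_0", "frame_1", "frame_2"]
seen = {"frame_2": {"tourist_42"}}
print(target_entered(frames, "tourist_42", lambda f: seen.get(f, set())))  # True
```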
  • the information processing apparatus further includes: an identification unit configured to identify, before it is determined that the target enters the predetermined area, identification information for identifying the target; the identification unit includes: a second identification module configured to identify biological information and/or non-biological information of the target from the collected images; and a second determination module configured to use the biological information and/or non-biological information as the identification information for identifying the target; the non-biological information includes at least one of: the outline of the target, the color of the target, text on the target, and an identification code of the target; the biological features of the target include one of: facial features and body features.
  • the information processing apparatus further includes: a third determination unit configured to determine the initial position information of the target; the third determination unit includes at least one of the following: a second acquisition module configured to obtain sampling information and generate the initial position information of the target based on the sampling information, where the sampling information is obtained from the at least one sampling device, and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; and a third determining module configured to obtain predetermined terminal information of the terminal device and determine the initial location information of the target based on the predetermined terminal information.
  • the specific information includes at least one of the following: direction data of the target at a position corresponding to the real-time location information, risk prediction data of the target in a predetermined area, media resources, navigation path data, and travel instructions , where the media resource is associated with predetermined location data.
  • the sending unit includes: a sending module, configured to send the direction data and media resources in the specific information to the terminal device, wherein the terminal device performs at least one of the following operations: generating based on the direction data Voice navigation information, and play voice navigation information; play media resources.
  • the sending unit includes: a fourth determining module configured to determine whether the target activates the automatic driving mode or the controlled driving mode; and a sending module configured to send the specific information to the target, where, when the target activates the automatic driving mode, the target generates a driving instruction based on at least one of: the direction data in the specific information, the risk prediction data in the specific information, the navigation route data in the specific information, sensing data, and state data, and operates based on the driving instruction, the sensing data and the state data being data sensed by the target itself; when the target starts the controlled driving mode, the specific information carries the driving instruction, and the target operates based on the driving instruction.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • FIG. 14 is a schematic diagram of the server according to the embodiment of the present application.
  • the server includes: an identification and positioning unit 1401, a relay tracking unit 1403, a direction feature unit 1405, a risk prediction unit 1407, a position navigation unit 1409, and a media association unit 1411.
  • the server is described below.
  • the identification and positioning unit 1401 is configured to identify and determine the initial position data of the target, and generate the initial position information of the target based on the initial position data.
  • the relay tracking unit 1403 is configured to perform relay tracking of the target between at least one sampling device in a predetermined area, and generate real-time position information of the target.
  • the direction feature unit 1405 is used to generate the direction information of the target from the real-time location information.
  • the risk prediction unit 1407 is configured to determine whether the target is at risk based on the real-time location information through a predetermined rule, and generate risk prediction data of the target in a predetermined area.
  • the location navigation unit 1409 is configured to generate navigation path data based on real-time location information.
  • the media association unit 1411 is used for retrieving the media resource corresponding to the real-time location information of the target;
  • one or more of the direction information of the target, the risk prediction data of the target in the predetermined area, the navigation path data, and the media resources are determined as the specific information and sent to the target or the terminal device; the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
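  • A toy end-to-end composition of the units' outputs into "specific information" might look as follows; every unit output here is a stubbed assumption:

```python
# Sketch: compose the server units' outputs into one "specific information"
# record. All unit outputs are stubs, not real unit implementations.
def build_specific_information(real_time_position):
    direction = {"heading_deg": 90.0}               # direction feature unit
    risk = {"risk_level": 0}                        # risk prediction unit
    path = {"waypoints": [(1.0, 0.0), (2.0, 0.0)]}  # location navigation unit
    media = {"resource": "exhibit_1_intro.mp3"}     # media association unit
    return {"position": real_time_position, **direction, **risk, **path, **media}

print(build_specific_information((0.0, 0.0)))
```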
  • in summary, the server in the embodiment of the present application can identify and determine the initial position data of the target through the identification and positioning unit and generate the initial position information of the target based on the initial position data; perform relay tracking of the target between at least one sampling device through the relay tracking unit to generate the real-time position information of the target; generate the direction information of the target from the real-time position information through the direction feature unit; determine, through the risk prediction unit and predetermined rules, whether the target is at risk based on the real-time position information and generate the risk prediction data of the target in the predetermined area; generate navigation path data based on the real-time location information through the location navigation unit; and retrieve, through the media association unit, the media resources corresponding to the real-time location information of the target; one or more of these are determined as the specific information and sent to the target or terminal device, and the terminal device or target generates predetermined information based on the specific information and responds to it.
  • FIG. 15 is a schematic diagram of the terminal device according to the embodiment of the present application, as shown in FIG. 15 .
  • the terminal equipment may include:
  • the receiving module 1501 is used for receiving specific information.
  • the above-mentioned specific information can be media resources for the items currently viewed by a tourist in the museum exhibition hall, direction data sent to a tourist visiting the exhibition hall, target navigation data sent to a tourist, or risk prediction data sent to an autonomous vehicle.
  • the processing module 1503 is configured to generate predetermined information based on the specific information.
  • the predetermined information may be navigation data generated based on the direction data.
  • the execution module 1505 is used to respond to specific information and/or predetermined information.
  • the terminal device can play media resources in response to specific information; it can also generate navigation data based on the specific information and play the navigation data to provide route navigation for tourists.
  • FIG. 16 is a schematic diagram of the information processing system according to the embodiment of the present application, as shown in FIG. 16 .
  • the information processing system may include:
  • at least one sampling device can be used to collect the frame image information of the target; the server 1603 is used to generate the real-time position information of the target based on the frame image information, determine the specific information based on the real-time position information, and send the specific information to the terminal device or the target; the terminal device or the target generates predetermined information based on the specific information and, in response to the predetermined information, realizes the purpose of pushing to the target or to target-related devices the information the target needs in the current environment according to the target's real-time location data.
  • the identification and positioning unit in the information processing system of the embodiment of the present application is configured to identify the target, determine the initial position of the target, and generate target initial position data;
  • the relay tracking unit is configured to perform real-time relay tracking of the target among multiple sampling devices and generate target real-time position data;
  • the target direction feature unit is configured to analyze the real-time sampling information to generate the real-time direction data of the relayed target;
  • the risk prediction unit is configured to analyze the real-time sampling information of the cameras in the near field of the moving target, determine according to predetermined rules whether there is an accident risk, and generate target risk prediction data;
  • the location navigation path guidance unit is configured to generate navigation path data by analyzing the real-time sampling information of the far-field cameras of the moving target and the real-time information of the cameras along the route;
  • the associated media unit is configured to preset media resources such as images, audios, graphics and texts, and the media resources are associated with fixed location data;
  • the planning control unit is configured to aggregate the real-time location data, target direction data, and/or target risk prediction data, and/or navigation path data, together with the mobile terminal's state and sensing data, to generate travel instructions or work instructions.
  • a human-computer interaction device is also provided, which includes a wireless receiving unit configured to receive target position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or media resources sent by the information processing system through a wireless network.
  • the user interaction unit is configured to receive and aggregate the target location data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or media resources, and generate interaction information to present to the user; an interactive output unit is configured to display and/or play the interaction information to the user.
  • an automatic driving device is also provided, which includes a wireless receiving unit configured to receive target position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data sent by the information processing system through a wireless network; a planning control unit configured to aggregate in real time the target location data, and/or target direction data, and/or target risk prediction data, and/or target navigation route data, and/or mobile terminal state data, and/or mobile terminal sensing data, to generate travel instructions or work instructions; and an instruction execution unit configured to execute the travel instructions or work instructions.
  • a controlled traveling device is also provided, which includes a wireless receiving unit configured to receive travel instructions and/or work instructions sent by the information processing system through a wireless network, and an instruction execution unit configured to execute the travel instructions and/or work instructions.
  • a computer-readable storage medium is also provided, which includes a stored computer program, wherein, when the computer program is run by a processor, the device where the computer storage medium is located is controlled to execute any one of the information processing methods described above.
  • a processor is also provided, where the processor is configured to run a computer program, wherein when the computer program runs, any one of the information processing methods described above is executed.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division of the units may be a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be realized through some interfaces, as indirect coupling or communication connection of units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The present application discloses an information processing method and apparatus, and an information processing system. The method comprises: determining that a target enters a predetermined area; in response to a relay tracking thread of the target, obtaining real-time position information of the target generated by the relay tracking thread, the relay tracking thread being used to relay-track the target among at least one sampling device in the predetermined area and to generate real-time position data on the basis of frame image information collected by the at least one sampling device; determining specific information on the basis of the real-time position information; and sending the specific information to a terminal device or to the target, the terminal device or the target generating predetermined information on the basis of the specific information and responding to the predetermined information. The present application solves the technical problem in the related art whereby a system providing a positioning service for a service recipient can only provide the positioning service for a user and the function of the system is relatively simple.
PCT/CN2021/138526 2020-12-29 2021-12-15 Procédé et appareil de traitement d'informations et système de traitement d'informations WO2022143181A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011602856.8 2020-12-29
CN202011602856.8A CN114693727A (zh) 信息处理方法及装置、信息处理系统

Publications (1)

Publication Number Publication Date
WO2022143181A1 true WO2022143181A1 (fr) 2022-07-07

Family

ID=82133140

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138526 WO2022143181A1 (fr) 2020-12-29 2021-12-15 Procédé et appareil de traitement d'informations et système de traitement d'informations

Country Status (2)

Country Link
CN (1) CN114693727A (fr)
WO (1) WO2022143181A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115407803A (zh) * 2022-10-31 2022-11-29 北京闪马智建科技有限公司 一种基于无人机的目标监控方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749170A (zh) * 2017-12-07 2018-03-02 东莞职业技术学院 一种车辆跟踪装置及方法
CN109724610A (zh) * 2018-12-29 2019-05-07 河北德冠隆电子科技有限公司 一种全信息实景导航的方法及装置
CN111351492A (zh) * 2018-12-20 2020-06-30 赫尔环球有限公司 用于自动驾驶车辆导航的方法和***
CN111836009A (zh) * 2020-06-18 2020-10-27 浙江大华技术股份有限公司 多个相机进行目标跟踪的方法、电子设备及存储介质


Also Published As

Publication number Publication date
CN114693727A (zh) 2022-07-01

Similar Documents

Publication Publication Date Title
US11874663B2 (en) Systems and methods for computer-assisted shuttles, buses, robo-taxis, ride-sharing and on-demand vehicles with situational awareness
US11061406B2 (en) Object action classification for autonomous vehicles
KR102558774B1 (ko) 자율 주행 차량을 위한 교통 신호등 검출 및 차선 상태 인식
CN112823372B (zh) 排队进入上车和下车位置
US11900815B2 (en) Augmented reality wayfinding in rideshare applications
WO2017079341A2 (fr) Extraction automatisée d'informations sémantiques pour améliorer des modifications de cartographie différentielle pour véhicules robotisés
US20210209543A1 (en) Directing secondary delivery vehicles using primary delivery vehicles
US11964673B2 (en) Systems and methods for autonomous vehicle controls
US20200327811A1 (en) Devices for autonomous vehicle user positioning and support
US11904902B2 (en) Identifying a customer of an autonomous vehicle
WO2022057737A1 (fr) Procédé de commande de stationnement et dispositif associé
US20240071100A1 (en) Pipeline Architecture for Road Sign Detection and Evaluation
KR20210041106A (ko) 자율 차량을 위한 주변 조명 조건
CN112810603B (zh) 定位方法和相关产品
WO2022143181A1 (fr) Procédé et appareil de traitement d'informations et système de traitement d'informations
WO2022108744A1 (fr) Système de rétroaction embarqué pour véhicules autonomes
CN114326775B (zh) 基于物联网的无人机***
KR102645700B1 (ko) 디지털 트윈 기반 충전소 관제 시스템
US20240028031A1 (en) Autonomous vehicle fleet service and system
WO2023146693A1 (fr) Atténuation de fausse piste dans des systèmes de détection d'objet
CN116844025A (zh) 数据处理方法及相关设备
WO2024102431A1 (fr) Systèmes et procédés de détection de véhicule d'urgence
JP2024051891A (ja) エリア監視システムおよびエリア監視方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913922

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.11.2023)