WO2022143181A1 - Information processing method and apparatus, and information processing *** - Google Patents


Info

Publication number
WO2022143181A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
information
data
real
predetermined
Prior art date
Application number
PCT/CN2021/138526
Other languages
English (en)
French (fr)
Inventor
聂兰龙
Original Assignee
青岛千眼飞凤信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 青岛千眼飞凤信息技术有限公司
Publication of WO2022143181A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior

Definitions

  • the present application relates to the technical field of information processing, and in particular, to an information processing method and device, and an information processing system.
  • In the related art, the technologies that provide positioning services for consumers mainly include satellite positioning and wireless base station positioning; their positioning accuracy is about 10 m, and they can only provide positioning services for service recipients and cannot provide other, richer information.
  • the embodiments of the present application provide an information processing method and apparatus, and an information processing system, to at least solve the technical problem in the related art that a system providing positioning services for service recipients can only provide positioning services for users and has a relatively limited function.
  • an information processing method is provided, comprising: determining that a target enters a predetermined area; in response to a relay tracking thread for the target, acquiring the real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area and to generate the real-time position data based on the frame image information collected by the at least one sampling device; determining specific information based on the real-time position information; and sending the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • determining that the target has entered the predetermined area includes: acquiring the frame image data collected by the at least one sampling device; performing image recognition on the frame image data to obtain a recognition result; and, when it is determined that the identification information of the target exists in the recognition result, determining that the target enters the predetermined area.
  • the information processing method further includes: identifying identification information for identifying the target from the target; wherein identifying the identification information for identifying the target from the target includes: identifying biological information and/or non-biological information of the target from the collected images, and using the biological information and/or non-biological information as the identification information for identifying the target.
  • the non-biological information includes at least one of the following: the outline of the target, the color of the target, the text on the target, the identification code of the target; the biological information includes at least one of the following: facial features, body features.
  • the information processing method further includes: determining initial position information of the target; wherein determining the initial position information of the target includes at least one of the following: acquiring sampling information and generating the initial position information of the target based on the sampling information, wherein the sampling information is obtained from the at least one sampling device, and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; acquiring predetermined terminal information of the terminal device and determining the initial position information of the target based on the predetermined terminal information.
  • the specific information includes at least one of the following: direction data of the target at the position corresponding to the real-time location information, risk prediction data of the target in the predetermined area, media resources, navigation path data, and driving instructions, wherein the media resource is associated with predetermined location data.
  • sending the specific information to the terminal device includes: sending the direction data and media resources in the specific information to the terminal device, wherein the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing the voice navigation information; playing the media resource.
  • sending the specific information to the target includes: determining that the target activates an automatic driving mode or a controlled driving mode; and sending the specific information to the target, wherein when the target activates the automatic driving mode, the target generates a driving instruction based on at least one of the following: direction data in the specific information, risk prediction data in the specific information, navigation path data in the specific information, sensing data, and state data, and runs based on the driving instruction, where the sensing data and the state data are data sensed by the target; when the target starts the controlled driving mode, the specific information carries a driving instruction, and the target operates based on the driving instruction.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • an information processing apparatus is provided, including: a first determination unit, configured to determine that a target enters a predetermined area; an acquisition unit, configured to acquire, in response to a relay tracking thread for the target, the real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area, and the real-time position data is generated based on the frame image information collected by the at least one sampling device; a second determination unit, configured to determine specific information based on the real-time position information; and a sending unit, configured to send the specific information to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • the first determination unit includes: a first acquisition module, configured to acquire the frame image data collected by the at least one sampling device; a first identification module, configured to perform image recognition on the frame image data to obtain a recognition result; and a first determination module, configured to determine that the target enters the predetermined area when the identification information of the target exists in the recognition result.
  • the information processing device further includes: an identification unit, configured to identify identification information for identifying the target from the target before it is determined that the target enters the predetermined area; wherein the identification unit includes: a second identification module, configured to identify the biological information and/or non-biological information of the target from the collected images; and a second determination module, configured to use the biological information and/or non-biological information as the identification information for identifying the target; wherein the non-biological information includes at least one of the following: the outline of the target, the color of the target, the text on the target, and the identification code of the target; the biological information includes at least one of the following: facial features, body features.
  • the information processing apparatus further includes: a third determination unit, configured to determine initial position information of the target; wherein the third determination unit includes at least one of the following: a second acquisition module, configured to acquire sampling information and generate the initial position information of the target based on the sampling information, wherein the sampling information is obtained from the at least one sampling device, and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; a third determination module, configured to acquire predetermined terminal information of the terminal device and determine the initial position information of the target based on the predetermined terminal information.
  • the specific information includes at least one of the following: direction data of the target at the position corresponding to the real-time location information, risk prediction data of the target in the predetermined area, media resources, navigation path data, and driving instructions, wherein the media resource is associated with predetermined location data.
  • the sending unit includes: a sending module, configured to send the direction data and media resources in the specific information to a terminal device, wherein the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing the voice navigation information; playing the media resource.
  • the sending unit includes: a fourth determining module, configured to determine that the target activates an automatic driving mode or a controlled driving mode; and a sending module, configured to send the specific information to the target, wherein when the target activates the automatic driving mode, the target generates a driving instruction based on at least one of the following: direction data in the specific information, risk prediction data in the specific information, navigation path data in the specific information, sensing data, and state data, and runs based on the driving instruction; the sensing data and the state data are data sensed by the target; when the target starts the controlled driving mode, the specific information carries a driving instruction, and the target operates based on the driving instruction.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • a server is provided, which is applied to the information processing method described in any one of the above, including: an identification and positioning unit, configured to identify and determine the initial position data of a target and generate the initial position information of the target based on the initial position data; a relay tracking unit, configured to relay-track the target between at least one sampling device in a predetermined area and generate the real-time position information of the target; a direction feature unit, configured to generate the direction information of the target from the real-time position information; a risk prediction unit, configured to judge, through predetermined rules, whether the target is at risk based on the real-time position information, and to generate the risk prediction data of the target in the predetermined area; a location navigation unit, configured to generate navigation path data based on the real-time location information; and a media association unit, configured to retrieve the media resources corresponding to the real-time location information of the target; wherein one or more of the direction information of the target, the risk prediction data of the target in the predetermined area, the navigation path data, and the media resources constitute the specific information.
  • a terminal device is provided, which is applied to the information processing method described in any one of the above, including: a receiving module, configured to receive specific information; a processing unit, configured to generate predetermined information based on the specific information; and an execution unit, configured to respond to the specific information and/or the predetermined information.
  • an information processing system which is applied to the information processing method described in any one of the above, including: at least one sampling device for collecting frame image information of a target;
  • the server is configured to generate the real-time position information of the target based on the frame image information, determine specific information based on the real-time position information, and send the specific information to the terminal device or the target; the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • a computer-readable storage medium is provided, which includes a stored computer program, wherein when the computer program is run by a processor, the device where the storage medium is located is controlled to execute any one of the information processing methods described above.
  • a processor is also provided, and the processor is configured to run a computer program, wherein, when the computer program runs, any one of the information processing methods described above is executed.
  • In the embodiments of the present application, it is determined that the target enters the predetermined area; in response to the relay tracking thread for the target, the real-time location information of the target generated by the relay tracking thread is obtained, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area, and the real-time position data is generated based on the frame image information collected by the at least one sampling device; specific information is determined based on the real-time location information and sent to the terminal device or the target; the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information. The information processing method provided by the embodiments of the present application thereby achieves the purpose of pushing the information required by the target in the current environment to the target, or to a device related to the target, according to the real-time location data of the target.
  • FIG. 1 is a flowchart of an information processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an information processing method according to an embodiment of the present application.
  • FIG. 3(a) is a schematic control diagram of a controlled traveling device according to an embodiment of the present application.
  • FIG. 3(b) is a schematic diagram of the control of an automatic driving device according to an embodiment of the present application.
  • FIG. 4 is a schematic control diagram of a navigation device according to an embodiment of the present application.
  • FIG. 5 is a schematic control diagram of a playback device according to an embodiment of the present application.
  • FIG. 6 is a schematic control diagram of an intelligent portable device according to an embodiment of the present application.
  • FIG. 7 is a schematic control diagram of an automatic driving vehicle according to an embodiment of the present application.
  • FIG. 8 is a control schematic diagram of a controlled driving vehicle according to an embodiment of the present application.
  • FIG. 9(a) is a schematic diagram of a museum exhibition hall according to an embodiment of the present application.
  • FIG. 9(b) is a schematic diagram of an information processing method based on a smartphone according to an embodiment of the present application.
  • FIG. 9(c) is a schematic diagram of an information processing method based on a navigator according to an embodiment of the present application.
  • FIG. 10(a) is a schematic diagram of a scene of an autonomous driving vehicle according to an embodiment of the present application.
  • FIG. 10(b) is a schematic diagram of the control of an autonomous driving vehicle according to an embodiment of the present application.
  • FIG. 10(c) is a schematic control diagram of a controlled driving vehicle according to an embodiment of the present application.
  • FIG. 11(a) is a schematic diagram of a public access area of a living community according to an embodiment of the present application.
  • FIG. 11(b) is a schematic diagram of the control of an automatic aircraft according to an embodiment of the present application.
  • FIG. 11(c) is a schematic control diagram of a controlled robot according to an embodiment of the present application.
  • FIG. 12(a) is a schematic diagram of an operation scenario of a controlled traveling manipulator according to an embodiment of the present application.
  • FIG. 12(b) is a schematic diagram of the control of the controlled traveling manipulator according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an information processing apparatus according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a server according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • FIG. 16 is a schematic diagram of an information processing system according to an embodiment of the present application.
  • FIG. 1 is a flowchart of an information processing method according to an embodiment of the present application; as shown in FIG. 1, the information processing method includes the following steps:
  • Step S102: it is determined that the target enters the predetermined area.
  • the above-mentioned predetermined area may be a public area of a living community where at least one sampling device is installed, a construction site where at least one sampling device is installed, a road where at least one sampling device is installed, a museum exhibition hall where at least one sampling device is installed, etc.
  • the above-mentioned target may be a vehicle entering the above-mentioned road where at least one sampling device is installed, a visitor entering the above-mentioned museum exhibition hall where at least one sampling device is installed, or a manipulator working on the above-mentioned construction site where at least one sampling device is installed.
  • determining that the target enters the predetermined area includes: acquiring the frame image data collected by the at least one sampling device; performing image recognition on the frame image data to obtain a recognition result; and, when it is determined that the identification information of the target exists in the recognition result, determining that the target enters the predetermined area.
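  • As an illustrative aid (not part of the original disclosure), the entry decision above can be sketched as follows. `recognize` is a hypothetical stand-in for the unspecified image-recognition model, and the dictionary frame representation is an assumption:

```python
def recognize(frame):
    """Stand-in for the image-recognition step: return the set of
    identification strings found in one frame of image data."""
    return set(frame.get("ids", []))

def target_entered(frames, target_id):
    """Decide that the target has entered the predetermined area as soon as
    its identification information appears in any recognition result."""
    for frame in frames:
        if target_id in recognize(frame):
            return True
    return False

# Two sampled frames: the target "car-42" appears only in the second one.
frames = [{"ids": []}, {"ids": ["car-42", "person-7"]}]
```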
  • Step S104: in response to the relay tracking thread for the target, the real-time position information of the target generated by the relay tracking thread is obtained, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area, and the real-time position data is generated based on the frame image information collected by the at least one sampling device.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • the above sampling device may be a camera; at least one camera here has a fixed position and shooting angle. The imaging quality of the cameras and their arrangement density determine the accuracy of target positioning; methods such as improving the imaging resolution of the cameras, increasing their arrangement density, and reducing the distance between the cameras and the target can achieve the required positioning accuracy.
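  • The stated relation between imaging resolution, camera distance and positioning accuracy can be illustrated with a back-of-envelope pinhole-camera calculation; the field-of-view model below is a standard approximation and not part of the patent:

```python
import math

def ground_sample_distance(distance_m, horizontal_fov_deg, horizontal_pixels):
    """Approximate ground width covered by one pixel, in metres: a rough
    best-case bound on single-camera positioning accuracy."""
    scene_width = 2 * distance_m * math.tan(math.radians(horizontal_fov_deg) / 2)
    return scene_width / horizontal_pixels
```

Doubling the pixel count halves the per-pixel ground distance, while doubling the camera-to-target distance doubles it, matching the accuracy levers listed above.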
  • relay tracking tasks can use single-camera positioning and single-line relay tracking, or multi-camera positioning and multi-line relay tracking, according to the application scenario requirements. In application scenarios with low precision requirements, a single camera can be used to locate the target and map it to an approximate position in a two-dimensional coordinate system; in application scenarios with high precision requirements, multiple cameras can be used to locate the target and map it to an accurate position in a three-dimensional coordinate system.
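  • For the single-camera, two-dimensional case, one common way to map a fixed camera's pixel coordinates to an approximate ground position is a planar homography; the sketch below assumes a pre-calibrated 3x3 matrix `H` (the patent only states that each camera has a fixed position and shooting angle):

```python
def pixel_to_ground(H, u, v):
    """Apply the 3x3 homography H to pixel (u, v) and return the
    ground-plane position (x, y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

# With the identity homography, pixel coordinates map straight to ground
# coordinates; a calibrated camera would use a measured matrix instead.
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```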
  • Step S106: specific information is determined based on the real-time location information.
  • the specific information can be determined from the real-time location information of the target; here, the specific information can be the media resource for the item currently viewed by a tourist in the museum exhibition hall, direction data sent to tourists visiting the museum exhibition hall, target navigation data sent to tourists, or risk prediction data sent to autonomous vehicles.
  • Step S108: the specific information is sent to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • the above-mentioned specific information can be sent to terminal devices carried by the user (e.g., mobile phones, iPads, autonomous navigation devices), automatic driving devices, manipulators, etc., so that these targets or terminal devices can perform predetermined operations.
  • FIG. 2 is a schematic diagram of an information processing method according to an embodiment of the present application.
  • the sampling device can collect frame image information containing the features of the identification method and frame image information containing the target, including images from the camera in the near-field area of the target.
  • the service system can confirm the identification method through the identification and positioning unit, and show the identification to the human-computer interaction device.
  • the human-computer interaction service starts the interactive service mode; the identification and positioning unit of the service system identifies the feature information of the target and determines the initial coordinate position of the served target; the relay tracking unit starts the relay tracking thread to generate real-time target position data; the direction feature unit starts the target direction feature recognition task and generates target direction data in real time; the risk prediction unit starts live analysis of the target's near-field cameras to generate target risk prediction data; the navigation path unit starts live analysis of the target's far-field cameras and the cameras along the route to generate navigation path data; the media resource unit extracts the corresponding media resources in response to the target position data. After the service system sends the information collected above to the human-computer interaction device, the user interaction unit receives the real-time target position data, real-time target direction data, risk prediction data, navigation path data, and corresponding media resources, aggregates them to generate interactive information, and displays or plays the interactive information to the user.
  • the service system can obtain real-time sampling information, and the real-time sampling information can be information collected by a sampling device.
  • the sampling device includes at least a plurality of cameras; the service system determines the initial position data of the target according to the real-time sampling information and starts the relay tracking task thread of the target. The relay tracking task thread here can relay-track the target among multiple sampling devices and generates the real-time position data of the target; the real-time position data of the target is then sent to the mobile terminal through the wireless network.
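  • A minimal sketch of the relay handoff idea, under the assumption that each camera covers a known ground range and the camera that currently sees the target supplies the position estimate (the `Camera` class and its methods are illustrative, not part of the patent):

```python
class Camera:
    """A fixed camera covering a known one-dimensional ground range."""
    def __init__(self, name, coverage):
        self.name = name
        self.coverage = coverage  # (lo, hi) ground range this camera covers

    def sees(self, x):
        lo, hi = self.coverage
        return lo <= x <= hi

def relay_track(cameras, positions):
    """For each target position, report (camera_name, position) from the
    first camera whose coverage contains the target, i.e. the tracking is
    relayed between cameras as the target moves."""
    track = []
    for x in positions:
        for cam in cameras:
            if cam.sees(x):
                track.append((cam.name, x))
                break
    return track

cams = [Camera("cam-A", (0, 10)), Camera("cam-B", (10, 20))]
```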
  • the service system can extract, according to the real-time position data of the target, the target direction data at the target's location, and/or the target risk prediction data, and/or the target navigation path data, and/or the media resources, and send them to the mobile terminal through the wireless network.
  • Through the above steps, it is determined that the target enters the predetermined area; in response to the relay tracking thread for the target, the real-time position information of the target generated by the relay tracking thread is obtained, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area, and the real-time position data is generated based on the frame image information collected by the at least one sampling device; specific information is determined based on the real-time position information and sent to the terminal device or the target, and the terminal device or the target generates predetermined information based on the specific information and responds to it. This achieves the purpose of pushing the information required by the target in the current environment to the target, or to a device related to the target, according to the real-time location data of the target, which improves the flexibility of the positioning service system and also improves its applicability.
  • Therefore, the information processing method provided by the embodiments of the present application solves the technical problem in the related art that a system providing positioning services for service recipients can only provide positioning services for users and has a relatively limited function.
  • before it is determined that the target enters the predetermined area, the information processing method further includes: identifying identification information for identifying the target from the target; wherein identifying the identification information for identifying the target from the target includes: identifying the biological information and/or non-biological information of the target from the collected images, and using the biological information and/or non-biological information as the identification information for identifying the target; the non-biological information includes at least one of the following: the outline of the target, the color of the target, the text on the target, the identification code of the target; the biological information includes at least one of the following: facial features, body features.
  • the identification information used to represent the target may be identified in advance; when push information is subsequently sent to the target, the identification can be used as matching information.
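  • The matching role of the identification can be sketched as a simple routing step; the push/session structures below are illustrative assumptions, not part of the patent:

```python
def route_push(pushes, sessions):
    """Deliver each (target_id, message) push only to a target whose
    recognized identification matches an active session."""
    delivered = {}
    for target_id, message in pushes:
        if target_id in sessions:
            delivered.setdefault(target_id, []).append(message)
    return delivered
```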
  • the above-mentioned image features can be facial features, object outline features, color features, text features, two-dimensional codes, barcodes, etc.; for example, the initial position of a target person can be determined by identifying color features or digital badge features, and the initial position of a target device can be determined from its shape features.
  • the information processing method further includes: determining initial position information of the target; wherein determining the initial position information of the target includes at least one of the following: acquiring sampling information and generating the initial position information of the target based on the sampling information, wherein the sampling information is obtained from at least one sampling device, and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; acquiring the predetermined terminal information of the terminal device and determining the initial position information of the target based on the predetermined terminal information.
  • the initial position data of the target can be determined, and a tracking task thread for the target can be started; the tracking task thread can perform relay tracking among multiple sampling devices, and the real-time position data of the target is generated according to the tracking task thread. Then the real-time position data of the target, and/or the target direction data, and/or the target risk prediction data, and/or the target navigation path data are aggregated to generate a driving instruction and/or a work instruction, and the driving instruction and/or the work instruction are sent to the mobile terminal through a wireless network.
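  • The aggregation step can be sketched as below. The policy shown ("stop on risk, otherwise move toward the next waypoint") is an illustrative assumption; the patent does not fix a particular aggregation rule:

```python
def make_driving_instruction(position, risk, next_waypoint):
    """Aggregate real-time position, risk prediction data and navigation
    path data into one driving instruction for a controlled device."""
    if risk:
        # Risk prediction takes priority: halt the device.
        return {"action": "stop"}
    # Otherwise steer toward the next waypoint on the navigation path.
    dx = next_waypoint[0] - position[0]
    dy = next_waypoint[1] - position[1]
    return {"action": "move", "vector": (dx, dy)}
```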
  • FIG. 3(a) is a schematic control diagram of a controlled traveling device according to an embodiment of the present application.
  • the information processing method may include the following steps in addition to some of the steps shown in FIG. 2:
  • the controlled traveling equipment requests the controlled service from the service system; the identification and positioning unit of the service system determines the identification method and displays the identification features; the identification and positioning unit identifies the feature information of the target, determines the initial coordinate position of the served target, and sends information to the controlled traveling equipment to trigger it to start the controlled mode; the controlled traveling equipment then sends the controlled-equipment status data and real-time sensing data to the server.
  • the planning control unit of the service system aggregates the real-time target position data, real-time target direction data, risk prediction data, navigation path data, controlled-device status data and sensing data to generate driving instructions and work instructions, and sends the generated instructions to the controlled traveling equipment.
  • the initial position data of the target can be determined from the identification feature of the target; the identification feature is the characteristic information that identifies the target, and can be one of an image feature, a visible-light stroboscopic feature, a target action feature, or a predetermined position feature.
  • the visible light stroboscopic feature is a kind of identification code information: the stroboscopic signal contains a coded representation. The feature is generated by a mobile terminal and can be separated and identified in the video frames by means of the coded stroboscopic signal.
  • the stroboscopic signal can be an alternating light and dark signal, or a color change signal; the stroboscopic signal can be generated by a signal light or a display screen.
  • a smartphone can be identified and located by the service system through the flashing of its fill light or a color change of its display screen; a car can be identified and located through the flashing of its headlights; a UAV can be identified and located through the strobing of its signal lights.
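  • A minimal sketch of decoding such a light/dark stroboscopic signal from per-frame brightness samples; the threshold value and function names are assumptions for illustration, and a real decoder would additionally handle frame-rate synchronization:

```python
def decode_strobe(brightness, threshold=128):
    """Decode an alternating light/dark strobe sequence into bits.

    `brightness` is one average luminance sample per video frame;
    frames brighter than `threshold` read as 1, darker frames as 0.
    """
    return [1 if b > threshold else 0 for b in brightness]

def matches_feature_code(brightness, code, threshold=128):
    """Check whether the decoded bit pattern contains the expected code
    at any offset (the camera may start sampling mid-sequence)."""
    bits = decode_strobe(brightness, threshold)
    n = len(code)
    return any(bits[i:i + n] == code for i in range(len(bits) - n + 1))
```

  • When the decoded pattern matches the feature code that was issued to a terminal, the device flashing that code can be taken as the target, as in the identification flow described above.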
  • the target action feature is a kind of identification code information, where the action feature follows a predetermined rule representation, and the service system determines the initial position of the target by recognizing the target's predetermined action, for example a waving, pointing or nodding action of a target person, a forward-and-backward movement of a target device, or a swinging rocker-arm action of a target device.
  • the above-mentioned predetermined position feature is also a kind of identification code information: the predetermined position corresponds to a preset code in the system, and the initial position of the target is determined by moving the target to the predetermined identification position. For example, when a target person passes through a ticket gate, the service system identifies and initially locates the person according to the known coordinates of the gate; when a target car passes through a predetermined vehicle passage, the service system identifies and initially locates the car according to the known coordinates of the passage; when a robot docks at a predetermined charging position, the service system identifies and initially locates the robot according to the known coordinates of that position.
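  • The predetermined-position lookup reduces to a table keyed by preset codes; in this sketch the place identifiers and coordinates are invented for illustration:

```python
# Preset table mapping predetermined identification positions to known
# coordinates (ticket gate, car lane, charging dock are example entries).
PREDETERMINED_POSITIONS = {
    "ticket_gate_3": (12.5, 4.0),
    "car_lane_1": (40.0, 8.2),
    "charge_dock_7": (3.1, 19.6),
}

def initial_position(place_id):
    """Return the known coordinates preset for a predetermined position."""
    try:
        return PREDETERMINED_POSITIONS[place_id]
    except KeyError:
        raise ValueError(f"no preset coordinates for {place_id!r}")
```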
  • the mobile terminal may be a human-computer interaction device with a wireless connection function, or an automatic driving device with a wireless connection function, or a controlled driving device with a wireless connection function.
  • the wireless connection function may be wireless local area network communication, cellular network communication or visible light stroboscopic communication, and the service system and the mobile terminal may be connected internally through the local area network or externally connected through the Internet.
  • the above-mentioned human-computer interaction device can be a smart portable device, a wearable smart device, a VR smart device, an AR smart device, or a car-mounted smart device, such as a smart phone, smart watch, smart glasses, smart headset, or car navigation.
  • Figure 3(b) is a schematic diagram of the control of an automatic driving device according to an embodiment of the present application.
  • the information processing method may include the following steps in addition to some of the steps shown in Figure 2:
  • the automatic driving equipment requests the controlled service from the service system, and the identification and positioning unit of the service system determines the identification method and displays the identification features;
  • the identification and positioning unit identifies the characteristic information of the target, determines the initial coordinate position of the served target, and sends information to the automatic driving equipment to trigger it to start the automatic driving mode; the automatic driving equipment then sends its status data and real-time sensing data to the server.
  • the above-mentioned automatic driving equipment and controlled traveling equipment can be vehicles such as passenger cars, trucks, forklifts, turnover transport vehicles, agricultural machinery, sanitation machinery, automatic wheelchairs and balance vehicles; mechanical equipment such as walking robots and mobile robots; or flying equipment such as helicopters and unmanned aerial vehicles.
  • the specific information includes at least one of the following: direction data of the target at the position corresponding to the real-time location information, risk prediction data of the target in a predetermined area, media resources, navigation path data, and travel instructions, where the media resources are associated with predetermined location data.
  • the target direction data is the current direction data of the tracked target determined and generated by the direction feature unit in the service system according to the analysis of real-time sampling information. For example, the direction of the body or face of the target person, the direction of the left and right hands of the target person, the direction of the head of the target car, the direction of the front end of the target robot, the working direction of the target manipulator, etc.
  • by sending the target direction data to the mobile terminal, the target can be adjusted precisely in direction and posture. For example, after a walking robot falls, sending the falling-direction data to the robot can assist it in correcting its posture.
  • the target risk prediction data is generated by the risk prediction unit, which aggregates the real-time sampling information of the cameras in the near field of the moving target and judges, according to predetermined rules, whether there is an accident risk.
  • multiple cameras are installed in a UAV flight area, and obstacles in the flight area are predicted in advance from the sampling information of the cameras to generate risk prediction data, which allows higher flight speeds while avoiding collision accidents.
  • multiple cameras are installed along the road, and the relay tracking unit relay-tracks multiple moving targets through the sampling information of the cameras and generates tracking data.
  • the moving targets can be vehicles, pedestrians, animals or unknown moving objects; through real-time analysis of the near-field data in the area of the served target vehicle, the risk prediction data of that vehicle is generated.
  • the service provider can arrange sampling cameras in the area around a highway to give early warning of moving objects that may enter the highway and affect traffic safety. By sending the target risk prediction data to the mobile terminal, the blind spots of the traveling equipment can be eliminated and risk accidents avoided.
  • the navigation path information is generated by the navigation path unit by analyzing the real-time information of the far-field cameras and the cameras along the route of the moving target. This breaks through the limitations of the mobile terminal in acquiring real-time information and makes the navigation path better reflect real-time changes in the environment.
  • media resources are resources such as images, audio, graphics and text preset in the service system, and each media resource is associated with fixed location data.
  • the first media resource is associated with a first set of location data, and the second media resource is associated with a second set of location data.
  • the media resource may be introduction information associated with the fixed location, advertisement information associated with the fixed location, music information associated with the fixed location, or VR or AR image information associated with the fixed location.
  • in response to the real-time location data of the relay-tracked target, the service system pushes to the human-computer interaction device a hyperlink to the audio and graphic introduction information associated with the corresponding location data.
  • in response to the position of the relay-tracked target, the advertisement information of a store product associated with the location data, or the product function introduction information associated with the location data, is pushed to the human-computer interaction device. By pushing media resources in this way, preset media resources associated with the corresponding locations can be delivered accurately to the service objects.
  • the driving instruction is configured to be generated by the planning control unit by real-time aggregation of the target's real-time position data, target direction data, target risk prediction data, target navigation path data, mobile terminal status data and mobile terminal sensing data.
  • the mobile terminal status data may be device parameters, energy value, load value, wireless signal strength value, fault status, and the like.
  • the planning control unit can generate different driving commands according to different equipment parameters. For example, different brands of cars use different driving command interfaces, and the driving parameters corresponding to cars of the same brand with different configurations will also be different.
  • the planning control unit can generate different driving instructions according to different energy values. For example, a fuel vehicle needs to drive in fuel-saving mode when the fuel tank is low, while an unmanned aerial vehicle can use a high-performance mode when its power is sufficient.
  • the planning control unit can generate different driving instructions according to different load values. For example, a truck requires different driving modes when empty and when heavily loaded, and a dangerous-goods transport vehicle such as a tanker needs a special driving mode.
  • the planning control unit may generate different driving instructions according to different wireless signal strength values. For example, when the wireless signal strength is low, a conservative driving mode needs to be adopted.
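  • The mode-selection rules in the preceding examples can be sketched as below; all field names, mode names and thresholds are invented for illustration and are not taken from the application:

```python
def select_driving_mode(status):
    """Pick a driving mode from terminal status data.

    Checks are ordered by severity: fault first, then weak wireless
    signal, low energy, and high load; otherwise drive normally.
    """
    if status.get("fault"):
        return "safe_stop"
    if status.get("signal_strength", 100) < 20:
        return "conservative"  # weak wireless link: drive cautiously
    if status.get("energy", 100) < 15:
        return "eco"           # low fuel/battery: fuel-saving mode
    if status.get("load", 0) > 80:
        return "heavy_load"    # high load: longer braking distances
    return "normal"
```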
  • the mobile-terminal sensing data is the data collected by the sensors configured on the mobile terminal equipment, and can serve as supplementary data in the aggregated data of the planning control unit. The sensors can be image sensors, radar sensors, acceleration sensors, GPS receivers, electronic compasses, and the like.
  • when the planning control unit is configured in the service system, the service system wirelessly transmits the driving instructions or work instructions to the mobile terminal, and the mobile terminal executes them; when the planning control unit is configured in the mobile terminal, the service system wirelessly transmits the target's real-time location data, direction data, risk prediction data or navigation path information to the mobile terminal, which aggregates them and then generates and executes the driving instructions and work instructions.
  • sending the specific information to the terminal device includes sending the direction data and media resources in the specific information to the terminal device, where the terminal device performs at least one of the following operations: generating voice navigation information and playing it; playing the media resources.
  • real-time sampling information can be obtained, the initial position data of the target determined from the real-time frame image information, and a relay tracking task thread for the target started; the relay tracking thread generates the real-time position data of the target. According to the real-time location data of the target, the associated audio resources are extracted; the audio resources are arranged in the media resource unit, which is arranged in the service system and/or in the playback device, and the extracted audio resources are used for the playback task of the playback device.
  • FIG. 4 is a schematic control diagram of a navigation device according to an embodiment of the present application.
  • real-time sampling information is obtained, the real-time sampling information being frame image information collected by a plurality of cameras; the initial position data of the target is determined from the real-time frame image information; a relay tracking task thread for the target is started, which can relay-track the target among the multiple cameras and generates the real-time position data of the target; the real-time position data is sent to the navigation device through the wireless network. The real-time position data is used by the navigation device to generate navigation information, and the navigation information is used for the presentation tasks of the navigation device and can be used for its control.
  • FIG. 5 is a schematic control diagram of a playback device according to an embodiment of the present application.
  • real-time sampling information is obtained, the real-time sampling information being frame image information collected by multiple cameras; the initial position data of the target is determined from the real-time frame image information; a relay tracking task thread for the target is started, which can relay-track the target among the multiple cameras and generates the real-time position data of the target; according to the real-time position data, the associated audio resources are extracted, the audio resources being preset in the media resource unit, which is set in the service system and/or in the playback device; the extracted audio resources are used for the playback task of the playback device.
  • FIG. 6 is a schematic control diagram of an intelligent portable device according to an embodiment of the present application.
  • the camera group can collect frame image information containing the identification-method features and frame image information containing the target; after the service system receives the interactive service request of the intelligent portable device, the identification and positioning unit determines the identification method and the intelligent portable device displays the identification features; the identification and positioning unit then identifies the characteristic information of the target, determines the initial coordinate position of the served target, and sends it to the intelligent portable device.
  • the service system can use the relay tracking unit to generate real-time target position data, and the media resource unit extracts the advertising media associated with the target position data and displays it through the smart portable device.
  • real-time sampling information can be obtained, the real-time sampling information being frame image information collected by multiple cameras; the initial position data of the target is determined from the real-time frame image information; a relay tracking task thread for the target is started, which can relay-track the target among the multiple cameras and generates the real-time position data of the target; the advertising media associated with the real-time position data is extracted and sent to the smart portable device through the wireless network for display. In this way, advertisements can be pushed.
  • sending the specific information to the target includes: determining that the target has activated the automatic driving mode or the controlled driving mode, and sending the specific information to the target. When the target activates the automatic driving mode, the target generates a driving instruction based on at least one of the following: the direction data, the risk prediction data and the navigation route data in the specific information, together with the sensing data and status data (the data sensed by the target itself), and operates based on the driving instruction. When the target activates the controlled driving mode, the specific information carries a driving instruction and the target operates based on it.
  • FIG. 7 is a schematic diagram of the control of an autonomous vehicle according to an embodiment of the present application.
  • the camera group can collect frame image information including the characteristics of the recognition method and the frame image information collected by the camera in the target near-field area;
  • the identification and positioning unit is triggered to confirm the identification method, and the autonomous vehicle is then triggered to display the identification features; the identification and positioning unit identifies the characteristic information of the target, determines the initial coordinate position of the served target, and triggers the autonomous vehicle to start the automatic driving mode; the risk prediction unit then starts real-time analysis of the target near-field cameras to generate target risk prediction data, and the planning control unit aggregates the risk prediction data to generate driving instructions, which the vehicle executes.
  • real-time sampling information is acquired, the real-time sampling information being frame image information collected by a plurality of cameras; the initial position data of the autonomous vehicle is determined from the real-time frame image information; a relay tracking task thread for the autonomous vehicle is started, which can relay-track the target among the multiple cameras; the real-time sampling information of the near-field cameras of the autonomous vehicle is analyzed and, according to predetermined rules, it is judged whether there is an accident risk, generating target risk prediction data; the risk prediction data is sent to the autonomous vehicle, where it is one of the aggregated inputs from which the vehicle generates its driving instruction; the autonomous vehicle executes the driving instruction, and in this way the autonomous vehicle can be controlled.
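  • The application does not specify the predetermined risk rules; as one illustrative stand-in, a linear-extrapolation check can flag an accident risk when any tracked obstacle is predicted to come close to the target. All parameter values here are assumptions:

```python
def predict_risk(target_pos, target_vel, obstacles, horizon=3.0, radius=2.0):
    """Flag an accident risk if any obstacle is predicted to come within
    `radius` metres of the target during the next `horizon` seconds.

    Positions are (x, y) tuples and velocities (vx, vy) tuples; both the
    target and each obstacle are extrapolated linearly at a few sample
    times within the horizon.
    """
    for obs_pos, obs_vel in obstacles:
        for t in (0.0, horizon / 2, horizon):
            tx = target_pos[0] + target_vel[0] * t
            ty = target_pos[1] + target_vel[1] * t
            ox = obs_pos[0] + obs_vel[0] * t
            oy = obs_pos[1] + obs_vel[1] * t
            if ((tx - ox) ** 2 + (ty - oy) ** 2) ** 0.5 < radius:
                return True
    return False
```

  • The resulting flag would be one input among the aggregated data from which the planning logic generates a driving instruction, as described above.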
  • FIG. 8 is a schematic diagram of the control of a controlled traveling vehicle according to an embodiment of the present application.
  • the controlled traveling vehicle sends a controlled service request to the service system; the identification and positioning unit confirms the identification method and triggers the controlled traveling vehicle to display the identification features; the identification and positioning unit of the service system identifies the characteristic information of the target, determines the initial coordinate position of the served target, and triggers the controlled traveling vehicle to start the controlled mode; the relay tracking unit starts the relay tracking thread and generates real-time target position data; the planning control unit aggregates the real-time target position data, generates driving instructions, and triggers the controlled traveling vehicle to execute them.
  • real-time sampling information is acquired, the real-time sampling information being frame image information collected by a plurality of cameras; the initial position data of the controlled traveling vehicle is determined from the real-time frame image information; a relay tracking task thread for the controlled traveling vehicle is started, which can relay-track the target among the multiple cameras and generates the real-time position data of the target; at least the real-time position data is aggregated to generate a driving instruction, which is sent to and executed by the controlled traveling vehicle. In this way the controlled object can be controlled.
  • Figure 9(a) is a schematic diagram of a museum exhibition hall according to an embodiment of the present application.
  • the museum exhibition hall provides indoor navigation services and guided tour services for tourists;
  • a group of cameras for relay tracking is arranged in the public area of the exhibition hall; the service system can relay-track the tourists in the exhibition hall according to the sampling information of the camera group and locate them in real time.
  • the service system sends real-time location data to the tourists' smartphone clients through the wireless network, and the navigation client of the smartphone provides indoor navigation services according to the received real-time location data; in response to the real-time location data, the service system also sends, in real time through the wireless network, the audio guide information corresponding to the tourist's location, the audio advertisement information of nearby stores, and hyperlinks to detailed introductions of the exhibits at that location.
  • tourists can listen to the navigation voice and the guide voice of the smartphone client by wearing headphones.
  • the tourist guide service system is configured with an identification and positioning unit, a relay tracking unit, a direction feature unit and a media resource unit.
  • the identification and positioning unit is configured to identify and confirm the initial position of the tourists with the tourists' smart phone client.
  • the identification and positioning unit can identify the tourists' facial information, smartphone electronic-ticket information or smartphone stroboscopic information, and determines the tourist's initial location data by identifying the location of the characteristic information.
  • this embodiment uses the stroboscopic feature as the identification method to determine the initial position of the tourist. After the service system receives the interactive service request sent by the smartphone client, the identification and positioning unit sends a stroboscopic feature code to the smartphone client, or receives the stroboscopic feature code preset on the client; the smartphone displays the feature code information through flickering of its screen or fill light. After receiving the stroboscopic video information containing the feature code collected by the camera group, the identification and positioning unit recognizes the stroboscopic feature information in the video; if the identified stroboscopic feature information matches the sent feature code, the tourist carrying the smartphone is determined to be the target tourist and the target tourist's initial coordinate position is determined.
  • the relay tracking unit is configured to perform real-time relay tracking of tourists among multiple cameras, generating real-time location data of the tourists. After the identification and positioning unit of the service system confirms the initial position data of the tourist, the relay tracking unit starts the relay tracking thread of the tourist.
  • the direction feature unit is configured to analyze the real-time sampling information of the camera, determine the real-time direction data of the tourists being relayed and track, and provide accurate orientation data for the navigation service and the guide service.
  • Accurate orientation data can be the body direction data of the tourists, the head direction data of the tourists, the hand pointing data of the tourists, the travel direction data of the tourists, etc.
  • the media resource unit is configured to preset media resources such as exhibit introduction audio, exhibit detailed introduction text, and advertisement videos of some stores.
  • the media resources are associated with fixed location data. When the coordinate location data of a tourist matches the coordinate location data of an exhibit or store within a predetermined threshold, the media resource unit extracts the corresponding media resource and pushes it to the smartphone client. By pushing media resources in this way, exhibit introduction audio or advertisement video can be delivered accurately to the service objects.
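  • The threshold matching described above amounts to a distance test against the preset media-location table; in this sketch the coordinates, file names and threshold are invented for illustration:

```python
import math

# Preset association of media resources with fixed location coordinates
# (illustrative entries: one exhibit audio clip and one store advertisement).
MEDIA_RESOURCES = [
    ((10.0, 5.0), "exhibit_1_intro.mp3"),
    ((22.0, 7.5), "store_1_ad.mp4"),
]

def match_media(visitor_pos, threshold=3.0):
    """Return the media resources whose preset location lies within
    `threshold` distance units of the visitor's real-time position."""
    hits = []
    for (x, y), resource in MEDIA_RESOURCES:
        if math.hypot(visitor_pos[0] - x, visitor_pos[1] - y) <= threshold:
            hits.append(resource)
    return hits
```

  • Each matched resource would then be pushed to the smartphone client, as the embodiment describes.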
  • a guide client is installed on the tourist's smartphone; the client can be an application, a client applet installed under an application platform, or a web client accessed through a browser.
  • FIG. 9(b) is a schematic diagram of an information processing method based on a smartphone according to an embodiment of the present application.
  • a client APP is preset in the smartphone, and the client APP includes a navigation module and a guide module, which can run simultaneously.
  • the client APP establishes a data connection with the guide service system through the wireless network; after the guide service system identifies and confirms the initial position of the tourist, the guide service is started.
  • the service system receives the interactive service request sent by the client APP through wireless network transmission and the preset stroboscopic feature code sent by the client APP.
  • the client APP cyclically displays the preset stroboscopic feature code through flashing of the smartphone's display screen, or through flashing of the smartphone's camera fill light. After the feature code starts to flash, the tourist lifts the phone so that the cameras in the exhibition hall can capture the flashing signal.
  • after the cameras in the exhibition hall collect the flickering signal, the identification and positioning unit identifies and locates it, determining the location data of the tourist holding the flashing smartphone. After the identification and positioning unit confirms the tourist's initial coordinate position, the relay tracking unit starts the relay tracking thread for that tourist, generates the tourist's real-time location data, and sends it to the smartphone client APP through the wireless network.
  • the direction feature unit in the tour service system starts the task of identifying the direction features of tourists, and generates real-time direction data of tourists according to the frame image information containing tourists collected in real time by the camera group, and sends them to the client APP through the wireless network.
  • the navigation module of the smartphone client APP aggregates the real-time position data and real-time direction data of tourists, and provides accurate navigation voice broadcasts for tourists.
  • the precise navigation voice broadcast can include precise steering, step counts and viewing directions, for example: "please go forward 10 steps and turn left 90 degrees", "please take 5 steps forward and turn right 45 degrees", "please turn around 180 degrees and continue about 20 steps; the men's bathroom is on the right", "please enter through the third gate on the right", "please look at this exhibit on the left", "please look at the third exhibit from the right", "please look back at the exhibit just introduced", and so on.
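  • A minimal sketch of composing such a steps-and-turn prompt from the aggregated position and direction data; the step length, angle threshold and phrasing are assumptions, not taken from the application:

```python
import math

def voice_instruction(pos, heading_deg, waypoint, step_len=0.7):
    """Compose a steps-and-turn voice prompt from the tourist's position,
    heading and the next waypoint (distances in metres, angles in degrees)."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # signed turn angle in (-180, 180]; positive means turn left
    turn = (bearing - heading_deg + 180) % 360 - 180
    steps = round(math.hypot(dx, dy) / step_len)
    if abs(turn) < 10:
        return f"please go forward {steps} steps"
    side = "left" if turn > 0 else "right"
    return f"please turn {side} {abs(round(turn))} degrees and go forward {steps} steps"
```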
  • the navigation module of the smartphone client APP receives and plays the first background music audio pushed by the media resource unit of the guide service system after the tourist enters the first exhibition hall, and receives and plays the second background music audio pushed by the media resource unit after the tourist enters the second exhibition hall.
  • the background music audio is preset in the media resource unit, and the background music is associated with the location data of each exhibition hall.
  • the media resource unit extracts the background music audio resources associated with the location data in response to the real-time location data of the tourists, and pushes them to the smart phone client APP.
  • after the tourist enters the booth area of the first exhibit, the client APP receives the first exhibit's introduction audio and the navigation module plays it; after the tourist enters the booth area of the second exhibit, the client APP receives the second exhibit's detailed introduction graphics, text and hyperlink information and displays them to the tourist through the smartphone screen.
  • Exhibit introduction audio, detailed introduction graphic information and hyperlink information are preset in the media resource unit.
  • the media resource unit responds to the real-time location data of tourists, extracts the associated media resources, and pushes them to the smartphone navigation APP.
  • after the tourist enters the area of the first store, the navigation module of the smartphone client APP receives the video advertisement of the first store's hot-selling products pushed by the media resource unit and plays it through the smartphone screen; after entering the area of the second store, the tourist receives the video advertisement of the second store's promotion pushed by the media resource unit, and it is likewise played through the smartphone screen.
  • the video advertisement is preset in the media resource unit, and the media resource unit extracts the corresponding advertisement video and pushes it to the smart phone client APP in response to the real-time location data of the tourists.
  • the navigation or guide method in the embodiments of the present application can also be applied to public places such as shopping malls, stations, hospitals, scenic spots, etc., to provide users with accurate navigation, shopping guide, guide, and guide services.
  • Fig. 9(c) is a schematic diagram of an information processing method based on a navigation device according to an embodiment of the present application.
  • the museum pavilion also provides a navigation device for some tourists to use, and the navigation device can only provide simple playback functions.
  • the navigation device is preset with the introduction audio information of the exhibits in the exhibition hall, and the navigation device extracts the related introduction audio information in response to the received real-time location data, and plays it to the tourists.
  • the real-time location data here is the real-time coordinate data of tourists generated by the relay tracking unit in the local service system through analysis of the frame images collected by the camera group, and is pushed to the navigation device carried by the visitor; the exhibit introduction audio and the data associating each audio clip with a location are stored in the navigation device.
  • Figure 10(a) is a schematic diagram of a scene of an autonomous driving vehicle according to an embodiment of the present application.
  • urban roads are densely covered with cameras; through the real-time frame image information collected by the road cameras, the road condition data service company tracks vehicles, pedestrians, animals, and abnormal objects throughout the whole process, generates in real time the position data, driving direction data, and risk prediction data of the corresponding vehicles on the road, and pushes the data to the corresponding vehicles.
  • An autonomous vehicle is usually equipped with a variety of sensors to sense road conditions.
  • the planning control unit of the autonomous vehicle can generate driving instructions according to the road condition data obtained by the sensors to drive the car to drive automatically.
  • the self-driving vehicle can automatically apply for the real-time road condition data service, and receive the vehicle location data, driving direction data and risk prediction data pushed by the road condition data service company in real time through the wireless network.
  • the self-driving vehicle gains a God's-eye view: it can receive in real time road data that its own sensors cannot perceive, which solves the problem of the self-driving vehicle's sensing blind spots.
  • Complete real-time road condition data can avoid the occurrence of traffic accidents, and can also maximize the safe driving speed and improve travel efficiency.
  • Figure 10(b) is a schematic diagram of the control of the autonomous vehicle according to the embodiment of the present application.
  • the autonomous vehicle connects to the cloud server of the road condition data service company through the mobile wireless network and requests the real-time road condition data service.
  • the identification and positioning unit sends the stroboscopic feature code.
  • after the autonomous vehicle receives the stroboscopic feature code, it displays the feature code information by flashing the car's headlights.
  • after the road camera group collects the flashing signal sent by the car, the identification and positioning unit identifies and locates the flashing signal, and completes with the autonomous vehicle the confirmation of the serviced vehicle and of the vehicle's initial coordinate position.
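The stroboscopic feature code handshake can be illustrated as issuing a short bit pattern and matching it against the observed headlight flashes. The bit-string encoding below is a hypothetical stand-in for the actual feature code format, which the application does not specify:

```python
# Hypothetical encoding of a stroboscopic feature code: the server issues a
# short bit string, the vehicle replays it by flashing its headlights, and
# the identification unit matches the observed on/off pattern against the
# issued code to confirm which vehicle is being served.

import random

def issue_feature_code(n_bits=8, seed=None):
    """Issue a random bit pattern to be replayed by the vehicle's lights."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_bits)]

def observed_pattern(code):
    # In the real system this comes from frame images of the headlights;
    # here we simulate a faithful replay.
    return list(code)

def confirm_vehicle(issued, observed):
    """The vehicle is confirmed when the observed flashes match the issued code."""
    return issued == observed

code = issue_feature_code(seed=42)
print(confirm_vehicle(code, observed_pattern(code)))  # True
```

Because each serviced vehicle receives a different code, simultaneous requests can be disambiguated by which observed pattern matches which issued code.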
  • after the identification and positioning unit confirms the vehicle's initial coordinate position, the relay tracking unit in the service system of the road condition data service company starts the relay tracking thread of the vehicle, generates the vehicle's position data in real time by analyzing the collected frame image information containing the target vehicle, and sends it to the self-driving vehicle through the wireless network.
  • the direction feature unit starts the task of identifying the direction features of the vehicle, and generates the driving direction data of the vehicle in real time by analyzing the collected frame image information containing the target vehicle, and sends it to the autonomous vehicle through the wireless network.
  • the risk prediction unit generates the target vehicle's driving risk prediction data by performing live analysis of the cameras in the near-field area of the target vehicle's route, and sends it to the autonomous vehicle through the wireless network.
  • the navigation path unit generates navigation path data through real-time analysis of the cameras in the far-field area and along the target vehicle's route, and sends it to the autonomous vehicle through the wireless network.
  • after receiving the real-time location data, real-time direction data, real-time risk prediction data, and reference navigation path data pushed by the service system of the road condition data service company, the planning control unit of the autonomous vehicle aggregates these data with the road state data acquired by the vehicle's own sensors, generates driving instructions, and drives the vehicle to drive autonomously.
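The aggregation step can be sketched as merging the pushed road-side data with the vehicle's own sensor readings and choosing a conservative action. The thresholds and field names below are invented for illustration:

```python
# A minimal sketch (assumed structure) of the planning control unit's
# aggregation step: road-side pushed data and on-board sensing are merged,
# and a conservative driving instruction is derived. The rule "brake when
# either source reports risk" is illustrative only.

def plan(pushed, onboard):
    """Merge road-side pushed data with on-board sensing and pick an action."""
    risk = pushed.get("risk", 0.0)
    obstacle = onboard.get("obstacle_distance_m", float("inf"))
    if risk > 0.7 or obstacle < 5.0:
        return {"action": "brake"}
    if risk > 0.3:
        return {"action": "slow", "target_speed_kmh": 30}
    return {"action": "cruise", "target_speed_kmh": pushed.get("advised_speed_kmh", 60)}

print(plan({"risk": 0.1, "advised_speed_kmh": 80}, {"obstacle_distance_m": 50}))
# cruise at the advised speed: complete road data lets the safe speed be raised
```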
  • Figure 10(c) is a schematic diagram of the control of a controlled driving vehicle according to an embodiment of the present application.
  • the road condition data service company can also provide driving instructions as a service.
  • the planning control unit in the service system aggregates the vehicle's position data, driving direction data, risk prediction data, and navigation path data, generates driving instructions that can directly drive the controlled driving vehicle, and pushes the driving instructions to the corresponding vehicle through the wireless network.
  • the controlled driving car itself does not need a planning control unit with high computing performance, or does not need to activate the planning control unit of the car itself, and only needs to receive driving instructions to achieve automatic driving.
  • a car with automatic parking and adaptive cruise functions can realize automatic driving without adding a high-performance planning control unit; only minor upgrades are needed, which reduces the difficulty of application.
  • after the controlled driving car enters the service area of the road condition data service company, it connects with the company's cloud server through the mobile wireless network and requests the controlled service.
  • after the service system of the road condition data service company receives the service request of the controlled driving car, the identification and positioning unit sends the stroboscopic feature code; after receiving it, the controlled driving car displays the feature code information by flashing the car's headlights.
  • after the road camera group collects the flashing signal sent by the car, the identification and positioning unit identifies and locates the flashing signal, and completes with the controlled driving car the vehicle confirmation and the confirmation of the vehicle's initial coordinate position.
  • the relay tracking unit in the service system of the road condition data service company starts the relay tracking thread of the vehicle after the identification and positioning unit confirms the initial coordinate position of the vehicle, and generates the position data of the vehicle in real time by analyzing the collected frame image information containing the target vehicle.
  • the direction feature unit generates the driving direction data of the vehicle in real time by analyzing the collected frame image information containing the target vehicle.
  • the risk prediction unit generates the target vehicle's driving risk prediction data by performing live analysis of the cameras in the near-field area of the target vehicle's route.
  • the navigation path unit generates the driving path reference information by performing live analysis of the cameras in the far-field area and along the target vehicle's route.
  • the controlled driving car has a variety of built-in sensors, which can sense and obtain sensing data including radar signals, image information, altitude information, acceleration information, and GPS positioning information in real time.
  • the controlled driving car can send sensing data and car status data to the service system of the road condition data service company through the wireless network in real time.
  • the vehicle state data may be vehicle parameters, energy load values, running load values, wireless signal strength values, fault state values, and the like.
  • the car parameters can be the car brand, model and configuration, or the drive interface parameters of steering and accelerator control.
  • the planning control unit in the service system can generate different driving commands according to different configurations of different cars.
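Generating configuration-specific commands can be pictured as rendering one abstract plan into each car model's drive-interface parameters. The interface names and scale factors below are hypothetical:

```python
# Illustrative sketch: the same abstract plan (steering angle, throttle) is
# rendered into the drive-interface parameters of each car configuration.
# These interface tables are invented, not real vehicle APIs.

INTERFACES = {
    "model_a": {"steer_cmd": "STEER", "throttle_cmd": "THR", "steer_scale": 1.0},
    "model_b": {"steer_cmd": "set_steering", "throttle_cmd": "set_throttle", "steer_scale": 0.5},
}

def render_command(model, steer_deg, throttle_pct):
    """Render an abstract plan into a model-specific drive-interface command list."""
    iface = INTERFACES[model]
    return [
        (iface["steer_cmd"], steer_deg * iface["steer_scale"]),
        (iface["throttle_cmd"], throttle_pct),
    ]

print(render_command("model_a", 10, 20))  # [('STEER', 10.0), ('THR', 20)]
print(render_command("model_b", 10, 20))  # [('set_steering', 5.0), ('set_throttle', 20)]
```

On this reading, the vehicle parameters sent as state data (brand, model, drive-interface parameters) select which rendering table the planning control unit uses.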
  • the service system of the road condition data service company also includes a planning control unit, which is configured to aggregate the real-time location data, driving direction data, real-time risk prediction data, navigation route data, and the sensing data and vehicle status data sent by the car, generate driving instructions, and send them to the controlled driving vehicle through the wireless network; the controlled driving vehicle executes the driving instructions and drives accordingly.
  • the controlled driving vehicle also has a built-in emergency automatic control unit, which is configured to extract the preset emergency driving program instructions and temporarily drive the vehicle when the wireless network connection is interrupted and driving instructions cannot be received.
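The fallback behavior can be sketched as preferring network-delivered instructions and stepping through a preset emergency program when the link drops. The program contents and instruction format are assumptions:

```python
# Hedged sketch of the emergency automatic control unit: while driving
# instructions arrive over the network they are executed; when the link
# drops, a preset emergency program (here: slow, pull over, stop) takes over.

EMERGENCY_PROGRAM = [
    {"action": "slow", "target_speed_kmh": 20},
    {"action": "pull_over"},
    {"action": "stop"},
]

def next_instruction(network_instruction, emergency_step):
    """Prefer the network instruction; otherwise advance the preset program."""
    if network_instruction is not None:
        return network_instruction, emergency_step
    step = min(emergency_step, len(EMERGENCY_PROGRAM) - 1)
    return EMERGENCY_PROGRAM[step], emergency_step + 1

instr, step = next_instruction(None, 0)
print(instr)  # first step of the emergency program: slow to 20 km/h
```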
  • Figure 11(a) is a schematic diagram of a public activity area of a living community according to an embodiment of the present application. As shown in the figure, a group of cameras that can be used for relay tracking are arranged in the public activity area of the living community; based on the relay tracking data, the community property company can provide services such as drone patrols, robotic cleaning, robotic luggage handling, and delivery.
  • Figure 11(b) is a schematic diagram of the control of the automatic aircraft according to an embodiment of the present application.
  • when the property company needs to perform a patrol mission, it activates the automatic aircraft; the aircraft connects to the service system on the property company's local server through a wireless network, and the planning control unit of the automatic aircraft requests the real-time environmental data service.
  • the identification and positioning unit sends the stroboscopic feature code.
  • after the automatic aircraft receives the stroboscopic feature code, it displays the feature code information by blinking the aircraft's signal lights.
  • the identification and positioning unit identifies and locates the blinking signal, and completes the confirmation of the initial coordinate position with the automatic aircraft.
  • after the identification and positioning unit confirms the aircraft's initial coordinate position, the relay tracking unit in the property service system starts the relay tracking thread of the aircraft, generates the aircraft's position data in real time by analyzing the collected frame image information containing the target aircraft, and sends it to the automatic aircraft through the wireless network.
  • the direction feature unit generates the flight direction data of the aircraft in real time by analyzing the collected frame image information containing the target aircraft, and sends it to the automatic aircraft through the wireless network.
  • the risk prediction unit generates the target aircraft's flight risk prediction data by performing live analysis of the cameras in the near-field area of the aircraft's flight route, and sends it to the automatic aircraft through the wireless network.
  • the navigation path unit generates navigation path data by performing live analysis on the far-field area of the target aircraft's flight path and cameras in the area along the flight path, and sends it to the automatic aircraft through the wireless network.
  • after receiving the real-time position data, real-time direction data, real-time risk prediction data, and navigation path data pushed by the property service system, the planning control unit of the automatic aircraft fuses these data with the environmental state data obtained by the aircraft's own sensors and generates flight instructions to drive the aircraft to fly automatically; in addition, by receiving the complete live environment data provided by the community cameras, the automatic aircraft can effectively avoid obstacles, reasonably plan flight routes and speeds, and complete predetermined tasks quickly and effectively.
  • the planning control unit can also be preset in the property company's local service system to directly generate the aircraft's flight instructions and drive the aircraft, thereby reducing the intelligent hardware that must be integrated into the aircraft and saving purchase cost.
  • Figure 11(c) is a schematic diagram of the control of a controlled robot according to an embodiment of the present application.
  • the controlled robot is in the community and is connected to the local server of the property through a wireless network, and the controlled robot requests a controlled service.
  • the property service system receives the service request from the controlled robot and receives the number information preset by the controlled robot.
  • the controlled robot displays the serial number and graphic code information printed on its fuselage in different directions by rotating the fuselage.
  • the camera group collects the frame image information containing the robot number, and the identification and positioning unit identifies the body number or graphic code information, and completes the confirmation of the initial coordinate position with the controlled robot.
  • after the identification and positioning unit confirms the initial coordinate position of the controlled robot, the relay tracking unit in the service system starts the relay tracking thread of the controlled robot and generates the target robot's position data in real time by analyzing the collected frame image information containing the target robot.
  • the direction feature unit generates real-time direction data of the controlled robot by analyzing the collected frame image information containing the target controlled robot.
  • the risk prediction unit generates the controlled robot's driving risk prediction data by performing live analysis of the cameras in the near-field area of the controlled robot.
  • the navigation path unit generates the driving path reference data by performing live analysis of the cameras in the far-field area and along the target robot's route.
  • the property company service system further includes a planning control unit, which is configured to aggregate the real-time position data, real-time direction data, risk prediction data, navigation path data, and the sensing data and robot status data sent by the controlled robot, generate travel instructions and work instructions, and send them to the controlled robot through the wireless network.
  • the controlled robot executes the travel instructions and work instructions, drives the robot to travel and completes work tasks.
  • a controlled robot may have no planning control unit and no operating system, acting as a "brainless" executing machine. Like a thin client in a computer network, the terminal need not be loaded with system hardware and system software; only a basic functional configuration is required for it to operate. Compared with a conventional robot, the brainless controlled robot of the present application is simpler to design and manufacture, easier to manage, maintain, and upgrade, and can be applied in scenarios such as welcome and shopping guidance, transportation and delivery, sanitation, security inspection, patrol, assembly production, and harvesting and picking.
  • FIG. 12( a ) is a schematic diagram of an operation scenario of a controlled traveling manipulator according to an embodiment of the present application.
  • a construction company uses a controlled traveling manipulator to perform construction operations.
  • the construction company installed a chain image acquisition device column with a high-density arrangement of cameras on the construction site.
  • the chain image acquisition device is characterized in that multiple cameras are distributed in a chain in the same data transmission bus.
  • initial modeling is performed in advance, constructing the mapping relationship of each camera in three-dimensional space and determining the coordinate position and viewing angle of each camera on the construction site.
  • the service system can calculate the exact coordinate position of the object in the viewing area through the frame image information obtained by multiple cameras, so as to realize the accurate positioning and relay tracking of the target object.
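A minimal two-dimensional illustration of this multi-camera positioning: each calibrated camera reports the bearing at which it sees the target, and the target's coordinates follow from intersecting the two rays. The real system works in three dimensions with full camera models; this is only a sketch:

```python
# Simplified 2-D multi-camera positioning: intersect two bearing rays from
# cameras whose positions and viewing angles are known from prior modeling.

import math

def locate(cam1, bearing1, cam2, bearing2):
    """Intersect two bearing rays (angles in radians) from known camera positions."""
    x1, y1 = cam1
    x2, y2 = cam2
    d1x, d1y = math.cos(bearing1), math.sin(bearing1)
    d2x, d2y = math.cos(bearing2), math.sin(bearing2)
    # Solve cam1 + t*d1 == cam2 + s*d2 for t.
    denom = d1x * d2y - d1y * d2x
    if abs(denom) < 1e-12:
        return None  # parallel rays: this camera pair cannot localize the target
    t = ((x2 - x1) * d2y - (y2 - y1) * d2x) / denom
    return (x1 + t * d1x, y1 + t * d1y)

# Target at (5, 5): camera A at the origin sees it at 45 degrees, camera B
# at (10, 0) sees it at 135 degrees.
pos = locate((0, 0), math.radians(45), (10, 0), math.radians(135))
print(pos)
```

With more than two cameras, the same idea extends to a least-squares intersection, which also averages out per-camera measurement noise.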
  • Figure 12(b) is a schematic diagram of the control of the controlled traveling manipulator according to the embodiment of the present application.
  • the controlled traveling manipulator on the construction site is connected to the local server of the construction site through a wireless network, and the controlled traveling manipulator requests the controlled service.
  • after receiving the service request from the controlled traveling manipulator, the construction site service system sends characteristic action instructions to the traveling manipulator, and the controlled traveling manipulator executes the characteristic action instructions and swings the manipulator arm.
  • the identification and positioning unit identifies the motion feature information of the traveling manipulator, and completes the confirmation of the initial coordinate position with the controlled traveling manipulator.
  • after the identification and positioning unit confirms the initial coordinate position of the traveling manipulator, the relay tracking unit in the service system starts the relay tracking thread of the traveling manipulator and generates its position data in real time by analyzing the collected frame image information containing the target traveling manipulator.
  • the direction feature unit generates real-time direction data of the traveling manipulator by analyzing the collected frame image information containing the target traveling manipulator.
  • the planning control unit in the service system aggregates the real-time position data and real-time direction data, generates travel instructions, and sends them to the controlled traveling manipulator through the wireless network.
  • the controlled traveling manipulator executes the travel instructions and travels accordingly.
  • the supporting claws on the chassis of the traveling manipulator are driven to drop down and support the ground, ensuring the stability and reliability of the manipulator's working state.
  • the identification and positioning unit analyzes the frame image information including the manipulator collected by the camera group, and accurately verifies the position of the manipulator in the coordinate system. If the coordinate position of the traveling manipulator does not match the predetermined working position, the planning control unit generates a position correction travel instruction according to the coordinate deviation, and drives the traveling manipulator to travel to the predetermined working position. If the coordinate position of the traveling manipulator matches the predetermined working position, the service system starts the preset work instruction, and the traveling manipulator executes the work instruction to complete the job task.
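The verify-and-correct loop above can be sketched as comparing the verified coordinates with the predetermined working position and either issuing a position-correction travel instruction derived from the deviation or starting the work instruction. The tolerance value is an assumption:

```python
# Sketch of the verify-and-correct step: if the manipulator's verified
# coordinates match the predetermined working position (within tolerance),
# the preset work instruction starts; otherwise a correction travel
# instruction is generated from the coordinate deviation.

TOLERANCE = 0.05  # metres, illustrative

def control_step(measured, target):
    """Decide between starting work and correcting position, from one fix."""
    dx = target[0] - measured[0]
    dy = target[1] - measured[1]
    if (dx * dx + dy * dy) ** 0.5 <= TOLERANCE:
        return {"instruction": "start_work"}
    return {"instruction": "correct_position", "move": (dx, dy)}

print(control_step((1.00, 2.00), (1.02, 2.01)))  # within tolerance: start work
print(control_step((0.0, 0.0), (1.0, 0.0)))      # deviation: travel +1.0 in x
```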
  • the preset work instruction may be a work instruction unit stored at the service system end, or a work instruction unit stored at the controlled manipulator end.
  • a work instruction can be a program file written in a high-level programming language and stored in the service system, ready to be invoked and transmitted at any time; it can be a dynamic instruction generated by a neural network unit in the service system based on real-time data; or it can be a static sequential instruction preset in the robot-side PLC programmable controller.
  • the current large-scale 3D printing additive manufacturing equipment needs to be equipped with large-scale cantilevers or traveling guide rails, and the handling, installation and debugging of the equipment are cumbersome, which is not conducive to large-scale use.
  • the controlled traveling manipulator of the present application can solve the problem that construction 3D printing equipment is too large and complex, and can complete precise manufacturing tasks in large-scale work scenes through multiple independently working traveling manipulators, for example collaboratively completing material distribution, bricklaying, pouring, spraying, painting, leveling, welding, and other construction tasks on construction sites.
  • the service system can decompose the 3D model in the 3D printing file into multiple work modules, each corresponding to a standard robot automation program; the service system drives the traveling manipulator to the working position of the first work module and starts the automation program, and after the program finishes, drives the traveling manipulator to the working position of the second work module and starts the same program again.
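That module-by-module workflow can be sketched as a loop that drives the manipulator to each module's position and runs the same standard automation program there. The module structure and callback names are hypothetical:

```python
# Illustrative decomposition of a print job into work modules: the service
# system drives the manipulator to each module's working position and runs
# the standard automation program at each stop.

def run_job(modules, drive_to, run_program):
    """Visit each work module in order: travel there, then run its program."""
    log = []
    for module in modules:
        drive_to(module["position"], log)
        run_program(module["program"], log)
    return log

modules = [
    {"position": (0, 0), "program": "lay_bricks"},
    {"position": (4, 0), "program": "lay_bricks"},
]

log = run_job(
    modules,
    drive_to=lambda pos, log: log.append(("travel", pos)),
    run_program=lambda prog, log: log.append(("run", prog)),
)
print(log)
```

Several manipulators could each be assigned a disjoint subset of modules, which is how the independently working manipulators above would collaborate on one large model.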
  • FIG. 13 is a schematic diagram of the information processing apparatus according to the embodiment of the present application.
  • the information processing apparatus may include: a first determining unit 1301, an acquiring unit 1303, a second determining unit 1305, and a sending unit 1307.
  • the information processing apparatus will be described below.
  • the first determining unit 1301 is configured to determine that the target enters the predetermined area.
  • the acquisition unit 1303 is configured to acquire the real-time position information of the target generated by the relay tracking thread in response to the relay tracking thread of the target, wherein the relay tracking thread is used for relay tracking the target between at least one sampling device in the predetermined area, And generate real-time position data based on frame image information collected by at least one sampling device.
  • the second determining unit 1305 is configured to determine specific information based on the real-time location information.
  • the sending unit 1307 is configured to send the specific information to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information, and responds to the predetermined information.
  • the first determining unit 1301, acquiring unit 1303, second determining unit 1305, and sending unit 1307 correspond to steps S102 to S108 in the embodiment; the examples and application scenarios implemented by the above units and their corresponding steps are the same, but are not limited to the contents disclosed in the above embodiments. It should be noted that the above units may be executed, as part of an apparatus, in a computer system such as a set of computer-executable instructions.
  • the first determining unit can be used to determine that the target enters the predetermined area; the acquiring unit is then used to obtain, in response to the relay tracking thread for the target, the real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to relay-track the target between at least one sampling device in the predetermined area and generate real-time position data based on the frame image information collected by the at least one sampling device; the second determining unit is used to determine the specific information based on the real-time position information; and the sending unit is used to send the specific information to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to it.
  • in this way, the information required by the target in the current environment is pushed, according to the target's real-time position data, to the target or to a device related to the target, which improves the flexibility and applicability of the positioning service system and solves the technical problem in the related art that systems providing positioning services can only provide positioning for users and have a relatively single function.
  • the first determination unit includes: a first acquisition module configured to acquire the frame image data collected by at least one sampling device; a first identification module configured to perform image recognition on the frame image data to obtain an identification result; and a first determination module configured to determine that the target enters the predetermined area when the identification information of the target exists in the identification result.
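The flow of the first determination unit can be sketched as follows, with a placeholder recognizer standing in for a real image-recognition model; the frame and label formats are invented:

```python
# Minimal sketch of the first determination unit: frame images are
# recognized, and the target is deemed to have entered the predetermined
# area when its identification information appears in a recognition result.

def recognize(frame):
    # Placeholder: a real system would run a detection model on frame pixels.
    return frame.get("labels", [])

def target_entered(frames, target_id):
    """Return True once the target's identification appears in any frame."""
    for frame in frames:
        if target_id in recognize(frame):
            return True
    return False

frames = [{"labels": ["car_17"]}, {"labels": ["visitor_42", "car_17"]}]
print(target_entered(frames, "visitor_42"))  # True: target seen in the area
```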
  • the information processing apparatus further includes: an identification unit configured to identify, before it is determined that the target enters the predetermined area, identification information for identifying the target from the target; wherein the identification unit includes: a second identification module configured to identify the biological information and/or non-biological information of the target from the collected images; and a second determination module configured to use the biological information and/or non-biological information as the identification information for identifying the target; wherein the non-biological information includes at least one of the following: the outline of the target, the color of the target, the text on the target, and the identification code of the target; and the biological features of the target include one of the following: facial features and body features.
  • the information processing apparatus further includes: a third determination unit configured to determine the initial position information of the target; wherein the third determination unit includes at least one of the following: a second acquisition module configured to obtain sampling information and generate the initial position information of the target based on the sampling information, wherein the sampling information is obtained from at least one sampling device and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; and a third determining module configured to obtain the predetermined terminal information of the terminal device and determine the initial location information of the target based on the predetermined terminal information.
  • the specific information includes at least one of the following: direction data of the target at a position corresponding to the real-time location information, risk prediction data of the target in a predetermined area, media resources, navigation path data, and travel instructions , where the media resource is associated with predetermined location data.
  • the sending unit includes: a sending module configured to send the direction data and media resources in the specific information to the terminal device, wherein the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing the voice navigation information; playing the media resources.
  • the sending unit includes: a fourth determining module configured to determine whether the target activates the automatic driving mode or the controlled driving mode; and a sending module configured to send the specific information to the target, wherein, when the target activates the automatic driving mode, the target generates a driving instruction based on at least one of the following: the direction data in the specific information, the risk prediction data in the specific information, the navigation route data in the specific information, and the sensing data and state data, and runs based on the driving instruction, where the sensing data and state data are data sensed by the target; and when the target activates the controlled driving mode, the specific information carries the driving instruction, and the target operates based on the driving instruction.
  • the at least one sampling device is at least one of the following: a camera and a radar; the at least one sampling device has a fixed position and a shooting angle.
  • FIG. 14 is a schematic diagram of the server according to the embodiment of the present application.
  • the server includes: an identification and positioning unit 1401, a relay tracking unit 1403, a direction feature unit 1405, a risk prediction unit 1407, a position navigation unit 1409, and a media association unit 1411.
  • the server is described below.
  • the identification and positioning unit 1401 is configured to identify and determine the initial position data of the target, and generate the initial position information of the target based on the initial position data.
  • the relay tracking unit 1403 is configured to perform relay tracking of the target between at least one sampling device in a predetermined area, and generate real-time position information of the target.
  • the direction feature unit 1405 is configured to generate the direction information of the target based on the real-time location information.
  • the risk prediction unit 1407 is configured to determine whether the target is at risk based on the real-time location information through a predetermined rule, and generate risk prediction data of the target in a predetermined area.
  • the location navigation unit 1409 is configured to generate navigation path data based on real-time location information.
  • the media association unit 1411 is configured to retrieve the media resource corresponding to the real-time position information of the target;
  • one or more of the direction information of the target, the risk prediction data of the target in the predetermined area, the navigation path data and the media resources are determined as specific information and sent to the target or terminal device.
  • the terminal device or the target generates predetermined information based on the specific information, and responds to the predetermined information.
  • the server in the embodiment of the present application can identify and determine the initial position data of the target through the identification and positioning unit, and generate the initial position information of the target based on the initial position data;
  • the relay tracking unit is used to perform relay tracking of the target among the at least one sampling device, generating real-time position information of the target;
  • the direction feature unit is used to generate the direction information of the target based on the real-time position information;
  • the risk prediction unit is used to determine, through predetermined rules and based on the real-time position information, whether the target is at risk, and to generate the risk prediction data of the target in the predetermined area;
  • the position navigation unit is used to generate navigation path data based on the real-time position information;
  • the media association unit is used to retrieve the media resources corresponding to the real-time position information of the target; wherein one or more of the direction information of the target, the risk prediction data of the target in the predetermined area, the navigation path data and the media resource are determined as specific information and sent to the target or terminal device, and the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  • FIG. 15 is a schematic diagram of the terminal device according to the embodiment of the present application, as shown in FIG. 15 .
  • the terminal equipment may include:
  • the receiving module 1501 is used for receiving specific information.
  • the above-mentioned specific information can be the media resource of the exhibit currently viewed by a visitor in the museum exhibition hall, direction data sent to a visitor touring the museum exhibition hall, target navigation data sent to the visitor, or risk prediction data sent to an autonomous vehicle.
  • the processing module 1503 is configured to generate predetermined information based on the specific information.
  • the predetermined information may be navigation data generated based on the direction data.
  • the execution module 1505 is used to respond to specific information and/or predetermined information.
  • the terminal device can play media resources in response to specific information; it can also generate navigation data based on the specific information and play the navigation data to provide route navigation for tourists.
  • FIG. 16 is a schematic diagram of the information processing system according to the embodiment of the present application, as shown in FIG. 16 .
  • the information processing system may include:
  • at least one sampling device, which can be used to collect the frame image information of the target;
  • the server 1603 in the above, which is used to generate the real-time position information of the target based on the frame image information, determine the specific information based on the real-time position information, and send the specific information to the terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information, thereby pushing to the target or target-related devices the information the target needs in the current environment according to the target's real-time position data.
  • the identification and positioning unit in the information processing system of the embodiment of the present application is configured to identify the target, determine the initial position of the target, and generate target initial position data;
  • the relay tracking unit is configured to perform real-time relay tracking of the target among multiple sampling devices and generate the target's real-time position data;
  • the target direction feature unit is configured to analyze the real-time sampling information to generate the real-time direction data of the relayed target;
  • the risk prediction unit is configured to analyze the real-time sampling information of the moving target's near-field cameras, determine according to predetermined rules whether there is an accident risk, and generate target risk prediction data;
  • the position navigation path guidance unit is configured to generate the navigation path data by analyzing the real-time sampling information of the moving target's far-field cameras and the live information of the cameras along the route;
  • the associated media unit is configured with preset media resources such as images, audio, and graphic text, the media resources being associated with fixed position data;
  • the planning control unit is configured to aggregate the real-time position data, target direction data, target risk prediction data, and target navigation path data to generate travel instructions or work instructions.
  • a human-computer interaction device, which includes a wireless receiving unit configured to receive, through a wireless network, the target position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or media resources sent by an information processing system.
  • the user interaction unit is configured to receive and aggregate the target location data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or, media resource, generating interactive information to display to the user; and an interactive output unit, configured to display and/or play the interactive information to the user.
  • an automatic driving device, which includes: a wireless receiving unit configured to receive, through a wireless network, the target position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data sent by an information processing system; a planning control unit configured to aggregate in real time the target's real-time position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or mobile-terminal state data, and/or mobile-terminal sensing data, to generate travel instructions or work instructions; and an instruction execution unit configured to execute the travel instructions or work instructions.
  • a controlled traveling device which includes a wireless receiving unit configured to receive a traveling instruction and/or a work instruction sent by an information processing system through a wireless network; an instruction executing unit, is configured to execute the travel instructions, and/or work instructions.
  • a computer-readable storage medium includes a stored computer program, wherein, when the computer program is run by the processor, the device where the computer storage medium is located is controlled to execute any one of the above information processing methods.
  • a processor is also provided, where the processor is configured to run a computer program, wherein when the computer program runs, any one of the information processing methods described above is executed.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only illustrative. For example, the division of the units may be a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of software products; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The present application discloses an information processing method and apparatus, and an information processing system. The method includes: determining that a target has entered a predetermined area; in response to a relay tracking thread for the target, acquiring real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to perform relay tracking of the target among at least one sampling device in the predetermined area and to generate real-time position data based on frame image information collected by the at least one sampling device; determining specific information based on the real-time position information; and sending the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information. The present application solves the technical problem in the related art that systems providing positioning services to service recipients can only provide positioning services to users and are therefore limited in function.

Description

Information Processing Method and Apparatus, and Information Processing System
This application claims priority to Chinese Patent Application No. 202011602856.8, entitled "Information Processing Method and Apparatus, and Information Processing System", filed with the China National Intellectual Property Administration on December 29, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information processing technology, and in particular to an information processing method and apparatus, and an information processing system.
Background
Current technologies providing positioning services to consumers are mainly satellite positioning and wireless base-station positioning. Their positioning accuracy is on the order of 10 m, and they can only provide positioning services to service recipients, without providing any further information.
No effective solution has yet been proposed for the above problem in the related art that systems providing positioning services to service recipients can only provide positioning services to users and are limited in function.
Summary
Embodiments of the present application provide an information processing method and apparatus, and an information processing system, so as to at least solve the technical problem in the related art that systems providing positioning services to service recipients can only provide positioning services to users and are limited in function.
According to one aspect of the embodiments of the present application, an information processing method is provided, including: determining that a target has entered a predetermined area; in response to a relay tracking thread for the target, acquiring real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to perform relay tracking of the target among at least one sampling device in the predetermined area and to generate the real-time position data based on frame image information collected by the at least one sampling device; determining specific information based on the real-time position information; and sending the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
Optionally, determining that the target has entered the predetermined area includes: acquiring frame image data collected by the at least one sampling device; performing image recognition on the frame image data to obtain a recognition result; and determining that the target has entered the predetermined area when identification information of the target is present in the recognition result.
Optionally, before determining that the target has entered the predetermined area, the method further includes: recognizing, from the target, identification information used to identify the target, which includes: recognizing biological information and/or non-biological information of the target from captured images, and using the biological information and/or non-biological information as the identification information of the target, wherein the non-biological information includes at least one of: the contour of the target, the color of the target, text on the target, and an identification code of the target; and the biological features of the target include one of: facial features and body-posture features.
Optionally, the method further includes: determining initial position information of the target, which includes at least one of: acquiring sampling information and generating the initial position information of the target based on the sampling information, wherein the sampling information is acquired from the at least one sampling device, and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; and acquiring predetermined terminal information of a terminal device and determining the initial position information of the target based on the predetermined terminal information.
Optionally, the specific information includes at least one of: direction data of the target at the position corresponding to the real-time position information, risk prediction data of the target in the predetermined area, a media resource, navigation path data, and a travel instruction, wherein the media resource is associated with predetermined position data.
Optionally, sending the specific information to the terminal device includes: sending the direction data and the media resource in the specific information to the terminal device, wherein the terminal device performs at least one of: generating voice navigation information based on the direction data and playing the voice navigation information; and playing the media resource.
Optionally, sending the specific information to the target includes: determining that the target has activated an automatic driving mode or a controlled driving mode; and sending the specific information to the target, wherein, when the target has activated the automatic driving mode, the target generates a travel instruction based on at least one of: the direction data in the specific information, the risk prediction data in the specific information, the navigation path data in the specific information, sensing data, and state data, and operates based on the travel instruction, the sensing data and the state data being data sensed by the target; and, when the target has activated the controlled driving mode, the specific information carries a travel instruction, and the target operates based on the travel instruction.
Optionally, the at least one sampling device is at least one of: a camera and a radar, and has a fixed position and shooting angle.
According to another aspect of the embodiments of the present application, an information processing apparatus is provided, including: a first determining unit configured to determine that a target has entered a predetermined area; an acquiring unit configured to, in response to a relay tracking thread for the target, acquire real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to perform relay tracking of the target among at least one sampling device in the predetermined area and to generate the real-time position data based on frame image information collected by the at least one sampling device; a second determining unit configured to determine specific information based on the real-time position information; and a sending unit configured to send the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
Optionally, the first determining unit includes: a first acquiring module configured to acquire frame image data collected by the at least one sampling device; a first recognition module configured to perform image recognition on the frame image data to obtain a recognition result; and a first determining module configured to determine that the target has entered the predetermined area when identification information of the target is present in the recognition result.
Optionally, the apparatus further includes: a recognition unit configured to recognize, from the target, identification information used to identify the target before it is determined that the target has entered the predetermined area, wherein the recognition unit includes: a second recognition module configured to recognize biological information and/or non-biological information of the target from captured images; and a second determining module configured to use the biological information and/or non-biological information as the identification information of the target, wherein the non-biological information includes at least one of: the contour of the target, the color of the target, text on the target, and an identification code of the target; and the biological features of the target include one of: facial features and body-posture features.
Optionally, the apparatus further includes: a third determining unit configured to determine initial position information of the target, wherein the third determining unit includes at least one of: a second acquiring module configured to acquire sampling information and generate the initial position information of the target based on the sampling information, wherein the sampling information is acquired from the at least one sampling device, and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; and a third determining module configured to acquire predetermined terminal information of a terminal device and determine the initial position information of the target based on the predetermined terminal information.
Optionally, the specific information includes at least one of: direction data of the target at the position corresponding to the real-time position information, risk prediction data of the target in the predetermined area, a media resource, navigation path data, and a travel instruction, wherein the media resource is associated with predetermined position data.
Optionally, the sending unit includes: a sending module configured to send the direction data and the media resource in the specific information to the terminal device, wherein the terminal device performs at least one of: generating voice navigation information based on the direction data and playing the voice navigation information; and playing the media resource.
Optionally, the sending unit includes: a fourth determining module configured to determine that the target has activated an automatic driving mode or a controlled driving mode; and a sending module configured to send the specific information to the target, wherein, when the target has activated the automatic driving mode, the target generates a travel instruction based on at least one of: the direction data in the specific information, the risk prediction data in the specific information, the navigation path data in the specific information, sensing data, and state data, and operates based on the travel instruction, the sensing data and the state data being data sensed by the target; and, when the target has activated the controlled driving mode, the specific information carries a travel instruction, and the target operates based on the travel instruction.
Optionally, the at least one sampling device is at least one of: a camera and a radar, and has a fixed position and shooting angle.
According to another aspect of the embodiments of the present application, a server applied to any one of the above information processing methods is provided, including: an identification and positioning unit configured to identify the target, determine its initial position data, and generate initial position information of the target based on the initial position data; a relay tracking unit configured to perform relay tracking of the target among at least one sampling device in a predetermined area and generate real-time position information of the target; a direction feature unit configured to generate direction information of the target based on the real-time position information; a risk prediction unit configured to determine, through predetermined rules and based on the real-time position information, whether the target is at risk, and to generate risk prediction data of the target in the predetermined area; a position navigation unit configured to generate navigation path data based on the real-time position information; and a media association unit configured to retrieve the media resource corresponding to the real-time position information of the target, wherein one or more of the direction information of the target, the risk prediction data of the target in the predetermined area, the navigation path data, and the media resource are determined as specific information and sent to the target or a terminal device, and the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
According to another aspect of the embodiments of the present application, a terminal device applied to any one of the above information processing methods is provided, including: a receiving module configured to receive specific information; a processing unit configured to generate predetermined information based on the specific information; and an execution unit configured to respond to the specific information and/or the predetermined information.
According to another aspect of the embodiments of the present application, an information processing system applied to any one of the above information processing methods is provided, including: at least one sampling device configured to collect frame image information of a target; and the above server, configured to generate real-time position information of the target based on the frame image information, determine specific information based on the real-time position information, and send the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is provided, including a stored computer program, wherein, when the computer program is run by a processor, the device on which the storage medium resides is controlled to execute any one of the above information processing methods.
According to another aspect of the embodiments of the present application, a processor is provided, configured to run a computer program, wherein, when run, the computer program executes any one of the above information processing methods.
In the embodiments of the present application, the following scheme is adopted: determining that a target has entered a predetermined area; in response to a relay tracking thread for the target, acquiring real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread performs relay tracking of the target among at least one sampling device in the predetermined area and generates real-time position data based on frame image information collected by the at least one sampling device; determining specific information based on the real-time position information; and sending the specific information to a terminal device or the target, which generates predetermined information based on the specific information and responds to it. The information processing method provided by the embodiments of the present application thus pushes, according to the target's real-time position data, the information the target needs in its current environment to the target or to devices related to the target, achieving the technical effect of improving the flexibility of positioning service systems and also improving their applicability, thereby solving the technical problem in the related art that systems providing positioning services to service recipients can only provide positioning services to users and are limited in function.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an undue limitation on it. In the drawings:
FIG. 1 is a flowchart of an information processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an information processing method according to an embodiment of the present application;
FIG. 3(a) is a control schematic diagram of a controlled traveling device according to an embodiment of the present application;
FIG. 3(b) is a control schematic diagram of an automatic traveling device according to an embodiment of the present application;
FIG. 4 is a control schematic diagram of a navigation device according to an embodiment of the present application;
FIG. 5 is a control schematic diagram of a playback device according to an embodiment of the present application;
FIG. 6 is a control schematic diagram of a smart portable device according to an embodiment of the present application;
FIG. 7 is a control schematic diagram of an automatically traveling vehicle according to an embodiment of the present application;
FIG. 8 is a control schematic diagram of a controlled vehicle according to an embodiment of the present application;
FIG. 9(a) is a schematic diagram of a museum exhibition hall according to an embodiment of the present application;
FIG. 9(b) is a schematic diagram of a smartphone-based information processing method according to an embodiment of the present application;
FIG. 9(c) is a schematic diagram of an audio-guide-based information processing method according to an embodiment of the present application;
FIG. 10(a) is a scene schematic diagram of an autonomous vehicle according to an embodiment of the present application;
FIG. 10(b) is a control schematic diagram of an autonomous vehicle according to an embodiment of the present application;
FIG. 10(c) is a control schematic diagram of a controlled vehicle according to an embodiment of the present application;
FIG. 11(a) is a schematic diagram of a public activity area of a residential community according to an embodiment of the present application;
FIG. 11(b) is a control schematic diagram of an automatic aircraft according to an embodiment of the present application;
FIG. 11(c) is a control schematic diagram of a controlled robot according to an embodiment of the present application;
FIG. 12(a) is a schematic diagram of an operating scene of a controlled traveling manipulator according to an embodiment of the present application;
FIG. 12(b) is a control schematic diagram of a controlled traveling manipulator according to an embodiment of the present application;
FIG. 13 is a schematic diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a server according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a terminal device according to an embodiment of the present application;
FIG. 16 is a schematic diagram of an information processing system according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", etc. in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, and may include other steps or units not explicitly listed or inherent to the process, method, product, or device.
According to an embodiment of the present application, a method embodiment of an information processing method is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that here.
FIG. 1 is a flowchart of an information processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps.
Step S102: determining that a target has entered a predetermined area.
Optionally, the predetermined area may be a public area of a residential community equipped with at least one sampling device, a construction site equipped with at least one sampling device, a road equipped with at least one sampling device, a museum exhibition hall equipped with at least one sampling device, or the like.
Optionally, the target may be a vehicle entering the road equipped with at least one sampling device, a visitor entering the museum exhibition hall equipped with at least one sampling device, a manipulator working at the construction site equipped with at least one sampling device, or at least one of the following working in the community public area equipped with at least one sampling device: a drone or a robot.
In an optional embodiment, determining that the target has entered the predetermined area includes: acquiring frame image data collected by the at least one sampling device; performing image recognition on the frame image data to obtain a recognition result; and determining that the target has entered the predetermined area when identification information of the target is present in the recognition result.
Step S104: in response to a relay tracking thread for the target, acquiring real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread performs relay tracking of the target among at least one sampling device in the predetermined area and generates real-time position data based on frame image information collected by the at least one sampling device.
Optionally, the at least one sampling device is at least one of: a camera and a radar, and has a fixed position and shooting angle.
If the sampling device is a camera, each camera has a fixed position and shooting angle. The imaging quality of the cameras and their deployment density determine the accuracy of target positioning; for example, in application scenarios requiring precise positioning, the required accuracy can be achieved by increasing the camera imaging resolution, increasing the camera deployment density, and reducing the distance between camera and target.
Multiple cameras are distributed continuously to satisfy the sampling requirements of the relay tracking task. Depending on the application scenario, the relay tracking task may be single-camera, single-line relay tracking or multi-camera, multi-line relay tracking. In low-accuracy scenarios, single-camera positioning can be used, mapping the target to an approximate position in a two-dimensional coordinate system; in high-accuracy scenarios, multiple cameras can be used, mapping the target to a precise position in a three-dimensional coordinate system.
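The multi-camera, three-dimensional case above reduces to a standard ray-intersection computation: each camera contributes a known position and a bearing ray toward the target, and the target's coordinate is the point closest to both rays. The sketch below is a minimal plain-Python illustration (helper names and the midpoint choice are assumptions, not from the specification):

```python
def triangulate_midpoint(p1, d1, p2, d2):
    """Estimate a 3D target position from two camera rays.

    p1, p2: camera positions; d1, d2: bearing vectors toward the target.
    Returns the midpoint of the closest points between the two rays.
    """
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom    # parameter along ray 1
    s = (a * e - b * d) / denom    # parameter along ray 2
    q1 = tuple(p + t * x for p, x in zip(p1, d1))
    q2 = tuple(p + s * x for p, x in zip(p2, d2))
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras whose rays both pass through the point (2, 3, 1):
print(triangulate_midpoint((0, 0, 0), (2, 3, 1), (4, 0, 0), (-2, 3, 1)))
```

With more than two cameras the same least-squares idea extends by averaging pairwise estimates or solving one joint system.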
Step S106: determining specific information based on the real-time position information.
In this embodiment, the specific information can be determined from the target's real-time position information. The specific information may be a media resource for the exhibit currently viewed by a visitor in the museum exhibition hall, direction data sent to a visitor touring the hall, target navigation data sent to the visitor, or risk prediction data sent to an autonomous vehicle.
Step S108: sending the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
In this embodiment, the specific information may be sent to a terminal device carried by the user (for example, a mobile phone, an iPad, or an autonomous navigation device), an automatic driving device, a manipulator, or the like, so that these targets or terminal devices can perform predetermined operations.
FIG. 2 is a schematic diagram of an information processing method according to an embodiment of the present application. As shown in FIG. 2, the sampling devices may collect frame image information containing recognition-method features, frame image information containing the target, frame image information collected by cameras in the target's near-field area, and frame image information collected by cameras in the target's far-field and along-route areas. After the human-computer interaction device requests interactive service, the service system confirms the recognition method through the identification and positioning unit and presents the recognition feature to the human-computer interaction device, which starts the interactive service mode. The identification and positioning unit of the service system recognizes the target's feature information and determines the initial coordinate position of the served target; the relay tracking unit starts a relay tracking thread and generates real-time target position data; the direction feature unit starts a target direction-feature recognition task and generates target direction data in real time; the risk prediction unit starts live analysis of the target's near-field cameras and generates target risk prediction data; the navigation path unit starts live analysis of the target's far-field and along-route cameras and generates navigation path data; and the media resource unit, in response to the target position data, retrieves the corresponding media resources. After the service system sends the collected information to the human-computer interaction device, the user interaction unit receives in real time the real-time target position data, real-time target direction data, risk prediction data, navigation path data, and corresponding media resources sent by the service system, aggregates them into interaction information, and displays or plays the interaction information to the user.
In addition, in this embodiment, the service system may acquire real-time sampling information, which may be collected by sampling devices including at least multiple cameras, and determine the target's initial position data from the real-time sampling information; a relay tracking task thread for the target is started, which performs relay tracking of the target among the multiple sampling devices; the relay tracking task thread generates the target's real-time position data; and the real-time position data is then sent to the mobile terminal over the wireless network.
Specifically, the service system may, based on the target's real-time position data, extract the target's direction data at that position, and/or target risk prediction data, and/or media resources, and send the target direction data, and/or target risk prediction data, and/or target navigation path data, and/or media resources to the mobile terminal over the wireless network.
As can be seen from the above, in the embodiments of the present application, it can be determined that a target has entered a predetermined area; in response to a relay tracking thread for the target, real-time position information of the target generated by the relay tracking thread is acquired, wherein the relay tracking thread performs relay tracking of the target among at least one sampling device in the predetermined area and generates real-time position data based on frame image information collected by the at least one sampling device; specific information is determined based on the real-time position information; and the specific information is sent to a terminal device or the target, which generates predetermined information based on it and responds to it. This pushes, according to the target's real-time position data, the information the target needs in its current environment to the target or to related devices, achieving the technical effect of improving the flexibility of positioning service systems and also improving their applicability.
Therefore, the information processing method provided by the embodiments of the present application solves the technical problem in the related art that systems providing positioning services to service recipients can only provide positioning services to users and are limited in function.
In an optional embodiment, before it is determined that the target has entered the predetermined area, the method further includes: recognizing, from the target, identification information used to identify the target, which includes: recognizing biological information and/or non-biological information of the target from captured images, and using the biological and/or non-biological information as the identification information, wherein the non-biological information includes at least one of: the contour of the target, the color of the target, text on the target, and an identification code of the target; and the biological features of the target include one of: facial features and body-posture features.
In this embodiment, the identification information representing the target can be recognized before it is determined that the target has entered the predetermined area, and this identification can later be used as matching information when pushing information to the target.
The image features may be facial features of a human face, object contour features, color features, text features, a QR code, a barcode, and so on. For example, a target person can be recognized and initially positioned by facial features, by clothing color features, or by a numbered badge; a target device can be initially positioned by its appearance features.
In an optional embodiment, the method further includes: determining initial position information of the target, which includes at least one of: acquiring sampling information and generating the target's initial position information based on it, wherein the sampling information is acquired from the at least one sampling device, which is triggered by a predetermined condition to perform a shooting task; and acquiring predetermined terminal information of a terminal device and determining the target's initial position information based on it.
In this embodiment, after the target's real-time sampling information is acquired, the target's initial position data can be determined and a tracking task thread for the target started; the tracking task thread can perform relay tracking among multiple sampling devices, and the target's real-time position data is then generated by the thread. The target's real-time position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data are then aggregated to generate a travel instruction and/or a work instruction, which is sent to the mobile terminal over the wireless network.
FIG. 3(a) is a control schematic diagram of a controlled traveling device according to an embodiment of the present application. As shown in FIG. 3(a), in addition to some of the steps shown in FIG. 2, the method may further include: the controlled traveling device requests controlled service from the service system; the identification and positioning unit of the service system determines the recognition method and presents the recognition feature; the identification and positioning unit recognizes the target's feature information, determines the initial coordinate position of the served target, and sends information to the controlled traveling device to trigger it to start the controlled mode; the controlled traveling device then sends controlled-device state data and real-time sensing data to the server.
Next, the planning control unit of the service system can aggregate the real-time target position data, real-time target direction data, risk prediction data, navigation path data, controlled-device state data, and sensing data to generate travel instructions and work instructions, and send the generated instructions to the controlled traveling device.
Optionally, the target's initial position data may be determined through the target's recognition feature, which is feature information that identifies the target and may be one of: an image feature, a visible-light strobe feature, a target action feature, or a predetermined position feature.
The visible-light strobe feature is a type of identification-code information feature: the strobe signal contains a coded representation. The visible-light strobe feature is generated by the mobile terminal; the target's initial position can be determined by separating and recognizing, in video frames, the strobe signal containing the coded representation. The strobe signal may be an alternating bright-dark signal or a color-change signal, and may be generated by a signal lamp or by a display screen. For example, a smartphone can be recognized and positioned by the service system by flashing its fill light, or by displaying color changes on its screen; a car can be recognized and positioned by flashing its headlights; an unmanned aerial vehicle can be recognized and positioned by the strobing of its signal light.
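As a rough illustration of how such a bright-dark strobe could carry a code, the sketch below encodes each bit as a fixed-length run of bright or dark frames and recovers the bits by majority vote over each bit period. The frame rate, bit length, and threshold are invented for the example; the actual coding scheme is not specified in this application:

```python
def encode_strobe(bits, frames_per_bit):
    """Expand a bit sequence into per-frame brightness samples (0.0/1.0)."""
    return [1.0 if b else 0.0 for b in bits for _ in range(frames_per_bit)]

def decode_strobe(brightness, frames_per_bit, threshold=0.5):
    """Recover bits from per-frame brightness by majority vote per period,
    which tolerates a few misread frames."""
    bits = []
    for i in range(0, len(brightness) - frames_per_bit + 1, frames_per_bit):
        window = brightness[i:i + frames_per_bit]
        ones = sum(1 for v in window if v > threshold)
        bits.append(1 if ones * 2 >= len(window) else 0)
    return bits

code = [1, 0, 1, 1, 0]
samples = encode_strobe(code, frames_per_bit=4)
samples[2] = 0.0  # simulate one corrupted frame; majority vote absorbs it
print(decode_strobe(samples, frames_per_bit=4))
```

In a real system the brightness samples would come from the tracked region of successive video frames rather than a synthetic list.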
The target action feature is also a type of identification-code information feature: the action feature contains a predetermined-rule representation, and the service system determines the target's initial position by recognizing the target's predetermined action, for example a target person's wave, finger gesture, or nod, or a target device's back-and-forth movement or swinging-arm action.
The predetermined position feature is likewise a type of identification-code information: the predetermined position corresponds to a code preset in the system, and the target's initial position is determined by the target moving to the predetermined recognition position. For example, when a target person passes through a ticket gate, the service system identifies and initially positions the target according to the known coordinates of the gate; when a target car passes through a predetermined vehicle lane, the service system identifies and initially positions it according to the known lane coordinates; when a robot is at a predetermined charging position, the service system identifies and initially positions it according to the known coordinates of the charging position.
In the embodiments of the present application, the mobile terminal may be a human-computer interaction device with wireless connectivity, an automatic traveling device with wireless connectivity, or a controlled traveling device with wireless connectivity.
In the embodiments of the present application, the wireless connectivity may be wireless LAN communication, cellular network communication, or visible-light strobe communication, and the service system and the mobile terminal may be connected internally through a local area network or externally through the Internet.
The human-computer interaction device may be a smart portable device, a wearable smart device, a VR smart device, an AR smart device, or an in-vehicle smart device, for example a smartphone, smart watch, smart glasses, smart earphones, or in-vehicle navigation.
FIG. 3(b) is a control schematic diagram of an automatic traveling device according to an embodiment of the present application. As shown in FIG. 3(b), in addition to some of the steps shown in FIG. 2, the method may further include: the automatic traveling device requests service from the service system; the identification and positioning unit of the service system determines the recognition method and presents the recognition feature; the identification and positioning unit recognizes the target's feature information, determines the initial coordinate position of the served target, and sends information to the automatic traveling device to trigger it to start the automatic traveling mode; the automatic traveling device then sends its state data and real-time sensing data to the server.
The automatic traveling device and the controlled traveling device may be vehicle equipment such as buses, trucks, forklifts, turnover transport vehicles, agricultural machinery vehicles, sanitation machinery vehicles, automatic wheelchairs, and balance scooters; mechanical equipment such as walking robots and mobile manipulators; or flying equipment such as transport helicopters and unmanned aerial vehicles.
In an optional embodiment, the specific information includes at least one of: direction data of the target at the position corresponding to the real-time position information, risk prediction data of the target in the predetermined area, a media resource, navigation path data, and a travel instruction, wherein the media resource is associated with predetermined position data.
In the embodiments of the present application, the target direction data is the current direction data of the tracked target generated by the direction feature unit of the service system through analysis of the real-time sampling information, for example the direction of a target person's body or face, the direction of the person's left and right hands, the heading of a target car, the front direction of a target robot, or the working direction of a target manipulator. By sending target direction data to the mobile terminal, the target can make precise adjustments of direction and posture; for example, after a walking robot falls, sending it the direction data of the fall can help it right itself correctly.
The target risk prediction data is generated by the risk prediction unit, which aggregates real-time sampling information from near-field cameras of the moving target and judges, according to predetermined rules, whether there is an accident risk. For example, with multiple cameras installed in a UAV's flight area, obstacles in the area can be predicted in advance from the cameras' sampling information to generate risk prediction data, which allows flight speed to be increased while avoiding collisions. As another example, with multiple cameras installed along a road, the relay tracking unit performs relay tracking of multiple moving targets from the cameras' sampling information and generates tracking data; the moving targets may be vehicles, pedestrians, animals, or unknown moving objects. By analyzing near-field data of the served target vehicle's area in real time, risk prediction data for that vehicle is generated. To raise the served vehicle's safe-driving margin, the service provider can deploy sampling cameras in the areas bordering the road to give early warning of moving objects that might enter the road and affect traffic safety. By sending target risk prediction data to the mobile terminal, the traveling device's sensing blind zones can be eliminated and risk accidents avoided.
The navigation path information is generated by the navigation path unit through analysis of live information from the moving target's far-field cameras and along-route cameras. Analyzing their real-time sampling information overcomes the mobile terminal's limitations in acquiring real-time information, so the navigation path better reflects real-time changes in the environment.
The media resources are video, audio, graphic-text, and similar resources preset in the service system and associated with fixed position data; for example, a first media resource is associated with a first set of position data and a second media resource with a second set. A media resource may be introduction information, advertising information, music, or VR or AR video associated with a fixed position. For example, in a museum exhibition hall, in response to the real-time position data of the relay-tracked target, the service system pushes to the human-computer interaction device the exhibit-introduction audio and hyperlinks to graphic-text introductions associated with the corresponding position data. In a shopping mall, in response to the tracked target's position data, advertising information for the shops associated with that position data, or product-feature introductions associated with it, can be pushed to the human-computer interaction device. Pushing media resources in this way delivers the preset media resources associated with each position precisely to the service recipient.
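The position-to-resource association described above can be pictured as a lookup that returns the preset resource whose anchor coordinates fall within a distance threshold of the target's real-time position. A minimal sketch (Euclidean distance and the example anchors are illustrative assumptions):

```python
import math

def nearest_media(position, media_index, threshold):
    """Return the media resource whose anchor is closest to `position`
    and within `threshold`; otherwise None (no push)."""
    best, best_dist = None, float("inf")
    for anchor, resource in media_index.items():
        dist = math.dist(position, anchor)
        if dist < best_dist:
            best, best_dist = resource, dist
    return best if best_dist <= threshold else None

# Hypothetical anchors: each exhibit's audio keyed by its floor coordinates.
index = {(0.0, 0.0): "exhibit-1-audio", (10.0, 0.0): "exhibit-2-audio"}
print(nearest_media((9.2, 0.5), index, threshold=2.0))  # near exhibit 2
print(nearest_media((5.0, 5.0), index, threshold=2.0))  # nothing in range
```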
The travel instruction is generated by the planning control unit by aggregating in real time the target's real-time position data, target direction data, target risk prediction data, target navigation path data, mobile-terminal state data, and mobile-terminal sensing data. The mobile-terminal state data may be device parameters, energy values, load values, wireless signal strength values, fault states, and so on. The planning control unit can generate different travel instructions for different device parameters; for example, different car brands use different travel-instruction interfaces, and differently configured cars of the same brand have different drive parameters. It can generate different travel instructions for different energy values; for example, a fuel car with little fuel in the tank should travel in fuel-saving mode, while a UAV with ample battery can fly in high-performance mode. It can generate different travel instructions for different load values; for example, an empty truck and a heavily loaded truck need different travel modes, and dangerous-goods transporters such as tankers need special travel modes. It can generate different travel instructions for different wireless signal strength values; for example, when the signal strength is low, a more conservative travel mode should be used. The mobile-terminal sensing data is data collected by sensors configured on the mobile terminal and can supplement the data aggregated by the planning control unit; the sensors may be image sensors, radar sensors, acceleration sensors, GPS receivers, electronic compass sensors, and so on.
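The state-dependent behavior above amounts to rule-based mode selection before the detailed travel instruction is generated. A toy sketch with invented field names and thresholds (the application does not fix concrete values):

```python
def plan_travel_mode(state):
    """Pick a travel mode from terminal state data; rules checked in
    priority order (fault > weak signal > low energy > heavy load)."""
    if state.get("fault"):
        return "stop"                     # fault state: halt the device
    if state.get("signal_strength", 1.0) < 0.3:
        return "conservative"             # weak link: drive cautiously
    if state.get("energy", 1.0) < 0.2:
        return "eco"                      # low fuel/battery: save energy
    if state.get("load_ratio", 0.0) > 0.8:
        return "heavy-load"               # near capacity: special mode
    return "normal"

print(plan_travel_mode({"energy": 0.1}))
print(plan_travel_mode({"signal_strength": 0.1, "energy": 0.05}))
```

A production planner would map each mode to concrete speed/acceleration limits per the device parameters described above.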
When the planning control unit is configured in the service system, the service system wirelessly transmits travel or work instructions to the mobile terminal, which executes them; when the planning control unit is configured in the mobile terminal, the service system wirelessly transmits the target's real-time position data, direction data, risk prediction data, or navigation path information to the mobile terminal, which aggregates the real-time position data, direction data, risk prediction data, and navigation path data and then generates and executes travel and work instructions.
In an optional embodiment, sending the specific information to the terminal device includes: sending the direction data and the media resource in the specific information to the terminal device, wherein the terminal device performs at least one of: generating voice navigation information based on the direction data and playing it; and playing the media resource.
In one aspect of the embodiments of the present application, real-time sampling information can be acquired; the target's initial position data is determined from real-time frame image information; a relay tracking task thread for the target is started, and the thread generates the target's real-time position data. Based on the real-time position data, the service system extracts the audio resource associated with it; the audio resource resides in the media resource unit, which is arranged in the service system and/or the playback device, and the extracted associated audio resource is used for the playback device's playback task.
FIG. 4 is a control schematic diagram of a navigation device according to an embodiment of the present application. As shown in FIG. 4: real-time sampling information, which is frame image information collected by multiple cameras, is acquired; the target's initial position data is determined from the real-time frame image information; a relay tracking task thread is started, which can perform relay tracking of the target among the multiple cameras and generates the target's real-time position data; the real-time position data is sent to the navigation device over the wireless network; the navigation device uses it to generate navigation information, which it then displays. This approach can be used to control a navigation device.
FIG. 5 is a control schematic diagram of a playback device according to an embodiment of the present application. As shown in FIG. 5: real-time sampling information, which is frame image information collected by multiple cameras, is acquired; the target's initial position data is determined from the real-time frame image information; a relay tracking task thread is started, which performs relay tracking of the target among the multiple cameras and generates the target's real-time position data; based on the real-time position data, the associated audio resource is extracted, the audio resources being preset in the media resource unit, which is arranged in the service system and/or the playback device; and the extracted associated audio resource is used for the playback device's playback task.
FIG. 6 is a control schematic diagram of a smart portable device according to an embodiment of the present application. As shown in FIG. 6, the camera group can collect frame image information containing recognition-method features and frame image information containing the target. After the service system receives the smart portable device's interactive service request, the identification and positioning unit determines the recognition method, and the smart portable device presents the recognition feature; the identification and positioning unit recognizes the target's feature information, determines the initial coordinate position of the served target, and sends it to the smart portable device to trigger it to start the interactive service mode and display advertising media. In addition, the service system can use the relay tracking unit to generate real-time target position data, and the media resource unit extracts the advertising media associated with the target position data and displays it through the smart portable device.
In this embodiment, real-time sampling information, which is frame image information collected by multiple cameras, can be acquired; the target's initial position data is determined from the real-time frame image information; a relay tracking task thread is started, which performs relay tracking of the target among the multiple cameras and generates the target's real-time position data; based on the real-time position data, the associated advertising media is extracted and sent to the smart portable device over the wireless network for display. In this way advertisement pushing can be achieved.
In an optional embodiment, sending the specific information to the target includes: determining that the target has activated an automatic driving mode or a controlled driving mode; and sending the specific information to the target, wherein, when the target has activated the automatic driving mode, the target generates a travel instruction based on at least one of: the direction data in the specific information, the risk prediction data in the specific information, the navigation path data in the specific information, sensing data, and state data, and operates based on the travel instruction, the sensing data and the state data being data sensed by the target; and, when the target has activated the controlled driving mode, the specific information carries a travel instruction, and the above-mentioned target operates based on it.
FIG. 7 is a control schematic diagram of an automatically traveling vehicle according to an embodiment of the present application. As shown in FIG. 7, the camera group can collect frame image information containing recognition-method features and frame image information collected by cameras in the target's near-field area. After receiving the real-time information request of the automatically traveling vehicle, the service system triggers the identification and positioning unit to confirm the recognition method and then triggers the vehicle to present its recognition feature; the identification and positioning unit recognizes the target's feature information, determines the initial coordinate position of the served target, and triggers the vehicle to start the automatic driving mode; the risk prediction unit then starts live analysis of the target's near-field cameras and generates target risk prediction data, and the vehicle's planning control unit aggregates the risk prediction data to generate and execute travel instructions.
In this embodiment, real-time sampling information, which is frame image information collected by multiple cameras, is acquired; the initial position data of the automatically traveling vehicle is determined from the real-time frame image information; a relay tracking task thread is started, which performs relay tracking of the target among the multiple cameras; the real-time sampling information of the vehicle's near-field cameras is analyzed and, according to predetermined rules, whether an accident risk exists is judged, generating target risk prediction data; the risk prediction data is sent to the vehicle over the wireless network; the risk prediction data is one of the data aggregated by the vehicle to generate travel instructions; and the vehicle executes the travel instructions. In this way an automatically traveling vehicle can be controlled.
FIG. 8 is a control schematic diagram of a controlled vehicle according to an embodiment of the present application. As shown in FIG. 8, the controlled vehicle sends a controlled-service request to the service system; the identification and positioning unit confirms the recognition method and triggers the vehicle to present its recognition feature; the identification and positioning unit recognizes the target's feature information, determines the initial coordinate position of the served target, and triggers the vehicle to start the controlled mode; the relay tracking unit starts a relay tracking thread and generates real-time target positioning data; and the planning control unit aggregates the real-time target position data, generates travel instructions, and triggers the controlled vehicle to execute them.
In this embodiment, real-time sampling information, which is frame image information collected by multiple cameras, is acquired; the controlled vehicle's initial position data is determined from the real-time frame image information; a relay tracking task thread is started, which performs relay tracking of the target among the multiple cameras and generates the target's real-time position data; at least the real-time position data is aggregated to generate travel instructions, which are sent to the controlled vehicle over the wireless network and executed by it. In this way a controlled object can be controlled.
The embodiments of the present application are described below with reference to different scenarios.
Scenario Embodiment 1
FIG. 9(a) is a schematic diagram of a museum exhibition hall according to an embodiment of the present application. As shown in the figure, the museum provides visitors with indoor navigation and guided-tour services. A camera group for relay tracking is deployed in the public areas of the hall, and the service system can perform relay tracking of visitors in the hall and position them in real time based on the camera group's sampling information. Based on the visitor's real-time positioning, the service system sends real-time position data to the visitor's smartphone client over the wireless network, and the phone's navigation client provides indoor navigation based on the received data; based on the visitor's real-time position data, the service system also sends, over the wireless network, guided-tour voice introductions or shop voice advertisements corresponding to the visitor's position, as well as hyperlinks to detailed introductions of the exhibits at that position. Visitors listen to the navigation and tour voice of the smartphone client through earphones.
To realize these services, the visitor tour service system is configured with an identification and positioning unit, a relay tracking unit, a direction feature unit, and a media resource unit.
The identification and positioning unit is configured to identify the visitor and confirm the visitor's initial position together with the visitor's smartphone client; it can recognize the visitor's facial information, smartphone electronic ticket information, or smartphone strobe information, and determine the visitor's initial position data from the position of the recognized feature. In this embodiment the strobe feature is used as the recognition method for determining the visitor's initial position: after receiving the interactive service request from the smartphone client, the identification and positioning unit sends a strobe feature code to the client or receives a strobe feature code preset in the client, and the smartphone displays the feature-code information by flashing its screen or fill light; after receiving the strobe video information containing the feature code collected by the camera group, the identification and positioning unit recognizes the strobe feature information in the video; if the recognized strobe feature information matches the strobe feature code that was sent, the visitor carrying the smartphone is confirmed as the target visitor, and the target visitor's initial coordinate position is determined.
The relay tracking unit is configured to perform real-time relay tracking of the visitor among multiple cameras and generate the visitor's real-time position data. After the identification and positioning unit confirms the visitor's initial position data, the relay tracking unit starts the visitor's relay tracking thread.
The direction feature unit is configured to analyze the cameras' real-time sampling information and determine the real-time direction data of the relay-tracked visitor, providing precise orientation data for the navigation and tour services. The precise orientation data may be the visitor's body direction, head direction, hand-pointing direction, or direction of travel.
The media resource unit is configured with preset media resources such as exhibit-introduction audio, detailed exhibit-introduction text, and advertising videos of some shops; the media resources are associated with fixed position data, and when the visitor's coordinate position matches the coordinates of an exhibit or shop within a predetermined threshold, the media resource unit extracts the media resource and pushes it to the smartphone client. Pushing media resources in this way delivers exhibit-introduction audio or advertising video precisely to the service recipient.
A tour client is installed on the visitor's smartphone; the client may be an application, a mini-program installed under an application platform, or a browser through which a web client is accessed.
FIG. 9(b) is a schematic diagram of a smartphone-based information processing method according to an embodiment of the present application. As shown in the figure, a client APP containing a navigation module and a tour module, which can run simultaneously, is preset in the smartphone. The client APP establishes a data connection with the tour service system over the wireless network, and the tour service system starts the tour service after identifying and confirming the visitor's initial position.
After entering the museum, the visitor opens the client APP in the smartphone and requests interactive service. The service system receives the interactive service request sent by the client APP over the wireless network and the preset strobe feature code sent by the client APP. The client APP cyclically displays the preset strobe feature code by flashing the smartphone's display screen, or cyclically presents it by flashing the camera fill light. Once the screen or fill light starts flashing the feature code, the visitor raises the phone so that the cameras in the hall can capture the smartphone's flashing signal.
After a camera in the hall's camera group captures the flashing signal, the identification and positioning unit recognizes and positions the signal and determines the position data of the visitor holding the flashing smartphone. After the unit confirms the visitor's initial coordinate position, the relay tracking unit starts the visitor's relay tracking thread, generates the visitor's real-time position data, and sends it to the smartphone client APP over the wireless network. The direction feature unit of the tour service system starts the visitor direction-feature recognition task and, from the frame image information containing the visitor collected in real time by the camera group, generates the visitor's real-time direction data and sends it to the client APP over the wireless network. After receiving the real-time position data and real-time direction data generated by the relay tracking unit, the navigation module of the client APP aggregates them and provides the visitor with precise voice navigation broadcasts. Precise navigation broadcasts can contain exact turns, steps, and viewing directions, for example: "please walk forward 10 steps and turn left 90 degrees", "please walk forward 5 steps and turn right 45 degrees", "please turn around 180 degrees and continue for about 20 steps; the men's restroom is on your right", "please enter through the third gate on the right", "please look to your left at this exhibit…", "please look at the third exhibit to the right on your right-hand side…", "please look back again between the exhibit just introduced and this one…".
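The turn-and-step broadcasts quoted above can be derived from the visitor's position, facing direction, and the next waypoint: the relative turn is the difference between the bearing to the waypoint and the current heading, and the step count is the distance divided by an assumed stride length. A sketch (bearing measured clockwise from north along +y; the 0.7 m stride and 10° straight-ahead band are illustrative assumptions):

```python
import math

def voice_step_instruction(pos, heading_deg, target, stride_m=0.7):
    """Turn-and-step phrase from position, heading, and next waypoint."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # clockwise from north
    turn = (bearing - heading_deg + 180) % 360 - 180  # normalize to [-180, 180)
    steps = round(math.hypot(dx, dy) / stride_m)
    if abs(turn) < 10:
        return f"walk straight ahead {steps} steps"
    side = "right" if turn > 0 else "left"
    return f"turn {side} {abs(round(turn))} degrees, then walk {steps} steps"

# Facing north at the origin, waypoint 5 m due east:
print(voice_step_instruction((0, 0), 0, (5, 0)))
```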
After the visitor enters the first exhibition hall, the tour module of the client APP receives and plays the first background-music audio pushed by the media resource unit; after the visitor enters the second hall, it receives and plays the second background-music audio pushed by the media resource unit. The background-music audio is preset in the media resource unit and associated with the position data of each hall; in response to the visitor's real-time position data, the media resource unit extracts the background-music audio associated with that position and pushes it to the smartphone client APP.
After the visitor walks into the first exhibit's booth area, the client APP receives the first exhibit-introduction audio and the tour module plays it; after the visitor walks into the second exhibit's booth area, the client APP receives the second exhibit's detailed graphic-text introduction and hyperlink information and displays them to the visitor on the smartphone screen. The exhibit-introduction audio, detailed graphic-text information, and hyperlink information are preset in the media resource unit, which, in response to the visitor's real-time position data, extracts the associated media resources and pushes them to the smartphone navigation APP.
After the visitor enters the first shop's area, the tour module of the client APP receives the video advertisement for the first shop's best-selling products pushed by the media resource unit and plays it on the smartphone screen; after the visitor enters the second shop's area, it receives the video advertisement for the second shop's promotion pushed by the media resource unit and plays it on the smartphone screen. The video advertisements are preset in the media resource unit, which, in response to the visitor's real-time position data, extracts the corresponding advertisement video and pushes it to the client APP.
The navigation and tour method in the embodiments of the present application can also be applied to public places such as shopping malls, stations, hospitals, and scenic areas, providing users with precise navigation, shopping-guide, tour, and guide services.
FIG. 9(c) is a schematic diagram of an audio-guide-based information processing method according to an embodiment of the present application. As shown in the figure, the museum also provides an audio guide for some visitors. The audio guide can only provide a simple tour service: introduction audio for the hall's exhibits is preset in it, and in response to the received real-time position data, the guide extracts the associated introduction audio and plays it to the visitor. The real-time position data is the visitor's real-time position coordinates generated by the relay tracking unit of the local service system through analysis of the frame images collected by the camera group; the audio guide can receive the target real-time position data, extract the exhibit-introduction audio associated with it, and play it to the visitor, with the exhibit-introduction audio and its position-association data stored in the audio guide.
Scenario Embodiment 2
FIG. 10(a) is a scene schematic diagram of an autonomous vehicle according to an embodiment of the present application. As shown in the figure, urban roads are densely covered with cameras, and a road-condition data service company performs end-to-end relay tracking of vehicles, pedestrians, animals, and abnormal objects on and around the road through the real-time frame image information collected by the road cameras, generating in real time the position data, travel-direction data, and risk prediction data of each corresponding vehicle on the road; the company can push the vehicle's position data, travel-direction data, and risk prediction data to the corresponding vehicle.
Autonomous vehicles are usually fitted with multiple sensors for sensing road conditions, and the vehicle's planning control unit can generate travel instructions from the sensor-acquired road-condition data to drive the car automatically. After entering a road served by the road-condition data service company, the autonomous vehicle can automatically request real-time road-condition data service and receive, over the wireless network, the vehicle position data, travel-direction data, and risk prediction data pushed by the company in real time. By receiving and fusing the pushed road-condition data, the autonomous vehicle gains a bird's-eye view and can receive in real time road data it cannot sense itself, solving the sensing blind-zone problem; with comprehensive and complete real-time road-condition data, traffic accidents can be avoided while safe travel speed and travel efficiency are maximized.
FIG. 10(b) is a control schematic diagram of an autonomous vehicle according to an embodiment of the present application. As shown in the figure, after the autonomous vehicle enters the service area, it connects to the company's cloud server over the mobile wireless network and requests real-time road-condition data service. After the company's service system receives the request, the identification and positioning unit sends a strobe feature code. After receiving it, the autonomous vehicle displays the feature-code information by flashing its headlights.
After the road camera group captures the flashing signal from the car, the identification and positioning unit recognizes and positions the signal and completes, with the autonomous vehicle, the confirmation of the served vehicle and of its initial coordinate position.
After the identification and positioning unit confirms the vehicle's initial coordinate position, the relay tracking unit in the company's service system starts the vehicle's relay tracking thread and, by analyzing the collected frame image information containing the target vehicle, generates the vehicle's position data in real time and sends it to the autonomous vehicle over the wireless network. The direction feature unit starts the vehicle direction-feature recognition task and, by analyzing the collected frame images containing the target vehicle, generates the vehicle's travel-direction data in real time and sends it to the autonomous vehicle over the wireless network. The risk prediction unit generates travel risk prediction data for the target vehicle through live analysis of cameras in the near-field area of its road and sends it to the autonomous vehicle over the wireless network. The navigation path unit generates navigation path data through live analysis of cameras in the far-field and along-route areas of the vehicle's road and sends it to the autonomous vehicle over the wireless network.
After receiving the real-time position data, real-time direction data, real-time risk prediction data, and reference navigation path data pushed by the company's service system, the autonomous vehicle's planning control unit aggregates them with the road-state data acquired by the vehicle's own sensors, generates travel instructions, and drives the car automatically.
FIG. 10(c) is a control schematic diagram of a controlled vehicle according to an embodiment of the present application. As shown in the figure, the road-condition data service company can also offer a travel-instruction service: the planning control unit in the service system aggregates the vehicle's position data, travel-direction data, risk prediction data, and navigation path data to generate travel instructions that can directly drive the controlled car, and pushes them to the corresponding vehicle over the wireless network. The controlled car itself does not need a planning control unit with high computing performance, or need not enable its own planning control unit; it only needs to receive the travel instructions to achieve automatic driving. For example, with this technology a car with automatic parking and adaptive cruise functions can achieve automatic driving with only a minor upgrade, without adding a high-performance planning control unit, lowering the barrier to adoption.
As shown in the figure, after the controlled car enters the company's service area, it connects to the company's cloud server over the mobile wireless network and requests controlled service. After the service system receives the request, the identification and positioning unit sends a strobe feature code, and after receiving it the controlled car displays the feature-code information by flashing its headlights.
After the road camera group captures the flashing signal from the car, the identification and positioning unit recognizes and positions it and completes, with the controlled car, the confirmation of the vehicle and of its initial coordinate position.
After the identification and positioning unit confirms the vehicle's initial coordinate position, the relay tracking unit in the company's service system starts the vehicle's relay tracking thread and generates the vehicle's position data in real time by analyzing the collected frame images containing the target vehicle. The direction feature unit generates the vehicle's travel-direction data in real time by analyzing the same frame images. The risk prediction unit generates travel risk prediction data through live analysis of cameras in the near-field area of the vehicle's road. The navigation path unit generates travel-path reference information through live analysis of cameras in the far-field and along-route areas of the vehicle's road.
The controlled car has multiple built-in sensors and can sense and acquire in real time sensing data including radar signals, image information, altitude information, acceleration information, and GPS positioning information. The controlled car can send its sensing data and car state data to the company's service system over the wireless network in real time. The car state data may be car parameters, energy load values, operating load values, wireless signal strength values, fault state values, and so on. The car parameters may be the car's brand, model, and configuration, or the drive-interface parameters for steering and throttle control; the planning control unit in the service system can generate different drive instructions for the different configurations of different cars.
The company's service system also includes a planning control unit, configured to aggregate the real-time position data, travel-direction data, real-time risk prediction data, navigation path data, and the sensing data and car state data sent by the car, generate travel instructions, and send them to the controlled car over the wireless network; the controlled car executes the travel instructions and the car is driven accordingly.
The controlled car also has a built-in emergency automatic control unit, configured to extract preset emergency travel program instructions and temporarily drive the car when the wireless connection is interrupted and travel instructions cannot be received.
Scenario Embodiment 3
FIG. 11(a) is a schematic diagram of a public activity area of a residential community according to an embodiment of the present application. As shown in the figure, a camera group usable for relay tracking is deployed in the community's public activity area, and the community property company can, based on the relay tracking data, provide services such as drone patrol, robot cleaning, robot luggage handling, and delivery.
FIG. 11(b) is a control schematic diagram of an automatic aircraft according to an embodiment of the present application. As shown in the figure, when a patrol task is to be performed, the property company activates the automatic aircraft, which connects over the wireless network to the service system on the property company's local server, and the aircraft's planning control unit requests real-time environment data service. After the service system receives the aircraft's service request, the identification and positioning unit sends a strobe feature code, and after receiving it the aircraft displays the feature-code information by flashing its signal light.
After the community camera group captures the flashing signal from the aircraft, the identification and positioning unit recognizes and positions the signal and completes, with the automatic aircraft, the confirmation of its initial coordinate position.
After the identification and positioning unit confirms the aircraft's initial coordinate position, the relay tracking unit in the property service system starts the aircraft's relay tracking thread and, by analyzing the collected frame images containing the target aircraft, generates the aircraft's position data in real time and sends it to the automatic aircraft over the wireless network. The direction feature unit generates the aircraft's flight-direction data in real time by analyzing the same frame images and sends it over the wireless network. The risk prediction unit generates flight risk prediction data through live analysis of cameras in the near-field area of the aircraft's flight route and sends it over the wireless network. The navigation path unit generates navigation path data through live analysis of cameras in the far-field and along-route areas of the flight route and sends it over the wireless network.
After receiving the real-time position data, real-time direction data, real-time risk prediction data, and navigation path data pushed by the property service system, the automatic aircraft's planning control unit fuses them with the environment-state data acquired by its own sensors, generates flight instructions, and flies automatically. In addition, by receiving the complete live environment data provided by the community cameras, the aircraft can effectively avoid obstacles, plan its route and speed rationally, and complete its predetermined tasks quickly and effectively.
It should be noted that the property company's local service system may also preset a planning control unit to generate the aircraft's flight instructions and drive the aircraft directly, lowering the aircraft's smart-hardware integration requirements and saving purchase cost.
FIG. 11(c) is a control schematic diagram of a controlled robot according to an embodiment of the present application. As shown in the figure, the controlled robot in the community connects to the property's local server over the wireless network and requests controlled service. The property service system receives the robot's service request and the robot's preset number information, and the robot twists its body to present, in different directions, the number and graphic-code information printed on its body. The camera group collects frame images containing the robot's number, and the identification and positioning unit recognizes the body number or graphic-code information and completes, with the controlled robot, the confirmation of its initial coordinate position.
After the identification and positioning unit confirms the controlled robot's initial coordinate position, the relay tracking unit starts the robot's relay tracking thread and generates the target robot's position data in real time by analyzing the collected frame images containing the robot. The direction feature unit generates the robot's real-time direction data by analyzing the same frame images. The risk prediction unit generates travel risk prediction data through live analysis of cameras in the robot's near-field area. The navigation path unit generates travel-path reference data through live analysis of cameras in the far-field and along-route areas of the robot's route.
The property company's service system also includes a planning control unit, configured to aggregate the real-time position data, real-time direction data, risk prediction data, navigation path data, and the sensing data and robot state data sent by the controlled robot, generate travel and work instructions, and send them to the robot over the wireless network; the robot executes them, traveling and completing its work tasks.
The controlled robot may have no planning control unit and no operating system, and may be a "brainless" execution machine, like a thin client in a computer network: the terminal does not need operating-system hardware or software and can work independently with only a basic functional configuration. Compared with conventional robots that carry their own computation, the brainless controlled robot of the present application is simpler to design and manufacture and easier to manage, maintain, and upgrade, and can be applied in scenarios such as reception and shopping guidance, handling and delivery, cleaning, security patrol, assembly production, and harvesting.
Scenario Embodiment 4
FIG. 12(a) is a schematic diagram of an operating scene of a controlled traveling manipulator according to an embodiment of the present application. As shown in the figure, a construction company uses controlled traveling manipulators for building construction. To improve the manipulators' positioning accuracy, the company installed, on the construction site, columns of chained image-acquisition devices with densely arranged cameras; a chained image-acquisition device is characterized by multiple cameras distributed in a chain on the same data transmission bus. After the device is installed, initialization modeling is performed in advance to build the mapping relationships of the cameras in three-dimensional space and determine each camera's coordinate position and framing angle on the site. After initialization modeling, the service system can compute the accurate coordinate position of objects in the framed area from the frame image information acquired by the multiple cameras, achieving accurate positioning and relay tracking of targets.
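For a single fixed camera viewing a flat work area, the initialization modeling above can be approximated by fitting a homography from four known pixel-to-floor correspondences; afterwards any pixel maps to floor coordinates. The sketch below is self-contained, with a small Gaussian-elimination solver; the correspondences are invented, not calibration data from this application:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(pixels, floor):
    """Fit h0..h7 (h8 fixed to 1) from four point correspondences."""
    A, b = [], []
    for (u, v), (x, y) in zip(pixels, floor):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    return solve_linear(A, b)

def map_pixel(h, u, v):
    """Project a pixel through the fitted homography to floor coordinates."""
    den = h[6] * u + h[7] * v + 1.0
    return ((h[0] * u + h[1] * v + h[2]) / den,
            (h[3] * u + h[4] * v + h[5]) / den)

# Invented calibration: a 100x100-pixel patch covering a 5x5 m floor square.
h = fit_homography([(0, 0), (100, 0), (100, 100), (0, 100)],
                   [(0, 0), (5, 0), (5, 5), (0, 5)])
print(map_pixel(h, 50, 50))
```

A multi-camera chain would fit one such mapping per camera and fuse the per-camera estimates, as in the triangulation sketch earlier.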
Fig. 12(b) is a schematic control diagram of the controlled traveling manipulator according to an embodiment of the present application. As shown in the figure, the controlled traveling manipulator on the construction site connects to the site's local server over the wireless network and requests the controlled service. After the construction-site service system receives the service request, it sends a feature-action instruction to the traveling manipulator; the manipulator executes the instruction and swings its arm. After the camera group captures video containing the arm-swinging action, the identification and positioning unit identifies the manipulator's action feature information and confirms the initial coordinate position with the controlled traveling manipulator.
After the identification and positioning unit confirms the traveling manipulator's initial coordinate position, the relay tracking unit in the service system starts a relay tracking thread for it and generates its position data in real time by analyzing the captured frame images containing the target manipulator. The direction feature unit generates the manipulator's real-time direction data by analyzing the captured frame images containing it. The planning control unit in the service system aggregates the real-time position and direction data, generates driving instructions, and sends them to the controlled traveling manipulator over the wireless network; the manipulator executes the driving instructions, which drive it to travel.
When it is determined that the controlled traveling manipulator has traveled to the predetermined work position, the support claws on its chassis are driven to lower and press against the ground, ensuring the stability and reliability of the manipulator in its working state.
When the controlled traveling manipulator travels to the predetermined work position, the identification and positioning unit analyzes the frame images containing the manipulator captured by the camera group and precisely verifies the manipulator's position in the coordinate system. If the manipulator's coordinate position does not match the predetermined work position, the planning control unit generates a position-correction driving instruction from the coordinate deviation and drives the manipulator to the predetermined work position. If it matches, the service system starts the preset work instruction, and the manipulator executes it to complete the task. The preset work instruction may reside in a work instruction unit stored on the service system side or on the controlled manipulator side. For example, the work instruction may be a program file written in a high-level programming language, stored in the service system and ready to be invoked and transmitted at any time; a dynamic instruction generated from real-time data by a neural network unit in the service system; or a static sequential instruction preset in a PLC (programmable logic controller) on the manipulator side.
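The verify-and-correct decision above reduces to a small control step. The sketch below is a hedged illustration under assumed names and numbers: the tolerance, the 2-D coordinates, and the instruction dictionaries are all placeholders for whatever the real service system uses.

```python
# Illustrative verify-and-correct step: compare the camera-derived
# coordinate with the predetermined work position; outside tolerance,
# emit a correction move, otherwise start the preset work instruction.

TOLERANCE = 0.05  # metres; an assumed acceptable positioning error

def next_action(measured, target, tol=TOLERANCE):
    """Return a position-correction move or the go-ahead to start work."""
    dx = target[0] - measured[0]
    dy = target[1] - measured[1]
    if (dx * dx + dy * dy) ** 0.5 > tol:
        # Deviation too large: generate a position-correction instruction.
        return {"type": "correct", "move": (dx, dy)}
    # Within tolerance: trigger the preset work instruction.
    return {"type": "start_work"}

a1 = next_action(measured=(1.00, 2.00), target=(1.30, 2.00))
a2 = next_action(measured=(1.29, 2.01), target=(1.30, 2.00))
print(a1["type"], a2["type"])
```

In practice this step loops: each correction move is followed by a fresh camera fix until the measured position converges into tolerance.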
Current large-scale 3D-printing additive-manufacturing equipment requires large cantilevers or travel rails, and handling, installing, and commissioning such equipment is cumbersome, which hinders large-scale use. The controlled traveling manipulator of the present application can solve the problem of construction 3D-printing equipment being too large and complex: multiple independently operating traveling manipulators cooperate to complete precise manufacturing tasks in large work scenarios, for example cooperating on a construction site to complete material placement, bricklaying, pouring, spraying, painting, screeding, welding, and other construction tasks.
For example, after the construction company's designers complete the three-dimensional model of a building project and output a 3D-printing file, the service system can decompose the three-dimensional model in the file into multiple work modules, each corresponding to a standard manipulator automation program. The service system drives a traveling manipulator to the work position of the first module and starts the automation program once; after the program finishes, it drives the manipulator to the work position of the second module and starts the same program again. By decomposing one large three-dimensional model into multiple small work modules and repeatedly executing the corresponding automation program at different work positions, the entire construction task is completed.
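The move/run loop described above can be sketched as follows. This is a toy 1-D illustration, not a real slicer: `decompose`, the module size, and the `automation_program` callable are assumptions standing in for the actual 3D-print-file decomposition.

```python
# Sketch of splitting one large build into work modules and running the
# same automation program at each module's work position.

def decompose(extent, module_size):
    """Split a 1-D build extent into equal work modules (start positions)."""
    return [i * module_size for i in range(extent // module_size)]

def run_job(extent, module_size, automation_program):
    """Drive to each module's position, then run the automation program."""
    log = []
    for pos in decompose(extent, module_size):
        log.append(f"move_to:{pos}")              # driving instruction
        log.append(automation_program(pos))       # same program, new position
    return log

log = run_job(extent=6, module_size=2,
              automation_program=lambda pos: f"printed[{pos}..{pos + 2}]")
print(log)
```

The same loop trivially parallelizes: with several manipulators, the service system assigns disjoint module lists to each one.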
According to another aspect of the embodiments of the present application, an information processing apparatus is also provided. Fig. 13 is a schematic diagram of the information processing apparatus according to an embodiment of the present application. As shown in Fig. 13, the information processing apparatus may include: a first determination unit 1301, an acquisition unit 1303, a second determination unit 1305, and a sending unit 1307. The apparatus is described below.
The first determination unit 1301 is configured to determine that a target has entered a predetermined area.
The acquisition unit 1303 is configured to, in response to a relay tracking thread for the target, acquire real-time position information of the target generated by the relay tracking thread, where the relay tracking thread is used to perform relay tracking of the target among at least one sampling device in the predetermined area and to generate real-time position data based on frame image information acquired by the at least one sampling device.
The second determination unit 1305 is configured to determine specific information based on the real-time position information.
The sending unit 1307 is configured to send the specific information to a terminal device or the target, where the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
It should be noted here that the first determination unit 1301, the acquisition unit 1303, the second determination unit 1305, and the sending unit 1307 correspond to steps S102 to S108 in the embodiments; the examples and application scenarios implemented by these units and their corresponding steps are the same, but are not limited to the content disclosed in the above embodiments. It should also be noted that these units, as part of the apparatus, may be executed in a computer system such as a set of computer-executable instructions.
As can be seen from the above, in the above embodiments of the present application, the first determination unit may determine that a target has entered a predetermined area; the acquisition unit may then, in response to a relay tracking thread for the target, acquire real-time position information of the target generated by the relay tracking thread, where the relay tracking thread performs relay tracking of the target among at least one sampling device in the predetermined area and generates real-time position data based on frame image information acquired by the at least one sampling device; the second determination unit may determine specific information based on the real-time position information; and the sending unit may send the specific information to a terminal device or the target, where the terminal device or the target generates predetermined information based on the specific information and responds to it. The information processing apparatus provided by the embodiments of the present application achieves the purpose of pushing, based on the target's real-time position data, the information the target needs in its current environment to the target or to a device associated with the target; achieves the technical effect of improving the flexibility of the positioning service system; improves its applicability; and solves the technical problem in the related art that systems providing positioning services to service recipients can only provide positioning, a rather limited function.
In an optional embodiment, the first determination unit includes: a first acquisition module, configured to acquire frame image data captured by the at least one sampling device; a first identification module, configured to perform image recognition on the frame image data to obtain a recognition result; and a first determination module, configured to determine that the target has entered the predetermined area when identification information of the target is present in the recognition result.
In an optional embodiment, the information processing apparatus further includes an identification unit, configured to identify, before determining that the target has entered the predetermined area, identification information used to identify the target from the target; the identification unit includes: a second identification module, configured to identify biological information and/or non-biological information of the target from captured images; and a second determination module, configured to use the biological information and/or non-biological information as the identification information of the target. The non-biological information includes at least one of: the target's outline, the target's color, text on the target, and the target's identification code; the target's biological features include one of: facial features and body-posture features.
In an optional embodiment, the information processing apparatus further includes a third determination unit, configured to determine initial position information of the target; the third determination unit includes at least one of: a second acquisition module, configured to acquire sampling information and generate the target's initial position information based on it, where the sampling information is acquired from the at least one sampling device and the at least one sampling device is triggered by a predetermined condition to perform a shooting task; and a third determination module, configured to acquire predetermined terminal information of the terminal device and determine the target's initial position information based on it.
In an optional embodiment, the specific information includes at least one of: direction data of the target at the position corresponding to the real-time position information, risk prediction data of the target in the predetermined area, media resources, navigation path data, and driving instructions, where the media resources are associated with predetermined position data.
In an optional embodiment, the sending unit includes a sending module, configured to send the direction data and media resources in the specific information to the terminal device, where the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing it; playing the media resources.
In an optional embodiment, the sending unit includes: a fourth determination module, configured to determine that the target has started an automatic driving mode or a controlled driving mode; and a sending module, configured to send the specific information to the target. When the target starts the automatic driving mode, the target generates driving instructions based on at least one of: the direction data in the specific information, the risk prediction data in the specific information, the navigation path data in the specific information, sensing data, and state data, and operates based on the driving instructions, the sensing data and state data being data sensed by the target. When the target starts the controlled driving mode, the specific information carries driving instructions, and the target operates based on them.
In an optional embodiment, the at least one sampling device is at least one of: a camera and a radar; the at least one sampling device has a fixed position and shooting angle.
According to another aspect of the embodiments of the present application, a server applied to any of the information processing methods above is also provided. Fig. 14 is a schematic diagram of the server according to an embodiment of the present application. As shown in Fig. 14, the server includes: an identification and positioning unit 1401, a relay tracking unit 1403, a direction feature unit 1405, a risk prediction unit 1407, a position navigation unit 1409, and a media association unit 1411. The server is described below.
The identification and positioning unit 1401 is configured to identify and determine initial position data of the target, and generate initial position information of the target based on the initial position data.
The relay tracking unit 1403 is configured to perform relay tracking of the target among at least one sampling device in the predetermined area, and generate real-time position information of the target.
The direction feature unit 1405 is configured to generate direction information of the target based on the real-time position information.
The risk prediction unit 1407 is configured to determine, by a predetermined rule and based on the real-time position information, whether the target is at risk, and to generate risk prediction data of the target in the predetermined area.
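One possible shape for such a predetermined rule is a simple projection-and-proximity check. The sketch below is an assumed example, not the patent's rule: the safety radius, the constant-velocity projection, and the obstacle list all stand in for whatever the real unit derives from near-field camera analysis.

```python
# Illustrative predetermined rule for risk prediction: project the
# target's position forward under constant velocity and flag a risk if
# the path comes within a safety radius of any observed obstacle.

SAFETY_RADIUS = 2.0  # metres; an assumed threshold

def predict_risk(position, velocity, obstacles, horizon=3):
    """Return a risk verdict over the next `horizon` time steps."""
    x, y = position
    vx, vy = velocity
    for t in range(1, horizon + 1):
        px, py = x + vx * t, y + vy * t          # projected position
        for ox, oy in obstacles:
            if ((px - ox) ** 2 + (py - oy) ** 2) ** 0.5 < SAFETY_RADIUS:
                return {"risk": True, "time_step": t}
    return {"risk": False}

r = predict_risk(position=(0.0, 0.0), velocity=(1.0, 0.0),
                 obstacles=[(3.0, 0.5)])
print(r)
```

The output includes the first time step at which the rule fires, which is the kind of lead time that lets the downstream planner slow or stop the target before the predicted conflict.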
The position navigation unit 1409 is configured to generate navigation path data based on the real-time position information.
The media association unit 1411 is configured to retrieve media resources corresponding to the real-time position information of the target.
One or more of the target's direction information, the target's risk prediction data in the predetermined area, the navigation path data, and the media resources are determined to be specific information and sent to the target or a terminal device; the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
As can be seen from the above, the server in the embodiments of the present application can, via the identification and positioning unit, identify and determine the target's initial position data and generate initial position information from it; via the relay tracking unit, perform relay tracking of the target among at least one sampling device in the predetermined area and generate real-time position information; via the direction feature unit, generate the target's direction information based on the real-time position information; via the risk prediction unit, determine by a predetermined rule whether the target is at risk and generate risk prediction data for the target in the predetermined area; via the position navigation unit, generate navigation path data based on the real-time position information; and via the media association unit, retrieve media resources corresponding to the target's real-time position information. One or more of the direction information, risk prediction data, navigation path data, and media resources are determined to be specific information and sent to the target or a terminal device, which generates predetermined information based on the specific information and responds to it. This achieves the purpose of pushing, based on the target's real-time position data, the information the target needs in its current environment to the target or to an associated device; achieves the technical effect of improving the flexibility of the positioning service system; improves its applicability; and solves the technical problem in the related art that systems providing positioning services to service recipients can only provide positioning, a rather limited function.
According to another aspect of the embodiments of the present application, a terminal device applied to any of the information processing methods above is also provided. Fig. 15 is a schematic diagram of the terminal device according to an embodiment of the present application. As shown in Fig. 15, the terminal device may include:
a receiving module 1501, configured to receive the specific information.
Optionally, the specific information may be media resources for the item a visitor in a museum or exhibition hall is currently viewing, direction data sent to a visitor touring the museum or exhibition hall, target navigation data sent to the visitor, or risk prediction data sent to an autonomous vehicle.
A processing module 1503 is configured to generate predetermined information based on the specific information.
Optionally, the predetermined information may be navigation data generated based on the direction data.
An execution module 1505 is configured to respond to the specific information and/or the predetermined information.
In this embodiment, the terminal device may respond to the specific information by playing the media resources; it may also generate navigation data based on the specific information and play the navigation data to provide route guidance for visitors.
According to another aspect of the embodiments of the present application, an information processing system applied to any of the information processing methods above is also provided. Fig. 16 is a schematic diagram of the information processing system according to an embodiment of the present application. As shown in Fig. 16, the information processing system may include:
at least one sampling device 1601, configured to acquire frame image information of the target;
the server 1603 described above, configured to generate real-time position information of the target based on the frame image information, determine specific information based on the real-time position information, and send the specific information to a terminal device or the target, where the terminal device or the target generates predetermined information based on the specific information and responds to it.
Through this information processing system, at least one sampling device acquires frame image information of the target, and the server described above generates the target's real-time position information from it, determines specific information based on the real-time position information, and sends the specific information to a terminal device or the target, which generates predetermined information based on the specific information and responds to it. This achieves the purpose of pushing, based on the target's real-time position data, the information the target needs in its current environment to the target or to an associated device; achieves the technical effect of improving the flexibility of the positioning service system; improves its applicability; and solves the technical problem in the related art that systems providing positioning services to service recipients can only provide positioning, a rather limited function.
In the information processing system of the embodiments of the present application, the identification and positioning unit is configured to identify the target, determine its initial position, and generate initial position data of the target; the relay tracking unit is configured to perform real-time relay tracking of the target among multiple sampling devices and generate real-time position data of the target; the target direction feature unit is configured to analyze real-time sampling information and generate real-time direction data of the relay-tracked target; the risk prediction unit is configured to analyze the real-time sampling information of the moving target's near-field cameras, determine by predetermined rules whether an accident risk exists, and generate target risk prediction data; the position navigation path guidance unit is configured to analyze the real-time sampling information of the moving target's far-field cameras and along-route cameras and generate navigation path data; the media association unit is configured to be preloaded with media resources such as video, audio, images, and text, the media resources being associated with fixed position data; and the planning control unit is configured to aggregate, in real time, the target's real-time position data, target direction data, target risk prediction data, target navigation path data, mobile terminal state data, and mobile terminal sensing data to generate driving instructions or work instructions.
An embodiment of the present application also provides a human-machine interaction device, which includes a wireless receiving unit configured to receive, over a wireless network, target position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or media resources sent by the information processing system.
A user interaction unit is configured to receive and aggregate the target position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or media resources, and to generate interaction information for presentation to the user; an interaction output unit is configured to display and/or play the interaction information to the user.
An embodiment of the present application also provides an automatic driving device, which includes: a wireless receiving unit, configured to receive, over a wireless network, target position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data sent by the information processing system; a planning control unit, configured to aggregate, in real time, the target's real-time position data, and/or target direction data, and/or target risk prediction data, and/or target navigation path data, and/or mobile terminal state data, and/or mobile terminal sensing data, and generate driving instructions or work instructions; and an instruction execution unit, configured to execute the driving instructions or work instructions.
An embodiment of the present application also provides a controlled driving device, which includes: a wireless receiving unit, configured to receive, over a wireless network, driving instructions and/or work instructions sent by the information processing system; and an instruction execution unit, configured to execute the driving instructions and/or work instructions.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is also provided. The computer-readable storage medium includes a stored computer program, where, when the computer program is run by a processor, the device on which the computer storage medium resides is controlled to execute any of the information processing methods above.
According to another aspect of the embodiments of the present application, a processor is also provided. The processor is configured to run a computer program, where the computer program, when run, executes any of the information processing methods above.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units may be a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of a given embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above are only preferred implementations of the present application. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present application, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present application.

Claims (14)

  1. An information processing method, comprising:
    determining that a target has entered a predetermined area;
    in response to a relay tracking thread for the target, acquiring real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to perform relay tracking of the target among at least one sampling device in the predetermined area and to generate the real-time position data based on frame image information acquired by the at least one sampling device;
    determining specific information based on the real-time position information;
    sending the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  2. The method according to claim 1, wherein determining that a target has entered a predetermined area comprises:
    acquiring frame image data captured by the at least one sampling device;
    performing image recognition on the frame image data to obtain a recognition result;
    determining that the target has entered the predetermined area when identification information of the target is present in the recognition result.
  3. The method according to claim 2, further comprising, before determining that a target has entered a predetermined area: identifying, from the target, identification information used to identify the target;
    wherein identifying, from the target, the identification information used to identify the target comprises:
    identifying biological information and/or non-biological information of the target from captured images;
    using the biological information and/or non-biological information as the identification information used to identify the target;
    wherein the non-biological information comprises at least one of: an outline of the target, a color of the target, text on the target, and an identification code of the target; and biological features of the target comprise one of: facial features and body-posture features.
  4. The method according to claim 1, further comprising: determining initial position information of the target;
    wherein determining the initial position information of the target comprises at least one of:
    acquiring sampling information, and generating the initial position information of the target based on the sampling information, wherein the sampling information is acquired from the at least one sampling device, and the at least one sampling device is triggered by a predetermined condition to perform a shooting task;
    acquiring predetermined terminal information of a terminal device, and determining the initial position information of the target based on the predetermined terminal information.
  5. The method according to claim 1, wherein the specific information comprises at least one of: direction data of the target at the position corresponding to the real-time position information, risk prediction data of the target in the predetermined area, media resources, navigation path data, and driving instructions, wherein the media resources are associated with predetermined position data.
  6. The method according to claim 5, wherein sending the specific information to the terminal device comprises:
    sending the direction data and the media resources in the specific information to the terminal device, wherein the terminal device performs at least one of the following operations: generating voice navigation information based on the direction data and playing the voice navigation information; playing the media resources.
  7. The method according to claim 5, wherein sending the specific information to the target comprises:
    determining that the target has started an automatic driving mode or a controlled driving mode;
    sending the specific information to the target, wherein, when the target starts the automatic driving mode, the target generates driving instructions based on at least one of: the direction data in the specific information, the risk prediction data in the specific information, the navigation path data in the specific information, sensing data, and state data, and operates based on the driving instructions, the sensing data and the state data being data sensed by the target; and, when the target starts the controlled driving mode, the specific information carries driving instructions, and the target operates based on the driving instructions.
  8. The method according to any one of claims 1 to 7, wherein the at least one sampling device is at least one of: a camera and a radar; and the at least one sampling device has a fixed position and a fixed shooting angle.
  9. An information processing apparatus, comprising:
    a first determination unit, configured to determine that a target has entered a predetermined area;
    an acquisition unit, configured to, in response to a relay tracking thread for the target, acquire real-time position information of the target generated by the relay tracking thread, wherein the relay tracking thread is used to perform relay tracking of the target among at least one sampling device in the predetermined area and to generate the real-time position data based on frame image information acquired by the at least one sampling device;
    a second determination unit, configured to determine specific information based on the real-time position information;
    a sending unit, configured to send the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  10. A server, applied to the information processing method according to any one of claims 1 to 8, comprising:
    an identification and positioning unit, configured to identify and determine initial position data of a target, and generate initial position information of the target based on the initial position data;
    a relay tracking unit, configured to perform relay tracking of the target among at least one sampling device in a predetermined area, and generate real-time position information of the target;
    a direction feature unit, configured to generate direction information of the target based on the real-time position information;
    a risk prediction unit, configured to determine, by a predetermined rule and based on the real-time position information, whether the target is at risk, and generate risk prediction data of the target in the predetermined area;
    a position navigation unit, configured to generate navigation path data based on the real-time position information;
    a media association unit, configured to retrieve media resources corresponding to the real-time position information of the target;
    wherein one or more of the direction information of the target, the risk prediction data of the target in the predetermined area, the navigation path data, and the media resources are determined to be specific information and sent to the target or a terminal device, and the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  11. A terminal device, applied to the information processing method according to any one of claims 1 to 8, comprising:
    a receiving module, configured to receive specific information;
    a processing unit, configured to generate predetermined information based on the specific information;
    an execution unit, configured to respond to the specific information and/or the predetermined information.
  12. An information processing system, applied to the information processing method according to any one of claims 1 to 8, comprising:
    at least one sampling device, configured to acquire frame image information of a target;
    the server according to claim 10, configured to generate real-time position information of the target based on the frame image information, determine specific information based on the real-time position information, and send the specific information to a terminal device or the target, wherein the terminal device or the target generates predetermined information based on the specific information and responds to the predetermined information.
  13. A computer-readable storage medium, the computer-readable storage medium comprising a stored computer program, wherein, when the computer program is run by a processor, the device on which the computer storage medium resides is controlled to execute the information processing method according to any one of claims 1 to 8.
  14. A processor, the processor being configured to run a computer program, wherein the computer program, when run, executes the information processing method according to any one of claims 1 to 8.
PCT/CN2021/138526 2020-12-29 2021-12-15 Information processing method and apparatus, and information processing system WO2022143181A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011602856.8 2020-12-29
CN202011602856.8A CN114693727A (zh) 2020-12-29 Information processing method and apparatus, and information processing system

Publications (1)

Publication Number Publication Date
WO2022143181A1 true WO2022143181A1 (zh) 2022-07-07

Family

ID=82133140

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138526 WO2022143181A1 (zh) 2021-12-15 Information processing method and apparatus, and information processing system

Country Status (2)

Country Link
CN (1) CN114693727A (zh)
WO (1) WO2022143181A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115407803A (zh) * 2022-10-31 2022-11-29 北京闪马智建科技有限公司 一种基于无人机的目标监控方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749170A (zh) * 2017-12-07 2018-03-02 东莞职业技术学院 一种车辆跟踪装置及方法
CN109724610A (zh) * 2018-12-29 2019-05-07 河北德冠隆电子科技有限公司 一种全信息实景导航的方法及装置
CN111351492A (zh) * 2018-12-20 2020-06-30 赫尔环球有限公司 用于自动驾驶车辆导航的方法和***
CN111836009A (zh) * 2020-06-18 2020-10-27 浙江大华技术股份有限公司 多个相机进行目标跟踪的方法、电子设备及存储介质


Also Published As

Publication number Publication date
CN114693727A (zh) 2022-07-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913922

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.11.2023)