CN217787820U - Vehicle control apparatus - Google Patents


Info

Publication number
CN217787820U
Authority
CN
China
Prior art keywords
vehicle
audio
target
target vehicle
image
Prior art date
Legal status
Active
Application number
CN202123173888.3U
Other languages
Chinese (zh)
Inventor
万鹏
吴银生
Current Assignee
Beijing Qisheng Technology Co Ltd
Original Assignee
Beijing Qisheng Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qisheng Technology Co Ltd
Priority claimed from CN202123173888.3U
Application granted
Publication of CN217787820U
Legal status: Active

Landscapes

  • Circuit For Audible Band Transducer (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The embodiments of this specification provide a vehicle control apparatus that comprises a processing circuit and an audio acquisition device, the processing circuit being in signal connection with the audio acquisition device, and the audio acquisition device comprising a plurality of microphones. The processing circuit is configured to: obtain a plurality of electrical signals generated by the audio acquisition device when a target car-return audio is received, and determine, based on the plurality of electrical signals, a first position of the target vehicle emitting the target car-return audio, the first position comprising at least one of a direction or a distance of the target vehicle relative to the audio acquisition device.

Description

Vehicle control apparatus
Technical Field
The present description relates to the technical field of vehicle management, and in particular, to a vehicle control device based on sound source localization.
Background
With the rapid development of the internet, shared vehicles (e.g., shared bicycles) have become widely available as a new rental model. As large numbers of shared bicycles and electric bicycles are deployed, disorderly deployment causes large numbers of vehicles to pile up haphazardly, which detracts from the appearance of the city. It is therefore desirable to provide suitable equipment that standardizes parking, confirms that a vehicle is accurately parked in a designated area, and enables shared bicycles and electric bicycles to serve users in a more orderly and efficient manner.
Disclosure of Invention
One aspect of the present specification provides a vehicle control apparatus. The vehicle control apparatus may include a processing circuit and an audio acquisition device, with the processing circuit in signal connection with the audio acquisition device. The audio acquisition device may include a plurality of microphones. The processing circuit may be configured to obtain a plurality of electrical signals generated by the audio acquisition device when a target car-return audio is received, and to determine, based on the plurality of electrical signals, a first position of the target vehicle emitting the target car-return audio, the first position including at least one of a direction or a distance of the target vehicle relative to the audio acquisition device.
Drawings
The present description will be further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals refer to like structures, wherein:
FIG. 1 is a schematic illustration of an application scenario of an exemplary vehicle control system according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an application scenario of an exemplary vehicle control apparatus, according to some embodiments of the present application;
FIG. 3 is an exemplary block diagram of a vehicle control system according to some embodiments of the present application;
FIG. 4 is an exemplary flow chart of a vehicle control method according to some embodiments of the present application;
FIG. 5 is an exemplary flow chart of a vehicle control method according to some embodiments of the present application;
FIG. 6 is an exemplary block diagram of a vehicle control apparatus according to some embodiments of the present application;
FIG. 7 is a schematic diagram of an exemplary decision circuit shown in accordance with some embodiments of the present application;
FIG. 8 is an exemplary flow chart of a vehicle control method according to some embodiments of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used in this specification is a method for distinguishing different components, elements, parts or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; those steps and elements do not form an exclusive list, and a method or apparatus may also contain other steps or elements.
Flowcharts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations need not be performed in the exact order shown; the steps may instead be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps of operations may be removed from them.
The embodiments of the present application can be applied to different shared transportation service systems, for example, human-powered vehicles (e.g., bicycles), electric bicycles, automobiles (e.g., small cars, buses, large transport vehicles), unmanned vehicles, and the like. The application scenarios of the different embodiments of the present application include, but are not limited to, one or a combination of the transportation industry, warehouse logistics, agricultural operation systems, urban public transportation systems, commercially operated shared vehicles, and the like. It should be understood that these application scenarios of the system and method of the present application are merely examples; based on them, a person skilled in the art can apply the present application to other similar scenarios, such as other similar tracked vehicles, without inventive effort.
FIG. 1 is a schematic diagram of an application scenario of an exemplary vehicle control system according to some embodiments of the present application.
As shown in fig. 1, the vehicle control system 100 may include a server 110, a network 120, a terminal device 130, a storage device 140, a vehicle 150, and a vehicle control device 160.
In some embodiments, the server 110 may be used to process information and/or data related to vehicle control. The server 110 may be a computer server. In some embodiments, the server 110 may be a single server or a group of servers. The server group may be a centralized server group connected to the network 120 via an access point, or a distributed server group respectively connected to the network 120 via one or more access points. In some embodiments, server 110 may be connected locally to network 120 or remotely from network 120. For example, server 110 may access information and/or data stored in terminal device 130 and/or storage device 140 via network 120. As another example, storage device 140 may serve as back-end storage for server 110. In some embodiments, the server 110 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an intermediate cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the server 110 may include a processing device 112. Processing device 112 may process information and/or data related to performing one or more of the functions described in the present application. For example, the processing device 112 may obtain the target return audio and determine the location of the target vehicle from the return audio. The processing device 112 may determine whether the target vehicle is located within the parking area based on its location. As another example, the processing device 112 may acquire an image of a parking area. The processing device 112 may determine the target vehicle in the image according to the position of the target vehicle, and determine whether the pose of the target vehicle meets the parking requirement according to the feature of the target vehicle in the image. In some embodiments, the processing device 112 may include one or more processing units (e.g., single core processing engines or multiple core processing engines). By way of example only, the processing device 112 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an application specific instruction set processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a micro-controller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
Network 120 may facilitate the exchange of information and/or data. In some embodiments, one or more components in the vehicle control system 100 (e.g., the server 110, the terminal device 130, the storage device 140, the vehicle 150, the vehicle control device 160) may send information and/or data to other components in the vehicle control system 100 over the network 120. For example, the server 110 may obtain the target return audio from the vehicle control device 160 via the network 120. In some embodiments, the network 120 may be any type of wired or wireless network, or a combination thereof. By way of example only, network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or internet exchange points 120-1, 120-2, etc. One or more components of the vehicle control system 100 may connect to the network 120 through network access points to exchange data and/or information.
The terminal device 130 may enable user interaction with the vehicle control system 100. For example, the user may send a parking request through the terminal device 130. Vehicle 150 may broadcast a return audio based on the parking request. The server 110 may control the vehicle control device 160 to turn on based on the parking request. In some embodiments, the terminal device 130 may also receive alert information (e.g., alert tones, alert animations, etc.) transmitted by the server 110. In some embodiments, the terminal device 130 may include a mobile device 130-1, a tablet 130-2, a laptop 130-3, an automotive built-in device 130-4, or the like, or any combination thereof. In some embodiments, the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include smart lighting devices, control devices for smart electrical devices, smart monitoring devices, smart televisions, smart cameras, intercoms, and the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, a smart garment, a smart backpack, a smart accessory, and the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, and the like, or any combination thereof. In some embodiments, the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality eyeshade, an augmented reality helmet, augmented reality glasses, an augmented reality eyeshade, and the like, or any combination thereof.
For example, the virtual reality device and/or augmented reality device may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, and the like. In some embodiments, terminal device 130 may include a location-enabled device to determine the location of the user and/or terminal device 130.
Storage device 140 may store data and/or instructions. In some embodiments, storage device 140 may store data and/or instructions that server 110 may execute to provide the methods or steps described herein. In some embodiments, storage device 140 may store data associated with vehicle 150, such as location information, log information, and the like associated with vehicle 150. As another example, the storage device 140 may store return audio broadcast by the vehicle 150. In some embodiments, one or more components in the vehicle control system 100 may access data or instructions stored in the storage device 140 via the network 120. In some embodiments, storage device 140 may be connected directly to server 110 as back-end storage. In some embodiments, storage device 140 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like. Exemplary removable memories may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and so forth. Exemplary volatile read-write memory may include Random Access Memory (RAM). Exemplary RAM may include Dynamic RAM (DRAM), Double Data Rate Synchronous Dynamic RAM (DDR SDRAM), Static RAM (SRAM), Thyristor RAM (T-RAM), Zero-Capacitor RAM (Z-RAM), and the like. Exemplary ROMs may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), Compact Disk ROM (CD-ROM), Digital Versatile Disk ROM (DVD-ROM), and the like. In some embodiments, the storage device 140 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an intermediate cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the vehicle 150 may include a bicycle, electric bicycle, tricycle, minicar, van, truck, or the like. In some embodiments, the vehicle 150 may include a private car, a taxi, and the like. In some embodiments, the vehicle 150 may include a manned vehicle and/or an unmanned autonomous vehicle, and the like; the description does not limit the type of the vehicle 150. In some embodiments, vehicle 150 may include a positioning device. In some embodiments, the positioning device may use the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the BeiDou Navigation Satellite System (also known as COMPASS), the Galileo positioning system, the Quasi-Zenith Satellite System (QZSS), or the like.
The vehicle control device 160 may be used to monitor and/or control the vehicle. The vehicle control apparatus 160 may include at least an audio acquisition device for acquiring audio data. The audio acquisition device may include a microphone array. The microphone array can collect sound signals and can also perform noise reduction. In some embodiments, the microphone array may include 2 microphones, 6 microphones, 12 microphones, 18 microphones, or more. In some embodiments, the vehicle control apparatus 160 may also include a mounting bracket, an image capture device, and the like. The image capture device may be used to capture images and/or video of the parking area. In some embodiments, the vehicle control device 160 may be mounted near a parking area. For example, a mounting bracket may be provided near a parking area (e.g., within 0.5 meters). An audio capture device and/or an image capture device may be mounted on the mounting bracket. In some embodiments, the vehicle control device 160 may transmit the acquired data (e.g., audio data, image data, etc.) to one or more components of the vehicle control system 100 (e.g., the server 110, the terminal device 130, and/or the storage device 140) via the network 120. In some embodiments, vehicle control device 160 may include a processor and/or memory. Further description of the vehicle control apparatus is provided with reference to fig. 2 and is not repeated here.
It should be noted that the above description is for convenience only and should not be taken as limiting the scope of the present application. It will be understood by those skilled in the art that, having the benefit of this disclosure, numerous modifications and variations in form and detail can be made without departing from the principles of the system and the application of the method and system described above.
Fig. 2 is a schematic diagram of an application scenario of a vehicle control device according to some embodiments of the present application.
As shown in fig. 2, the vehicle control apparatus 200 may include a mounting bracket 210, an audio capture device 220, and an image capture device 230. In some embodiments, the audio capture device 220 and/or the image capture device 230 may be in communication with the server 110. For example, the server 110 (e.g., the processing device 112) may obtain the captured audio information from the audio capture device 220 and the captured image information from the image capture device 230. For another example, the server 110 may send a control instruction to turn on the audio capture device 220 and/or the image capture device 230.
In some embodiments, the vehicle control apparatus 200 may include a processor and/or a memory (not shown). One or more steps related to the embodiments of the present application may be executed by a processor in the vehicle control apparatus 200. The processor may implement one or more steps involved in the embodiments of the present application by executing instructions stored in a memory in the vehicle control apparatus 200. In some embodiments, the processor may process data collected by the audio collection device 220 and the image collection device 230. For example, the processor may obtain the target return audio collected by the audio collection device 220 and determine the first position of the target vehicle based on the target audio data. The processor may determine whether the target vehicle is located within the parking area based on the first position of the target vehicle. As another example, the processor may also acquire a first image of the parking area 240 acquired by the image acquisition device 230. The processor may determine the target vehicle in the first image based on the first position of the target vehicle. The processor may determine whether the target vehicle is located within the parking area based on the characteristics of the target vehicle. Further, the processor can also determine whether the parking posture of the target vehicle meets the standard according to the characteristics of the target vehicle. In some embodiments, the processor may send a lock or inhibit-lock command to the target vehicle (e.g., vehicle 252 or 254) based on the determination. In some embodiments, the processor may send the determination to the server 110, and the server 110 may then send a lock or inhibit-lock command to the target vehicle.
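The processing pipeline described above can be sketched as a short routine; all four stage callables (`locate`, `in_area`, `capture_image`, `posture_ok`) are hypothetical stand-ins for the localization, containment, image-capture, and posture checks, not an actual implementation:

```python
def handle_return_request(audio_frames, locate, in_area, capture_image, posture_ok):
    """Decide whether to lock, following the stages described above.

    locate(audio_frames) -> (x, y) ground position from the microphone signals
    in_area(position)    -> bool, position lies inside the parking area
    capture_image()      -> image of the parking area (camera powered only here)
    posture_ok(image, position) -> bool, parking posture meets the standard
    Returns "lock" or "inhibit_lock".
    """
    position = locate(audio_frames)       # sound-source localization
    if not in_area(position):             # vehicle outside the parking area
        return "inhibit_lock"
    image = capture_image()               # activate the camera only when needed
    if not posture_ok(image, position):   # posture fails the parking standard
        return "inhibit_lock"
    return "lock"
```

Deferring `capture_image()` until after the containment check mirrors the power-saving behavior in which the image capture device is activated only when needed.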
In some embodiments, the vehicle control apparatus 200 may include a processing circuit. The processing circuit may have a signal connection with the audio capture device 220. The processing circuitry may be used to process a plurality of electrical signals generated when audio data is acquired. In some embodiments, the processing circuitry may also have a signal connection with the image capture device 230 and process electrical signals generated when image data is acquired. For more description of the processing circuit, reference may be made to fig. 6 and fig. 7 and the description thereof, which are not repeated herein.
Mounting bracket 210 may be mounted adjacent parking area 240. For example, mounting bracket 210 may be mounted at the edge of parking area 240. As another example, the mounting bracket 210 may be mounted within a distance range (e.g., 0.2 meters, 0.5 meters, 1 meter) from the edge of the parking area 240. As another example, mounting bracket 210 may be mounted inside parking area 240. In some embodiments, as shown in fig. 2, the mounting bracket 210 may be mounted behind a parking area 240. In this application, the front of the parking area may refer to the direction in which the head of the vehicle faces when the vehicle is parked (e.g., the direction indicated by arrow a in fig. 2).
Parking area 240 may be used to park vehicles. In some embodiments, parking area 240 may be a pre-divided area of any shape in which vehicles (e.g., vehicles 252, 254, etc.) can be placed. For example, the parking area may be rectangular, parallelogram, triangular, circular, etc., or other irregular shape. In some embodiments, parking area 240 may be a pre-marked area. For example, parking area 240 may be a rectangular frame that has been previously painted with yellow paint. As another example, parking area 240 may be an area marked with a laser projector. In this case, a laser projector may be mounted on the mounting bracket 210 to project laser light at a specific area to form a parking area.
In some embodiments, the vehicle control apparatus 200 may include a plurality of mounting brackets 210. The plurality of mounting brackets 210 may be uniformly or non-uniformly disposed around and/or within the parking area. The audio capture device 220 and the image capture device 230 may be mounted on the same mounting bracket or on different mounting brackets. It should be appreciated that the mounting bracket 210 may also be a component separate from the vehicle control apparatus 200.
The audio capture device 220 may be used to capture audio data near the parking area 240. For example, the audio capture device 220 may include a microphone array. The microphone array may capture the sound broadcast by vehicles in the vicinity of parking area 240. In some embodiments, the microphone array may suppress interference from environmental noise (e.g., tire noise or wind noise of a vehicle) by exploiting the phase differences of the picked-up audio through a beamforming technique. In some embodiments, the microphone array may include a plurality of microphones, e.g., 4, 6, 8, 12, 18, etc. In some embodiments, the plurality of microphones may be mounted as a single unit (e.g., a microphone array board) on a mounting bracket. In some embodiments, the plurality of microphones may be mounted on different mounting brackets, respectively, to form a microphone array. For example, for a rectangular parking area, 4 mounting brackets may be mounted at the 4 corners of the rectangular parking area, with one of 4 microphones mounted on each bracket. In some embodiments, the mounting height of each microphone on each mounting bracket may be the same or different.
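As an illustrative sketch only (the specification does not mandate a particular algorithm), the delay-and-sum variant of beamforming aligns each microphone channel by its arrival delay and averages the channels, so the coherent source reinforces while uncorrelated noise partially cancels. A minimal NumPy version with known integer sample delays:

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Delay-and-sum beamformer for a microphone array.

    signals: array of shape (n_mics, n_samples), one row per microphone
    delays_samples: integer arrival delay of each channel relative to the source
    Each channel is advanced by its delay so the source component lines up,
    then the channels are averaged: the source keeps full amplitude while
    independent noise is attenuated by roughly 1/sqrt(n_mics).
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -d)   # undo this channel's arrival delay
    return out / n_mics
```

For fractional delays a real implementation would interpolate or operate in the frequency domain; this integer version only illustrates the principle.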
The image capture device 230 may be configured to capture images and/or video within its field of view. The field of view of the image capture device 230 covers at least the parking area 240. In some embodiments, image capture device 230 may include a wide angle camera, a fisheye camera, a monocular camera, a binocular camera, a dome camera, an infrared camera, a Digital Video Recorder (DVR), etc., or any combination thereof. In some embodiments, the image acquired by the image acquisition device 230 may be a two-dimensional image, a three-dimensional image, a four-dimensional image, or the like.
In some embodiments, the image capture device 230 may be activated in response to the audio capture device 220 receiving the targeted return audio. In some embodiments, the image capture device 230 may be activated in response to determining that the parking location of the vehicle is within the parking area. In some embodiments, after the vehicle (e.g., vehicle 252) is successfully locked, the processing device 112 may control the image capture device 230 to turn off to reduce power consumption of the image capture device 230. In some embodiments, the image capture device 230 may be omitted, i.e., the vehicle control apparatus 200 may not include the image capture device 230.
FIG. 3 is an exemplary block diagram of a vehicle control system according to some embodiments of the present application. In some embodiments, the vehicle control system 300 may be implemented by the server 110 (e.g., the processing device 112). In some embodiments, the vehicle control system 300 may be implemented by the vehicle control device 160 (e.g., a processor in the vehicle control device 160).
As shown in FIG. 3, the vehicle control system 300 may include an audio acquisition module 310, a positioning module 320, an image acquisition module 330, and a determination module 340.
The audio acquisition module 310 may be used to acquire the target car-return audio. For example, the audio acquisition module 310 may acquire the target car-return audio from an audio capture device in the vehicle control apparatus. In some embodiments, the audio acquisition module 310 may identify the target car-return audio from one or more audios using speech recognition techniques. For example, the audio acquisition module 310 may extract an audio feature of each audio and match it against a preset target audio feature; the audio whose features match the preset target audio features is identified as the target car-return audio.
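A simple illustration of such feature matching (a sketch under assumed parameters, not the recognition method the specification prescribes) reduces each clip to a coarse magnitude-spectrum fingerprint and compares fingerprints by cosine similarity:

```python
import numpy as np

def audio_fingerprint(samples, n_bands=64):
    """Coarse spectral fingerprint: pooled, L2-normalized magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    # pool neighbouring FFT bins into n_bands coarse frequency bands
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    bands = np.array([spectrum[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])
    norm = np.linalg.norm(bands)
    return bands / norm if norm > 0 else bands

def matches_target(candidate, target, threshold=0.9):
    """Cosine similarity of fingerprints against an assumed match threshold."""
    return float(audio_fingerprint(candidate) @ audio_fingerprint(target)) >= threshold
```

Magnitude spectra are insensitive to phase, so a re-broadcast of the same tone still matches, while audio at other frequencies does not.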
The positioning module 320 may be configured to determine, based on the target car-return audio, a first position of the target vehicle emitting that audio using sound source localization techniques. In some embodiments, the first position may include the direction and distance of the target vehicle relative to the vehicle control apparatus (e.g., its audio capture device).
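One standard sound source localization building block consistent with this module is time-difference-of-arrival (TDOA) estimation: the delay between two microphones is read off the cross-correlation peak and converted to a far-field bearing. The sampling rate, microphone spacing, and speed of sound below are illustrative assumptions:

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Delay of sig_b relative to sig_a, in samples, via the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

def direction_of_arrival(delay_samples, mic_spacing_m, fs=16000, c=343.0):
    """Far-field bearing in radians (0 = broadside) from a two-microphone delay."""
    s = np.clip(delay_samples * c / (fs * mic_spacing_m), -1.0, 1.0)
    return float(np.arcsin(s))
```

With more than two microphones, the pairwise delays can be combined (e.g., by least squares) to also recover the source distance, yielding both components of the first position.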
The image acquisition module 330 may be used to acquire a first image of a parking area. The first image may include at least a parking area. In some embodiments, the first image may also include the target vehicle and/or other vehicles, and/or the like.
The determination module 340 may be configured to determine, when the first position is within a threshold range, whether the target vehicle is located within the parking area based on the features of the target vehicle in the first image. In some embodiments, whether the target vehicle is within the parking area is determined based at least on the first position of the target vehicle. For example, the determination module 340 may determine that the target vehicle is parked in the parking area when the first position of the target vehicle is located in the parking area. For another example, when the entire region within a certain distance of the first position of the target vehicle lies inside the parking area, the determination module 340 may determine that the target vehicle is parked in the parking area. In some embodiments, the determination module 340 may determine the target vehicle in the first image based on the first position. The determination module 340 may also determine whether the target vehicle is located within the parking area using an image recognition model based on the features of the target vehicle in the first image. In some embodiments, the determination module 340 may determine whether the parking posture of the target vehicle meets the standard using the image recognition model based on the features of the target vehicle in the first image.
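For a rectangular parking area, the containment checks described above can be sketched as a margin-aware bounds test; the coordinate frame, units, and margin value are illustrative assumptions:

```python
def within_parking_area(x, y, area, margin=0.0):
    """True if point (x, y) lies inside the rectangle `area`, shrunk by `margin`.

    area: (x_min, y_min, x_max, y_max) in ground coordinates (e.g., meters).
    margin > 0 requires the position to sit at least `margin` inside every
    edge, approximating the requirement that the vehicle's whole footprint,
    not just its estimated center, lie inside the area.
    """
    x_min, y_min, x_max, y_max = area
    return (x_min + margin <= x <= x_max - margin
            and y_min + margin <= y <= y_max - margin)
```

Irregularly shaped areas would need a general point-in-polygon test instead, but the margin idea carries over unchanged.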
It should be understood that the system and its modules shown in FIG. 3 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the system and its modules is merely for convenience of description and should not limit the present application to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, modules may be combined or sub-systems connected to other modules in any configuration without departing from such teachings. For example, in some embodiments, the audio acquisition module 310 may include two units, e.g., an audio acquisition unit and an image acquisition unit, to acquire the target car-return audio and the first image, respectively. As another example, the vehicle control system 300 may include an audio recognition module to identify the target car-return audio. For another example, the modules may share one storage device, or each module may have its own storage device. Such variations are within the scope of the present application.
FIG. 4 is an exemplary flow chart of a vehicle control method according to some embodiments of the present application.
In some embodiments, flow 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more of the operations illustrated in fig. 4 for monitoring and/or controlling a vehicle may be implemented by a processor in the vehicle control system 100 illustrated in fig. 1 or the vehicle control apparatus 200 illustrated in fig. 2 or the vehicle control system 300 illustrated in fig. 3. For example, the process 400 may be stored in the storage device 140 in the form of instructions and executed and/or invoked by the processing device 112.
In step 410, the processing device 112 may obtain the target return audio. In some embodiments, step 410 may be performed by the audio acquisition module 310 in the system 300.
The target return audio may be any form of sound. For example, the target return audio may be broadcast in verbal form, such as "requesting to return the vehicle". For another example, the target return audio may be broadcast as a simulated tone, such as a "beep-beep" sound. In some embodiments, the target return audio may include the ID (i.e., identification) of the corresponding vehicle, the broadcast time, or the like, or any combination thereof. The processing device 112 may determine the user information corresponding to the vehicle based on the vehicle ID. In some embodiments, the processing device 112 may determine the ID of the vehicle corresponding to the target return audio and/or the corresponding user information based on the broadcast time.
In some embodiments, the target return audio may be a sound emitted by a vehicle (e.g., an audio announcement device on the vehicle 150). The vehicle may broadcast the target return audio in response to a parking request sent by a terminal device (e.g., the terminal device 130). For example, a user may tap a return button in an application (e.g., an APP) installed on a terminal device (e.g., a cell phone) to trigger a parking request. The vehicle may directly receive the parking request and broadcast the target return audio. As another example, the terminal device may first send the parking request to the server 110 (e.g., the processing device 112). The server 110 then transmits the parking request to the vehicle corresponding to the terminal device. The vehicle may broadcast the target return audio upon receiving the parking request sent by the server 110. In some embodiments, the target return audio may be a sound emitted by the user's terminal device. For example, when the server 110 receives a parking request initiated by a user through a terminal device (e.g., a mobile phone), the server 110 may control the user's terminal device to broadcast the target return audio. For convenience of description, this application takes the case where the vehicle broadcasts the target return audio as an example, which does not limit the scope of the present application. In some embodiments, the vehicle emitting the target return audio may also be referred to as the sound source and/or the target vehicle.
In some embodiments, the target return audio may be captured by an audio capture device (e.g., the audio capture device 220 in the vehicle control apparatus 200) and sent to the processing device 112. In some embodiments, the audio capture device may capture the target return audio broadcast by the target vehicle when the target vehicle is within a certain distance range from a parking area (e.g., the parking area 240) and/or the audio capture device. For example, when the distance from the vehicle to the center of the parking area is within 3 meters, the audio capture device may capture the target return audio broadcast by the vehicle. For another example, when the distance from the vehicle to the edge of the parking area is within 1 meter, the audio capture device may capture the target return audio broadcast by the vehicle. In some embodiments, the vehicle (or the user's terminal device) may broadcast the target return audio in real time. The audio capture device may capture the target return audio in real time or periodically and send it to the processing device 112. In some embodiments, the processing device 112 may obtain the target return audio from the audio capture device in real time or periodically. In some embodiments, the audio capture device may capture audio data. The audio data may include one or more audios. The processing device 112 may identify the target return audio from the one or more audios using speech recognition techniques. For example, the processing device 112 may extract audio features of each audio and match the audio features against preset target audio features. The processing device 112 may determine that an audio whose features match the preset target audio features is the target return audio. In some embodiments, the processing device 112 may extract the audio features of each audio using automatic speech recognition (ASR) techniques.
In some embodiments, the audio feature extraction algorithms used in the automatic speech recognition techniques may include Hidden Markov Models (HMMs), convolutional neural network algorithms, deep learning neural network algorithms, and the like, or any combination thereof.
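The feature-matching step described above can be sketched as follows. This is a minimal illustration only, not the patent's implementation: it assumes each captured audio has already been reduced to a fixed-length feature vector (e.g., averaged spectral or MFCC features), and the function names, cosine-similarity criterion, and threshold value are all hypothetical choices.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_target_audio(features, target_feature, threshold=0.9):
    """Return the index of the first audio whose feature vector matches
    the preset target audio feature, or None if no audio matches."""
    for i, f in enumerate(features):
        if cosine_similarity(f, target_feature) >= threshold:
            return i
    return None
```

A real system would extract the features with an ASR or audio front-end; the matching logic, however, stays this simple.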
In step 420, the processing device 112 may determine, based on the targeted return audio, a first location of the targeted vehicle emitting the targeted return audio using sound source localization techniques. In some embodiments, step 420 may be performed by positioning module 320 in system 300.
In some embodiments, the algorithms employed by the sound source localization techniques may include time difference of arrival (TDOA) algorithms, beamforming algorithms, acoustic holography algorithms, high-resolution spectral estimation algorithms, and the like, or any combination thereof. The first position of the target vehicle can be accurately determined through sound source localization, so that whether the target vehicle is located in the parking area can be accurately judged from the first position. For example, the processing device 112 may utilize the position differences between the microphones in the audio capture device and the time differences (or phase differences) at which each microphone receives the target return audio to determine the location from which the target return audio (also referred to as the sound source signal) originated, i.e., the first position of the target vehicle emitting the target return audio.
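In the simplest far-field case with two microphones, the TDOA idea above reduces to one formula: the bearing of the source satisfies sin(θ) = c·Δt / d, where c is the speed of sound, Δt the arrival-time difference, and d the microphone spacing. The sketch below illustrates only this two-microphone case and is not the patent's algorithm; a real array would combine many such pairs.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def tdoa_bearing(delta_t, mic_spacing):
    """Estimate the bearing (degrees) of a far-field sound source from
    the arrival-time difference between two microphones.

    delta_t     -- arrival time at mic 2 minus arrival time at mic 1 (s)
    mic_spacing -- distance between the two microphones (m)

    Returns the angle from the array broadside: 0 means the source is
    directly in front, +/-90 means it lies along the array axis.
    """
    path_diff = SPEED_OF_SOUND * delta_t
    # Clamp for numerical safety; the path difference cannot physically
    # exceed the microphone spacing.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing))
    return math.degrees(math.asin(ratio))
```

With more than two microphones, the per-pair bearings (or the full set of time differences) can be combined to recover both direction and distance, as the patent's positioning circuit does.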
In some embodiments, the first location may include a direction and/or distance of the target vehicle relative to the audio capture device (or vehicle control apparatus). In some embodiments, the first position may be represented in polar coordinates. For example, the first position may be represented as a direction angle and/or a straight-line distance (or ground distance) of the target vehicle relative to the audio capture device. In some embodiments, the first location may be represented in latitude and longitude coordinates. For example, the coordinates of the first position may be 100°54′38″ east longitude, 38°54′26″ north latitude.
In step 430, the processing device 112 may determine whether the target vehicle is located within the parking area based at least on the first position of the target vehicle. In some embodiments, step 430 may be performed by the determination module 340 in the system 300.
In some embodiments, the processing device 112 may determine that the target vehicle is parked in the parking area when the first position of the target vehicle is within the parking area. In some embodiments, when the first position is given as a distance range, the processing device 112 may determine that the target vehicle is parked within the parking area when the entire distance range falls within the parking area.
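The containment check above can be sketched geometrically. As a simplifying assumption (not stated in the patent), the parking area is modeled here as an axis-aligned rectangle in local ground coordinates, and the localization uncertainty is modeled as a margin shrinking the area; all names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ParkingArea:
    """Axis-aligned rectangular parking area in local ground coordinates (m)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x, y, margin=0.0):
        """True if (x, y) lies inside the area shrunk by `margin` on each
        side; the margin models the uncertainty of the localized first
        position, so the whole uncertainty disc must fit in the area."""
        return (self.x_min + margin <= x <= self.x_max - margin and
                self.y_min + margin <= y <= self.y_max - margin)
```

With `margin` set to the localization error radius, this implements the "entire distance range within the parking area" variant.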
In some embodiments, when it is determined that the target vehicle is parked within the parking area, the processing device 112 may send a lock permission instruction to the target vehicle. In some embodiments, the target vehicle may automatically perform a lock-up in response to receiving the allow-lock instruction. In some embodiments, the user may perform a manual vehicle lock after the target vehicle receives the allow vehicle lock command. Specifically, the user can toggle the lock pin of the vehicle lock to lock the vehicle. In some embodiments, after the target vehicle is successfully locked, the target vehicle may feed back a locking success prompt tone.
In some embodiments, when it is determined that the target vehicle is not parked within the parking area, the processing device 112 may send a lock prohibition instruction to the target vehicle to prohibit the vehicle from performing automatic locking or manual locking by the user. In some embodiments, when it is determined that the target vehicle is not parked in the parking area, the processing device 112 may further control the target vehicle to broadcast a first prompt tone and/or send a first prompt message to a user terminal device corresponding to the target vehicle, so as to prompt the user to park the target vehicle in the parking area. In some embodiments, the first prompt tone may be any sound. For example, the first prompt tone may be a voice such as "please park the vehicle in the parking area". For another example, the first prompt tone may be a simulated tone such as a "beep-beep" sound. In some embodiments, the first prompt message may include text, audio, images, animations, and the like, or any combination thereof. For example, the first prompt message may be an audio prompt. For another example, the first prompt message may be an image containing the target vehicle. Further, the image may include a mark indicating that the target vehicle is not located in the parking area. In some embodiments, the first prompt message may include operation guidance information (e.g., move forward by another 20 centimeters). The user can perform a corresponding operation on the target vehicle according to the operation guidance information. In some embodiments, the processing device 112 may determine the relative relationship between the first position of the target vehicle and the parking area, and determine the operation guidance information according to that relative relationship. In some embodiments, the processing device 112 may also prompt the user to park the target vehicle in the parking area by controlling an indicator light of the target vehicle.
For example, the processing device 112 may control an indicator light to emit a particular color of light (e.g., red). As another example, the processing device 112 may control an indicator light to flash. In some embodiments, the processing device 112 may also report user information corresponding to the target vehicle to the server 110.
It should be noted that the above description is merely for convenience and should not be taken as limiting the scope of the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the principles of the system. In some embodiments, the processing device 112 may acquire a first image of a parking area. The processing device 112 may further determine whether the target vehicle is located within the parking area based on the first image. For further description of determining whether the target vehicle is located in the parking area according to the first image of the parking area, reference may be made to fig. 5 and its detailed description, which are not repeated herein.
FIG. 5 is an exemplary flow chart of a vehicle control method according to some embodiments of the present application.
In some embodiments, flow 500 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more of the operations illustrated in fig. 5 for monitoring and/or controlling a vehicle may be implemented by a processor in the vehicle control system 100 illustrated in fig. 1 or the vehicle control apparatus 200 illustrated in fig. 2 or the vehicle control system 300 illustrated in fig. 3. For example, the process 500 may be stored in the storage device 140 in the form of instructions and executed and/or invoked by the processing device 112.
In step 510, the processing device 112 may acquire a first image. In some embodiments, step 510 may be performed by image acquisition module 330 in system 300.
The first image may include at least a parking area. In some embodiments, the first image may also include the target vehicle and/or other vehicles, and/or the like. The processing device 112 may acquire the first image from an image acquisition apparatus, a storage device, or the like. For example, the processing device 112 may directly acquire the first image from an image capture device (e.g., the image capture device 230 in the vehicle control device 200). For another example, the image capturing apparatus may store the captured first image in the storage device 140. The processing device 112 may retrieve the first image from the storage device 140.
In some embodiments, when the target return audio is acquired, the processing device 112 may turn on the image capture device to capture images. In some embodiments, the processing device 112 may also determine whether the first position of the target vehicle is within a threshold range, and turn on the image capture device when it is. The threshold range may include an angular range as well as a distance range of the target vehicle from the parking area and/or the audio capture device. For example, as shown in fig. 2, when the first position of the target vehicle is within 3 meters of the vehicle control apparatus (e.g., the audio capture device) and is located in front of the vehicle control apparatus (e.g., to the left of the dashed line BB'), the processing device 112 may turn on the image capture device. In some embodiments, the processing device 112 may determine whether the first position of the target vehicle is within the parking area, and turn on the image capture device when it is. For more description of the first position of the target vehicle, reference may be made to fig. 4 and its detailed description, which are not repeated herein. In some embodiments, the processing device 112 may update the first image in real time. It should be noted that when the target return audio is not acquired, or when the target vehicle cannot be judged to be in the parking area from its position, the image capture device may remain off, so as to reduce the power consumption of the vehicle control apparatus.
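The camera-wake gating just described can be condensed into one predicate. This is an illustrative sketch only: the 3-meter distance comes from the example in the text, while the 90-degree front-facing bearing limit and the function name are assumptions.

```python
def should_wake_camera(distance_m, bearing_deg,
                       max_distance_m=3.0, max_bearing_deg=90.0):
    """Decide whether to power on the image capture device.

    The camera is woken only when the audio-localized source is both
    close enough and in front of the control apparatus, which keeps the
    camera off (and power consumption low) for distant or unrelated
    sounds.
    """
    return distance_m <= max_distance_m and abs(bearing_deg) <= max_bearing_deg
```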
In step 520, the processing device 112 may determine the target vehicle in the first image based on the first location.
In some embodiments, step 520 may be performed by the determination module 340 in the system 300.
In some embodiments, the processing device 112 may identify the locations of all vehicles in the first image. The processing device 112 may determine the vehicle whose vehicle position is closest to the first position as the target vehicle in the first image. In some embodiments, the processing device 112 may also determine the target vehicle in the first image based on the vehicle ID of the vehicle. For example, the processing device 112 may obtain a target vehicle ID of the target vehicle. The processing device 112 may identify a vehicle ID for each vehicle in the first image. The processing device 112 may determine a vehicle in the first image having the same vehicle ID as the target vehicle. In some embodiments, when the processing device 112 determines that the target vehicle is not included in the first image, the processing device 112 may determine that the target vehicle is not parked in the parking area. At this time, the processing device 112 may transmit a lock prohibition instruction to the target vehicle, control the target vehicle to broadcast the first prompt tone, or transmit the first prompt message to the user terminal corresponding to the target vehicle.
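The closest-position association above can be sketched as follows. It assumes, hypothetically, that the vehicles detected in the first image have been projected into the same ground coordinate frame as the audio-derived first position; the data layout and function name are illustrative.

```python
import math

def nearest_vehicle(first_position, detections):
    """Return the ID of the detected vehicle closest to the audio-derived
    first position, or None if nothing was detected.

    first_position -- (x, y) ground coordinates from sound localization
    detections     -- list of (vehicle_id, x, y) tuples in the same frame
    """
    if not detections:
        return None
    fx, fy = first_position
    # Pick the detection with the smallest Euclidean distance.
    return min(detections, key=lambda d: math.hypot(d[1] - fx, d[2] - fy))[0]
```

When `nearest_vehicle` returns None, the target vehicle is not in the image, which corresponds to the lock-prohibition branch described above.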
In some embodiments, the processing device 112 may determine the vehicle ID of the target vehicle based on the target return audio. In some embodiments, the processing device 112 may obtain the target vehicle ID from the target return audio by means of voice recognition. For example, the target return audio may be a voice including the target vehicle ID, such as "vehicle number 12345 requests to return". In some embodiments, the processing device 112 may obtain the vehicle ID of the target vehicle from the parking request sent by the terminal device. For example, the parking request transmitted by the user through the terminal device may include the vehicle ID. In some embodiments, the processing device 112 may determine the target vehicle ID based on the broadcast time of the target return audio. Specifically, the processing device 112 may obtain the broadcast time of the target return audio and determine, as the target vehicle, the vehicle whose parking request was sent at a time corresponding to (e.g., closest to) the broadcast time. The processing device 112 may thus determine the vehicle ID of the target vehicle, and may determine the corresponding user information based on the vehicle ID. In some embodiments, the processing device 112 may obtain the parking requests received within a certain time period corresponding to the broadcast time of the target return audio (e.g., the broadcast time and the one second before it). If there is only one parking request in the time period, the vehicle corresponding to that parking request may be determined as the target vehicle.
If there are multiple parking requests within the time period, the processing device 112 may further determine the target vehicle according to the locations of the multiple parking requests (e.g., GPS location of the user terminal, GPS location of the relevant vehicle, etc.) and the location of the vehicle control device (e.g., image capture device).
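The time-window matching with a location tie-break can be sketched as below. This is an illustrative assumption of how the described logic might look, not the patent's implementation: the request record layout, the 1-second window from the example above, and the squared-distance tie-break are all hypothetical.

```python
def match_request(broadcast_t, requests, device_xy, window_s=1.0):
    """Pick the parking request that most plausibly triggered the target
    return audio broadcast at time `broadcast_t`.

    Each request is a dict with keys "vehicle_id", "t" (send time, s),
    and "xy" (reported position projected to ground metres).
    """
    # Keep only requests sent within `window_s` before the broadcast.
    cands = [r for r in requests
             if broadcast_t - window_s <= r["t"] <= broadcast_t]
    if not cands:
        return None
    if len(cands) == 1:
        return cands[0]
    # Several candidates: choose the one reported closest to the
    # vehicle control device (e.g., the image capture device).
    dx, dy = device_xy
    return min(cands,
               key=lambda r: (r["xy"][0] - dx) ** 2 + (r["xy"][1] - dy) ** 2)
```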
In some embodiments, the vehicle ID may include a license plate number, a vehicle number, a two-dimensional code, a barcode, a fluorescent number, or the like, or any combination thereof. In some embodiments, the vehicle ID may be attached to the vehicle head, the vehicle tail fender, the battery compartment, or the like, which is convenient for the image capture device to capture and identify.
In step 530, the processing device 112 may determine whether the target vehicle is located within the parking area using the first image recognition model based on the features of the target vehicle in the first image. In some embodiments, step 530 may be performed by the determination module 340 in the system 300.
The first image recognition model may be trained based on a plurality of sets of training data. Each set of training data may include an image containing the vehicle and the parking area and a determination of whether the corresponding vehicle is located in the parking area. The processing device 112 may input the first image into the first image recognition model. The first image recognition model may output a result of whether the target vehicle is located in the parking area. In some embodiments, the first image recognition model may include a Convolutional Neural Network (CNN) model, a Full Convolutional Network (FCN) model, or the like, or any combination thereof.
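The patent's first image recognition model is a trained classifier (e.g., CNN or FCN). As a much simpler stand-in for illustration only (this is not the patent's method), the same in-area decision can be approximated geometrically from detector outputs: check what fraction of the vehicle's bounding box overlaps the parking-area region in the image. The box format and the 0.9 threshold are assumptions.

```python
def containment_ratio(vehicle_box, area_box):
    """Fraction of the vehicle bounding box lying inside the parking-area
    box. Boxes are (x1, y1, x2, y2) in image pixels, x1 < x2, y1 < y2."""
    vx1, vy1, vx2, vy2 = vehicle_box
    ax1, ay1, ax2, ay2 = area_box
    ix = max(0.0, min(vx2, ax2) - max(vx1, ax1))  # intersection width
    iy = max(0.0, min(vy2, ay2) - max(vy1, ay1))  # intersection height
    vehicle_area = (vx2 - vx1) * (vy2 - vy1)
    return (ix * iy) / vehicle_area if vehicle_area > 0 else 0.0

def in_parking_area(vehicle_box, area_box, threshold=0.9):
    """Declare the vehicle inside the area if at least `threshold` of
    its bounding box overlaps the parking-area region."""
    return containment_ratio(vehicle_box, area_box) >= threshold
```

A learned model can absorb perspective distortion and occlusion that this geometric proxy ignores, which is presumably why the patent trains one.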
In some embodiments, when it is determined that the target vehicle is parked within the parking area, the processing device 112 may send a lock permission instruction to the target vehicle. In some embodiments, the target vehicle may automatically perform a lock in response to receiving the allow lock command. In some embodiments, the user may perform a manual vehicle lock after the target vehicle receives the allow vehicle lock command. Specifically, the user can toggle the lock pin of the vehicle lock to lock the vehicle. In some embodiments, after the target vehicle is successfully locked, the target vehicle may feed back a locking success prompt tone. In some embodiments, after the target vehicle is successfully locked, the processing device 112 may control the image capture device to turn off to reduce its power consumption.
In some embodiments, when it is determined that the target vehicle is not parked within the parking area, the processing device 112 may send a lock prohibition instruction to the target vehicle to prohibit the vehicle from performing automatic locking or manual locking by the user. In some embodiments, when it is determined that the target vehicle is not parked in the parking area, the processing device 112 may further control the target vehicle to broadcast a first prompt tone and/or send a first prompt message to the user terminal device corresponding to the target vehicle, so as to prompt the user to park the target vehicle in the parking area. For the content of the first prompt tone or the first prompt message, reference may be made to fig. 4 and the description thereof, which are not repeated herein. In some embodiments, the processing device 112 may also prompt the user to park the target vehicle in the parking area by controlling indicator lights on the target vehicle. For example, the processing device 112 may control an indicator light to emit light of a particular color (e.g., red). As another example, the processing device 112 may control an indicator light to flash. In some embodiments, the processing device 112 may also report the user information corresponding to the target vehicle to the server 110.
In some embodiments, the processing device 112 may further determine, based on the features of the target vehicle in the first image, whether the parking posture of the target vehicle meets the criterion using a second image recognition model. The second image recognition model may be trained based on a plurality of sets of training data. Each set of training data may include an image of a vehicle with a certain parking posture and a determination result of whether that posture meets the criterion. The processing device 112 may input the first image into the second image recognition model, and the second image recognition model may output a result indicating whether the parking posture of the target vehicle meets the criterion. In some embodiments, the second image recognition model may include a Convolutional Neural Network (CNN) model, a Full Convolutional Network (FCN) model, or the like, or any combination thereof. In some embodiments, the parking posture of the target vehicle may include the parking direction, whether the vehicle has fallen over, whether the vehicle body is pressed, and the like, or any combination thereof. In some embodiments, the first image recognition model and the second image recognition model may be different models. In some embodiments, the first image recognition model and the second image recognition model may be the same model capable of performing both functions. In some embodiments, when it is determined that the parking posture meets the criteria (e.g., vehicle 252 in fig. 2), the processing device 112 may send a lock permission instruction to the target vehicle.
In some embodiments, when determining that the parking posture of the target vehicle does not meet the criteria (e.g., vehicle 254 in fig. 2), the processing device 112 may transmit a lock prohibition instruction to the target vehicle, control the target vehicle to broadcast a second prompt tone, and/or transmit second prompt information to the user terminal corresponding to the target vehicle. In some embodiments, the first and second prompt tones may be the same or different. For example, both the first prompt tone and the second prompt tone may be simulated "beep-beep" tones. For another example, the first prompt tone may be a voice such as "please park the vehicle in the parking area", and the second prompt tone may be a voice such as "please position the vehicle correctly". In some embodiments, the second prompt message may include text, audio, images, animations, and the like, or any combination thereof. For example, the second prompt message may be an audio prompt. For another example, the second prompt message may be an image (e.g., the first image) captured by the image capture device that includes the target vehicle. Further, the image may include a mark indicating that the parking posture of the target vehicle does not meet the criterion. In some embodiments, the second prompt message may include operation guidance information (e.g., please stand the vehicle upright). The user can perform a corresponding operation on the target vehicle according to the operation guidance information.
It should be noted that the above description is merely for convenience and should not be taken as limiting the scope of the present application. It will be understood by those skilled in the art that, having the benefit of the teachings of this system, various modifications and changes in form and detail may be made to the field of application for which the method and system described above may be practiced without departing from this teachings. For example, in some embodiments, the processing device 112 may perform a deduction process for a user corresponding to the target vehicle when the target vehicle is not parked in the parking area.
FIG. 6 is an exemplary block diagram of a vehicle control apparatus according to some embodiments of the present application.
As shown in fig. 6, the vehicle control apparatus 600 may include an audio capture device 610, a processing circuit 620, and a communication device 640. The processing circuit 620 may have a signal connection with the audio capture device 610 and/or the communication device 640. For example, the processing circuit 620 may obtain its captured audio electrical signal from the audio capture device 610. For another example, the processing circuit 620 may control the communication device 640 to send a control signal to turn on the audio capture device 610.
The audio capture device 610 may be used to capture targeted return audio. In some embodiments, the audio capture device 610 may include a microphone array. The microphone array can collect sound signals and can also realize the noise reduction function. For example, the microphone array may suppress interference of environmental noise (e.g., tire noise, wind noise of a vehicle) by using a phase difference of picked-up audio through a beamforming (beamforming) technique. In some embodiments, the microphone array may include a plurality of microphones, e.g., 4, 6, 8, 12, 18, etc.
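The beamforming-based noise suppression mentioned above can be illustrated with the most basic variant, a delay-and-sum beamformer. This sketch is a simplified assumption, not the device's actual implementation: it uses whole-sample steering delays and circular shifts, whereas a real microphone array would use fractional delays or frequency-domain weighting.

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Minimal delay-and-sum beamformer.

    signals -- (n_mics, n_samples) array of microphone signals
    delays  -- per-microphone steering delays in whole samples; each
               channel is shifted so that sound from the look direction
               adds coherently while off-axis noise adds incoherently.
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for ch, d in zip(signals, delays):
        out += np.roll(ch, int(d))  # circular shift as a simplification
    return out / n_mics
```

Steering the delays toward the target return audio boosts it relative to tire and wind noise arriving from other directions, which is the suppression effect described above.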
In some embodiments, the target return audio may be any form of sound. For example, the target return audio may be broadcast in verbal form, such as "requesting to return the vehicle". For another example, the target return audio may be broadcast as a simulated tone, such as a "beep-beep" sound. In some embodiments, the target return audio may be a sound emitted by a vehicle (e.g., an audio announcement device on the vehicle 150). The vehicle may broadcast the target return audio in response to a parking request sent by a terminal device (e.g., the terminal device 130). For example, a user may tap a return button in an application (e.g., an APP) installed on a terminal device (e.g., a cell phone) to trigger a parking request. The vehicle may directly receive the parking request and broadcast the target return audio. For another example, the terminal device may first transmit the parking request to the vehicle control apparatus 600 (e.g., the communication device 640). The vehicle control apparatus 600 then transmits the parking request to the vehicle corresponding to the terminal device. The vehicle may broadcast the target return audio upon receiving the parking request sent by the vehicle control apparatus 600. In some embodiments, the target return audio may be a sound emitted by the user's terminal device. For example, when the vehicle control apparatus 600 receives a parking request initiated by a user through a terminal device (e.g., a mobile phone), the vehicle control apparatus 600 may control the user's terminal device to broadcast the target return audio. For convenience of description, this application takes the case where the vehicle broadcasts the target return audio as an example, which does not limit the scope of the present application. In some embodiments, the vehicle emitting the target return audio may also be referred to as the sound source and/or the target vehicle.
In some embodiments, the audio capture device 610 may capture the target return audio broadcast by vehicles within a certain distance from the parking area (e.g., the parking area 240) and/or the audio capture device 610. For example, when the distance from the vehicle to the center of the parking area is within 3 meters, the audio capture device 610 may capture the target return audio broadcast by the vehicle. For another example, when the distance from the vehicle to the edge of the parking area is within 1 meter, the audio capture device 610 may capture the target return audio broadcast by the vehicle. In some embodiments, the vehicle (or the user's terminal device) may broadcast the target return audio in real time. The audio capture device 610 may capture the target return audio in real time or periodically, convert it into electrical signals, and transmit the electrical signals to the processing circuit 620.
The processing circuit 620 may be configured to obtain a plurality of electrical signals generated by the audio capture device 610 when receiving the target return audio (also referred to as target return audio electrical signals), and determine, based on the plurality of electrical signals, a first location of the target vehicle emitting the target return audio. In some embodiments, the first location may include a direction and/or distance of the target vehicle relative to the audio capture device 610. In some embodiments, the first location may be represented in polar coordinates. For example, the first location may be represented as a direction angle and/or a straight-line distance (or ground distance) of the target vehicle relative to the audio capture device 610. In some embodiments, the first location may be represented in latitude and longitude coordinates. For example, the coordinates of the first location may be 100°54′38″ east longitude, 38°54′26″ north latitude.
In some embodiments, the processing circuit 620 may determine the sound source location information (i.e., the first location of the target vehicle) using a time difference of arrival (TDOA) method, a beamforming method, an acoustic holography method, a high-resolution spectrum estimation method, or the like. The processing circuit 620 may determine whether the target vehicle is located within the parking area based on the first position of the target vehicle.
In some embodiments, the processing circuit 620 may include a positioning circuit 622 and a determination circuit 624. The positioning circuit 622 may be connected to the audio capture device 610. The positioning circuit 622 may determine the sound source position information according to the target return audio electrical signals transmitted by the audio capture device 610. Specifically, the positioning circuit 622 may be connected to each microphone in the audio capture device 610 and determine the sound source position information based on the audio electrical signals transmitted by the respective microphones. For example, the positioning circuit 622 may compare the generation times of the plurality of audio electrical signals based on the position information of the audio capture device, and determine the generation order of, and the time differences between, the plurality of audio electrical signals. The positioning circuit 622 may then determine the first position of the target vehicle based on that generation order and those time differences. For example, the first position of the target vehicle may include the direction and distance of the target vehicle relative to the vehicle control apparatus 600 (e.g., the audio capture device 610). Specifically, the positioning circuit 622 may determine the distance of the target vehicle relative to the vehicle control apparatus 600 (e.g., the audio capture device 610) based on the time differences between the plurality of audio electrical signals, and may determine the direction of the target vehicle relative to the vehicle control apparatus 600 according to the order in which the plurality of audio electrical signals are received. The determination circuit 624 may determine whether the target vehicle is located within the parking area based on the first position of the target vehicle.
For example, the determination circuit 624 may compare the direction and distance of the target vehicle with respect to the vehicle control apparatus 600 with preset direction and distance ranges to determine whether the target vehicle is located within the parking area. For another example, the determination circuit 624 may compare the generation order and the time difference of the plurality of audio electric signals with a preset generation order and time difference range of the audio electric signals to determine whether the target vehicle is located in the parking area.
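The range comparison described above can be sketched as follows; the preset direction and distance ranges used here are hypothetical values for illustration.

```python
from dataclasses import dataclass

@dataclass
class FirstPosition:
    direction_deg: float  # bearing relative to the audio capture device
    distance_m: float     # range from the audio capture device

def in_parking_area(pos, dir_range=(-45.0, 45.0), dist_range=(0.0, 3.0)):
    """Return True when both the direction and the distance fall inside
    the preset ranges (hypothetical values)."""
    lo_dir, hi_dir = dir_range
    lo_dist, hi_dist = dist_range
    return (lo_dir <= pos.direction_deg <= hi_dir
            and lo_dist <= pos.distance_m <= hi_dist)

print(in_parking_area(FirstPosition(10.0, 1.5)))  # → True
print(in_parking_area(FirstPosition(80.0, 1.5)))  # → False
```

The alternative comparison mentioned in the text, against a preset generation order and time-difference range of the audio electrical signals, would have the same shape with raw delays in place of the derived direction and distance.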
In some embodiments, the processing circuit 620 may also include an audio recognition circuit (not shown) connected to the audio capture device 610. The audio recognition circuit can identify the target return audio electrical signal. For example, the audio capture device 610 may capture audio data comprising one or more audio electrical signals, from which the audio recognition circuit identifies the target return audio electrical signal. To do so, the audio recognition circuit may extract audio features from each audio electrical signal and match those features against preset target audio features; an audio electrical signal whose features match the preset target audio features is determined to be the target return audio electrical signal. The audio recognition circuit may then transmit the identified target return audio electrical signal to the positioning circuit 622 for sound source localization.
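A minimal sketch of that feature-matching step is given below, assuming the "audio feature" is a magnitude spectrum and the match criterion is cosine similarity; the circuit's actual features and threshold are not specified in this application.

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
target_tone = np.sin(2 * np.pi * 1000 * t)  # stands in for the preset target audio
other_tone = np.sin(2 * np.pi * 300 * t)    # some unrelated captured sound

def spectral_match(signal, template):
    """Cosine similarity between magnitude spectra, a stand-in for the
    feature matching the audio recognition circuit performs."""
    a = np.abs(np.fft.rfft(signal))
    b = np.abs(np.fft.rfft(template))
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_target_return_audio(signal, template=target_tone, threshold=0.8):
    """Flag a captured signal as the target return audio when its
    features match the preset target features closely enough."""
    return spectral_match(signal, template) >= threshold

print(is_target_return_audio(target_tone))  # → True
print(is_target_return_audio(other_tone))   # → False
```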
In some embodiments, when it is determined that the target vehicle is located within the parking area, the processing circuit 620 (e.g., the determination circuit 624) may control the communication device 640 to send a lock-permission instruction to the target vehicle. In some embodiments, the target vehicle may lock automatically in response to receiving the lock-permission instruction. In some embodiments, the user may lock the vehicle manually after the target vehicle receives the lock-permission instruction; specifically, the user can lock the vehicle by toggling the lock pin of the vehicle lock. In some embodiments, after the target vehicle is successfully locked, it may play a lock-success prompt tone.
In some embodiments, when it is determined that the target vehicle is not located within the parking area, the processing circuit 620 may control the communication device 640 to transmit a lock-prohibition instruction to the target vehicle, prohibiting both automatic locking and manual locking by the user. In some embodiments, the processing circuit 620 may further control the communication device 640 to send a prompt instruction to the target vehicle, causing it to broadcast a first prompt tone reminding the user to park the target vehicle in the parking area. The first prompt tone may be any sound: for example, a voice message such as "please park the vehicle in the parking area", or a beeping alert tone. In some embodiments, when it is determined that the target vehicle is not located within the parking area, the processing circuit 620 may further control the communication device 640 to transmit a first prompt message to the user terminal device corresponding to the target vehicle. The first prompt message may include text, audio, images, animations, or the like, or any combination thereof. For example, the first prompt message may be a prompt tone, or an image containing the target vehicle; the image may further carry a mark indicating that the target vehicle is not located in the parking area. In some embodiments, the first prompt message may include operation guidance information (e.g., "move forward by another 20 centimeters"), according to which the user can reposition the target vehicle.
In some embodiments, the operation guidance information may be determined based on a relative relationship of the first position of the target vehicle to the parking area. In some embodiments, processing circuit 620 may also control communication device 640 to send indicator light control instructions to the target vehicle to control the indicator lights of the target vehicle to alert the user to park the target vehicle in the parking area. For example, the indicator light control instruction may include controlling the indicator light to emit a particular color of light (e.g., red). As another example, the indicator light control instruction may include controlling an indicator light to blink.
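One way the operation guidance information could be derived from the first position is sketched below; the boundary distances and the message wording are assumptions for illustration.

```python
def operation_guidance(distance_m, area_near_m=0.5, area_far_m=2.5):
    """Derive a guidance message from the target vehicle's distance
    relative to assumed near/far boundaries of the parking area."""
    if distance_m < area_near_m:
        return f"move back by {round((area_near_m - distance_m) * 100)} centimeters"
    if distance_m > area_far_m:
        return f"move forward by {round((distance_m - area_far_m) * 100)} centimeters"
    return "vehicle is within the parking area"

print(operation_guidance(2.7))  # → move forward by 20 centimeters
print(operation_guidance(1.0))  # → vehicle is within the parking area
```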
In some embodiments, the vehicle control apparatus 600 may further include an image capture device 630 for capturing a first image. The processing circuit 620 may be in signal connection with the image capture device 630. Specifically, the image capture device 630 may be connected to the determination circuit 624, which may obtain the image electrical signal (also referred to as the first image electrical signal) acquired by the image capture device 630. The first image includes at least the parking area, and in some embodiments may also include the target vehicle and/or other vehicles. In some embodiments, the vehicle control apparatus 600 may be in signal connection with a server and transmit the first image to the server for further processing: for example, to identify the target vehicle in the first image, to further determine whether the target vehicle is parked in the parking area, or to determine whether the parking posture of the target vehicle meets the criterion.
In some embodiments, when the target return audio electrical signal is acquired, the processing circuit 620 may control the image capture device 630 to turn on for image capture. In some embodiments, the processing circuit 620 may first determine whether the first position of the target vehicle is within a threshold range, and turn on the image capture device 630 only when it is. The threshold range may include an angular range and a distance range of the target vehicle relative to the parking area and/or the audio capture device 610. For example, as shown in fig. 2, when the first position of the target vehicle is within 3 meters of the vehicle control device (e.g., audio capture device 220) and in front of it (e.g., to the left of dotted line BB'), the processing circuit 620 may turn on the image capture device 630. In some embodiments, when it is determined that the first position of the target vehicle is within the parking area, the processing circuit 620 may control the image capture device 630 to turn on and capture an image. In some embodiments, the processing circuit 620 may update the first image in real time. In some embodiments, after the target vehicle is successfully locked, the processing circuit 620 may turn the image capture device 630 off to reduce its power consumption. It should be noted that when no target return audio electrical signal is acquired, or when the target vehicle is determined from its position not to be in the parking area, the image capture device 630 may remain off to reduce the power consumption of the vehicle control device.
It should be noted that the above description is merely for convenience and should not be taken as limiting the scope of the present application. It will be understood by those skilled in the art that, having the benefit of the teachings of this apparatus, various modifications and changes in form and detail may be made without departing from such teachings. In some embodiments, the vehicle control apparatus 600 may further include a memory (not shown) for storing information related to the target vehicle. In some embodiments, the processing circuit 620 may be a programmable chip, such as a CPU or FPGA, and the functions of each of the above circuits may be implemented as instructions executed by the programmable chip.
FIG. 7 is a schematic diagram of an exemplary decision circuit shown in accordance with some embodiments of the present application.
As shown in fig. 7, the determination circuit 700 provides the communication device 720 with the basis for its instructions. The determination circuit 700 may include a comparator U configured to compare the received position electrical signal of the first position with a preset position threshold. When the position electrical signal is greater than the position threshold, the first position is judged not to be within the parking area; when the position electrical signal is less than the position threshold, the first position is judged to be within the parking area.
Specifically, a first input of the comparator U may be coupled to an output of the positioning circuit 710 to receive the position electrical signal of the first position. A second input of the comparator U may be connected to a reference voltage VR1, which corresponds to the preset position threshold. When the potential at the "+" pin (i.e., the non-inverting input) of the comparator U is higher than that at the "-" pin (i.e., the inverting input), the comparator U outputs a high level; the position electrical signal is then smaller than the position threshold, and the first position can be considered to be within the parking area. In this case, the communication device 720 may be instructed to send control instructions related to allowing the vehicle to be locked, for example, allowing automatic locking or allowing the user to lock manually. When the "+" pin potential is lower than the "-" pin potential, the comparator U outputs a low level; the position electrical signal is then greater than the position threshold, and the first position is determined not to be within the parking area. In this case, the communication device 720 may be controlled to transmit control instructions related to prohibiting locking, for example, prohibiting the target vehicle from locking, controlling the target vehicle to broadcast the first prompt tone, or controlling the user terminal device to present the first prompt message.
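The comparator logic can be modeled as follows, with the position signal on the inverting input and VR1 on the non-inverting input so that a high output corresponds to "position signal below threshold". VR1 and the example voltages are assumed values, not figures from the application.

```python
def comparator(v_plus, v_minus):
    """Idealized comparator U: output high (1) when the '+' pin potential
    exceeds the '-' pin potential, low (0) otherwise."""
    return 1 if v_plus > v_minus else 0

VR1 = 1.8  # assumed reference voltage encoding the preset position threshold

def lock_decision(position_signal_v):
    """Map the comparator output to a control instruction: a high level
    means the position electrical signal is below the threshold (inside
    the parking area), so locking is allowed."""
    if comparator(VR1, position_signal_v):  # '+' = VR1, '-' = position signal
        return "allow_lock"
    return "prohibit_lock"

print(lock_decision(1.2))  # → allow_lock
print(lock_decision(2.4))  # → prohibit_lock
```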
FIG. 8 is an exemplary flow chart of a vehicle control method according to some embodiments of the present application. As shown in fig. 8, the process may include:
Step 801: the terminal device sends a parking request to the server. The parking request may be triggered by the user operating the terminal device (for example, tapping the app on the terminal device).
Step 802: after receiving the parking request, the server may transmit an audio broadcasting instruction to the vehicle terminal. In some embodiments, the terminal device may send the audio broadcasting instruction directly to the vehicle terminal, without going through the server.
Step 803: in response to the received audio broadcasting instruction, the vehicle terminal broadcasts the target return audio.
Step 804: the audio capture device acquires the target return audio broadcast by the vehicle terminal and determines whether the target vehicle is within the threshold range. For example, a processor may be provided in the audio capture device that makes this determination from the target return audio. The threshold range may include an angular range and a distance range of the target vehicle relative to the parking area and/or the audio capture device. The audio capture device may determine the first position of the target vehicle using sound source localization techniques, and then determine from the first position whether the target vehicle is within the threshold range. In some embodiments, the audio capture device may instead send the captured target return audio to the server, which feeds back whether the target vehicle is within the threshold range.
Step 805: if the target vehicle is within the threshold range, the audio capture device sends a turn-on instruction and an image capture instruction to the image capture device. In some embodiments, the audio capture device may instead send a request to the server to turn on the image capture device; in response, the server sends the turn-on instruction and the image capture instruction to the image capture device.
Step 806: in response to the received turn-on instruction and image capture instruction, the image capture device captures a first image of the parking area and sends the captured image information to the server.
Step 807: the server may determine target vehicle information (e.g., the vehicle ID) using background timing information. In some embodiments, the background timing information includes the time at which the terminal device sent the parking request, the time at which the target vehicle broadcast the target return audio, and the like. For example, upon receiving the target return audio, the audio capture device may feed back to the server the time at which the target return audio was received.
Step 808: the server determines whether the target vehicle is located within the parking area from the image information and the target vehicle information. In some embodiments, the server may identify the target vehicle in the first image based on the target vehicle information (e.g., the vehicle ID) and/or the first position. The server may extract features of the target vehicle in the first image using a first image recognition model and determine from those features whether the target vehicle is within the parking area. If the target vehicle is in the parking area, the server sends a locking instruction to the vehicle terminal; if not, the server sends the vehicle terminal a reminder that the target vehicle is not in the parking area, for example, by controlling the target vehicle to broadcast the first prompt tone. In some embodiments, the server may further determine, using a second image recognition model, whether the parking posture of the target vehicle meets the criterion; if it does not, the server sends the vehicle terminal a posture reminder, for example, by controlling the target vehicle to broadcast a second prompt tone.
Step 809: the server feeds back to the terminal device that locking succeeded, or prompts that the target vehicle is not in the parking area, for example, by sending the first prompt message to the terminal device. In some embodiments, when the parking posture of the target vehicle is determined not to meet the criterion, the server may also feed back to the terminal device that the parking posture does not meet the criterion, for example, by sending a second prompt message.
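The flow of steps 801 to 809 can be condensed into the following sketch; the event names and return values are illustrative, not the actual server/terminal protocol.

```python
def parking_flow(target_in_threshold, target_in_area, posture_ok=True):
    """Condensed sketch of steps 801-809; every name here is a stand-in."""
    events = ["parking_request",              # step 801
              "audio_broadcast_instruction",  # step 802
              "broadcast_return_audio"]       # step 803
    if not target_in_threshold:               # step 804: gate on the threshold range
        events.append("camera_stays_off")     # camera stays off to save power
        return events
    events += ["camera_on", "first_image_to_server"]  # steps 805-806
    events.append("resolve_vehicle_id")       # step 807: background timing info
    if target_in_area:                        # step 808: image-based decision
        events.append("lock_instruction")
        events.append("posture_ok" if posture_ok else "posture_alert")
        events.append("lock_success_feedback")   # step 809
    else:
        events.append("not_in_area_alert")
        events.append("first_prompt_to_terminal")  # step 809
    return events

print(parking_flow(True, True)[-1])   # → lock_success_feedback
print(parking_flow(True, False)[-1])  # → first_prompt_to_terminal
```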
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) a method for judging whether a vehicle is parked in a parking area based on voice recognition and sound source localization, which addresses the low positioning accuracy of conventional methods; (2) once the vehicle is accurately positioned, the parking position and posture are further judged in combination with video/images, improving the accuracy of the parking-position judgment; (3) the image capture device is turned on only when audio analysis indicates the vehicle is being parked, reducing the power consumption of the equipment. It should be noted that different embodiments may produce different advantages; in any given embodiment, the advantages produced may be any one or a combination of the above, or any other advantage that may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics may be combined as suitable in one or more embodiments of the specification.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present description may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python; a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; a dynamic programming language such as Python, Ruby, or Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all features of a single disclosed embodiment.
Some embodiments use numbers to describe quantities of components and attributes; it should be understood that such numbers are in some instances qualified by the modifiers "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that a variation of ±20% is allowed. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired properties of the individual embodiment. In some embodiments, numerical parameters should take the specified significant digits into account and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, in specific examples such numerical values are set as precisely as practicable.
For each patent, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents are hereby incorporated by reference, except for any application history document that is inconsistent with or conflicts with the contents of this specification, and except for any document (whether currently or later appended to this specification) that limits the broadest scope of the claims of this specification. It should be noted that if the descriptions, definitions, and/or use of terms in material accompanying this specification are inconsistent with or contrary to those in this specification, the descriptions, definitions, and/or use of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (6)

1. The vehicle control equipment is characterized by comprising a processing circuit and an audio acquisition device, wherein the audio acquisition device is used for acquiring a target vehicle returning audio emitted by a target vehicle, the processing circuit is in signal connection with the audio acquisition device, and the audio acquisition device comprises a plurality of microphones;
the processing circuit is used for acquiring a plurality of electric signals generated by the audio acquisition device when the target car returning audio is received, and determining a first position of a target car sending the target car returning audio based on the plurality of electric signals; the first location comprises at least one of a direction and a distance of the target vehicle relative to the audio capture device, wherein the processing circuit comprises a positioning circuit and a determination circuit;
the positioning circuit is connected with an output port of the audio acquisition device, and the positioning circuit acquires a plurality of electric signals generated when the audio acquisition device receives the target car returning audio and acquires position electric signals; and
the judging circuit is connected with an output port of the positioning circuit and comprises a comparator for comparing the position electric signal with a preset position threshold value.
2. The vehicle control apparatus according to claim 1, characterized in that the audio pickup device includes a microphone array board provided with the plurality of microphones.
3. The vehicle control apparatus according to claim 1, wherein the comparator includes a first input terminal and a second input terminal, the first input terminal being connected to the positioning circuit, the second input terminal being connected to a reference voltage.
4. The vehicle control apparatus according to claim 1, characterized in that the vehicle control apparatus further comprises a communication device connected to an output of the comparator; the communication device is capable of transmitting an instruction to the target vehicle according to the output signal of the comparator.
5. The vehicle control apparatus according to claim 1, characterized in that the vehicle control apparatus further comprises an image pickup device, and the determination circuit has a signal connection with the image pickup device.
6. The vehicle control apparatus according to claim 1, characterized in that the vehicle control apparatus is provided on a mounting bracket.
CN202123173888.3U 2021-12-16 2021-12-16 Vehicle control apparatus Active CN217787820U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202123173888.3U CN217787820U (en) 2021-12-16 2021-12-16 Vehicle control apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202123173888.3U CN217787820U (en) 2021-12-16 2021-12-16 Vehicle control apparatus

Publications (1)

Publication Number Publication Date
CN217787820U true CN217787820U (en) 2022-11-11

Family

ID=83904762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202123173888.3U Active CN217787820U (en) 2021-12-16 2021-12-16 Vehicle control apparatus

Country Status (1)

Country Link
CN (1) CN217787820U (en)

Similar Documents

Publication Publication Date Title
CN108496377B (en) System and method for monitoring vehicles on the way
US20200043326A1 (en) Use sub-system of autonomous driving vehicles (adv) for police car patrol
US20180259353A1 (en) Information processing apparatus and information processing method
US10553113B2 (en) Method and system for vehicle location
WO2014115563A1 (en) Driving support device, driving support method, and recording medium storing driving support program
CN108779984A (en) Signal handling equipment and signal processing method
CN113095216A (en) Parking monitoring method, system, equipment and storage medium
WO2019144876A1 (en) Pickup Service Based on Recognition between Vehicle and Passenger
US11269069B2 (en) Sensors for determining object location
KR20120079341A (en) Method, electronic device and recorded medium for updating map data
US11912309B2 (en) Travel control device and travel control method
WO2018198926A1 (en) Electronic device, roadside device, method for operation of electronic device, and traffic system
JP2019008709A (en) Vehicle, information processing system, information processing device, and data structure
CN113792589B (en) Overhead identification method and device
US11377125B2 (en) Vehicle rideshare localization and passenger identification for autonomous vehicles
US20220281486A1 (en) Automated driving vehicle, vehicle allocation management device, and terminal device
CN113076896A (en) Standard parking method, system, device and storage medium
CN113099385B (en) Parking monitoring method, system and equipment
CN217787820U (en) Vehicle control apparatus
CN111542295A (en) Automatic driving method and system for intelligent wheelchair and computer readable medium
JP2021013068A (en) Information providing device, information providing method, and program
CN111405459A (en) Parking position recording method and system based on mobile terminal, storage medium and terminal
CN109115233B (en) Method, device, system and computer readable medium for non-destination navigation
CN116266427A (en) Vehicle control apparatus
WO2021149594A1 (en) Information provision device, information provision method, information provision program, and recording medium

Legal Events

Date Code Title Description
GR01 Patent grant