CN115691121A - Vehicle stop detection method and device - Google Patents


Info

Publication number
CN115691121A
CN115691121A (application CN202211247422.XA)
Authority
CN
China
Prior art keywords
vehicle
image
user
time information
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211247422.XA
Other languages
Chinese (zh)
Inventor
胡腾飞 (Hu Tengfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202211247422.XA priority Critical patent/CN115691121A/en
Publication of CN115691121A publication Critical patent/CN115691121A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0133Traffic data processing for classifying traffic situation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of this specification provide a vehicle stop detection method and apparatus. The vehicle stop detection method comprises: acquiring a first image and a second image captured by a user for a vehicle; comparing the first image with the second image; if the comparison result satisfies a vehicle stop condition, querying characteristic driving records of the vehicle within a target time period, the target time period being bounded by first time information read from the first image and second time information read from the second image; and if the query result is empty, determining that the vehicle was in an effective stopped state during the target time period.

Description

Vehicle stop detection method and device
This application is a divisional application of the Chinese patent application filed on October 20, 2020 with application number CN202011125902.X, entitled "Vehicle stop detection method and device".
Technical Field
The present invention relates to the field of data processing technologies, and in particular to a vehicle stop detection method and apparatus.
Background
As vehicle ownership grows year by year, many cities suffer traffic congestion, and vehicles also cause a degree of environmental pollution while being driven. City administrators therefore encourage enterprises and individuals, through various means, to actively reduce the environmental pollution caused by driving.
Disclosure of Invention
One or more embodiments of the present specification provide a vehicle stop detection method. The method includes: acquiring a first image and a second image captured by a user for a vehicle; comparing the first image with the second image; if the comparison result satisfies a vehicle stop condition, querying characteristic driving records of the vehicle within a target time period, where the target time period is bounded by first time information read from the first image and second time information read from the second image; and if the query result is empty, determining that the vehicle was in an effective stopped state during the target time period.
One or more embodiments of the present specification provide a vehicle stop detection apparatus, including: an acquisition module configured to acquire a first image and a second image captured by a user for a vehicle; a comparison module configured to compare the first image with the second image; a query module configured to query characteristic driving records of the vehicle within a target time period if the comparison result satisfies a vehicle stop condition, where the target time period is bounded by first time information read from the first image and second time information read from the second image; and a determining module configured to determine that the vehicle was in an effective stopped state during the target time period if the query result is empty.
One or more embodiments of the present specification provide a vehicle stop detection device, including a processor and a memory configured to store computer-executable instructions that, when executed, cause the processor to: acquire a first image and a second image captured by a user for a vehicle; compare the first image with the second image; if the comparison result satisfies a vehicle stop condition, query characteristic driving records of the vehicle within a target time period, where the target time period is bounded by first time information read from the first image and second time information read from the second image; and if the query result is empty, determine that the vehicle was in an effective stopped state during the target time period.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed, implement the following: acquiring a first image and a second image captured by a user for a vehicle; comparing the first image with the second image; if the comparison result satisfies a vehicle stop condition, querying characteristic driving records of the vehicle within a target time period, where the target time period is bounded by first time information read from the first image and second time information read from the second image; and if the query result is empty, determining that the vehicle was in an effective stopped state during the target time period.
Drawings
To describe the technical solutions in one or more embodiments of this specification or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of this specification, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a process flow diagram of a vehicle stop detection method according to one or more embodiments of the present specification;
Fig. 2 is a process flow diagram of a vehicle stop detection method applied to a carbon-saving project scenario according to one or more embodiments of the present specification;
Fig. 3 is a schematic diagram of a vehicle stop detection apparatus according to one or more embodiments of the present specification;
Fig. 4 is a schematic structural diagram of a vehicle stop detection device according to one or more embodiments of the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in one or more embodiments of this specification, these solutions are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of this specification. All other embodiments derived by a person skilled in the art from one or more embodiments described herein without creative effort shall fall within the protection scope of this document.
An embodiment of the vehicle stop detection method provided by this specification is described below.
Fig. 1 shows a process flow chart of the vehicle stop detection method provided by this embodiment, and Fig. 2 shows a process flow chart of the method applied to a carbon-saving project scenario.
Referring to Fig. 1, the vehicle stop detection method provided by this embodiment specifically includes steps S102 to S108 below.
Step S102, acquiring a first image and a second image captured by a user for a vehicle.
The vehicle stop detection method provided by this embodiment first acquires two images collected by a user at the start and end of a certain time period, and preliminarily determines that the vehicle is stopped by comparing the driving characteristics, time information and position information in the two images. On that basis, it further queries the vehicle's driving records by calling a vehicle query interface, and queries the user's historical transaction records for driving-related transactions of the vehicle within the time period. If the query results are empty, the vehicle is determined to be in an effective stopped state, and a reward is issued to the user on that basis. This encourages users to participate in a carbon-saving project by leaving the vehicle parked, reduces the environmental pollution caused by driving, and improves the validity and accuracy of the vehicle stop determination.
In this embodiment, the user captures two images, one before and one after a certain time period, and the acquired images reflect or identify the state of the vehicle. The image captured at the starting time point of the period is called the first image, and the image captured at the ending time point is called the second image. The first image comprises a photograph, taken at the starting time point, of the odometer, the fuel gauge and/or the environment in which the vehicle is located; the second image comprises a photograph of the same subjects taken at the ending time point. For example, user U photographs the dashboard of vehicle C before and after a certain period: the dashboard photograph taken at the starting time point is the first image, and the one taken at the ending time point is the second image. Accordingly, the target time period, that is, the period over which the images were captured, is bounded by the first time information read from the first image and the second time information read from the second image.
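The notions introduced above, two timestamped snapshots and the target time period they bound, can be sketched as a small data model. This is a minimal illustration only; the `VehicleSnapshot` name and its fields are assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple


@dataclass
class VehicleSnapshot:
    """One user-submitted image plus the metadata read from it (illustrative)."""
    captured_at: datetime                            # time information read from the image
    mileage_km: Optional[float] = None               # odometer reading, if the photo shows it
    position: Optional[Tuple[float, float]] = None   # (lat, lon) collected at upload time


def target_period(first: VehicleSnapshot, second: VehicleSnapshot):
    """The target time period is bounded by the two capture times."""
    return first.captured_at, second.captured_at
```

The first image maps to the snapshot at the starting time point, the second image to the one at the ending time point; all later checks operate on these two records.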
This embodiment takes a carbon-saving project as an example and explains the vehicle stop detection method through a vehicle stop scenario. In a specific implementation, collecting a user's carbon-saving data while the user participates in the carbon-saving project requires the user's authorization. In an optional implementation provided by this embodiment, the user's permission to participate in the carbon-saving project is opened as follows:
sending the user a participation guidance prompt for joining the carbon-saving project;
and authorizing the user for the carbon-saving project according to the participation confirmation action the user submits in response to the prompt.
In practical applications, to acquire the right to participate in the carbon-saving project, the user needs to upload the vehicle driving license for identity verification, so that a binding relationship between the user and the vehicle is established; based on this binding relationship, any vehicle driving image submitted by the user is by default treated as an image of the vehicle bound to that user. In a specific implementation, after the user is authorized to participate in the carbon-saving project, in order to make participation more convenient and improve the user's awareness of the project, an optional implementation of this embodiment further includes, after the participation right is obtained:
receiving the vehicle driving license uploaded by the user for the vehicle;
and establishing a binding relationship between the user and the vehicle based on the driving license.
For example, when user U is detected entering the third-party platform, a participation guidance prompt for the carbon-saving project is sent to user U in order to encourage participation and further reduce the environmental pollution vehicles cause while being driven, and user U is authorized for the project once participation is confirmed. After obtaining the participation right, to further protect the user's rights and ensure the accuracy of the submitted images, user U uploads the vehicle driving license, and the binding relationship between user U and vehicle C is determined. The photographs of the dashboard of vehicle C taken by user U before and after a certain period are then acquired: the dashboard photograph taken at the starting time point serves as the first image, and the one taken at the ending time point serves as the second image.
In addition, when participating in the carbon-saving project, the user may photograph and upload an image of the environment in which the vehicle is located. For example, vehicle C of user U is parked in a parking lot; after user U leaves the vehicle, an image of the environment, one that contains not only the image characteristics of the vehicle but also the parking space and the relationship between the vehicle and the space, is collected as the first image for a certain time period. The second image is acquired in the same way.
Step S104, comparing the first image with the second image.
In practical applications, whether the vehicle can be preliminarily determined to be stopped is decided by comparing the first image and the second image that the user captured for the vehicle. Specifically, to improve accuracy, the driving characteristics, time information and position information of the two images are compared. In an optional implementation provided by this embodiment, the first image and the second image are compared as follows:
identifying first mileage information carried in the first image and second mileage information carried in the second image;
calculating the difference between the first mileage information and the second mileage information;
if the difference is smaller than a mileage threshold, comparing the first time information read from the first image with the second time information read from the second image;
and if the second time information has a forward time difference relative to the first time information, comparing the first position information with the second position information.
Accordingly, the vehicle stop condition includes: the position deviation between the first position information and the second position information is smaller than a preset position threshold.
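The comparison steps above, a mileage difference below a threshold, a forward time difference, and a position deviation below a preset threshold, can be sketched as follows. The threshold values are illustrative assumptions, not figures from the patent:

```python
from datetime import datetime

# Assumed thresholds for illustration only; the patent leaves them unspecified.
MILEAGE_THRESHOLD_KM = 1.0      # max odometer drift still counted as "stopped"
POSITION_THRESHOLD_DEG = 0.001  # max lat/lon deviation (roughly 100 m)


def satisfies_stop_condition(first_km: float, second_km: float,
                             first_time: datetime, second_time: datetime,
                             first_pos, second_pos) -> bool:
    # 1. The mileage difference must be below the mileage threshold.
    if abs(second_km - first_km) >= MILEAGE_THRESHOLD_KM:
        return False
    # 2. The second time must lie after the first (a forward time difference).
    if second_time <= first_time:
        return False
    # 3. The position deviation must be below the preset position threshold.
    d_lat = abs(second_pos[0] - first_pos[0])
    d_lon = abs(second_pos[1] - first_pos[1])
    return d_lat < POSITION_THRESHOLD_DEG and d_lon < POSITION_THRESHOLD_DEG
```

The checks run in the order the patent lists them, so a failed mileage comparison short-circuits the later, cheaper checks, mirroring the stepwise flow of the embodiment.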
Optionally, the first position information and the second position information are read as follows:
the position information of the user collected while the user uploads the first image is read as the first position information, stored, and associated with the first image;
similarly, the position information of the user collected while the user uploads the second image is read as the second position information, stored, and associated with the second image.
In a specific implementation, to ensure the security of user data, an optional implementation of this embodiment performs the following before reading the user's position information:
sending the user a reminder about the position collection permission needed to collect the user's position information;
and authorizing the position collection permission according to the confirmation action the user submits in response to the reminder.
For example, the first image and the second image that user U captured of vehicle C are acquired. Using OCR (Optical Character Recognition), the first vehicle driving mark carried in the first image is identified to obtain the first odometer reading of vehicle C, and the second vehicle driving mark in the second image is identified to obtain the second odometer reading. The difference between the two readings is calculated; if it is smaller than the mileage threshold, the first time information read from the metadata of the first image is compared with the second time information read from the metadata of the second image. If the second time information has a forward time difference relative to the first time information, the LBS (Location Based Services) position of user U collected while uploading the first image is further compared with the LBS position collected while uploading the second image. If the longitude deviation and the latitude deviation of the corresponding positions satisfy the preset conditions, the comparison result is determined to satisfy the vehicle stop condition. The LBS position collected while user U uploads the first image is the first position information, and the LBS position collected while user U uploads the second image is the second position information.
In addition, any one or two of the three dimensions, driving characteristics, time information and position information, may be compared. For example: first, the first vehicle driving mark carried in the first image and the second vehicle driving mark carried in the second image are identified; then the first vehicle driving mark is compared with the second vehicle driving mark, the first time information read from the first image is compared with the second time information read from the second image, and/or the first position information is compared with the second position information. The first position information may also be read directly from the metadata of the first image, and correspondingly the second position information from the metadata of the second image.
In practical applications, when comparing the first image with the second image, the first vehicle driving mark may also be a first fuel quantity identified from the first image (for example, remaining fuel, remaining natural gas, remaining battery charge, and the like), and the second vehicle driving mark a second fuel quantity identified from the second image. In that case, the first fuel quantity is compared with the second fuel quantity, the first position information with the second position information, and the first time information of the first image with the second time information of the second image.
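As a rough illustration of the fuel-quantity variant: a stopped vehicle should show an unchanged, or only marginally lower, gauge reading. The tolerance value and the choice to reject an increased reading (which would suggest refuelling, and hence movement) are assumptions, not rules stated in the patent:

```python
FUEL_THRESHOLD = 0.5  # assumed tolerance, in the gauge's own units (litres, kWh, ...)


def fuel_unchanged(first_fuel: float, second_fuel: float) -> bool:
    """Check that the fuel/charge level is consistent with a stopped vehicle.

    Only a small drop (e.g. evaporation or self-discharge) is tolerated; a
    higher second reading is treated as evidence the vehicle was used.
    """
    drop = first_fuel - second_fuel
    return 0 <= drop < FUEL_THRESHOLD
```

In the full comparison this check would play the same role as the mileage-difference check, with the time and position comparisons unchanged.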
In addition, when the first image and the second image are environment images of the vehicle's surroundings that the user photographs and uploads while participating in the carbon-saving project, in order to improve the user's awareness of the project, an optional implementation of this embodiment compares the two environment images as follows:
performing image segmentation on the first image to obtain a first stopping environment feature, and on the second image to obtain a second stopping environment feature;
if the first and second stopping environment features satisfy an environment feature condition, comparing the first time information read from the first image with the second time information read from the second image;
if the second time information has a forward time difference relative to the first time information, comparing the first position information with the second position information.
Correspondingly, the vehicle stop condition includes: the position deviation between the first position information and the second position information is smaller than a preset position threshold.
For example, the first image and the second image that user U captured of vehicle C are environment images of the vehicle's surroundings. To preliminarily determine from them that vehicle C is stopped, image segmentation is applied to the first image to obtain the first stopping environment feature and to the second image to obtain the second stopping environment feature, and the feature similarity between the two is calculated. If the similarity exceeds a similarity threshold, the first time information read from the metadata of the first image is compared with the second time information read from the metadata of the second image; if the second time information has a forward time difference relative to the first time information, the longitude and latitude deviations between the first position information and the second position information are further calculated. Accordingly, if the longitude deviation satisfies the preset longitude condition and the latitude deviation satisfies the preset latitude condition, vehicle C is preliminarily determined to be stopped.
In addition, to further improve the accuracy of the vehicle state determination, the vehicle's state may be preliminarily determined by comparing one or more of the vehicle mileage, the fuel quantity and the vehicle's environment, together with the time information and/or the position information.
Step S106, querying the characteristic driving records of the vehicle in the target time period when the comparison result satisfies the vehicle stop condition.
In this embodiment, when the comparison result satisfies the vehicle stop condition, querying the characteristic driving records of the vehicle in the target time period specifically includes calling a vehicle query interface to query the vehicle's driving records within the target time period, and querying the historical transaction records for driving-related transactions of the vehicle within that period.
The vehicle stop condition refers to a constraint that can prove the vehicle is stopped. Optionally, the vehicle stop condition includes: the first vehicle driving mark and the second vehicle driving mark satisfy a characteristic mark condition, the second time information has a forward time difference relative to the first time information, and/or the position deviation between the first position information and the second position information is smaller than a preset position threshold.
The target time period is the period bounded by the time information in the metadata of the first image and the time information in the metadata of the second image. A characteristic driving record refers to a vehicle-related consumption record, or a vehicle-related driving record held by another system.
In a specific implementation, to further improve the validity of the stopped-state determination, once the vehicle is preliminarily determined to be stopped, the characteristic driving records of the vehicle are further queried within the target time period bounded by the first time information read from the metadata of the first image and the second time information read from the metadata of the second image. Specifically, a vehicle query interface is called to check whether the vehicle has any driving record within the target time period, and the historical transaction records on the user's third-party platform are checked for driving records within the same period. Either the driving records or the driving transaction records may be queried alone, with the effective stopped state determined when that query result is empty; or both may be queried, with the effective stopped state determined only when both results are empty.
To further confirm the stopped state of the vehicle, this embodiment provides an optional implementation in which the characteristic driving records of the vehicle within the target time period are queried as follows:
calling a vehicle query interface provided by a third-party system, and passing the target vehicle identifier of the vehicle, the first time information and the second time information to the interface; the third-party system identifies the vehicle identifiers corresponding to vehicle driving images of a preset geographic area within the target time period, and searches the identified vehicle identifiers for the target vehicle identifier;
obtaining the first query result returned by the vehicle query interface;
and, if the first query result is empty, querying the historical transaction records for driving transactions of the vehicle within the target time period, and taking the obtained second query result as the query result.
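The two-stage query above can be sketched as follows. The two callable parameters stand in for the third-party vehicle query interface and the transaction-record lookup; both are placeholders, not real APIs:

```python
def vehicle_effectively_stopped(query_driving_records, query_transaction_records,
                                vehicle_id, t_start, t_end) -> bool:
    """Two-stage check: third-party driving records first, then the user's own
    transaction history; the stop is 'effective' only if both come back empty."""
    # Stage 1: ask the third-party vehicle query interface.
    first_result = query_driving_records(vehicle_id, t_start, t_end)
    if first_result:  # any sighting of the vehicle means it was driven
        return False
    # Stage 2: look for driving-related transactions (fuel, tolls, parking).
    second_result = query_transaction_records(vehicle_id, t_start, t_end)
    return not second_result  # empty second result -> effective stop
```

Running the cheaper third-party check first means the transaction history is only consulted when the first query result is empty, matching the order of the steps above.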
In addition, when calling the vehicle query interface to query the vehicle's driving records within the target time period, the records are queried within a preset geographic area in order to reduce the query base and improve query efficiency. For example, if the vehicle stops for two days, the maximum distance the vehicle could possibly travel in two days is calculated from factors such as the speed limit, and the preset geographic area is the circle centred on the stop position with that maximum distance as its radius.
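The preset geographic area described above amounts to an upper bound on travel distance. A minimal sketch, with an assumed default maximum speed (the patent names the speed limit only as one example factor):

```python
def search_radius_km(stop_hours: float, max_speed_kmh: float = 120.0) -> float:
    """Upper bound on how far the vehicle could have travelled while supposedly
    stopped; driving records are only queried inside the circle of this radius,
    centred on the stop position."""
    return stop_hours * max_speed_kmh
```

For a two-day stop at an assumed 120 km/h ceiling this gives a 5,760 km radius; in practice a tighter bound (regional speed limits, road network) would shrink the query base further.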
For example, vehicle C of user U is preliminarily determined to be stopped, and a further stop query is made for vehicle C to prevent user U from cheating. During the query, the vehicle query interface provided by a data provider is called first, and the vehicle identifier of vehicle C is sent to the interface as the target vehicle identifier, together with the first time information from the metadata of the first image and the second time information from the metadata of the second image collected from user U. After receiving this information, the data provider identifies the vehicle identifiers in the vehicle driving images photographed and uploaded by image providers within the preset geographic area during the target time period bounded by the two pieces of time information, searches the identified identifiers for the target vehicle identifier, and finally generates a first query result and returns it through the vehicle query interface. If the returned first query result is empty, the historical transaction records of user U are queried for driving transactions of vehicle C within the target time period, and the obtained second query result is taken as the query result, thereby safeguarding the determination of the vehicle's stopped state.
Specifically, to make the determined stopped state more reliable and reduce the risk of user cheating, the user's permission to call the vehicle query interface must be opened before the interface is called for the query. In an optional implementation provided by this embodiment, this permission is opened as follows:
sending the user a call guidance prompt for calling the vehicle query interface;
and authorizing the user for the vehicle query interface according to the call confirmation action the user submits in response to the prompt.
In addition, when querying the characteristic driving record of the vehicle in the target time period, either the driving record or the driving transaction record (or both) may be queried. This embodiment provides an optional implementation in which the characteristic driving record of the vehicle in the target time period is queried in the following manner:
calling a vehicle query interface provided by a third-party system, and transmitting a target vehicle identification of the vehicle, the first time information and the second time information to the vehicle query interface; the third-party system identifies vehicle identifications corresponding to vehicle running images of a preset geographic area in the target time period, and inquires the target vehicle identification in the vehicle identifications obtained by identification;
acquiring a query result returned by the vehicle query interface;
and/or,
inquiring the driving transaction record of the vehicle in the target time period in the historical transaction record, and determining an inquiry result;
wherein the driving transaction record comprises at least one of: a refueling transaction record for the vehicle, a no-parking electronic toll collection transaction record for the vehicle, and a parking transaction record for the vehicle.
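The two query sources described above (the third-party vehicle query interface and the historical travel transaction records) can be sketched as follows. This is a minimal illustration only; the function names, the `TravelRecord` structure, and the callback signature are hypothetical assumptions, not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TravelRecord:
    """One historical travel transaction (refueling, ETC toll, or parking)."""
    vehicle_id: str
    kind: str       # e.g. "refueling", "etc_toll", "parking"
    timestamp: float


def query_characteristic_records(
    vehicle_id: str,
    t_start: float,
    t_end: float,
    query_interface: Callable[[str, float, float], List[str]],
    transaction_history: List[TravelRecord],
) -> list:
    """Collect characteristic driving records from both sources: the
    third-party interface (plate identifiers recognized from road images
    in the target time period) and the user's historical transactions."""
    results: list = []
    # First source: identifiers recognized from vehicle driving images.
    matches = query_interface(vehicle_id, t_start, t_end)
    results.extend(m for m in matches if m == vehicle_id)
    # Second source: refueling / ETC / parking transactions in the window.
    results.extend(
        r for r in transaction_history
        if r.vehicle_id == vehicle_id and t_start <= r.timestamp <= t_end
    )
    return results
```

An empty result from both sources corresponds to the case in which the vehicle is later determined to be in an effective stop state.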
And step S108, if the query result is empty, determining that the vehicle is in an effective stop state in the target time period.
The effective stop state means that, on the basis of the preliminary determination that the vehicle is stopped, the query further confirms that no anomaly exists in the stop state; only then is the vehicle considered to be in an effective stop state.
In practical applications, to encourage users to participate in the carbon-saving project and reduce environmental pollution, users are often given certain rewards for participating, and the specific reward issued is determined according to the carbon-saving index the user obtains in the project. In an optional implementation provided by this embodiment, after the vehicle is determined to be in an effective stop state, the carbon-saving index of the effective stop is calculated in the following manner:
determining the effective stop duration of the vehicle according to the target time period of the effective stop state of the vehicle;
calculating a carbon saving index of the user for the effective stop of the vehicle according to a preset carbon saving quantification algorithm based on the effective stop duration of the vehicle;
and when the carbon-saving index is accumulated to meet a preset carbon-saving threshold, the carbon-saving index is converted into an available carbon-saving resource or the participation right for participating in the carbon-saving behavior is opened.
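The index calculation in the steps above can be illustrated with a minimal sketch. The linear rate and the threshold value are assumptions chosen for illustration; the disclosure leaves the concrete carbon-saving quantification algorithm open.

```python
def carbon_saving_index(stop_hours: float, rate_per_hour: float = 0.5) -> float:
    """Hypothetical quantification: the index grows linearly with the
    effective stop duration of the vehicle."""
    return stop_hours * rate_per_hour


def redeemable(accumulated_index: float, threshold: float = 100.0) -> bool:
    """Once the accumulated index meets the preset carbon-saving threshold,
    it can be exchanged for an available carbon-saving resource or unlock
    the right to participate in carbon-saving behaviors."""
    return accumulated_index >= threshold
```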
For example, after vehicle C of user U is determined to be in an effective stop state, the effective stop duration of vehicle C is determined according to the target time period formed by the first time information in the metadata of the first image and the second time information in the metadata of the second image collected by user U. On the basis of the determined effective stop duration, the carbon-saving index of user U for the effective stop of vehicle C is calculated according to the carbon-saving quantification algorithm.
In addition, in an optional implementation manner provided by this embodiment, the carbon-saving index of the user for the effective stop of the vehicle may also be calculated according to the vehicle type of the vehicle and/or the geographic area range where the vehicle is located; and when the carbon-saving index is accumulated to meet a preset threshold value, the carbon-saving index is converted into an available carbon-saving resource or the participation right participating in the carbon-saving behavior is opened.
In specific implementation, after the carbon-saving index of the user participating in the carbon-saving project is determined, the user is rewarded according to that index. In an optional implementation provided by this embodiment, to encourage the user to travel more by public transportation and reduce the environmental pollution caused by driving, a user determined to be in the effective stop state is rewarded in the following manner:
displaying the carbon-saving identification carrying the carbon-saving index to the user;
and accumulating the carbon-saving index to the historical carbon-saving index of the user according to a confirmation instruction submitted by the user aiming at the carbon-saving identification.
For example, vehicle C of user U is determined to be in an effective stop state with an effective stop duration of three days. The carbon-saving index of this effective stop is calculated as num1 according to the carbon-saving quantification algorithm, and the carbon-saving identifier carrying num1 is displayed to user U. Upon user U's confirmation of the identifier, the index num1 is accumulated into the historical carbon-saving index of user U; once user U's accumulated carbon-saving index reaches a certain value, user U may participate in carbon-saving behaviors or exchange it for a specified product.
The following further describes the vehicle stop detection method provided in this embodiment with reference to fig. 2, taking its application to a carbon-saving project scenario as an example. Referring to fig. 2, the vehicle stop detection method applied to the carbon-saving project scenario specifically includes steps S202 to S232.
Step S202, sending participation guide reminding for participating in the carbon saving project to the user, and carrying out authorization processing on the carbon saving project for the user according to participation confirmation actions of the user.
And step S204, acquiring the driving pass certificate uploaded by the user, and establishing the binding relationship between the user and the vehicle.
Here, the driving pass credential refers to the user's driving permit for the vehicle.
And step S206, sending a vehicle inquiry interface calling guidance prompt for calling a data provider to the user, and performing authorization processing on the vehicle inquiry interface for the user according to the calling confirmation action of the user.
Step S208, a first image collected by a user is obtained.
The first image includes a photograph taken by the user of the odometer, the amount of fuel, and/or the environment in which the vehicle is located.
Step S210, identifying the first image to obtain a first vehicle driving mark, and obtaining first time information and first position information.
Specifically, the first vehicle running mark comprises the mileage of the vehicle, the fuel quantity of the vehicle and/or the environment where the vehicle is located; reading first time information from metadata of a first image; and reading the position information of the user, collected in the process of uploading the first image by the user, as the first position information, storing the first position information and establishing an association relation with the first image.
In specific implementation, before reading the position information of the user, sending reminding information of the position acquisition permission for acquiring the position information of the user to the user, and performing authorization processing of the position acquisition permission according to a confirmation action of the user on the reminding information.
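Reading the first time information from the image metadata can be sketched as follows, assuming EXIF-style metadata with a `DateTimeOriginal` field (the field name and timestamp format follow the EXIF convention; the disclosure itself does not fix a metadata format).

```python
from datetime import datetime


def read_capture_time(metadata: dict) -> datetime:
    """Read the capture time from the image's EXIF-style metadata.
    EXIF stores DateTimeOriginal as 'YYYY:MM:DD HH:MM:SS'."""
    return datetime.strptime(metadata["DateTimeOriginal"], "%Y:%m:%d %H:%M:%S")


def target_period(first_meta: dict, second_meta: dict) -> tuple:
    """The first time information and the second time information
    together form the target time period used in the later query steps."""
    return read_capture_time(first_meta), read_capture_time(second_meta)
```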
Step S212, a second image acquired by the user is acquired.
The second image includes a photograph taken by the user of the odometer, the amount of fuel, and/or the environment in which the vehicle is located.
Step S214, the second image is recognized to obtain the second vehicle running mark, and the second time information and the second position information are obtained.
Specifically, the second vehicle running mark comprises the vehicle mileage, the vehicle fuel quantity and/or the environment where the vehicle is located; the first time information and the second time information form a target time period; the second time information is read from the metadata of the second image; and reading the position information of the user, collected in the process of uploading the second image by the user, as second position information, storing the second position information and establishing an association relation with the second image.
Step S216, comparing the first vehicle running mark with the second vehicle running mark.
If the first vehicle running flag and the second vehicle running flag satisfy the feature flag condition, performing step S218;
otherwise, step S232 is executed.
Step S218, judging whether the second time information has a positive difference relative to the first time information;
if yes, go to step S220;
if not, go to step S232.
Step S220, calculating longitude deviation and latitude deviation in the first position information and the second position information;
if the longitude deviation is smaller than the preset longitude threshold and the latitude deviation is smaller than the preset latitude threshold, executing step S222 to step S226;
otherwise, step S232 is executed.
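The longitude/latitude deviation check of step S220 can be sketched as follows; the threshold values are illustrative assumptions, standing in for the preset longitude and latitude thresholds.

```python
def position_within_threshold(
    first_pos: tuple,
    second_pos: tuple,
    lon_threshold: float = 0.001,
    lat_threshold: float = 0.001,
) -> bool:
    """The vehicle is considered not to have moved when both the
    longitude deviation and the latitude deviation between the first and
    second position information stay below their preset thresholds."""
    lon1, lat1 = first_pos
    lon2, lat2 = second_pos
    return abs(lon1 - lon2) < lon_threshold and abs(lat1 - lat2) < lat_threshold
```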
Step S222, calling a vehicle query interface provided by the data provider, and sending the target vehicle identifier, the first time information, and the second time information of the vehicle to the vehicle query interface.
After receiving the target vehicle identification, the first time information and the second time information, the data provider determines a preset geographic area, identifies the vehicle identification of a vehicle running image of the geographic area in a target time period, inquires the target vehicle identification in the vehicle identification obtained by identification, and finally returns a first inquiry result through the vehicle inquiry interface;
the vehicle driving image is shot by an image provider and uploaded to a data provider.
Step S224, a first query result is obtained.
In step S226, in case that the first query result is empty, it is queried in the historical transaction record of the user whether there is a driving transaction record for the vehicle in the target time period.
If not, determining that the vehicle is in an effective stop state, and executing the steps S228 to S230;
if yes, go to step S232.
In step S228, an effective stop time period of the vehicle is determined according to the target time period, and the carbon-saving index of the effective stop is calculated based on the effective stop time period.
Step S230, displaying the carbon-saving identifier carrying the carbon-saving index to the user, and accumulating the carbon-saving index into the historical carbon-saving index of the user according to the confirmation operation of the user on the carbon-saving identifier.
In step S232, it is determined that the vehicle is not in the stopped state.
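The branching of steps S216 through S232 can be condensed into one decision function. The inputs and return values are illustrative assumptions; the original flow operates on the comparison results and query results described above.

```python
def detect_stop_state(
    marks_match: bool,          # S216: vehicle running marks satisfy the
                                #       characteristic mark condition
    time_delta_s: float,        # S218: second time minus first time
    position_ok: bool,          # S220: deviations under both thresholds
    first_query_empty: bool,    # S222-S224: interface query result empty
    has_travel_transactions: bool,  # S226: transactions in target period
) -> str:
    """Every check must pass before the vehicle is reported as being in
    an effective stop state; any failure falls through to step S232."""
    if not marks_match:
        return "not stopped"
    if time_delta_s <= 0:
        return "not stopped"
    if not position_ok:
        return "not stopped"
    if not first_query_empty:
        return "not stopped"
    if has_travel_transactions:
        return "not stopped"
    return "effective stop"     # proceed to S228-S230
```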
In summary, in the vehicle stop detection method provided in this embodiment, a first image and a second image collected by a user before and after a certain period are first obtained, and their feature parameters are compared. When the comparison result satisfies the vehicle stop condition, the characteristic driving record of the vehicle in the target time period is further queried; if the query result is empty, the vehicle is determined to be in an effective stop state in the target time period. This improves the reliability of the determined vehicle stop state, reduces the possibility of falsification by the user, and prevents cheating.
An embodiment of the vehicle stop detection device provided in this specification is as follows:
corresponding to the vehicle stop detection method provided in the above embodiments and based on the same technical concept, this specification further provides a vehicle stop detection device, which is described below with reference to the accompanying drawings.
Referring to fig. 3, a schematic diagram of a vehicle stop detection device provided in the present embodiment is shown.
Since the device embodiments correspond to the method embodiments, their description is relatively brief; for relevant parts, refer to the corresponding description of the method embodiments provided above. The device embodiments described below are merely illustrative.
The present embodiment provides a vehicle stop detection device including:
an acquisition module 302 configured to acquire a first image and a second image captured by a user for a vehicle;
a comparison module 304 configured to compare the first image with the second image;
the query module 306 is configured to query a characteristic driving record of the vehicle in a target time period if the comparison result meets the vehicle stopping condition; the target time period is constituted by first time information read from the first image and second time information read from the second image;
a determination module 308 configured to determine that the vehicle is in an active stop state for the target time period if the query result is empty.
Optionally, the vehicle stop detection device further includes:
a stop time period determination module configured to determine an effective stop time period of the vehicle according to a target time period in which an effective stop state of the vehicle is located;
a carbon saving index calculation module configured to calculate a carbon saving index of the user for the effective stop of the vehicle according to a preset carbon saving quantification algorithm based on the effective stop time of the vehicle; and when the carbon-saving index is accumulated to meet a preset carbon-saving threshold, the carbon-saving index is converted into an available carbon-saving resource or the participation permission for participating in the carbon-saving behavior is opened.
Optionally, the alignment module 304 includes:
an identification sub-module configured to identify a first vehicle travel mark carried by the first image and a second vehicle travel mark carried by the second image;
a feature comparison sub-module configured to compare the first vehicle travel mark with the second vehicle travel mark, compare first time information read from the first image with second time information read from the second image, and/or compare first position information with second position information.
Optionally, the first location information and the second location information are read by operating the following sub-modules:
a first position information sub-reading module configured to read user position information of the user, which is acquired in a process of uploading the first image by the user, as the first position information; storing the first position information and establishing an incidence relation with the first image;
a second position information sub-reading module configured to read user position information of the user acquired in a process of uploading the second image by the user as the second position information; and storing the second position information and establishing an incidence relation with the second image.
Optionally, before the user location information is read, the following sub-modules are operated:
the position authority reminding sending submodule is configured to send reminding information of the position authority for collecting the user position information of the user to the user;
and the position authority authorization sub-module is configured to carry out authorization processing on the position authority of the user according to the confirmation action submitted by the user aiming at the reminding information.
Optionally, the vehicle-stop condition includes:
the first vehicle running mark and the second vehicle running mark meet a characteristic mark condition, the second time information has a forward time difference relative to the first time information, and/or the position deviation between the first position information and the second position information is smaller than a preset position threshold value.
Optionally, the alignment module 304 includes:
a mileage identification submodule configured to identify first mileage information carried by the first image and second mileage information carried by the second image;
a mileage difference value calculating sub-module configured to calculate a difference value of the first mileage information and the second mileage information;
the time comparison sub-module is configured to compare first time information read from the first image with second time information read from the second image if the difference value of the first mileage information and the second mileage information is smaller than a mileage threshold value;
the position comparison submodule is configured to compare the first position information with the second position information under the condition that the second time information has a forward time difference relative to the first time information;
correspondingly, the vehicle-stop condition includes: the position deviation of the first position information and the second position information is smaller than a preset position threshold value.
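The staged comparison performed by these sub-modules can be sketched in one function; all threshold values are illustrative assumptions standing in for the preset thresholds.

```python
def compare_by_mileage(
    first_km: float,
    second_km: float,
    first_time: float,
    second_time: float,
    first_pos: tuple,
    second_pos: tuple,
    mileage_threshold_km: float = 1.0,
    pos_threshold: float = 0.001,
) -> bool:
    """Staged check: the odometer difference must stay under the mileage
    threshold, the second time must have a forward difference relative to
    the first, and the position deviation must stay under its threshold."""
    if abs(second_km - first_km) >= mileage_threshold_km:
        return False                      # vehicle accumulated mileage
    if second_time - first_time <= 0:
        return False                      # no forward time difference
    return (abs(first_pos[0] - second_pos[0]) < pos_threshold
            and abs(first_pos[1] - second_pos[1]) < pos_threshold)
```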
Optionally, the query module 306 includes:
the driving record inquiry submodule is configured to call a vehicle inquiry interface provided by a third-party system, and transmit a target vehicle identifier of the vehicle, the first time information and the second time information to the vehicle inquiry interface; the third-party system identifies vehicle identifications corresponding to vehicle running images of a preset geographic area in the target time period, and inquires the target vehicle identification in the vehicle identifications obtained by identification;
the query result acquisition sub-module is configured to acquire a query result returned by the vehicle query interface;
the driving transaction record query sub-module is configured to query the driving transaction records of the vehicle in the target time period in historical transaction records and determine query results;
wherein the travel transaction record includes at least one of: a refueling transaction record for the vehicle, a no-parking electronic toll collection transaction record for the vehicle, and a parking transaction record for the vehicle.
Optionally, the query module 306 is specifically configured to: calling a vehicle query interface provided by a third-party system, and transmitting a target vehicle identification of the vehicle, the first time information and the second time information to the vehicle query interface; the third-party system identifies the vehicle identification of the vehicle running image of the preset geographic area in the target time period, and inquires the target vehicle identification in the vehicle identification obtained by identification; acquiring a first query result returned by the vehicle query interface; and under the condition that the first query result is empty, querying a driving transaction record of the vehicle in the target time period in a historical transaction record, and obtaining a second query result as the query result.
Optionally, the vehicle stop detection device further includes:
a call guidance module configured to send a call guidance alert to the user that calls the vehicle query interface;
and the calling authorization module is configured to perform authorization processing on the vehicle query interface for the user according to a calling confirmation action submitted by the user based on the calling guide reminder.
Optionally, the vehicle stop detection device further includes:
a participation guidance module configured to send a participation guidance reminder to the user to participate in the carbon-saving project;
and the participation authorization module is configured to authorize the user for the carbon-saving project according to a participation confirmation action submitted by the user based on the participation guide reminder.
Optionally, the vehicle stop detection device further includes:
a credential receiving module configured to receive a driving pass credential uploaded by the user for the vehicle;
a relationship establishing module configured to establish a binding relationship between the user and the vehicle based on the travel pass credential.
Optionally, the vehicle stop detection device further includes:
a second index calculation module configured to calculate a carbon-saving index of the user for the effective stop of the vehicle according to the vehicle type of the vehicle and/or the geographic area range where the vehicle is located; and when the accumulation of the carbon-saving index meets a preset threshold value, the carbon-saving index is converted into an available carbon-saving resource or the participation permission for participating in the carbon-saving behavior is opened.
Optionally, the vehicle stop detection device further includes:
a display module configured to display the carbon-saving identifier carrying the carbon-saving indicator to the user;
an accumulation module configured to accumulate the carbon-saving indicator to a historical carbon-saving indicator of the user according to a confirmation instruction submitted by the user for the carbon-saving identifier.
Optionally, the comparison module 304 is specifically configured to perform image segmentation on the first image to obtain a first stop environment feature, and perform image segmentation on the second image to obtain a second stop environment feature; comparing first time information read from the first image with second time information read from the second image when the first and second stop environmental features satisfy an environmental feature condition; comparing the first position information with the second position information under the condition that the second time information has forward time difference relative to the first time information; accordingly, the vehicle stop condition includes: the position deviation of the first position information and the second position information is smaller than a preset position threshold value.
An embodiment of the vehicle stop detection apparatus provided in this specification is as follows:
corresponding to the vehicle stop detection method described above and based on the same technical concept, one or more embodiments of this specification further provide a vehicle stop detection apparatus for performing the vehicle stop detection method described above. Fig. 4 is a schematic structural diagram of the vehicle stop detection apparatus provided in one or more embodiments of this specification.
As shown in fig. 4, the vehicle stop detection apparatus may vary considerably in configuration or performance, and may include one or more processors 401 and a memory 402, where the memory 402 may store one or more applications or data. The memory 402 may be transient or persistent storage. The application program stored in the memory 402 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the vehicle stop detection apparatus. Further, the processor 401 may be configured to communicate with the memory 402 to execute, on the vehicle stop detection apparatus, the series of computer-executable instructions in the memory 402. The vehicle stop detection apparatus may also include one or more power supplies 403, one or more wired or wireless network interfaces 404, one or more input/output interfaces 405, one or more keyboards 406, and the like.
In one particular embodiment, the vehicle-stop detection apparatus includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the vehicle-stop detection apparatus, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
acquiring a first image and a second image which are acquired by a user aiming at a vehicle;
comparing the first image with the second image;
under the condition that the comparison result meets the vehicle stop condition, inquiring characteristic driving records of the vehicle in a target time period; the target time period is composed of first time information read from the first image and second time information read from the second image;
and if the query result is empty, determining that the vehicle is in an effective stop state in the target time period.
Optionally, the querying a characteristic driving record of the vehicle in a target time period includes:
calling a vehicle query interface provided by a third-party system, and transmitting a target vehicle identification of the vehicle, the first time information and the second time information to the vehicle query interface; the third-party system identifies vehicle identifications corresponding to vehicle running images of a preset geographic area in the target time period, and inquires the target vehicle identification in the vehicle identifications obtained by identification;
acquiring a query result returned by the vehicle query interface;
and/or,
inquiring the driving transaction record of the vehicle in the target time period in the historical transaction record, and determining an inquiry result;
wherein the driving transaction record comprises at least one of: a refueling transaction record for the vehicle, a no-parking electronic toll collection transaction record for the vehicle, and a parking transaction record for the vehicle.
Optionally, the querying a characteristic driving record of the vehicle in a target time period includes:
calling a vehicle query interface provided by a third-party system, and transmitting a target vehicle identification of the vehicle, the first time information and the second time information to the vehicle query interface; the third-party system identifies the vehicle identification of the vehicle running image of the preset geographic area in the target time period, and inquires the target vehicle identification in the vehicle identification obtained by identification;
acquiring a first query result returned by the vehicle query interface;
and under the condition that the first query result is empty, querying a driving transaction record of the vehicle in the target time period in a historical transaction record, and obtaining a second query result as the query result.
Optionally, before the step of calling a vehicle query interface provided by a third-party system and transmitting the target vehicle identifier of the vehicle, the first time information, and the second time information to the vehicle query interface, the method further includes:
sending a call guidance prompt for calling the vehicle inquiry interface to the user;
and carrying out authorization processing on the vehicle query interface for the user according to a call confirmation action submitted by the user based on the call guiding prompt.
An embodiment of a storage medium provided in this specification is as follows:
in correspondence to the vehicle-stop detection method described above, based on the same technical idea, one or more embodiments of the present specification further provide a storage medium.
The storage medium provided in this embodiment is used to store computer-executable instructions, and when executed, the computer-executable instructions implement the following processes:
acquiring a first image and a second image which are acquired by a user aiming at a vehicle;
comparing the first image with the second image;
under the condition that the comparison result meets the vehicle stop condition, inquiring characteristic driving records of the vehicle in a target time period; the target time period is constituted by first time information read from the first image and second time information read from the second image;
and if the query result is empty, determining that the vehicle is in an effective stop state in the target time period.
Optionally, the querying a characteristic driving record of the vehicle in a target time period includes:
calling a vehicle query interface provided by a third-party system, and transmitting a target vehicle identification of the vehicle, the first time information and the second time information to the vehicle query interface; the third-party system identifies vehicle identifications corresponding to vehicle running images of a preset geographic area in the target time period, and inquires the target vehicle identification in the vehicle identifications obtained by identification;
acquiring a query result returned by the vehicle query interface;
and/or,
inquiring the driving transaction record of the vehicle in the target time period in the historical transaction record, and determining an inquiry result;
wherein the driving transaction record comprises at least one of: a refueling transaction record for the vehicle, a no-parking electronic toll collection transaction record for the vehicle, and a parking transaction record for the vehicle.
Optionally, the querying a characteristic driving record of the vehicle in a target time period includes:
calling a vehicle query interface provided by a third-party system, and transmitting a target vehicle identification of the vehicle, the first time information and the second time information to the vehicle query interface; the third-party system identifies the vehicle identification of the vehicle running image of the preset geographic area in the target time period, and inquires the target vehicle identification in the vehicle identification obtained by identification;
acquiring a first query result returned by the vehicle query interface;
and under the condition that the first query result is empty, querying a driving transaction record of the vehicle in the target time period in a historical transaction record, and obtaining a second query result as the query result.
Optionally, before the step of calling a vehicle query interface provided by a third-party system and transmitting the target vehicle identifier of the vehicle, the first time information, and the second time information to the vehicle query interface, the method further includes:
sending a call guidance prompt for calling the vehicle inquiry interface to the user;
and performing authorization processing on the vehicle query interface for the user according to a call confirmation action submitted by the user based on the call guiding reminder.
It should be noted that the storage medium embodiment in this specification and the vehicle stop detection method embodiments in this specification are based on the same inventive concept; therefore, for the specific implementation of this embodiment, refer to the implementation of the foregoing corresponding method, and repeated details are not described here.
The foregoing has described embodiments of features of the present specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. Additionally, the processes depicted in the accompanying figures do not necessarily require the order in which features are shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement in a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by a user's programming of the device. A designer "integrates" a digital system onto a PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, and the source code to be compiled is written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most widely used.
It should also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained simply by briefly programming the method flow in one of the hardware description languages mentioned above and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing the various functions may also be regarded as structures within the hardware component. Indeed, the means for performing the various functions may even be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in multiple software and/or hardware when implementing the embodiments of the present description.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present document and is not intended to limit the present document. Various modifications and changes may occur to those skilled in the art from this document. Any modifications, equivalents, improvements, etc. which come within the spirit and principle of the disclosure are intended to be included within the scope of the claims of this document.

Claims (16)

1. A vehicle stop detection method, comprising:
acquiring a first image and a second image which are acquired by a user aiming at a vehicle;
when the feature similarity of a first stop environment feature and a second stop environment feature obtained by performing image segmentation processing on the first image and the second image is greater than a similarity threshold value, determining the stop state of the vehicle by comparing time information and/or position information of the first image and the second image; the position information is the position information of the user acquired in the image uploading process;
on the basis of determining that the vehicle is in a stop state, inquiring a driving transaction record of the vehicle in a target time period in a historical transaction record, and determining an inquiry result;
and if the query result is empty, determining that the vehicle is in an effective stop state in a target time period formed by the first time information and the second time information.
2. The vehicle stop detection method according to claim 1, the travel transaction record comprising at least one of: a refueling transaction record for the vehicle, a non-stop electronic toll collection transaction record for the vehicle, and a parking transaction record for the vehicle.
3. The vehicle stop detection method according to claim 1, wherein, after the step of determining that the vehicle is in the active stop state in the target time period formed by the first time information and the second time information is executed if the query result is empty, the method further comprises:
determining the effective stop duration of the vehicle according to the target time period of the effective stop state of the vehicle;
calculating a carbon-saving index of the user for the effective stop of the vehicle according to a preset carbon-saving quantification algorithm based on the effective stop duration of the vehicle;
and when the carbon-saving index is accumulated to meet a preset carbon-saving threshold, the carbon-saving index is converted into an available carbon-saving resource or the participation permission for participating in the carbon-saving behavior is opened.
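As an illustration of what the "preset carbon-saving quantification algorithm" in claim 3 could look like, here is a deliberately simple linear sketch; the per-hour coefficient and the threshold logic are invented for the example and are not specified by this document.

```python
# Invented linear quantification: the carbon-saving index grows with
# the effective stop duration. The factor is a made-up value.
CARBON_INDEX_PER_HOUR = 120.0

def carbon_saving_index(effective_stop_hours):
    """Map an effective stop duration (hours) to a carbon-saving index."""
    return effective_stop_hours * CARBON_INDEX_PER_HOUR

def can_convert(accumulated_index, preset_threshold):
    # Claim 3: once the accumulated index meets the preset threshold,
    # it may be converted into an available carbon-saving resource.
    return accumulated_index >= preset_threshold
```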
4. The vehicle stop detection method according to claim 1, wherein comparing the time information and/or position information of the first image and the second image comprises:
comparing the first time information read from the first image with the second time information read from the second image, and/or comparing the first position information with the second position information.
5. The vehicle stop detection method according to claim 4, wherein the first position information and the second position information are read in the following manner:
reading the position information of the user, which is acquired in the process of uploading the first image by the user, as the first position information, storing the first position information and establishing an incidence relation with the first image;
and reading the position information of the user, collected in the process of uploading the second image by the user, as the second position information, storing the second position information and establishing an incidence relation with the second image.
6. The vehicle-stop detection method according to claim 5, wherein, before the position information of the user is read, the following operations are performed:
sending reminding information of position acquisition permission for acquiring the position information of the user to the user;
and carrying out authorization processing on the position acquisition permission according to the confirmation action submitted by the user aiming at the reminding information.
7. The vehicle stop detection method according to claim 1, further comprising, before the operation of determining the stopped state of the vehicle by comparing time information and/or position information of the first image and the second image is performed:
identifying first mileage information carried by the first image and second mileage information carried by the second image;
calculating a difference value between the first mileage information and the second mileage information;
and if the difference value between the first mileage information and the second mileage information is smaller than a mileage threshold, performing the operation of comparing whether there is a forward time difference between the second time information read from the second image and the first time information read from the first image.
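The pre-check in claim 7 — compare capture times only when the odometer barely changed — reduces to two small predicates. The names and kilometer units below are illustrative assumptions.

```python
def mileage_precheck_passes(first_mileage_km, second_mileage_km,
                            mileage_threshold_km):
    # Claim 7: proceed to the time comparison only when the mileage
    # difference between the two images is below the threshold.
    return abs(second_mileage_km - first_mileage_km) < mileage_threshold_km

def has_forward_time_difference(first_time, second_time):
    # The second image must have been captured after the first.
    return second_time > first_time
```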
8. The vehicle stop detection method according to claim 1, wherein querying the historical transaction records for a driving transaction record of the vehicle in the target time period and determining the query result comprises:
calling a vehicle query interface provided by a third-party system and transmitting the target vehicle identifier of the vehicle, the first time information, and the second time information to the vehicle query interface, whereupon the third-party system recognizes vehicle identifiers in road images captured in a preset geographic area during the target time period and searches the recognized identifiers for the target vehicle identifier;
acquiring a first query result returned by the vehicle query interface;
and, when the first query result is empty, querying the historical transaction records for a driving transaction record of the vehicle within the target time period and taking the second query result so obtained as the query result.
9. The vehicle stop detection method according to claim 8, further comprising, before the step of calling the vehicle query interface provided by the third-party system and transmitting the target vehicle identifier of the vehicle, the first time information, and the second time information to the vehicle query interface is performed:
sending the user a guidance prompt for calling the vehicle query interface;
and authorizing the user to call the vehicle query interface according to a call confirmation action submitted by the user in response to the guidance prompt.
10. The vehicle stop detection method according to claim 1, further comprising, before the step of acquiring the first image and the second image captured by the user for the vehicle is performed:
sending the user a participation guide prompt for participating in the carbon-saving project;
and performing authorization processing on the carbon-saving project for the user according to a participation confirmation action submitted by the user based on the participation guide prompt.
11. The vehicle stop detection method according to claim 10, further comprising, after the step of performing authorization processing on the carbon-saving project for the user according to a participation confirmation action submitted by the user based on the participation guide prompt is performed:
receiving a driving pass voucher uploaded by the user and aiming at the vehicle;
and establishing a binding relation between the user and the vehicle based on the driving pass certificate.
12. The vehicle stop detection method according to claim 1, wherein, if the query result is empty, after the step of determining that the vehicle is in the effective stop state within the target time period formed by the first time information and the second time information is performed, the method further comprises:
calculating a carbon-saving index of the user for the effective stop of the vehicle according to the vehicle type of the vehicle and/or the geographic area range where the vehicle is located;
and when the accumulation of the carbon-saving index meets a preset threshold value, the carbon-saving index is converted into an available carbon-saving resource or the participation permission for participating in the carbon-saving behavior is opened.
13. The vehicle stop detection method according to claim 3, further comprising, after the step of calculating the carbon-saving index of the user for the effective stop of the vehicle according to the preset carbon-saving quantification algorithm based on the effective stop duration of the vehicle is performed:
displaying the carbon-saving identification carrying the carbon-saving index to the user;
and accumulating the carbon-saving index to the historical carbon-saving index of the user according to a confirmation instruction submitted by the user aiming at the carbon-saving identification.
14. A vehicle stop detection device, comprising:
an acquisition module configured to acquire a first image and a second image captured by a user for a vehicle;
a comparison module configured to determine a stopped state of the vehicle by comparing time information and/or position information of the first image and the second image when a feature similarity between a first stop environment feature and a second stop environment feature, obtained by image segmentation processing of the first image and the second image, is greater than a similarity threshold, the position information being the position information of the user collected in the image uploading process;
a query module configured to, on the basis of determining that the vehicle is in a stopped state, query a driving transaction record of the vehicle in a target time period in historical transaction records and determine a query result;
and a determining module configured to determine that the vehicle is in an effective stop state in a target time period formed by the first time information and the second time information if the query result is empty.
15. A vehicle stop detection apparatus comprising:
a processor; and,
a memory configured to store computer-executable instructions that, when executed, cause the processor to:
acquiring a first image and a second image which are acquired by a user aiming at a vehicle;
when the feature similarity of a first stopping environment feature and a second stopping environment feature obtained by performing image segmentation processing on the first image and the second image is greater than a similarity threshold value, determining the stopping state of the vehicle by comparing time information and/or position information of the first image and the second image; the position information is the position information of the user collected in the image uploading process;
on the basis of determining that the vehicle is in a stop state, inquiring a driving transaction record of the vehicle in a target time period in a historical transaction record, and determining an inquiry result;
and if the query result is empty, determining that the vehicle is in an effective stop state in a target time period formed by the first time information and the second time information.
16. A storage medium storing computer-executable instructions that when executed perform the following:
acquiring a first image and a second image which are acquired by a user aiming at a vehicle;
when the feature similarity of a first stop environment feature and a second stop environment feature obtained by performing image segmentation processing on the first image and the second image is greater than a similarity threshold value, determining the stop state of the vehicle by comparing time information and/or position information of the first image and the second image; the position information is the position information of the user collected in the image uploading process;
on the basis of determining that the vehicle is in a stop state, inquiring a driving transaction record of the vehicle in a target time period in a historical transaction record, and determining an inquiry result;
and if the query result is empty, determining that the vehicle is in an effective stop state in a target time period formed by the first time information and the second time information.
CN202211247422.XA 2020-10-20 2020-10-20 Vehicle stop detection method and device Pending CN115691121A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211247422.XA CN115691121A (en) 2020-10-20 2020-10-20 Vehicle stop detection method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211247422.XA CN115691121A (en) 2020-10-20 2020-10-20 Vehicle stop detection method and device
CN202011125902.XA CN112233422B (en) 2020-10-20 2020-10-20 Vehicle stop detection method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202011125902.XA Division CN112233422B (en) 2020-10-20 2020-10-20 Vehicle stop detection method and device

Publications (1)

Publication Number Publication Date
CN115691121A true CN115691121A (en) 2023-02-03

Family

ID=74118077

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211247422.XA Pending CN115691121A (en) 2020-10-20 2020-10-20 Vehicle stop detection method and device
CN202011125902.XA Active CN112233422B (en) 2020-10-20 2020-10-20 Vehicle stop detection method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011125902.XA Active CN112233422B (en) 2020-10-20 2020-10-20 Vehicle stop detection method and device

Country Status (1)

Country Link
CN (2) CN115691121A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115122933A (en) * 2022-08-22 2022-09-30 中国第一汽车股份有限公司 Electric automobile standing abnormity identification method and device

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100532244C (en) * 2006-09-26 2009-08-26 上海海事大学 Method and device for positioning container lorry mobile in port
WO2010132677A1 (en) * 2009-05-13 2010-11-18 Rutgers, The State University Vehicular information systems and methods
CN101832795B (en) * 2010-04-13 2011-07-20 上海霖碳节能科技有限公司 Personal-based carbon dioxide recording and tracing system platform
CN201819990U (en) * 2010-10-13 2011-05-04 福建邦信信息科技有限公司 Personnel geographical location identification system
CN103793772A (en) * 2012-11-02 2014-05-14 王亚利 Account system and method for realizing management of carbon emission or carbon emission reduction behavior
CN103793774A (en) * 2012-11-02 2014-05-14 王亚利 Account system and method for realizing management of carbon emission or carbon emission reduction behavior
DE102016208488A1 (en) * 2016-05-18 2017-11-23 Robert Bosch Gmbh Method and device for locating a vehicle
CN106652551B (en) * 2016-12-16 2021-03-09 浙江宇视科技有限公司 Parking space detection method and equipment
CN106686098A (en) * 2017-01-03 2017-05-17 上海量明科技发展有限公司 Method and system with function of carbon emission indication, and shared bicycle
CN109727449A (en) * 2019-01-15 2019-05-07 安徽慧联运科技有限公司 A kind of analysis method judging car operation situation according to vehicle driving position
CN110135983A (en) * 2019-01-17 2019-08-16 深圳市元征科技股份有限公司 A kind of carbon emission rationing transaction method and apparatus
CN109727000A (en) * 2019-01-22 2019-05-07 浙江爱立美能源科技有限公司 Carbon emission amount management system for monitoring and method
CN109948591A (en) * 2019-04-01 2019-06-28 广东安居宝数码科技股份有限公司 A kind of method for detecting parking stalls, device, electronic equipment and read/write memory medium
CN110001631A (en) * 2019-04-10 2019-07-12 合肥工业大学 A kind of parking stall measure method of discrimination based on laser ranging module and encoder
CN110491132B (en) * 2019-07-11 2022-04-08 平安科技(深圳)有限公司 Vehicle illegal parking detection method and device based on video frame picture analysis
CN110599794B (en) * 2019-07-31 2021-12-28 广东翼卡车联网服务有限公司 Intelligent vehicle finding method and system based on Internet of vehicles
CN110490117B (en) * 2019-08-14 2023-04-07 智慧互通科技股份有限公司 Parking event determination method and system based on image depth information
CN111243125B (en) * 2020-01-16 2022-02-22 深圳市元征科技股份有限公司 Vehicle card punching method, device, equipment and medium

Also Published As

Publication number Publication date
CN112233422B (en) 2022-10-25
CN112233422A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
US9710977B2 (en) Vehicle data collection and verification
CN114141048B (en) Parking space recommending method and device, and parking space predicting method and device for parking lot
CN113570013A (en) Service execution method and device
CN113128508B (en) License plate number-based payment processing method and device
CN112233422B (en) Vehicle stop detection method and device
CN114267110A (en) Traffic processing method and device
CN108805601B (en) Method, device and equipment for identifying user and account registration
CN110852736A (en) Vehicle payment method, device and system and electronic equipment
CN115118749B (en) Vehicle information acquisition method and device and vehicle order processing method and device
CN115860848A (en) Electronic riding invoice processing method and device
CN112990940B (en) Enterprise authentication method and device
CN116614532A (en) Vehicle information management method, system and computer storage medium
CN114550322A (en) Vehicle service fee processing method and device
CN118193805A (en) Vehicle information processing method, device and system based on block chain
CN114495297A (en) Parking-free payment parking management method and system based on intelligent terminal
CN111161533B (en) Traffic accident processing method and device and electronic equipment
CN113781792A (en) Parking detection system, method and related equipment
CN114148338A (en) Driving correction processing method and device
CN113434298B (en) Data processing method and device
CN114900552A (en) Driving reminding pushing processing method and device
CN116611671B (en) Electronic license plate management system based on AI artificial intelligence
CN117933615A (en) Vehicle driving generation processing method and device
CN116013089A (en) Vehicle reminding method and device
CN112215638B (en) Recommendation processing method and device
CN115909523A (en) Vehicle passing verification processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination