CN108805184B - Image recognition method and system for fixed space and vehicle - Google Patents


Info

Publication number
CN108805184B
Authority
CN
China
Prior art keywords
image
images
recognition
information
data set
Prior art date
Legal status
Active
Application number
CN201810523878.1A
Other languages
Chinese (zh)
Other versions
CN108805184A (en)
Inventor
干晓明
雷济忠
祝峥
Current Assignee
Guangzhou Yingzhuo Electronic Technology Co ltd
Original Assignee
Guangzhou Yingzhuo Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Yingzhuo Electronic Technology Co ltd
Priority to CN201810523878.1A
Publication of CN108805184A
Application granted
Publication of CN108805184B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 Involving statistics of pixels or of feature values, e.g. histogram matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image recognition method and system for a fixed space and a vehicle. A plurality of first images of the fixed space are obtained according to first indication information; a preset rectangular frame is used to label a target object in the first images; characteristic areas of the first images are determined according to the labels; a first data set is established according to the information of the characteristic areas; and a recognition training model is established according to the first data set. A second image of the fixed space is then acquired according to second indication information and recognized according to the recognition training model, which identifies the characteristic region of the second image corresponding to the information of the characteristic regions in the first images, and a recognition result is generated. This solves the problems of a high false alarm rate in image recognition and a large workload of manual image screening in a fixed space, improves the accuracy of image recognition, and greatly reduces the workload of manual screening.

Description

Image recognition method and system for fixed space and vehicle
Technical Field
The invention relates to the field of image processing, and in particular to an image recognition method and system for a fixed space and a vehicle.
Background
With the development of internet technology, unattended operation has appeared in many industries, for example in fixed spaces such as shared cars, shared bicycles, shared hotels, monitoring rooms, warehouses and storage rooms, where customers rent and return vehicles (or check out of rooms) through the internet. Because no manager is on site, how to perform effective remote monitoring has always been a problem. As the number of monitored sites grows, the efficiency of manually viewing photographs or videos becomes a severe limitation, especially in the rapidly growing shared-automobile and shared-bicycle industries.
For example: vehicles may be parked improperly, the interior of a vehicle may be damaged, customers may leave articles in the vehicle, or the renter and the actual driver may not be the same person, leading to problems such as unlicensed driving. Existing rental businesses rely on on-site staff or on reviewing video footage; limited by the efficiency of manual identification, they can neither meet rapidly growing business demand nor provide uninterrupted 24-hour monitoring of changes. Meanwhile, the traditional image algorithm for automatically monitoring video images takes a picture at a first time point as a reference in a fixed shooting space, takes a picture of the same place at a second time point, compares the difference between the two pictures, and raises an alarm on the difference value. Such an algorithm cannot distinguish whether people or articles are present in the pictures; it only judges the difference between the pictures, so the false alarm rate is high, the workload of later manual screening is large, and the algorithm has no capacity to learn and improve.
Aiming at the problems in the related art of a high false alarm rate in image recognition and a large workload of manual image screening in a fixed space, no effective solution has yet been proposed.
Disclosure of Invention
Aiming at the problems in the related art of a high false alarm rate in image recognition and a large workload of manual screening in a fixed space, the invention provides an image recognition method and system for a fixed space and a vehicle, so as to at least solve these problems.
According to one aspect of the invention, an image recognition method in a fixed space is provided, which includes: receiving first indication information for acquiring images, and acquiring a plurality of first images of the fixed space according to the first indication information;
pre-processing a plurality of said first images, the pre-processing comprising: labeling a target object in the plurality of first images by using a preset rectangular frame, determining characteristic areas of the plurality of first images according to the labeling, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set;
receiving second indication information of the obtained image, and obtaining a second image of the fixed space according to the second indication information;
and identifying the second image according to the identification training model, wherein the identification comprises the identification training model identifying the characteristic region of the second image corresponding to the information of the characteristic region in the first image, and generating an identification result.
Further, the pre-processing the plurality of first images comprises:
filtering the background color of the first image before labeling the target object in the plurality of first images by using a preset rectangular frame;
judging whether the average pixel brightness value or the file size value of the first image is in a threshold range or not;
deleting the first image if the average pixel brightness value or file size value of the first image is not within the threshold range.
Further, after labeling the target objects in the plurality of first images with a preset rectangular frame, the method includes:
recording the plane coordinates of the rectangular frame, determining the characteristic areas and the quantity of the characteristic areas of the target object according to the plane coordinates, and establishing the first data set according to the characteristic areas and the quantity of the characteristic areas;
and establishing the recognition training model according to the first data set, the sample data set and the deep learning framework.
Further, the recognizing and training model recognizing the feature region of the second image corresponding to the information of the feature region in the first image comprises:
the recognition training model recognizes the feature regions of the second image corresponding to the information of the feature regions in the first image and the number of the feature regions;
the recognition training model recognizes whether the target object of the characteristic region of each second image is a preset target object.
Further, after the generating the recognition result, the method includes:
and reporting the recognition result, and adjusting the recognition training model according to the recognition result.
According to another aspect of the present invention, there is also provided an image recognition system in a fixed space, the recognition system comprising: a camera for taking images in a fixed space and an identification unit for data transmission, wherein,
the camera receives first indication information sent by the identification unit, acquires a plurality of first images in a fixed space according to the first indication information, and sends the first images to the identification unit;
the recognition unit performs preprocessing on a plurality of the first images, the preprocessing including: labeling a target object in the plurality of first images by using a preset rectangular frame, determining characteristic areas of the plurality of first images according to the labeling, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set;
the camera receives second indication information sent by the identification unit, and the camera acquires a second image in the fixed space according to the second indication information;
the recognition unit recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit generates a recognition result.
According to another aspect of the present invention, there is also provided an image recognition method on a vehicle, the vehicle including: a first camera that takes an image of the interior of the vehicle and an identification unit that performs data transmission, wherein,
the first camera receives first indication information sent by the identification unit, acquires a plurality of first images in the vehicle according to the first indication information, and sends the first images to the identification unit;
the recognition unit performs preprocessing on a plurality of the first images, the preprocessing including: labeling a target object in the plurality of first images by using a preset rectangular frame, determining characteristic areas of the plurality of first images according to the labeling, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set;
the first camera receives second indication information sent by the identification unit, and the first camera acquires a second image in the vehicle according to the second indication information;
the recognition unit recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit generates a recognition result.
Further, the labeling, by the identification unit, the target objects in the plurality of first images according to a preset rectangular frame, includes:
recording the plane coordinates of the rectangular frame, determining the characteristic areas and the quantity of the characteristic areas of the target object according to the plane coordinates, and establishing the first data set according to the characteristic areas and the quantity of the characteristic areas;
and establishing the recognition training model according to the first data set, the sample data set and the deep learning framework.
According to another aspect of the present invention, there is also provided an image recognition system on a vehicle, the system including: a first camera and an identification unit on the vehicle, the first camera taking an image of the interior of the vehicle, the first camera and the identification unit performing data transmission, wherein,
the first camera receives first indication information sent by the identification unit, acquires a plurality of first images in the vehicle according to the first indication information, and sends the first images to the identification unit;
the recognition unit performs preprocessing on a plurality of the first images, the preprocessing including: labeling a target object in the plurality of first images by using a preset rectangular frame, determining characteristic areas of the plurality of first images according to the labeling, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set;
the first camera receives second indication information sent by the identification unit, and the first camera acquires a second image in the vehicle according to the second indication information;
the recognition unit recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit generates a recognition result.
Further, the system further comprises: a second camera outside the vehicle, the second camera taking an image of the outside of the vehicle or an image of the surroundings of the vehicle, the second camera performing data transmission with the identification unit, wherein,
the second camera receives third indication information sent by the identification unit, acquires a plurality of third images outside the vehicle according to the third indication information, and sends the plurality of third images to the identification unit;
the recognition unit performs preprocessing on the plurality of third images, the preprocessing including: labeling the target objects in the plurality of third images by using a preset rectangular frame, determining characteristic areas of the plurality of third images according to the labels, establishing a second data set according to the information of the characteristic areas, and establishing a recognition training model according to the second data set;
the second camera receives fourth indication information sent by the identification unit, and the second camera acquires a fourth image outside the vehicle according to the fourth indication information;
the recognition unit recognizes the fourth image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the fourth image corresponding to the information of the feature region in the third image, and the recognition unit generates a recognition result.
According to the invention, first indication information for acquiring images is received, a plurality of first images of a fixed space are acquired according to the first indication information, and the plurality of first images are preprocessed. The preprocessing includes: labeling the target objects in the first images with a preset rectangular frame, determining characteristic areas of the first images according to the labels, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set. Second indication information for acquiring an image is received, and a second image of the fixed space is acquired according to the second indication information. The second image is recognized according to the recognition training model; the recognition includes the recognition training model identifying the characteristic region of the second image corresponding to the information of the characteristic regions in the first images, and a recognition result is generated. This solves the problems of a high false alarm rate in image recognition and a large workload of manual image screening in a fixed space, improves the accuracy of image recognition, and greatly reduces the workload of manual screening.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of image recognition in a fixed space according to an embodiment of the present invention;
FIG. 2 is a flow diagram of establishing a recognition training model according to an embodiment of the present invention;
FIG. 3 is a flow diagram of recognition of a recognition training model according to an embodiment of the present invention;
FIG. 4 is a flow diagram of identifying training model parameter adjustments according to an embodiment of the present invention;
FIG. 5 is a block diagram of an image recognition system in a fixed space according to an embodiment of the present invention;
FIG. 6 is a block diagram of an image recognition system on a vehicle according to an embodiment of the present invention;
FIG. 7 is a block diagram of a second configuration of an image recognition system on a vehicle according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In the present embodiment, there is provided an image recognition method in a fixed space, and fig. 1 is a flowchart of an image recognition method in a fixed space according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S102, receiving first indication information of the acquired images, and acquiring a plurality of first images of the fixed space according to the first indication information;
step S104, performing a preprocessing on the plurality of first images, the preprocessing including: labeling the target objects in the first images by using a preset rectangular frame, determining characteristic areas of the first images according to the labels, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set;
step S106, receiving second indication information of the acquired image, and acquiring a second image of the fixed space according to the second indication information;
step S108, recognizing the second image according to the recognition training model, where the recognition includes recognizing the feature region of the second image corresponding to the information of the feature region in the first image by the recognition training model, and generating a recognition result.
In this embodiment, through the above steps, according to the first indication information, a plurality of first images are obtained, a preset rectangular frame is used to mark a target object in the plurality of first images, a recognition training model is established, and a second image obtained through the second indication information is compared with the first image, and the recognition training model recognizes the target object in the feature region in the second image.
In this embodiment, the method can perform recognition processing on any image collected by a camera in a fixed space or place, identify a target object in the image, and trigger subsequent corresponding operations. Take monitoring vehicle parking in a parking lot as an example: after a target vehicle is parked, a camera in the parking lot collects a plurality of first images of the vehicle, the recognition training model is established, and the driver image information and vehicle use state information are collected. After the target vehicle leaves the parking lot, the camera is instructed to collect second image information, which shows the driver image information and the vehicle use state information. The system then recognizes whether the driver information of the vehicle is consistent, whether the vehicle is damaged, and so on. In case of inconsistency or vehicle damage, the information is reported to the management platform of the parking lot, so that a manager can judge in time whether the vehicle has been stolen or deliberately damaged.
In the process of preprocessing the plurality of first images, the background color of the first images is filtered before the target objects in the plurality of first images are labeled with a preset rectangular frame. Whether the average pixel brightness value or the file size value of each first image lies within a threshold range is then judged, and a first image is deleted if its average pixel brightness value or file size value is not within the threshold range. By screening the images in advance in this way, pictures of low shooting quality are removed from the first images, which improves the quality of the images in the first data set and thereby the recognition accuracy.
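The pre-filtering step described above can be sketched as follows. This is a minimal illustration only: the function names and threshold values are assumptions for the example, not values taken from the patent.

```python
def average_brightness(pixels):
    """Mean grayscale value of a flat sequence of 0-255 pixel intensities."""
    return sum(pixels) / len(pixels)

def keep_image(pixels, file_size,
               brightness_range=(30, 220),       # assumed brightness limits
               size_range=(10_000, 5_000_000)):  # assumed file-size limits, bytes
    """Return True if the first image passes both quality checks,
    i.e. its average brightness and file size fall inside the threshold ranges."""
    lo_b, hi_b = brightness_range
    lo_s, hi_s = size_range
    return (lo_b <= average_brightness(pixels) <= hi_b
            and lo_s <= file_size <= hi_s)
```

Images for which `keep_image` returns False would be deleted before labeling, as the passage above describes.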
After labeling the target objects in the first images by using a preset rectangular frame, recording plane coordinates of the rectangular frame, determining a characteristic area and the number of the characteristic areas of the target objects according to the plane coordinates, and establishing a first data set according to the characteristic areas and the number of the characteristic areas; and establishing the recognition training model according to the first data set, the sample data set and the deep learning framework. Fig. 2 is a flowchart of establishing a recognition training model according to an embodiment of the present invention, for example:
The characteristic regions in the first images are manually labeled with rectangular frames using a labeling tool written in Python, and label data are generated from the category information and the coordinate position information of each rectangular frame. The category information may be the class of the object in the first image (for example, whether it is a person or an object), and the coordinate position information may be the coordinates of the four corners of the rectangular frame. The label data are stored, and a plurality of the first images are labeled to generate a label data set. The label data set is trained with the TensorFlow framework: a suitable loss function is defined according to the number of object classes in the images, the image size, the batch size, the learning rate and the maximum number of training steps; a suitable training model is compiled; and the feature data are computed and classified in a high-dimensional space by a convolutional neural network, finally yielding the recognition training model.
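The label data described above, one record per rectangular frame, might be structured as in the sketch below. The record layout and field names are illustrative assumptions; the patent only specifies that category information and the four corner coordinates are stored.

```python
def make_label(category, x1, y1, x2, y2):
    """Build one annotation record from a class name and two opposite
    corners (x1, y1) and (x2, y2) of the rectangular frame."""
    return {
        "category": category,            # e.g. "person" or "object"
        "corners": [(x1, y1), (x2, y1),  # coordinates of the four corners
                    (x2, y2), (x1, y2)],
    }

# A label data set is a list of such records across many first images.
label_data_set = [
    make_label("person", 120, 40, 260, 310),
    make_label("object", 400, 200, 480, 260),
]
```

A set of these records would then be converted into whatever input format the chosen TensorFlow training pipeline expects.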
The recognizing and training model recognizing the characteristic region of the second image corresponding to the information of the characteristic region in the first image comprises the following steps: the recognition training model recognizes the feature regions of the second image and the number of the feature regions corresponding to the information of the feature regions in the first image; the recognition training model recognizes whether the target object of the characteristic region of each second image is a preset target object. FIG. 3 is a flow chart of recognition of a training model according to an embodiment of the present invention, for example:
The second image is digitized, each pixel point is classified by analyzing the image, and the whole picture is analyzed to obtain the classification of the elements in the scene. The image is compared against the recognition training model, the coordinate position of the characteristic region in the second image is analyzed (that is, a rectangular box is marked out), and recognition is judged successful when the recognition rate is greater than or equal to 99%. Which objects are to be detected can be controlled by the program, or all objects can be detected.
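The 99% acceptance rule above can be sketched as a simple filter over model detections. The detection tuples here are assumed stand-ins; in a real system they would come from the trained recognition model.

```python
RECOGNITION_THRESHOLD = 0.99  # recognition judged successful at >= 99%

def successful_detections(detections):
    """Keep only detections whose confidence meets the 99% threshold.
    Each detection is an assumed (label, box, confidence) tuple."""
    return [(label, box) for label, box, conf in detections
            if conf >= RECOGNITION_THRESHOLD]

# Hypothetical model output for one second image:
detections = [("person", (120, 40, 260, 310), 0.995),
              ("object", (400, 200, 480, 260), 0.87)]
```

Here only the first detection would be reported; the second falls below the threshold and is discarded rather than raising an alarm.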
In this embodiment, after the recognition result is generated, the recognition result is reported, and the recognition training model is adjusted according to the recognition result.
FIG. 4 is a flow chart of identifying training model parameter adjustments, according to an embodiment of the present invention, for example:
The parameter adjustment mainly adjusts the learning rate and batch_size of the recognition training model. False-detection pictures can be labeled manually, the resulting label data added to the label data set, and training run again to obtain an updated recognition training model. During training, when the loss rate or the accuracy reaches a certain value and then merely fluctuates up and down without obvious improvement, the model can be considered to have converged.
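The convergence criterion above, loss that only fluctuates without clearly improving, can be sketched as a plateau check over recent loss values. The window size and tolerance are assumed for illustration; the patent does not specify concrete values.

```python
def has_converged(losses, window=5, tolerance=0.01):
    """Return True if the last `window` loss values vary by less than
    `tolerance`, i.e. the loss is fluctuating around a level rather
    than clearly decreasing."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return max(recent) - min(recent) < tolerance
```

Such a check could be run after each training epoch to decide when to stop and evaluate the updated recognition training model.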
Fig. 5 is a block diagram of an image recognition system in a fixed space according to an embodiment of the present invention, and as shown in fig. 5, the apparatus includes: a camera 52 and a recognition unit 54, the camera 52 taking images in a fixed space, the camera 52 and the recognition unit 54 performing data transmission, wherein,
the camera 52 receives the first indication information sent by the identification unit 54, and the camera 52 obtains a plurality of first images in the fixed space according to the first indication information and sends the plurality of first images to the identification unit 54;
the recognition unit 54 performs a preprocessing on the plurality of first images, the preprocessing including: labeling the target objects in the first images by using a preset rectangular frame, determining characteristic areas of the first images according to the labels, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set;
the camera 52 receives the second indication information sent by the identification unit 54, and the camera 52 obtains a second image in the fixed space according to the second indication information;
the recognition unit 54 recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit 54 generates a recognition result.
In this embodiment, the camera 52 and the recognition unit 54 may be integrated, or may be separate devices located at different positions. The recognition unit 54 may be connected to other platforms by wired or wireless transmission. For example, the recognition unit 54 may connect to a remote server through a mobile or fixed communication network and receive control information sent by the server; in this case the recognition training model is established and recognition performed at the server end, and the server reports the recognition result to the corresponding user. Alternatively, the recognition unit 54 may establish the recognition training model and perform recognition with its own algorithm and image processor, and only report the recognition result to the remote server. The first indication information and the second indication information are applied in different scenarios, with the management system of the application scenario sending the corresponding indication information.
The following describes a specific vehicle-rental scenario. Fig. 6 is a first structural block diagram of an image recognition system on a vehicle according to an embodiment of the present invention, where the vehicle 60 includes: a first camera 62 and a recognition unit 64 (corresponding to the recognition unit 54 in the above-described embodiment), the first camera 62 taking an image of the interior of the vehicle, the first camera 62 performing data transmission with the recognition unit 64, wherein,
the first camera 62 receives the first indication information sent by the identification unit 64, and the first camera 62 obtains a plurality of first images of the interior of the vehicle according to the first indication information and sends the plurality of first images to the identification unit 64;
the recognition unit 64 performs a preprocessing of the plurality of first images, the preprocessing including: labeling the target objects in the first images by using a preset rectangular frame, determining characteristic areas of the first images according to the labels, establishing a first data set according to the information of the characteristic areas, and establishing a recognition training model according to the first data set;
the first camera 62 receives the second indication information sent by the recognition unit 64, and acquires a second image of the vehicle interior according to the second indication information;
the recognition unit 64 recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit 64 generates a recognition result.
In this embodiment, the first camera 62 and the recognition unit 64 provided in the vehicle interior may be internal devices fixed to the vehicle or detachable external devices. The indication information may be triggered by a user getting on or off the vehicle: for example, the user opening the door to get on triggers the recognition unit 64 to send the first indication information, and the user closing the door to get off triggers the recognition unit 64 to send the second indication information; alternatively, starting and stopping the vehicle triggers the recognition unit 64 to send the first indication information and the second indication information respectively.
Fig. 7 is a block diagram of a second configuration of an image recognition system on a vehicle according to an embodiment of the present invention, the image recognition system including: the vehicle 60, the first camera 62, the recognition unit 64 and a second camera 72 outside the vehicle 60, the second camera 72 taking an image outside the vehicle or an image of the surroundings of the vehicle, the second camera 72 performing data transmission with the recognition unit 64, wherein,
the second camera 72 receives the third indication information sent by the recognition unit 64, acquires a plurality of third images of the vehicle exterior according to the third indication information, and sends the plurality of third images to the recognition unit 64;
the recognition unit 64 performs a preprocessing on the plurality of third images, the preprocessing including: labeling the target objects in the third images by using a preset rectangular frame, determining characteristic areas of the third images according to the labels, establishing a second data set according to the information of the characteristic areas, and establishing a recognition training model according to the second data set;
the second camera 72 receives fourth indication information sent by the recognition unit 64, and acquires a fourth image of the vehicle exterior according to the fourth indication information;
the recognition unit 64 recognizes the fourth image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the fourth image corresponding to the information of the feature region in the third image, and the recognition unit 64 generates a recognition result.
The following describes in detail the application scenarios of the recognition system on the vehicle.
In this embodiment, an intelligent terminal (including the recognition unit 64 in the above embodiment) is installed on the rental vehicle. A communication device and a camera (equivalent to the first camera 62 in the above embodiment) are embedded in the intelligent terminal; the communication device is connected to a remote server and receives management information from the rental vehicle management system on the server. The intelligent terminal may be installed inside the vehicle, or may replace the rear-view mirror above the steering wheel, with a wide-angle camera installed on the lower front frame of the intelligent terminal for collecting image data.
Application scenario one: vehicle rental user identity recognition
The vehicle rental user identity recognition process includes the following steps:
step S802, receiving an image acquisition instruction (corresponding to the first indication information in the above embodiment) issued by the rental vehicle management system on the server, and acquiring more than 10,000 photos of the vehicle interior according to the image acquisition instruction;
step S804, labeling the photos, or screening them before labeling: filtering the background color of the photos and deleting photos whose average pixel brightness value is less than 30 or whose file size is less than 21,500 bytes. These thresholds can be adjusted according to actual conditions.
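The screening step above can be expressed as a small filter. The following is a minimal pure-Python sketch under stated assumptions: the function names and tuple layout are illustrative, and a real pipeline would obtain grayscale pixel values through an imaging library such as Pillow.

```python
def average_brightness(pixels):
    """Mean of flat grayscale pixel values (0-255)."""
    return sum(pixels) / len(pixels)

def keep_photo(avg_brightness, file_size_bytes,
               min_brightness=30.0, min_size=21500):
    """True if the photo passes both screening thresholds
    (the example values from this embodiment)."""
    return avg_brightness >= min_brightness and file_size_bytes >= min_size

def screen_photos(photos):
    """photos: iterable of (path, pixels, file_size) tuples.
    Returns the paths that survive screening."""
    return [path for path, pixels, size in photos
            if keep_photo(average_brightness(pixels), size)]
```

In practice the thresholds would be passed in from the management system's configuration rather than hard-coded.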
A feature region of each photo is labeled with a rectangular frame. The feature region is mainly the position of the human body image of the rental user sitting in the driver's seat, and the labeled target object is the human head image; the front passenger seat may also be labeled, so there may be a plurality of feature regions. A first data set is established from the feature-region information of all photos, and includes the plane coordinate information and quantity information of the feature regions. Model training is performed with the TensorFlow framework using a sample human-head-image data set; the first data set and the sample data set are comparatively analyzed, the minimum rectangular frames of the human body images and their quantity are detected, and a recognition training model is established that can recognize human head information in the feature regions.
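The label records described above (target category plus the four corner coordinates of the rectangular frame, with a count of feature regions per data set) can be sketched as follows; the record layout and function names are illustrative assumptions, not taken from the patent:

```python
def make_label(category, x_min, y_min, x_max, y_max):
    """One label record: target category plus the four corner
    coordinates of the axis-aligned rectangular frame."""
    return {
        "category": category,  # e.g. "head" for the driver's head image
        "corners": [(x_min, y_min), (x_max, y_min),
                    (x_max, y_max), (x_min, y_max)],
    }

def build_first_dataset(annotations):
    """annotations: list of (category, x_min, y_min, x_max, y_max).
    Returns the feature regions plus their count, mirroring the
    'plane coordinate information and quantity information' above."""
    regions = [make_label(*a) for a in annotations]
    return {"regions": regions, "count": len(regions)}
```

A Python labeling tool such as LabelImg produces equivalent per-image records, which would then be serialized for the TensorFlow training step.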
Step S806, the intelligent terminal receives the vehicle door opening information and the user rental information issued by the rental vehicle management system on the server, the rental information including the head image information on the user's driving license. The intelligent terminal generates second indication information, acquires a photo of the vehicle interior according to the second indication information, recognizes the user's head information in the photo with the recognition training model, compares it with the head image on the driving license, and reports the comparison result to the server. The rental vehicle management system judges whether the head information is consistent with the rental user's information, and sends alarm information if it is not.
Application scenario two: vehicle rental user item loss identification
The article-loss recognition process for a vehicle rental user includes the following steps:
step S902, receiving an image acquisition instruction (equivalent to the first indication information in the above embodiment) issued by the rental vehicle management system on the server, and acquiring more than 10,000 photos of the vehicle interior according to the image acquisition instruction;
step S904, a feature region of each photo is labeled with a rectangular frame. The feature region is the position where articles are placed in the vehicle, and the labeled target objects are clothes and bags; a plurality of positions may be labeled, so there may be a plurality of feature regions. A first data set is established from the feature-region information of all photos, and includes the plane coordinate information and quantity information of the feature regions. Model training is performed with the TensorFlow framework using an image data set of clothes and bags; the first data set and the image data set are comparatively analyzed, the minimum rectangular frames of the clothes and bags and their quantity are detected, and a recognition training model is established that can recognize clothes and bag information in the feature regions.
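The training setup repeated in these scenarios defines the loss from the number of target categories, image size, batch size, learning rate, and maximum training steps. The following pure-Python stand-in only illustrates those quantities and the classification loss; it is a sketch, not the actual TensorFlow training code, and the default values are assumptions.

```python
import math

def training_config(num_classes, image_size,
                    batch_size=32, learning_rate=1e-3, max_steps=10000):
    """The hyperparameters from which the text says the loss and
    training loop are defined; defaults are illustrative only."""
    return {"num_classes": num_classes, "image_size": image_size,
            "batch_size": batch_size, "learning_rate": learning_rate,
            "max_steps": max_steps}

def cross_entropy(class_probs, true_index):
    """Classification loss for one sample: -log p(true class)."""
    return -math.log(class_probs[true_index])
```

In the TensorFlow framework these values would populate a model and optimizer definition; the convolutional network then minimizes the loss over the first data set.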
Step S906, the intelligent terminal receives the door closing information of the vehicle and the user rental information issued by the rental vehicle management system on the server; the rental information indicates that the user's rental has ended. The intelligent terminal generates second indication information, acquires a photo of the vehicle interior according to it, recognizes clothes and bag information in the photo with the recognition training model, and generates a comparison result. If clothes or bag information is present, the result is reported to the server, and the rental vehicle management system reminds the user that clothes or bags may have been left behind.
Application scenario three: after the vehicle rental ends, judging whether the vehicle is intact and whether it is parked in the correct parking space.
In this embodiment, an external camera (equivalent to the second camera 72 in the foregoing embodiment) is disposed outside the vehicle or on a device near the parking space and is in communication connection with the intelligent terminal. The vehicle rental-and-return state recognition process in this embodiment includes the following steps:
step S1002, the external camera receives an image acquisition instruction (equivalent to the first indication information in the above embodiment) issued by the rental vehicle management system on the server, and acquires more than 10,000 photos of the vehicle's parking space according to the image acquisition instruction;
step S1004, a feature region of each photo is labeled with a rectangular frame. The feature region is the parking position of the vehicle, and the labeled targets are the vehicle's appearance and the parking space marking lines where the wheels park. A first data set is established from the feature-region information of all photos, and includes the plane coordinate information and quantity information of the feature regions. Model training is performed with the TensorFlow framework using an image data set of vehicle appearances and parking space marking lines; the first data set and the image data set are comparatively analyzed, the minimum rectangular frames of the vehicle appearance and the parking space marking lines and their quantity are detected, and a recognition training model is established.
Step S1006, the intelligent terminal receives the door closing information of the vehicle and the user rental information issued by the rental vehicle management system on the server; the rental information indicates that the user's rental has ended. The intelligent terminal generates second indication information and sends it to the external camera, which photographs the parked vehicle's appearance and the parking space marking lines according to the second indication information. Whether the vehicle's appearance is intact and whether it is correctly parked within the parking space marking lines are recognized with the recognition training model, and a comparison result is generated. If an abnormality is found, the result is reported to the server, and the rental vehicle management system generates corresponding alarm information.
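The parking-position check in this scenario reduces to asking how much of the detected vehicle rectangle lies inside the parking-space rectangle. A hedged geometric sketch follows; the function names and the 0.95 threshold are assumptions for illustration, not values from the patent:

```python
def inside_fraction(vehicle, space):
    """Fraction of the vehicle box (x1, y1, x2, y2) that lies
    inside the parking-space box."""
    ix1, iy1 = max(vehicle[0], space[0]), max(vehicle[1], space[1])
    ix2, iy2 = min(vehicle[2], space[2]), min(vehicle[3], space[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area = (vehicle[2] - vehicle[0]) * (vehicle[3] - vehicle[1])
    return inter / area if area else 0.0

def parked_correctly(vehicle, space, threshold=0.95):
    """True when nearly all of the vehicle lies within the space lines."""
    return inside_fraction(vehicle, space) >= threshold
```

The boxes here would come from the recognition training model's detected rectangular frames for the vehicle appearance and the parking space marking lines.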
In another embodiment, software is provided, which is used to execute the technical solutions described in the above embodiments and the preferred embodiments.
In another embodiment, a storage medium is provided in which the above software is stored; the storage medium includes, but is not limited to, an optical disc, a floppy disk, a hard disk, a rewritable memory, and the like.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. An image recognition method in a fixed space is characterized in that:
receiving first indication information for acquiring images, and acquiring a plurality of first images of a fixed space according to the first indication information; preprocessing the plurality of first images, the preprocessing comprising: labeling a target object in the plurality of first images with a preset rectangular frame, determining feature regions of the plurality of first images according to the labels, establishing a first data set according to information of the feature regions, and establishing a recognition training model according to the first data set, wherein the fixed space is an interior or exterior space of a vehicle; the feature regions in the first images are manually labeled with the rectangular frame using a labeling tool written in Python, label data are generated from category information and coordinate position information of the rectangular frame, the category information being the category of the target object in the first image and the coordinate position information being the coordinates of the four corners of the rectangular frame, the label data are stored, the plurality of first images are labeled to form the first data set, and the first data set is trained with the TensorFlow framework; a loss function is defined according to the number of target object categories in the first images, the image size, the batch size, the learning rate and the maximum number of training steps, a training model is written, and the first data set is computed and classified in a high-dimensional space by a convolutional neural network to obtain the recognition training model;
filtering the background color of the first images before labeling the target object in the plurality of first images with the preset rectangular frame; judging whether the average pixel brightness value or the file size value of each first image is within a threshold range; and deleting the first image if its average pixel brightness value or file size value is not within the threshold range;
after labeling the target objects in the plurality of first images with a preset rectangular frame, the method comprises the following steps:
recording the plane coordinates of the rectangular frame, determining the characteristic areas and the quantity of the characteristic areas of the target object according to the plane coordinates, and establishing the first data set according to the characteristic areas and the quantity of the characteristic areas; establishing the recognition training model according to the first data set, the sample data set and the deep learning frame;
receiving second indication information for acquiring an image, and acquiring a second image of the fixed space according to the second indication information; and recognizing the second image according to the recognition training model, the recognizing comprising recognizing the feature region of the second image corresponding to the information of the feature regions in the first images, and generating a recognition result.
2. The method of claim 1, wherein the identifying the training model to identify the feature region of the second image corresponding to the information of the feature region in the first image comprises:
the recognition training model recognizes the feature regions of the second image corresponding to the information of the feature regions in the first image and the number of the feature regions; the recognition training model recognizes whether the target object of the characteristic region of each second image is a preset target object.
3. The method of claim 1, wherein after generating the recognition result, the method further comprises:
reporting the recognition result, and adjusting the recognition training model according to the recognition result.
4. An image recognition system in a fixed space, the recognition system performing the method of any one of claims 1 to 3, comprising: a camera that takes images in the fixed space and a recognition unit, the camera performing data transmission with the recognition unit, wherein,
the camera receives first indication information sent by the recognition unit, acquires a plurality of first images of the fixed space according to the first indication information, and sends the first images to the recognition unit;
the recognition unit preprocesses the plurality of first images, the preprocessing comprising: labeling a target object in the plurality of first images with a preset rectangular frame, determining feature regions of the plurality of first images according to the labels, establishing a first data set according to information of the feature regions, and establishing a recognition training model according to the first data set, wherein the fixed space is an interior or exterior space of a vehicle; the feature regions in the first images are manually labeled with the rectangular frame using a labeling tool written in Python, label data are generated from category information and coordinate position information of the rectangular frame, the category information being the category of the target object in the first image and the coordinate position information being the coordinates of the four corners of the rectangular frame, the label data are stored, the plurality of first images are labeled to form the first data set, and the first data set is trained with the TensorFlow framework; a loss function is defined according to the number of target object categories in the first images, the image size, the batch size, the learning rate and the maximum number of training steps, a training model is written, and the first data set is computed and classified in a high-dimensional space by a convolutional neural network to obtain the recognition training model;
the camera receives second indication information sent by the recognition unit, and acquires a second image of the fixed space according to the second indication information;
the recognition unit recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit generates a recognition result.
5. An image recognition method on a vehicle, characterized in that the vehicle comprises a first camera and a recognition unit, the first camera taking images of the vehicle interior and performing data transmission with the recognition unit, wherein the first camera receives first indication information sent by the recognition unit, acquires a plurality of first images of the vehicle interior according to the first indication information, and sends the plurality of first images to the recognition unit;
the recognition unit preprocesses the plurality of first images, the preprocessing comprising: labeling a target object in the plurality of first images with a preset rectangular frame, determining feature regions of the plurality of first images according to the labels, establishing a first data set according to information of the feature regions, and establishing a recognition training model according to the first data set; the feature regions in the first images are manually labeled with the rectangular frame using a labeling tool written in Python, label data are generated from category information and coordinate position information of the rectangular frame, the category information being the category of the target object in the first image and the coordinate position information being the coordinates of the four corners of the rectangular frame, the label data are stored, the plurality of first images are labeled to form the first data set, and the first data set is trained with the TensorFlow framework; a loss function is defined according to the number of target object categories in the first images, the image size, the batch size, the learning rate and the maximum number of training steps, a training model is written, and the first data set is computed and classified in a high-dimensional space by a convolutional neural network to obtain the recognition training model;
the first camera receives second indication information sent by the identification unit, and the first camera acquires a second image in the vehicle according to the second indication information;
the recognition unit recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit generates a recognition result;
after the recognition unit labels the target objects in the plurality of first images with the preset rectangular frame, the method further comprises:
recording the plane coordinates of the rectangular frame, determining the characteristic areas and the quantity of the characteristic areas of the target object according to the plane coordinates, and establishing the first data set according to the characteristic areas and the quantity of the characteristic areas;
and establishing the recognition training model according to the first data set, the sample data set and the deep learning framework.
6. An image recognition system on a vehicle, the system comprising: a first camera and a recognition unit, the first camera taking images of the vehicle interior and performing data transmission with the recognition unit, wherein the first camera receives first indication information sent by the recognition unit, acquires a plurality of first images of the vehicle interior according to the first indication information, and sends the plurality of first images to the recognition unit;
the recognition unit preprocesses the plurality of first images, the preprocessing comprising: labeling a target object in the plurality of first images with a preset rectangular frame, determining feature regions of the plurality of first images according to the labels, establishing a first data set according to information of the feature regions, and establishing a recognition training model according to the first data set; the feature regions in the first images are manually labeled with the rectangular frame using a labeling tool written in Python, label data are generated from category information and coordinate position information of the rectangular frame, the category information being the category of the target object in the first image and the coordinate position information being the coordinates of the four corners of the rectangular frame, the label data are stored, the plurality of first images are labeled to form the first data set, and the first data set is trained with the TensorFlow framework; a loss function is defined according to the number of target object categories in the first images, the image size, the batch size, the learning rate and the maximum number of training steps, a training model is written, and the first data set is computed and classified in a high-dimensional space by a convolutional neural network to obtain the recognition training model;
the first camera receives second indication information sent by the recognition unit, and acquires a second image of the vehicle interior according to the second indication information;
the recognition unit recognizes the second image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the second image corresponding to the information of the feature region in the first image, and the recognition unit generates a recognition result;
the system further comprises: a second camera outside the vehicle, the second camera taking an image of the outside of the vehicle or an image of the surroundings of the vehicle, the second camera performing data transmission with the identification unit, wherein,
the second camera receives third indication information sent by the recognition unit, acquires a plurality of third images of the vehicle exterior according to the third indication information, and sends the plurality of third images to the recognition unit;
the recognition unit performs preprocessing on the plurality of third images, the preprocessing including: labeling the target objects in the plurality of third images by using a preset rectangular frame, determining characteristic areas of the plurality of third images according to the labels, establishing a second data set according to the information of the characteristic areas, and establishing a recognition training model according to the second data set;
the second camera receives fourth indication information sent by the recognition unit, and acquires a fourth image of the vehicle exterior according to the fourth indication information;
the recognition unit recognizes the fourth image according to the recognition training model, the recognition includes that the recognition training model recognizes the feature region of the fourth image corresponding to the information of the feature region in the third image, and the recognition unit generates a recognition result.
CN201810523878.1A 2018-05-28 2018-05-28 Image recognition method and system for fixed space and vehicle Active CN108805184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810523878.1A CN108805184B (en) 2018-05-28 2018-05-28 Image recognition method and system for fixed space and vehicle


Publications (2)

Publication Number Publication Date
CN108805184A CN108805184A (en) 2018-11-13
CN108805184B (en) 2020-07-31

Family

ID=64090558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810523878.1A Active CN108805184B (en) 2018-05-28 2018-05-28 Image recognition method and system for fixed space and vehicle

Country Status (1)

Country Link
CN (1) CN108805184B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110002315A (en) * 2018-11-30 2019-07-12 浙江新再灵科技股份有限公司 Vertical ladder electric vehicle detection method and warning system based on deep learning
CN109847306B (en) * 2019-01-11 2020-11-17 衡阳师范学院 Badminton pace training detection method and system based on image operation
CN110688902B (en) * 2019-08-30 2022-02-11 智慧互通科技股份有限公司 Method and device for detecting vehicle area in parking space
CN113139488B (en) * 2021-04-29 2024-01-12 北京百度网讯科技有限公司 Method and device for training segmented neural network

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102306304A (en) * 2011-03-25 2012-01-04 杜利利 Face occluder identification method and device
CN103956028A (en) * 2014-04-23 2014-07-30 山东大学 Automobile multielement driving safety protection method
CN106384360A (en) * 2016-09-22 2017-02-08 北京舜裔科技有限公司 Interactive video creation method
CN106982359A (en) * 2017-04-26 2017-07-25 深圳先进技术研究院 A kind of binocular video monitoring method, system and computer-readable recording medium
CN207232993U (en) * 2017-10-17 2018-04-13 北京汽车集团有限公司 Vehicle joint strip prompt system and vehicle
CN107909040A (en) * 2017-11-17 2018-04-13 吉林大学 One kind is hired a car verification method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
AU2015202613A1 (en) * 2014-05-16 2015-12-03 Cds Worldwide Pty Ltd An action control apparatus


Also Published As

Publication number Publication date
CN108805184A (en) 2018-11-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant