CN112257674A - Visual data processing method and device - Google Patents


Info

Publication number
CN112257674A
Authority
CN
China
Prior art keywords
information
visual
virtual
computing platform
structured
Prior art date
Legal status
Granted
Application number
CN202011288473.8A
Other languages
Chinese (zh)
Other versions
CN112257674B (en)
Inventor
邓练兵
欧阳可佩
宋宇轩
Current Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Original Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Dahengqin Technology Development Co Ltd filed Critical Zhuhai Dahengqin Technology Development Co Ltd
Priority to CN202011288473.8A
Publication of CN112257674A
Application granted
Publication of CN112257674B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Water Supply & Treatment (AREA)
  • Strategic Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a visual data processing method and device. The method comprises: acquiring, through a regional Internet of Things sensing system, visual data collected by a plurality of IoT devices; determining a key image frame from the visual data; determining depth information corresponding to the key image frame; determining one or more virtual coil regions in the key image frame according to the depth information; performing virtual object recognition on the one or more virtual coil regions respectively to obtain one or more virtual objects; performing feature recognition on each virtual object to obtain an object identifier, structured information and unstructured information corresponding to the virtual object; and storing the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database. In this embodiment, visual data is processed in a unified manner and stored accordingly so that it can serve city applications.

Description

Visual data processing method and device
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a device for processing visual data.
Background
At present, although the development and application of big data are advancing rapidly, many problems remain, such as insufficient data openness and sharing, narrow fields of application, under-developed and under-utilized data resources, and widespread disorderly misuse.
Big data plays an important role in smart city construction. Past experience shows that, for lack of a unified development and management platform, each city application could only be built independently, so data barriers and application barriers arose between city applications, forming a large number of information islands and preventing big data from delivering its function and value. A unified, generalizable cloud platform is therefore urgently needed to break down the fragmentation between city applications, eliminate the information gap, and achieve high-quality fusion of big data.
In the process of building a unified development and management platform, a large amount of heterogeneous visual data is often collected. In the prior art, it is difficult both to process such large amounts of visual data in a unified manner and to make the data available to the various city applications.
Disclosure of Invention
In view of the above, a method and a device for visual data processing are proposed that overcome, or at least partially solve, the above problems, comprising:
a visual data processing method, applied to a large-scale visual computing platform, the large-scale visual computing platform being connected to a regional Internet of Things sensing system and to a regional visual AI platform respectively, and the regional Internet of Things sensing system being connected to a plurality of Internet of Things devices, the method comprising the following steps:
the large-scale visual computing platform acquires visual data acquired by the plurality of Internet of things devices through the regional Internet of things sensing system, and determines a key image frame from the visual data;
the large-scale visual computing platform determines depth of field information corresponding to the key image frame, and determines one or more virtual coil areas in the key image frame according to the depth of field information;
the large-scale visual computing platform respectively identifies the virtual objects in the one or more virtual coil areas to obtain one or more virtual objects;
the large-scale visual computing platform performs feature recognition on each virtual object to obtain an object identifier, structured information and unstructured information corresponding to the virtual object;
the large-scale visual computing platform stores the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database;
the large-scale visual computing platform receives a query request sent by the regional visual AI platform; wherein the query request comprises a structured query condition and an unstructured query condition;
and the large-scale visual computing platform queries in the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, and feeds the target object identifier back to the regional visual AI platform.
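The combined structured/unstructured query described above can be sketched as a two-pass filter: the structured condition narrows candidates via SQL, and the unstructured condition (here a predicate over the stored region picture) filters the remainder in application code. This is a minimal illustration, not the patent's implementation; the table name, columns and predicate are hypothetical.

```python
import sqlite3

def query_targets(db, structured_cond, unstructured_pred):
    """First pass: SQL over structured fields; second pass: unstructured predicate."""
    cur = db.execute(
        "SELECT object_id, picture FROM visual_objects WHERE color = ? AND texture = ?",
        (structured_cond["color"], structured_cond["texture"]),
    )
    return [oid for oid, pic in cur.fetchall() if unstructured_pred(pic)]

# Demo setup with a hypothetical schema and sample rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE visual_objects (object_id TEXT, color TEXT, texture TEXT, picture BLOB)")
db.executemany("INSERT INTO visual_objects VALUES (?,?,?,?)", [
    ("粤C12345", "white", "smooth", b"\x01"),
    ("粤C67890", "white", "smooth", b"\x02"),
    ("粤C11111", "red",   "smooth", b"\x03"),
])
# The unstructured condition is stood in for by a byte-equality predicate.
hits = query_targets(db, {"color": "white", "texture": "smooth"},
                     lambda pic: pic == b"\x02")
```

In practice the unstructured pass would be an image-similarity check rather than byte equality; the two-pass shape stays the same.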
Optionally, the performing, by the large-scale visual computing platform, virtual object recognition on the one or more virtual coil regions respectively to obtain one or more virtual objects includes:
for each virtual coil region, determining brightness information of the pixels contained in the region;
performing edge detection according to the brightness information to obtain contour information;
one or more virtual objects are determined from the contour information.
Optionally, the object identifier includes license plate information, and the large-scale visual computing platform performs feature recognition on each virtual object to obtain an object identifier corresponding to the virtual object, including:
when the virtual object is a vehicle object, determining a license plate area corresponding to the virtual object;
and calling a preset neural network model, and carrying out image recognition in the license plate area to obtain license plate information corresponding to the virtual object.
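The two steps above (locate the plate region, then run recognition on it) can be sketched as a crop-then-classify pipeline. The bounding-box format and the stand-in `model` callable are assumptions for illustration; the patent's preset neural network is not specified.

```python
import numpy as np

def recognize_plate(frame, plate_box, model):
    """Crop the license plate region from the key frame and pass it to the
    recognition model (a stand-in for the preset neural network)."""
    x, y, w, h = plate_box
    region = frame[y:y + h, x:x + w]
    return model(region)

# Synthetic key frame: a white rectangle where the plate would be.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame[700:760, 900:1100] = 255

# Toy "model": reports a plate only when the crop is bright.
plate = recognize_plate(frame, (900, 700, 200, 60),
                        model=lambda r: "PLATE" if r.mean() > 128 else "")
```

A real system would replace the lambda with an OCR network; the cropping contract between the two stages is the point of the sketch.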
Optionally, the large-scale visual computing platform storing the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database includes:
determining time information corresponding to the key image frame;
storing the correspondence among the time information, the object identifier, the structured information and the unstructured information in a preset visual database.
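The storage step can be illustrated with a single table keyed by frame time and object identifier. This is a minimal sketch assuming a relational store; the schema, column names and serialization (JSON for structured fields, bytes for the region picture) are all hypothetical.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE visual_db (
    frame_time   TEXT,   -- time information of the key image frame
    object_id    TEXT,   -- e.g. license plate information
    structured   TEXT,   -- structured information, serialized as JSON
    unstructured BLOB    -- unstructured information, e.g. region picture bytes
)""")

def store_record(db, frame_time, object_id, structured, unstructured):
    """Persist one correspondence of time / identifier / structured / unstructured data."""
    db.execute("INSERT INTO visual_db VALUES (?,?,?,?)",
               (frame_time, object_id, structured, unstructured))

store_record(db, "2020-11-17T08:00:00", "粤C12345",
             '{"color": "white", "texture": "smooth"}', b"...")
count = db.execute("SELECT COUNT(*) FROM visual_db").fetchone()[0]
```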
Optionally, the regional visual AI platform includes a traffic control system, the large-scale visual computing platform is connected to the traffic control system, and the large-scale visual computing platform storing the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database includes:
acquiring traffic event information corresponding to the key image frames from the traffic control system;
storing the correspondence among the traffic event information, the object identifier, the structured information and the unstructured information in a preset visual database.
Optionally, the query request includes user information, and before the large-scale visual computing platform queries the visual database according to the structured query condition and the unstructured query condition, the method includes:
judging whether the user information matches preset user information;
and when the user information matches the preset user information, executing the step in which the large-scale visual computing platform queries the visual database according to the structured query condition and the unstructured query condition.
Optionally, the virtual object is a vehicle object, the structured information includes color information and texture information corresponding to the vehicle object, and the unstructured information includes region picture information corresponding to the vehicle object.
A visual data processing device, applied to a large-scale visual computing platform, the large-scale visual computing platform being connected to a regional Internet of Things sensing system and a regional visual AI platform respectively, and the regional Internet of Things sensing system being connected to a plurality of Internet of Things devices, the device comprising:
the key image frame determining module, used for the large-scale visual computing platform to acquire, through the regional Internet of Things sensing system, the visual data collected by the plurality of Internet of Things devices, and to determine a key image frame from the visual data;
the virtual coil area determining module is used for determining depth of field information corresponding to the key image frame and determining one or more virtual coil areas in the key image frame according to the depth of field information;
the virtual object determining module is used for respectively carrying out virtual object identification on the one or more virtual coil areas to obtain one or more virtual objects;
the system comprises a characteristic identification module, a characteristic identification module and a characteristic identification module, wherein the characteristic identification module is used for carrying out characteristic identification on each virtual object to obtain an object identifier, structural information and non-structural information corresponding to the virtual object;
the storage module, used for storing the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database;
the query request receiving module is used for receiving a query request sent by the area vision AI platform; wherein the query request comprises a structured query condition and an unstructured query condition;
and the target object identification determining module is used for inquiring in the visual database according to the structured inquiry condition and the unstructured inquiry condition to obtain a target object identification, and feeding the target object identification back to the area visual AI platform.
An electronic device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing a method of visual data processing as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of visual data processing as set forth above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, visual data collected by a plurality of Internet of Things devices is acquired through the regional Internet of Things sensing system, and a key image frame is determined from the visual data. Depth information corresponding to the key image frame is determined, and one or more virtual coil regions in the key image frame are determined according to the depth information. Virtual object recognition is performed on the one or more virtual coil regions respectively to obtain one or more virtual objects, and feature recognition is performed on each virtual object to obtain an object identifier, structured information and unstructured information corresponding to the virtual object. The correspondence among the object identifier, the structured information and the unstructured information is stored in a preset visual database. On receiving a query request sent by the regional visual AI platform, a query is performed in the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, which is fed back to the regional visual AI platform. Visual data is thereby processed in a unified manner and stored accordingly so that it can serve city applications.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings used in the description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is an overall architecture diagram of a cloud platform according to an embodiment of the present invention;
FIG. 2a is a flow chart illustrating steps of a method for visual data processing according to an embodiment of the present invention;
FIG. 2b is a diagram of a data architecture of a large-scale visual computing platform according to an embodiment of the present invention;
FIG. 2c is a schematic diagram of data transmission and data output of a large-scale visual computing platform according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating steps of another method of visual data processing according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for processing visual data according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In smart city construction, a cross-domain multi-dimensional big data public service cloud platform is built with unified standards, unified entries, unified collection, unified management, unified services and unified data. A city-level unified data standard is established, data barriers are broken down, the regional Internet of Things (IoT) and system data resources are converged, and all the service systems of the smart city are carried on the platform, creating a smart city ecology through open data sharing and open platform capabilities.
The construction goal of the cross-domain multi-dimensional big data public service cloud platform is to introduce advanced technologies such as cloud computing, big data, the Internet of Things and mobile internet to serve the various actors, business coordination mechanisms at every level, and intelligent applications across the fields of a smart city, forming an open, interconnected and intelligent smart city ecosystem. This promotes data sharing across city management, social livelihood, resource environment and the economic industry; improves administrative efficiency, city management capability and residents' quality of life; drives industry fusion, transformation and upgrading; fosters new business models; and enables the popularization and application of the cross-domain multi-dimensional big data public service cloud platform.
The cross-domain multi-dimensional big data public service cloud platform mainly involves leading-edge IT technologies such as cloud computing, big data, the Internet of Things and artificial intelligence:
1. cloud computing technology: the cloud computing mainly comprises six core components including elastic computing, a network, storage, a database, safety and middleware, and provides elastic, quick, stable and safe resources and computing power services.
2. Big data technology: the data construction and management are taken as the core, and the capabilities of data communication, data integration, data management, data sharing and the like are provided through related components such as data calculation, data development, data analysis, data visualization and the like.
3. Internet of Things technology: the IoT platform provides one-stop services such as device access, device management, monitoring and operation-maintenance, and security assurance. As an important component of the space-time IoT engine, it can provide basic IoT capability support and meet the future demands of intelligent management in a new type of smart city.
4. Artificial intelligence technology: an AI algorithm development platform is taken as a core, and a series of intelligent services are provided through related components such as a visual AI, text voice recognition, a Natural Language Processing (NLP) platform, a map service and the like.
As shown in fig. 1, an internet engine, a space-time internet of things engine, a cross-domain multi-dimensional big data engine, a regional internet of things sensing system, an open service gateway, a regional application portal, a secure operation and maintenance system, an open operation system, and other structures are deployed in a cloud platform, wherein the open service gateway includes a fusion service sharing center and a fusion data innovation center.
The following describes the details of the cloud platform:
space-time internet of things engine
The space-time internet of things engine is composed of a Geographic Information System (GIS), a Building Information Model (BIM) and a regional internet of things platform and is used for applying space data and a three-dimensional model to regional internet of things.
The geographic information system is a special and very important spatial information system, and can collect, store, manage, calculate, analyze, display and describe relevant geographic distribution data in the whole or part of space under the support of a computer hardware and software system.
The building information model is based on three-dimensional digital technology and integrates the engineering data models of all relevant information of a construction project; the resulting model is continuously deepened and changed as the project progresses.
(II) Internet Engine
A cloud DevOps platform and distributed middleware are deployed in the Internet engine to achieve efficient resource sharing and efficient function sharing of data.
Wherein, DevOps is a combination word of Development and Operations, which is a collective name of a group of processes, methods and systems, and is used for promoting Development of application programs/software engineering, communication, cooperation and integration between technical operation and quality assurance departments.
The distributed middleware is a kind of software between the application system and the system software, and links each part of the application system or different applications on the network by using the basic service or function provided by the system software, thereby achieving the purpose of resource sharing and function sharing.
(III) Cross-domain multidimensional big data engine
The cross-domain multi-dimensional big data engine is provided with a unified data management platform and a big data engine and used for realizing the unified management of cross-domain data.
(IV) regional Internet of things sensing system
The regional Internet of Things sensing system is composed of sensing devices such as pressure, humidity, camera, light source, infrared and temperature sensors, together with their device data.
(V) converged service sharing center and converged data innovation center
The fusion service sharing center may create different data sharing centers after fusing the data of each region according to service classification, for example: the system comprises a personal information center, a credit information center, a legal information center, a financial service center, a travel service center, a comprehensive treatment service center, a space-time service center, an Internet of things service center and other sharing centers.
The fusion data innovation center realizes the innovative application of fusion data through a data fusion system and an AI algorithm system, wherein the AI algorithm system comprises the following components: a full-time global traffic dynamic perception engine, a progressive video search engine and a large-scale visual computing platform.
After fusing the data, the fusion service sharing center and the fusion data innovation center can present the processed data through the regional application portal.
(VI) regional application Portal
The regional application portal is mainly divided into sections such as ecological environmental protection, all-region tourism, property cities, enterprise intelligent services, electronic fences, intelligent communities, international talent island, regional economic brain, cross-border e-commerce and cross-domain authentication. Users enter each section through the regional application portal and obtain the information for that section, formed from the processed data.
(VII) safety operation and maintenance system
The safe operation and maintenance system comprises safety guarantee, multi-cloud management, regional cloud unified management, a platform interface and the like and is used for guaranteeing the safe operation of the whole cloud platform.
(eighth) open operation system
The open operation system comprises a unified entry, capability opening, an operation platform and the like, and is used to establish a unified entry for data and to access the data of each region.
(nine) other structures
In addition, data can be processed through a supercomputing cluster, a regional cloud computing platform and an OpenStack cluster (OpenStack is an open-source cloud computing management platform project, a combination of a series of open-source software projects).
Referring to fig. 2a, a flowchart of steps of a method for processing visual data according to an embodiment of the present invention is shown, and is applied to a large-scale visual computing platform, where the large-scale visual computing platform is respectively connected to a regional internet of things sensing system and a regional visual AI platform, and the regional internet of things sensing system is connected to multiple pieces of internet of things equipment, where the method specifically includes the following steps:
step 201, a large-scale visual computing platform acquires visual data acquired by a plurality of internet of things devices through a regional internet of things sensing system, and determines a key image frame from the visual data;
the large-scale visual computing platform can comprise a computing engine, the computing engine can comprise an access front end, a data access module, a computing front end, a computing module, a storage and search front end and a storage and search module, the regional internet of things sensing system can be connected with related internet of things equipment such as pressure, humidity, a camera, structured light, infrared sensing and temperature, the visual data can comprise video data and continuous frame picture data, and the key image frame can be picture data used for computing in the visual data.
Specifically, as shown in fig. 2b, the data access module includes an internet video access sub-module, a view access sub-module, a streaming media forwarding service sub-module, a coil management sub-module, a pull frame and preset bit management sub-module, an MQ (Message Queue) and a data hub (streaming data bus) push sub-module, and the data access module is configured to receive data provided by an internet video platform, a video sharing platform, and a video image information base, and transmit the data to a data calculation service after processing the data;
the computing module comprises a video and data input MQ submodule, an algorithm processing submodule and a structured output submodule, and is used for receiving the data transmitted by the data access module and transmitting the processed data to the image searching service;
the Storage and search module comprises submodules such as a search engine, an MQ, a DataHub, an RDS (Relational Database Service), an ES (elastic search engine), an OSS (Object Storage Service) and the like, and the Storage and search module is used for receiving the data transmitted by the calculation module.
As shown in fig. 2c, data transmission in the data architecture includes a streaming media expansion service, which consists of a stream processing module group, a stream output module group and a central management server. The stream processing module group includes a plurality of stream processing modules; it receives data transmitted by the video monitoring platform together with various information provided by the central management server, and transmits the processed data to the stream output module group. The stream output module group includes a plurality of stream output modules; it receives the data from the stream processing module group and transmits the processed data to other applications, which may be large-scale visual computing platforms. The data transmitted by the video monitoring platform may be data provided by an internet video platform, a video sharing platform and a video image information base.
In practical application, the regional internet of things sensing system can store visual data acquired by the internet of things sensing device in real time, and then the access front end and the data access module in the large-scale visual computing platform can acquire video data and multi-frame picture data in the regional internet of things sensing system, and can determine one frame of image in the video data and the multi-frame picture data, for example, determine one frame of image in the video data and the multi-frame picture data according to a preset time interval.
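The "one frame per preset time interval" policy above can be sketched as simple stride sampling over a decoded frame sequence. This is a minimal illustration under the assumption of a fixed frame rate; the function name and parameters are hypothetical.

```python
def select_key_frames(frames, fps, interval_s):
    """Pick one key frame every `interval_s` seconds from a sequence
    captured at `fps` frames per second."""
    step = max(1, int(fps * interval_s))
    return frames[::step]

# Stand-in for decoded video frames: just their indices.
frames = list(range(100))
keys = select_key_frames(frames, fps=25, interval_s=1.0)  # one frame per second
```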
Step 202, the large-scale visual computing platform determines depth of field information corresponding to the key image frame, and determines one or more virtual coil areas in the key image frame according to the depth of field information;
the depth information may be depth information in the picture data, for example, the depth information may be a distance from an object in the picture data to the photographing apparatus, and the virtual coil region may be a region divided based on the depth information.
After determining the key image frame, that is, after determining one frame of image, depth information in the image may be determined, for example, structured light sensing equipment may be used to obtain structured light information, and then depth information in the image may be determined in a structured light coding manner.
After the depth of field information is determined, one or more virtual coil regions may be determined according to a preset depth of field interval, for example, the depth of field interval may be 30 meters to 20 meters, 20 meters to 10 meters, or within 10 meters, and then virtual coil regions within 30 meters to 20 meters, 20 meters to 10 meters, or within 10 meters may be determined.
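Partitioning the key frame into virtual coil regions by depth interval, as described above, amounts to banding a per-pixel depth map. A minimal sketch using the band edges quoted in the text (10 m, 20 m, 30 m); the labeling scheme is an illustrative assumption.

```python
import numpy as np

def coil_regions(depth_map, bands=(10.0, 20.0, 30.0)):
    """Label each pixel with its depth band:
    0 = within 10 m, 1 = 10 to 20 m, 2 = 20 to 30 m, 3 = beyond 30 m."""
    return np.digitize(depth_map, bands)

# Tiny 2x2 depth map (meters) covering all four bands.
depth = np.array([[5.0, 15.0],
                  [25.0, 40.0]])
labels = coil_regions(depth)
```

Each distinct label then delimits one virtual coil region on which object recognition runs independently.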
Step 203, the large-scale visual computing platform respectively identifies virtual objects in one or more virtual coil areas to obtain one or more virtual objects;
the virtual object is a vehicle object or a pedestrian object.
After the virtual coil regions are determined, contour information of all objects in one of the virtual coil regions may be determined, and one or more virtual objects may be determined according to the contour information.
In practical application, the contour information of the vehicle object and the contour information of the pedestrian object can be preset in the large-scale visual computing platform. After the contour information of an object in the virtual coil area is determined, the vehicle object or the pedestrian object can then be determined by matching that contour information with the preset contour information.
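A minimal sketch of matching against preset contours, here reduced to comparing bounding-box aspect ratios; the template values and the helper `classify_by_contour` are hypothetical:

```python
def classify_by_contour(contour_wh, templates, tol=0.25):
    """Match an object's contour, reduced here to its bounding-box
    width/height ratio, against preset template ratios; return the
    best-matching label, or None if nothing is within tolerance."""
    width, height = contour_wh
    ratio = width / height
    best_label, best_err = None, tol
    for label, template_ratio in templates.items():
        err = abs(ratio - template_ratio) / template_ratio
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Hypothetical preset contour templates for the two object classes.
templates = {"vehicle": 2.0, "pedestrian": 0.4}
```

A real system would match full contour shapes rather than a single ratio; the structure of the lookup is the same.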
Step 204, the large-scale visual computing platform performs feature recognition on each virtual object to obtain object identification, structural information and non-structural information corresponding to the virtual object;
the structured information includes color information and texture information corresponding to the vehicle object, and the unstructured information includes region picture information corresponding to the vehicle object, which may be specifically shown in the following table:
(The table is reproduced in the original filing as Figure BDA0002783120770000101; it lists, for each object identifier, the color information and texture information as structured information and the region picture information as unstructured information.)
after the virtual object is determined, gray information of the pixel points included in the virtual object can be determined, and the color information can be determined according to the gray information. The gray information can also be counted to establish a gray histogram, and the texture information can be determined according to the gray histogram. In addition, local picture information in the region corresponding to the virtual object can be determined; for example, when the virtual object is a vehicle object, the driver region corresponding to the vehicle object can be determined, and the picture information of the driver region can be determined.
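The gray-histogram step can be illustrated as follows; the mean-intensity colour proxy and the entropy-based texture measure are assumed stand-ins for the embodiment's unspecified histogram features:

```python
import numpy as np

def gray_features(gray_patch):
    """Derive structured features from a virtual object's gray values:
    the mean intensity as a colour proxy, and the entropy of a 16-bin
    gray histogram as a crude texture measure."""
    hist, _ = np.histogram(gray_patch, bins=16, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before log
    entropy = float(-(p * np.log2(p)).sum())
    return {"mean_gray": float(np.mean(gray_patch)),
            "texture_entropy": entropy}
```

A perfectly uniform patch yields zero entropy, while a richly textured patch spreads mass across many bins and scores higher.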
Step 205, the large-scale visual computing platform stores the corresponding relation of the object identification, the structural information and the non-structural information in a preset visual database;
after the object identifier, the structured information, and the unstructured information are obtained, the corresponding relationship among them for the virtual object may be stored in a preset visual database. For example, a virtual object with the object identifier A may include color information B and texture information C; when this virtual object is stored in the preset visual database, the object identifier A, the color information B, and the texture information C may be stored.
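As one way of sketching such a visual database, an SQLite table (a hypothetical schema, not prescribed by the embodiment) could hold the correspondence:

```python
import sqlite3

# Hypothetical schema for the preset visual database (in memory here).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE visual_objects (
    object_id      TEXT PRIMARY KEY,  -- object identifier
    color          TEXT,              -- structured information
    texture        TEXT,              -- structured information
    region_picture BLOB               -- unstructured information
)""")
# Store the example correspondence: identifier A, color B, texture C.
conn.execute("INSERT INTO visual_objects VALUES (?, ?, ?, ?)",
             ("A", "B", "C", b"\x89PNG..."))
conn.commit()
```

The time-information and traffic-event sub-steps described next would simply add further columns to such a row.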
In an embodiment of the present invention, step 205 may further include the following sub-steps:
a substep 11, determining time information corresponding to the key image frame;
after the object identifier, the structured information, and the unstructured information are obtained, the time information of the frame of image may be determined. For example, if the image acquired through the internet of things device was captured in month X of year X, the time information of the frame of image may be determined to be month X of year X.
And a substep 12 of storing the correspondence of the time information, the object identification, the structured information, and the non-structured information in a preset visual database.
After the time information is determined, the corresponding relationship among the time information, the object identifier, the structured information, and the unstructured information may be stored in a preset visual database. For example, a virtual object with the object identifier A may include color information B and texture information C; when this virtual object is stored in the preset visual database, the object identifier A, the color information B, the texture information C, and the time information (month X of year X) may be stored.
In an embodiment of the present invention, the area vision AI platform may include a traffic control system, the large-scale vision computing platform may be connected to the traffic control system, and step 205 may further include the following sub-steps:
a substep 21, acquiring traffic event information corresponding to the key image frame from a traffic control system;
the traffic control system may include a full-time global dynamic traffic awareness engine, the full-time global dynamic traffic awareness engine may be configured to identify traffic event information of a region based on visual data, and the traffic event information may include any one of the following: traffic jam events, traffic accident events, traffic violation events.
After the object identifier, the structured information, and the non-structured information are obtained, one or more pieces of traffic event information in the frame image may be determined by the traffic control system, and then traffic event information corresponding to the virtual object may be determined, for example, a traffic accident event and a traffic violation event in the image may be determined, and then a traffic violation event corresponding to the virtual object may be determined.
And a substep 22 of storing the correspondence of the traffic event information, the object identification, the structured information, and the non-structured information in a preset visual database.
After the traffic event information is determined, the corresponding relationship among the traffic event information, the object identifier, the structured information, and the unstructured information may be stored in a preset visual database. For example, a virtual object with the object identifier A may include color information B and texture information C; when this virtual object is stored in the preset visual database, the object identifier A, the color information B, the texture information C, and the traffic violation event may be stored.
Step 206, the large-scale visual computing platform receives a query request sent by the regional visual AI platform;
the query request may include a structured query condition and an unstructured query condition, the structured query condition may be a color information query condition and a texture information query condition, and the unstructured query condition may be a region picture information query condition.
After the storage, the large-scale visual computing platform can provide an object query service for the regional visual AI platform based on the stored object identifiers, structured information, and unstructured information. When a user needs the object query service, a query request can be sent to the large-scale visual computing platform through the regional visual AI platform, so that the large-scale visual computing platform receives the query request sent by the regional visual AI platform.
And step 207, the large-scale visual computing platform queries in the visual database according to the structured query conditions and the unstructured query conditions to obtain a target object identifier, and feeds the target object identifier back to the regional visual AI platform.
After receiving the query request, the large-scale visual computing platform may query the preset visual database for the corresponding color information, texture information, and region picture information according to the query conditions in the request, such as the color information query condition, the texture information query condition, and the region picture information query condition. The object identifier determined from the matched information, that is, the target object identifier, may then be fed back to the regional visual AI platform to complete the object query service.
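A sketch of the combined structured/unstructured query, with the unstructured region-picture condition reduced to a hypothetical picture hash (schema and data are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visual_objects "
             "(object_id TEXT, color TEXT, texture TEXT, region_hash TEXT)")
conn.executemany("INSERT INTO visual_objects VALUES (?, ?, ?, ?)",
                 [("A", "red", "striped", "h1"),
                  ("D", "red", "plain", "h2")])

def query_objects(conn, color, texture, region_hash):
    """Resolve a query request: the structured conditions (colour,
    texture) plus the unstructured region-picture condition, here
    reduced to a precomputed picture hash."""
    cur = conn.execute(
        "SELECT object_id FROM visual_objects "
        "WHERE color = ? AND texture = ? AND region_hash = ?",
        (color, texture, region_hash))
    return [row[0] for row in cur.fetchall()]
```

The returned identifiers play the role of the target object identifier fed back to the regional visual AI platform.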
In the embodiment of the invention, visual data acquired by a plurality of internet of things devices is acquired through the regional internet of things sensing system, and a key image frame is determined from the visual data. Depth of field information corresponding to the key image frame is determined, and one or more virtual coil regions in the key image frame are determined according to the depth of field information. Virtual object identification is performed on the one or more virtual coil regions to obtain one or more virtual objects, and feature identification is performed on each virtual object to obtain the object identifier, structured information, and unstructured information corresponding to the virtual object. The corresponding relationship among the object identifier, the structured information, and the unstructured information is stored in a preset visual database. A query request sent by the regional visual AI platform is received, a query is performed in the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, and the target object identifier is fed back to the regional visual AI platform. The visual data is thus uniformly processed and correspondingly stored for use in urban applications.
Referring to fig. 3, a flowchart illustrating steps of another method for processing visual data according to an embodiment of the present invention is shown, which may specifically include the following steps:
Step 301, a large-scale visual computing platform acquires visual data acquired by a plurality of internet of things devices through a regional internet of things sensing system, and determines a key image frame from the visual data;
step 302, the large-scale visual computing platform determines depth of field information corresponding to the key image frame, and determines one or more virtual coil areas in the key image frame according to the depth of field information;
step 303, determining brightness information of pixel points contained in each virtual coil area;
after the virtual coil regions are determined, the brightness information of all the pixel points in one of the virtual coil regions may be determined, for example, the key image frame may be an RGB (red, green, and blue) image, the RGB values of all the pixel points in the virtual coil regions may be determined, and then the RGB values may be weighted to obtain the brightness information of all the pixel points.
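The weighted RGB-to-brightness conversion could, for example, use the ITU-R BT.601 coefficients (one common choice; the embodiment leaves the weights unspecified):

```python
def luminance(r, g, b):
    """Weighted sum of RGB values giving per-pixel brightness
    (ITU-R BT.601 luma coefficients, summing to 1.0)."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Because the weights sum to one, a pure white pixel (255, 255, 255) maps to brightness 255, and green contributes most, matching perceived brightness.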
Step 304, performing edge detection according to the brightness information to obtain contour information;
after the luminance information is obtained, the change of the luminance information between the pixel points may be determined, and edge detection may be performed according to the change of the luminance information, for example, a change interval of the luminance information may be preset, and when the change of the luminance information is greater than the preset interval, it may be considered that an edge exists, that is, an edge exists between the pixel points.
After detecting the edge, one or more contour information may be determined according to the edge, for example, a plurality of edges may be determined according to a change in luminance information between a plurality of pixel points, and when a closed region is formed between the plurality of edges, the contour information formed by the plurality of edges may be determined.
Because, in a virtual coil area where one or more objects exist, the change of the brightness information between the objects and their surroundings is large, the contour information of the objects can be determined from the brightness information of the pixel points and the changes between those brightness values.
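The threshold-on-brightness-change rule of steps 303-304 can be sketched as follows; the threshold value is illustrative:

```python
import numpy as np

def edge_mask(luma, threshold=30.0):
    """Mark an edge wherever the brightness change between horizontally
    or vertically neighbouring pixels exceeds the preset interval."""
    dx = np.abs(np.diff(luma, axis=1))  # horizontal brightness change
    dy = np.abs(np.diff(luma, axis=0))  # vertical brightness change
    mask = np.zeros(luma.shape, dtype=bool)
    mask[:, 1:] |= dx > threshold
    mask[1:, :] |= dy > threshold
    return mask

# A dark region abutting a bright one: the boundary column is an edge.
luma = np.array([[10.0, 10.0, 200.0],
                 [10.0, 10.0, 200.0]])
```

Closed chains of such edge pixels then yield the contour information used in step 305.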
Step 305, determining one or more virtual objects according to the contour information;
after the contour information is obtained, a virtual object corresponding to the contour information may be determined, for example, the contour information of the vehicle object and the contour information of the pedestrian object may be preset, and then, after the virtual object is determined, the vehicle object and the pedestrian object may be determined by matching the contour information of the object in the virtual coil region with the preset contour information.
Step 306, the large-scale visual computing platform performs feature recognition on each virtual object to obtain object identification, structural information and non-structural information corresponding to the virtual object;
in an embodiment of the present invention, the object identifier may include license plate information, and step 306 may include the following sub-steps:
substep 31, when the virtual object is a vehicle object, determining a license plate region corresponding to the virtual object;
the contour information of the vehicle object can be preset in the large-scale visual computing platform, so that the position of the license plate region corresponding to the vehicle object can be determined when the contour information of the vehicle object is set, and the picture information of the position can be determined when the virtual object is the vehicle object.
And a substep 32 of calling a preset neural network model and carrying out image recognition in the license plate region to obtain license plate information corresponding to the virtual object.
The neural network model may be trained on a large number of digit samples.
After the license plate region is determined, namely after the picture information corresponding to the license plate region is determined, a preset neural network model can be called, digital recognition is carried out on the license plate region, and then the digital information in the license plate region, namely the license plate information can be obtained.
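Sub-steps 31-32 can be sketched with the recognition model abstracted to any callable; the character segmentation and the stand-in model below are hypothetical, not the trained neural network of the embodiment:

```python
def recognize_plate(char_patches, model):
    """Run a preset recognition model over the segmented character
    patches of the license plate region and join the results into
    the license plate information string."""
    return "".join(str(model(patch)) for patch in char_patches)

# Hypothetical stand-in model: each "patch" already carries its digit.
plate = recognize_plate([4, 0, 2], lambda patch: patch)
```

In a deployed system, `model` would be the preset neural network applied to each cropped character image.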
Step 307, the large-scale visual computing platform stores the corresponding relation of the object identification, the structured information and the unstructured information in a preset visual database;
Step 308, the large-scale visual computing platform receives a query request sent by the regional visual AI platform; wherein the query request comprises a structured query condition and an unstructured query condition;
Step 309, the large-scale visual computing platform queries the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, and feeds the target object identifier back to the regional visual AI platform.
In an embodiment of the present invention, before step 309, the following steps may be further included:
judging whether user information in the query request matches preset user information; and when the user information matches the preset user information, executing the step in which the large-scale visual computing platform queries the visual database according to the structured query condition and the unstructured query condition.
The query request may include user information, and the preset user information may be a user allowed to perform a query.
After receiving the query request, the large-scale visual computing platform may determine the user information in the query request and judge whether it matches the preset user information. When the user information matches the preset user information, the user is a user allowed to perform a query, and the platform may query the visual database according to the structured query condition and the unstructured query condition, that is, execute step 309. When the user information does not match the preset user information, the user is not allowed to perform a query, and step 309 is not executed.
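The user-matching gate before step 309 can be sketched as follows; the whitelist contents and the helper `handle_query` are hypothetical:

```python
# Hypothetical whitelist standing in for the preset user information.
ALLOWED_USERS = {"traffic_admin", "city_ops"}

def handle_query(request, allowed=ALLOWED_USERS):
    """Execute the database lookup only when the user information in
    the query request matches a preset user; otherwise refuse."""
    if request.get("user") not in allowed:
        return None          # user not allowed: step 309 is not executed
    return "run_query"       # placeholder for the step-309 lookup
```

A production system would typically authenticate the request rather than compare a bare user field, but the control flow is the same.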
In the embodiment of the invention, the large-scale visual computing platform acquires visual data acquired by a plurality of internet of things devices through the regional internet of things sensing system and determines a key image frame from the visual data. It determines depth of field information corresponding to the key image frame and determines one or more virtual coil regions in the key image frame according to the depth of field information. It determines the brightness information of the pixel points contained in each virtual coil region, performs edge detection according to the brightness information to obtain contour information, and determines one or more virtual objects according to the contour information. The large-scale visual computing platform performs feature recognition on each virtual object to obtain the object identifier, structured information, and unstructured information corresponding to the virtual object, and stores their corresponding relationship in a preset visual database. It receives a query request sent by the regional visual AI platform, wherein the query request comprises a structured query condition and an unstructured query condition, queries the visual database according to those conditions to obtain a target object identifier, and feeds the target object identifier back to the regional visual AI platform. The visual data is thus uniformly processed and correspondingly stored for use in urban applications.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a schematic structural diagram of a device for processing visual data according to an embodiment of the present invention is shown, and is applied to a large-scale visual computing platform, where the large-scale visual computing platform is respectively connected to a regional internet of things sensing system and a regional visual AI platform, and the regional internet of things sensing system is connected to multiple internet of things devices, and specifically includes the following modules:
a key image frame determining module 401, configured to obtain, by the large-scale visual computing platform through the regional internet of things sensing system, visual data acquired by the multiple internet of things devices, and determine a key image frame from the visual data;
a virtual coil area determining module 402, configured to determine depth information corresponding to the key image frame, and determine one or more virtual coil areas in the key image frame according to the depth information;
a virtual object determining module 403, configured to perform virtual object identification on the one or more virtual coil regions respectively to obtain one or more virtual objects;
a feature recognition module 404, configured to perform feature recognition on each virtual object to obtain an object identifier, structural information, and non-structural information corresponding to the virtual object;
a storage module 405, configured to store a corresponding relationship between the object identifier, the structured information, and the non-structured information in a preset visual database;
a query request receiving module 406, configured to receive a query request sent by the area vision AI platform; wherein the query request comprises a structured query condition and an unstructured query condition;
and the target object identifier determining module 407 is configured to query the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, and feed the target object identifier back to the area visual AI platform.
In an embodiment of the present invention, the virtual object determining module 403 includes:
the brightness information determining submodule is used for determining the brightness information of the contained pixel points aiming at each virtual coil area;
the contour information obtaining submodule is used for carrying out edge detection according to the brightness information to obtain contour information;
and the plurality of virtual object determining sub-modules are used for determining one or more virtual objects according to the contour information.
In an embodiment of the present invention, the object identifier includes license plate information, and the feature recognition module 404 includes:
the license plate region determining submodule is used for determining a license plate region corresponding to the virtual object when the virtual object is a vehicle object;
and the license plate information obtaining sub-module is used for calling a preset neural network model, and carrying out image recognition in the license plate area to obtain the license plate information corresponding to the virtual object.
In an embodiment of the present invention, the storage module 405 includes:
the time information determining submodule is used for determining time information corresponding to the key image frame;
and the time information corresponding relation storage sub-module is used for storing the corresponding relation among the time information, the object identification, the structured information, and the unstructured information in a preset visual database.
In an embodiment of the present invention, the area vision AI platform includes a traffic control system, the large-scale vision computing platform is connected to the traffic control system, and the storage module 405 includes:
the traffic event information acquisition sub-module is used for acquiring the traffic event information corresponding to the key image frame from the traffic control system;
and the traffic event information corresponding relation storage sub-module is used for storing the corresponding relation among the traffic event information, the object identification, the structured information, and the unstructured information in a preset visual database.
In an embodiment of the present invention, the query request includes user information, and the apparatus further includes:
the matching module is used for judging whether the user information is matched with preset user information or not;
and the query module is used for executing the query of the large-scale visual computing platform in the visual database according to the structured query condition and the unstructured query condition when the user information is matched with preset user information.
In the embodiment of the invention, visual data acquired by a plurality of internet of things devices is acquired through the regional internet of things sensing system, and a key image frame is determined from the visual data. Depth of field information corresponding to the key image frame is determined, and one or more virtual coil regions in the key image frame are determined according to the depth of field information. Virtual object identification is performed on the one or more virtual coil regions to obtain one or more virtual objects, and feature identification is performed on each virtual object to obtain the object identifier, structured information, and unstructured information corresponding to the virtual object. The corresponding relationship among the object identifier, the structured information, and the unstructured information is stored in a preset visual database. A query request sent by the regional visual AI platform is received, a query is performed in the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, and the target object identifier is fed back to the regional visual AI platform. The visual data is thus uniformly processed and correspondingly stored for use in urban applications.
An embodiment of the present invention also provides an electronic device, which may include a processor, a memory, and a computer program stored on the memory and capable of running on the processor, wherein the computer program, when executed by the processor, implements the method of visual data processing as above.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method of visual data processing as above.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and apparatus for processing visual data provided above are described in detail, and the principles and embodiments of the present invention are explained herein using specific examples, which are merely used to help understand the method and core ideas of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for processing visual data is applied to a large-scale visual computing platform, the large-scale visual computing platform is respectively connected with a regional Internet of things sensing system and a regional visual AI platform, the regional Internet of things sensing system is connected with a plurality of Internet of things devices, and the method comprises the following steps:
the large-scale visual computing platform acquires visual data acquired by the plurality of Internet of things devices through the regional Internet of things sensing system, and determines a key image frame from the visual data;
the large-scale visual computing platform determines depth of field information corresponding to the key image frame, and determines one or more virtual coil areas in the key image frame according to the depth of field information;
the large-scale visual computing platform respectively identifies the virtual objects in the one or more virtual coil areas to obtain one or more virtual objects;
the large-scale visual computing platform performs feature recognition on each virtual object to obtain an object identifier, structural information and non-structural information corresponding to the virtual object;
the large-scale visual computing platform stores the corresponding relation among the object identification, the structural information and the non-structural information in a preset visual database;
the large-scale visual computing platform receives a query request sent by the regional visual AI platform; wherein the query request comprises a structured query condition and an unstructured query condition;
and the large-scale visual computing platform queries in the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, and feeds the target object identifier back to the regional visual AI platform.
2. The method of claim 1, wherein the large-scale visual computing platform performs virtual object recognition on the one or more virtual coil regions respectively to obtain one or more virtual objects, and comprises:
for each virtual coil area, determining brightness information of the pixel points contained in the area;
performing edge detection according to the brightness information to obtain contour information;
determining one or more virtual objects according to the contour information.
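The brightness-to-contour steps of claim 2 can be sketched in a few lines of plain Python. The gradient threshold and the list-of-lists brightness layout are assumptions of this sketch; a production system would more likely use a standard operator such as Canny edge detection.

```python
def edge_mask(brightness, threshold=50):
    """Mark pixel (r, c) as an edge when its brightness differs from the
    pixel to its left or the pixel above by more than `threshold` -- a
    crude stand-in for edge detection on per-pixel brightness information."""
    rows, cols = len(brightness), len(brightness[0])
    edges = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if c > 0 and abs(brightness[r][c] - brightness[r][c - 1]) > threshold:
                edges[r][c] = True
            if r > 0 and abs(brightness[r][c] - brightness[r - 1][c]) > threshold:
                edges[r][c] = True
    return edges

# A 6x6 dark frame with a bright 2x2 patch: edges appear on the patch border,
# and the connected edge pixels would outline one virtual object.
frame = [[10] * 6 for _ in range(6)]
for r in (2, 3):
    for c in (2, 3):
        frame[r][c] = 200
```

Grouping the resulting edge pixels into connected contours (omitted here) then yields the one or more virtual objects of the claim.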
3. The method of claim 2, wherein the object identifier includes license plate information, and the step in which the large-scale visual computing platform performs feature recognition on each virtual object to obtain the object identifier corresponding to the virtual object comprises:
when the virtual object is a vehicle object, determining a license plate area corresponding to the virtual object;
calling a preset neural network model to perform image recognition in the license plate area, obtaining the license plate information corresponding to the virtual object.
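Claim 3's flow can be sketched with the preset neural network model replaced by injected stand-in functions; `plate_locator` and `plate_reader` are hypothetical names invented for this illustration, not taken from the patent.

```python
def recognize_plate(virtual_object, plate_locator, plate_reader):
    """Locate the license-plate region of a vehicle object, then run a
    recognition model on that region; non-vehicle objects yield no plate."""
    if virtual_object.get("kind") != "vehicle":
        return None                                   # only vehicles have plates
    region = plate_locator(virtual_object["image"])   # license plate area
    return plate_reader(region)                       # stand-in for the model

# Toy stand-ins: "locate" = take the first six characters of a string image,
# "read" = uppercase them. A real system would crop pixels and run a CNN.
plate = recognize_plate({"kind": "vehicle", "image": "abc123 rest-of-frame"},
                        lambda img: img[:6],
                        lambda region: region.upper())
```

The injection of the locator and reader keeps the sketch testable while leaving the actual neural model unspecified, as the claim itself does.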
4. The method of claim 1, 2 or 3, wherein the step in which the large-scale visual computing platform stores the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database comprises:
determining time information corresponding to the key image frame;
storing the correspondence among the time information, the object identifier, the structured information and the unstructured information in a preset visual database.
5. The method of claim 1, 2 or 3, wherein the regional visual AI platform comprises a traffic control system, the large-scale visual computing platform is connected to the traffic control system, and the step in which the large-scale visual computing platform stores the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database comprises:
acquiring traffic event information corresponding to the key image frame from the traffic control system;
storing the correspondence among the traffic event information, the object identifier, the structured information and the unstructured information in a preset visual database.
6. The method of claim 1, wherein the query request includes user information, and before the large-scale visual computing platform queries the visual database according to the structured query condition and the unstructured query condition, the method further comprises:
judging whether the user information matches preset user information;
when the user information matches the preset user information, executing the step of querying, by the large-scale visual computing platform, the visual database according to the structured query condition and the unstructured query condition.
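The claim-6 gate amounts to a guard around the query call. In this sketch the constant-time comparison is a hardening choice of the illustration, not something the claim requires, and `run_query` is an injected hypothetical callable.

```python
import hmac

def authorize_and_query(user_info, preset_user_info, run_query):
    """Run the query only when the requester's user information matches the
    preset user information; otherwise refuse and return None."""
    if hmac.compare_digest(user_info, preset_user_info):
        return run_query()
    return None
```

A matching user gets the query result; any other caller gets nothing, so unauthorized requests never reach the visual database.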
7. The method of claim 1, wherein the virtual object is a vehicle object, the structured information includes color information and texture information corresponding to the vehicle object, and the unstructured information includes region picture information corresponding to the vehicle object.
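One way to picture claim 7's split between structured and unstructured information is a record type. The field names and sample values here are invented for illustration and do not come from the patent.

```python
from dataclasses import dataclass

@dataclass
class VehicleRecord:
    object_id: str         # object identifier, e.g. license plate information
    color: str             # structured information: vehicle color
    texture: str           # structured information: vehicle texture
    region_picture: bytes  # unstructured information: cropped region image

# Sample record with a placeholder plate and a stub image payload.
rec = VehicleRecord("ABC123", "red", "metallic", b"\x89PNG")
```

Structured fields index cheaply for exact-match queries; the raw region picture stays opaque bytes, queried only by unstructured means.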
8. An apparatus for processing visual data, applied to a large-scale visual computing platform, wherein the large-scale visual computing platform is connected to a regional Internet of Things sensing system and a regional visual AI platform respectively, and the regional Internet of Things sensing system is connected to a plurality of Internet of Things devices, the apparatus comprising:
a key image frame determining module, configured to acquire, through the regional Internet of Things sensing system, visual data collected by the plurality of Internet of Things devices, and determine a key image frame from the visual data;
a virtual coil area determining module, configured to determine depth of field information corresponding to the key image frame, and determine one or more virtual coil areas in the key image frame according to the depth of field information;
a virtual object determining module, configured to perform virtual object recognition on the one or more virtual coil areas respectively to obtain one or more virtual objects;
a feature recognition module, configured to perform feature recognition on each virtual object to obtain an object identifier, structured information and unstructured information corresponding to the virtual object;
a storage module, configured to store the correspondence among the object identifier, the structured information and the unstructured information in a preset visual database;
a query request receiving module, configured to receive a query request sent by the regional visual AI platform, wherein the query request comprises a structured query condition and an unstructured query condition;
a target object identifier determining module, configured to query the visual database according to the structured query condition and the unstructured query condition to obtain a target object identifier, and feed the target object identifier back to the regional visual AI platform.
9. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method of visual data processing according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of visual data processing according to any one of claims 1 to 7.
CN202011288473.8A 2020-11-17 2020-11-17 Visual data processing method and device Active CN112257674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011288473.8A CN112257674B (en) 2020-11-17 2020-11-17 Visual data processing method and device

Publications (2)

Publication Number Publication Date
CN112257674A true CN112257674A (en) 2021-01-22
CN112257674B CN112257674B (en) 2022-05-27

Family

ID=74265922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011288473.8A Active CN112257674B (en) 2020-11-17 2020-11-17 Visual data processing method and device

Country Status (1)

Country Link
CN (1) CN112257674B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509090A (en) * 2011-11-29 2012-06-20 冷明 Vehicle feature recognition device based on public security video images in skynet engineering
CN103348381A (en) * 2012-02-09 2013-10-09 松下电器产业株式会社 Image recognition device, image recognition method, program and integrated circuit
CN104200671A (en) * 2014-09-09 2014-12-10 安徽四创电子股份有限公司 Method and system for managing virtual gate based on big data platform
CN106909911A (en) * 2017-03-09 2017-06-30 广东欧珀移动通信有限公司 Image processing method, image processing apparatus and electronic installation
CN108874910A (en) * 2018-05-28 2018-11-23 思百达物联网科技(北京)有限公司 The Small object identifying system of view-based access control model
CN111160291A (en) * 2019-12-31 2020-05-15 上海易维视科技有限公司 Human eye detection method based on depth information and CNN
CN111382592A (en) * 2018-12-27 2020-07-07 杭州海康威视数字技术股份有限公司 Living body detection method and apparatus
CN111626123A (en) * 2020-04-24 2020-09-04 平安国际智慧城市科技股份有限公司 Video data processing method and device, computer equipment and storage medium
CN111898641A (en) * 2020-07-01 2020-11-06 中国建设银行股份有限公司 Target model detection device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN112257674B (en) 2022-05-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant