CN116543135B - Image data acquisition method and system based on complex scene - Google Patents

Image data acquisition method and system based on complex scene

Info

Publication number
CN116543135B
CN116543135B
Authority
CN
China
Prior art keywords
acquisition
matrix
image data
scene
adjacent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310559121.9A
Other languages
Chinese (zh)
Other versions
CN116543135A (en)
Inventor
林超
尹康
周彤
赵欣阳
邢嘉城
杨天宇
何家辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Borui Xianglun Technology Development Co Ltd
Original Assignee
Beijing Borui Xianglun Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Borui Xianglun Technology Development Co Ltd filed Critical Beijing Borui Xianglun Technology Development Co Ltd
Priority to CN202310559121.9A priority Critical patent/CN116543135B/en
Publication of CN116543135A publication Critical patent/CN116543135A/en
Application granted granted Critical
Publication of CN116543135B publication Critical patent/CN116543135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V 10/00 Arrangements for image or video recognition or understanding (G — Physics; G06 — Computing; Calculating or Counting)
    • G06V 10/10 Image acquisition
    • G06V 10/16 Image acquisition using multiple overlapping images; image stitching
    • G06V 10/70 Recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Recognition or understanding using neural networks
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D — Climate change mitigation technologies in information and communication technologies)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method and system for image data acquisition in complex scenes. The method comprises: determining, based on a scene matrix, the association matrix corresponding to the acquisition region and its adjacent acquisition regions, where each element value of the association matrix equals the degree of association between the corresponding adjacent acquisition region and the current acquisition region; determining acquisition strategies for the acquisition region and the adjacent acquisition regions based on the association matrix, and acquiring according to those strategies to obtain the current-acquisition-region image and the adjacent-acquisition-region images. By splitting the acquisition range into independent, dynamically associated acquisition regions and describing them dynamically and numerically through the corresponding scene matrix and association matrix, the application simplifies a complex scene into finer-grained scenes associated with the acquisition regions. This forms the scene matrix, enables independent and differentiated acquisition, and reduces the storage and computation overhead that may arise downstream.

Description

Image data acquisition method and system based on complex scene
[Technical Field]
The application belongs to the technical field of image processing, and in particular relates to a method and system for image data acquisition in complex scenes.
[Background Art]
Image data acquisition is the process of centralizing, normalizing, and standardizing diverse image data according to defined principles and methods. With the rapid development of computer technology, artificial intelligence, big data, and the Internet of Things, image data acquisition must be combined with these technologies to support applications across many fields. For example, billions of devices are connected to the Internet of Things, and each device terminal constantly generates data. In image data transmission and processing, the data is first received, i.e., the collected data is aggregated; it is then transmitted to a central processing system (usually a server), where the required results are stored, analyzed, and processed; finally, the application sends the relevant operation instructions back to the devices. In the field of artificial intelligence, a small dataset can be handled very simply: each image is loaded and preprocessed individually, then fed to the neural network. For large-scale datasets, however, it is necessary to create a data generator that accesses only a portion of the dataset at a time and passes small batches of data to the network. In industrial control processes, for equipment terminals that must monitor changes in the control process, the intermediate data of the control process at each moment may be recorded, and the real-time data generated by the process may be collected, acquired, and processed.
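For example, a minimal Python sketch of such a data generator follows; the file list, batch size, and preprocessing are illustrative assumptions, not part of the application:

```python
import numpy as np
from PIL import Image

def batch_generator(image_paths, batch_size=32, size=(224, 224)):
    """Yield small batches so only part of the dataset is in memory at once."""
    for start in range(0, len(image_paths), batch_size):
        batch = []
        for path in image_paths[start:start + batch_size]:
            img = Image.open(path).convert("RGB").resize(size)
            batch.append(np.asarray(img, dtype=np.float32) / 255.0)  # scale to [0, 1]
        yield np.stack(batch)  # shape: (batch, height, width, 3)
```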
With the development of artificial intelligence, image data acquisition has become more challenging. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in ways similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since its birth, the theory and technology of artificial intelligence have matured and its application fields keep expanding. Over the last decade, computer vision has evolved into a key technology for many applications, replacing human supervision and monitoring. Computer vision uses a combination of techniques to analyze and understand image data on a computer. In monitoring and security applications, its main goal is to automate manual supervision. The ability to capture and digitize realistic scenes provides new opportunities for better and earlier threat detection, risk quantification, and real-time security assessment. The list of computer vision applications is growing rapidly, driven by new machine learning, high-performance computing, artificial intelligence, and Internet of Things technologies that make AI vision suites more powerful, flexible, and scalable. However, alongside these technological advances, the growth of image data has led to explosive growth in the amount of AI computation.
With the rapid development of these technologies, the application scenes of image data acquisition are increasingly complex. To control and analyze the resulting data more accurately, scenes must be discerned with greater sensitivity, so that ever more complex scenes can be recognized and more accurate image data acquired for subsequent analysis and processing. This trend brings two problems: the amount of data to be processed grows, and the scenes become more complex. How to perform efficient image data acquisition for an acquisition object in a complex scene, while making full use of the image data resources of every data bit, is the technical problem to be solved.
In the present application, the acquisition range is split into independent, dynamically associated acquisition regions, which are described dynamically and numerically through the corresponding scene matrix and association matrix. The complex scene is thereby simplified into finer-grained scenes associated with the acquisition regions, forming the scene matrix, enabling independent and differentiated acquisition, and reducing the storage and computation overhead that may arise downstream.
[Summary of the Application]
In order to solve the above problems in the prior art, the present application provides a complex-scene-based image data acquisition method and system, wherein the method comprises:
Step S1: determining, for an acquisition object, the acquisition region of the image data and its adjacent acquisition regions; wherein: the acquisition region is the acquisition region with the highest degree of association with the acquisition object; an adjacent acquisition region is an acquisition region that is spatially adjacent to the acquisition region and associated with the acquisition object;
Step S2: determining the scene of the acquisition region and the scenes of the adjacent acquisition regions, and constructing a scene matrix based on those scenes. Specifically: each scene is identified by a scene vector, and each element of the scene vector is an acquisition-region-related parameter; each element of the vector is mapped to an enumeration type; a digest value of the vector is then computed such that the digest value of each vector is unique, and the element of the scene matrix corresponding to the acquisition region is set to that digest value; wherein: each element of the scene matrix corresponds to one acquisition region; the positional relationship of the matrix elements corresponds to the positional relationship between the acquisition regions; the numbers of rows and columns of the scene matrix are respectively equal to the number of acquisition regions covered, at the farthest, by the current acquisition region and all adjacent acquisition regions in the X-axis and Y-axis directions of the two-dimensional space;
Step S3: determining, based on the scene matrix, the association matrix corresponding to the acquisition region and the adjacent acquisition regions; each element value of the association matrix equals the degree of association between the corresponding adjacent acquisition region and the current acquisition region; the larger the degree of association, the larger the element value, and conversely the smaller the degree of association, the smaller the element value;
Determining the association matrix corresponding to the acquisition region and the adjacent acquisition regions based on the scene matrix specifically comprises: pre-training an artificial intelligence model with historical image data; inputting the scene matrix into the artificial intelligence model to obtain the association matrix identifier corresponding to the scene matrix, and looking up the corresponding association matrix based on that identifier;
Pre-training the artificial intelligence model with historical image data specifically comprises: pre-training the artificial intelligence model using, as the sample set, historically acquired image data together with the corresponding association matrices and their identifiers. First, the acquisition region and the adjacent acquisition regions in a sample are determined; then the scenes of the acquisition region and the adjacent acquisition regions in the sample, and the corresponding scene matrix, are determined; the corresponding association matrix and its identifier are set. For a sample, the element value at position (i, j) of the association matrix AM = [a_(i,j)] is set to a_(i,j) = Na_(i,j) / N, wherein: Na_(i,j) is the number of times the decision result corresponding to the adjacent acquisition region at position (i, j) in the historical image data is consistent with the decision result corresponding to the acquisition region; N is the number of decisions based on the decision results of the current acquisition region; the degree of association between the acquisition region and itself is 1;
Step S4: determining acquisition quality levels based on the acquisition resource requirement and the association matrix; the differentiated acquisition quality levels determined for the acquisition region and the adjacent acquisition regions satisfy the acquisition resource requirement and conform to the association matrix; conforming to the association matrix means that the quality levels determined for the current acquisition region and the adjacent acquisition regions are consistent with the degrees of association indicated by the corresponding element values of the association matrix;
Step S5: determining the acquisition strategies of the acquisition region and the adjacent acquisition regions based on the association matrix, and acquiring according to those strategies to obtain the current-acquisition-region image and the adjacent-acquisition-region images. Specifically: the quality level of the acquisition strategy corresponding to the acquisition region is set to Lc, and image data is acquired at the original resolution corresponding to quality level Lc; the quality level of the acquisition strategy corresponding to adjacent acquisition region (i, j) is set to La, and image data is acquired at the scaled resolution corresponding to quality level La; wherein: the scaled resolution is obtained by scaling according to the corresponding element of the association matrix;
Step S6: stitching the acquired current-acquisition-region image with the acquired adjacent-acquisition-region images to obtain the complete acquired image. Specifically: the current-region image data and the adjacent-region image data are stitched according to their spatial position correspondence to obtain the complete acquired image data.
Further, the artificial intelligence model is a neural network model.
Further, the association matrix is a 3×3 matrix.
Further, the association matrix element corresponding to the acquisition region is at position (2, 2).
Further, the adjacent acquisition region is an acquisition region within a preset distance range from the acquisition region.
A complex-scene-based image data acquisition system for implementing the above method, the system comprising: a control center and one or more acquisition devices; the control center and the acquisition devices are communicatively connected;
The acquisition device is an image acquisition device, configured to acquire image data based on an acquisition strategy and send the image data acquired for an acquisition region to the control center;
The control center is configured to execute the above complex-scene-based image data acquisition method: determining the acquisition device for each acquisition region and the acquisition strategy it should adopt, and sending the acquisition strategy to each acquisition device that needs to acquire image data.
Further, the control center includes one or more artificial intelligence analysis servers.
A big data analysis server comprising a processor coupled to a memory, the memory storing program instructions that when executed by the processor implement the complex scene-based image data acquisition method.
A computer readable storage medium comprising a program which, when run on a computer, causes the computer to perform the complex scene image data acquisition method.
An artificial intelligence analysis server comprising a processor coupled to a memory, the memory storing program instructions that when executed by the processor implement the complex scene-based image data acquisition method.
The beneficial effects of the application include:
(1) The acquisition range is split into independent, dynamically associated acquisition regions, described dynamically and numerically through the corresponding scene matrix and association matrix; the complex scene is simplified into finer-grained scenes associated with the acquisition regions, forming the scene matrix and reducing the internal and external condition factors that must be considered during data acquisition. This further establishes a basis for independent, differentiated acquisition per region according to the association matrix, reduces the size of the image data that must be acquired to cover the acquisition object, and lowers the storage and computation overhead that may arise downstream;
(2) The association matrix is combined with the acquisition quality levels, so that while complex-scene image acquisition is handled, both the layout of differentiated acquisition devices across the complex scene and the differentiated acquisition resource requirements of the requesting party are satisfied, and an optimal balance is reached through quantitative calculation;
(3) Based on association-coefficient-driven scaling and 0-value filling of the image data, the stitched image avoids problems such as reduced or failed image recognition while adding essentially no overhead to subsequent image processing, and it remains compatible with intelligent recognition models.
[Description of the Drawings]
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of this specification, illustrate the application and together with the description serve to explain it:
fig. 1 is a schematic diagram of a complex scene image data acquisition method according to the present application.
[Detailed Description]
The present application will now be described in detail with reference to the drawings and specific embodiments, wherein the exemplary embodiments and descriptions are only for the purpose of illustrating the application and are not to be construed as limiting it.
As shown in fig. 1, the present application proposes a complex scene image data acquisition method, which includes:
Step S1: determining, for an acquisition object, the acquisition region of the image data and its adjacent acquisition regions; wherein: the acquisition region is the acquisition region with the highest degree of association with the acquisition object; an adjacent acquisition region is an acquisition region that is spatially adjacent to the acquisition region and associated with the acquisition object;
Preferably: association with the acquisition object includes covering the acquisition object, in which case the degree of association corresponds to the degree of coverage; it may also relate to the occurrence of the event targeted by the acquisition, in which case it corresponds to the probability of that event, and so on. For example: the acquisition region covers part or all of the image acquisition object and is the core region for acquiring it; the positional relationship between the acquisition object and the acquisition region changes in real time as the scene changes, so the acquisition region determined for each acquisition may be the same or different;
Preferably: when the complex scene changes, step S1 is re-executed to re-determine the acquisition region for the acquisition object and its adjacent acquisition regions;
Preferably: the acquisition region is the acquisition region with the highest degree of coverage of the acquisition object; the degree of association may of course also be set by default in consideration of how frequently the acquisition object appears in the acquisition region;
Preferably: an adjacent acquisition region is spatially adjacent to the acquisition region and within a preset adjacency-stride range; for example: limiting the number of indirect adjacency strides to within 2 steps, and the like;
Preferably: an adjacent acquisition region is an acquisition region within a preset distance range of the acquisition region;
That is, by splitting the acquisition range, the application forms independent, dynamically associated acquisition regions, thereby finely dividing the complex scene into scenes and reducing the internal and external condition factors to be considered during data acquisition; it further establishes a basis for independent, differentiated acquisition per region according to the association matrix, reduces the size of the image data that must be acquired to cover the acquisition object, and lowers the storage and computation overhead that may arise downstream;
Preferably: the acquisition region contains an object of interest, namely the acquisition object, for example: equipment, operators, security monitoring targets, etc.; there may be one or more acquisition objects;
Step S2: determining the scene of the acquisition region and the scenes of the adjacent acquisition regions, and constructing a scene matrix based on those scenes. Specifically: each scene is identified by a scene vector, and each element of the scene vector is an acquisition-region-related parameter; each element of the vector is mapped to an enumeration type; a digest value of the vector is then computed such that the digest value of each vector is unique, and the element of the scene matrix corresponding to the acquisition region is set to that digest value; wherein: each element of the scene matrix corresponds to one acquisition region; the positional relationship of the matrix elements corresponds to the positional relationship between the acquisition regions; the numbers of rows and columns of the scene matrix are respectively equal to the number of acquisition regions covered, at the farthest, by the current acquisition region and all adjacent acquisition regions in the X-axis and Y-axis directions of the two-dimensional space. Obviously, the scene in which an acquisition region is located, and the scene matrix, change in real time; of course, the acquisition region and the adjacent acquisition regions, and their corresponding scene matrix, may be deployed in a similar manner in three-dimensional space;
Preferably: the acquisition-region-related parameters include one or more of: acquisition region attributes (size, position, type, and the like), parameters indicating the layout or appearance pattern of the acquisition object, acquisition region environment parameters, and parameters of the relationship between the acquisition object and the acquisition region;
Preferably: an adjacent acquisition region is an acquisition region directly or indirectly adjacent to the current acquisition region, where adjacency means spatial adjacency. That is, since the acquisition region and the adjacent acquisition regions are all related to the acquisition object, voids form at the unrelated positions, so the acquisition regions are not necessarily contiguous; the resulting acquisition voids within the complete acquisition range are filled with 0 values, which greatly reduces acquisition overhead;
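By way of illustration only, the following Python sketch shows one way to realize this scene-matrix construction; the parameter enumerations, the truncated SHA-256 digest, and the 3×3 grid are assumptions of the sketch, not requirements of the application:

```python
import hashlib

# Hypothetical enumerations for acquisition-region-related parameters;
# the actual parameter set is application-specific.
REGION_TYPE = {"entrance": 0, "corridor": 1, "hall": 2}
ENVIRONMENT = {"bright": 0, "dim": 1, "dark": 2}

def scene_digest(scene_vector):
    """Map an enumerated scene vector to a digest value (one matrix element)."""
    encoded = ",".join(str(int(v)) for v in scene_vector)
    # Truncated SHA-256: distinct vectors yield distinct digests in practice,
    # matching the requirement that each vector's digest value be unique.
    return int(hashlib.sha256(encoded.encode()).hexdigest()[:8], 16)

def build_scene_matrix(region_vectors, rows, cols):
    """region_vectors maps (row, col) grid positions to scene vectors; grid
    positions with no associated acquisition region stay 0 (acquisition voids)."""
    matrix = [[0] * cols for _ in range(rows)]
    for (r, c), vec in region_vectors.items():
        matrix[r][c] = scene_digest(vec)
    return matrix

# 3x3 neighborhood with the current acquisition region at the center (1, 1):
sm = build_scene_matrix({(1, 1): (REGION_TYPE["hall"], ENVIRONMENT["bright"]),
                         (0, 1): (REGION_TYPE["corridor"], ENVIRONMENT["dim"])},
                        rows=3, cols=3)
```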
Step S3: determining, based on the scene matrix, the association matrix corresponding to the acquisition region and the adjacent acquisition regions; each element value of the association matrix equals the degree of association between the corresponding adjacent acquisition region and the current acquisition region; the larger the degree of association, the larger the element value, and conversely the smaller the degree of association, the smaller the element value; when there is no association between the two, the corresponding element value is 0. The degree of association describes, under the current complex scene, how strongly the acquisition region and its associated acquisition object are related to an adjacent acquisition region and the same acquisition object;
Determining the association matrix corresponding to the acquisition region and the adjacent acquisition regions based on the scene matrix specifically comprises: pre-training an artificial intelligence model with historical image data; inputting the scene matrix into the artificial intelligence model to obtain the association matrix identifier corresponding to the scene matrix, and looking up the corresponding association matrix based on that identifier;
Pre-training the artificial intelligence model with historical image data specifically comprises: pre-training the artificial intelligence model using, as the sample set, historically acquired image data together with the corresponding association matrices and their identifiers. First, the acquisition region and the adjacent acquisition regions in a sample are determined; then the scenes of the acquisition region and the adjacent acquisition regions in the sample, and the corresponding scene matrix, are determined (in the same way as in the above steps); the corresponding association matrix and its identifier are set. For a sample, the element value at position (i, j) of the association matrix AM = [a_(i,j)] is set to a_(i,j) = Na_(i,j) / N, wherein: Na_(i,j) is the number of times the decision result corresponding to the adjacent acquisition region at position (i, j) in the historical image data is consistent with the decision result corresponding to the acquisition region; N is the number of decisions based on the decision results of the current acquisition region; the degree of association between the acquisition region and itself is 1;
Preferably: the artificial intelligence model is a neural network model;
Preferably: the decision result comprises one or a combination of a recognition result, a judgment result, a decision result, a monitoring result, a feedback result, an object identifier, and the like; a decision result is either a binary (0/1) result or a probability value;
Alternatively: Na_(i,j) is the number of times the adjacent acquisition region at position (i, j) captures (all of, or a preset part of) the acquisition object in the historical image data, and N is the number of times the current acquisition region captures (all of, or a preset part of) the acquisition object;
Preferably: to improve the usability of the historical big data, the scene matrix is set to a fixed-size matrix form, with the acquisition region placed at the center of the matrix; for example, for a 3×3 matrix, the adjacent acquired images are correspondingly arranged, divided, or processed according to their relationship to the acquisition region placed at the center;
In this way, the application forms independent, dynamically associated acquisition regions by splitting the acquisition range, and describes them dynamically and numerically through the corresponding scene matrix and association matrix; the complex scene is simplified into finer-grained scenes associated with the acquisition regions, forming the scene matrix and reducing the internal and external condition factors to be considered during data acquisition; this further establishes a basis for independent, differentiated acquisition per region according to the association matrix, reduces the size of the image data that must be acquired to cover the acquisition object, and lowers the storage and computation overhead that may arise downstream;
Preferably: the above steps further include an association matrix simplification step;
The association matrix simplification specifically comprises: when an element value of the association matrix is smaller than a preset minimum association value, that element value is set to 0; when the association value is 0, the image data of the corresponding acquisition region is not subsequently acquired, and the corresponding image data is directly filled with 0;
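By way of illustration, a minimal Python sketch of the sample association matrix a_(i,j) = Na_(i,j) / N and the simplification step follows; the 3×3 neighborhood, the agreement counts, and the minimum association value of 0.1 are assumptions of the sketch:

```python
import numpy as np

def association_matrix(agreement_counts, n_decisions, center=(1, 1), min_assoc=0.1):
    """Build AM = [a_(i,j)] with a_(i,j) = Na_(i,j) / N, then apply the
    simplification step: zero out elements below the minimum association value."""
    am = agreement_counts.astype(float) / n_decisions  # a_(i,j) = Na_(i,j) / N
    am[center] = 1.0                                   # self-association is 1
    am[am < min_assoc] = 0.0                           # simplification step
    return am

# Na_(i,j): times each adjacent region's decision result agreed with the
# current region's, out of N = 100 decisions (illustrative numbers).
counts = np.array([[30, 80,  5],
                   [90,  0, 60],
                   [ 2, 40, 10]])
am = association_matrix(counts, n_decisions=100)
# am[0, 2] and am[2, 0] fall below 0.1 and are zeroed: those regions are
# not acquired and their image data is filled with 0 values downstream.
```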
Step S4: determining acquisition quality levels based on the acquisition resource requirement and the association matrix; the differentiated acquisition quality levels determined for the acquisition region and the adjacent acquisition regions satisfy the acquisition resource requirement and conform to the association matrix; conforming to the association matrix means that the quality levels determined for the current acquisition region and the adjacent acquisition regions are consistent with the degrees of association indicated by the corresponding element values of the association matrix;
Step S4 specifically includes the following steps:
Step SA4: obtaining the acquisition resource requirement together with its upper limit, lower limit, and the corresponding acquisition target; when the acquisition target is the optimal-quality target, proceeding to step SB4; when the acquisition target is the most-economical target, proceeding to step SC4;
Step SB4: determining the acquisition quality levels by the upper-limit judgment mode of the acquisition resource requirement;
Step SB4 specifically comprises the following steps:
Step SB41: determining the size SZ_Lc of the image data acquired at quality level Lc for the acquisition region, wherein: c is the acquisition region position number; Lc is set to the highest quality level under initial conditions;
Preferably: the higher the quality level, the larger the acquired image data and the larger the acquisition resource requirement; conversely, the lower the quality level, the smaller the acquired image data and the smaller the acquisition resource requirement;
Step SB42: determining the size AJ_SZ_(i,j),La of the image data acquired for adjacent acquisition region (i, j) at quality level La; La is set to the highest quality level under initial conditions;
Step SB43: calculating the total image data size SZAll by the following formula:
SZAll = SZ_Lc + Σ_{(i,j)≠c} a_(i,j) × AJ_SZ_(i,j),La
Step SB44: judging whether the total image data size SZAll is larger than the upper limit of the acquisition resource requirement; if it is not larger, proceeding to step SD; otherwise, judging whether La is at its minimum: if La is at its minimum, proceeding to step SB45; if not, setting La = La − 1 and returning to step SB42;
Step SB45: judging whether Lc is at its minimum; if Lc is not at its minimum, setting Lc = Lc − 1 and returning to step SB41; if Lc is at its minimum, proceeding to step SD;
Step SC4: determining the acquisition quality levels by the lower-limit judgment mode of the acquisition resource requirement;
Step SC4 specifically comprises the following steps:
Step SC41: determining the size SZ_Lc of the image data acquired at quality level Lc for the acquisition region, wherein: c is the acquisition region position number; Lc is set to the lowest quality level under initial conditions;
Step SC42: determining the size AJ_SZ_(i,j),La of the image data acquired for adjacent acquisition region (i, j) at quality level La; La is set to the lowest quality level under initial conditions;
Step SC43: calculating the total image data size SZAll by the following formula:
SZAll = SZ_Lc + Σ_{(i,j)≠c} a_(i,j) × AJ_SZ_(i,j),La
Step SC44: judging whether the total image data size SZAll is larger than the lower limit of the acquisition resource requirement; if so, proceeding to step SD; otherwise, judging whether Lc is at its maximum: if Lc is at its maximum, proceeding to step SC45; if not, setting Lc = Lc + 1 and returning to step SC41;
Step SC45: judging whether La is at its maximum; if La is not at its maximum, setting La = La + 1 and returning to step SC42; if La is at its maximum, proceeding to step SD;
Step SD: recording the current values of Lc and La;
Combining the association matrix with the acquisition quality levels satisfies, while handling complex-scene image acquisition, both the layout of differentiated acquisition devices across the complex scene and the differentiated acquisition resource requirements of the requesting party, reaching an optimal balance through quantitative calculation;
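By way of illustration, one possible reading of steps SB41-SB45 as a search procedure is sketched below in Python; the integer level encoding, the precomputed size tables, and the choice not to reset La when Lc is lowered are assumptions of the sketch:

```python
import numpy as np

def total_size(sz, aj_sz, am, lc, la, center):
    """SZAll = SZ_Lc + sum over adjacent (i, j) of a_(i,j) * AJ_SZ_(i,j),La."""
    szall = sz[lc]
    for (i, j), a in np.ndenumerate(am):
        if (i, j) != center and a > 0:
            szall += a * aj_sz[(i, j)][la]
    return szall

def upper_limit_mode(sz, aj_sz, am, upper, levels, center=(1, 1)):
    """Steps SB41-SB45: start at the highest levels; lower La, then Lc,
    until the total size fits within the resource upper limit; step SD
    records the resulting Lc, La."""
    lc = la = levels - 1  # highest quality level under initial conditions
    while True:
        if total_size(sz, aj_sz, am, lc, la, center) <= upper:
            return lc, la             # step SD: record Lc, La
        if la > 0:
            la -= 1                   # step SB44: lower the adjacent level
        elif lc > 0:
            lc -= 1                   # step SB45: lower the current level
        else:
            return lc, la             # both at minimum: record anyway
```

The most-economical mode (steps SC41-SC45) is the mirror image: start at the lowest levels and raise Lc, then La, until SZAll exceeds the lower limit.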
Step S5: determining the acquisition strategies of the acquisition region and the adjacent acquisition regions based on the association matrix, and acquiring according to those strategies to obtain the current-acquisition-region image and the adjacent-acquisition-region images. Specifically: the quality level of the acquisition strategy corresponding to the acquisition region is set to Lc, and image data is acquired at the original resolution corresponding to quality level Lc; the quality level of the acquisition strategy corresponding to adjacent acquisition region (i, j) is set to La, and image data is acquired at the scaled resolution corresponding to quality level La; wherein: the scaled resolution is obtained by scaling according to the corresponding element of the association matrix;
Preferably: the scaled resolution Df_La is calculated by the following formula, wherein f_La is the original resolution corresponding to quality level La:
Df_La = a_(i,j) × f_La
Preferably: acquiring image data at the original resolution corresponding to quality level Lc specifically comprises: selecting, for the acquisition region, the acquisition device whose quality level is closest to Lc; when no acquisition device reaches level Lc, selecting the acquisition device with the highest level; when the quality levels of all acquisition devices are far higher than quality level Lc, selecting the acquisition device with the lowest quality level and reducing the size of the acquired image data to the target size. The image data acquisition for the adjacent acquisition regions, at quality level La and the corresponding scaled resolution, is performed similarly, reducing the acquired image data to AJ_SZ_(i,j),La;
Preferably: the reduction may comprise reducing the resolution, compressing the image, cropping to the part of the image data closest to the current acquisition region, and the like;
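By way of illustration, a brief Python sketch of the scaled-resolution formula Df_La = a_(i,j) × f_La and the closest-level device selection follows; the device records, the per-axis scaling, and the tie-breaking toward the higher level are assumptions of the sketch:

```python
def scaled_resolution(f_la, a_ij):
    """Df_La = a_(i,j) * f_La, applied here to each axis of the resolution."""
    return (int(f_la[0] * a_ij), int(f_la[1] * a_ij))

def pick_device(devices, target_level):
    """Select the acquisition device whose quality level is closest to the
    target level; ties are broken toward the higher level."""
    return min(devices, key=lambda d: (abs(d["level"] - target_level), -d["level"]))

camera = pick_device([{"id": 0, "level": 2}, {"id": 1, "level": 4}], target_level=3)
print(scaled_resolution((1920, 1080), a_ij=0.5))  # -> (960, 540)
```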
Step S6: stitching the acquired current-acquisition-region image with the acquired adjacent-acquisition-region images to obtain the complete acquired image. Specifically: the current-region image data and the adjacent-region image data are stitched according to their spatial position correspondence to obtain the complete acquired image data;
Preferably: the above steps further include: filling the spatial positions within the complete acquisition range that were determined to be neither the acquisition region nor an adjacent acquisition region with 0 values, to obtain the complete acquired image;
Step S6 specifically comprises the following steps:
Step S61: determining the spatial region size FSZ_c of the current acquisition region and the spatial region size FSZ_(i,j) of each adjacent acquisition region;
Step S62: calculating the size ratio between the two: R_(i,j) = FSZ_(i,j) / FSZ_c;
Step S63: calculating the splicing scaling multiple of each adjacent acquisition region: E_(i,j) = R_(i,j) × IMSZ_c / IMSZ_(i,j); wherein: IMSZ_c is the image data size of the current-acquisition-region image; IMSZ_(i,j) is the image data size of the adjacent-acquisition-region image; when the acquisition device is not limited, IMSZ_c = SZ_Lc and IMSZ_(i,j) = AJ_SZ_(i,j),La;
Step S64: uniformly inserting 0-valued image data of total size IMSZ_(i,j) × (E_(i,j) − 1) into the image data of adjacent acquisition region (i, j), so that each padded adjacent image occupies a data size proportional to its spatial region size;
Step S65: stitching the current-acquisition-region image and the uniformly padded adjacent-acquisition-region images according to their spatial position correspondence to obtain the complete acquired image;
By scaling the image data according to the association coefficients and filling with 0 values, the application avoids problems such as reduced or failed image recognition while adding essentially no overhead to subsequent processing of the stitched image; at the same time, compatibility is guaranteed when an intelligent recognition model is employed. The 0-value filling can also greatly increase the operating efficiency of existing hardware image processing units in many specific operation stages;
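By way of illustration, a Python sketch of the uniform 0-value insertion (step S64) and the stitching (step S65) follows; splitting the multiple E_(i,j) evenly across both axes and placing tiles on a regular grid are assumptions of the sketch, since the text does not fix how the inserted zeros are distributed:

```python
import numpy as np

def pad_uniform(img, e_ij):
    """Uniformly insert 0-valued pixels so the padded tile holds E_(i,j)
    times the original data, with zeros spread evenly rather than appended."""
    s = float(e_ij) ** 0.5                       # assumed equal scaling per axis
    h, w = img.shape[:2]
    H, W = int(round(h * s)), int(round(w * s))
    out = np.zeros((H, W) + img.shape[2:], img.dtype)
    rows = np.linspace(0, H - 1, h).astype(int)  # evenly spaced target rows
    cols = np.linspace(0, W - 1, w).astype(int)  # evenly spaced target columns
    out[np.ix_(rows, cols)] = img
    return out

def stitch(tiles, cell_h, cell_w):
    """Place each (row, col) tile on a canvas by spatial position; grid cells
    with no acquisition region remain 0-filled. cell_h and cell_w must be at
    least as large as any tile."""
    n_rows = 1 + max(r for r, _ in tiles)
    n_cols = 1 + max(c for _, c in tiles)
    canvas = np.zeros((n_rows * cell_h, n_cols * cell_w, 3), np.uint8)
    for (r, c), tile in tiles.items():
        th, tw = tile.shape[:2]
        canvas[r * cell_h: r * cell_h + th, c * cell_w: c * cell_w + tw] = tile
    return canvas
```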
based on the same inventive concept, as shown in fig. 1, the application provides a complex scene image data acquisition system, which comprises: a control center and one or more acquisition devices; the control center and the acquisition device are in communication connection with each other;
The acquisition device is an image acquisition device, configured to acquire image data based on an acquisition strategy and send the image data acquired for an acquisition region to the control center;
The control center is configured to execute the above complex-scene-based image data acquisition method: determining the acquisition device for each acquisition region and the acquisition strategy it should adopt, and sending the acquisition strategy to each acquisition device that needs to acquire image data;
Preferably: the control center comprises one or more artificial intelligence analysis servers; the artificial intelligence analysis servers contained in the control center are transparent to the outside;
Preferably: each artificial intelligence analysis server serves the terminals of one or more users; the requesting party's terminal sends to the control center an image data acquisition request comprising the acquisition object and the acquisition resource requirement;
Preferably: the requesting party's terminal is one of a computer, a mobile terminal, a monitoring institution server, and the like; it accesses the control center in a wired or wireless manner;
the terms "control center", "artificial intelligence analysis server", "data processing system", "mobile terminal" or "consumer mobile terminal" or "computer device" encompass all kinds of apparatus, devices and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or a plurality of or a combination of the foregoing. The apparatus can comprise dedicated logic circuits, such as an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). In addition to hardware, the apparatus may include code to create an execution environment for the computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of the foregoing. The apparatus and execution environment may implement a variety of different computing model infrastructures, such as web services, distributed computing, and grid computing infrastructures.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object or other unit suitable for use in a computing environment. The computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program, or in multiple coordinated files (e.g., files that store one or more modules, subroutines, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the application without departing from the spirit and scope of the application, which is intended to be covered by the claims.

Claims (10)

1. A complex-scene-based image data acquisition method, characterized by comprising:
Step S1: determining, for an acquisition object, the acquisition region of the image data and its adjacent acquisition regions; wherein: the acquisition region is the acquisition region with the highest degree of association with the acquisition object; an adjacent acquisition region is an acquisition region that is spatially adjacent to the acquisition region and associated with the acquisition object;
Step S2: determining the scene of the acquisition region and the scenes of the adjacent acquisition regions, and constructing a scene matrix based on those scenes. Specifically: each scene is identified by a scene vector, and each element of the scene vector is an acquisition-region-related parameter; each element of the vector is mapped to an enumeration type; a digest value of the vector is then computed such that the digest value of each vector is unique, and the element of the scene matrix corresponding to the acquisition region is set to that digest value; wherein: each element of the scene matrix corresponds to one acquisition region; the positional relationship of the matrix elements corresponds to the positional relationship between the acquisition regions; the numbers of rows and columns of the scene matrix are respectively equal to the number of acquisition regions covered, at the farthest, by the current acquisition region and all adjacent acquisition regions in the X-axis and Y-axis directions of the two-dimensional space;
Step S3: determining, based on the scene matrix, the association matrix corresponding to the acquisition region and the adjacent acquisition regions; each element value of the association matrix equals the degree of association between the corresponding adjacent acquisition region and the current acquisition region; the larger the degree of association, the larger the element value, and conversely the smaller the degree of association, the smaller the element value;
Determining the association matrix corresponding to the acquisition region and the adjacent acquisition regions based on the scene matrix specifically comprises: pre-training an artificial intelligence model with historical image data; inputting the scene matrix into the artificial intelligence model to obtain the association matrix identifier corresponding to the scene matrix, and looking up the corresponding association matrix based on that identifier;
Pre-training the artificial intelligence model with historical image data specifically comprises: pre-training the artificial intelligence model using, as the sample set, historically acquired image data together with the corresponding association matrices and their identifiers. First, the acquisition region and the adjacent acquisition regions in a sample are determined; then the scenes of the acquisition region and the adjacent acquisition regions in the sample, and the corresponding scene matrix, are determined; the corresponding association matrix and its identifier are set. For a sample, the element value at position (i, j) of the association matrix AM = [a_(i,j)] is set to a_(i,j) = Na_(i,j) / N, wherein: Na_(i,j) is the number of times the decision result corresponding to the adjacent acquisition region at position (i, j) in the historical image data is consistent with the decision result corresponding to the acquisition region; N is the number of decisions based on the decision results of the current acquisition region; the degree of association between the acquisition region and itself is 1;
Step S4: determining acquisition quality levels based on the acquisition resource requirement and the association matrix; the differentiated acquisition quality levels determined for the acquisition region and the adjacent acquisition regions satisfy the acquisition resource requirement and conform to the association matrix; conforming to the association matrix means that the quality levels determined for the current acquisition region and the adjacent acquisition regions are consistent with the degrees of association indicated by the corresponding element values of the association matrix;
Step S5: determining the acquisition strategies of the acquisition region and the adjacent acquisition regions based on the association matrix, and acquiring according to those strategies to obtain the current-acquisition-region image and the adjacent-acquisition-region images. Specifically: the quality level of the acquisition strategy corresponding to the acquisition region is set to Lc, and image data is acquired at the original resolution corresponding to quality level Lc; the quality level of the acquisition strategy corresponding to adjacent acquisition region (i, j) is set to La, and image data is acquired at the scaled resolution corresponding to quality level La; wherein: the scaled resolution is obtained by scaling according to the corresponding element of the association matrix;
Step S6: stitching the acquired current-acquisition-region image with the acquired adjacent-acquisition-region images to obtain the complete acquired image. Specifically: the current-region image data and the adjacent-region image data are stitched according to their spatial position correspondence to obtain the complete acquired image data.
2. The complex-scene-based image data acquisition method according to claim 1, wherein the artificial intelligence model is a neural network model.
3. The complex-scene-based image data acquisition method according to claim 2, wherein the association matrix is a 3×3 matrix.
4. The complex-scene-based image data acquisition method according to claim 3, wherein the association matrix element corresponding to the acquisition region is at position (2, 2).
5. The complex-scene-based image data acquisition method according to claim 4, wherein an adjacent acquisition region is an acquisition region within a preset distance range of the acquisition region.
6. A complex-scene-based image data acquisition system for implementing the method of any one of claims 1-5, characterized in that the system comprises: a control center and one or more acquisition devices; the control center and the acquisition devices are communicatively connected;
The acquisition device is an image acquisition device, configured to acquire image data based on an acquisition strategy and send the image data acquired for an acquisition region to the control center;
The control center is configured to execute the complex-scene-based image data acquisition method: determining the acquisition device for each acquisition region and the acquisition strategy it should adopt, and sending the acquisition strategy to each acquisition device that needs to acquire image data.
7. The complex-scene-based image data acquisition system of claim 6, wherein the control center comprises one or more artificial intelligence analysis servers.
8. A big data analysis server, comprising a processor coupled to a memory, the memory storing program instructions that, when executed by the processor, implement the complex-scene-based image data acquisition method of any one of claims 1-5.
9. A computer-readable storage medium comprising a program which, when run on a computer, causes the computer to perform the complex-scene-based image data acquisition method of any one of claims 1-5.
10. An artificial intelligence analysis server, comprising a processor coupled to a memory, the memory storing program instructions that, when executed by the processor, implement the complex-scene-based image data acquisition method of any one of claims 1-5.
CN202310559121.9A 2023-05-17 2023-05-17 Image data acquisition method and system based on complex scene Active CN116543135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310559121.9A CN116543135B (en) 2023-05-17 2023-05-17 Image data acquisition method and system based on complex scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310559121.9A CN116543135B (en) 2023-05-17 2023-05-17 Image data acquisition method and system based on complex scene

Publications (2)

Publication Number Publication Date
CN116543135A (en) 2023-08-04
CN116543135B (en) 2023-09-29

Family

ID=87446758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310559121.9A Active CN116543135B (en) 2023-05-17 2023-05-17 Image data acquisition method and system based on complex scene

Country Status (1)

Country Link
CN (1) CN116543135B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084281A (en) * 2019-03-31 2019-08-02 华为技术有限公司 Image generating method, the compression method of neural network and relevant apparatus, equipment
CN111768338A (en) * 2020-06-29 2020-10-13 广东小天才科技有限公司 Method and device for splicing test question images, electronic equipment and storage medium
CN115626177A (en) * 2022-10-13 2023-01-20 中汽创智科技有限公司 Vehicle track prediction method and device and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7049983B2 (en) * 2018-12-26 2022-04-07 株式会社日立製作所 Object recognition device and object recognition method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084281A (en) * 2019-03-31 2019-08-02 华为技术有限公司 Image generating method, the compression method of neural network and relevant apparatus, equipment
CN111768338A (en) * 2020-06-29 2020-10-13 广东小天才科技有限公司 Method and device for splicing test question images, electronic equipment and storage medium
CN115626177A (en) * 2022-10-13 2023-01-20 中汽创智科技有限公司 Vehicle track prediction method and device and computer storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Scene-based understanding of human behavior in smart home spaces; Tian Guohui; CAAI Transactions on Intelligent Systems (01); pp. 61-66 *

Also Published As

Publication number Publication date
CN116543135A (en) 2023-08-04

Similar Documents

Publication Publication Date Title
Verucchi et al. A systematic assessment of embedded neural networks for object detection
CN109543829A (en) Method and system for hybrid deployment of deep learning neural network on terminal and cloud
CN111507378A (en) Method and apparatus for training image processing model
CN111797983A (en) Neural network construction method and device
US11961004B2 (en) Predicting brain data using machine learning models
Hadidi et al. Collaborative execution of deep neural networks on internet of things devices
CN112492297B (en) Video processing method and related equipment
CN114091554A (en) Training set processing method and device
CN113408321A (en) Real-time target detection method and device for lightweight image and video data
CN113627422A (en) Image classification method and related equipment thereof
Castanyer et al. Integration of convolutional neural networks in mobile applications
CN116543135B (en) Image data acquisition method and system based on complex scene
CN114169506A (en) Deep learning edge computing system framework based on industrial Internet of things platform
CN111447592B (en) Method, equipment and storage medium for determining transmission resources
CN109858341B (en) Rapid multi-face detection and tracking method based on embedded system
Tu et al. On designing the adaptive computation framework of distributed deep learning models for Internet-of-Things applications
Niazi-Razavi et al. Toward real-time object detection on heterogeneous embedded systems
Song et al. Residual Squeeze-and-Excitation Network for Battery Cell Surface Inspection
CN114582009A (en) Monocular fixation point estimation method and system based on mixed attention mechanism
CN115995079A (en) Image semantic similarity analysis method and homosemantic image retrieval method
Hou et al. Analyzing the hardware-software implications of multi-modal dnn workloads using mmbench
CN116193274B (en) Multi-camera safety control method and system
Briley et al. Hardware Acceleration for Real-Time Wildfire Detection Onboard Drone Networks
JP2019125128A (en) Information processing device, control method and program
CN114400049B (en) Training method and device for peptide fragment quantitative model, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant