CN113408518B - Audio and video acquisition equipment control method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113408518B
CN113408518B (application CN202110770452.8A)
Authority
CN
China
Prior art keywords
feature vector
target user
audio
historical
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110770452.8A
Other languages
Chinese (zh)
Other versions
CN113408518A (en)
Inventor
黄崇辉
雷国宾
蒋英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shibang Communication Co ltd
Original Assignee
Shibang Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shibang Communication Co ltd filed Critical Shibang Communication Co ltd
Priority to CN202110770452.8A priority Critical patent/CN113408518B/en
Publication of CN113408518A publication Critical patent/CN113408518A/en
Application granted
Publication of CN113408518B publication Critical patent/CN113408518B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a control method and device of an audio and video acquisition device, an electronic device, and a storage medium. The method comprises the steps of: acquiring historical position data of a target user; constructing a position feature vector based on the historical position data; predicting, based on the position feature vector, a first position where the target user will appear; determining, according to the first position, at least one audio and video device for audio and video acquisition at the first position; and controlling the at least one audio and video device to perform audio and video acquisition of the target user. According to the historical position data of the user, the method and the device can predict in advance the position where the user is likely to appear, and then control the audio and video acquisition device corresponding to that position to start in advance to collect audio and video data, so that important data are not missed when the audio and video acquisition device collects data.

Description

Audio and video acquisition equipment control method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of audio and video acquisition, in particular to a control method and device of audio and video acquisition equipment, electronic equipment and a storage medium.
Background
At present, a plurality of audio and video acquisition devices are often arranged in important places to collect audio and video data. The data collected by each acquisition device needs to be stored, and in some cases the audio and video data collected within at least a preset time (for example, one or two years) may even be required to be kept. Because the number of acquisition devices is large and the storage time span is long, users often need to purchase large amounts of storage to keep the audio and video data.
Based on this, the prior art provides a solution: when there is no monitoring target in a monitoring area, the audio and video acquisition device in that area is switched off, and audio and video acquisition is performed only when a monitoring target appears in the area. In this way, unnecessary acquisition of audio and video data can be reduced, and the required storage space is reduced.
However, in the above manner, when a monitoring target appears in the monitoring area, the background often needs a long time to recognize its appearance. Moreover, at the moment of recognition the acquisition device is still switched off, and a further period of time elapses between starting the device and the device actually being able to collect data. The audio and video data are therefore collected with a lag, and important audio and video data may be missed.
Disclosure of Invention
The invention aims to provide a control method and device of audio and video acquisition equipment, electronic equipment and a storage medium, which can effectively acquire audio and video data and prevent important data from being omitted when the audio and video data are acquired.
The technical scheme of the invention is realized as follows:
the embodiment of the invention provides a control method of audio and video acquisition equipment, which comprises the following steps:
acquiring historical position data of a target user;
constructing a location feature vector based on the historical location data;
judging a first position where a target user appears based on the position feature vector;
determining at least one audio and video device for audio and video acquisition at the first position according to the first position;
and controlling at least one audio and video device to execute the operation of audio and video acquisition of the target user.
In one embodiment, a method for constructing a location feature vector based on historical location data includes:
constructing a first position feature vector, a second position feature vector and a third position feature vector based on the historical position data; wherein the first location feature vector characterizes a feature vector constructed based on historical location data of the target user over different time periods; the second position feature vector represents a feature vector constructed based on historical position data of the target user in different spaces; the third position feature vector represents a feature vector constructed based on the motion track of the target user; the motion trail is generated according to historical position data of the target user;
and determining the position feature vector of the target user by utilizing a first preset model according to the first position feature vector, the second position feature vector and the third position feature vector and the first confidence level, the second confidence level and the third confidence level respectively corresponding to the first position feature vector, the second position feature vector and the third position feature vector.
In one embodiment, a method for constructing a first location feature vector based on historical location data includes,
constructing a first position feature vector using the following equation (1):
[Equation (1) is rendered as an image in the original document and is not reproduced here.]
wherein P_1(u) denotes the first position feature vector; i denotes the i-th time dimension among all time dimensions I; |{<u,t,l> ∈ Cu_1 | t = t_{1,i}}| denotes the historical position data of the target user in the 1st time period of the i-th time dimension, and |{<u,t,l> ∈ Cu_n | t = t_{n,i}}| denotes the historical position data of the target user in the n-th time period of the i-th time dimension; σ_1 denotes the feature importance value of the historical position data of the target user in the 1st time period of the i-th time dimension, and σ_n denotes the feature importance value of the historical position data of the target user in the n-th time period of the i-th time dimension; T_M is a first characteristic parameter, D is a second characteristic parameter, and M_0 is a third characteristic parameter.
In one embodiment, a method for constructing a second location feature vector based on historical location data includes,
constructing a second position feature vector using the following equation (2):
[Equation (2) is rendered as an image in the original document and is not reproduced here.]
wherein P_2(u) denotes the second position feature vector; i denotes the i-th spatial dimension among all spatial dimensions I; |{<u,t,l> ∈ Cu_1 | l = l_{1,i}}| denotes the historical position data of the target user in the 1st spatial range of the i-th spatial dimension, and |{<u,t,l> ∈ Cu_n | l = l_{n,i}}| denotes the historical position data of the target user in the n-th spatial range of the i-th spatial dimension; β_1 denotes the feature importance value of the historical position data of the target user in the 1st spatial range of the i-th spatial dimension, and β_n denotes the feature importance value of the historical position data of the target user in the n-th spatial range of the i-th spatial dimension; T_M' is a first characteristic parameter, D' is a second characteristic parameter, and M_0' is a third characteristic parameter.
In one embodiment, a method for constructing a third location feature vector based on historical location data includes,
constructing a third position feature vector using the following equation (3):
[Equation (3) is rendered as an image in the original document and is not reproduced here.]
wherein P_3(u) denotes the third position feature vector; the trajectory symbol (rendered as an image in the original) denotes the motion track of the target user; the two position symbols (also rendered as images) denote the position of the target user at the 1st time and the position of the target user at the n-th time, respectively; T_M'' is a first characteristic parameter, D'' is a second characteristic parameter, and M_0'' is a third characteristic parameter.
In one embodiment, a method for determining a first position where a target user appears at a next time based on a position feature vector includes:
acquiring a current position vector of a target user at the current moment;
and determining a first position where the target user appears at the next moment by using a second preset model based on the current position vector and the position feature vector.
In one embodiment, a method for determining at least one audio-video device for audio-video capture at a first location according to the first location includes:
determining a first range of activities of a target user according to the first position;
acquiring an acquisition area corresponding to each audio and video device;
determining at least one audio and video device with an intersection between the acquisition area and the first range according to the first range and the acquisition area;
and determining at least one audio and video device with intersection between the acquisition area and the first range as the at least one audio and video device for audio and video acquisition of the first position.
The invention also provides a control device of the audio and video acquisition equipment, which comprises:
the acquisition module is used for acquiring historical position data of a target user;
a construction module for constructing a location feature vector based on historical location data;
the pre-judging module is used for judging a first position where a target user appears at the next moment based on the position feature vector;
the determining module is used for determining at least one audio/video device for audio/video acquisition on the first position according to the first position;
and the control module is used for controlling at least one audio and video device to start executing audio and video acquisition of a target user.
The present invention also provides an electronic device, comprising: a processor and a memory for storing a computer program capable of running on the processor; wherein
the processor is adapted to perform the steps of any of the methods described above when running the computer program.
The invention also provides a storage medium in which a computer program is stored, which, when executed by a processor, implements the steps of any of the above-described methods.
According to the control method and device of the audio and video acquisition equipment, the electronic equipment, and the storage medium provided by the invention, historical position data of a target user are acquired; a position feature vector is constructed based on the historical position data; a first position where the target user will appear is predicted based on the position feature vector; at least one audio and video device for audio and video acquisition at the first position is determined according to the first position; and the at least one audio and video device is controlled to perform audio and video acquisition of the target user. The method and the device can predict in advance the position where the user is likely to appear according to the historical position data of the user, and then control the audio and video acquisition device corresponding to that position to start in advance to collect audio and video data, so that important data are not missed when the audio and video acquisition device collects data.
Drawings
Fig. 1 is a schematic flow chart of a control method of an audio and video acquisition device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a control device of an audio and video acquisition device according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples.
The embodiment of the invention provides a control method of an audio and video acquisition device, which comprises the following steps of:
step 101: acquiring historical position data of a target user;
step 102: constructing a location feature vector based on the historical location data;
step 103: judging a first position where a target user appears based on the position feature vector;
step 104: determining at least one audio and video device for audio and video acquisition at the first position according to the first position;
step 105: and controlling at least one audio and video device to execute the operation of audio and video acquisition of the target user.
Specifically, the historical position data of the target user can be obtained from audio and video data previously collected by the audio and video acquisition devices. For example, suppose audio and video acquisition device A previously captured the target user at its capture position L1 at times t1, t2, ..., tn, and audio and video acquisition device B previously captured the target user at its capture position L2 at times t1', t2', ..., tn'. The historical position data of the target user can then be determined as (t1, L1), (t2, L1), ..., (tn, L1) and (t1', L2), (t2', L2), ..., (tn', L2).
In addition, because different users have different behavior tracks, different target users correspond to different historical position data.
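The merging of per-device capture records into one time-ordered history, as in the example above, can be sketched as follows. This is a minimal illustration; the `Sighting` record type and its field names are assumptions, not from the patent:

```python
from dataclasses import dataclass

# Hypothetical record of one recognized appearance of the target user;
# the type and field names are illustrative, not from the patent.
@dataclass(frozen=True)
class Sighting:
    device_id: str   # which capture device made the recording
    timestamp: float # capture time t
    location: str    # capture position L of the device

def historical_position_data(sightings):
    """Merge per-device sightings into time-ordered (t, L) pairs."""
    return sorted((s.timestamp, s.location) for s in sightings)

# Device A saw the user at L1, device B at L2, as in the example above.
records = [
    Sighting("A", 1.0, "L1"), Sighting("A", 2.0, "L1"),
    Sighting("B", 1.5, "L2"), Sighting("B", 2.5, "L2"),
]
history = historical_position_data(records)
# history == [(1.0, "L1"), (1.5, "L2"), (2.0, "L1"), (2.5, "L2")]
```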
After collecting the historical location data of the target user, a location feature vector of the target user may be constructed based on the collected historical location data.
Specifically, in one embodiment, constructing a location feature vector based on historical location data includes:
constructing a first position feature vector, a second position feature vector and a third position feature vector based on historical position data; wherein the first location feature vector characterizes a feature vector constructed based on historical location data of the target user over different time periods; the second position feature vector represents a feature vector constructed based on historical position data of the target user in different spaces; the third position feature vector represents a feature vector constructed based on the motion trail of the target user; the motion trail is generated according to historical position data of the target user;
and determining the position feature vector of the target user by utilizing a first preset model according to the first position feature vector, the second position feature vector and the third position feature vector and the first confidence level, the second confidence level and the third confidence level respectively corresponding to the first position feature vector, the second position feature vector and the third position feature vector.
Because different users tend to have different behavior habits, a first position feature vector representing the behavior habits of the target user can be determined from the time perspective, a second position feature vector representing the behavior habits of the target user can be determined from the space perspective, and a third position feature vector representing the behavior habits of the target user can be determined from the previous motion trajectory of the target user. The behavior habits of the target user are represented from the three angles, and the effect of comprehensive and accurate judgment can be achieved.
Here, since time may be divided into a plurality of dimensions, the first location feature vector characterizing the target user behavior habit may be determined from the plurality of time dimensions. For example, time can be divided into the following dimensions: day and night; spring, summer, autumn and winter; weekdays, holidays, and the like. Determining the first location feature vector characterizing the target user's behavior habits from different time dimensions can be more accurate.
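A minimal illustration of such a time-dimension split follows; the concrete boundaries (6:00-18:00 as "day", three-month seasons, Monday-Friday as weekdays) are assumptions for the sketch, not taken from the patent:

```python
from datetime import datetime

def time_dimensions(ts: datetime) -> dict:
    """Assign a timestamp to the day/night, season, and weekday dimensions."""
    season = ["winter", "spring", "summer", "autumn"][(ts.month % 12) // 3]
    return {
        "day_night": "day" if 6 <= ts.hour < 18 else "night",
        "season": season,
        "workday": "weekday" if ts.weekday() < 5 else "weekend",
    }

d = time_dimensions(datetime(2021, 7, 7, 9, 30))  # a Wednesday morning in July
# d == {"day_night": "day", "season": "summer", "workday": "weekday"}
```

Grouping the historical position records by these labels yields per-dimension groupings of the kind equation (1) operates on.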
Further, in an embodiment, constructing the first location feature vector based on the historical location data includes:
constructing a first location feature vector using the following equation (1):
[Equation (1) is rendered as an image in the original document and is not reproduced here.]
wherein P_1(u) denotes the first position feature vector; i denotes the i-th time dimension among all time dimensions I; |{<u,t,l> ∈ Cu_1 | t = t_{1,i}}| denotes the historical position data of the target user in the 1st time period of the i-th time dimension, and |{<u,t,l> ∈ Cu_n | t = t_{n,i}}| denotes the historical position data of the target user in the n-th time period of the i-th time dimension; σ_1 denotes the feature importance value of the historical position data of the target user in the 1st time period of the i-th time dimension, and σ_n denotes the feature importance value of the historical position data of the target user in the n-th time period of the i-th time dimension; T_M is a first characteristic parameter, D is a second characteristic parameter, and M_0 is a third characteristic parameter.
Also, since the space may be divided into multiple dimensions, the second location feature vector characterizing the behavior habits of the target user may be determined from the multiple spatial dimensions. For example, space can be divided into the following dimensions: a work area and a rest area; loud and quiet locations; public places and personal places, etc. Determining the second location feature vector characterizing the target user behavior habits from different spatial dimensions can be more accurate.
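A parallel illustration for the spatial split: each known location is labeled along the work/rest, loud/quiet, and public/personal dimensions named above. The location names and labels are made-up examples:

```python
# Assumed per-location labels; in practice these would come from a site survey.
SPACE_DIMENSIONS = {
    "meeting_room": {"zone": "work", "noise": "loud",  "privacy": "public"},
    "dormitory":    {"zone": "rest", "noise": "quiet", "privacy": "personal"},
}

def space_dimensions(location: str) -> dict:
    """Look up the spatial-dimension labels of a location (empty if unknown)."""
    return SPACE_DIMENSIONS.get(location, {})

d2 = space_dimensions("meeting_room")
# d2["zone"] == "work"
```

Grouping the historical position records by these labels yields per-dimension groupings of the kind equation (2) operates on.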
Further, in an embodiment, constructing the second location feature vector based on the historical location data includes:
constructing a second position feature vector using the following equation (2):
[Equation (2) is rendered as an image in the original document and is not reproduced here.]
wherein P_2(u) denotes the second position feature vector; i denotes the i-th spatial dimension among all spatial dimensions I; |{<u,t,l> ∈ Cu_1 | l = l_{1,i}}| denotes the historical position data of the target user in the 1st spatial range of the i-th spatial dimension, and |{<u,t,l> ∈ Cu_n | l = l_{n,i}}| denotes the historical position data of the target user in the n-th spatial range of the i-th spatial dimension; β_1 denotes the feature importance value of the historical position data of the target user in the 1st spatial range of the i-th spatial dimension, and β_n denotes the feature importance value of the historical position data of the target user in the n-th spatial range of the i-th spatial dimension; T_M' is a first characteristic parameter, D' is a second characteristic parameter, and M_0' is a third characteristic parameter.
Besides analyzing the behavior habits of the target user from time and space, the past behavior habits of the target user can be analyzed from the past motion trajectory of the target user, and a third position feature vector representing the behavior habits of the target user is determined.
In one embodiment, constructing a third location feature vector based on the historical location data includes:
constructing a third position feature vector using the following equation (3):
[Equation (3) is rendered as an image in the original document and is not reproduced here.]
wherein P_3(u) denotes the third position feature vector; the trajectory symbol (rendered as an image in the original) denotes the motion track of the target user; the two position symbols (also rendered as images) denote the position of the target user at the 1st time and the position of the target user at the n-th time, respectively; T_M'' is a first characteristic parameter, D'' is a second characteristic parameter, and M_0'' is a third characteristic parameter.
The first position feature vector, the second position feature vector, and the third position feature vector characterize the behavior habits of the target user from the angles of time, space, and motion track. These three vectors can therefore be used to accurately predict the first position where the user is likely to appear at the next moment, so that the audio and video acquisition device covering the first position is started in advance. When the target user appears, audio and video data can then be collected in time, and important audio and video data are not missed.
After the first position feature vector, the second position feature vector, and the third position feature vector are obtained, the first confidence level, the second confidence level, and the third confidence level respectively corresponding to them are obtained, and the position feature vector of the target user is then determined by using the three vectors, the three confidence levels, and the first preset model.
Here, the first confidence level, the second confidence level, the third confidence level, and the first preset model may be obtained by model training on historical data samples. After training, they can be used to determine the position feature vector of the target user. The first preset model may be a neural network model.
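The first preset model itself is learned from data and its exact form is not given in the patent. Purely as an illustrative stand-in, a confidence-weighted linear fusion of the three vectors might look like this (the linear form is an assumption, not the patent's trained model):

```python
def fuse_position_vectors(p1, p2, p3, c1, c2, c3):
    """Confidence-weighted fusion of the three position feature vectors.

    Each output component is the confidence-weighted average of the
    corresponding components of P1(u), P2(u), and P3(u).
    """
    total = c1 + c2 + c3
    return [(c1 * a + c2 * b + c3 * c) / total
            for a, b, c in zip(p1, p2, p3)]

# Trajectory-based habits (p3) weighted least in this made-up example.
v = fuse_position_vectors([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], 0.5, 0.25, 0.25)
# v == [0.75, 0.5]
```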
Because the behavior of the user has a certain rule, the first position of the target user appearing at the next moment can be well predicted by using the past position data of the target user for prediction.
In one embodiment, the determining a first position where the target user appears at the next moment based on the position feature vector includes:
acquiring a current position vector of a target user at the current moment;
and determining a first position where the target user appears at the next moment by utilizing a second preset model based on the current position vector and the position feature vector.
Here, the second preset model may also be a neural network model.
After the current position vector and the position feature vector of the target user are input into the second preset model, the second preset model can output a first position which is possibly appeared by the target user at the next moment. The second preset model can realize higher accuracy of the prediction result through training.
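The second preset model is likewise a trained neural network. As a hand-rolled stand-in only, the sketch below combines the current position vector with the habit vector and picks the nearest of a set of candidate positions; `candidate_positions` and its reference vectors are assumptions for the illustration:

```python
import math

def predict_first_position(current_vec, feature_vec, candidate_positions):
    """Return the candidate position whose reference vector is closest
    to the element-wise sum of the current and habit vectors."""
    query = [a + b for a, b in zip(current_vec, feature_vec)]
    return min(candidate_positions,
               key=lambda name: math.dist(query, candidate_positions[name]))

pos = predict_first_position(
    [0.2, 0.1],                      # current position vector
    [0.7, 0.1],                      # position feature vector (habits)
    {"lobby": [1.0, 0.2], "office": [0.0, 1.0]},
)
# pos == "lobby"
```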
Further, in an embodiment, determining, according to the first location, at least one audio/video device for performing audio/video capture on the first location includes:
determining a first range of activities of a target user according to the first position;
acquiring an acquisition area corresponding to each audio and video device;
determining at least one audio and video device with an intersection of the acquisition area and the first range according to the first range and the acquisition area;
and determining at least one audio and video device with intersection between the acquisition area and the first range as the at least one audio and video device for audio and video acquisition of the first position.
Here, after the first position is determined, the first range in which the target user performs activities may be determined according to a preset range. For example, the range within 2 meters around the first position may be set as the first range. The first range may also be determined based on the actual scene. For example, if the first position is on a sofa, the entire area around the sofa may be taken as the first range.
After the first range and the acquisition area corresponding to each audio and video device are determined, the at least one audio and video device whose acquisition area intersects the first range is used as the at least one audio and video device for audio and video acquisition at the first position, and that device is started in advance to collect audio and video data.
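The intersection test between the first range and each device's acquisition area can be sketched with circular regions. The circular model, device names, and coordinates are assumptions; real acquisition areas depend on device placement and field of view:

```python
import math

def devices_covering(first_pos, activity_radius, devices):
    """Select devices whose circular capture area intersects the first
    range (a circle of activity_radius around first_pos).

    Two circles intersect when the distance between their centers is at
    most the sum of their radii.
    """
    return [dev for dev, (center, r) in devices.items()
            if math.dist(first_pos, center) <= activity_radius + r]

selected = devices_covering(
    (0.0, 0.0), 2.0,                          # first range: 2 m around (0, 0)
    {"cam_a": ((1.0, 1.0), 1.0),              # capture radius 1 m
     "cam_b": ((10.0, 0.0), 2.0)},
)
# selected == ["cam_a"]
```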
The control method of the audio and video acquisition equipment provided by the embodiment of the invention is used for acquiring historical position data of a target user; constructing a location feature vector based on the historical location data; judging a first position where a target user appears based on the position feature vector; determining at least one audio/video device for audio/video acquisition at the first position according to the first position; and controlling at least one audio and video device to execute the operation of audio and video acquisition of the target user. According to the scheme of the embodiment of the invention, the position of the user which is possibly generated can be pre-judged in advance according to the historical position data of the user, so that the audio and video acquisition equipment corresponding to the position is controlled to be started in advance to acquire the audio and video data, and important data are prevented from being omitted when the audio and video acquisition equipment acquires the data.
In order to implement the method of the embodiment of the present invention, an embodiment of the present invention further provides a control device for an audio and video acquisition device, which is disposed on an electronic device. As shown in fig. 2, the control device 200 of the audio and video acquisition device includes: an acquisition module 201, a construction module 202, a prejudgment module 203, a determination module 204, and a control module 205; wherein
an obtaining module 201, configured to obtain historical location data of a target user;
a construction module 202, configured to construct a location feature vector based on the historical location data;
the prejudging module 203 is configured to judge a first position where the target user appears based on the position feature vector;
the determining module 204 is configured to determine, according to the first location, at least one audio/video device for performing audio/video acquisition on the first location;
the control module 205 is configured to control at least one audio/video device to perform an operation of audio/video capture of a target user.
In actual application, the obtaining module 201, the constructing module 202, the prejudging module 203, the determining module 204, and the controlling module 205 may be implemented by a processor in a control device of an audio/video acquisition device.
It should be noted that the division into the program modules above is only an example. In practical applications, the processing may be distributed to different program modules as needed; that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above. In addition, the apparatus provided in the above embodiment and the method embodiments belong to the same concept; its specific implementation is described in the method embodiments and is not repeated here.
Based on the hardware implementation of the program module, in order to implement the method according to the embodiment of the present invention, an electronic device (computer device) is also provided in the embodiment of the present invention. Specifically, in one embodiment, the computer device may be a terminal, and its internal structure diagram may be as shown in fig. 3. The computer apparatus includes a processor a01, a network interface a02, a display screen a04, an input device a05, and a memory (not shown in the figure) connected through a system bus. Wherein the processor a01 of the computer device is arranged to provide computing and control capabilities. The memory of the computer apparatus includes an internal memory a03 and a nonvolatile storage medium a06. The nonvolatile storage medium a06 stores an operating system B01 and a computer program B02. The internal memory a03 provides an environment for running the operating system B01 and the computer program B02 in the nonvolatile storage medium a06. The network interface a02 of the computer apparatus is used for communicating with an external terminal through a network connection. The computer program is executed by the processor a01 to implement the method of any of the above embodiments. The display screen a04 of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device a05 of the computer device may be a touch layer covered on the display screen, a key, a trackball or a touch pad arranged on a casing of the computer device, or an external keyboard, a touch pad or a mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The device provided by the embodiment of the present invention includes a processor, a memory, and a program stored in the memory and capable of running on the processor, and when the processor executes the program, the method according to any one of the embodiments described above is implemented.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
It will be appreciated that the memory of embodiments of the invention may be volatile memory, nonvolatile memory, or both. The nonvolatile memory may be Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), ferromagnetic random access memory (FRAM), flash memory, magnetic surface memory, optical disc, or Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. Volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory described in embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus comprising the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A control method for an audio/video acquisition device is characterized by comprising the following steps:
acquiring historical position data of a target user;
constructing a location feature vector based on the historical location data;
predicting, based on the position feature vector, a first position where the target user will appear;
determining at least one audio and video device for audio and video acquisition at the first position according to the first position;
controlling the at least one audio and video device to execute the operation of audio and video acquisition of the target user;
the method for constructing the position feature vector based on the historical position data comprises the following steps:
constructing a first position feature vector, a second position feature vector and a third position feature vector based on the historical position data; wherein the first location feature vector characterizes a feature vector constructed based on historical location data of the target user over different time periods; the second location feature vector characterizes a feature vector constructed based on historical location data of the target user in a different space; the third position feature vector represents a feature vector constructed based on the motion trail of the target user; wherein the motion trail is generated according to historical position data of the target user;
determining a position feature vector of the target user by utilizing a first preset model according to the first position feature vector, the second position feature vector and the third position feature vector and a first confidence level, a second confidence level and a third confidence level which respectively correspond to the first position feature vector, the second position feature vector and the third position feature vector; the first confidence level, the second confidence level, the third confidence level and the first preset model are obtained after model training is carried out on past data samples.
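Claim 1 fuses the three feature vectors into one position feature vector using trained confidence levels and a "first preset model" that the claim does not specify. As a minimal sketch of the fusion step only, assuming a confidence-weighted average stands in for that model (the function name and weighting rule are illustrative, not from the patent):

```python
import numpy as np

def combine_location_features(p1, p2, p3, c1, c2, c3):
    """Fuse the three position feature vectors, weighting each by its
    trained confidence level. The claim's 'first preset model' is not
    specified, so a confidence-weighted average stands in for it here."""
    weights = np.array([c1, c2, c3], dtype=float)
    weights /= weights.sum()                 # normalize the confidence levels
    return weights @ np.stack([p1, p2, p3])  # weighted sum over the 3 vectors

# Example: three 4-dimensional feature vectors, confidences 0.5 / 0.3 / 0.2
fused = combine_location_features(
    np.ones(4), np.zeros(4), np.full(4, 2.0), 0.5, 0.3, 0.2)
```

In practice the claim states that the weights (confidence levels) and the model itself come out of training on past data samples, so the fixed average above only marks where that learned component would sit.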
2. The method of claim 1, wherein the method of constructing the first position feature vector based on the historical position data comprises:
constructing the first position feature vector using the following equation (1):

(Equation (1) is presented as an image in the granted publication and is not reproduced here.)

wherein P_1(u) denotes the first position feature vector; i denotes the i-th time dimension of all time dimensions I; <u,t,l> denotes a historical position record; |{<u,t,l> ∈ Cu_1 | t = t_{1,i}}| denotes the historical position data of the target user u in the i-th time dimension for the 1st time period at time t; |{<u,t,l> ∈ Cu_n | t = t_{n,i}}| denotes the historical position data of the target user u in the i-th time dimension for the n-th time period at time t; σ_1 denotes the feature importance value of the historical position data of the target user in the i-th time dimension for the 1st time period; σ_n denotes the feature importance value of the historical position data of the target user in the i-th time dimension for the n-th time period; T_M is a first characteristic parameter, D is a second characteristic parameter, and M_0 is a third characteristic parameter.
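Equation (1) itself appears only as an image in the granted publication. From the textual description alone, the first feature vector counts the user's historical records per time period within each time dimension, weighted by the importance values σ. The sketch below is one plausible reading under that assumption only; it omits the normalization parameters T_M, D, and M_0, and all names are illustrative:

```python
def first_position_feature(records, periods, sigmas):
    """records: list of (user, t, location) tuples; periods: list of
    predicates on t, one per time period; sigmas: feature importance
    value per period. Returns one sigma-weighted record count per period."""
    return [sigma * sum(1 for (_, t, _) in records if in_period(t))
            for in_period, sigma in zip(periods, sigmas)]

# Example: two time periods (morning / evening) over hourly timestamps
records = [("u", 9, "home"), ("u", 10, "home"), ("u", 20, "office")]
feature = first_position_feature(
    records,
    periods=[lambda t: t < 12, lambda t: t >= 12],
    sigmas=[1.0, 2.0])
```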
3. The method of claim 1, wherein the method of constructing the second position feature vector based on the historical position data comprises:
constructing the second position feature vector using the following equation (2):

(Equation (2) is presented as an image in the granted publication and is not reproduced here.)

wherein P_2(u) denotes the second position feature vector; i denotes the i-th spatial dimension of all spatial dimensions I; <u,t,l> denotes a historical position record; |{<u,t,l> ∈ Cu_1 | l = l_{1,i}}| denotes the historical position data of the target user u in the i-th spatial dimension within the 1st spatial range of position l; |{<u,t,l> ∈ Cu_n | l = l_{n,i}}| denotes the historical position data of the target user u in the i-th spatial dimension within the n-th spatial range of position l; β_1 denotes the feature importance value of the historical position data of the target user within the 1st spatial range in the i-th spatial dimension; β_n denotes the feature importance value of the historical position data of the target user within the n-th spatial range in the i-th spatial dimension; T_M' is a first characteristic parameter, D' is a second characteristic parameter, and M_0' is a third characteristic parameter.
4. The method of claim 1, wherein the method of constructing the third position feature vector based on the historical position data comprises:
constructing the third position feature vector using the following equation (3):

(Equation (3) is presented as an image in the granted publication and is not reproduced here.)

wherein P_3(u) denotes the third position feature vector; the next three symbols, also presented as images in the granted publication, denote respectively the motion trail of the target user, the position of the target user at the first moment, and the position of the target user at the n-th moment; T_M″ is a first characteristic parameter, D″ is a second characteristic parameter, and M_0″ is a third characteristic parameter.
5. The method of claim 1, wherein the determining the first location of the target user based on the location feature vector comprises:
acquiring a current position vector of the target user at the current moment;
and determining a first position of the target user at the next moment by utilizing a second preset model based on the current position vector and the position feature vector.
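Claim 5 predicts the first position at the next moment from the current position vector and the position feature vector via a "second preset model" that the claim leaves unspecified. A minimal placeholder for that step, assuming a simple similarity-plus-distance scoring rule over candidate positions (the rule, the 0.1 penalty weight, and all names are illustrative, not the patented model):

```python
import numpy as np

def predict_first_position(current_pos, location_feature, candidates):
    """Score each candidate position by the cosine similarity between its
    pseudo-feature and the fused position feature vector, penalized by its
    distance from the current position, and return the best candidate."""
    cur = np.asarray(current_pos, dtype=float)
    feat = np.asarray(location_feature, dtype=float)
    best_pos, best_score = None, -np.inf
    for pos, pos_feat in candidates:
        pos = np.asarray(pos, dtype=float)
        pos_feat = np.asarray(pos_feat, dtype=float)
        sim = feat @ pos_feat / (
            np.linalg.norm(feat) * np.linalg.norm(pos_feat) + 1e-9)
        score = sim - 0.1 * np.linalg.norm(pos - cur)  # distance penalty
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

# Two candidate positions, each with a 2-d pseudo-feature
nxt = predict_first_position(
    current_pos=(0, 1),
    location_feature=(1, 0),
    candidates=[((0, 0), (1, 0)), ((5, 5), (0, 1))])
```

The claim only fixes the inputs (current position vector, position feature vector) and the output (first position at the next moment); any trained regressor or classifier could occupy the same slot.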
6. The method of claim 1, wherein determining at least one audio-video device for audio-video capture of the first location based on the first location comprises:
determining a first range of activities of the target user according to the first position;
acquiring an acquisition area corresponding to each audio and video device;
determining at least one audio and video device with an intersection between the acquisition area and the first range according to the first range and the acquisition area;
and determining at least one audio/video device with intersection between the acquisition area and the first range as at least one audio/video device for audio/video acquisition of the first position.
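The device-selection step of claim 6 reduces to an intersection test between the user's activity range and each device's acquisition area. A short sketch, assuming both regions are circles (the claim does not fix their shape, and the device names and radii below are made up for the example):

```python
import math

def select_devices(first_position, activity_radius, devices):
    """Keep the audio/video devices whose circular acquisition area
    intersects the target user's circular activity range around the
    predicted first position."""
    px, py = first_position
    selected = []
    for name, (cx, cy), capture_radius in devices:
        # Two circles intersect iff their center distance is at most
        # the sum of their radii.
        if math.hypot(cx - px, cy - py) <= activity_radius + capture_radius:
            selected.append(name)
    return selected

# A nearby camera is selected; a distant one is not
chosen = select_devices(
    first_position=(0, 0),
    activity_radius=3.0,
    devices=[("cam_hall", (1, 1), 2.0), ("cam_gate", (10, 10), 1.0)])
```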
7. An audio/video acquisition device control apparatus, comprising:
the acquisition module is used for acquiring historical position data of a target user;
a construction module for constructing a location feature vector based on the historical location data;
the prejudging module is used for judging a first position where the target user appears based on the position feature vector;
the determining module is used for determining at least one audio and video device for audio and video acquisition of the first position according to the first position;
the control module is used for controlling the at least one audio and video device to start executing audio and video acquisition on the target user;
the constructing module is further specifically configured to construct a first position feature vector, a second position feature vector, and a third position feature vector based on the historical position data; wherein the first position feature vector characterizes a feature vector constructed based on historical position data of the target user over different time periods; the second position feature vector characterizes a feature vector constructed based on historical position data of the target user in different spaces; the third position feature vector characterizes a feature vector constructed based on the motion trail of the target user; wherein the motion trail is generated according to the historical position data of the target user; and to determine the position feature vector of the target user by utilizing a first preset model according to the first position feature vector, the second position feature vector, and the third position feature vector and the first confidence level, second confidence level, and third confidence level respectively corresponding to them; the first confidence level, the second confidence level, the third confidence level, and the first preset model are obtained after model training on past data samples.
8. An electronic device, comprising: a processor and a memory for storing a computer program capable of running on the processor; wherein
the processor is adapted to perform the steps of the method of any one of claims 1 to 6 when running the computer program.
9. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 6.
CN202110770452.8A 2021-07-06 2021-07-06 Audio and video acquisition equipment control method and device, electronic equipment and storage medium Active CN113408518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110770452.8A CN113408518B (en) 2021-07-06 2021-07-06 Audio and video acquisition equipment control method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110770452.8A CN113408518B (en) 2021-07-06 2021-07-06 Audio and video acquisition equipment control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113408518A CN113408518A (en) 2021-09-17
CN113408518B true CN113408518B (en) 2023-04-07

Family

ID=77685413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110770452.8A Active CN113408518B (en) 2021-07-06 2021-07-06 Audio and video acquisition equipment control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113408518B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016311B (en) * 2022-07-06 2023-05-23 慕思健康睡眠股份有限公司 Intelligent device control method, device, equipment and storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4346993B2 (en) * 2003-08-26 2009-10-21 富士重工業株式会社 Vehicle guidance control device
JP2005354367A (en) * 2004-06-10 2005-12-22 Matsushita Electric Ind Co Ltd Network camera system
WO2016151925A1 (en) * 2015-03-26 2016-09-29 富士フイルム株式会社 Tracking control device, tracking control method, tracking control program, and automatic tracking/image-capturing system
CN104994289B (en) * 2015-06-30 2017-11-24 广东欧珀移动通信有限公司 A kind of big visual angle camera starts method and system, camera terminal
CN106534782A (en) * 2016-11-14 2017-03-22 刘兰平 Method and device for tracking full-standard mobile signal guide videos
CN106657775B (en) * 2016-11-28 2020-10-16 浙江宇视科技有限公司 Tracking monitoring method, device and system
CN108055501A (en) * 2017-11-22 2018-05-18 天津市亚安科技有限公司 A kind of target detection and the video monitoring system and method for tracking
AU2019200854A1 (en) * 2019-02-07 2020-08-27 Canon Kabushiki Kaisha A method and system for controlling a camera by predicting future positions of an object
CN112207812A (en) * 2019-07-12 2021-01-12 阿里巴巴集团控股有限公司 Device control method, device, system and storage medium
KR102068800B1 (en) * 2019-08-09 2020-01-22 국방과학연구소 Remote active camera and method for controlling thereof
CN112750301A (en) * 2019-10-30 2021-05-04 杭州海康威视***技术有限公司 Target object tracking method, device, equipment and computer readable storage medium
CN110928993B (en) * 2019-11-26 2023-06-30 重庆邮电大学 User position prediction method and system based on deep cyclic neural network
CN110968098B (en) * 2019-12-13 2020-12-11 珠海大横琴科技发展有限公司 Method, device and system for monitoring vehicles in transit of electronic purse net and storage medium
CN111698467B (en) * 2020-05-08 2022-05-06 北京中广上洋科技股份有限公司 Intelligent tracking method and system based on multiple cameras
CN112383714A (en) * 2020-11-13 2021-02-19 珠海大横琴科技发展有限公司 Target object tracking method and device

Also Published As

Publication number Publication date
CN113408518A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
US11228653B2 (en) Terminal, cloud apparatus, driving method of terminal, method for processing cooperative data, computer readable recording medium
KR102531593B1 (en) Method and apparatus, device and readable storage medium for predicting power consumption
KR20180119674A (en) User credit evaluation method and apparatus, and storage medium
EP3133502B1 (en) Terminal device and method for cooperatively processing data
KR101986307B1 (en) Method and system of attention memory for locating an object through visual dialogue
CN110704741A (en) Interest point prediction method based on space-time point process
CN113408518B (en) Audio and video acquisition equipment control method and device, electronic equipment and storage medium
US20140278338A1 (en) Stream input reduction through capture and simulation
US20190297473A1 (en) Data usage recommendation generator
CN110659435A (en) Page data acquisition processing method and device, computer equipment and storage medium
US8417811B1 (en) Predicting hardware usage in a computing system
CN111291278A (en) Method and device for calculating track similarity, storage medium and terminal
CN112801156B (en) Business big data acquisition method and server for artificial intelligence machine learning
WO2022022059A1 (en) Context aware anomaly detection
CN109213906B (en) Session duration calculation method, device and system
CN105843607A (en) Information displaying method and device
CN117194792B (en) Child drawing recommendation method and system based on role prediction
Gaska et al. MLStar: machine learning in energy profile estimation of android apps
CN113420844B (en) Object defect detection method and device, electronic equipment and storage medium
US20230030217A1 (en) Systems and methods for organizing displays of software applications running on a device
CN114756710A (en) Video tag obtaining method and device, electronic equipment and storage medium
CN117474603A (en) Popularization information generation method and device, storage medium and electronic equipment
CN114897557A (en) Method, device, equipment and medium for predicting loss of user
CN115422996A (en) Advertisement exception equipment processing method and device, electronic equipment and storage medium
CN113759735A (en) Method and device for recommending electric appliance operation parameter values, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant