CN113344761A - Poverty-alleviation object determining method and device - Google Patents

Poverty-alleviation object determining method and device Download PDF

Info

Publication number
CN113344761A
CN113344761A CN202110852734.2A
Authority
CN
China
Prior art keywords
data
determining
research
condition
poverty
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110852734.2A
Other languages
Chinese (zh)
Inventor
刘彦随
李裕瑞
黄云鑫
王黎明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Geographic Sciences and Natural Resources of CAS
Original Assignee
Institute of Geographic Sciences and Natural Resources of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Geographic Sciences and Natural Resources of CAS filed Critical Institute of Geographic Sciences and Natural Resources of CAS
Priority to CN202110852734.2A priority Critical patent/CN113344761A/en
Publication of CN113344761A publication Critical patent/CN113344761A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and device for determining poverty-alleviation objects. The method comprises the following steps: acquiring research data of a research object; performing fusion processing on the research data to obtain multi-dimensional feature data of the research object; calculating a poverty obstacle degree index of the research object according to the multi-dimensional feature data; and, when the poverty obstacle degree index reaches a preset threshold, determining the research object to be a poverty-alleviation object. With the method provided by the invention, the poverty condition of the research object is modeled from several different angles and analyzed objectively and automatically, avoiding interference from human factors; in addition, erroneous data are identified through cross-validation of data from different sources, improving data accuracy and thereby achieving objective, accurate and rapid identification of poor households.

Description

Poverty-alleviation object determining method and device
Technical Field
The invention relates to the technical field of data processing, in particular to a poverty-alleviation object determining method and device.
Background
Poverty eradication is currently an important political task in China. Since reform and opening-up, China's poverty-alleviation and development work has passed through four stages in turn: rural relief-based poverty alleviation, development-oriented poverty alleviation, comprehensive poverty alleviation, and targeted poverty alleviation. Targeted poverty alleviation refers to a poverty-alleviation mode that applies scientific and effective procedures to precisely identify, precisely assist and precisely manage poverty-alleviation objects, according to the environments of different poverty-stricken areas and the conditions of different poor farming households. The main unit of poverty alleviation is the farming household, and poor households in particular need more attention and support. At the present stage, information such as income, labor force and housing of candidate objects is obtained through field investigation; the poverty-degree grade of a household is analyzed and determined through manual judgment; and corresponding project arrangements and resource and fund allocation decisions are made according to the poverty-degree grade. However, the existing investigation method can only obtain a qualitative degree of poverty: it lacks quantitative analysis, is easily disturbed by subjectivity (for example, coaching of farmers), and its results have poor reference value. Meanwhile, the existing method can only learn of poverty from outcomes; it cannot accurately identify causes of poverty, such as illness or low education level, and cannot carry out key monitoring of families with potential risks.
Therefore, how to provide a fast and accurate method for identifying poor households based on information technology has become a technical problem to be solved urgently.
Disclosure of Invention
The invention provides a poverty-alleviation object determining method and device, which are used for quickly and accurately identifying poor households and selecting key poverty-alleviation sampling objects.
In order to solve the above technical problem, embodiments of the present application adopt the following technical solution. The invention provides a poverty-alleviation object determining method, comprising the following steps:
acquiring research data of a research object, wherein the research data comprises questionnaire data, targeted poverty alleviation big data platform data, video data, audio data, photo data, GPS positioning data and remote sensing image data;
performing fusion processing on the research data to obtain multi-dimensional feature data of the research object, wherein the multi-dimensional feature data comprises at least one of the following scores: a family property score, a labor force score, a diet guarantee score, a medical consumption score, and an education guarantee score;
calculating a poverty obstacle degree index of the research object according to the multi-dimensional feature data;
and, when the poverty obstacle degree index reaches a preset threshold, determining the research object to be a poverty-alleviation object.
The invention has the beneficial effects that: by combining the questionnaire data, targeted poverty alleviation big data, video data, audio data, photo data, GPS positioning data and remote sensing image data, the poverty condition of the research object is modeled from several different angles, such as property conditions, labor conditions, education conditions and medical conditions; the poverty obstacle degree index is determined; and a research object reaching the preset threshold is determined to be a poverty-alleviation object. On this basis, a fast and accurate method for identifying poor households is provided using information technology, performing objective and automatic analysis and avoiding interference from human factors. In addition, erroneous data are identified through cross-validation of data from different sources, improving data accuracy and achieving objective, accurate and rapid identification of poor households.
In one embodiment, performing fusion processing on the research data to obtain the multi-dimensional feature data of the research object comprises:
acquiring the family annual income of the research object from the questionnaire data;
acquiring a vehicle image and a house image of the research object from the video data, and determining the vehicle condition and house condition of the research object's family according to the vehicle image and the house image;
and determining the family property score of the research object according to the family annual income, the vehicle condition and the house condition of the research object.
In one embodiment, performing fusion processing on the research data to obtain the multi-dimensional feature data of the research object further comprises:
determining the family members of the research object from the questionnaire data, and determining the age, health condition and education level of the family members from the questionnaire data;
acquiring family member images of the research object from the video data, and determining the family labor force condition and population burden condition of the research object according to the family member images;
and determining the labor force score of the research object according to the family labor force condition, the population burden condition and the education level of the research object.
In one embodiment, performing fusion processing on the research data to obtain the multi-dimensional feature data of the research object further comprises:
extracting a food image and a drinking water image of the research object from the video data;
determining the diet condition and drinking water condition of the research object according to the food image and the drinking water image;
and determining the diet guarantee score of the research object according to the diet condition and the drinking water condition.
In one embodiment, calculating the poverty obstacle degree index of the research object according to the multi-dimensional feature data comprises:
determining the poverty obstacle degree index of the research object according to the following formula:
P = Σ_{i=1}^{n} a_i · Σ_{j=1}^{m} w_{ij} · X_{ij}
wherein P is the poverty obstacle degree index, n is the number of dimensions of the feature data, m is the number of indices under the corresponding dimension, X_{ij} is the score corresponding to the j-th index of the i-th dimension, w_{ij} is a weight coefficient, and a_i is a dimension weight.
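The weighted double summation defined above can be sketched in a few lines of Python. The dimension count, weights and scores below are illustrative assumptions, not values disclosed in the patent:

```python
def poverty_obstacle_index(scores, weights, dim_weights):
    """Compute P = sum_i a_i * sum_j w_ij * X_ij.

    scores[i][j]  : normalized score X_ij of index j in dimension i
    weights[i][j] : index weight w_ij
    dim_weights[i]: dimension weight a_i
    """
    return sum(
        a * sum(w * x for w, x in zip(ws, xs))
        for a, ws, xs in zip(dim_weights, weights, scores)
    )

# Hypothetical example: two dimensions (e.g. family property, labor force),
# two indices each -- all numbers are made up for illustration.
scores = [[0.8, 0.6], [0.4, 0.9]]
weights = [[0.5, 0.5], [0.7, 0.3]]
dim_weights = [0.6, 0.4]
p = poverty_obstacle_index(scores, weights, dim_weights)
```

With these sample values, P = 0.6·(0.5·0.8 + 0.5·0.6) + 0.4·(0.7·0.4 + 0.3·0.9) = 0.64, which would then be compared against the preset threshold.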
The invention also provides a poverty-alleviation object determining device, comprising:
an acquisition module, configured to acquire research data of a research object, the research data comprising questionnaire data, targeted poverty alleviation big data platform data, video data, audio data, photo data, GPS positioning data and remote sensing image data;
a fusion module, configured to perform fusion processing on the research data to obtain multi-dimensional feature data of the research object, the multi-dimensional feature data comprising at least one of the following scores: a family property score, a labor force score, a diet guarantee score, a medical consumption score, and an education guarantee score;
a calculation module, configured to calculate a poverty obstacle degree index of the research object according to the multi-dimensional feature data;
and a determining module, configured to determine the research object to be a poverty-alleviation object when the poverty obstacle degree index reaches a preset threshold.
In one embodiment, the fusion module includes:
a first acquisition sub-module, configured to acquire the family annual income of the research object from the questionnaire data;
a second acquisition sub-module, configured to acquire a vehicle image and a house image of the research object from the video data, and to determine the vehicle condition and house condition of the research object's family according to the vehicle image and the house image;
and a first determining sub-module, configured to determine the family property score of the research object according to the family annual income, the vehicle condition and the house condition of the research object.
In one embodiment, the fusion module further includes:
a second determining sub-module, configured to determine the family members of the research object from the questionnaire data, and to determine the age, health condition and education level of the family members from the questionnaire data;
a third determining sub-module, configured to acquire family member images of the research object from the video data, and to determine the family labor force condition and population burden condition of the research object according to the family member images;
and a fourth determining sub-module, configured to determine the labor force score of the research object according to the family labor force condition, the population burden condition and the education level of the research object.
In one embodiment, the fusion module further includes:
an extraction sub-module, configured to extract a food image and a drinking water image of the research object from the video data;
a fifth determining sub-module, configured to determine the diet condition and drinking water condition of the research object according to the food image and the drinking water image;
and a sixth determining sub-module, configured to determine the diet guarantee score of the research object according to the diet condition and the drinking water condition.
In one embodiment, the calculation module includes:
a seventh determining sub-module, configured to determine the poverty obstacle degree index of the research object according to the following formula:
P = Σ_{i=1}^{n} a_i · Σ_{j=1}^{m} w_{ij} · X_{ij}
wherein P is the poverty obstacle degree index, n is the number of dimensions of the feature data, m is the number of indices under the corresponding dimension, X_{ij} is the score corresponding to the j-th index of the i-th dimension, w_{ij} is a weight coefficient, and a_i is a dimension weight.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a poverty-alleviation object determining method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the structure of data partitioning and matching search according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a self-organizing iterative flow according to an embodiment of the present invention;
fig. 4 is a block diagram of a poverty-alleviation object determining device according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 shows a poverty-alleviation object determining method according to an embodiment of the present invention; as shown in fig. 1, the method comprises the following steps S11-S14:
in step S11, research data of a research object is obtained, the research data including questionnaire data, targeted poverty alleviation big data platform data, video data, audio data, photo data, GPS positioning data, and remote sensing image data;
in step S12, fusion processing is performed on the research data to obtain multi-dimensional feature data of the research object, the multi-dimensional feature data including at least one of the following scores: a family property score, a labor force score, a diet guarantee score, a medical consumption score, and an education guarantee score;
in step S13, the poverty obstacle degree index of the research object is calculated from the multi-dimensional feature data;
in step S14, when the poverty obstacle degree index reaches the preset threshold, the research object is determined to be a poverty-alleviation object.
In this embodiment, research data of a research object is obtained, the research data including questionnaire data, targeted poverty alleviation big data platform data, video data, audio data, photo data, GPS positioning data, and remote sensing image data. Specifically, the basic information of the research object is collected through an APP questionnaire system; video images of the survey process are shot with a video camera; audio of the survey process is collected with a voice recorder; typical photos of the research object's house, courtyard, relevant certificates, award materials, documents and the like are collected with a camera; the longitude, latitude and elevation of the survey site are collected with a GPS locator, recording the precise position of the farmer's house; and satellite images of the research object's geographical environment and the locations of newly built assistance infrastructure are collected through maps and remote sensing.
Fusion processing is performed on the research data to obtain multi-dimensional feature data of the research object, the multi-dimensional feature data comprising at least one of the following scores: a family property score, a labor force score, a diet guarantee score, a medical consumption score, and an education guarantee score. The basic information of the research object comprises identity information such as the farmer's name, age and appearance, family member information, and the like.
When fusion processing is performed on multi-source research data, the data must first be preprocessed, in particular verified. In this embodiment, at least one item of candidate data of the research object is acquired. A first verification is performed on target candidate data among the candidate data, the first verification comprising verifying the target candidate data according to its attribute value. Specifically, when the target candidate data is an attribute of the research object, it is judged whether the data value of the candidate data exceeds a preset interval: when the data value does not exceed the preset interval, the target candidate data passes the first verification; when the data value exceeds the preset interval, the target candidate data fails the first verification. When the first verification passes, a second verification is performed on the target candidate data according to the at least one item of candidate data; during the second verification, person-image verification is performed first, to ensure that the collected data correspond to the research object himself or herself.
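A minimal sketch of the first verification: an attribute value passes only when it lies inside a preset interval. The attribute names and intervals below are hypothetical; the patent specifies only that a "preset interval" exists:

```python
# Hypothetical preset intervals for candidate attributes (not from the patent).
PRESET_INTERVALS = {
    "age": (0, 120),                   # years
    "annual_income": (0, 10_000_000),  # yuan
}

def first_verification(attribute, value):
    """Pass when the data value does not exceed the preset interval."""
    low, high = PRESET_INTERVALS[attribute]
    return low <= value <= high
```

For example, `first_verification("age", 150)` fails, so that value would never reach the second verification.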
Specifically, a face image of the research object is extracted from the video data and compared with data in the targeted poverty alleviation big data platform for verification. When no information on the research object exists in the big data platform, a failure reminder is given and a message is output asking whether new research-object information needs to be entered. When this verification passes, the second verification is performed on each item of candidate data corresponding to the research object. For example, when the attribute is the research object's age, identity card data of the research object is acquired — either entered directly in the system or extracted from a collected photo — and the authenticity of the stated age is verified against the age recorded in the identity card data. For another example, when the attribute is the research object's height, image data of the research object may be acquired; the height is predicted from the research object and the surrounding environment information in the image data to obtain a predicted height value; the stated height is compared with the predicted value; and when the difference is smaller than a first preset difference, the second verification passes.
For another example, when the target candidate data is the research object's income value, information capable of representing the research object's economic condition is acquired; a predicted income value is determined from this information; the predicted income value is compared with the stated income value; and when the difference is smaller than a second preset difference, the second verification passes. When the second verification passes, the target candidate data is adopted as research data of the research object.
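The second verification amounts to a tolerance comparison between a reported value and a value predicted from an independent source (identity card data, image data, economic indicators). A minimal sketch, with hypothetical tolerance values:

```python
def second_verification(reported, predicted, preset_difference):
    """Pass when the reported value and the independently predicted value
    differ by less than the preset difference."""
    return abs(reported - predicted) < preset_difference

# Hypothetical example: reported height 170 cm vs 168 cm predicted from
# image data, with an assumed 5 cm tolerance.
height_ok = second_verification(170, 168, 5)
```

The same comparison applies to income: a reported income of 30,000 yuan against a predicted 12,000 yuan with a 5,000-yuan tolerance would fail, flagging the data for a reminder.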
After the double verification of each candidate data item is completed, the data are pre-judged according to poverty dimensions. The multi-source research data are judged from four aspects: family property, family population, health condition and education condition. For example, for family property, the family income is first obtained from the questionnaire data and compared with the targeted poverty alleviation big data platform data; when the average annual change in income exceeds a preset threshold, the research object is determined not to be poor. The annual income change of the research object is then calculated, and when it exceeds a preset threshold, the annual income is corrected: the change in the local average income level is applied to obtain the figure used as the research object's annual income. Next, family assets such as houses and vehicles are analyzed from the video data and image data. For the house condition, whether someone normally lives there is judged from traces of daily life — for example, video images are used to identify whether daily necessities such as tableware, food and clothing are present in the surveyed house, and whether the monthly water and electricity consumption is below preset thresholds. When no one normally lives in the surveyed house, it is determined that the research object owns at least two houses, the second house being in better condition than the surveyed one, and the research object is not poor. In addition, the safety and value of the house can be judged by image recognition of its basic appearance, the appearance of the courtyard, and whether there is a courtyard economy — for example, whether the house is at risk of cracking, tilting or collapsing.
In addition, the house value can be evaluated automatically by connecting to a house evaluation system and uploading the house images, geographical position and other information. For the vehicle condition, the vehicle value is given by a preset model after identifying the vehicle type and use. Finally, the asset condition of the research object is predicted from the house value and the vehicle value, and when the total predicted assets exceed a preset value, the research object is determined not to be poor. For another example, for family population, the number, age, education and occupation of family members are first obtained from the questionnaire data and the targeted poverty alleviation data; the working-age population and the dependent population are analyzed; and the family's poverty is judged from the research object's ratio of dependent expenditure to annual income. It should be noted that the family income situation can be further checked against the family labor force situation: for example, if a farmer has no labor force at all yet the relevant materials show high earned income, the data are not authentic and a reminder is required; if a farmer reports low income but the photos and videos of the family house and living environment reveal high-value property and consumption capacity, the survey data are not authentic and a reminder is required. For health condition and education condition, information can be further extracted from photo data, such as certificates and medical records.
After the data from different sources have been verified, they can be fused and, according to preset rules, formed into a family property score, a labor force score, a medical consumption score, an education guarantee score, a diet guarantee score, a consumption score and the like.
For the family property score, the family annual income of the research object is acquired from the questionnaire data; a vehicle image and a house image of the research object are acquired from the video data, and the vehicle condition and house condition of the family are determined from them; the family property score is then determined from the family annual income, the vehicle condition and the house condition.
For the labor force score, the family members of the research object are determined from the questionnaire data, together with their age, health condition and education level; family member images are obtained from the video data, and the family labor force condition and population burden condition of the research object are determined from them; the labor force score is then determined from the family labor force condition, the population burden condition and the education level.
For the medical consumption score, the medical insurance status of the research object and the family's annual medical expenditure are determined from the questionnaire data; the number of hospitals of each grade within a preset distance of the research object's family is obtained from the family's position information; and the medical consumption score is determined from the medical insurance status, the annual medical expenditure and the number of hospitals of each grade.
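Counting hospitals of each grade within a preset distance of the household, as used for the medical consumption score, can be sketched with a great-circle distance check on the GPS positions. The coordinates, hospital grades and radius below are illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def hospitals_within(home, hospitals, radius_km):
    """Count hospitals per grade within a preset distance of the household."""
    counts = {}
    for lat, lon, grade in hospitals:
        if haversine_km(home[0], home[1], lat, lon) <= radius_km:
            counts[grade] = counts.get(grade, 0) + 1
    return counts

# Hypothetical household position and hospital list.
home = (39.90, 116.40)
hospitals = [(39.91, 116.41, "tertiary"), (40.50, 117.00, "primary")]
nearby = hospitals_within(home, hospitals, 10.0)
```

With these sample points, only the tertiary hospital (about 1.4 km away) falls within the assumed 10 km radius; the distant primary hospital does not.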
For the education guarantee score, the number of family members receiving education, the education levels of the family members and the education insurance status are determined from the questionnaire data, and the education guarantee score of the research object is determined from these.
For the diet guarantee score, a food image and a drinking water image of the research object are extracted from the video data; the diet condition and drinking water condition of the research object are determined from the food image and the drinking water image; and the diet guarantee score is determined from the diet condition and the drinking water condition.
For the consumption score, the food, clothing, communication and travel expenditures of the research object are acquired from the research data and the targeted poverty alleviation big data platform, and the consumption score of the research object is determined from these expenditures.
It should be noted that, for the above scores, when several of them meet different preset thresholds, this indicates that the research object needs support at different levels, and an early-warning prompt is given for those scores.
The poverty obstacle degree index of the research object is calculated from the multi-dimensional feature data, and when the index reaches a preset threshold, the research object is determined to be a poverty-alleviation object. It should be noted that, to make the data comparable, each index needs to be normalized. The poverty obstacle degree index is calculated from the multi-dimensional feature data according to preset rules, different grades are determined from the index, and when the index meets the preset threshold, the research object is determined to be a poverty-alleviation object.
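The normalization step mentioned above is commonly implemented as min-max scaling; the sketch below is a generic example, not the patent's specific preset rule:

```python
def min_max_normalize(values):
    """Scale raw index values into [0, 1] so that feature data measured in
    different units become comparable before the index is computed."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # All values identical: no spread to normalize over.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

For example, raw annual incomes of 10, 20 and 30 thousand yuan normalize to 0.0, 0.5 and 1.0, and can then be weighted alongside scores from other dimensions.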
The beneficial effect of this embodiment lies in: by combining the questionnaire data, targeted poverty alleviation big data, video data, audio data, photo data, GPS positioning data and remote sensing image data, the poverty condition of the research object is modeled from several different angles, such as property, labor, education and medical conditions, and analyzed objectively and automatically, avoiding interference from human factors and enabling early warning of poverty causes at different levels. In addition, through cross-validation of data from different sources, erroneous data are identified, data accuracy is improved, and objective, accurate and rapid identification of poor households is achieved.
In one embodiment, step S12 above may be implemented as the following steps A1-A3:
in step A1, the family annual income of the research object is acquired from the questionnaire data;
in step A2, a vehicle image and a house image of the research object are acquired from the video data, and the vehicle condition and house condition of the research object's family are determined according to the vehicle image and the house image;
in step A3, the family property score of the research object is determined according to the family annual income, vehicle condition and house condition of the research object.
In this embodiment, the family annual income of the research object is acquired from the questionnaire data. Specifically, the family annual income is compared with the data in the targeted poverty alleviation big data platform and with the local average income level. When the rate of change of the family's annual income exceeds a preset threshold, or the ratio of the family annual income to the local average income level falls outside a preset range, the questionnaire data are considered unreliable and discarded, and the local average income level is taken as the family's annual income.
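A minimal sketch of this income check: fall back to the local average income when the reported value changes too sharply year on year or diverges too far from the local level. The numeric thresholds are illustrative assumptions, since the patent specifies only a "preset threshold" and a "preset range":

```python
def corrected_income(reported_income, last_year_income, local_avg_income,
                     max_change_rate=0.5, max_ratio=5.0):
    """Return the income figure to use for the research object.

    If the year-on-year change rate or the ratio to the local average income
    exceeds its (hypothetical) limit, the questionnaire value is treated as
    unreliable and the local average income is used instead.
    """
    change_rate = abs(reported_income - last_year_income) / max(last_year_income, 1)
    ratio = reported_income / max(local_avg_income, 1)
    if change_rate > max_change_rate or ratio > max_ratio:
        return local_avg_income
    return reported_income
```

For example, a reported jump from 18,000 to 200,000 yuan in one year exceeds the assumed 50% change-rate limit, so the local average (here 25,000 yuan) is substituted.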
A vehicle image and a house image of the investigated subject are then acquired from the video data, and the vehicle condition and house condition of the investigated family are determined from them. Specifically, for the house condition, whether anyone lives in the house is first judged from traces of habitation: for example, the video images are used to identify whether daily necessities such as tableware, food, and clothing are present, and whether the monthly water and electricity consumption falls below a preset threshold. Next, the safety and value of the house are judged by image recognition of its basic appearance, courtyard economy, and the like, for example whether the house is at risk of cracking, tilting, or collapse; the house value is assessed automatically by uploading the house images, geographic location, and other information to a house valuation system. For the vehicle condition, the vehicle value is assigned by a preset model after identifying the vehicle type and use.
The family property score of the investigated subject is then determined from the family annual income, vehicle condition, and house condition. By international convention, a house-price-to-income ratio of 3-6 is considered reasonable, and a vehicle price of 1-1.5 times annual income is considered reasonable. The family property score is determined by taking the family annual income as the base and adjusting it according to the house-price-to-income and vehicle-price-to-income ratios. Specifically, in this embodiment the family property score of the research subject is determined by the following formula:
[Formula omitted — rendered only as an image (BDA0003183055640000111) in the original patent; it computes the family property score from the family annual income, house condition, and vehicle condition.]
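Since the actual formula appears only as an image, the aggregation below is an assumed reading: start from annual income and scale it up when the house- and vehicle-value-to-income ratios exceed their "reasonable" intervals (3-6x and 1-1.5x respectively). Function and parameter names are hypothetical.

```python
# Assumed property-score aggregation; only the 3-6x house ratio and
# 1-1.5x vehicle ratio intervals come from the text.

def family_property_score(annual_income, house_value, vehicle_value):
    income = max(annual_income, 1e-9)
    house_ratio = house_value / income
    car_ratio = vehicle_value / income
    # Assumed adjustment: ratios inside the reasonable interval are neutral
    # (factor 1); ratios above it scale the score up proportionally.
    house_factor = max(1.0, house_ratio / 6.0)
    car_factor = max(1.0, car_ratio / 1.5)
    return annual_income * house_factor * car_factor
```

Under this reading, a family with a house worth 12x income and a car worth 3x income scores well above its nominal income, capturing mis-reported wealth.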
The beneficial effect of this embodiment lies in: the family property of the investigated subject is determined not only from the family annual income but also from the house and vehicle values estimated from the video data, so that the annual income figure is corrected and supplemented; this avoids errors caused by uneven family income or misreported property and ensures the objectivity and accuracy of the investigation result.
In one embodiment, the above step S12 can also be implemented as the following steps B1-B3:
in step B1, the family members of the investigation subject are determined from the questionnaire data, and the age, health condition and education level of the family members are determined from the questionnaire data;
in step B2, family member images of the investigation target are acquired from the video data, and family labor force conditions and population burden conditions of the investigation target are determined according to the family member images;
in step B3, the labor score of the subject of investigation is determined based on the family labor status, population burden status, and education level of the subject of investigation.
In the present embodiment, the family members of the investigated subject are determined from the questionnaire data, together with their ages, health conditions, and education levels; family member images are then obtained from the video data, and the family labor-force and population-burden conditions of the investigated subject are determined from these images. Specifically, the number, ages, education, and occupations of family members are first obtained from the questionnaire data and the accurate poverty-alleviation big data; this information is then verified and corrected against the video images and photo data — for example, the number and ages of family members are confirmed from photographs of their identity cards. The labor-force population and the supported population are determined from the members' ages and health conditions, and the corresponding education levels are determined for each.
And determining the labor force score of the investigation object according to the family labor force condition, the population burden condition and the education level of the investigation object. Specifically, in this embodiment, the labor force score of the research target is determined by the following formula:
[Formula omitted — rendered only as an image (BDA0003183055640000121) in the original patent; it computes the labor-force score from the ages, education levels, and health levels of the n labor-force members and the m supported members.]
where n represents the number of labor-force members in the investigated family, m represents the number of supported members, y_i denotes the age of the i-th family member, e_i the education level of the i-th family member, and h_i the health level of the i-th family member. In this embodiment, the education level is determined as follows: master's degree and above = 2, bachelor's degree = 1.5, junior college = 1, high school = 0.8, below high school = 0.5. The health level is determined as follows: able to perform high-intensity work = 2, able to perform normal work = 1, able to care for oneself = 0.8, unable to care for oneself = 0.5. Family members under 18, over 70, or unable to care for themselves are counted as the supported population; members between 18 and 70 who can care for themselves are counted as the labor-force population.
The beneficial effect of this embodiment lies in: the poverty of the family to be researched is analyzed through the static property condition, the ability of creating value and the burden situation of the family population are comprehensively considered through the combination of labor force, the number of the population to be supported, health information and education level in the family, and the family with high poverty stress caused by the situations of large number of the population to be supported, small number of the population to be supported and the like in the family is identified.
In one embodiment, the above step S12 can also be implemented as the following steps C1-C3:
in step C1, extracting a food image and a drink image of the investigation subject from the video data;
in step C2, determining the diet and drinking conditions of the subject to be investigated according to the food image and the drinking image;
in step C3, the diet guarantee score of the subject is determined according to the diet and drinking conditions.
Food and drinking-water safety are key elements of the "two no-worries, three guarantees" requirement within the overall goal of the poverty-alleviation campaign. In this embodiment, a food image and a drinking-water image of the research subject are extracted from the video data, and the subject's diet and drinking-water conditions are determined from them. Specifically, the drinking-water value in this embodiment mainly reflects whether drinking water is convenient: if tap water is available, the coefficient is 1; otherwise it is 0.5. The source, variety, and average daily expenditure of daily food are determined by extracting daily food images of the research subject from the video data. In this embodiment, the food source (S) is classified as purchased, self-produced, or given, assigned scores of 2, 1, and 0.5 respectively; the number of daily food types (T) is assigned 2 when there are 5 or more types, 1 when there are 3 to 5 types, and 0.5 when there are fewer than 3; and the value M of each item in the average daily food expenditure is obtained with the help of big data. The diet score of the research subject is then determined as:
[Formula omitted — rendered only as an image (BDA0003183055640000131) in the original patent; it computes the diet score from the food-type count T, the food sources S_i, and the food prices M_i.]
where n represents the number of food items, T the number of daily food types, S_i the source score of the i-th food, and M_i the price of the i-th food.
Determining the diet guarantee score of the research object according to the diet condition and the drinking water condition:
Diet guarantee score = drinking-water score × diet score.
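The scoring tables above can be sketched as follows. The source scores (2/1/0.5), type-count bands, and the tap-water coefficient come from the text; the diet-score aggregation itself (type score times average source-weighted price) is an assumed reading of the image-only formula.

```python
# Source and type-count scores from the text; aggregation is assumed.

SOURCE_SCORE = {"outsourced": 2.0, "self_produced": 1.0, "given": 0.5}

def type_score(n_types):
    if n_types >= 5:
        return 2.0
    if n_types >= 3:
        return 1.0
    return 0.5

def diet_guarantee_score(foods, has_tap_water):
    """foods: list of (source, price) tuples observed in the video data."""
    drinking = 1.0 if has_tap_water else 0.5   # tap-water coefficient (from text)
    if not foods:
        return 0.0
    # Assumed diet score: type score x average source-weighted price.
    diet = type_score(len(foods)) * sum(SOURCE_SCORE[s] * p for s, p in foods) / len(foods)
    return drinking * diet   # diet guarantee = drinking score x diet score
```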
The beneficial effect of this embodiment lies in: whether the daily drinking water of the investigation object is convenient, whether the daily diet is balanced and the daily total expenditure of diet is analyzed through the video data, the photo data and the like in the investigation data, whether the investigation object meets the basic diet guarantee or not is determined, whether the requirement of 'not being worried about eating' can be met or not is determined, and the poverty-stricken user who has problems in the diet guarantee is accurately identified.
In one embodiment, the above step S12 can also be implemented as the following steps D1-D2:
in step D1, family member information of the research subject is obtained from the research data, wherein the family member information at least comprises the age, health level and annual medical expenditure of the family member;
in step D2, a medical consumption score is determined based on the family member information of the subject of investigation.
In the present embodiment, family member information of the research subject is acquired from the research data, including at least each member's age (y), health level (h), and annual medical expenditure (p). The health level h is determined as follows: able to perform high-intensity work = 2, able to perform normal work = 1, able to care for oneself = 0.8, unable to care for oneself = 0.5. The annual medical expenditure comprises two parts: the basic medical expenditure p and the medical insurance expenditure p'. The medical consumption score is then determined from the family member information; in this embodiment, it is determined by the following formula:
[Formula omitted — rendered only as an image (BDA0003183055640000132) in the original patent; it computes the medical consumption score from the members' ages y, health levels h, and medical expenditures p and p'.]
it should be noted that the position information of the research target may be determined by acquiring GPS positioning data of the research target. Determining the distance(s) between the investigation object and the nearest Hospital according to the GPS positioning data and the map information data of the investigation object; according to the distance between the research object and the nearest Hospital and the medical consumption score,
[Formula omitted — rendered only as an image (BDA0003183055640000141) in the original patent; it adjusts the medical consumption score by the distance s to the nearest hospital.]
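A heavily hedged sketch of the medical-consumption scoring: the inputs (health level h, basic expenditure p, insurance expenditure p', hospital distance s) are from the text, but both formulas appear only as images, so every term of the aggregation below is an assumption chosen only to respect the stated directions (higher spending, no insurance, and a farther hospital all lower the score).

```python
# All coefficients and the functional form are assumptions; only the
# input variables and the direction of each effect come from the text.

def medical_score(h, p, p_insurance, distance_km):
    insured = 1.0 if p_insurance > 0 else 0.5   # assumed insurance factor
    base = h * insured / (1.0 + p)              # higher spending -> lower score
    return base / (1.0 + distance_km)           # farther hospital -> lower score
```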
the beneficial effect of this embodiment lies in: by increasing medical consumption as a factor for judging poverty, poverty-stricken objects with weak medical guarantee capability of families, which are caused by the fact that basic medical insurance is not participated in, common diseases and chronic diseases cannot be treated in time, medical facilities are weak and the like, are accurately identified.
In one embodiment, the above step S12 can also be implemented as the following steps E1-E3:
in step E1, family member information of the research target is acquired from the research data;
in step E2, determining the number of family members and education level in the education stage;
in step E3, the education support ability of the family members is determined according to the number of family members in the education stage and the education level.
In this embodiment, family member information of the research subject is acquired from the research data, and the number of members in the education stage and their education levels are determined; a family member older than 3 and younger than 15 is considered to be in the education stage. The family's education-support capacity is then determined from the number of members in the education stage and the education levels; in this embodiment, it is determined by the following formula:
[Formula omitted — rendered only as an image (BDA0003183055640000142) in the original patent; it computes the education-support capacity from the number of members in the education stage and the members' education levels.]
where n represents the number of family members; k_i indicates whether the i-th member is in the education stage (k_i = 0: not in education; k_i = 1: in education); and e_i is the education level of the i-th family member, determined in this embodiment as follows: master's degree and above = 2, bachelor's degree = 1.5, junior college = 1, high school = 0.8, below high school = 0.5.
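The pieces stated in the text — the 3-15 education-stage age band and the education-level table — can be sketched as follows; the aggregation (sum of the other members' education levels divided by the number of members in the education stage plus one) is an assumed stand-in for the image-only formula.

```python
# Age band and level table from the text; aggregation assumed.

EDU_LEVEL = {"master": 2.0, "bachelor": 1.5, "college": 1.0,
             "high_school": 0.8, "below_high_school": 0.5}

def education_support_score(members):
    """members: list of dicts with 'age' and 'education' keys."""
    in_education = [m for m in members if 3 < m["age"] < 15]
    others = [m for m in members if not (3 < m["age"] < 15)]
    support = sum(EDU_LEVEL[m["education"]] for m in others)
    # Assumed: support capacity diluted by the number of members in education.
    return support / (len(in_education) + 1)
```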
It should be noted that the position of the research subject may be determined from its GPS positioning data. The distance s between the research subject and the nearest primary and middle schools is determined from the GPS positioning data and map information, and the education guarantee score is then adjusted according to this distance:
[Formula omitted — rendered only as an image (BDA0003183055640000151) in the original patent; it adjusts the education guarantee score by the distance s to the nearest primary and middle schools.]
the beneficial effect of this embodiment lies in: the education guarantee ability of the research object is brought into the research range, and the situation that the research object cannot enjoy fair education resources due to poverty is identified by combining the geographic information, so that poverty-stricken households with insufficient education can be accurately identified.
In one embodiment, the above step S12 may also be implemented as the following steps F1-F3:
in step F1, acquiring GPS positioning data of the research object;
in step F2, obtaining remote sensing image data of an area where the research object is located according to the GPS positioning data;
in step F3, the accessibility score of the area where the research object is located is determined based on the remote sensing image data of the area where the research object is located.
In this embodiment, the GPS positioning data of the research subject is acquired; remote sensing image data of the subject's area is obtained according to the GPS positioning data; and the accessibility score of the area is determined from that imagery. Specifically, road information for the area around the subject's home is extracted from the remote sensing image, each road class is assigned a maximum speed, the driving time from the subject's home to the municipal government of the city where it is located is calculated, and the reciprocal of this time is taken as the subject's accessibility score.
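The travel-time calculation above can be sketched as follows; the road classes and maximum speeds in the table are illustrative assumptions, while the reciprocal-of-drive-time definition of the score comes from the text.

```python
# Speed table is illustrative; the score definition (reciprocal of the
# drive time to the municipal government) follows the text.

MAX_SPEED_KMH = {"highway": 100, "national": 80, "county": 60, "village": 30}

def accessibility_score(segments):
    """segments: list of (road_class, length_km) from home to city hall,
    as extracted from the remote sensing imagery."""
    hours = sum(length / MAX_SPEED_KMH[cls] for cls, length in segments)
    return 1.0 / hours if hours > 0 else 0.0
```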
The beneficial effect of this embodiment lies in: the traffic factor is brought into the investigation range, and the distance between the investigation object and the city government is taken into consideration as an index for judging the traffic accessibility, so that the poor factors caused by traffic inconvenience are identified.
In one embodiment, the above step S13 may be implemented to determine the poverty-stricken obstacle degree index of the investigation subject according to the following formula:
P = Σ_{i=1}^{n} a_i · Σ_{j=1}^{m} b_{ij} · X_{ij}
where P is the poverty-obstacle degree index, n is the number of feature-data dimensions, m is the number of indexes under the corresponding dimension, X_ij is the score of the j-th index of the i-th dimension, b_ij is its weight coefficient, and a_i is the dimension weight.
In this embodiment, the feature data comprise the family property score, labor-force score, diet guarantee score, medical consumption score, education guarantee score, and accessibility score. To keep the scores comparable, the feature data are normalized before the poverty-obstacle index is computed by the above formula. The poverty-obstacle indexes obtained from the normalized scores are then ranked in ascending order, and a preset quota of families with the lowest indexes is selected as the final poverty-alleviation objects.
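The selection step above — normalize each feature, compute the weighted index P, rank ascending, keep the lowest-index households — can be sketched as follows. Min-max normalization and the example weights are assumptions; the text only says the scores are normalized and weighted.

```python
# Normalize-weight-rank-select sketch. Min-max scaling and the weight
# values are illustrative; the ascending ranking and lowest-k selection
# follow the text.

def normalize(col):
    lo, hi = min(col), max(col)
    span = (hi - lo) or 1.0          # avoid division by zero on constant columns
    return [(v - lo) / span for v in col]

def select_poorest(households, weights, k):
    """households: list of per-household score tuples (one entry per dimension).
    Returns the indices of the k households with the lowest index P."""
    cols = list(zip(*households))
    norm_rows = list(zip(*[normalize(list(c)) for c in cols]))
    indexed = [(sum(w * x for w, x in zip(weights, row)), i)
               for i, row in enumerate(norm_rows)]
    indexed.sort()                   # ascending: lowest index = poorest
    return [i for _, i in indexed[:k]]
```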
It should be noted that the selected poverty-alleviation objects can also be sorted by each individual score, and the lowest-scoring indicator analyzed, so as to identify and study the cause of each family's poverty.
In addition, in this embodiment, each score is separately sorted in ascending order, and families whose individual score falls below a preset threshold are flagged for early warning, key monitoring, and subsequent dynamic monitoring.
As shown in FIG. 2, in one embodiment, the above step S13 can be implemented as the following steps S21-S25:
in step S21, initializing each score of the feature data, including setting a random number of an initialization weight, an iteration number, a learning rate, and a distance calculation method;
in step S22, the geometric center coordinates of the position information of the investigation object are read as a spatial dimension, and the feature data index layer of the investigation object is read as an attribute dimension;
in step S23, calculating a distance between the input sample data of the spatial dimension and the attribute dimension and each competitive neuron, wherein the smallest distance is a winning neuron;
in step S24, setting a threshold and a neighborhood to obtain a final winning neuron and updating a neighborhood neuron matrix;
in step S25, the iteration results of the space dimension and the attribute dimension are combined to obtain a final order iteration result, and the procedure ends.
In the present embodiment, the number of iterations is 2500, the learning rate is 0.1, and the distance is the Euclidean distance. As shown in FIG. 3, let X = {x_1, x_2, …, x_n} be a set of training samples, each containing a spatial dimension s_i and an attribute dimension a_i. W is a p × q weight matrix whose element w_ij (i and j denote the row and column numbers) contains a spatial component ws_ij and an attribute component wa_ij, i.e. w_ij = [ws_ij, wa_ij]. α is the learning rate, initialized to a real number in (0, 1); r is the neighborhood-radius parameter of the neighborhood function h(w_ij, w_mn, r); s and a are the limit values for the best-matching unit in the spatial and attribute domains, respectively. A sample and an output neuron are considered spatially adjacent if their spatial distance is not greater than s, and attribute-similar if their attribute distance is not greater than a.
The spatial-dimension-constrained iteration is as follows:

For m = 1 to n:
    for all w_ij ∈ W:
        compute the spatial distance: d_ij = |s_m − ws_ij|
    select the minimum d_ij; the corresponding neuron is the spatial-domain winner w_Bs;
    determine the set W_B such that the spatial distance between each element of W_B and w_Bs is not greater than the limit s;
    for all w_ij ∈ W_B:
        compute the full distance: d_ij = |x_m − w_ij|
    select the winning neuron w_B whose d_ij attains the minimum min(d_ij);
    update the weights of the winning neuron and the neurons in its neighborhood: w_ij = w_ij + α·h(w_B, w_ij, r)·|x_m − w_ij|;
    decrease the values of α and r;
repeat the above steps until convergence.
The attribute-dimension-constrained iteration is analogous:

For m = 1 to n:
    for all w_ij ∈ W:
        compute the attribute distance: d_ij = |a_m − wa_ij|
    select the minimum d_ij; the corresponding neuron is the attribute-domain winner w_Ba;
    determine the set W_B such that the attribute distance between each element of W_B and w_Ba is not greater than the limit a;
    for all w_ij ∈ W_B:
        compute the full distance: d_ij = |x_m − w_ij|
    select the winning neuron w_B whose d_ij attains the minimum min(d_ij);
    update the weights of the winning neuron and the neurons in its neighborhood: w_ij = w_ij + α·h(w_B, w_ij, r)·|x_m − w_ij|;
    decrease the values of α and r;
repeat the above steps until convergence.
The results of the two processes are then combined to obtain the final result.
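The spatial-constraint-first pass above can be sketched as a compact self-organizing-map loop: the winner is first found in the spatial sub-vector, candidates are restricted to neurons spatially close to that winner, and the final winner is chosen by full distance. Grid size, the radius schedule, and treating the first two components as x, y are illustrative assumptions.

```python
# Minimal space-constrained SOM sketch; grid size, limits, and the
# decay schedule are assumptions, not the patent's exact settings.
import math
import random

def train_constrained_som(samples, grid=(4, 4), iters=2500, alpha=0.1, s_limit=1.0):
    dim = len(samples[0])
    random.seed(0)                            # deterministic initialisation
    w = {(i, j): [random.random() for _ in range(dim)]
         for i in range(grid[0]) for j in range(grid[1])}
    spatial = slice(0, 2)                     # assumed: first two components are x, y

    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    for t in range(iters):
        x = samples[t % len(samples)]
        # Step 1: spatial-domain winner, then candidate set within limit s
        ws = min(w, key=lambda k: dist(w[k][spatial], x[spatial]))
        cand = [k for k in w if dist(w[k][spatial], w[ws][spatial]) <= s_limit]
        # Step 2: full-distance winner restricted to the candidate set
        wb = min(cand, key=lambda k: dist(w[k], x))
        # Step 3: update the winner and its grid neighbours with a decaying rate
        rate = alpha * (1.0 - t / iters)
        for k in w:
            if abs(k[0] - wb[0]) <= 1 and abs(k[1] - wb[1]) <= 1:
                w[k] = [wi + rate * (xi - wi) for wi, xi in zip(w[k], x)]
    return w
```

The attribute-constraint-first pass would be identical with the candidate set filtered by the attribute sub-vector instead.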
A novel survey mode is established that fuses big-data investigation, traditional on-site investigation, and remote-sensing image verification: an ontology-based human-machine-object ternary data fusion method is proposed, and the three complementary survey methods together improve both operating efficiency and data quality. In the human-machine-object ternary data fusion modeling, triple extraction differs from the traditional pipeline approach: the input text is first word-vectorized by a pre-trained model, entity recognition and relation classification are then completed after operations such as max pooling and full connection, and the extracted human-machine-object ternary data are finally stored in a graph database to construct the ontology model.
The beneficial effect of this embodiment lies in: the research provides a new self-organizing iterative improvement method, which comprises the steps of designing composite distance statistics of space and attribute weighting, adding space constraint to ensure that attribute result space is continuous, searching an optimal matching unit set according to the space distance, searching an optimal matching unit in the set according to the space and the attribute, and executing space constraint-first iteration in the process; further searching a best matching unit set according to the attribute distance, then searching the best matching unit in the set according to the space and the attribute, and executing 'attribute constraint priority' iteration. Results of the two processes are combined, space constraint and attribute constraint are met respectively, and fusion of the two results is achieved. Through an objective iteration effect, the peasant households with the largest obstacle degree index are selected one by one, and the spatial continuity and the attribute similarity of the multi-dimensional data set can be considered at the same time. And objective basis is provided for accurate identification of poor users.
The present invention also provides a poverty-alleviation object determination device, as shown in FIG. 4, comprising:
the acquisition module 41 is configured to acquire research data of a research object, where the research data includes questionnaire data, precise poverty-relieving big data platform data, video data, audio data, GPS positioning data, and remote sensing image data;
the fusion module 42 is configured to perform fusion processing on the research data to obtain multidimensional characteristic data of the research object, where the multidimensional characteristic data at least includes one of the following scores: family property scores, labor scores, diet support scores, and education support scores;
a calculating module 43, configured to calculate a poverty impairment index of the research object according to the multidimensional feature data;
and the determining module 44 is configured to determine that the investigation object is a poverty alleviation object when the poverty alleviation obstacle degree index reaches a preset threshold.
In one embodiment, the fusion module 42 includes:
the first acquisition sub-module is used for acquiring the family annual income of the investigation object from the questionnaire data;
the second acquisition submodule is used for acquiring a vehicle image and a house image of a research object from the video data and determining the condition of a vehicle and the condition of a house of a research family according to the vehicle image and the house image;
and the first determining submodule is used for determining the household property score of the research object according to the household annual income, the vehicle condition and the house condition of the research object.
In one embodiment, the fusion module 42 further includes:
a second determination submodule for determining family members of the investigation subject from the questionnaire data, and determining the age, health condition and education level of the family members from the questionnaire data;
the third determining submodule is used for acquiring the family member images of the investigation object from the video data and determining the family labor force condition and the population burden condition of the investigation object according to the family member images;
and the fourth determination submodule is used for determining the labor force score of the investigation object according to the family labor force condition, the population burden condition and the education level of the investigation object.
In one embodiment, the fusion module 42 further includes:
the first extraction submodule is used for extracting a food image and a drinking water image of a research object from the video data;
the fifth determining submodule is used for determining the diet condition and the drinking condition of the investigation object according to the food image and the drinking image;
and the sixth determining submodule is used for determining the diet guarantee score of the research object according to the diet condition and the drinking water condition.
In one embodiment, the fusion module 42 further includes:
the second extraction submodule is used for acquiring family member information of a research object from research data, wherein the family member information at least comprises the age, health level and annual medical expenditure of family members;
and the eighth determining submodule is used for determining the medical consumption score according to the family member information of the research object.
In one embodiment, the fusion module 42 further includes:
the third extraction submodule is used for acquiring family member information of a research object from the research data;
a ninth determining sub-module for determining the number of family members and education level in the education stage;
and the tenth determining submodule is used for determining the education support capability of the family members according to the number of the family members in the education stage and the education level.
In one embodiment, the fusion module 42 further includes:
the third acquisition submodule is used for acquiring the GPS positioning data of the research object;
the fourth acquisition submodule is used for acquiring remote sensing image data of the area where the research object is located according to the GPS positioning data;
and the eleventh determining submodule is used for determining the accessibility score of the area where the research object is located according to the remote sensing image data of the area where the research object is located.
In one embodiment, the calculation module 43 includes:
a seventh determining submodule, configured to determine a poverty impairment degree index of the research object according to the following formula:
P = Σ_{i=1}^{n} a_i · Σ_{j=1}^{m} b_{ij} · X_{ij}
where P is the poverty-obstacle degree index, n is the number of feature-data dimensions, m is the number of indexes under the corresponding dimension, X_ij is the score of the j-th index of the i-th dimension, b_ij is its weight coefficient, and a_i is the dimension weight.
In one embodiment, the calculation module is further configured to: initializing each score, including setting random number of initialization weight, iteration times, learning rate and distance calculation method; reading in the geometric center coordinates of the position information of the investigation object as a space dimension, and reading in the characteristic data index layer of the investigation object as an attribute dimension; calculating the distance between input sample data of the space dimension and the attribute dimension and each competitive neuron, wherein the smallest distance is a winning neuron; setting a threshold value and a neighborhood to obtain a final winning neuron and updating a neighborhood neuron matrix; and combining the iterative results of the space dimension and the attribute dimension to obtain a final sequence arrangement iterative result, and ending the program.
It should be noted that each embodiment of the poverty-alleviation object determination method can be executed by the poverty-alleviation object determination device; that is, the device can perform any of the embodiments of the method described above.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A poverty-alleviation object determination method, characterized by comprising:
acquiring research data of a research object, wherein the research data comprises questionnaire data, targeted poverty-alleviation big data platform data, video data, audio data, photo data, GPS positioning data and remote sensing image data;
performing fusion processing on the research data to obtain multi-dimensional feature data of the research object, wherein the multi-dimensional feature data comprises at least one of the following scores: a family property score, a labor force score, a diet guarantee score, a medical consumption score and an education guarantee score;
calculating a poverty impairment degree index of the research object according to the multi-dimensional feature data;
and when the poverty impairment degree index reaches a preset threshold value, determining the research object as a poverty-alleviation object.
2. The method according to claim 1, wherein the performing fusion processing on the research data to obtain the multi-dimensional feature data of the research object comprises:
acquiring a family annual income of the research object from the questionnaire data;
acquiring a vehicle image and a house image of the research object from the video data, and determining a vehicle condition and a house condition of the research object's family according to the vehicle image and the house image;
and determining the family property score of the research object according to the family annual income, the vehicle condition and the house condition of the research object.
3. The method according to claim 1, wherein the performing fusion processing on the research data to obtain the multi-dimensional feature data of the research object further comprises:
determining family members of the research object from the questionnaire data, and determining the age, health condition and education level of the family members from the questionnaire data;
acquiring a family member image of the research object from the video data, and determining a family labor force condition and a population burden condition of the research object according to the family member image;
and determining the labor force score of the research object according to the family labor force condition, the population burden condition and the education level of the research object.
4. The method according to claim 1, wherein the performing fusion processing on the research data to obtain the multi-dimensional feature data of the research object further comprises:
extracting a food image and a drinking water image of the research object from the video data;
determining a diet condition and a drinking water condition of the research object according to the food image and the drinking water image;
and determining the diet guarantee score of the research object according to the diet condition and the drinking water condition.
5. The method of claim 1, wherein the calculating the poverty impairment degree index of the research object according to the multi-dimensional feature data comprises:
determining the poverty impairment degree index of the research object according to the following formula:
P = \sum_{i=1}^{n} a_i \sum_{j=1}^{m} w_{ij} X_{ij}
wherein P is the poverty impairment degree index, n is the number of feature dimensions, m is the number of indicators under the corresponding dimension, X_{ij} is the score corresponding to the j-th indicator of the i-th dimension, w_{ij} is its weight coefficient, and a_i is the weight of the i-th dimension.
6. A poverty-alleviation object determining apparatus, characterized by comprising:
an acquisition module, configured to acquire research data of a research object, wherein the research data comprises questionnaire data, targeted poverty-alleviation big data platform data, video data, audio data, photo data, GPS positioning data and remote sensing image data;
a fusion module, configured to perform fusion processing on the research data to obtain multi-dimensional feature data of the research object, wherein the multi-dimensional feature data comprises at least one of the following scores: a family property score, a labor force score, a diet guarantee score and an education guarantee score;
a calculation module, configured to calculate a poverty impairment degree index of the research object according to the multi-dimensional feature data;
and a determining module, configured to determine the research object as a poverty-alleviation object when the poverty impairment degree index reaches a preset threshold value.
7. The apparatus of claim 6, wherein the fusion module comprises:
a first acquisition submodule, configured to acquire a family annual income of the research object from the questionnaire data;
a second acquisition submodule, configured to acquire a vehicle image and a house image of the research object from the video data, and to determine a vehicle condition and a house condition of the research object's family according to the vehicle image and the house image;
and a first determining submodule, configured to determine the family property score of the research object according to the family annual income, the vehicle condition and the house condition of the research object.
8. The apparatus of claim 6, wherein the fusion module further comprises:
a second determining submodule, configured to determine family members of the research object from the questionnaire data, and to determine the age, health condition and education level of the family members from the questionnaire data;
a third determining submodule, configured to acquire a family member image of the research object from the video data, and to determine a family labor force condition and a population burden condition of the research object according to the family member image;
and a fourth determining submodule, configured to determine the labor force score of the research object according to the family labor force condition, the population burden condition and the education level of the research object.
9. The apparatus of claim 6, wherein the fusion module further comprises:
an extraction submodule, configured to extract a food image and a drinking water image of the research object from the video data;
a fifth determining submodule, configured to determine a diet condition and a drinking water condition of the research object according to the food image and the drinking water image;
and a sixth determining submodule, configured to determine the diet guarantee score of the research object according to the diet condition and the drinking water condition.
10. The apparatus of claim 6, wherein the computing module comprises:
a seventh determining submodule, configured to determine the poverty impairment degree index of the research object according to the following formula:
P = \sum_{i=1}^{n} a_i \sum_{j=1}^{m} w_{ij} X_{ij}
wherein P is the poverty impairment degree index, n is the number of feature dimensions, m is the number of indicators under the corresponding dimension, X_{ij} is the score corresponding to the j-th indicator of the i-th dimension, w_{ij} is its weight coefficient, and a_i is the weight of the i-th dimension.
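The weighted index of claim 5 and the threshold test of claim 1 can be sketched in code. The following Python sketch is illustrative only and not part of the claims; the function names, the example dimension scores, the weights, and the 0.7 threshold are all hypothetical values chosen for demonstration:

```python
def poverty_impairment_index(scores, weights, dim_weights):
    """Compute P = sum_i a_i * sum_j w_ij * X_ij.

    scores[i][j]     -- score X_ij of the j-th indicator in dimension i
    weights[i][j]    -- weight coefficient w_ij of that indicator
    dim_weights[i]   -- weight a_i of dimension i
    """
    return sum(
        a_i * sum(w * x for w, x in zip(w_i, x_i))
        for a_i, w_i, x_i in zip(dim_weights, weights, scores)
    )

def is_poverty_alleviation_object(index, threshold):
    """Per claim 1: the research object qualifies when the index
    reaches the preset threshold."""
    return index >= threshold

# Hypothetical example: two dimensions (say, family property with two
# indicators and labor force with one indicator).
scores = [[0.8, 0.6], [0.9]]
weights = [[0.5, 0.5], [1.0]]
dim_weights = [0.6, 0.4]

p = poverty_impairment_index(scores, weights, dim_weights)
print(round(p, 3))                                      # 0.78
print(is_poverty_alleviation_object(p, threshold=0.7))  # True
```

Note that the claims leave the weight coefficients and the threshold as design parameters; in practice they would be calibrated against the targeted poverty-alleviation big data platform rather than fixed by hand.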
CN202110852734.2A 2021-07-27 2021-07-27 Poverty-alleviation object determining method and device Pending CN113344761A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110852734.2A CN113344761A (en) 2021-07-27 2021-07-27 Poverty-alleviation object determining method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110852734.2A CN113344761A (en) 2021-07-27 2021-07-27 Poverty-alleviation object determining method and device

Publications (1)

Publication Number Publication Date
CN113344761A (en) 2021-09-03

Family

ID=77480394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110852734.2A Pending CN113344761A (en) 2021-07-27 2021-07-27 Poverty-alleviation object determining method and device

Country Status (1)

Country Link
CN (1) CN113344761A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648257A (en) * 2022-05-23 2022-06-21 德州市民政局 Information processing method, device and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104765944A (en) * 2014-03-26 2015-07-08 中国科学院地理科学与资源研究所 Comprehensive measuring technological method for urban vulnerability
CN107657577A (en) * 2017-10-20 2018-02-02 成都务本科技有限公司 A kind of method and system of precisely poverty alleviation
CN107945079A (en) * 2016-10-12 2018-04-20 普天信息技术有限公司 A kind of poverty alleviation object selection method and device
CN111325453A (en) * 2020-02-05 2020-06-23 北京明略软件***有限公司 Method, device and equipment for determining poverty-alleviation object and storage medium


Similar Documents

Publication Publication Date Title
Iban An explainable model for the mass appraisal of residences: The application of tree-based Machine Learning algorithms and interpretation of value determinants
Muenchow et al. Reviewing qualitative GIS research—Toward a wider usage of open‐source GIS and reproducible research practices
KR102290132B1 (en) Apparatus and method to predict real estate prices
KR20210082104A (en) A method for generating a learning model for predicting real estate transaction price
Sun et al. Aligning geographic entities from historical maps for building knowledge graphs
Das et al. A decision making model using soft set and rough set on fuzzy approximation spaces
CN117668360A (en) Personalized problem recommendation method based on online learning behavior analysis of learner
Wanke et al. Revisiting camels rating system and the performance of Asean banks: a comprehensive mcdm/z-numbers approach
Mia et al. Registration status prediction of students using machine learning in the context of Private University of Bangladesh
CN113344761A (en) Poverty-alleviation object determining method and device
Zhang et al. Enabling rapid large-scale seismic bridge vulnerability assessment through artificial intelligence
Melanda et al. Identification of locational influence on real property values using data mining methods
Hecht et al. Crowd-sourced data collection to support automatic classification of building footprint data
Naviamos et al. A study on determining household poverty status: SVM based classification model
CN115713441A (en) Teaching quality evaluation method and system based on AHP-Fuzzy algorithm and neural network
Moradi et al. A novel approach to support majority voting in spatial group MCDM using density induced OWA operator for seismic vulnerability assessment
Gáll Determining the significance level of tourist regions in the Slovak Republic by cluster analysis
CN112650949A (en) Regional POI (Point of interest) demand identification method based on multi-source feature fusion collaborative filtering
CN113689078A (en) Survey data verification method and device
Pamungkas et al. Classification of Student Grade Data Using the K-Means Clustering Method
Afijal et al. Decision Support System Determination for Poor Houses Beneficiary Using Profile Matching Method
Hermansyah et al. Classification of Student Readiness for Educational Unit Exams: Decision Tree Approach C4. 5 Based on Try Out Scores at MTs Nahdlatul Arifin
Mushi et al. Prediction of mathematics performance using educational data mining techniques
Bineid Predicting Student Withdrawal from UAE CHEDS Repository using Data Mining Methodology
Gupta et al. Power calculations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination