Analyse data

Download a detailed technical guide on how to conduct FinNeeds analysis here.

Once you’ve gathered the data, how do you analyse it to render relevant FinNeeds insights?

Demand-side data analysis

Clean the data

The first step when engaging with a new dataset is to clean the data. Data cleaning primarily focuses on checking that the data matches the questionnaire and removing outliers. Another important consideration is checking weighted and unweighted tabulations of the data to ensure that it is in line with what economic theory would predict.
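
As a minimal sketch of this step, the pandas snippet below drops out-of-range ages, flags extreme income values and compares weighted and unweighted tabulations. The file name and column names (age, monthly_income, gender, weight) are hypothetical placeholders, not part of the FinNeeds toolkit.

```python
import pandas as pd

# Hypothetical demand-side survey extract; file and column names are illustrative only.
df = pd.read_csv("finneeds_survey.csv")

# Basic range checks against the questionnaire (e.g. adults aged 16-100 were eligible).
df = df[df["age"].between(16, 100)]

# Flag extreme income outliers (here: above the 99th percentile) for review rather than silent deletion.
income_cap = df["monthly_income"].quantile(0.99)
df["income_outlier"] = df["monthly_income"] > income_cap

# Compare unweighted and weighted tabulations of a key variable to check that the sample
# behaves as expected once survey weights are applied.
unweighted = df["gender"].value_counts(normalize=True)
weighted = df.groupby("gender")["weight"].sum() / df["weight"].sum()
print(pd.DataFrame({"unweighted": unweighted, "weighted": weighted}))
```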

Once the data is cleaned, the analytical framework for demand-side data analysis follows the different elements of the FinNeeds framework, namely needs and use cases, devices, usage, drivers and outcomes:

Needs analysis based on use-case incidence

The first step is to take a granular look at the use cases within each need category. This is done by calculating the proportion of the population who experienced each of the use cases making up the four financial needs and then highlighting the key ones. The result is a landscape of financial use cases, grouped according to the four need categories, that forms the basis for the rest of the analysis.
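
The calculation itself is straightforward. The sketch below assumes a hypothetical long-format extract with one row per respondent per use case experienced; the file and column names are illustrative only.

```python
import pandas as pd

# Hypothetical long-format file: one row per respondent per use case experienced,
# with the need category it belongs to attached.
use_cases = pd.read_csv("use_cases_long.csv")
n_respondents = use_cases["respondent_id"].nunique()

# Proportion of the (unweighted) sample that experienced each use case, grouped by need.
incidence = (
    use_cases.groupby(["need_category", "use_case"])["respondent_id"]
    .nunique()
    .div(n_respondents)
    .sort_values(ascending=False)
)
print(incidence.head(10))  # the most prominent use cases across the need categories
```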

Populate portfolio of financial devices

The next step in the analytical framework is to consider how use cases are met. In answering the questionnaire, the respondent specifies the financial action taken when addressing each use case. During the process of analysis, the use case should be recoded into the underlying need and the financial device response into the product market (credit, savings, payments or insurance) or provider type (formal, informal, social or personal). For example, if one borrows from a family member or friend to buy food, it implies that credit from family and friends is used to address a liquidity need. A comprehensive view of the contribution of the different product markets or provider types in addressing a specific need provides an understanding of how well the financial system is serving the population. At this stage, the use of specific devices in meeting financial needs can already provide early insights on the functioning of the financial system or financial outcomes. For example, if most people use informal credit or savings rather than insurance to cope with large medical expenses, it may point to a gap in the insurance market to serve front-of-mind resilience needs.

It may also be relevant from a policy perspective to consider the proportion of people who responded that they “did nothing” in response to a use case, or who employed welfare-reducing strategies such as selling an asset or reducing expenditure. For instance, the proportion of people who sold something to address a resilience need may point to gaps in risk-coping mechanisms. Identifying these gaps is an important step in coming up with appropriate solutions to improve the role of the financial sector in building resilience.
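
A hedged illustration of this recoding and cross-tabulation step is sketched below. The use-case and device codes, and the mappings themselves, are invented for the example; in practice they come from the questionnaire codebook.

```python
import pandas as pd

# Hypothetical response-level data: one row per use case reported, with the financial
# device (or coping strategy) the respondent said they used to address it.
responses = pd.read_csv("use_case_responses.csv")

# Illustrative recode tables; the real mappings follow the questionnaire codebook.
need_map = {"buy_food_shortfall": "liquidity", "large_medical_expense": "resilience"}
device_map = {
    "borrow_family_friend": "informal credit",
    "bank_savings": "formal savings",
    "insurance_claim": "formal insurance",
    "sold_asset": "welfare-reducing strategy",
    "did_nothing": "no device used",
}

responses["need"] = responses["use_case"].map(need_map)
responses["device_group"] = responses["device"].map(device_map)

# Share of each device group within each need: how is the financial system meeting each need,
# and how many people did nothing or sold an asset instead?
portfolio = pd.crosstab(responses["need"], responses["device_group"], normalize="index")
print(portfolio.round(2))
```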

Segmentation analysis

Often, conducting the analysis at a consumer segment level will render more meaningful insights than for the population as a whole. For instance, if 20% of the sample population use informal services to address a prominent resilience use case, further analysis may reveal significant gender effects. If women largely constitute the 20% using informal services, targeted interventions are required to close the gender gap in the use of formal financial services.

Applying the FinNeeds framework requires an intuitive approach to identifying sensible segmenting variables that can give a granular view of financial inclusion. Other variables useful in segmentation are locality (rural vs. urban), age, and income or socio-economic status. Education or relationship status may also be relevant. While it may not be necessary to report results for all segments in all instances, it is useful to run the analysis at a segment level to pick up on skews and represent them where relevant.
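
The sketch below illustrates one such segmentation cut, assuming the hypothetical recoded response data from the previous example has been merged with respondent demographics; the column names remain illustrative.

```python
import pandas as pd

# Hypothetical response-level data merged with respondent demographics.
responses = pd.read_csv("use_case_responses_with_demographics.csv")

# Focus on resilience use cases and check whether informal usage is skewed by gender.
resilience = responses[responses["need"] == "resilience"]
by_gender = pd.crosstab(resilience["gender"],
                        resilience["device_group"] == "informal credit",
                        normalize="index")
print(by_gender)

# The same cut can be repeated for other segmenting variables (locality, age band, income group).
by_locality = pd.crosstab(resilience["locality"], resilience["device_group"], normalize="index")
print(by_locality.round(2))
```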

Drivers of use

A dedicated FinNeeds questionnaire contains a module requiring respondents to list their reasons for choosing particular devices. The first step in analysing drivers is to tabulate the responses. This will show, for example, the percentage who indicated that they prefer a device for functional reasons such as convenience or cost-effectiveness, or for relational reasons such as trust or a sense of belonging. Analysing the output should reveal variation in preferences across types of devices or population segments. Not all outputs will be included in the eventual analysis: it is important to gauge which comparisons are most relevant and interesting given the overall research objective.

Beyond the functional and relational drivers, the demographic information in the survey can be used to get a sense of contextual drivers. This can be done by cross-tabulating personal characteristics with the devices used to see whether interesting trends emerge. It is useful to define hypotheses upfront, which can then be tested through exploratory data analysis. For example, a hypothesis might be that formal users are more likely to be from a higher socio-economic class, more educated and more urban.
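
As an illustration, the snippet below tabulates stated reasons by device type and then explores one contextual-driver hypothesis. The file and column names (device_type, reason, uses_formal and so on) are assumptions made for the example.

```python
import pandas as pd

# Hypothetical respondent-level data with a stated reason for choosing each device
# plus basic demographics.
drivers = pd.read_csv("drivers_module.csv")

# Tabulate stated (functional vs. relational) reasons by device type.
reasons_by_device = pd.crosstab(drivers["device_type"], drivers["reason"], normalize="index")
print(reasons_by_device.round(2))

# Exploratory check of a contextual-driver hypothesis: formal users are more likely to be
# urban, more educated and from a higher socio-economic class.
print(pd.crosstab(drivers["uses_formal"], drivers["locality"], normalize="index"))
print(drivers.groupby("uses_formal")[["years_education", "socio_economic_score"]].mean())
```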

Drivers of use can also be tested statistically through regression analysis. See the discussion under transactional data analysis below for more details.

Outcomes of use

Gauging outcomes of use is the most complex part of the analytical framework. The FinNeeds theory holds that one should be able to analytically classify survey respondents into different outcome segments (those who are “resilient” or “not resilient”; those who maintain liquidity vs. those who have infrequent experiences of illiquidity vs. those who are chronically illiquid, etc.) based on the questionnaire responses. For example, where the questionnaire asks a person how recently they were not able to balance income and expenses and how often it happens that they cannot do so, a combination of these two responses can be used to classify respondents into three categories: those who never experience any problems as “liquid”, those who experienced liquidity distress in only one month in the past 12 months as “some liquidity distress” and the rest as “severe liquidity distress”. The exact parameters to be used will depend on the nature of the questionnaire and on the country context. It will require the analyst to run some scenarios and make a judgment on the most meaningful distinctions in context.
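
The sketch below shows one possible way to implement such a classification, assuming a hypothetical variable capturing the number of months of liquidity distress in the past 12 months. The variable name and thresholds are illustrative and would need to be adapted to the actual questionnaire and country context.

```python
import numpy as np
import pandas as pd

# Hypothetical survey extract with a liquidity-distress variable; names and thresholds
# are illustrative and should be set by the analyst in context.
df = pd.read_csv("finneeds_survey.csv")

conditions = [
    df["months_illiquid_last12"] == 0,   # never failed to balance income and expenses
    df["months_illiquid_last12"] == 1,   # a single month of distress in the past year
]
choices = ["liquid", "some liquidity distress"]

# Everyone else (two or more months of distress) falls into the severe category.
df["liquidity_segment"] = np.select(conditions, choices, default="severe liquidity distress")
print(df["liquidity_segment"].value_counts(normalize=True).round(2))
```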

The next step is to compare the profiles of the different outcome segments: do those who regularly maintain liquidity have a different device portfolio than those who experience liquidity distress? Are lower-income people more likely to experience liquidity distress than the middle class? Once again, it is useful to set hypotheses upfront that can then be tested: for example, that people with insurance are more likely to be resilient than those without. If the findings reveal that this is not the case, it raises compelling questions from a policy perspective.

One could also test the determinants of outcomes using various statistical procedures (subject to a large enough sample size). The appropriate method will follow from the nature of the outcome variable and the assumptions made in the model. For example, a binary outcome variable classifying respondents into “resilient” and “not resilient” could be modelled using a probit or logit regression, or a similar statistical technique, as appropriate.
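
A minimal sketch of such a regression, using a standard probit from statsmodels, is shown below; the outcome and explanatory variable names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical respondent-level data with a binary resilience outcome
# (1 = classified as resilient) and candidate explanatory variables.
df = pd.read_csv("finneeds_survey_outcomes.csv")

y = df["resilient"]
X = sm.add_constant(df[["has_insurance", "formal_savings", "monthly_income",
                        "urban", "female", "age"]])

# A simple probit of the resilience outcome on device holdings and demographics.
model = sm.Probit(y, X).fit()
print(model.summary())
```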

Outcomes analysis is exploratory, and it may require consultation between the analyst and policymakers and financial service providers to explore the most meaningful outcome parameters to set, which hypotheses to test and angles or segments to explore.

Transactional or financial service provider data analysis

Transactional data can come from a range of financial products and from different types of financial service providers. Thus, the variables of interest differ depending on the nature of the dataset. In general, however, transactional data analysis focuses on two elements of the FinNeeds framework: usage and drivers:

  • For building a picture of usage, the RFMD (recency, frequency, monetary value, duration) analysis provides a generic framework that is useful in determining the variables of interest. The usage metric usually depends on the type of financial product under consideration. For a credit product, for instance, the financial action is typically borrowing a certain amount, and subsequent usage takes the form of repayments at given intervals. A transactional account, on the other hand, can be used for both saving and purchases; usage is then gauged by the balances accrued in the account and the frequency and recency of purchases or transfers made from it. To account for multiple financial actions in the use of certain products, it is useful to construct a composite “usage intensity” score that integrates the different dimensions of usage. This can then be used to segment or cluster users into different usage profile groups.
  • The drivers analysis models the relationship between different demographic variables and usage using a number of standard statistical techniques.

Download the full technical guide to FinNeeds data analysis here or read a brief description.

Mapping the structure of the data

No two transactional datasets are alike. Thus, when working with transactional data, the analyst’s first step is to map the structure and contents of the dataset to gauge the data gaps and understand the scope of analysis that will be possible. The analytical approach should then be adjusted accordingly.

Cleaning the data

Cleaning the data is particularly important when working with an existing transactional dataset. It usually involves removing or, where relevant, filling in blanks in the data. It is also necessary to check for database accuracy and remove outliers accordingly. For example, are there people older than 120 years in the database?
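
A minimal cleaning sketch for a hypothetical transactional extract is shown below; the file and column names are placeholders.

```python
import pandas as pd

# Hypothetical transactional extract with one row per transaction.
tx = pd.read_csv("transactions.csv", parse_dates=["transaction_date"])

# Remove implausible records, e.g. customers recorded as older than 120 years.
tx = tx[tx["customer_age"].between(0, 120)]

# Fill blanks where a sensible default exists (e.g. a missing channel code), otherwise drop.
tx["channel"] = tx["channel"].fillna("unknown")
tx = tx.dropna(subset=["customer_id", "amount", "transaction_date"])

# Flag extreme transaction amounts for review rather than deleting them outright.
amount_cap = tx["amount"].quantile(0.999)
tx["amount_outlier"] = tx["amount"] > amount_cap
```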

Constructing usage indicators

To analyse usage intensity, it is useful to construct a composite usage indicator. For example, in one of our pilot studies, we constructed a composite usage score consisting of four parameters:

  • Recency, measured in terms of days since the last transaction
  • Frequency, measured as average number of transactions over a specific period
  • Monetary value, measured in terms of the average amount transacted
  • Duration, measured as the “age” of the account
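
The sketch below shows one possible construction of such a score from a hypothetical transaction file, scoring each RFMD dimension into quintiles and summing them with equal weights. The column names, the quintile scoring and the unweighted sum are assumptions for illustration; the exact scoring rules used in the pilot are documented in the full technical guide.

```python
import pandas as pd

# Hypothetical cleaned transaction file, one row per transaction.
tx = pd.read_csv("transactions_clean.csv", parse_dates=["transaction_date", "account_open_date"])
as_of = tx["transaction_date"].max()

rfmd = tx.groupby("customer_id").agg(
    recency_days=("transaction_date", lambda d: (as_of - d.max()).days),
    frequency=("transaction_date", "count"),
    monetary=("amount", "mean"),
    duration_days=("account_open_date", lambda d: (as_of - d.min()).days),
)

# Score each dimension from 1 (low) to 5 (high); recency is reversed so that recent activity scores high.
scores = pd.DataFrame(index=rfmd.index)
scores["R"] = pd.qcut(-rfmd["recency_days"], 5, labels=False, duplicates="drop") + 1
for col, label in [("frequency", "F"), ("monetary", "M"), ("duration_days", "D")]:
    scores[label] = pd.qcut(rfmd[col], 5, labels=False, duplicates="drop") + 1

# Composite usage intensity score: here a simple unweighted sum of the four dimensions.
scores["usage_intensity"] = scores[["R", "F", "M", "D"]].sum(axis=1)
```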

Modelling the determinants of usage

The composite usage score can be used to model the determinants of usage. In the example above, usage was modelled using ordered logit regression to determine which demographic characteristics are the most statistically significant determinants of usage.

The statistical model can be built using only transactional data or, where available, integrating demand-side variables from the merged dataset into the model and testing their predictive power.
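
A sketch of such a model, using the ordered-outcome model available in statsmodels, is shown below. The usage bands, the predictors and the file name are assumptions for the example.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical customer-level file combining the usage intensity score (banded into
# low/medium/high) with demographics and, where available, merged demand-side variables.
df = pd.read_csv("usage_with_demographics.csv")

# The outcome must be an ordered categorical variable.
df["usage_band"] = pd.Categorical(df["usage_band"],
                                  categories=["low", "medium", "high"], ordered=True)

exog = df[["female", "age", "urban", "income", "receives_income_into_account"]]

# Ordered logit of usage band on demographic (and merged demand-side) predictors.
model = OrderedModel(df["usage_band"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```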

As with the demand-side data analysis, it is useful at the outset to set some hypotheses to test. For example: are high users more likely to be male or female, urban or rural? Does socio-economic class matter more than gender in determining usage? To what extent does the device portfolio matter in explaining usage? Further examples include: does receiving income into an account remain the strongest predictor of high usage or engagement? Do people with more than a certain number of devices outside of their formal account have lower usage profiles than those whose portfolio is largely formal? The exact hypotheses to be set will depend on the context and the particular financial inclusion policy questions and gaps to be considered.

Clustering

Another important way to analyse usage is to classify users with similar usage patterns into usage clusters and then to compare the usage and demographic profiles of different clusters. Understanding the different characteristics of different usage clusters can help financial service providers and policymakers improve the use of formal financial products.

Clustering is achieved by using algorithms that identify clusters of users with similar usage profiles. For example:

In one of our pilot studies, we used k-means clustering to generate distinct clusters of users. This was done by running various iterations in which users were compared according to six variables: average number of transactions, average amount transacted, gender, age, income and education. The exercise stops once the algorithm settles on groups of users that are sufficiently similar within each cluster, and sufficiently different across clusters, to warrant separate classification. In this instance, the exercise rendered six distinct clusters of users.
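
A minimal sketch of this kind of k-means exercise, using scikit-learn, is shown below. The file name and the numeric encoding of the categorical variables are assumptions, and in practice the number of clusters would be chosen by comparing several candidate values rather than fixed upfront.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical customer-level file with the six clustering variables; categorical
# variables (gender, education) are assumed to be numerically encoded already.
df = pd.read_csv("usage_clustering_input.csv")
features = ["avg_num_transactions", "avg_amount", "gender", "age", "income", "education"]

# Standardise so that no single variable dominates the distance calculation.
X = StandardScaler().fit_transform(df[features])

# Fit k-means with six clusters, matching the number of segments found in the pilot.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(X)

# Profile each cluster to see how usage and demographics differ across segments.
print(df.groupby("cluster")[features].mean().round(2))
```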

This is a form of unsupervised clustering. An alternative would be to conduct supervised clustering whereby quintiles of users are determined according to their usage intensity scores as discussed above.
