Agenda

Local Community Perceptions Regarding Services and Decision Making Processes in NW Syria

Published on: 2022-06-03

Large territories in northern Syria have come under the control of various opposition forces and other non-state actors after being liberated from the control of the Syrian regime. These entities govern the liberated areas and manage the affairs of the Syrians residing there by providing public services, maintaining security, and resolving disputes, yet no single administrative and military entity holds a monopoly of control across the different parts of the region.
The present study was conducted to capture the reality of the liberated Syrian north by identifying the entities responsible for governing each of these areas, the status of public service provision, and the level of citizen satisfaction, as well as understanding the state of affairs in terms of security, criminality, access to justice, and the rule of law. Finally, the study aims to shed light on the decision-making mechanisms for important issues within the region, the extent to which citizens can participate in these mechanisms, and external influence on the governance and decision-making processes of the region.
The research was designed and executed during the first half of 2021 in the Idlib, Olive Branch, and Euphrates Shield areas. It is based on field research involving a survey of Syrians from both host communities and internally displaced persons (IDPs) residing in the research area, complemented with key informant interviews (KIIs) with representatives of local government bodies and non-governmental organizations (NGOs).
The results of the study demonstrate that residents of northern Syria have limited knowledge of who is responsible for governance and the provision of public services, mainly due to confusion between service providers and the bodies responsible for managing each sector. Another important finding is that, in general, residents report a low level of satisfaction with the services provided, a pattern observed across all three regions.
With regard to the security situation, Idlib was the safest area according to key informants and survey participants, as Hay’at Tahrir Al Sham has been able to firmly control security in the area and largely contain the threats of bombings and kidnappings; theft remains the main security concern there. In the Euphrates Shield and Olive Branch areas, the level of safety was found to be very low, as both areas suffer from explosions targeting markets and residential areas as well as many cases of theft, kidnapping, killing, and factional fighting. The people of the Olive Branch area suffer in particular from the seizure of their rights and property by military factions, and from arbitrary arrest and kidnapping for which they must pay to be released.

Issues of Asking Direct Questions

Published on: 2022-05-24

Researchers and practitioners in all research fields (monitoring and evaluation, market research, opinion polls, etc.) usually start by identifying a set of research topics (variously called research topics, key questions, or hypotheses), then derive the questions to be asked in the research tools from these topics. The problem I have noticed with many researchers, especially those developing #questionnaires / #research_tools, is that the questions are phrased using almost the same words as the research topics. For example, given a research topic on “the needs that would help increase the level of inclusion of people with disabilities in education”, the researcher asks people with disabilities directly: “What are the needs that would help increase the level of your inclusion in education?”.

This method of phrasing causes several problems that lead to incorrect results or a failure to answer the research questions, because:

1. The research topic may include terms the participants are not familiar with, as research topics often use academic terms; equivalent everyday words must be used instead.
2. Most main research topics are complex and cannot be answered with a single question; they should be partitioned into sub-topics, and those sub-topics phrased as questions (with the wording adjusted appropriately as well). Presenting the research topic directly and literally therefore confuses respondents, who are faced with a broad, general question that is difficult to answer in that form.
3. In most cases, participants do not have the level of knowledge needed to answer the question in this form. When studying the needs of people with disabilities required to increase their inclusion in education, it is therefore better to ask about the problems and difficulties they face that hinder their access to an appropriate education, and to ask about these problems and difficulties in detail.

In summary, developing questionnaires appears easy to those working in this field, especially non-specialists, and anyone can draft one. Experience, however, especially at the moment the data arrive after all the effort invested in structuring the sample and the research methodology, often shows that the data are useless, and the cause is poor questionnaire design.

Questionnaires are perhaps the clearest example of the phrase “deceptively simple”: anyone can develop a questionnaire, but the challenge appears with the data obtained. I recommend that everyone working in research improve their skills in #questionnaire_writing and focus on applied references, as most books tackle only the theoretical aspects.

By:
Ghaith Albahr: CEO of INDICATORS

Rubbish data

Published on: 2022-05-23

Through my experience working with many organizations, research centers, and academic researchers, I have noticed an issue in the collected data that can only be described as rubbish data, or useless data.

The idea of useless data can be summarized as data, or questions asked in questionnaires, that serve none of the objectives of the research. For example, in many monitoring or evaluation activities, beneficiary interviews ask about the family structure in detail, such as the number of family members disaggregated by gender and age group. Some may think these data are important, but experience says the opposite: they matter in the needs assessment and beneficiary selection phase, where they were already collected, and in none of the cases I have witnessed were they used when writing the monitoring or evaluation report. At best, the family member counts were collapsed into a single total. So why ask for all these details and exhaust the beneficiaries with these questions?

Some researchers believe that collecting data that is not useful causes no harm, but this is wrong. A large number of questions, and questions unrelated to the research objectives, cause several problems: increased costs; greater hesitation and fear among participants due to the volume of details requested and the lack of an obvious rationale for them; reduced willingness to give serious answers as the interview grows longer and participants tire; a higher chance of data-collection errors; more complex data analysis; and a researcher distracted from processing the data and writing the report, discussing topics unrelated to the research objectives and distracting the decision-makers.

The cases that could be called rubbish data are countless. Take asking for the participant’s name in a political poll, where the name does not matter at all; the participant is there as a representative of a sample of the surveyed community groups (except in rare cases related to verification and follow-up of the data collection teams). Asking for the name will inevitably push answers further from the participant’s true opinions, out of fear that those answers could be linked to the name and cause harm. I always advise that every question we ask be linked to the objectives of our research, and that we never reason, “We wouldn’t lose anything if we ask this question.”

By:
Ghaith Albahr: CEO of INDICATORS

Ordinal Questions: Challenges and Issues

Published on: 2022-05-23

Ordinal questions, in which the participant is asked to rank several options in order of priority, involve many issues.
Drawing on my observation of many cases involving these questions, I will focus on the negative points:
1. Ranking options from most to least important is cumbersome and time-consuming, so most participants do not answer such questions seriously, and the order obtained is therefore inaccurate.
2. In questions asking for the three most important answers in order, the ranking tends to follow the order in which the answers appear in the questionnaire, meaning participants tend to choose the options read to them first as the most important.
3. A major problem in analyzing ordinal questions stems from the weakness of most statistical programs and the lack of ready-made analysis methods for them, forcing the data analyst to do manual calculations, which introduces errors into the analysis.
4. Problems in the outputs of the analysis:
-Treating the ranks as weights gives a figure that may exceed the real value: the number obtained does not express a real value, but rather the weight and importance of the option relative to the other options, not the percentage of respondents who chose it.
-Many data analysts find these questions difficult, so they resort to inappropriate methods such as reporting only the first priority, reporting each priority separately, or analyzing the question as an ordinary multi-select question.
-Errors in calculating the weights: the weighting system in statistics is not arbitrary; depending on the case, weights may take the form of ranks 1, 2, 3, or of probabilities or percentages of the original answers, etc.
-Errors in assigning weights to the answers: the first priority should take the value 3 and the third priority the value 1. Although the logical rank order is the reverse, the final score must give the higher number to the first priority, and this is the mistake some data analysts usually make.
5. Issues for report writers, some of whom are confused about how to present and discuss these results correctly in the report.
6. Problems when disaggregating ordinal questions by other questions: the question occupies several columns in the database, the weights must be taken into account, and the disaggregation may involve one or more other questions, which leads many data analysts to make mistakes when analyzing these questions. (A minimal weighted-scoring sketch follows this list.)
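
To make the weighting idea above concrete, here is a minimal sketch in Python/pandas. It is not the author’s own procedure and the column names (priority_1, priority_2, priority_3) and options are hypothetical; it simply shows a 3/2/1 scoring of a “top-3 ranked” question and why the result is a relative importance, not a percentage of respondents.

```python
# Minimal sketch (hypothetical column names): scoring a "top-3 ranked" question
# with weights 3/2/1 for 1st/2nd/3rd priority.
import pandas as pd

df = pd.DataFrame({
    "priority_1": ["water", "food", "water", "shelter"],
    "priority_2": ["food", "water", "shelter", "food"],
    "priority_3": ["shelter", "shelter", "food", "water"],
})

weights = {"priority_1": 3, "priority_2": 2, "priority_3": 1}  # first priority gets the highest weight

scores = pd.Series(dtype=float)
for col, w in weights.items():
    scores = scores.add(df[col].value_counts() * w, fill_value=0)

# Normalized scores are relative importances of the options,
# NOT "percent of respondents who chose this option".
print((scores / scores.sum()).sort_values(ascending=False))
```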

By:
Ghaith Albahr: CEO of INDICATORS

Issues of Dealing with Missing Values

Published on: 2022-05-20

Many data analysis programs cannot distinguish between several kinds of values, namely:
· Missing Values
· Blanks
· Zero

This weakness of data analysis programs is mirrored by many data analysts, who also fail to distinguish between these values; as a result, the values are neither distinguished nor handled, and the data are not analyzed with these differences in mind.

Some may think these differences are not very important, ignore them, and leave their handling to the analysis program, but in most cases this produces catastrophic results that many people do not notice.

I will attempt to illustrate these differences through some examples:

1. Suppose we want to analyze average household income in a country suffering from a crisis, and a high percentage of respondents, more than 40% of the surveyed families, report that they have no income of any kind. Data analysts treated these cases as missing values, which produced results utterly different from the real situation of society: the socio-economic indicators would show, for example, that only 10% of households are below the extreme poverty line, when in truth the share exceeds 50%. A household with no income must be recorded as having an income of zero, not a missing value, because a missing value is excluded from the calculations while a zero is included, and this affects the percentages and the overall average income. The opposite holds when asking about monthly salary: the salary of a person who has no job should be treated as a missing value rather than zero, since they are unemployed and no salary should enter the calculation. (A minimal sketch of this distinction follows this list.)
2. Many programs do not treat blanks in text (string) questions as missing values. SPSS, for example, treats an empty cell in a string variable as a valid value rather than a missing one: in a Gender column stored as text, the program will count the empty cells, which significantly distorts results such as percentages, even though respondents who did not indicate their gender (male or female) should be treated as missing.
3. In SPSS, when computing a new column from other columns, some functions handle missing values well and others do not. For example, when calculating the total number of family members from the counts in each age group using the SUM function, SPSS returns a total even if one of the categories is missing, whereas writing the sum manually (adding the columns together) returns a missing value whenever any of the categories is missing.
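
To illustrate the zero-versus-missing distinction from the first example, here is a minimal Python/pandas sketch. It is not from the original article, and the data and column names are hypothetical; it shows how the average changes when “no income” is miscoded as missing, and how blanks in a text column differ from missing values.

```python
# Minimal sketch, assuming hypothetical data: coding "no income" as NaN vs. 0,
# and blanks vs. missing values in a text column.
import numpy as np
import pandas as pd

income = pd.Series([0, 0, 0, 0, 300, 500])        # "no income" correctly coded as 0
income_as_missing = income.replace(0, np.nan)      # same households, miscoded as missing

print(income.mean())             # ≈ 133: reflects the real situation
print(income_as_missing.mean())  # 400: NaN rows are dropped, the average is inflated

# Text questions: an empty string is NOT a missing value unless converted.
gender = pd.Series(["male", "female", "", "male"])
print(gender.value_counts(normalize=True))                      # counts "" as a category
print(gender.replace("", np.nan).value_counts(normalize=True))  # "" treated as missing
```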

The cases where missing values are poorly defined are countless, and I never advise leaving the analysis program, or the analyst alone, free to guess how to treat them. The appropriate treatment and definition of an empty value must be decided explicitly: in the income case above, the missing value must be treated as zero; in the salary case, it must remain a missing value; and in the third example, empty cells in any family-member category must be treated as zero. Better still, data collectors should be instructed from the start that if a family has no members in a given category, they should enter zero rather than leave the field empty.

By:
Ghaith Albahr: CEO of INDICATORS

Outliers Processing

Published on: 2022-05-20

Some data analysts pay no attention to outliers; they may even be hearing the term for the first time while reading this article. Outliers have a significant impact on many statistical indicators, and the methods for handling and processing them depend on many factors, some simple and some more complex and tied to the type of statistical indicator: the data analyst must know which parameters are resistant (“smooth”) to outliers and which are not, as this determines how strongly each is affected by them.

For example, the mean is considered one of the best measures of central tendency, but it is highly sensitive to outliers compared to the median, even though the median is considered less precise than the mean.

In the following lines I will tackle an important, and the simplest, aspect of the topic: the methods of processing outliers.

Methods of processing outliers:
1. Revising the source: we go back to the source to check the value, and if it is an entry mistake, we correct it. For example, in a study about children an age entered as 22 instead of 2 is simply identified as an entry error and corrected.
2. Logical processing of outliers: outlier errors can be caught through logical checks; when studying the labor force, for example, the record of a 7-year-old is deleted because that person is not part of the labor force.
3. Distinguishing between what to keep and what to delete: this process is very exhausting, as there are no precise criteria for accepting or rejecting outliers. SPSS offers a useful feature here, classifying unusual cases into two types: outliers (cases lying between 1.5 and 3 times the interquartile range beyond the first or third quartile) and extreme values (cases lying more than 3 times the interquartile range beyond the quartiles); in other words, data far from the center and data extremely far from it. This classification can then be adopted by keeping the outliers and deleting the extreme values. (A minimal sketch of this IQR rule follows this list.)
4. Replacing deleted outliers: the last and most sensitive step is deciding what to do with the deleted outliers, whether to keep them deleted (as missing values) or to replace them. Both choices entail consequences and challenges. If replacement is chosen, an appropriate methodology must then be selected, as replacing missing values is itself a complicated process with various methodologies and options, each of which affects the results of the analysis in its own way (I will discuss replacing missing values in another post).
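
The following is a minimal Python sketch of the 1.5×IQR / 3×IQR rule described in point 3. It is not SPSS itself, and the sample ages are hypothetical; it only shows how the two fences separate “outliers” from “extreme values”.

```python
# Minimal sketch: flagging outliers vs. extreme values with the IQR (Tukey) fences.
import pandas as pd

ages = pd.Series([2, 3, 3, 4, 4, 5, 5, 6, 7, 12, 22])  # hypothetical child ages with entry errors

q1, q3 = ages.quantile(0.25), ages.quantile(0.75)
iqr = q3 - q1
inner_low, inner_high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outer_low, outer_high = q1 - 3.0 * iqr, q3 + 3.0 * iqr

extreme = (ages < outer_low) | (ages > outer_high)               # beyond 3×IQR
outlier = ((ages < inner_low) | (ages > inner_high)) & ~extreme  # between 1.5× and 3×IQR

print(ages[outlier])   # candidates to keep after review
print(ages[extreme])   # candidates to delete or send back for source revision
```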

It is not simple to summarize the methodologies for dealing with outliers in these few lines. Deleting outliers confronts us with further choices: do we leave them as missing values or replace them with alternative values? Moreover, when we delete outliers and reanalyze the data, new outliers appear that were not considered outliers in the database before it was modified (before the first round of deletion). I therefore recommend that data analysts study this topic further, to a depth that matches the volume and sensitivity of their data.

By:
Ghaith AlBahr (Mustafa Deniz): CEO of INDICATORS

Comparing SPSS vs Excel

Published on: 2022-04-25

Data Analysis: Excel vs. SPSS Statistics

An important question occurs to many people interested in data analysis, or who may need to use data analysis programs for work or research: “What is the difference between Excel and SPSS, and when is each recommended?”

In this article we provide a brief description of the advantages and disadvantages of each, categorized by the specialization or field of the required data analysis:

First: data analysis for academic research

We strongly recommend SPSS, as it offers a very wide range of statistical analyses with nearly endless options. In this field, Excel cannot in any way provide what SPSS does.

For example, SPSS provides:

Parametric and non-parametric tests with a wide range of options, including many tests needed by researchers who are not specialists in statistics.
Regression and correlation analysis of various types, linear and non-linear, with the associated tests and a wide range of related analysis options.
Time series analysis.
Questionnaire reliability tests.
Neural network analysis.
Factor analysis.
Survival analysis.
Statistical quality control analysis and charts.

Along with many other statistical analyses that serve academic fields.

Second: data analysis for non-academic research

It can be classified into several levels of data analysis:

Descriptive data analysis:

In general, both programs can provide all the analyses required in descriptive statistics, but Excel has some minor flaws: it arranges answer categories alphabetically rather than in their logical order, and it cannot perform calculations on text-coded questions that depend on the categories’ inherent order (ordinal data), such as computing a Likert scale score.

SPSS also provides tools with advanced options for analyzing multi-select questions, which Excel does not offer; in Excel we have to rely on functions to produce these analyses, with limited options and problems in the percentages obtained. (A minimal multi-select tabulation sketch follows.)
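
As an illustration of what a multi-select tabulation involves, here is a minimal Python/pandas sketch. It is not the SPSS multiple-response procedure and the column names are hypothetical; it simply shows the two percentages such tools usually report.

```python
# Minimal sketch (hypothetical 0/1 columns, one per option): tabulating a
# multi-select question as "% of cases" and "% of responses".
import pandas as pd

df = pd.DataFrame({
    "need_food":    [1, 0, 1, 1],
    "need_water":   [1, 1, 0, 1],
    "need_shelter": [0, 0, 1, 0],
})

counts = df.sum()
pct_of_cases = counts / len(df) * 100        # can sum to more than 100%
pct_of_responses = counts / counts.sum() * 100  # sums to 100%

print(pd.DataFrame({"n": counts,
                    "% of cases": pct_of_cases.round(1),
                    "% of responses": pct_of_responses.round(1)}))
```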

Disaggregation analysis:

Both programs are reliable in this respect, except for multiple and complex disaggregations/cross-tabulations involving multi-select questions; in these cases Excel becomes slower and less effective, while SPSS handles every option, however complex, at the same speed as descriptive analysis and simple disaggregation. In addition, SPSS offers filtering and data-splitting features that speed up analysis enormously: the required analyses for 20 regions can be produced separately as quickly as for a single region, whereas in Excel this means doing the work 20 times. (A minimal split-by-group sketch follows.)

SPSS delivers descriptive analysis and disaggregation much faster than one might expect: some analyses that take a week in Excel can be completed in just a few minutes in SPSS.
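
For readers more familiar with code than with SPSS menus, here is a minimal Python/pandas sketch of the “split by group” idea. The columns and figures are hypothetical; the point is that one call produces the same descriptive output for every region instead of repeating the work per region.

```python
# Minimal sketch (hypothetical data): descriptive statistics disaggregated by region.
import pandas as pd

df = pd.DataFrame({
    "region":       ["Idlib", "Idlib", "Afrin", "Afrin", "Jarablus"],
    "satisfaction": [2, 3, 4, 3, 5],
    "hh_size":      [6, 4, 5, 7, 3],
})

# One groupby call replaces running the analysis separately for each region.
print(df.groupby("region")[["satisfaction", "hh_size"]].agg(["mean", "median", "count"]))
```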

Third: Analyzing data of demographic indicators

When it comes to demographic indicators, each program faces a challenge. SPSS can perform numerous, complex, and very fast arithmetic operations that outperform Excel, yet it has some minor but important weaknesses; the most notable is multi-column conditional arithmetic: SPSS offers multi-column arithmetic operations, but they do not support multiple conditions, whereas Excel provides this through a wide variety of effective conditional functions. (A minimal conditional-computation sketch follows.)
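
To show what a multi-column, multi-condition computed variable looks like, here is a minimal Python/numpy sketch. The column names and thresholds are hypothetical; the logic is the same kind of nested conditional one would build with Excel’s conditional functions.

```python
# Minimal sketch (hypothetical columns): a computed flag that depends on
# several columns and several conditions at once.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "hh_size":        [3, 8, 7, 2],
    "working_adults": [2, 1, 0, 1],
    "income":         [400, 150, 0, 300],
})

# np.select evaluates several conditions across columns, like nested IFs in Excel.
conditions = [
    (df["hh_size"] >= 6) & (df["working_adults"] == 0),
    (df["hh_size"] >= 6) & (df["income"] < 200),
]
labels = ["no earner, large household", "low income, large household"]
df["dependency_flag"] = np.select(conditions, labels, default="not flagged")

print(df)
```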

Fourth: Data management and linking databases in the analysis

In this particular aspect Excel clearly stands out: with Power Query it offers data management, merging, aggregation, and cleaning, in addition to the ability to link separate databases without merging them and to analyze them together with all types of analyses.

SPSS, by contrast, cannot analyze separate databases without merging them. Merging solves a large part of the problem, but it brings many challenges and considerable room for error: when more than one database is merged, cases are usually repeated to match the other database, so when analyzing the duplicated database we must apply operations that cancel out this repetition in order to obtain correct results. (A minimal merge sketch follows.)
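
The repetition problem can be illustrated with a minimal Python/pandas sketch. The tables and figures are hypothetical; it shows how a one-to-many merge duplicates household rows and why household-level statistics must undo that duplication.

```python
# Minimal sketch (hypothetical data): a one-to-many merge repeats household rows
# once per member, so household-level averages must drop the duplication.
import pandas as pd

households = pd.DataFrame({"hh_id": [1, 2], "income": [300, 500]})
members = pd.DataFrame({"hh_id": [1, 1, 1, 2, 2], "age": [30, 8, 5, 40, 38]})

merged = households.merge(members, on="hh_id")   # income now repeats per member

print(merged["income"].mean())                           # 380: biased by the repetition
print(merged.drop_duplicates("hh_id")["income"].mean())  # 400: correct household mean
```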

Data management and the ability to analyze separate databases together are a great advantage of Excel, but in most cases they are not required, as they are only needed in complex and advanced projects.

On the other hand, the Data menu in SPSS provides many features that can only be described as excellent, and the space of this article is insufficient to cover them; briefly, they give SPSS data-management capabilities that outperform Excel in some respects, such as the Restructure (unpivot) feature, which is far more advanced and powerful than Excel’s equivalent.

Fifth: Weighting

One very important aspect of data analysis, especially in demographic statistics, humanitarian needs analysis, and advanced market research, is weighting: calculations take into account a weight that expresses, for example, the population of the governorate or studied area, so that it contributes to the results in proportion to its size.

Excel does not provide this feature; weights can be applied manually using functions, but this sometimes causes problems in the results, especially in disaggregation analyses.

In SPSS, once the Weight Cases option is selected, the weight is automatically applied to all calculations, even charts, and weighting can be turned off again with a single click. (A minimal weighted-average sketch follows.)
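
As a small illustration of what weighting changes, here is a minimal Python sketch. The areas and figures are hypothetical; it contrasts an unweighted average with a population-weighted one, the kind of adjustment the weighting feature applies automatically to every calculation.

```python
# Minimal sketch (hypothetical figures): unweighted vs. population-weighted average.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "area":       ["A", "B", "C"],
    "avg_need":   [0.80, 0.40, 0.20],       # share of households in need
    "population": [500_000, 50_000, 10_000],
})

print(df["avg_need"].mean())                                 # ≈ 0.47: every area counts equally
print(np.average(df["avg_need"], weights=df["population"]))  # ≈ 0.75: large areas count more
```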

This is a simple comparison between the two programs; we hope it offers a preliminary perspective and helps data analysis specialists, and institutions looking to build their teams’ capacities in this field, to choose the program that suits them best.

 

By:
Ghaith Albahr: CEO of INDICATORS

Capacity Assessment of Monitoring and Evaluation Departments in Syrian Organizations

Published on: 2022-03-15

The Monitoring and Evaluation department is considered one of the most important departments in NGOs. It plays an essential role in all phases of a project: in planning activities through assessing the needs of the beneficiaries targeted by the organization’s activities, during implementation and field follow-up, and after the project ends through the final evaluation, impact assessment, and the drawing of lessons learned that can feed into the design and development of the organization’s future projects, in addition to the department’s role in raising donors’ confidence in the organization.

Given the importance of MEAL departments in NGOs, we conducted this study, which aims to assess the state of M&E departments in Syrian organizations operating in Turkey: how well these departments are organized, whether policies and SOP guides exist, how effective the departments are within the organization, and their relationship with the organization’s other departments, in addition to identifying the expertise and competencies of their staff and their most important training needs. The study covered 20 Syrian organizations located in Gaziantep and Istanbul.

The study showed that in the majority of Syrian NGOs the work of MEAL departments is limited to monitoring and evaluation only, with no role in accountability or learning. Staff in these departments lack expertise, particularly in report writing, quality standards for questionnaires, sampling methodologies, and PSEA; the vast majority of organizations lack policies and SOP guides; and in about half of the organizations the relationship and coordination between the M&E department and the programs departments is weak.

The study was conducted during October and November 2021.

Customer experience testing

Published on: 2022-03-05

Customer experience is the impression you leave on your customer at every stage of their journey to purchase a product or service, which leads them to think of your brand and promote it among their acquaintances and friends.

The difference between customer experience and customer service:
Customer Service

Customer service consists of the interactions with the customer around obtaining the offered product or service: giving the customer the information they want and receiving complaints and inquiries.

Customer Experience

Customer experience can be explained simply as accompanying the customer from the beginning to the end of the journey: the purchase of the product, the impressions formed at each stage of their communication with the company, and the impressions of the product or service after purchase.

Customer service can be considered as part of the customer experience, as they are strongly related, but there is a difference between them.

To simplify the Customer Experience term, the following example can be used:
Suppose we have a movie, to produce this movie, we need:
1. Director – Executive Director – Sound Director – Cameraman – Producer… (Management Team).
2. The actors and all the individuals who appear in front of the camera (employees who are in direct contact with clients).
3. The script, the dialogue, and the area in which the movie is filmed … (the tools used to produce the product or service offered by the company).
4. Current viewers (existing clients and potential clients).

Suppose we have a highly professional script, dialogue, venue, director, lighting and sound engineers, and cameraman, but the actors’ performance is weak. Will the viewers grasp the point or moral of the movie? Of course not. This is the biggest mistake companies and institutions make with regard to customer experience: they focus on a management team with high expertise to produce the product or service, but neglect the employees in direct contact with the customer, who are the face of the business. This damages the company’s reputation and can lead to losing existing customers and failing to gain new ones.

The Importance Of Customer Experience

It is very important for the continuous growth of a business, as ensuring a positive customer experience contributes to:
• Building brand loyalty among customers.
• Activating your product or service and embedding it in the minds of customers.
• Creating marketing opportunities by customers themselves, by writing positive comments and impressions that are more important than paid advertisements and promotions, and more influential on other customers.

On the other hand, customers want to feel connected to their favorite brand and to feel that it knows them, respects them, and cares about them. Suppose there are two cafes close to each other serving the same brand and quality of coffee, but one is more expensive, and the more expensive one pays attention to its customers and their details, greeting them with “Your usual drink?”. Customers will go to the more expensive cafe, because it satisfies their need for coffee along with a feeling of care and good treatment.

Customer Experience Methodology
1. Developing customer journey map

A customer journey map is defined as a story supported by a map that includes all the interactions and communications that the customer has with the company in order to obtain a particular product or service.

A map is drawn of all the potential paths a customer may take during their journey to obtain a product or service, identifying all the channels and interactions available to the customer at each stage.

2. Evaluating the integration of operations in companies

Each stage the customer goes through to obtain a specific product or service is evaluated in terms of customer satisfaction and whether it is integrated with the others; this is done by studying the customer’s experience in each operation in detail.

3. CRM Evaluation

This is done through an assessment of the company’s interaction with current and prospective customers, analyzing the customer data held by the company to identify the best path for customer relations, with a focus on retaining existing customers.

4. Experimental implementation of customer experience

This is one of the most important stages in studying customer satisfaction or customer experience. After the previous tools have been developed, a customer journey is run as a pilot, accompanying the customer from the purchase of the product through to the post-purchase stage, to collect the customer’s feedback on the product and to learn all the stages and paths they went through during their journey with the company.

5. Analyzing Customer Satisfaction

This stage begins after the customer purchases the product or service and gives feedback on their journey with the company. The customer’s opinions and feedback are analyzed to identify the problems the customer may face and the positives and negatives they see in the product, and the necessary measures are then taken to address them.

6. Studying the customer’s perception about the company

This is done by giving the customer a short questionnaire asking about all the stages of their journey, the nature of the relationship at each stage, and their views on how to make the service or journey better, then analyzing the data from customers to obtain a comprehensive, general picture with which to improve the stages of the customer journey.
Example of a customer journey:
For example, what does Google say about its customer experience testing?
If users can’t spell, it’s our problem
If they don’t know how to form the query, it’s our problem
If they don’t know what words to use, it’s our problem
If they can’t speak the language, it’s our problem
If there’s not enough content on the web, it’s our problem
If the web is too slow, it’s our problem

The purpose of customer experience testing is to focus on customer needs rather than on the volume of sales.
The focus must shift from focusing solely on the product to focusing on the customer experience and improving the product based on the results, and from worrying about the broad market to caring about the individuals who connect you to that market.

By:
Ghaith Albahr: CEO of INDICATORS
Anas Attar Sabbagh: Research officer in INDICATORS

The Influence of Turkish Language Level on the Integration of Syrian Refugees

Published on: 2022-02-07

In view of the importance of working to achieve the integration of Syrian refugees into Turkish society, of reducing tension among Turks towards Syrians, and of determining the role of the Turkish language in achieving that integration, we conducted this study. It aims to reveal the level of Turkish language mastery among Syrian refugees in Turkey, identify the obstacles that hinder their learning of the language, gauge the degree of Syrians’ integration into Turkish society and the impact of language mastery on that integration and on Turks’ acceptance of them, and examine the situation of Syrian refugees in Germany with regard to learning German, in order to benefit from the German experience in developing the language abilities and skills of Syrian refugees in Turkey.

The study was conducted during the second half of 2020 and covered the provinces of Istanbul, Gaziantep, Hatay, and Urfa, where the largest numbers of Syrians reside. Key informant interviews were conducted with individuals concerned with refugee integration in Turkey and Germany, and questionnaires were administered to 340 Syrians residing in the covered provinces. The study adopted a stratified random sampling method to ensure the inclusion of Syrians across several variables such as gender, age, and educational level.

PSEA

Published on: 2022-01-17

Given the severity of the sexual and financial exploitation to which IDPs in Syria are subjected, we conducted a study on this issue, shedding light on:

The percentage of people who were subjected to exploitation and abuse within some humanitarian or relief sectors.
The reasons that made the victims of such abuses refrain from filing complaints.
The type and form of the abuse or exploitation they were subjected to.
The extent of their knowledge of how to get help and support if they are subjected to such abuses.

Work follow-up platform

Published on: 2021-12-09


Local Community Perceptions Regarding Services and Decision Making Processes in NW Syria

Published on: 2022-06-03نُشِرَ بتاريخ: 2022-06-03

Large territories in northern Syria have been controlled by various opposition and other forces and non-state actors, after these territories were liberated from the control of the Syrian regime. While these entities govern the liberated areas, and are in charge of the affairs of the Syrians residing there by providing public services, maintain security, and resolve disputes, it is important to note that there is no single administrative and military entity which has a monopoly of control across the different parts of the region.
The present study was conducted to find out the reality of the liberated Syrian north, through identifying the entities responsible for governing each of these pieces of land, the status of public service provision and the level of citizen satisfaction, as well as understanding the state of affairs in terms of security, criminality, access to justice, and rule of law. Finally, the study aims to shed light to the decision-making mechanisms for important issues within the region, the extent to which citizens can participate in these mechanisms, and external influence on the governance and decision-making processes of the region.
The research has been designed and executed during the first half of 2021 in Idlib, Olive Branch and Euphrates Shield areas. It is based on field research involving survey of Syrians both from host communities and internally displaced persons (IDP’s) residing in the research area, and complemented with interviews with key informants (KIIs) from local government bodies or non-government organizations (NGOs).
The results of the study demonstrate a low level of knowledge of the residents of northern Syria of those responsible for governance and the provision of public services, mainly due to confusion between service providers and those responsible for managing the sector concerned. Another important result is in general, the residents of the region have low level of satisfaction for the services provided, which is being observed across all three regions.
With regard to the security situation, Idlib was the safest area according to the opinions of key informants and participants in the survey, where Hay’at Tahrir Al Sham was able to firmly control the security situation in the area and deal to a large extent with the security threats of bombings and kidnappings. Theft remains the main security concern in Idlib. In the areas of Euphrates Shield and Olive Branch, the level of safety was found to be very low, where both areas suffer from explosions targeting markets and residential areas as well as many cases of theft, kidnappings, killings and factional fighting. The people of Olive Branch area suffer especially from the seizing of their rights and property by military factions, and are being subject to arbitrarily arrest and kidnapping and have to pay funds get released.

Issues of Asking Direct Questions

Published on: 2022-05-24نُشِرَ بتاريخ: 2022-05-24

Researchers and workers of all research fields (monitoring and evaluation, market research, opinion polls… etc.) usually work on identifying a set of research topics (usually called either research topics, key questions, or hypotheses…), then derive the questions that will be asked in the research tools from these topics. The problem I noticed that many researchers have, especially those working on developing #questionnaires / #research_tools, is that the phrasing of the questions uses almost the same words as the research topics. i.e., If we had a question about “the needs that would help increase the level of inclusion of people with disabilities in education”, the researcher asked people with disabilities “What are the needs that would help increase the level of your inclusion in education?”.

This method of phrasing results in many problems that would lead to not obtaining correct results or to a failure in answering the questions of the research, and this happens because:

1. The research topic may include terms that the participants are not familiar with, as academic terms are often used in research topics, therefore, other equivalent words that are used in real life must be used.
2. Most of the main research topics are complicated which cannot be answered by answering a single question, rather, they should be partitioned into sub-topics. Those sub-topics shall be phrased into questions (taking into consideration the appropriate amendment of the phrasing also), therefore, presenting the research topic directly and literally will cause confusion for the respondents, as they will be facing a broad and general question that is difficult for them to answer in this way.
3. In most cases, the participants do not have a level of knowledge that would help them answer the question in this form, this means that when studying the needs of people with disabilities that are required to increase their inclusion in education, it is better to ask the questions that related to the problems and difficulties they face that hinder their access to an appropriate education, with the necessity of emphasizing that asking about these problems and difficulties must be in a detailed way.

In summary, it can be said that the process of developing questionnaires appears to be easy for workers in this field, especially non-specialists, and anyone can work on the development of the questionnaires, but the experience, especially at the time of receiving data after all the efforts exerted for structuring the sample, and research methodology, shows that the data are useless, and this is due to the wrong design of the questionnaires.

Questionnaires can be expressed as the clearest example of the phrase “deceptively simple”, as anyone can develop a questionnaire, but the challenge comes with the obtained data. I recommend all workers in the field of research to improve their skills in #questionnaire_writing, and concentrate on the applied references, as most of the books only tackle theoretical aspects.

By:
Ghaith Albahr: CEO of INDICATORS

Rubbish data

Published on: 2022-05-23نُشِرَ بتاريخ: 2022-05-23

Through my experience of working with many organizations, research centers, and academic researchers, I have noticed an issue in the collected data that only can be named as rubbish data or useless data.

The idea of useless data can be summarized as data or questions asked in questionnaires that are not useful in anything related to the objectives of the research, for example in many monitoring or evaluation activities, questions are asked in beneficiary interviews about the family structure in detail, such as asking about the family members disaggregated by gender and age groups. Some may think that these data are important, but experience says the opposite, as these data are important in the phase of needs assessment and selection of beneficiaries, which were already collected in the previous activities, and all the cases I witnessed did not use this data (in the course of writing a monitoring or evaluation report), and in the best case, the family members data were grouped into a final number, so why were all these details asked and make the beneficiaries exhausted with all these questions?

The belief of some researchers that if these data are not useful, it will not cause any issues is wrong, as a large number of questions and asking questions that have nothing to do with the research objectives causes several problems, including an increase in costs, an increase in the participants’ hesitation and fear due to a large number of details that are asked about and the lack of Its rationality, the decrease in the participants’ interest in providing serious answers due to the increase in the duration of the interview and their fatigue, an increase in the possibility of errors in data collection, an increase in the complexities of data analysis, distracting the researcher from the processing data and writing the report and thus discussing topics that not related to the objectives of the research and distracting the decision-makers.

The observed cases that may be called rubbish data are uncountable. Asking about the name of the participant in a political poll in which the name of the participant does not matter at all, it only expresses his legal personality as a representative of a sample of the surveyed community groups (except in rare cases related to verification and follow-up of the data collection teams), asking about the participant’s name will necessarily lead to providing answers that stray more from his true opinions, as a result of his fear of linking those answers to his name and exposing him to any harm. I always advise that the questions we ask to be linked to the objectives of our research and not to say, “We wouldn’t lose anything if we ask this question.”

By:
Ghaith Albahr: CEO of INDICATORS

Ordinal Questions , Challenges and Issues

Published on: 2022-05-23نُشِرَ بتاريخ: 2022-05-23

The ordinal questions, where the participant is asked to answer several options in order of priority contain many issues.
I will talk through my observation of many cases about these questions focusing on the negatives points:
1. The process of arranging options according to the most important and less important is a cumbersome and time-consuming process, so it is noted that most participants do not answer them seriously, and therefore the order obtained is inaccurate.
2. In the questions in which we choose the most three important answers in an orderly way, the order tends to follow the order of the same answers in the design of the questionnaire, meaning that the participants tend to choose the answers that are mentioned to them at the beginning as the most important.
3. A big problem with the analysis of the ordinal questions is due to the weakness of most statistical programs, and the lack of ready-made analytical methods for these questions, so the data analyst is forced to do manual calculations, which causes issues in the analysis.
4. A problem in the outputs of the analysis: -Calculating the order as weights will give a result that may exceed the real value, meaning that the numerical result that we will get does not express a real value, but rather expresses the weight and importance of this option compared to the other options and not the percentage of those who chose it.
-Many data analysts have difficulty dealing with these questions, so they tend to use inappropriate methods such as displaying the analysis of the first priority only, displaying the analysis of each priority separately, or analyzing the question as a usual multi-select question.
-An error in calculating weights, the weighting system in statistics is not arbitrary, that is, in cases, it is considered in the form of degrees 1, 2, 3, or in the form of probabilities or percentages of the original answers…etc.
-An error in defining the weights of the answers, as the first priority should take the number 3 and the third should take the number 1, knowing that the logical order is the opposite, but as a final value it must give a higher number to the first priority, and this is usually the mistake that some data analysts made.
5. Issues with the report writers where some of them are confused about how to present and discuss the results in the report correctly.
6. Problems in the disaggregation of ordinal questions with other questions, as the question exists in several columns in the database, in addition to the need to take into account the weights, and to the disaggregation with one or more questions, which leads to many data analysts to make mistakes in analyzing these questions

By:
Ghaith Albahr: CEO of INDICATORS

Issues of Dealing with Missing Values

Published on: 2022-05-20نُشِرَ بتاريخ: 2022-05-20

A lot of data analysis programs do not have the ability to distinguish between many values, namely:
· Missing Values
· Blanks
· Zero

This weakness of data analysis programs also extends to the failure of many data analysts to distinguish between these values, therefore, these values are not being distinguished or dealt with, and data are not being analyzed based on these differences.

Some may think that these differences are not very important, and they ignore them and leave dealing with them to data analysis programs, but in most cases, this gives catastrophic results that many people do not realize.

I will attempt to illustrate these differences through some examples:

1. If we want to analyze the average income of households in a country suffering from a crisis, it was noticed that a high percentage of respondents said that they have no income of any kind, and the percentage of these respondents is over 40% of the surveyed families. Data analysts dealt with these cases as missing values, the thing that gave results that are utterly different from the situation of society, as the socio-economic indicators in this case will show, for example, that only 10% of HHs are below the extreme poverty line, but the truth is that the percentage is more than 50%, because whoever does not have any income must be considered as his income is zero rather than a missing value, because the missing value is not included in the calculations, while the value zero is, and thus affects the percentages and the general average of income. In the opposite case, in the event of asking about the monthly salary, the salary of a person who does not have a job will be considered as a missing value rather than a zero, as he is unemployed and the salary is not calculated as a zero.
2. Many programs do not consider the blanks in the text questions as a missing value. For example, we find that the SPSS program does not consider the empty cell in the text questions as a missing value, but rather considers it a valid value, as in the Gender column, if it is a text question, the program will calculate the empty values, the thing that significantly affects results such as percentages, knowing that those who did not indicate their gender (male or female) should be considered a missing value.
3. In the SPSS, when trying to calculate a new data column from other columns, we find that some of the codes (formulas) can deal with the missing values effectively and some formulas cannot, for example when trying to calculate the total number of the family members out of the family members of each group, and we used the (sum) formula. We notice that SPSS gives the sum result even if there is a missing value in one of the categories, while calculating as a manual sum will give the sum result as a missing value when any of the cases with a missing value is encountered.

The cases in which there are issues in defining the missing values are unlimited, and I do not advise in any case to give the data analysis program nor the data analyst alone the freedom to guess and deal with those values, as the appropriate treatment and definition of the empty value must be determined, as we explained in the income case, the missing value must be considered as zero, while in the salary case, it must be considered a missing value, and in our third example, the empty cells of any category of family members must be considered zero, knowing that from the beginning, data collectors must be told that if a family does not have any member of a certain category, they must not leave a missing value, rather, they should fill it with a zero.

By:
Ghaith Albahr: CEO of INDICATORS

Outliers Processing

Published on: 2022-05-20نُشِرَ بتاريخ: 2022-05-20

Some data analysts do not grant any attention to outliers, and they may have first heard this term while reading this article. Outliers have a significant impact on many statistical indicators, and the methods of handling and processing them are related to many factors, some of which are simple, and some are more complex and related to the type of statistical indicator, as the data analyst must know the classification of the Smooth Parameters and the that’s not, and this indicates the degree to which it is affected by the outliers.

For example, the mean is considered one of the best indicators/coefficients of central tendency, but it is extremely affectable by outliers compared to the median, knowing that the median is not considered an accurate coefficient compared to the mean.

Within the following lines, I will try to tackle an important aspect related to the outliers, which is the simplest, it’s the methods of processing outliers:

Methods of processing outliers:
1. Revision of the source: we revise the source in order to check the value, if there is an entry mistake, it is corrected, such as writing the age for a study about children as 22 by mistake instead of 2, so, we simply discover that it is an entry mistake and correct it.
2. Logical processing of outliers: Mistakes of outliers can be discovered through logical processing, simply, when studying the labor force, for example, the data of a person who is 7 years old are deleted because he is not classified as a labor force.
3. Distinguishing between what to keep and what to delete: This process is considered very exhausting, as there are no precise criteria for accepting or rejecting outliers. In this regard, SPSS program offers a useful feature, which is classifying outliers into two types, Outliers (which are between the first/third quartile and one and a half of the inter-quartile range), and Extreme values (which are between one and a half to three times the inter-quartile range), in other words, data far from the center of the data and data extremely far from it, in this case this classification can be adopted by accepting outliers and deleting extreme values.
4. Replacing the outliers that have been deleted: The last and most sensitive step is the decision to deal with the deleted outliers, whether to keep them deleted (as missing values) or replace them, the challenge begins with the decision to replace them, as leaving them as missing values entails consequences and challenges, similarly, replacing them also entails consequences and challenges. The decision of replacing deleted outliers is followed by the appropriate methodology for replacement, as the process of replacing missing values is also complicated and has various methodologies and options, each of these methodologies will have an impact in a way on the results of data analysis (I will talk about replacing missing values in another post).

It is not simple to summarize the methodologies for dealing with outliers within these few lines, as deleting outliers puts us in front of other options; shall we leave it as a missing value or replace it with alternative values? Also, when we delete outliers and reanalyze the data, we will find that new outliers have appeared, these values were not considered outliers considering the database before it was modified (before deleting the outliers in the first stage), therefore, I recommend Data Analysts to study more about this topic, considering the extent of studying they need based on the volume and sensitivity of the data.

By:
Ghaith AlBahr (Mustafa Deniz): CEO of INDICATORS

Comparing SPSS vs Excel

Published on: 2022-04-25نُشِرَ بتاريخ: 2022-04-25

Data Analysis, Excel VS SPSS Statistics

An important question occurs to many of people interested in the field of data analysis or people who may need to use data analysis programs either for work or research; “What is the difference between Excel and SPSS? And when is each of them recommended?”.

In this article we provide a brief description of the advantages and disadvantages, this description is categorized according to the specialization or field of the required data analysis:

First: data analysis for academic research

We absolutely recommend using SPSS, as it offers very wide statistical analyses that has endless options. In this field, Excel cannot in any way provide what SPSS does.

For example, SPSS provides:

Parametric and non-parametric tests, with wide options that include many of the tests required by researchers who are not specialized in statistics.
Regression and correlation analysis of various types, linear and non-linear, together with the associated tests and a wide range of related analysis options.
Time series analysis.
Questionnaire reliability tests.
Neural network analysis.
Factor analysis.
Survival analysis.
Statistical quality control analysis and charts.

Along with many other statistical analyses that serve academic fields.

Second: data analysis for non-academic research

It can be classified into several levels of data analysis:

Descriptive data analysis:

In general, both programs can provide all the analyses required in descriptive statistical analysis, but Excel has some minor flaws: it arranges answers alphabetically rather than in their logical order, and it cannot handle calculations for questions whose answers are text categories with an inherent order (ordinal data), such as computing results on a Likert scale.

SPSS also provides tools for analyzing multi-select questions with advanced options, which Excel does not; in Excel we have to build such analyses with functions, whose options are limited and which often produce problematic percentages.
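For readers who also work in Python, the following pandas sketch illustrates the two points above: keeping an ordinal (Likert-type) variable in its logical order rather than alphabetical order, and tabulating a multi-select question as a percentage of respondents. The column names and answers are invented; this is not how SPSS or Excel do it internally.

import pandas as pd

df = pd.DataFrame({
    "satisfaction": ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"],
    "needs": ["food, shelter", "food", "shelter, education", "education", "food, education"],
})

# Ordinal data: declare the logical order so tables are not sorted alphabetically
order = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
df["satisfaction"] = pd.Categorical(df["satisfaction"], categories=order, ordered=True)
print(df["satisfaction"].value_counts(sort=False))

# Multi-select question: split the combined answers and count each option
# as a percentage of respondents (not of mentions)
mentions = df["needs"].str.split(",").explode().str.strip()
print((mentions.value_counts() / len(df) * 100).round(1))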

Disaggregation analysis:

Both programs can be considered reliable in this aspect, except for multiple and complex disaggregations (cross-tabulations) involving multi-select questions; in such cases, Excel becomes slower and less effective, while SPSS offers all options, however complex, at the same speed as descriptive analysis and simple disaggregation. In addition, SPSS features such as filtering and data splitting speed up data analysis enormously: analyzing the required data separately for 20 regions takes roughly the same time as analyzing one region, whereas in Excel this means doing the work 20 times.

SPSS delivers descriptive analysis and data disaggregation much faster than we might expect; some analyses that take a week in Excel can be completed in just a few minutes in SPSS.
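As a rough illustration of the “analyze many regions in one pass” idea, here is a Python/pandas sketch (again, not either program, and with invented column names): one grouped call produces the same summary table for every region at once.

import pandas as pd

df = pd.DataFrame({
    "region": ["Idlib", "Idlib", "Afrin", "Afrin", "Jarablus"],
    "income": [120, 90, 60, 80, 100],
    "hh_size": [5, 7, 6, 4, 8],
})

# The same descriptive statistics, disaggregated by region, in a single pass
print(df.groupby("region")[["income", "hh_size"]].agg(["mean", "median", "count"]))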

Third: Analyzing data of demographic indicators

When it comes to demographic indicators, each of the two programs faces a challenge. In SPSS we can perform numerous, complex and very fast arithmetic operations that outperform Excel; however, SPSS has some small but important weaknesses. The most notable one we have observed is multi-column conditional arithmetic: SPSS provides multi-column arithmetic operations, but they do not support multiple conditions, whereas Excel offers this through a wide variety of effective conditional functions.
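The kind of multi-column conditional calculation meant here can be sketched as follows in Python/numpy; the conditions, thresholds and column names are invented, and Excel users would express the same logic with IF-style functions.

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [3, 10, 35, 70], "working": [0, 0, 1, 0]})

# Each row gets a label based on conditions that combine several columns
conditions = [
    df["age"] < 15,
    (df["age"] >= 15) & (df["working"] == 1),
    (df["age"] >= 15) & (df["working"] == 0),
]
labels = ["child", "employed", "not employed"]
df["status"] = np.select(conditions, labels, default="unknown")
print(df)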

Fourth: Data management and linking databases in the analysis

In this particular aspect, Excel clearly stands out: with the Power Query add-in it offers data management, merging, aggregation and cleaning features, as well as the ability to link various databases without merging them and to analyze them together with all types of analyses.

SPSS, by contrast, cannot analyze separate databases together without merging them. It can solve a large part of this problem by merging databases, but this brings many challenges and a high risk of error: when merging more than one database, cases are usually repeated to match the other database, and when we analyze the resulting database we must perform operations that cancel this repetition in order to obtain correct results.
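A small sketch of the duplication problem described above, written in Python/pandas with invented tables: merging a household-level table with a member-level table repeats every household row, so household-level statistics must be computed only after the repetition is cancelled.

import pandas as pd

households = pd.DataFrame({"hh_id": [1, 2], "income": [100, 300]})
members = pd.DataFrame({"hh_id": [1, 1, 1, 2], "age": [30, 28, 4, 50]})

merged = members.merge(households, on="hh_id")           # income is now repeated once per member
print(merged["income"].mean())                           # 150.0 -> biased towards larger households
print(merged.drop_duplicates("hh_id")["income"].mean())  # 200.0 -> the correct household average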

The ability to manage data and analyze separate databases together is a great advantage of Excel, but in most cases it is not needed, as it is only required in complex and advanced projects.

On the other hand, the Data menu in SPSS offers many features that can only be described as excellent, and this article is too short to cover them; briefly, they give SPSS data management capabilities that can outperform Excel in some aspects, such as the Restructure (unpivot) features, which are far more advanced and powerful than their Excel counterparts.

Fifth: Weighting

One very important aspect of data analysis, especially for demographic statistics, humanitarian needs analysis and advanced market research, is weighting: calculating results after taking into account a weight that expresses, for example, the population of the governorate or studied area, so that each area contributes to the results in proportion to its size.

Excel does not provide this feature; if we calculate the weights manually using functions, this sometimes causes problems in the results, especially in disaggregation analyses.

In SPSS, once the weighting option is activated, it is automatically applied to all calculations, whatever they are, even charts, and weighting can be switched off again with a single click.
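To show what weighting does to a result, here is a minimal sketch in Python (not SPSS); the records, weights and column names are invented. Each record counts in proportion to the population it represents, so the weighted figure differs from the plain average.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "governorate": ["A", "A", "B"],
    "in_need": [1, 0, 1],        # 1 = the household reports a given need
    "weight": [500, 500, 2000],  # population represented by each sampled household
})

unweighted = df["in_need"].mean() * 100
weighted = np.average(df["in_need"], weights=df["weight"]) * 100
print(f"unweighted: {unweighted:.1f}%, weighted: {weighted:.1f}%")  # 66.7% vs 83.3%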

This is a simple comparison between the two programs; we hope it gives a preliminary perspective and helps data analysis specialists, and institutions that need to build their teams' capacities in this field, to choose the program that suits them best.

 

By:
Ghaith Albahr: CEO of INDICATORS

Capacity Assessment Of Monitoring And Evaluation Departments In Syrian Organizations

Published on: 2022-03-15نُشِرَ بتاريخ: 2022-03-15

The Monitoring and Evaluation department is considered one of the most important departments in NGOs, as it plays an essential role in all phases of a project: in planning, through assessing the needs of the beneficiaries targeted by the organization's activities; during implementation, through field follow-up; and after the project ends, through the final evaluation of the project, its impact assessment, and the drawing of lessons learned that can feed into the design and development of future projects the organization intends to implement. The department also plays a role in raising donors' confidence in the organization.

Given the importance of the work of MEAL departments in NGOs, we conducted this study, which aims to describe the situation of M&E departments in Syrian organizations operating in Turkey: the extent to which these departments are organized, the existence of policies and SOP guides, their effectiveness within the organization and their relationship with the rest of the organization's departments, in addition to identifying the expertise and competencies of the departments' staff and their most important training needs. The study included 20 Syrian organizations located in Gaziantep and Istanbul.

The study showed that in the majority of Syrian NGOs, the work of MEAL departments is limited to monitoring and evaluation only, with no role in accountability or learning. Employees of these departments lack expertise, especially in report writing, quality standards for questionnaires, sampling methodologies and PSEA. The vast majority of organizations also lack policies and SOP guides, and in about half of the organizations the relationship and coordination between the M&E department and the programs departments is weak.

The study was conducted during October and November 2021.

The Influence of Turkish Language Level on the Integration of Syrian Refugees

Published on: 2022-02-07نُشِرَ بتاريخ: 2022-02-07

In view of the importance of working to achieve the integration of Syrian refugees into Turkish society and to reduce tension among Turks towards Syrians, and in order to determine the role of the Turkish language in achieving that integration, we conducted this study. It aims to reveal the level of mastery of the Turkish language among Syrian refugees in Turkey, identify the factors that hinder their ability to learn Turkish, gauge the degree of integration of Syrians into Turkish society and the impact of their mastery of the language on their integration and on Turks' acceptance of them, and examine the situation of Syrian refugees in Germany with regard to learning German, in order to benefit from the German experience in developing the language abilities and skills of Syrian refugees in Turkey.

The study was conducted during the second half of 2020 and covered the provinces of Istanbul, Gaziantep, Hatay and Urfa, where the largest numbers of Syrians reside. Key informant interviews were conducted with people interested in refugee integration in Turkey and Germany, and questionnaires were administered to 340 Syrians residing in the covered provinces. The study used a stratified random sampling method to ensure that Syrians were included according to several variables such as gender, age and educational level.

PSEA

Published on: 2022-01-17نُشِرَ بتاريخ: 2022-01-17

Given the seriousness of IDPs in Syria being subjected to sexual or financial exploitation, we conducted a study on this issue, shedding light on:

The percentage of people who were subjected to exploitation and abuse by some humanitarian or relief sectors.
The reasons that made the victims of such abuses refrain from filing complaints.
The type and form of the abuse or exploitation they were subjected to.
The extent of their knowledge about how to get help and support if they are subjected to such abuses.

SUSPENDED WOMEN

Published on: 2020-04-01نُشِرَ بتاريخ: 2020-04-01

Objectives:
The study aims to examine the situation of divorced women residing in Turkey, determine the legal options available to them for registering their marriage contracts or divorce cases with the official authorities in both Syria and Turkey, and reveal the most important difficulties and challenges they face in various respects, such as the social, financial and legal aspects.

Research type: Divorced women

Publish date: April 2020

Publisher: INDICATORS Center

Accountability to Affected Populations (AAP)

Published on: 2019-08-01نُشِرَ بتاريخ: 2019-08-01

Guide type: Introductory guide

Publish date: August 2019

Language: Arabic

Worked on this guide:

Ghaith Albahr

Bassel Faraj

Ghais Hmedan

Issues of Asking Direct Questions

Published on: 2022-05-24نُشِرَ بتاريخ: 2022-05-24

Researchers and workers of all research fields (monitoring and evaluation, market research, opinion polls… etc.) usually work on identifying a set of research topics (usually called either research topics, key questions, or hypotheses…), then derive the questions that will be asked in the research tools from these topics. The problem I noticed that many researchers have, especially those working on developing #questionnaires / #research_tools, is that the phrasing of the questions uses almost the same words as the research topics. i.e., If we had a question about “the needs that would help increase the level of inclusion of people with disabilities in education”, the researcher asked people with disabilities “What are the needs that would help increase the level of your inclusion in education?”.

This way of phrasing causes many problems that lead to incorrect results or a failure to answer the research questions, because:

1. The research topic may include terms the participants are not familiar with, as academic terms are often used in research topics; equivalent words used in everyday life must be used instead.
2. Most main research topics are complex and cannot be answered with a single question; they should be broken down into sub-topics, and those sub-topics phrased into questions (with the phrasing adjusted appropriately as well). Presenting the research topic directly and literally therefore confuses respondents, who face a broad, general question that is difficult to answer in that form.
3. In most cases, participants do not have the level of knowledge needed to answer the question in this form. When studying the needs of people with disabilities that would increase their inclusion in education, for example, it is better to ask detailed questions about the problems and difficulties they face that hinder their access to an appropriate education.

In summary, the process of developing questionnaires appears easy to people working in this field, especially non-specialists, and anyone can work on developing a questionnaire; but experience, particularly at the moment of receiving the data after all the effort spent on the sample structure and research methodology, often shows that the data are useless, precisely because of the poor design of the questionnaire.

Questionnaires are perhaps the clearest example of the phrase “deceptively simple”: anyone can develop a questionnaire, but the challenge comes with the data obtained. I recommend that everyone working in research improve their skills in #questionnaire_writing and concentrate on applied references, as most books tackle only the theoretical aspects.

By:
Ghaith Albahr: CEO of INDICATORS

Rubbish data

Published on: 2022-05-23نُشِرَ بتاريخ: 2022-05-23

Through my experience of working with many organizations, research centers and academic researchers, I have noticed an issue in collected data that can only be described as rubbish data, or useless data.

The idea of useless data can be summarized as data, or questions asked in questionnaires, that serve none of the objectives of the research. For example, in many monitoring or evaluation activities, beneficiary interviews ask about the family structure in detail, such as the number of family members disaggregated by gender and age group. Some may think these data are important, but experience says the opposite: they matter in the needs assessment and beneficiary selection phase, where they were already collected. In none of the cases I witnessed were these data actually used when writing the monitoring or evaluation report; at best, the family member counts were collapsed into a single total. So why ask for all these details and exhaust the beneficiaries with all these questions?

The belief of some researchers that such data, even if not useful, cause no harm is wrong. A large number of questions, and questions that have nothing to do with the research objectives, cause several problems: higher costs; greater hesitation and fear among participants because of the volume of detail requested and its lack of obvious justification; less interest among participants in providing serious answers as the interview gets longer and they grow tired; a higher chance of errors in data collection; more complexity in data analysis; and distraction of the researcher during data processing and report writing into topics unrelated to the research objectives, which in turn distracts decision-makers.

The cases that could be called rubbish data are countless. Take asking for the participant's name in a political poll: the name does not matter at all, since the participant only represents a sample of the surveyed community groups (except in rare cases related to verification and follow-up of the data collection teams), and asking for it will inevitably push respondents to give answers further from their true opinions, out of fear that those answers will be linked to their name and expose them to harm. I always advise that every question we ask be tied to the objectives of our research, and that we never reason that “we wouldn't lose anything if we ask this question.”

By:
Ghaith Albahr: CEO of INDICATORS

Ordinal Questions: Challenges and Issues

Published on: 2022-05-23نُشِرَ بتاريخ: 2022-05-23

Ordinal (ranking) questions, in which the participant is asked to rank several options in order of priority, involve many issues.
Based on my observation of many cases, I will discuss these questions with a focus on the negative points:
1. Ranking options from most to least important is a cumbersome and time-consuming process, so most participants do not answer such questions seriously, and the resulting order is therefore inaccurate.
2. In questions where participants choose the three most important answers in order, the resulting order tends to follow the order in which the options appear in the questionnaire; in other words, participants tend to pick the options read to them first as the most important.
3. A big problem with analyzing ordinal questions stems from the weakness of most statistical programs and the lack of ready-made analytical methods for them, so the data analyst is forced to do manual calculations, which causes issues in the analysis.
4. Problems in the outputs of the analysis (see the sketch after this list):
- Treating the ranks as weights gives a figure that can exceed the real value; the number obtained does not express a real percentage, but rather the weight and importance of an option relative to the other options, not the share of respondents who chose it.
- Many data analysts struggle with these questions and resort to inappropriate methods, such as presenting the analysis of the first priority only, presenting each priority separately, or analyzing the question as an ordinary multi-select question.
- Errors in calculating the weights: the weighting system in statistics is not arbitrary; depending on the case, weights may take the form of scores such as 1, 2, 3, or of probabilities or percentages of the original answers, etc.
- Errors in assigning weights to the answers: the first priority should take the value 3 and the third priority the value 1; although the logical order is the reverse, the final score must give the higher number to the first priority, and this is a mistake some data analysts commonly make.
5. Issues for report writers, some of whom are confused about how to present and discuss the results of these questions correctly.
6. Problems in disaggregating ordinal questions by other questions, since the question occupies several columns in the database and the weights must also be taken into account; this leads many data analysts to make mistakes when analyzing these questions.
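The weighting idea in point 4 can be sketched as follows in Python/pandas (the options, weights and column names are invented): each respondent ranks three options, the first priority is scored 3 and the third 1, and the scores are summed per option. The result is a relative importance score, not a percentage of respondents.

import pandas as pd

df = pd.DataFrame({
    "priority_1": ["water", "food", "water"],
    "priority_2": ["food", "water", "shelter"],
    "priority_3": ["shelter", "shelter", "food"],
})

weights = {"priority_1": 3, "priority_2": 2, "priority_3": 1}

scores = {}
for column, weight in weights.items():
    for option, count in df[column].value_counts().items():
        # add (weight x number of respondents who placed the option at this rank)
        scores[option] = scores.get(option, 0) + weight * count

print(pd.Series(scores).sort_values(ascending=False))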

By:
Ghaith Albahr: CEO of INDICATORS

Issues of Dealing with Missing Values

Published on: 2022-05-20نُشِرَ بتاريخ: 2022-05-20

Many data analysis programs cannot distinguish between several kinds of values, namely:
· Missing Values
· Blanks
· Zero

This weakness of data analysis programs extends to many data analysts, who also fail to distinguish between these values; as a result, the values are neither identified nor handled differently, and the data are not analyzed on the basis of these differences.

Some may think these differences are not very important, ignore them and leave them for the data analysis program to handle, but in most cases this gives catastrophic results that many people do not realize.

I will attempt to illustrate these differences through some examples:

1. Suppose we want to analyze the average household income in a country suffering from a crisis, and a high percentage of respondents, over 40% of the surveyed families, report that they have no income of any kind. If the data analyst treats these cases as missing values, the results will be utterly different from the real situation of society: the socio-economic indicators might show, for example, that only 10% of households are below the extreme poverty line, when the truth is that the percentage is more than 50%. A household with no income must be recorded as having an income of zero rather than a missing value, because a missing value is excluded from the calculations while a zero is included, and it therefore affects the percentages and the overall average income. The opposite holds when asking about the monthly salary: the salary of a person who has no job should be treated as a missing value rather than a zero, since that person is unemployed and no salary is being calculated.
2. Many programs do not treat blanks in text questions as missing values. SPSS, for example, does not consider an empty cell in a text question to be missing; it treats it as a valid value. In a Gender column stored as text, the program will count the empty cells, which significantly affects results such as percentages, even though respondents who did not indicate their gender (male or female) should be treated as missing.
3. In SPSS, when calculating a new column from other columns, some functions handle missing values and some do not. For example, when calculating the total number of family members from the counts of each age group using the SUM function, SPSS returns a total even if one of the categories is missing, whereas adding the columns manually (with +) returns a missing value whenever any of the categories is missing.

The situations in which missing values are poorly defined are endless, and I never advise leaving the data analysis program, or the data analyst alone, free to guess how to handle them. The appropriate treatment and definition of an empty value must be decided case by case: as explained above, in the income example the missing value must be treated as zero, in the salary example it must remain missing, and in the third example the empty cells for any family-member category must be treated as zero. Better still, data collectors should be instructed from the start that if a family has no members in a certain category, they should enter a zero rather than leave the cell empty.
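A small sketch of the income example above in Python/pandas (the numbers are invented): treating “no income” as missing instead of zero inflates the average, because missing values are simply dropped from the mean, and the same skip-versus-propagate distinction appears when summing family-member categories.

import numpy as np
import pandas as pd

income_as_missing = pd.Series([400, 250, np.nan, np.nan, 300])  # "no income" left as a missing value
income_as_zero = pd.Series([400, 250, 0, 0, 300])               # "no income" recorded as zero

print(income_as_missing.mean())  # 316.7 -> overstates household income
print(income_as_zero.mean())     # 190.0 -> closer to the real situation

# The same distinction appears when summing categories of family members:
# a column-wise sum skips missing values, while element-wise addition propagates them
members = pd.DataFrame({"children": [2, np.nan], "adults": [3, 2]})
print(members.sum(axis=1))                      # 5.0 and 2.0 (missing value skipped)
print(members["children"] + members["adults"])  # 5.0 and NaN (missing value propagated)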

By:
Ghaith Albahr: CEO of INDICATORS


Customer experience testing

Published on: 2022-03-05نُشِرَ بتاريخ: 2022-03-05

It is the impression you leave on your customer at every stage of his journey to purchase a product or service, which leads him to think of your brand and promote it among his acquaintances and friends.

The difference between customer experience and customer service:
Customer Service

It consists of interactions with the customer in order to obtain the offered product or service, giving the customer the information he wants to know, and receiving complaints and inquiries.

Customer Experience

It can be simply explained as accompanying the customer from the beginning to the end of the journey: from the purchase of the product, through the impressions formed at each stage of communication with the company, to the impressions about the product or service after purchase.

Customer service can be considered as part of the customer experience, as they are strongly related, but there is a difference between them.

To simplify the Customer Experience term, the following example can be used:
Suppose we have a movie, to produce this movie, we need:
1. Director – Executive Director – Sound Director – Cameraman – Producer… (the management team).
2. The actors and all the individuals who appear in front of the camera… (the employees who are in direct contact with clients).
3. The script, the dialogue, and the area in which the movie is filmed… (the tools used to produce the product or service offered by the company).
4. Current viewers… (existing clients and potential clients).

Suppose we have a highly professional script, dialogue, venue, director, lighting and sound engineers, and cameraman, but the performance of the actors is poor. Will the viewers grasp the aim or the moral of the movie? Of course not. This is the biggest mistake companies and institutions make with regard to customer experience: they focus on having a highly experienced management team to produce the product or service, and neglect the employees who are in direct contact with the customer and who are the face of the business. This negatively affects the company's reputation and may lead to losing existing customers and failing to win new ones.

The Importance Of Customer Experience

It is very important for the continuous growth of a business, as ensuring a positive customer experience contributes to:
• Building brand loyalty among customers.
• Activating your product or service and embedding it in the minds of customers.
• Creating marketing opportunities by customers themselves, by writing positive comments and impressions that are more important than paid advertisements and promotions, and more influential on other customers.

On the other hand, customers want to feel connected to their favorite brand and want to feel that it knows them, respects them and cares about them. Suppose, for example, that there are two cafes close to each other, serving the same brand and quality of coffee, but one is more expensive than the other, and the more expensive one pays attention to its customers and their details, greeting them, for instance, with “Your usual drink?”. Customers will go to the more expensive cafe, because it satisfies their need to drink coffee while feeling cared for and well treated.

Customer Experience Methodology
1. Developing customer journey map

A customer journey map is defined as a story supported by a map that includes all the interactions and communications that the customer has with the company in order to obtain a particular product or service.

A map is drawn of all the potential paths a customer may take during the journey to obtain a product or service, identifying all the channels and interactions the customer can have at each stage of the map.

2. Evaluating the integration of operations in companies

Each stage the customer goes through to obtain a specific product or service is evaluated in terms of customer satisfaction and whether it is well integrated with the others; this is done by studying the customer's experience in each operation in detail.

3. CRM Evaluation

This is done by assessing the company's interaction with current and future customers: the customer data held by the company are analyzed to find the best path for customer relations, with a focus on retaining existing customers.

4. Experimental implementation of customer experience

This is one of the most important stages of studying customer satisfaction or customer experience. It consists of an experimental run of a customer journey after developing the previous tools: accompanying the customer from the purchase stage to the post-purchase stage, obtaining feedback on the product, and recording all the stages and paths the customer went through during the journey with the company.

5. Analyzing Customer Satisfaction

This stage begins after the customer purchases the product or service and gives feedback on the journey with the company and on the product or service. The customer's opinions and feedback are analyzed to identify the problems the customer may have faced and the positive and negative aspects seen in the product, and the necessary measures are then taken to address them.

6. Studying the customer’s perception about the company

This is done through a short questionnaire in which the customer is asked about all the stages of the journey, the nature of the relationship at each stage, and views on how to make the service or the journey better; the data coming from customers are then analyzed to obtain a comprehensive, general picture that can be used to improve the stages of the customer journey.
An example: what does Google say about its customer experience testing?
If users can’t spell, it’s our problem
If they don’t know how to form the query, it’s our problem
If they don’t know what words to use, it’s our problem
If they can’t speak the language, it’s our problem
If there’s not enough content on the web, it’s our problem
If the web is too slow, it’s our problem

The purpose of customer experience testing is to focus on customer needs rather than on the volume of sales.
The focus must shift from concentrating solely on the product to concentrating on the customer experience, so that the product is improved based on the results, and from worrying about the broad market to caring about the individuals who connect you to that broad market.

By:
Ghaith Albahr: CEO of INDICATORS
Anas Attar Sabbagh: Research officer in INDICATORS

Work follow-up platform

Published on: 2021-12-09نُشِرَ بتاريخ: 2021-12-09


PRODUCT DEVELOPMENT

Published on: 2021-10-28نُشِرَ بتاريخ: 2021-10-28

By:
Ghaith Albahr: CEO of INDICATORS
Anas Attar Sabbagh: Research officer in INDICATORS

Product Development Strategies:
Product development strategy refers to the methods and procedures used to present new products to the market or to modify existing products to create new businesses.

Product Development Stages:

1. Identifying opportunities (the gap that the product will fill)
2. The stage of creating new ideas
3. Idea assessment stage
4. Studying the new product in terms of cost and quality
5. Testing the developed product
6. The stage of introducing the developed product to the market
7. Post-marketing evaluation stage

The Importance of Product Development
Product development is one of the important marketing activities during the life cycle of the product and of the establishment as a whole, as it represents the stage of innovating, creating and presenting everything new, on the basis that the consumer expects the establishment to provide the best in terms of quality and efficacy, at the right price and at the right time and place.
The following chart shows Apple's expenditure on research and development as a percentage of its total revenues.


Product Development Data Sources
1. Customer needs analysis
2. Customer behavior analysis
3. Competitor analysis
4. Customer feedback analysis
5. Studying customer satisfaction
6. Testing customer experience
7. Comparison with other experiences
8. Analyzing competitor products and alternative products

Product Development tools
1. Innovation
2. The new product must be eco-friendly
3. Manufacturability
4. Improving maintainability
5. Reducing complexity and increasing modularity
6. Increasing efficacy and durability
7. Reducing production costs

Product Development Risks
1. Takes a long time
2. The product development process is expensive
3. Strict legal requirements
4. Failure in estimating results

UNEMPLOYMENT IN SAUDI ARABIA

Published on: 2021-10-20نُشِرَ بتاريخ: 2021-10-20

By:
Ghaith Albahr: CEO of INDICATORS
Anas Attar Sabbagh: Research officer in INDICATORS

UNEMPLOYMENT IN SAUDI ARABIA
The unemployment rate reached its lowest level in 2019 compared to previous years, which indicates the economic recovery in Saudi Arabia resulting from the government's efforts in the programs of Saudi Vision 2030.

In 2020, there was a noticeable increase in the unemployment rate due to the spread of the COVID-19 pandemic.

In the first half of 2021, there was a noticeable decrease in the unemployment rate among Saudis, which reached its lowest level since 2017. The recent data show a recovery from the repercussions of COVID-19 on economic activity in the country and confirm the ability of the Saudi economy to absorb thousands of job seekers.


In view of the above, we notice that the decrease in the unemployment rate at the beginning of this year was faster among females than among males, which reflects the role of the Saudi government in supporting and empowering Saudi women.
It also shows that the Saudi Vision 2030 employment programs, which aim to reduce the unemployment rate to 7% by 2030, have started to yield results.

Distribution of unemployment by age in the second quarter of 2021


There is a noticeable rise in unemployment among young people aged 25 to 29, who have presumably completed their education and acquired the qualifications and expertise required for employment; the unemployment rate in this age group nevertheless approaches 18%, and it drops as age increases, which shows that the unemployment crisis is concentrated among the youth.

Unemployment rate by province in the second quarter of 2021


Source: General Authority for Statistics – Kingdom of Saudi Arabia

PESTEL tool

Published on: 2020-04-21نُشِرَ بتاريخ: 2020-04-21

By:

Ghaith Albahr: CEO of INDICATORS Company

Reem Barakat: Research Coordinator in INDICATORS

What are the external factors that affect the success or the failure of startups?
The business sector is very complex: anything that happens in a country affects it, directly or indirectly. Besides the internal factors that affect companies, such as employees and the required logistics, and the familiar external factors such as competitors, customers and suppliers, there are bigger and more dangerous factors that must be taken into account. These generally relate to the surrounding regional environment, such as economic downturns, the changing climate of some countries, political circumstances, the society the company targets, and several other factors that must be considered.

A new project idea cannot be adopted simply because it is unique. For example, according to Wikipedia, KitKat has offered around 300 different flavors of chocolate bars in Japan since 2000 in order to test and release new products in the Japanese market, taking advantage of low fees on primary products, and this helped the company succeed and achieve higher sales in Japan between 2012 and 2014. Because Japanese people are generally known to love green tea, KitKat launched a green-tea-flavored chocolate bar in 2004 and even changed its wrapper to green, while in all other countries the bar is sold in a red wrapper. KitKat relied on research into the society's norms and traditions, which is why it made this big change in Japan, and this was one of the most important factors that helped the company make large profits.

Unlike KitKat, many companies try to enter a market without taking these external and regional factors into account, which causes their failure.

What are the external and regional factors that must be considered for your new business?
As we’ve see in KitKat example it is necessary to pay attention of many regional factors, in order to test these factors correctly without neglecting any of sensitive aspects it’s recommended to use PESTEL tool which is considered one of the idea validation tools, this tool helps to know the circumstances and the general factors that surrounding the company and their impact on it. PESTEL focuses on six main factors that neglecting them may cause the company failure or loss of money and time, for example, if we are seeking to establish a construction company that costs millions of dollars which will be in a country where the market in need of the services of such company, but in terms of political and economic conditions, it has been found that the continuous depreciation of the currency of the country has a high likelihood to cause the company failure, if several million dollars are invested in the company and the money transferred to the local currency and the value of currency decreased to the half over three years this means that if the company gain 100% profit actually it will just be reached to zero point comparing to the value of the capital in dollars.

In order to have integrated analysis of the regional factors affecting the company, PESTEL tool focuses on the following six factors:

  • POLITICAL: Studying the country's political stability and its relations with neighboring and other countries, and how this affects the company we want to establish; for example, political boycotts between countries that negatively affect imports and exports, or the tax policy the state applies to foreign companies or to goods imported from certain countries. Take the economic war launched by the United States against China: during such a period, it is not advisable for a US company to start a business that depends heavily on Chinese electronic parts, because doubled taxes would raise the prices of its products, make them uncompetitive, and cause the business to fail.

  • ECONOMIC: Knowing whether the country is in economic recession or growth, the stability of the local currency, the country's credit rating, the level of confidence in the products it exports, and everything related to the country's economic situation, with particular focus on the factors that affect our company. In the example mentioned above, we saw how a fall in the value of the currency caused the company to lose all of its profit, even though all indicators of demand for construction services were positive.

  • SOCIAL: Everything related to social customs and traditions, the composition of society, religions, intellectual currents, etc. Suppose a company working in the Middle East in the field of food products tries to enter the Japanese market and launches products similar to the ones it offers in the Middle East, only to face low sales and large losses even though the same products were successful in the Middle East. Looking into the reasons, it finds that it was offering family-size products that are far too big for the Japanese family: a Middle Eastern family has six members on average, whereas a Japanese family has at most three.

  • TECHNOLOGICAL: Everything related to the country's technological infrastructure that affects our business, and whose neglect could cause it to fail, especially if the business depends heavily on it. For example, before YouTube, several companies tried to launch video-watching sites but failed, because the Internet at the time still ran on dial-up, which made videos too slow to load to attract viewers; YouTube succeeded because it was established at the beginning of DSL Internet, which was the most important factor in its success.

  • ENVIRONMENTAL: Everything related to the environmental conditions in the country and to environmental regulations and licenses, which directly affect the company in this respect. For example, many investors who moved from countries that did not require complex environmental licenses and built factories elsewhere immediately planned to operate at full capacity and built their financial plans on starting production right away. They were shocked to find that these countries allowed them to operate only at limited capacity until waste and emissions tests were completed and the environmental licenses obtained; some factories needed at least six months to obtain those licenses, which led to huge losses, starting with contracts concluded with customers that could not be fulfilled, workers hired to operate at full capacity, and errors in the financial calculations.

  • LEGAL: The regulations stipulated by the state concerning employment, consumer protection, ownership, health, education, and the general conditions the state sets for establishing any company. Neglecting these regulations can cause the company to fail; for example, if a company launches its project without paying attention to the legal conditions related to employment and calculates the cost and price of its product while neglecting employment-related costs, this leads to pricing mistakes and to losses.

How do I make the best use of PESTEL?
Mostly, PESTEL analysis is implemented through workshops that bring together the investors and the people involved in establishing the company, along with experts and specialists in several fields, most importantly the company's own field of work, plus specialists in economics, law and the other areas covered by the PESTEL analysis. The depth of the analysis and discussions and the number of workshops needed depend on the size and complexity of the business.

To benefit more from the PESTEL analysis, it is recommended to look at the risks it reveals as opportunities: they can be turned into opportunities by building procedures or making changes to the project idea so that it can deal with those risks or use them as a market entry point. In other words, if the PESTEL analysis shows that the company may face a major risk in one aspect, it is not necessary to consider the business idea a failure or to cancel it; rather, we should think about how to develop the idea to overcome this risk, turn it into a competitive advantage, and increase the chances of the business succeeding.

Dell’s success story

Published on: 2020-04-10نُشِرَ بتاريخ: 2020-04-10

By:
Reem Barakat

From its beginnings, Dell worked as a leader in the “build-to-order” approach, providing individual computers made to the customer's request. According to the Neronet-academy website, the company started in the apartment of a college student named Dell; his first clients were on low incomes, so he began assembling computers himself and selling them to his customers directly.

In 1985, Dell sold the first computer designed by his company, the Turbo PC, and took part in many exhibitions to demonstrate his competitive strength. One of the things that made his product unique was a computer with good specifications at a competitive price for a wide range of customers, a result of his constant closeness to his clients, which enabled him to know their needs. Over time, Dell outperformed many competitors, and in the 1990s it ranked fourth among the top ten companies in winning customer confidence, in both security and technical terms.
One of the most important elements of the company's success was its constant focus on research and development and on market research. It had an analysis department overseeing pricing, web analytics and supply chain analytics, and it also hired customer service researchers, built tools and systems to study customer satisfaction and obtain feedback, and set up a prompt and accurate complaints response system. By exploiting the development of the Internet and the emergence of social media, it was able to study customer satisfaction more accurately and respond to all customer complaints, however numerous, which increased consumer confidence in the company and its products. Feeding the analysis of complaints and feedback back into developing its products and improving their quality further increased its success, market share and ability to compete, which was reflected in a significant increase in profits and helped it maintain its name as one of the world's largest companies in the computer industry.

Issues of Asking Direct Questions

Published on: 2022-05-24نُشِرَ بتاريخ: 2022-05-24

Researchers and workers of all research fields (monitoring and evaluation, market research, opinion polls… etc.) usually work on identifying a set of research topics (usually called either research topics, key questions, or hypotheses…), then derive the questions that will be asked in the research tools from these topics. The problem I noticed that many researchers have, especially those working on developing #questionnaires / #research_tools, is that the phrasing of the questions uses almost the same words as the research topics. i.e., If we had a question about “the needs that would help increase the level of inclusion of people with disabilities in education”, the researcher asked people with disabilities “What are the needs that would help increase the level of your inclusion in education?”.

This method of phrasing causes many problems that lead either to incorrect results or to a failure to answer the research questions, because:

1. The research topic may include terms that the participants are not familiar with, since academic terms are often used in research topics; therefore, equivalent words that are used in everyday life must be used instead.
2. Most main research topics are complex and cannot be answered with a single question; rather, they should be partitioned into sub-topics, and those sub-topics should then be phrased as questions (with the phrasing adapted appropriately as well). Presenting the research topic directly and literally will therefore confuse the respondents, as they will face a broad and general question that is difficult to answer in that form.
3. In most cases, the participants do not have the level of knowledge needed to answer the question in this form. When studying the needs of people with disabilities that would increase their inclusion in education, it is better to ask questions about the problems and difficulties they face that hinder their access to an appropriate education, and to ask about these problems and difficulties in a detailed way.

In summary, developing questionnaires appears easy to workers in this field, especially non-specialists, and anyone can work on developing them. Experience, however, especially at the moment of receiving the data after all the effort spent on structuring the sample and the research methodology, shows that the data can turn out to be useless, and this is due to poorly designed questionnaires.

Questionnaires are perhaps the clearest example of the phrase “deceptively simple”: anyone can develop a questionnaire, but the challenge comes with the data obtained. I recommend that all research workers improve their skills in #questionnaire_writing and concentrate on applied references, as most books only tackle the theoretical aspects.

By:
Ghaith Albahr: CEO of INDICATORS

Rubbish data

Published on: 2022-05-23نُشِرَ بتاريخ: 2022-05-23

Through my experience working with many organizations, research centers, and academic researchers, I have noticed an issue in the collected data that can only be described as rubbish data, or useless data.

The idea of useless data can be summarized as data, or questions asked in questionnaires, that serve none of the objectives of the research. For example, in many monitoring or evaluation activities, beneficiary interviews ask about the family structure in detail, such as the family members disaggregated by gender and age group. Some may think these data are important, but experience says the opposite: they matter in the needs-assessment and beneficiary-selection phase, where they were already collected in previous activities, and in none of the cases I witnessed were they used in the course of writing a monitoring or evaluation report; in the best case, the family member details were collapsed into a single total. So why were all these details asked, exhausting the beneficiaries with all these questions?

Some researchers believe that if these data are not useful, they at least cause no issues; this belief is wrong. A large number of questions, and questions that have nothing to do with the research objectives, cause several problems: an increase in costs; an increase in the participants’ hesitation and fear because of the many, seemingly unjustified details they are asked about; a decline in the participants’ willingness to give serious answers as the interview grows longer and they tire; a greater chance of errors in data collection; more complexity in data analysis; and distraction of the researcher from processing the data and writing the report, leading to discussion of topics unrelated to the research objectives and distracting the decision-makers.

The observed cases that could be called rubbish data are countless. Take asking for the participant’s name in a political poll: the name does not matter at all, since the participant only represents a sample of the surveyed community groups (except in rare cases related to verification and follow-up of the data collection teams). Asking for the name will inevitably push answers further from the participant’s true opinions, out of fear that those answers will be linked to their name and expose them to harm. I always advise that the questions we ask be linked to the objectives of our research, and that we not say, “We wouldn’t lose anything if we ask this question.”

By:
Ghaith Albahr: CEO of INDICATORS

Ordinal Questions, Challenges and Issues

Published on: 2022-05-23نُشِرَ بتاريخ: 2022-05-23

Ordinal questions, in which the participant is asked to rank several options in order of priority, contain many issues.
Based on my observation of many cases, I will discuss these questions with a focus on the negative points:
1. Arranging options from most to least important is a cumbersome and time-consuming process, so most participants do not answer these questions seriously, and the order obtained is therefore inaccurate.
2. In questions where respondents choose the three most important answers in order, the ranking tends to follow the order in which the answers appear in the questionnaire design, meaning that participants tend to rate the answers mentioned to them first as the most important.
3. A big problem with the analysis of ordinal questions stems from the weakness of most statistical programs and the lack of ready-made analytical methods for these questions, so the data analyst is forced to do manual calculations, which causes issues in the analysis.
4. Problems in the outputs of the analysis:
- Calculating the ranks as weights gives a result that may exceed the real value: the numerical result we obtain does not express a real value, but rather the weight and importance of an option compared to the other options, not the percentage of those who chose it.
- Many data analysts have difficulty dealing with these questions, so they tend to use inappropriate methods such as presenting the analysis of the first priority only, presenting the analysis of each priority separately, or analyzing the question as an ordinary multi-select question.
- Errors in calculating weights: the weighting system in statistics is not arbitrary; depending on the case, weights take the form of ranks such as 1, 2, 3, or of probabilities or percentages of the original answers, etc.
- Errors in defining the weights of the answers: the first priority should take the weight 3 and the third priority the weight 1. Although the logical ranking order is the opposite, the final value must give a higher number to the first priority, and this is the mistake that some data analysts usually make (see the sketch after this list).
5. Issues for report writers, some of whom are confused about how to present and discuss the results correctly in the report.
6. Problems in disaggregating ordinal questions by other questions: the question occupies several columns in the database, and the weights must be taken into account alongside the disaggregation by one or more questions, which leads many data analysts to make mistakes when analyzing these questions.
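
To make the weighting pitfall in point 4 concrete, here is a minimal sketch in Python/pandas (the article itself works in SPSS and Excel; the column names, the options, and the 3-2-1 weighting scheme are illustrative assumptions). It computes an importance score per option from a rank-top-3 question and normalizes it, so the result reads as a weight, not a percentage of respondents:

import pandas as pd

# Hypothetical rank-top-3 responses: each column holds the option a
# respondent placed at that priority (column names are assumptions).
df = pd.DataFrame({
    "priority_1": ["Water", "Health", "Water", "Education"],
    "priority_2": ["Health", "Water", "Education", "Water"],
    "priority_3": ["Education", "Education", "Health", "Health"],
})

# The first priority gets the highest weight (3) and the third the lowest (1),
# which is the direction some analysts get backwards.
weights = {"priority_1": 3, "priority_2": 2, "priority_3": 1}

# Weighted count per option: add the weight of every slot in which it appears.
scores = pd.Series(0.0, index=pd.unique(df.values.ravel()))
for col, w in weights.items():
    scores = scores.add(df[col].value_counts() * w, fill_value=0)

# Normalized scores express relative importance, not the share of
# respondents who chose each option.
print((scores / scores.sum()).sort_values(ascending=False))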

By:
Ghaith Albahr: CEO of INDICATORS

Issues of Dealing with Missing Values

Published on: 2022-05-20نُشِرَ بتاريخ: 2022-05-20

Many data analysis programs are unable to distinguish between several kinds of values, namely:
· Missing Values
· Blanks
· Zero

This weakness of data analysis programs is compounded by the failure of many data analysts to distinguish between these values; as a result, the values are neither distinguished nor handled, and the data are not analyzed with these differences in mind.

Some may think these differences are not very important, ignore them, and leave them for the data analysis program to handle, but in most cases this gives catastrophic results that many people do not realize.

I will attempt to illustrate these differences through some examples:

1. Suppose we want to analyze the average income of households in a country suffering from a crisis, and a high percentage of respondents, more than 40% of the surveyed families, say that they have no income of any kind. When data analysts treated these cases as missing values, the results were utterly different from the real situation of society: the socio-economic indicators showed, for example, that only 10% of households were below the extreme poverty line, while the truth was that the percentage was more than 50%. Whoever has no income must be recorded with an income of zero rather than as a missing value, because a missing value is excluded from the calculations while a zero is included, and this affects the percentages and the overall average income. In the opposite case, when asking about the monthly salary, the salary of a person who has no job should be treated as a missing value rather than a zero, since they are unemployed and no salary should enter the calculation (see the sketch after this list).
2. Many programs do not treat blanks in text questions as missing values. SPSS, for example, does not treat an empty cell in a text question as a missing value but as a valid value: in a Gender column stored as text, the program will include the empty cells in its calculations, which significantly affects results such as percentages, even though respondents who did not indicate their gender should be treated as missing values.
3. In SPSS, when calculating a new column from other columns, some functions (formulas) handle missing values effectively and some do not. For example, when calculating the total number of family members from the counts of each group using the SUM function, SPSS returns a total even if one of the categories is missing, whereas summing the columns manually (adding them with +) returns a missing value whenever any of the components is missing.
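
A minimal sketch in Python/pandas of the income example in point 1 (the figures and the poverty threshold are illustrative assumptions, not taken from a real survey). It shows how treating “no income” as missing rather than zero inflates the average and hides households below the line:

import pandas as pd

# Hypothetical monthly household incomes; None marks households that
# reported having no income of any kind.
incomes = pd.Series([0, 250, None, 40, None, 300, None, 120, None, 80])

extreme_poverty_line = 60  # illustrative threshold

# Treating "no income" as missing: those households drop out of the mean
# and of the poverty-rate denominator entirely.
as_missing = incomes.dropna()
print("mean (treated as missing):", as_missing.mean())
print("share below line (treated as missing):",
      (as_missing < extreme_poverty_line).mean())

# Treating "no income" as zero: the same households pull the mean down
# and are counted as below the extreme poverty line.
as_zero = incomes.fillna(0)
print("mean (treated as zero):", as_zero.mean())
print("share below line (treated as zero):",
      (as_zero < extreme_poverty_line).mean())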

The cases in which defining missing values causes issues are unlimited, and I do not advise, in any case, leaving the data analysis program, or the data analyst alone, free to guess how to treat those values. The appropriate treatment and definition of the empty value must be decided explicitly: as explained above, in the income case the missing value must be treated as zero, in the salary case it must remain a missing value, and in our third example the empty cells of any category of family members must be treated as zero, bearing in mind that from the beginning data collectors must be told that if a family has no members of a certain category, they should not leave the cell empty but fill it with a zero.

By:
Ghaith Albahr: CEO of INDICATORS

Outliers Processing

Published on: 2022-05-20نُشِرَ بتاريخ: 2022-05-20

Some data analysts pay no attention to outliers, and some may be encountering the term for the first time while reading this article. Outliers have a significant impact on many statistical indicators, and the methods for handling and processing them depend on many factors, some simple and some more complex and related to the type of statistical indicator: the data analyst must know which statistical parameters are resistant (robust) to outliers and which are not, since this determines the degree to which an indicator is affected by them.

For example, the mean is considered one of the best indicators of central tendency, but it is far more affected by outliers than the median, even though the median is considered a less precise indicator than the mean.

In the following lines I will address one important, and relatively simple, aspect of the topic: the methods of processing outliers.

Methods of processing outliers:
1. Revision of the source: we go back to the source to check the value; if there is an entry mistake, it is corrected. For example, if the age in a study about children was entered as 22 by mistake instead of 2, we simply discover that it is an entry mistake and correct it.
2. Logical processing of outliers: outlier mistakes can also be discovered through logical checks. When studying the labor force, for example, the data of a person who is 7 years old are deleted because they are not part of the labor force.
3. Distinguishing between what to keep and what to delete: this process is very exhausting, as there are no precise criteria for accepting or rejecting outliers. In this regard, SPSS offers a useful feature: it classifies unusual values into two types, Outliers (values lying between one and a half and three times the inter-quartile range beyond the first or third quartile) and Extreme values (values lying more than three times the inter-quartile range beyond them), in other words, data far from the center of the data and data extremely far from it. This classification can be adopted by accepting the outliers and deleting the extreme values (see the sketch after this list).
4. Replacing the outliers that have been deleted: the last and most sensitive step is deciding how to deal with the deleted outliers, whether to keep them deleted (as missing values) or to replace them. The challenge begins with the decision itself, since leaving them as missing values entails consequences and challenges, and so does replacing them. The decision to replace deleted outliers must then be followed by an appropriate replacement methodology; replacing missing values is itself complicated, with various methodologies and options, each of which affects the results of the data analysis in its own way (I will talk about replacing missing values in another post).
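
As an illustration of point 3, here is a minimal sketch in Python that applies the box-plot rule with the same 1.5x/3x inter-quartile-range thresholds described above (the age values are invented for the example):

import numpy as np

def classify_outliers(values):
    # Box-plot rule: 1.5-3 times the IQR beyond a quartile = outlier,
    # more than 3 times the IQR beyond a quartile = extreme value.
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    labels = []
    for v in values:
        distance = max(q1 - v, v - q3, 0)  # distance beyond the nearer quartile
        if distance > 3 * iqr:
            labels.append("extreme")
        elif distance > 1.5 * iqr:
            labels.append("outlier")
        else:
            labels.append("normal")
    return labels

# Hypothetical ages in a study about children, with two suspicious entries.
ages = [2, 3, 4, 5, 3, 2, 6, 4, 12, 22, 70]
print(list(zip(ages, classify_outliers(ages))))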

It is not simple to summarize the methodologies for dealing with outliers in these few lines. Deleting outliers confronts us with further choices: shall we leave them as missing values or replace them with alternative values? Moreover, when we delete outliers and reanalyze the data, we will find that new outliers have appeared, values that were not considered outliers in the database before it was modified (before the outliers were deleted in the first stage). I therefore recommend that data analysts study this topic further, to an extent that matches the volume and sensitivity of their data.

By:
Ghaith AlBahr (Mustafa Deniz): CEO of INDICATORS

Comparing SPSS vs Excel

Published on: 2022-04-25نُشِرَ بتاريخ: 2022-04-25

Data Analysis: Excel vs SPSS Statistics

An important question occurs to many people interested in data analysis, or who may need to use data analysis programs for work or research: “What is the difference between Excel and SPSS, and when is each of them recommended?”

In this article we provide a brief description of the advantages and disadvantages of each, categorized according to the specialization or field of the required data analysis:

First: data analysis for academic research

We strongly recommend using SPSS, as it offers a very wide range of statistical analyses with almost endless options. In this field, Excel cannot in any way provide what SPSS does.

For example, SPSS provides:

Parametric and non-parametric tests, with wide options that cover many of the tests required by researchers who are not specialized in statistics.
Regression and correlation analysis of various types, linear and non-linear, with the associated tests and a wide range of related analysis options.
Time series analysis.
Questionnaire reliability tests.
Neural network analysis.
Factor analysis.
Survival analysis.
Statistical quality control analysis and charts.

Along with many other statistical analyses that serve academic fields.

Second: data analysis for non-academic research

It can be classified into several levels of data analysis:

Descriptive data analysis:

In general, both programs can provide all the analyses required for descriptive statistics, but Excel has some minor flaws: it does not arrange answers in their logical order but alphabetically, and it cannot combine calculations on text answers with calculations based on their inherent order (ordinal data), such as computing a Likert scale (a sketch of the ordering issue follows below).
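
To illustrate the ordering issue in a neutral way (the article compares Excel and SPSS; this sketch simply uses Python/pandas, and the answer labels are invented), an ordered categorical variable keeps Likert answers in their logical order rather than alphabetical order and allows an ordinal score:

import pandas as pd

# Hypothetical Likert answers; sorted alphabetically, "Agree" would come
# first and the logical order of the scale would be lost.
answers = pd.Series(["Agree", "Neutral", "Strongly agree", "Agree",
                     "Disagree", "Strongly disagree", "Neutral"])

scale = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

# Declare the logical order explicitly.
ordered = pd.Series(pd.Categorical(answers, categories=scale, ordered=True))

# Frequencies now come out in scale order rather than alphabetical order...
print(ordered.value_counts(sort=False))

# ...and the ordinal codes (1-5) allow an average Likert score.
print("mean score:", (ordered.cat.codes + 1).mean())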

SPSS also provides tools for analyzing multi-select questions with advanced options, which Excel does not offer; in Excel we have to use functions to obtain those analyses, with limited options and problems in the percentages we get from them.

Disaggregation analysis:

It can be said that both programs are reliable in this respect, except for multiple and complex disaggregations/cross-tabulations with multi-select questions; in these cases Excel becomes slower and less effective, while SPSS offers all options, no matter how complex, at the same speed as descriptive analysis and simple disaggregation. In addition, SPSS provides filtering and data-splitting features that accelerate data analysis enormously: the required analysis can be run for 20 regions separately at the same speed as analyzing the data of one region, while in Excel this means doing 20 times the work (a sketch of the split-analysis idea follows below).
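
A minimal sketch of the split-analysis idea in Python/pandas (region names and values are invented): one grouped statement produces the same breakdown for every region at once, which is what SPSS automates with its split and filter features:

import pandas as pd

# Hypothetical survey data covering several regions.
df = pd.DataFrame({
    "region":       ["Idlib", "Idlib", "Afrin", "Afrin", "Jarablus", "Jarablus"],
    "gender":       ["F", "M", "F", "M", "F", "M"],
    "satisfaction": [2, 4, 3, 5, 1, 3],
})

# One statement yields the breakdown for every region and gender;
# a 20th region would add no extra analyst work.
print(df.groupby(["region", "gender"])["satisfaction"].mean())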

SPSS performs descriptive analysis and data disaggregation much faster than one might think: some analyses that take a week in Excel can be completed in just a few minutes in SPSS.

Third: Analyzing data of demographic indicators

When talking about demographic indicators, each of the two programs faces a challenge. In SPSS we can perform numerous, complex and very fast arithmetic operations that outperform Excel; however, SPSS has some minor but important weaknesses. The most notable concerns multi-column conditional arithmetic: SPSS provides multi-column arithmetic operations, but they do not support multiple conditions, whereas Excel provides this feature with a wide variety of effective conditional functions.

Fourth: Data management and linking databases in the analysis

In this particular aspect Excel clearly stands out: with the Power Query package it offers data management, merging, aggregation and data-cleaning features, in addition to the ability to link various databases without merging them and to analyze them together with all types of analyses.

SPSS, by contrast, does not offer the ability to analyze separate databases without merging them. It can solve a large part of this problem by merging the databases, but this entails many challenges and a great possibility of error: when merging more than one database, cases are usually repeated to match the other database, which means that when we analyze the database containing the duplicates we must perform operations that cancel this repetition in order to obtain correct analyses.

The ability to manage data and analyze separate databases together is a great advantage of Excel, but in most cases it is not required, as it is only needed in complex and advanced projects.

On the other hand, the Data menu in SPSS provides many features that can only be described as excellent, and this article is too short to cover them; briefly, they give SPSS data-management capabilities that can outperform Excel in some respects, such as the Restructure (unpivot) feature, whose options are far more advanced and powerful than Excel’s (the sketch below illustrates the unpivot idea).
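
A minimal sketch of the unpivot idea in Python/pandas (the column names are invented): wide, one-column-per-category data is restructured into one row per household and category, which is the kind of transformation the SPSS Restructure wizard and Power Query’s Unpivot Columns perform:

import pandas as pd

# Hypothetical "wide" household data: one column per age group.
wide = pd.DataFrame({
    "household_id": [1, 2],
    "children":     [3, 1],
    "adults":       [2, 2],
    "elderly":      [0, 1],
})

# Unpivot / restructure: one row per household and age group, the shape
# usually needed for disaggregated analysis.
long = wide.melt(id_vars="household_id", var_name="age_group", value_name="members")
print(long)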

Fifth: Weighting

One of the very important aspects of data analysis, especially in demographic statistics, humanitarian needs analysis and advanced market research, is the Weighting feature, which calculates results after taking into account a weight expressing, for example, the population of the governorate or studied area, so that each area contributes to the estimated needs in proportion to its size (the sketch below illustrates the idea).
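
A minimal sketch of what weighting does, in Python (the shares and population figures are invented): the population-weighted estimate lets each governorate count in proportion to its size, which is what the weighting option applies automatically to every calculation:

import numpy as np

# Hypothetical governorate-level results and population weights.
share_in_need = np.array([0.60, 0.30, 0.45])            # share of households in need
population    = np.array([900_000, 200_000, 400_000])   # governorate populations

# Unweighted mean treats every governorate equally.
print("unweighted:", share_in_need.mean())

# Population-weighted mean counts each governorate in proportion to its size.
print("weighted:", np.average(share_in_need, weights=population))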

Excel does not provide this feature; weights can be applied manually using functions, but this sometimes causes problems in the results, especially in disaggregation analyses.

In SPSS, once you turn on the weighting option, it is automatically applied to all calculations, whatever they are, even charts, and it can be switched off again with a single click.

This is a simple comparison between the two programs; we hope it gives a preliminary perspective and helps data analysis specialists, and institutions that need to build their teams’ capacities in this field, to choose the program that suits them best.

 

By:
Ghaith Albahr: CEO of INDICATORS

The Influence of Turkish Language Level on the Integration of Syrian Refugees

Published on: 2022-02-07نُشِرَ بتاريخ: 2022-02-07

In view of the importance of working to achieve the integration of Syrian refugees into Turkish society, of reducing the tension among Turks towards Syrians, and of determining the role of the Turkish language in achieving that integration, we conducted this study. It aims to reveal the level of mastery of the Turkish language among Syrian refugees in Turkey, identify the reasons that hinder their ability to learn the language, gauge the degree of integration of Syrians into Turkish society and the impact of their language mastery on their integration and Turks’ acceptance of them, and examine the situation of Syrian refugees in Germany regarding learning German, in order to benefit from the German experience in developing the language abilities and skills of Syrian refugees in Turkey.

The study was conducted during the second half of 2020 and covered the provinces of Istanbul, Gaziantep, Hatay, and Urfa, where the largest numbers of Syrians reside. Key informant interviews were conducted with informants interested in refugee integration in Turkey and Germany, and questionnaires were administered to 340 Syrians residing in the covered provinces. The study adopted a stratified random sampling method to ensure the inclusion of Syrians according to several variables such as gender, age, and educational level.

PSEA

Published on: 2022-01-17نُشِرَ بتاريخ: 2022-01-17

Given the severity of the exposure of IDPs in Syria to sexual or financial exploitation, we conducted a study on this issue, shedding light on:

The percentage of people who were subjected to exploitation and abuse by some humanitarian or relief sectors.
The reasons that made the victims of such abuses refrain from filing complaints.
The type and form of the abuse or exploitation they were subjected to.
The extent of their knowledge about how to get help and support if they are subjected to such abuses.

HOW SYRIAN ACTIVISTS CONSIDER THEIR COUNTRY’S FUTURE CONSTITUTION

Published on: 2021-03-01نُشِرَ بتاريخ: 2021-03-01

The constitution is of great importance in the legal system of a country, since it is the supreme law of the land. It includes the general and basic rules that define the form of the state (simple or composite), the type of rule (monarchical or republican) and the form of government (presidential, parliamentary or mixed). The constitution also regulates the public authorities (legislative, judicial and executive) through which the state carries out its tasks, the jurisdiction of each of them and their relationships with one another, and sets out the economic, social, political and cultural principles. It also has a great impact on the citizens of a country, as it defines their rights, freedoms and duties and stipulates the guarantees that secure these rights in the face of the arbitrariness of authority. The constitution takes precedence over all other legal rules in the state, which means that the state’s authorities must abide by its provisions when issuing any decisions or enacting any legislation, and that any decisions or legislation that contradict it are null and void.
In Syria, following the military coup carried out by Hafez Al-Assad in 1970 and his assumption of power, the permanent constitution of the republic was issued in 1973. In fact, the provisions and texts of that constitution were intended to consolidate the rule of Hafez Al-Assad and the Arab Socialist Baath Party: Article 8 affirmed that the Baath Party is the leading party of society and the state, and that it leads the National Progressive Front, which includes the other political parties in the country. The 1973 Constitution also granted the President of the Republic sweeping powers that allow him to interfere in the work of all state authorities in a way that empties the principle of the separation of powers of any content or meaning.
With the outbreak of public protests in Syria in March 2011 and the expansion of their geographical reach, the Syrian regime undertook some nominal reforms, including the adoption of a new constitution for Syria in 2012, which abolished Article 8 of the 1973 Constitution and stipulated party pluralism in Syria. It nonetheless preserved the president’s vast, near-absolute powers: he heads the executive authority, can unilaterally issue legislation or block legislation passed by the People’s Assembly, presides over the Supreme Judicial Council, which appoints the judges of the Supreme Constitutional Court, and is the supreme commander of the Army and Armed Forces, in addition to other powers such as appointing civil and military employees and conducting popular referendums, in cases considered to contradict the constitution, as well as a seven-year presidential term that is open to renewal.

Objectives:
The study aims to explore the views of Syrian activists, community leaders, jurists, other influential people within Syrian society, and people interested in political affairs on the most important constitutional principles that they believe should be stipulated in the new constitution currently being drafted, in order to present the members of the Constitutional Committee with a clear picture of the aspirations and desires of Syrians, to be taken into account during the committee’s meetings and the discussions that take place within the framework of the constitution-drafting process.

Research type: Political Research

Publish date: March 2021

Publisher: INDICATORS Center

CONSTITUTIONAL PROCESS IN THE EYES OF SYRIANS

Published on: 2021-01-01نُشِرَ بتاريخ: 2021-01-01

Since the early years of the conflict in Syria, the international community has sought to reach a political solution to end the ongoing violence in the country. In 2012, the Action Group for Syria held its talks in the Swiss city of Geneva, headed by the then UN envoy to Syria, Kofi Annan, who announced after the conclusion of the talks that the meeting had issued a detailed statement, known as the Geneva 1 statement, which stressed the need to pressure all parties to implement the six-point plan (the Annan plan). The statement also condemned the continuation and escalation of combat operations, destruction and human rights violations, recommended that all parties commit to ceasing armed violence and intensify the pace of release of arbitrarily detained people, and called for the formation of a transitional governing body and a review of the constitutional and legal system in Syria.

Later, on 18 December 2015, the Security Council unanimously adopted Resolution 2254, which outlined the features of a political solution in Syria. The resolution affirmed that the Syrian people are the ones who shall decide the future of their country, demanded an end to attacks on civilians, and stipulated that the United Nations Secretary-General shall invite representatives of the regime and of the Syrian negotiating committee to participate in formal negotiations on the path towards political transition. The resolution also expressed support for the launch of a Syrian-led political process facilitated by the United Nations that establishes credible, inclusive and non-sectarian governance, and that sets a timetable for drafting a new constitution for the country and holding free and fair elections pursuant to the new constitution, under UN supervision and in a safe and neutral environment, with the participation of all Syrians, including those living in exile.

In fact, it can be said that none of the items of the statements and resolutions related to the path of a political solution in Syria have been implemented in practice, owing to the regime’s procrastination and its unwillingness to enter that path seriously, which has prolonged the political process through several rounds of talks. The roadmap for a political solution in Syria was adopted in 2017 on the basis of UN resolutions 2254 of 2015 and 2118 of 2013, which contains the Geneva Declaration. This roadmap stipulated work, in parallel or in succession, on four axes: governance, the constitution, elections, and a safe and neutral environment. In this context, the Sochi Conference affirmed its support for the implementation of Security Council Resolution 2254 and called on the United Nations to form the Constitutional Committee as a contribution to the UN-led political process in Geneva and to the implementation of Resolution 2254 of 2015. Accordingly, the United Nations conducted indirect negotiations between the regime’s government and the Syrian Negotiating Committee to form the Constitutional Committee and to agree on its terms of reference and the basic elements of its internal regulations.

Objectives:
This study aims to identify the views of Syrians of all sects and components about the Constitutional Committee and its work, to identify the issues that are a priority for them regarding the path of a political solution in Syria, including the drafting of a new constitution for the country, to reveal the extent of their confidence in the Committee’s work and its ability to advance the political process, and to identify their reservations about its work and the way it was formed.

The study also aims to identify Syrians’ positions on the three delegations of the Constitutional Committee (the delegation of the Syrian government, the delegation of the opposition, and the delegation of civil society) and the most prominent means that they believe can increase their ability to communicate their desires and aspirations to the members of the Committee, so that these can be taken into consideration when drafting constitutional texts. It also aims to identify the views of Syrians on certain constitutional issues, such as the Arab identity of the Syrian Republic, the relationship between the state and religion, the relationship of the state authorities with each other, and women’s issues.

Research type: Political Research

Publish date: January 2021

Publisher: INDICATORS Center

Syrians’ Right to Legal Documents

Published on: 2019-05-01نُشِرَ بتاريخ: 2019-05-01

Due to the importance of personal identification documents and the negative consequences that not possessing them has on the lives of individuals, this study was conducted in order to reveal the number of Syrians who do not have personal identification documents, to identify the documents that Syrians struggle most to obtain, and to list the negative consequences of not possessing them.
The study was conducted in the cities of Idlib and Salqeen in Syria, as well as the city of Urfa in Turkey and the regions of Arsal and the Bekaa in Lebanon, with 305 male and female participants selected with consideration of a number of variables such as residence status, age and social status. Data collection was carried out using a questionnaire with closed-ended questions.
The results of the study showed that many Syrians, whether residing in the liberated areas or in the countries of refuge, do not possess personal identity documents of all kinds, especially passports, civil registry records or educational documents. Additionally, many Syrian children are still not registered with official state departments, and many young people over fourteen years old still do not possess IDs. Many Syrians have lost their personal identification documents as a result of the bombing of their areas or during displacement, or their documents were confiscated or destroyed by the various military bodies and forces controlled by the Syrian regime.
The fear of being arrested by pro-regime forces prevented many Syrians from traveling to regime-held areas, which constituted the main reason for their inability to obtain any official document, in addition to the fact that many of them cannot afford to pay for these documents, considering that paying bribes to employees of governmental institutions or hiring a lawyer are the most common ways to obtain them.
As for the negative consequences of not possessing official documents and not registering personal status documents with the official government departments, the most concerning is depriving unregistered children of their nationality, particularly when marriage documents cannot be registered, and the inability of people without personal IDs to vote or run for public service or state department jobs, in addition to depriving them of many of the most basic rights without which people cannot live properly, such as the right to education and the right to work, as well as the prevention of travel, the restriction of individual freedom, and deprivation of health care.