Central tendency, distribution, skewness and kurtosis
- Host: GitHub
- URL: https://github.com/moindalvs/learn_statistics_for_data_science
- Owner: MoinDalvs
- Created: 2022-03-26T09:13:49.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2022-12-05T08:46:57.000Z (almost 2 years ago)
- Last Synced: 2023-03-08T12:08:20.645Z (over 1 year ago)
- Topics: kurtosis, mean, median, model, numpy, pandas, skewness
- Language: Jupyter Notebook
- Homepage:
- Size: 954 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Learn_Statistics_for_Data_science
![inbox_221701_c1e7f0c6c305bffe3d62bd37d8783ed8_prob](https://user-images.githubusercontent.com/99672298/190671924-4c9b0dd4-243b-4828-9ade-f91a0555bb16.png)
### `Read and catch up on content:`
- [Gradient Descent article](https://github.com/MoinDalvs/Gradient_Descent_For_beginners/blob/main/README.md) :books:

## Don't get lost in the Statistics. Below is the path. Follow it and conquer it!
## 0.1 Table of Contents
1. [Let's talk about Statistics and its types.](#1)
- 1.1 [Descriptive Statistics](#1.1)
- 1.2 [Inferential Statistics](#1.2)
2. [Data Types and Level of Measurement](#2)
- 2.1 [Importance of Qualitative and Quantitative Data](#2.1)
- 2.2 [Can Ordinal and Discrete type overlap?](#2.2)
- 2.3 [Level of Measurements](#2.3)
3. [Moments of Business Decision](#3)
- 3.1 [Measures of Central tendency](#3.1)
- 3.2 [Mean](#3.2)
- 3.3 [Median](#3.3)
- 3.4 [Mode](#3.4)
- 3.5 [Causes of a Multimodal Distribution](#3.5)
- 3.6 [Mean vs Median](#3.6)
- 3.7 [Variance](#3.7)
- 3.8 [Range](#3.8)
- 3.9 [Standard Deviation](#3.9)
4. [The Difference Between Standard Deviation and Average Deviation](#4)
- 4.1 [Understanding Standard Deviation](#4.1)
- 4.2 [Average Deviation](#4.2)
5. [Skewness](#5)
6. [Kurtosis](#6)
- 6.1 [What is Excess Kurtosis?](#6.1)
- 6.2 [Mesokurtic](#6.2)
- 6.3 [Leptokurtic](#6.3)
- 6.4 [Platykurtic](#6.4)
7. [Percentile vs. Quartile vs. Quantile: What’s the Difference?](#7)
- 7.1 [What is a Quantile?](#7.1)
8. [BoxPlot](#8)
- 8.1 [Why 1.5 times IQR? Why not 1 or 2 or any other number?](#8.1)
9. [Central Limit Theorem](#9)
- 9.1 [What Is the Central Limit Theorem (CLT)?](#9.1)
- 9.2 [The Central Limit Theorem in Finance](#9.2)
- 9.3 [Why Is the Central Limit Theorem Useful?](#9.3)
- 9.4 [Why Is the Central Limit Theorem's Minimum Sample Size 30?](#9.4)
- 9.5 [What Is the Formula for Central Limit Theorem?](#9.5)
10. [Sampling techniques](#10)
- 10.1 [Sampling Frame](#10.1)
- 10.2 [Sampling Frame vs. Sample Space](#10.2)
- 10.3 [What is Sampling Distribution](#10.3)
- 10.4 [Sample Design](#10.4)
- 10.5 [Non-Random Sampling Methods](#10.5)
- 10.6 [Random Sampling Methods](#10.6)
11. [Sampling Error](#11)
12. [Distribution](#12)
- 12.1 [Normal Distribution](#12.1)
- 12.2 [Properties of Normal Distribution](#12.2)
- 12.3 [Binomial Distribution](#12.3)
13. [Outliers](#13)
- 13.1 [What are Outliers? 🤔](#13.1)
- 13.2 [Why do they occur?](#13.2)
- 13.3 [How do they affect?](#13.3)
- 13.4 [Detecting and Handling Outliers](#13.4)
14. [Q-Q Plot](#14)
15. [Distribution Function](#15)
- 15.1 [The Cumulative Distribution Function (CDF)](#15.1)
- 15.2 [Probability Density Function (PDF)](#15.2)
- 15.3 [Probability Density Function (PDF) vs Cumulative Distribution Function (CDF)](#15.3)
16. [Covariance](#16)
17. [Correlation](#17)
- 17.1 [Kendall’s Tau and Spearman’s Rank Correlation Coefficient](#17.1)
- 17.2 [Pearson's correlation](#17.2)
18. [Permutation and Combination](#18)

![30 03 2022_15 52 11_REC](https://user-images.githubusercontent.com/99672298/190672178-6591a16f-0939-4262-9226-3bcfb40a7e7f.png)
![30 03 2022_15 52 32_REC](https://user-images.githubusercontent.com/99672298/190672197-76e232bd-1202-408c-a7b0-1bde68671d38.png)
![30 03 2022_15 53 01_REC](https://user-images.githubusercontent.com/99672298/190672213-c2e51ed4-1840-4fa5-b51e-6a4916968f4f.png)
____
## 1.) Statistics and its types
Statistics is broadly categorized into two types:
+ Descriptive Statistics
+ Inferential Statistics

![01 02 2022_22 16 38_REC](https://user-images.githubusercontent.com/99672298/190960580-a9d37c78-33e0-4092-bb0c-035ed724d6b7.png)
### 1.1) Descriptive Statistics:
It utilizes numerical and graphical methods to look for patterns in a dataset, to summarize the information revealed in a dataset, and to present that information in a convenient form that individuals can use to make decisions (mean, median, mode, variance, standard deviation, charts, or probability distributions).

Basically, as part of descriptive statistics, we measure the following:
+ Frequency: no. of times a data point occurs
+ Central tendency: the centrality of the data – mean, median, and mode
+ Dispersion: the spread of the data – range, variance, and standard deviation
+ The measure of position: percentiles and quantile ranks

### 1.2) Inferential Statistics
Inferential statistics uses sample data to make estimates, decisions, predictions, or other generalizations by running hypothesis tests to assess the assumptions made about the population parameters.

In simple terms, we interpret the meaning of the descriptive statistics by inferring them to the population. The main goal of inferential statistics is to draw a conclusion about a population based on a sample of data from that population.
For example, suppose we are conducting a survey on the number of two-wheelers in a city. Assume the city has a total population of 5 lakh (500,000) people. We take a sample of 1,000 people, as it is impossible to run an analysis on the entire population's data.
From the survey conducted, it is found that 800 people out of 1,000 (80%) are using two-wheelers. So, we can infer these results to the population and conclude that 4 lakh people out of the 5 lakh population are using two-wheelers.
[Table of Content](#0.1)
___
## 2.) Data Types and Level of Measurement
There is structured and unstructured data. Then you have qualitative and quantitative data. Now let's explore two more data types (discrete and continuous) to help you understand the difference.
![image](https://user-images.githubusercontent.com/99672298/190947841-6179d7b7-2799-4856-b4be-d5987abfbde0.png)
At a higher level, data is categorized into two types: Qualitative and Quantitative.
![25 02 2022_15 16 47_REC](https://user-images.githubusercontent.com/99672298/190960830-5d75e078-ee70-49ee-98f1-20a6ce630ffc.png)
Qualitative data is non-numerical. Some of the examples are eye colour, car brand, city, etc.
On the other hand, Quantitative data is numerical, and it is again divided into Continuous and Discrete data.
![25 02 2022_15 17 21_REC](https://user-images.githubusercontent.com/99672298/190960918-62283d9a-addc-41dc-85e1-806dcb215fcd.png)
### 2.1) Importance of Qualitative and Quantitative Data
Qualitative types of data in research work around the characteristics of the retrieved information and help understand customer behavior. This type of data in statistics helps run market analysis through genuine figures and create value out of a service by implementing useful information. Qualitative types of data in statistics can drastically affect customer satisfaction if applied smartly.

On the other hand, the quantitative data types of statistical data work with numerical values that can be measured, answering questions such as ‘how much’, ‘how many’, or ‘how many times’. Quantitative data types in statistics contain a precise numerical value. Therefore, they can help organizations use these figures to gauge improved and faulty figures and predict future trends.
### 2.2) Can Ordinal and Discrete type overlap?
If you pay attention to this, you can assign numbers to the ordinal classes, so should it then be called a discrete type or ordinal? The truth is that it is still ordinal. The reason is that even if numbering is done, it doesn’t convey the actual distances between the classes.

For instance, consider the grading system of a test. The respective grades can be A, B, C, D, E, and if we number them from the start they would be 1, 2, 3, 4, 5. According to the numerical differences, the distance between an E grade and a D grade is the same as the distance between a D and a C grade, which is not very accurate: we all know that a C grade is still acceptable compared to an E grade, but the equal numeric spacing declares them the same.
You can also apply the same reasoning to a survey form where user experience is recorded on a scale of very poor to very good. The differences between the various classes are not clear and therefore can’t be quantified directly.
**Continuous data:** It can be represented in decimal format. Examples are height, weight, time, distance, etc.
![image](https://user-images.githubusercontent.com/99672298/190947983-633b685c-8c2b-4d2d-9e19-59ae88813b17.png)
**Discrete data:** It cannot be represented in decimal format. Examples are the number of laptops, number of students in a class.
Discrete data is again divided into Categorical and Count Data.
**Categorical data:** represent the type of data that can be divided into groups. Examples are age, sex, etc.
**Count data:** This data contains non-negative integers. Example: number of children a couple has.
![image](https://user-images.githubusercontent.com/99672298/190947605-87aca262-3b39-49bc-98d2-ea4e200c91fb.png)
### 2.3) Level of Measurements
In statistics, the level of measurement is a classification that describes the relationship between the values of a variable.

#### Discrete
Numerical values that are integers or whole numbers fall under this category. The number of speakers in a phone, the number of cameras, the number of cores in a processor, and the number of SIMs supported are some examples of the discrete data type.

Discrete data types in statistics cannot be measured – they can only be counted, as the objects included in discrete data have a fixed value. The value can be represented in decimal form, but it has to be whole. Discrete data is often presented through charts, including bar charts, pie charts, and tally charts.
#### Continuous
Fractional numbers are considered continuous values. These can take the form of the operating frequency of a processor, the Android version of a phone, Wi-Fi frequency, the temperature of the cores, and so on.

Unlike discrete types of data in research, which have a whole and fixed value, continuous data can break down into smaller pieces and can take any value. For example, volatile values such as temperature and the weight of a human can be included as continuous values. Continuous types of statistical data are represented using a graph that easily reflects value fluctuation by the highs and lows of the line through a certain period of time.
![image](https://user-images.githubusercontent.com/99672298/190948985-801268e9-f19a-4aa0-845e-54fdce98a1fb.png)
![25 02 2022_15 17 50_REC](https://user-images.githubusercontent.com/99672298/192697029-50a74435-9ab3-4a8c-a839-2e472e93e182.png)
![image](https://user-images.githubusercontent.com/99672298/190952524-d2c841eb-e176-4bec-8c2f-20941eef601b.png)

We have four fundamental levels of measurement. They are:
+ Nominal Scale
+ Ordinal Scale
+ Interval Scale
+ Ratio Scale

![01 02 2022_22 23 08_REC](https://user-images.githubusercontent.com/99672298/190960629-6490a598-e380-437f-a3ef-089da872f66f.png)
![image](https://user-images.githubusercontent.com/99672298/190980595-655ecb68-d1ea-441a-abf1-7f46625ad73f.png)
![image](https://user-images.githubusercontent.com/99672298/190980780-6374b5bd-44f8-4c75-ae5c-259638d1068e.png)

A. **Nominal Scale:**
These are the set of values that don’t possess a natural ordering. Let’s understand this with some examples.
This scale contains the least information since the data have names/labels only. It can be used for classification. We cannot perform mathematical operations on nominal data because there is no numerical value to the options (numbers associated with the names can only be used as tags).
The nominal scale uses categories, so finding the median makes no sense. You could put the items in alphabetical order, but even then the middle item would have no meaning as a median. However, a mode (the most frequent item in the set) is possible. For example, if you were to survey a group of random people and ask them what the most romantic city in the world is, Venice or Paris might be the most common response (the mode). It is not possible to state that ‘Red’ is greater than ‘Blue’. The gender of a person is another one where we can’t differentiate between male, female, or others.

Nominal data types in statistics are not quantifiable and cannot be measured through numerical units. Nominal types of statistical data are valuable while conducting qualitative research as they extend freedom of opinion to subjects.
Example: Which country do you belong to? India, Japan, Korea.
+ Gender: Male, Female, Other.
+ Hair Color: Brown, Black, Blonde, Red, Other.
+ Type of living accommodation: House, Apartment, Trailer, Other.
+ Genotype: Bb, bb, BB, bB.
+ Religious preference: Buddhist, Mormon, Muslim, Jewish, Christian, Other.

B. **Ordinal Scale:**
These types of values have a natural ordering while maintaining their class of values. If we consider the size of a clothing brand then we can easily sort them according to their name tag in the order of small < medium < large. The grading system while marking candidates in a test can also be considered as an ordinal data type where A+ is definitely better than B grade.
These categories help us decide which encoding strategy can be applied to which type of data. Encoding qualitative data is important because machine learning models can’t handle these values directly; they need to be converted to numerical types, as the models are mathematical in nature.
For the nominal data type, where there is no comparison among the categories, one-hot encoding can be applied (similar to binary coding, provided the categories are few in number), and for the ordinal data type, label encoding, which is a form of integer encoding, can be applied, as sketched below.
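A minimal pandas sketch of the two encoding strategies just described; the column names and category values are made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "city":  ["Pune", "Delhi", "Pune", "Mumbai"],   # nominal: no natural order
    "grade": ["A", "C", "B", "A"],                   # ordinal: A > B > C
})

# Nominal -> one-hot encoding (each category becomes its own 0/1 column)
one_hot = pd.get_dummies(df["city"], prefix="city")

# Ordinal -> label/integer encoding that preserves the order of the grades
grade_order = {"A": 3, "B": 2, "C": 1}
df["grade_encoded"] = df["grade"].map(grade_order)

print(pd.concat([df, one_hot], axis=1))
```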
![image](https://user-images.githubusercontent.com/99672298/190952490-07302572-4556-4393-b3cb-56789c4be669.png)
These scales are generally used to depict non-mathematical ideas such as frequency, satisfaction, happiness, a degree of pain, etc. It is quite straightforward to remember the implementation of this scale as ‘Ordinal’ sounds similar to ‘Order’, which is exactly the purpose of this scale.
Ordinal data is made up of ordinal variables. In other words, if you have a list that can be placed in “first, second, third…” order, you have ordinal data. It sounds simple, but there are a couple of elements that can be confusing:
+ You don’t have to have the exact words “first, second, third….” Instead, you can have different rating scales, like “Hot, hotter, hottest” or “Agree, strongly agree, disagree.”
+ You don’t know if the intervals between the values are equal. We know that a list of cardinal numbers like 1, 5, 10 have a set value between them (in this case, 5), but with ordinal data you just don’t know. For example, in a marathon you might have first, second and third place. But if you don’t know the exact finishing times, you don’t know what the interval between first and second, or second and third, is.

C. **Interval Scale:**
It is a numerical scale. The interval scale carries more information than the nominal and ordinal scales. Along with the order, we know the difference between two variables (the interval indicates the distance between two entities).

These scales are effective as they open doors for the statistical analysis of the provided data. Mean, median, or mode can be used to calculate the central tendency on this scale. The only drawback of this scale is that there is no pre-decided starting point or true zero value.
Interval scale contains all the properties of the ordinal scale, in addition to which, it offers a calculation of the difference between variables. The main characteristic of this scale is the equidistant difference between objects.
For instance, consider a Celsius/Fahrenheit temperature scale –
+ 80 degrees is always higher than 50 degrees and the difference between these two temperatures is the same as the difference between 70 degrees and 40 degrees.
+ Also, the value of 0 is arbitrary because negative values of temperature do exist – which makes the Celsius/Fahrenheit temperature scale a classic example of an interval scale.
+ Interval scale is often chosen in research cases where the difference between variables is a mandate – which can’t be achieved using a nominal or ordinal scale. The Interval scale quantifies the difference between two variables whereas the other two scales are solely capable of associating qualitative values with variables.
The mean and median values of an interval scale can be evaluated, unlike with the previous two scales.
+ In statistics, the interval scale is frequently used, as a numerical value can not only be assigned to variables but calculations on the basis of those values can also be carried out.

D. **Ratio Scale:**
The ratio scale has the most information about the data. Unlike the other three scales, the ratio scale can accommodate a true zero point. The ratio scale is simply said to be the combination of the nominal, ordinal, and interval scales.

With the option of a true zero, varied inferential and descriptive analysis techniques can be applied to the variables. In addition to the fact that the ratio scale does everything that a nominal, ordinal, and interval scale can do, it can also establish the value of absolute zero. The best examples of ratio scales are weight and height. In market research, a ratio scale is used to calculate market share, annual sales, the price of an upcoming product, the number of consumers, etc.
+ Ratio scale provides the most detailed information as researchers and statisticians can calculate the central tendency using statistical techniques such as mean, median, mode, and methods such as geometric mean, the coefficient of variation, or harmonic mean can also be used on this scale.
+ Ratio scale accommodates the characteristic of three other variable measurement scales, i.e. labeling the variables, the significance of the order of variables, and a calculable difference between variables (which are usually equidistant).
+ Because of the existence of true zero value, the ratio scale doesn’t have negative values.
+ To decide when to use a ratio scale, the researcher must observe whether the variables have all the characteristics of an interval scale along with the presence of the absolute zero value.
+ Mean, mode and median can be calculated using the ratio scale.

![25 02 2022_15 17 50_REC](https://user-images.githubusercontent.com/99672298/190960855-3ea2e75b-8c8c-4aee-b0c7-330da0afae19.png)
![image](https://user-images.githubusercontent.com/99672298/190954184-33f4918c-920e-4c3c-a2ab-4013e6206fbc.png)
![image](https://user-images.githubusercontent.com/99672298/190954195-4019d825-3784-4328-a78f-1ab964bbbb93.png)
![image](https://user-images.githubusercontent.com/99672298/190954635-62b632cc-ef5d-4195-9760-8a4c1ffb5a4d.png)
![10 09 2022_21 38 24_REC](https://user-images.githubusercontent.com/99672298/192698534-dad75b45-ea51-4b15-94f9-2f5600b462af.png)
![10 09 2022_21 37 51_REC](https://user-images.githubusercontent.com/99672298/192698548-cfbfdc2a-4a77-47ba-b13a-32719ae79788.png)

[Table of Content](#0.1)
____
## 3.) Moments of Business Decision
We have four moments of business decision that help us understand the data.
### 3.1) Measures of Central tendency
(It is also known as the First Moment of Business Decision.)
It talks about the centrality of the data. To keep it simple, it is a part of descriptive statistical analysis where a single value at the centre represents the entire dataset.
### 3.2) Mean
It is the sum of all the data points divided by the total number of values in the dataset. The mean cannot always be relied upon because it is influenced by outliers.
### 3.3) Median
It is the middlemost value of a sorted/ordered dataset. If the size of the dataset is even, the median is calculated by taking the average of the two middle values. In the case of outliers, the mean cannot be relied upon as much as the median; the median gives a better representation of the data when outliers are present.
![15 09 2022_21 39 39_REC](https://user-images.githubusercontent.com/99672298/190981525-b87b46fc-3690-4822-a21e-a32f4bc90730.png)
### 3.4) Mode
It is the most repeated value in the dataset. Data with a single mode is called unimodal, data with two modes is called bimodal, and data with more than two modes is called multimodal.
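As a minimal sketch of the three measures (the values below are made up for illustration):

```python
import pandas as pd

data = pd.Series([2, 3, 3, 5, 7, 9, 3, 5])   # toy dataset

print("Mean:  ", data.mean())               # sum of values / number of values
print("Median:", data.median())             # middle value of the sorted data
print("Mode:  ", data.mode().tolist())      # most frequent value(s); a list, since data can be multimodal
```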
+ A multimodal distribution is a probability distribution with more than one peak, or “mode.”
+ A distribution with one peak is called unimodal
+ A distribution with two peaks is called bimodal
+ A distribution with two peaks or more is multimodal
+ A bimodal distribution is also multimodal, as there are multiple peaks.
+ A comb distribution is so called because the distribution looks like a comb, with alternating high and low peaks. A comb shape can be caused by rounding off. For example, if you are measuring water height to the nearest 10 cm and your class width for the histogram is 5 cm, this could cause a comb shape.
![image](https://user-images.githubusercontent.com/99672298/190957391-7ee25a6e-7cb9-43dc-bc81-dac4102a570d.png)
An edge peak distribution is where there is an additional, out of place peak at the edge of the distribution. This usually means that you’ve plotted (or collected) your data incorrectly, unless you know for sure your data set has an expected set of outliers (i.e. a few extreme views on a survey).
![image](https://user-images.githubusercontent.com/99672298/190957418-32d1674c-97cb-49f8-88f5-aeefd22851d4.png)
A multimodal distribution is known as a Plateau Distribution when there are more than a few peaks close together.
![image](https://user-images.githubusercontent.com/99672298/190957449-01447354-8565-498f-aa9d-d42f10105e44.png)
### 3.5) Causes of a Multimodal Distribution
A multimodal distribution in a sample is usually an indication that the distribution in the population is not normal. It can also indicate that your sample has several patterns of response or extreme views, preferences or attitudes.

When thinking about the cause of the multimodality, you may want to take a close look at your data; what may be going on is that two or more distributions are being graphed at the same time. This is opposed to a true multimodal distribution, where only one distribution is mapped. For example, the following image shows two groups of students, one of which studied (the peak on the left) and one of which didn’t (the peak on the right).
![image](https://user-images.githubusercontent.com/99672298/190957530-3009f629-cf5c-436d-94ca-a7fe104dff6a.png)
### 3.6) Mean vs Median
The mean gives the average and, in some sense, the central tendency of the data. The median gives the middlemost value of the data, and in both cases they describe the centre of the data. In the case of outliers, the mean cannot be trusted as much as the median; the median gives a better representation of the data when outliers are present. If the mean is greater than the median, the distribution is right-skewed, and if the mean is less than the median, the distribution is left-skewed.
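A quick sketch of the outlier effect described above, using made-up salary figures:

```python
import pandas as pd

salaries = pd.Series([30, 32, 35, 36, 38, 40])           # in thousands, made-up values
with_outlier = pd.concat([salaries, pd.Series([400])])   # add one extreme salary

print(salaries.mean(), salaries.median())            # ~35.2 and 35.5 -> close together
print(with_outlier.mean(), with_outlier.median())    # ~87.3 and 36.0 -> the mean is dragged to the right
# Mean > median here, which is the signature of a right-skewed distribution.
```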
____
### 3.7) Variance:
It is the average squared distance of all the data points from their mean. The problem with variance is that the units also get squared.

![07 05 2022_13 13 53_REC](https://user-images.githubusercontent.com/99672298/190963012-a972ef22-0200-44d2-a4f3-8faa5efa176e.png)
### 3.8) Range:
It is the difference between the maximum and the minimum values of a dataset.

![15 09 2022_22 06 15_REC](https://user-images.githubusercontent.com/99672298/190981467-9fbdb171-665c-46bc-9bf2-62dc9b94459f.png)
____
### 3.9) Standard Deviation:
It is the square root of the variance, which brings the measure back to the original units.

![11 02 2022_19 34 09_REC](https://user-images.githubusercontent.com/99672298/192697250-8270dd52-b122-46ac-ab51-8b576660f755.png)
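A small numpy sketch of the three spread measures above (toy values assumed):

```python
import numpy as np

x = np.array([4, 8, 6, 5, 3, 7])          # toy measurements

variance = x.var(ddof=1)                  # sample variance: average squared distance from the mean (squared units)
std_dev = x.std(ddof=1)                   # standard deviation: square root of variance, back in the original units
value_range = x.max() - x.min()           # range: maximum minus minimum

print(variance, std_dev, value_range)
```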
[Table of Content](#0.1)
____
## 4.) The Difference Between Standard Deviation and Average Deviation

### Standard Deviation Versus Average Deviation
Two of the most popular ways to measure variability or volatility in a set of data are standard deviation and average deviation, also known as mean absolute deviation. Though the two measurements are similar, they are calculated differently and offer slightly different views of data.

Determining volatility—that is, deviation from the center—is important in finance, so professionals in accounting, investing, and economics should be familiar with both concepts.
#### KEY TAKEAWAYS
+ Standard deviation is the most common measure of variability and is frequently used to determine the volatility of financial instruments and investment returns.
+ Standard deviation is considered the most appropriate measure of variability when using a population sample, when the mean is the best measure of center, and when the distribution of data is normal.
+ Some argue that average deviation, or mean absolute deviation, is a better gauge of variability when there are distant outliers or the data is not well distributed.

### 4.1) Understanding Standard Deviation
Standard deviation is the most common measure of variability and is frequently used to determine the volatility of markets, financial instruments, and investment returns. To calculate the standard deviation:
+ Find the mean, or average, of the data points by adding them and dividing the total by the number of data points.
+ Subtract the mean from each data point and square the difference of each result.
+ Find the mean of those squared differences and then the square root of the mean.

Squaring the differences between each point and the mean avoids the issue of negative differences for values below the mean, but it means the variance is no longer in the same unit of measure as the original data. Taking the square root means the standard deviation returns to the original unit of measure and is easier to interpret and use in further calculations.
### 4.2) Average Deviation
The average deviation, or mean absolute deviation, is calculated similarly to standard deviation, but it uses absolute values instead of squares to circumvent the issue of negative differences between the data points and their means.

To calculate the average deviation:
+ Calculate the mean of all data points.
+ Calculate the difference between the mean and each data point.
+ Calculate the average of the absolute values of those differences.

### Standard Deviation Versus Average Deviation
Standard deviation is often used to measure the volatility of returns from investment funds or strategies because it captures how widely returns are dispersed around their average. Higher volatility is generally associated with a higher risk of losses, so investors want to see higher returns from funds that generate higher volatility. For example, a stock index fund should have relatively low standard deviation compared with a growth fund.

The average deviation, or mean absolute deviation, is considered the closest alternative to standard deviation. It is also used to gauge volatility in markets and financial instruments, but it is used less frequently than standard deviation.
According to mathematicians, when a data set is of normal distribution—that is, there aren't many outliers—standard deviation is generally the preferable gauge of variability. But when there are large outliers, standard deviation registers higher levels of dispersion (or deviation from the center) than mean absolute deviation.
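A minimal sketch comparing the two measures on the same made-up return series; the single large value illustrates why squaring reacts more strongly to outliers:

```python
import numpy as np

returns = np.array([1.2, -0.4, 0.8, 5.0, -0.9, 0.3])    # made-up returns with one large outlier

mean = returns.mean()
std = np.sqrt(np.mean((returns - mean) ** 2))            # standard deviation: root of mean squared differences
mad = np.mean(np.abs(returns - mean))                    # average (mean absolute) deviation

print(std, mad)   # the std is pulled up more than the MAD by the 5.0 outlier
```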
[Table of Content](#0.1)
____
## 5.) Skewness

![16 09 2022_19 29 02_REC](https://user-images.githubusercontent.com/99672298/190981085-24b6ee9a-e72a-4920-98bc-582ac2651fcb.png)
(It is also known as the Third Moment of Business Decision.)
It measures the asymmetry in the data. The two types of Skewness are:
Positive/right-skewed: Data is said to be positively skewed if most of the data is concentrated to the left side and has a tail towards the right.
Negative/left-skewed: Data is said to be negatively skewed if most of the data is concentrated to the right side and has a tail towards the left.
![image](https://user-images.githubusercontent.com/99672298/190962416-8a01be82-e230-4e2e-be1c-da53288128eb.png)
![image](https://user-images.githubusercontent.com/99672298/190962452-0094c080-4269-430d-8c9c-5f2734979c22.png)
![image](https://github.com/MoinDalvs/Learn_Statistics_for_Data_science/assets/99672298/8a3b2461-ac3c-497e-9ef5-d43bace4a1c8)

We refer to it as positively or right skewed because, as you can see, the long tail is positioned on the positive, or right, side of the center value. Regardless, the skewness does aid our understanding of how the data is concentrated on one side and of the extent to which outliers deviate from the mean.

A right-skewed or positive distribution means its tail is more pronounced on the right side than on the left. Since the distribution is positive, the assumption is that its skewness value is positive. As such, most of the values end up left of the mean. This means that the most extreme values are on the right side.

## What is a Positively Skewed Distribution?
In statistics, a positively skewed (or right-skewed) distribution is a type of distribution in which most values are clustered around the left tail of the distribution while the right tail of the distribution is longer. The positively skewed distribution is the direct opposite of the negatively skewed distribution.

Negative or left-skewed means the tail is more pronounced on the left rather than the right. Most values are found on the right side of the mean in negative skewness. As such, the most extreme values are found further to the left.
![image](https://github.com/MoinDalvs/Learn_Statistics_for_Data_science/assets/99672298/4dd4d3e5-8f2d-4b6a-b905-36d59896d9e3)
## Central Tendency Measures in Positively Skewed Distributions
Unlike with normally distributed data, where all measures of the central tendency (mean, median, and mode) equal each other, with positively skewed data the measures are dispersed. The general relationship among the central tendency measures in a positively skewed distribution may be expressed using the following inequality:

**Mean > Median > Mode**

In contrast to a negatively skewed distribution, in which the mean is located to the left of the peak of the distribution, in a positively skewed distribution the mean can be found to the right of the distribution’s peak. However, not all skewed distributions follow the rules; you may encounter many exceptions in real life that violate them.

Since a high level of skewness can generate misleading results from statistical tests, extreme positive skewness is not desirable for a distribution. In order to overcome such a problem, data transformation tools may be employed to make the skewed data closer to a normal distribution.

For positively skewed distributions, the most popular transformation is the log transformation. The log transformation implies the calculation of the natural logarithm for each value in the dataset. The method reduces the skew of a distribution. Statistical tests are usually run only after the transformation of the data is complete.
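A small sketch of the log transformation just described, on synthetic right-skewed data (the distribution and parameters are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10, sigma=0.8, size=5000)   # synthetic right-skewed data

print("skew before:", stats.skew(incomes))               # clearly positive
print("skew after: ", stats.skew(np.log(incomes)))       # close to 0 after the log transformation
```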
## Positively Skewed Distribution in Finance
In finance, the concept of skewness is utilized in the analysis of the distribution of the returns of investments. Although many finance theories and models assume that the returns from securities follow a normal distribution, in reality the returns are usually skewed.

The positive skewness of a distribution indicates that an investor may expect frequent small losses and a few large gains from the investment. Positively skewed distributions of investment returns are generally more desirable to investors, since there is some probability of gaining huge profits that can cover all the frequent small losses.
![image](https://github.com/MoinDalvs/Learn_Statistics_for_Data_science/assets/99672298/1a254c2f-9c2c-4d11-99ed-c7fc7ce933c1)
## A Q-Q Plot to demonstrate the skewness
![image](https://github.com/MoinDalvs/Learn_Statistics_for_Data_science/assets/99672298/614a4920-6ea2-4ebd-bb21-6147d2cf2641)
![sk1](https://user-images.githubusercontent.com/99672298/190672365-7fcd7401-b263-4a2e-916b-352663be040c.png)
![FRBfMkKaIAAjrd2](https://user-images.githubusercontent.com/99672298/190672080-c7c560e4-5447-4743-970d-839ee85898a0.png)
![26 02 2022_15 13 10_REC](https://user-images.githubusercontent.com/99672298/190962548-87f13d54-87b7-4edf-8c4e-d65a0e913657.png)
![image](https://github.com/MoinDalvs/Learn_Statistics_for_Data_science/assets/99672298/c929b44f-2c69-4124-9914-4a600a189f14)

[Table of Content](#0.1)
____
## 6.) Kurtosis
Kurtosis is a measure of the tailedness of a distribution. Tailedness is how often outliers occur. Excess kurtosis is the tailedness of a distribution relative to a normal distribution.
Like skewness, kurtosis is a statistical measure that is used to describe a distribution. Whereas skewness differentiates extreme values in one tail versus the other, kurtosis measures extreme values in either tail. Distributions with large kurtosis exhibit tail data exceeding the tails of the normal distribution (e.g., five or more standard deviations from the mean). Distributions with low kurtosis exhibit tail data that are generally less extreme than the tails of the normal distribution.

In other words, kurtosis is a statistical measure that defines how heavily the tails of a distribution differ from the tails of a normal distribution, i.e., it identifies whether the tails of a given distribution contain extreme values.
Along with skewness, kurtosis is an important descriptive statistic of data distribution. However, the two concepts must not be confused with each other. Skewness essentially measures the symmetry of the distribution, while kurtosis determines the heaviness of the distribution tails.
In finance, kurtosis is used as a measure of financial risk. A large kurtosis is associated with a high risk for an investment because it indicates high probabilities of extremely large and extremely small returns. On the other hand, a small kurtosis signals a moderate level of risk because the probabilities of extreme returns are relatively low.
For investors, high kurtosis of the return distribution implies the investor will experience occasional extreme returns (either positive or negative), more extreme than the usual + or - three standard deviations from the mean that is predicted by the normal distribution of returns. This phenomenon is known as kurtosis risk.

![image](https://user-images.githubusercontent.com/99672298/190942840-f4da242f-3086-4665-8bf9-70d3300d49ff.png)
![image](https://user-images.githubusercontent.com/99672298/190942616-ac87fa40-c453-4805-9a5d-c4965953979b.png)
![image](https://user-images.githubusercontent.com/99672298/190677717-f516eb9b-caed-4d49-95c4-60f6ea71290c.png)

### 6.1) What is Excess Kurtosis?
Excess kurtosis is a metric that compares the kurtosis of a distribution against the kurtosis of a normal distribution. The kurtosis of a normal distribution equals 3. Therefore, the excess kurtosis is found using the formula below:

**Excess Kurtosis = Kurtosis – 3**
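A quick scipy sketch of this relationship; the two synthetic samples are made up for illustration, and note that `scipy.stats.kurtosis` returns *excess* kurtosis by default:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
normal_sample = rng.normal(size=10_000)
heavy_tailed = rng.standard_t(df=3, size=10_000)     # Student's t with 3 dof has fatter tails

print(stats.kurtosis(normal_sample))                 # ~0 excess kurtosis -> mesokurtic
print(stats.kurtosis(heavy_tailed))                  # > 0 excess kurtosis -> leptokurtic
print(stats.kurtosis(normal_sample, fisher=False))   # ~3, the "raw" kurtosis before subtracting 3
```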
![image](https://user-images.githubusercontent.com/99672298/190942974-04437c8d-cd2b-41e9-adf0-b648a6b86c19.png)
Distributions with medium kurtosis (medium tails) are mesokurtic.
Distributions with low kurtosis (thin tails) are platykurtic.
Distributions with high kurtosis (fat tails) are leptokurtic.
Tails are the tapering ends on either side of a distribution. They represent the probability or frequency of values that are extremely high or low compared to the mean. In other words, tails represent how often outliers occur.

Example: Types of kurtosis
![image](https://user-images.githubusercontent.com/99672298/190672762-187cba1e-f8ed-4e98-a851-13b3af6f6f00.png)
Types of kurtosis
Distributions can be categorized into three groups based on their kurtosis:

![image](https://user-images.githubusercontent.com/99672298/190672814-2c8e38aa-38ef-459f-9158-3787b61143a5.png)
![image](https://user-images.githubusercontent.com/99672298/190672949-b7dca686-02f5-4037-b1bc-f4cb61f76a46.png)
![image](https://user-images.githubusercontent.com/99672298/190673076-13f3345f-f863-44d1-8d32-27fc5da83be1.png)

### 6.2) Mesokurtic
From the graph, we can see that the frequency distribution (shown by the gray bars) approximately follows a normal distribution (shown by the green curve). Normal distributions are mesokurtic.
The zoologist calculates the kurtosis of the sample. She finds that the kurtosis is 3.09 and the excess kurtosis is 0.09, and she concludes that the distribution is mesokurtic.
![image](https://user-images.githubusercontent.com/99672298/190673209-a780b0f0-f8c4-4988-a69c-0b202716988b.png)
### 6.3) Leptokurtic
Leptokurtic indicates a positive excess kurtosis. A leptokurtic distribution shows heavy tails on either side, indicating large outliers. In finance, a leptokurtic distribution shows that the investment returns may be prone to extreme values on either side. Therefore, an investment whose returns follow a leptokurtic distribution is considered to be risky.

### 6.4) Platykurtic
A platykurtic distribution shows a negative excess kurtosis. The kurtosis reveals a distribution with flat tails. The flat tails indicate the small outliers in a distribution. In the finance context, a platykurtic distribution of investment returns is desirable for investors because there is a small probability that the investment would experience extreme returns.

[Table of Content](#0.1)
____
## 7.) Percentile vs. Quartile vs. Quantile: What’s the Difference?
Three terms that students often confuse in statistics are percentiles, quartiles, and quantiles.
Here’s a simple definition of each:
**Percentiles:** Range from 0 to 100.

**Quartiles:** Range from 0 to 4.
+ Note that percentiles and quartiles share the following relationship:
+ 0 percentile = 0 quartile (also called the minimum)
+ 25th percentile = 1st quartile
+ 50th percentile = 2nd quartile (also called the median)
+ 75th percentile = 3rd quartile
+ 100th percentile = 4th quartile (also called the maximum)

**Quantiles:** Range from any value to any other value.
Note that percentiles and quartiles are simply types of quantiles.
![image](https://user-images.githubusercontent.com/99672298/190623416-d0621d79-121d-4ff2-8a98-083a2ec825ae.png)
Some types of quantiles even have specific names, including:
+ 4-quantiles are called quartiles.
+ 5-quantiles are called quintiles.
+ 8-quantiles are called octiles.
+ 10-quantiles are called deciles.
+ 100-quantiles are called percentiles.

![26 02 2022_13 27 12_REC](https://user-images.githubusercontent.com/99672298/190982569-a866dc6f-d50b-490a-953a-806c3789a9e3.png)
### 7.1) What is a Quantile?
The word “quantile” comes from the word quantity. In simple terms, a quantile is where a sample is divided into equal-sized, adjacent subgroups (that’s why it’s sometimes called a “fractile”). It can also refer to dividing a probability distribution into areas of equal probability.

The median is a quantile; the median is placed in a probability distribution so that exactly half of the data is lower than the median and half of the data is above the median. The median cuts a distribution into two equal areas, and so it is sometimes called the 2-quantile.
Quartiles are also quantiles; they divide the distribution into four equal parts. Percentiles are quantiles that divide a distribution into 100 equal parts, and deciles are quantiles that divide a distribution into 10 equal parts.
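A short numpy sketch of these relationships, using the integers 1–100 so the results are easy to read:

```python
import numpy as np

data = np.arange(1, 101)   # 1..100

# Quartiles are just the 25th, 50th and 75th percentiles
q1, median, q3 = np.percentile(data, [25, 50, 75])

# Deciles (10-quantiles) via np.quantile, which works on the 0..1 scale
deciles = np.quantile(data, np.linspace(0.1, 0.9, 9))

print(q1, median, q3)
print(deciles)
```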
[Table of Content](#0.1)
____
## 8.) Boxplot

![26 02 2022_15 16 46_REC](https://user-images.githubusercontent.com/99672298/190982234-fd5a78fe-76e7-437c-bb31-8fcc2260bdb4.png)
![26 02 2022_15 15 12_REC](https://user-images.githubusercontent.com/99672298/190982264-09bf95b5-c1f8-49f1-ac66-e5e6d568dbe8.png)
![xBoxPlotandNormalDistribution JPG pagespeed ic 84HerCY3C7](https://user-images.githubusercontent.com/99672298/190981362-928d074f-e27c-4950-a21b-0172c862150e.jpg)
![xBoxPlot jpg pagespeed ic OUUzjm6fGL](https://user-images.githubusercontent.com/99672298/190981414-aa3df22c-2498-44d3-942c-8215bc2125b2.jpg)
____
### 8.1) “Why 1.5 times IQR? Why not 1 or 2 or any other number?”
In the most general sense, an outlier is a data point which differs significantly from other observations.
### IQR Method of Outlier Detection
To explain the IQR method easily, let’s start with a box plot.

![image](https://user-images.githubusercontent.com/99672298/191162733-725f6cf2-7ef2-4eec-b5c0-d21564869c90.png)

A box plot tells us, more or less, about the distribution of the data. It gives a sense of how spread out the data actually is, what its range is, and its skewness. As you might have noticed in the figure, a box plot lets us draw inferences for ordered data, i.e., it tells us about the various metrics of data arranged in ascending order.
In the above figure,
+ minimum is the minimum value in the dataset,
+ and maximum is the maximum value in the dataset.

So the difference between the two tells us about the range of the dataset.
+ The median is the median (or centre point), also called second quartile, of the data (resulting from the fact that the data is ordered).
+ Q1 is the first quartile of the data, i.e., to say 25% of the data lies between minimum and Q1.
+ Q3 is the third quartile of the data, i.e., to say 75% of the data lies between minimum and Q3.
+ The difference between Q3 and Q1 is called the Inter-Quartile Range or IQR.
### IQR = Q3 - Q1

To detect outliers using this method, we define a new range, let’s call it the decision range; any data point lying outside this range is considered an outlier and is dealt with accordingly. The range is given below:
### Lower Bound: (Q1 - 1.5 * IQR)
### Upper Bound: (Q3 + 1.5 * IQR)

Any data point less than the Lower Bound or more than the Upper Bound is considered an outlier.
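A minimal numpy sketch of this decision rule on a made-up sample (the value 45 is planted as the suspicious point):

```python
import numpy as np

data = np.array([12, 14, 14, 15, 16, 17, 18, 19, 21, 45])   # toy sample, 45 looks suspicious

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower_bound = q1 - 1.5 * iqr
upper_bound = q3 + 1.5 * iqr

outliers = data[(data < lower_bound) | (data > upper_bound)]
print(lower_bound, upper_bound, outliers)   # 45 falls outside the decision range
```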
But the question was: Why only 1.5 times the IQR? Why not any other number?
Well, as you might have guessed, the number (here 1.5, hereinafter the scale) clearly controls the sensitivity of the range and hence the decision rule. A bigger scale would cause true outliers to be treated as ordinary data points, while a smaller one would cause some ordinary data points to be perceived as outliers. And we’re quite sure neither of these cases is desirable.
But this is an abstract way of explaining the reason; it’s quite effective, but naive nonetheless. So where should we turn for hope?
Maths! Of course! (You saw that coming, right? 😐)
### You might be surprised if I tell you that this number, or scale, depends on the distribution followed by the data.
For example, let’s say our data follows our beloved Gaussian distribution.

### Gaussian Distribution
You all must have seen what a Gaussian distribution looks like, right? If not, here it is (although I’m suspicious about you 👊).

![image](https://user-images.githubusercontent.com/99672298/190670528-2c8e0f12-5efb-4b21-a2d7-2309d76a489b.png)
There are certain observations which could be inferred from this figure:
+ About 68.26% of the whole data lies within one standard deviation (±1σ) of the mean (μ), taking both sides into account, the pink region in the figure.
+ About 95.44% of the whole data lies within two standard deviations (±2σ) of the mean (μ), taking both sides into account, the pink+blue region in the figure.
+ About 99.72% of the whole data lies within three standard deviations (±3σ) of the mean (μ), taking both sides into account, the pink+blue+green region in the figure.
+ And the remaining 0.28% of the whole data lies beyond three standard deviations (>3σ) from the mean (μ), taking both sides into account, the little red region in the figure. This part of the data is considered as outliers.
+ The first and the third quartiles, Q1 and Q3, lie at -0.675σ and +0.675σ from the mean, respectively.

### Let’s calculate the IQR decision range in terms of σ
#### Taking scale = 1:
Lower Bound:
= Q1 - 1 * IQR
= Q1 - 1 * (Q3 - Q1)
= -0.675σ - 1 * (0.675 - [-0.675])σ
= -0.675σ - 1 * 1.35σ
= -2.025σ
Upper Bound:
= Q3 + 1 * IQR
= Q3 + 1 * (Q3 - Q1)
= 0.675σ + 1 * (0.675 - [-0.675])σ
= 0.675σ + 1 * 1.35σ
= 2.025σ
So, when the scale is taken as 1, then according to the IQR method any data which lies beyond 2.025σ from the mean (μ), on either side, shall be considered an outlier. But as we know, up to 3σ on either side of μ the data is useful. So we cannot take scale = 1, because this makes the decision range too exclusive, meaning it produces too many outliers. In other words, the decision range gets so small (compared to 3σ) that it flags some genuine data points as outliers, which is not desirable.

### Taking scale = 2:
Lower Bound:
= Q1 - 2 * IQR
= Q1 - 2 * (Q3 - Q1)
= -0.675σ - 2 * (0.675 - [-0.675])σ
= -0.675σ - 2 * 1.35σ
= -3.375σ
Upper Bound:
= Q3 + 2 * IQR
= Q3 + 2 * (Q3 - Q1)
= 0.675σ + 2 * (0.675 - [-0.675])σ
= 0.675σ + 2 * 1.35σ
= 3.375σ
So, when the scale is taken as 2, then according to the IQR method any data which lies beyond 3.375σ from the mean (μ), on either side, shall be considered an outlier. But as we know, up to 3σ on either side of μ the data is useful. So we cannot take scale = 2, because this makes the decision range too inclusive, meaning it produces too few outliers. In other words, the decision range gets so big (compared to 3σ) that it treats some outliers as ordinary data points, which is not desirable either.

### Taking scale = 1.5:
Lower Bound:
= Q1 - 1.5 * IQR
= Q1 - 1.5 * (Q3 - Q1)
= -0.675σ - 1.5 * (0.675 - [-0.675])σ
= -0.675σ - 1.5 * 1.35σ
= -2.7σ
Upper Bound:
= Q3 + 1.5 * IQR
= Q3 + 1.5 * (Q3 - Q1)
= 0.675σ + 1.5 * (0.675 - [-0.675])σ
= 0.675σ + 1.5 * 1.35σ
= 2.7σ
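As a quick numerical check of the figures above (the conclusion follows below), scipy can give the standard-normal quartile directly:

```python
from scipy import stats

q3 = stats.norm.ppf(0.75)      # third quartile of a standard normal, ~0.6745 sigma
iqr = 2 * q3                   # symmetric, so Q1 = -Q3 and IQR ~ 1.349 sigma
upper_bound = q3 + 1.5 * iqr   # ~2.698 sigma, the ~2.7 sigma bound derived above

print(q3, iqr, upper_bound)
```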
**When the scale is taken as 1.5, then according to the IQR method any data which lies beyond 2.7σ from the mean (μ), on either side, shall be considered an outlier. This decision range is the closest to what the Gaussian distribution tells us, i.e., 3σ. In other words, this makes the decision rule closest to what the Gaussian distribution considers for outlier detection, and this is exactly what we wanted.**

[Table of Content](#0.1)
____
## 9.) Central Limit Theorem
Instead of analyzing entire population data, we usually take a sample for analysis. The problem with sampling is that the sample mean is a random variable: it varies for different samples. And the random sample we draw can never be an exact representation of the population. This phenomenon is called sample variation.
To nullify the sample variation, we use the central limit theorem. And according to the Central Limit Theorem:
1. The distribution of sample means follows a normal distribution if the population is normal.
2. The distribution of sample means follows a normal distribution even if the population is not normal, provided the sample size is large enough.
3. The grand average of all the sample mean values gives us the population mean.
4. Theoretically, the sample size should be 30. Practically, the condition on the sample size (n) is:
   - n > 10(k₃)², where k₃ is the sample skewness.
   - n > 10(k₄), where k₄ is the sample kurtosis.
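A minimal simulation sketch of points 1–3, drawing samples of size 30 from a deliberately non-normal (exponential) population; the population and sample counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)    # clearly non-normal population

# Means of 5,000 random samples of size n = 30
sample_means = np.array([
    rng.choice(population, size=30).mean()
    for _ in range(5000)
])

print(population.mean(), sample_means.mean())                 # grand average of sample means ~ population mean
print(population.std() / np.sqrt(30), sample_means.std())     # spread of sample means ~ sigma / sqrt(n)
# A histogram of sample_means would look approximately normal despite the skewed population.
```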
### 9.1) What Is the Central Limit Theorem (CLT)?
In probability theory, the central limit theorem (CLT) states that the distribution of a sample variable approximates a normal distribution (i.e., a “bell curve”) as the sample size becomes larger, assuming that all samples are identical in size, and regardless of the population's actual distribution shape.
Put another way, CLT is a statistical premise that, given a sufficiently large sample size from a population with a finite level of variance, the mean of all sampled variables from the same population will be approximately equal to the mean of the whole population. Furthermore, these samples approximate a normal distribution, with their variances being approximately equal to the variance of the population as the sample size gets larger, according to the law of large numbers.
![image](https://user-images.githubusercontent.com/99672298/191051450-ea159eb3-9b6d-45e3-abe5-64a65e370cf1.png)
### Understanding the Central Limit Theorem (CLT)
According to the central limit theorem, the mean of a sample of data will be closer to the mean of the overall population in question as the sample size increases, notwithstanding the actual distribution of the data. In other words, the data is accurate whether the distribution is normal or aberrant.

As a general rule, sample sizes of around 30-50 are deemed sufficient for the CLT to hold, meaning that the distribution of the sample means is fairly normally distributed. Therefore, the more samples one takes, the more the graphed results take the shape of a normal distribution. Note, however, that the central limit theorem will still be approximated in many cases for much smaller sample sizes, such as n=8 or n=5.
The central limit theorem is often used in conjunction with the law of large numbers, which states that the average of the sample means and standard deviations will come closer to equaling the population mean and standard deviation as the sample size grows, which is extremely useful in accurately predicting the characteristics of populations.

The central limit theorem rests on a few key assumptions:
+ Sampling is random. All samples must be selected at random so that they have the same statistical possibility of being selected.
+ Samples should be independent. The selections or results from one sample should have no bearing on future samples or other sample results.
+ Samples should be limited. It's often cited that a sample should be no more than 10% of a population if sampling is done without replacement. In general, larger population sizes warrant the use of larger sample sizes.
+ Sample size is increasing. The central limit theorem is relevant as more samples are selected.

### 9.2) The Central Limit Theorem in Finance
The CLT is useful when examining the returns of an individual stock or broader indices, because the analysis is simple, due to the relative ease of generating the necessary financial data. Consequently, investors of all types rely on the CLT to analyze stock returns, construct portfolios, and manage risk.

Say, for example, an investor wishes to analyze the overall return for a stock index that comprises 1,000 equities. In this scenario, that investor may simply study a random sample of stocks to cultivate estimated returns of the total index. To be safe, at least 30-50 randomly selected stocks across various sectors should be sampled for the central limit theorem to hold. Furthermore, previously selected stocks must be swapped out with different names to help eliminate bias.
### 9.3) Why Is the Central Limit Theorem Useful?
The central limit theorem is useful when analyzing large data sets because it allows one to assume that the sampling distribution of the mean will be normally distributed in most cases. This allows for easier statistical analysis and inference. For example, investors can use the central limit theorem to aggregate individual security performance data and generate a distribution of sample means that represents a larger population distribution for security returns over a period of time.

### 9.4) Why Is the Central Limit Theorem's Minimum Sample Size 30?
A sample size of 30 is fairly common across statistics. A sample size of 30 often increases the confidence interval of your population data set enough to warrant assertions about your findings.
The higher your sample size, the more likely the sample will be representative of your population set.

### 9.5) What Is the Formula for the Central Limit Theorem?
The central limit theorem doesn't have its own formula, but it relies on the sample mean and standard deviation. As sample means are gathered from the population, the standard deviation is used to distribute the data across a probability distribution curve.

[Table of Content](#0.1)
____
## 10.) Sampling techniques

![01 02 2022_22 17 03_REC](https://user-images.githubusercontent.com/99672298/192697419-4110e878-561b-4f07-8516-6089bd3cb5c1.png)
![29 01 2022_19 42 19_REC](https://user-images.githubusercontent.com/99672298/192697496-52a4e062-bf28-4101-9763-31e143723105.png)
![maxresdefault](https://user-images.githubusercontent.com/99672298/190982038-48814cf2-36b0-4c0c-ba3d-5e482215b5ab.jpg)
![sampling-frame-l](https://user-images.githubusercontent.com/99672298/190982014-bf195c6b-bf2e-4e80-8984-ab791c30488c.jpg)

### 10.1) Sampling Frame
A complete list of all the units in a population is called the sampling frame. A unit of population is a relative term. If all the workers in a factory make up a population, a single worker is a unit of the population. If all the factories in a country are being studied for some purpose, a single factory is a unit of the population of factories. The sampling frame contains all the units of the population. The units which are to be included in the frame must be clearly defined. The frame provides a base for the selection of the sample.
A sampling frame is a list of all the items in your population. It’s a complete list of everyone or everything you want to study. The difference between a population and a sampling frame is that the population is general and the frame is specific. For example, the population could be “People who live in Jacksonville, Florida.” The frame would name all of those people, from Adrian Abba to Felicity Zappa. A couple more examples:
+ **Population:** People in STAT101.
+ **Sampling Frame:** Adrian, Anna, Bob, Billy, Howie, Jess, Jin, Kate, Kaley, Lin, Manuel, Norah, Paul, Roger, Stu, Tim, Vanessa, Yasmin.
+ **Population:** Birds that are pink.
+ **Sampling Frame:**
  + Brown-capped Rosy-Finch.
  + White-winged Crossbill.
  + American Flamingo.
  + Roseate Spoonbill.
  + Black Rosy-Finch.
  + Cassin’s Finch.

### 10.2) Sampling Frame vs. Sample Space
A sampling frame is a list of things that you draw a sample from. A sample space is a list of all possible outcomes for an experiment. For example, you might have a sampling frame of names of people in a certain town for a survey you’re going to be conducting on family size. The sample space is all possible outcomes from your survey: 1 person, 2 people, 3 people…10 or more.
![13 09 2022_15 00 15_REC](https://user-images.githubusercontent.com/99672298/190981696-9dc82d2b-d7d0-43ce-884c-d22c4863798a.png)
![image](https://user-images.githubusercontent.com/99672298/191262199-5bb42117-2772-495d-a283-ac9b770b6de4.png)

### 10.3) What Is a Sampling Distribution?
A sampling distribution is a probability distribution of a statistic obtained from a larger number of samples drawn from a specific population. The sampling distribution of a given population is the distribution of frequencies of a range of different outcomes that could possibly occur for a statistic of a population.
In statistics, a population is the entire pool from which a statistical sample is drawn. A population may refer to an entire group of people, objects, events, hospital visits, or measurements. A population can thus be said to be an aggregate observation of subjects grouped together by a common feature.
![image](https://user-images.githubusercontent.com/99672298/191296487-c9d57a75-a1cb-48c0-bb1e-f4a0aa07204c.png)
#### Understanding Sampling Distribution
A lot of data drawn and used by academicians, statisticians, researchers, marketers, analysts, etc. are actually samples, not populations. A sample is a subset of a population. For example, a medical researcher who wanted to compare the average weight of all babies born in North America from 1995 to 2005 to those born in South America within the same time period cannot draw the data for the entire population of over a million childbirths that occurred over the ten-year time frame within a reasonable amount of time. They will instead only use the weight of, say, 100 babies in each continent to make a conclusion. The weight of the 100 babies used is the sample and the average weight calculated is the sample mean.

### 10.4) Sample Design
In sample studies, we have to make a plan regarding the size of the sample, selection of the sample, collection of the sample data and preparation of the final results based on the sample study. The whole procedure involved is called the sample design. The term sample survey is used for a detailed study of the sample. In general, the term sample survey is used for any study conducted on a sample taken from some real-world data.
### 10.5) Non-Random Sampling Methods
Non-random sampling methods select locations for sampling either according to regular (i.e., systematic) patterns, by targeting specific features or events, by using personal or anecdotal (and possibly inaccurate) information, or without any specific plan. Care must be exercised when using non-random sample selection methods because the samples may not be representative of the entire population. If this is the case, then inference cannot extend beyond the set of sampling units. Some common non-random sample design techniques are discussed below. Unless otherwise stated, the primary reference for these discussions was Elzinga et al. (2001).

### 10.6) Random Sampling Methods
Random sampling methods rely on randomization at some point in the sample design process in an attempt to achieve statistically unbiased samples. Random sampling methods are a form of design-based inference where:
1) the population being measured is assumed to have fixed parameters at the time it is sampled, and
2) a randomly-selected set of samples from the population represents one realization of all possible sample sets (i.e., the sample set is a random variable).
There are many different random sampling techniques. Some of the most common techniques are described below. Unless otherwise stated, the primary source for information on these methods is Elzinga et al. (2001).
![12 09 2022_20 35 15_REC](https://user-images.githubusercontent.com/99672298/190981714-719a460c-f90f-4569-ac11-17e17768276d.png)
![image](https://user-images.githubusercontent.com/99672298/191477200-10228788-7e89-4d1a-b9d4-6ed9313113c7.png)
![image](https://user-images.githubusercontent.com/99672298/191478261-71a4e100-b629-44fd-a03f-52a14026fdca.png)
![26 02 2022_13 24 51_REC](https://user-images.githubusercontent.com/99672298/190982694-60e2e38a-d455-4b9f-a901-06ce57ff0bb6.png)

### Simple Random Sampling
Simple random sampling is the foundation for all of the other random sampling techniques. With this approach, all of the sampling units are enumerated and a specified number of sampling units is selected at random from all the sampling units. Selection of samples for simple random sampling follows two criteria:
+ each sampling unit has the same likelihood of being selected, and
+ the selection of one sampling unit does not influence the selection of any other sampling unit.

**Disadvantages of simple random sampling**
+ it does not take into account variability caused by other measurable factors (e.g., aspect, soils, elevation)
+ it can yield high variance estimates and make detection of differences difficult if the population being sampled is not evenly distributed throughout the sample area.
+ it can be an inefficient means of sampling because of the time required to visit all of the sample sites
+ by chance, some areas may be heavily sampled while other areas are not sampled at all

![26 02 2022_13 25 16_REC](https://user-images.githubusercontent.com/99672298/190982664-fce582ac-42dd-45ec-9ce2-258dfd43ba4e.png)
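As a quick illustration (not part of the original notebook), here is a minimal NumPy sketch of simple random sampling, assuming the population is simply a set of numbered sampling units:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: 500 enumerated sampling units (e.g., plot IDs)
population = np.arange(500)

# Draw 25 units at random, without replacement; every unit is equally likely
# and selecting one unit does not influence the selection of any other
simple_random_sample = rng.choice(population, size=25, replace=False)
print(simple_random_sample)
```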
![stratified-sampling-example-vector-illustration-diagram-research-method-explanation-scheme-person-symbols-stages-175044570](https://user-images.githubusercontent.com/99672298/190982306-e600f6a2-5761-46e3-9307-e851b201edeb.jpg)
### Stratified Random Sampling
Stratification is the process of dividing a set of sampling units into one or more subgroups (i.e., strata) prior to selection of units for sampling. Sampling units are then selected randomly within each stratum. The purpose of using stratification is to account for variability in a population that can be explained by another variable (e.g., vegetation type, aspect, soil type). Therefore, strata should be defined so that the population conditions are similar within the strata.

Sampling effort does not need to be equally allocated between strata. It is common for sampling intensity to be varied between strata based on either the variability of the population parameter within the strata or the size of the strata.
A stratified sample is a mini-reproduction of the population. Before sampling, the population is divided according to characteristics of importance for the research, for example by gender, social class, education level, religion, etc. Then the population is randomly sampled within each category or stratum. If 38% of the population is college-educated, then 38% of the sample is randomly selected from the college-educated population. Stratified samples are as good as or better than random samples, but they require fairly detailed advance knowledge of the population characteristics, and therefore are more difficult to construct.

**Advantages of stratified random sampling**
+ it increases efficiency of sampling over simple random sampling when the variable of interest responds differently to some clearly definable features.
**Disadvantages of stratified random sampling**
+ the formulas for estimating population parameters and conducting hypothesis tests are more complicated than for simple random selection.
+ each stratum should be relatively homogeneous with regard to the population parameter being measured.

![StratifiedRandomSampling](https://user-images.githubusercontent.com/99672298/190982472-9f346a6d-afab-40bd-815f-7f37b5fae9be.jpg)
![stratified-random-sampling-2-768x460](https://user-images.githubusercontent.com/99672298/190982325-d585230e-b005-4e92-bfea-09c5c04b8e76.jpg)

![26 02 2022_13 25 30_REC](https://user-images.githubusercontent.com/99672298/190982651-35e81bed-667f-4b1f-a83d-1ef7bb6b9dd1.png)
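A minimal pandas sketch of proportional stratified random sampling (the DataFrame and its `stratum` column are hypothetical; `GroupBy.sample` requires pandas 1.1 or later):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical population of 1,000 units spread unevenly across three strata
df = pd.DataFrame({
    "stratum": rng.choice(["forest", "grassland", "shrubland"], size=1000, p=[0.5, 0.3, 0.2]),
    "value": rng.normal(size=1000),
})

# Randomly sample 10% within each stratum, so the sample mirrors the strata proportions
stratified_sample = df.groupby("stratum", group_keys=False).sample(frac=0.10, random_state=0)
print(stratified_sample["stratum"].value_counts())
```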
![26 02 2022_13 25 44_REC](https://user-images.githubusercontent.com/99672298/190982636-8fa83fc2-4efe-4b7b-91fe-dbf0e4b45684.png)

### Systematic Sampling
Systematic sampling is the selection of units for sampling or the placement of sampling locations within an area according to a regularly-repeating pattern. Examples of systematic sampling are: locating sample sites on a 1 km grid within a pasture, taking measurements every meter along a transect, or orienting transects along cardinal directions. Systematic techniques are commonly used to locate sub-plot sampling sites (e.g., points, transects, frames) within a sampling site where the location of the sampling site has been selected randomly. Alternatively, larger sampling units can be selected systematically and then the location of the specific sampling unit randomly selected within the larger unit (i.e., a form of two-stage sampling or restricted random sampling – see below). This technique is often used with regional- or national-scale assessment and monitoring programs like the NRCS Natural Resource Inventory (NRI) or the USFS Forest Inventory and Analysis (FIA) programs.

+ **Advantages of systematic sampling are:**
+ it allows even sampling across an area
+ it is quick and easy to implement
+ it is often more efficient than random sampling, and can perform as well or better than random methods in some situations (see Elzinga et al. (2001), p125)
+ When combined with an appropriate randomization method, the data can be analyzed as if it were a random design
+ **Disadvantages of systematic sampling are:**
+ it can yield biased data if there are regularly-occurring patterns in the population being sampled. For instance, when sampling for road impacts, transects oriented along cardinal directions may yield biased estimates of road impacts because many roads are oriented along cardinal directions too (M. Duniway, pers. comm.)
+ systematic sampling can miss or under-represent small or narrow features of a landscape if the sampling interval is too large.

![26 02 2022_13 25 59_REC](https://user-images.githubusercontent.com/99672298/190982621-15f87cde-6ecf-4f08-93fa-fafee27ab9e3.png)
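A minimal sketch of systematic selection with a random start (the transect positions and interval here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical transect with 1,000 possible sampling positions
positions = np.arange(1000)

k = 50                              # sampling interval (every 50th position)
start = rng.integers(0, k)          # a random start keeps a random element in the design
systematic_sample = positions[start::k]
print(start, systematic_sample[:5])
```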
### Cluster Sampling
Cluster sampling is a technique that can be applied when it is not possible or desirable to take a random sample from the entire population. With cluster sampling, the known or accessible sampling units are grouped into clusters. A random selection of clusters is then made and each sampling unit is measured within each of the selected clusters. Cluster sampling is typically applied to monitoring of rare plants or invasive species when the objective is to estimate a property related to individual plants (e.g., mean height, number of flowers per plant).

**Advantages of cluster sampling are:**
+ It can be less expensive and more efficient to sample all of the sampling units within a cluster than to sample an equal number of units across the entire population.
+ Cluster sampling can be an efficient choice when clusters naturally occur and when the clusters are similar to each other but have a high degree of internal variability.
**Disadvantages of cluster sampling are:**
+ All elements within the selected clusters must be measured. If clusters are large or contain a large number of elements, then two-stage sampling may be more efficient.
+ It can be difficult to determine how many clusters to sample versus how large the clusters should be.
+ Analysis of sample data collected using a cluster sampling design is more complex than for other methods.

![images](https://user-images.githubusercontent.com/99672298/190982464-68d994c1-b52c-48de-abc8-eff35923a708.png)
![stratified-cluster-sampling-l](https://user-images.githubusercontent.com/99672298/190982455-424f3441-91d1-423e-89f2-18a9af2824a5.jpg)
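A minimal sketch of cluster sampling (the plant data and cluster labels are hypothetical): a few clusters are chosen at random and every unit within the chosen clusters is measured.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical population: 2,000 plants grouped into 40 naturally occurring patches (clusters)
plants = pd.DataFrame({
    "cluster": rng.integers(0, 40, size=2000),
    "height_cm": rng.normal(30, 5, size=2000),
})

# Randomly select 5 clusters, then measure every plant in the selected clusters
chosen_clusters = rng.choice(plants["cluster"].unique(), size=5, replace=False)
cluster_sample = plants[plants["cluster"].isin(chosen_clusters)]
print(sorted(chosen_clusters), len(cluster_sample))
```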
![TC_606106-heterogeneous-and-homogeneous-mixtures1-5ac4f1a9642dca0036847e52](https://user-images.githubusercontent.com/99672298/190981958-c427f4bc-ced1-4f25-a483-581fed6fe83c.png)

### Two-Stage Sampling
In two-stage sampling, elements of the population are grouped together into large groups called primary sampling units. The individual sampling units within each primary sampling unit are called secondary sampling units. A random selection of the primary sampling units is made, and then a selection of secondary sampling units is made (usually random, but can be systematic) within each of the selected primary sampling units.

Two-stage sampling is a powerful sample design method for systems that are hierarchical in nature. For example, allotments within a BLM District could be considered primary sampling units. A random selection of allotments could be made and then sample sites selected within the selected allotments. This design would allow for inference at the allotment level (e.g., average allotment condition) as well as at the district level.
The concept of two-stage sampling can be generalized to multi-stage sampling where there are more than two hierarchical levels for sampling. However, as the number of stages increases, sample size requirements go up and degrees of freedom for statistical hypothesis testing decrease. Accordingly, the number of stages is generally small (i.e., two or three).
**Advantages of two-stage sampling are:**
+ It is often more efficient to sample secondary sampling units within a limited number of primary sampling units than to sample the same number of secondary units randomly spread across a landscape.
+ Inferences can be made at multiple scales (i.e., the scale of the primary sampling unit, and the entire population).
**Disadvantages of two-stage sampling are:**
+ Calculation of sample statistics is more complicated than with other, simpler sample designs.
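A minimal sketch of two-stage sampling (the allotment/site hierarchy is hypothetical): primary units are selected at random, then secondary units are selected at random within each selected primary unit.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical hierarchy: 20 allotments (primary units), each containing 100 sites (secondary units)
sites = pd.DataFrame({
    "allotment": np.repeat(np.arange(20), 100),
    "site": np.tile(np.arange(100), 20),
})

# Stage 1: randomly select 4 primary sampling units
primary = rng.choice(sites["allotment"].unique(), size=4, replace=False)

# Stage 2: randomly select 10 secondary units within each selected primary unit
two_stage_sample = (
    sites[sites["allotment"].isin(primary)]
    .groupby("allotment", group_keys=False)
    .sample(n=10, random_state=3)
)
print(two_stage_sample["allotment"].value_counts())
```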
### Adaptive Sampling
Adaptive sampling refers to a technique where the sample design is modified in the field based on observations made at a set of pre-selected sampling units. Perhaps the best way to describe adaptive sampling is through an example. Consider sampling for the presence or abundance of rare plants. A random selection of sample units will yield many sample units where the plant is not detected, but the rare plant is likely to occur in sample units nearby to those units where it was detected. With adaptive sampling, the detection of the rare plant at one site triggers the selection and sampling of additional nearby sites that were not originally selected as part of the sample set. Thus the biggest difference between adaptive sampling and many other random selection techniques is that the observed conditions at one sampling unit influence the selection of other sampling units.

One typical implementation of adaptive sampling is that whenever a specified event occurs (e.g., detection of a target species, measurement over a specified threshold), all of the neighboring sample units are searched/sampled. This continues until no new detections occur.
Adaptive sampling introduces bias into the samples that must be corrected for. More specifically, adding additional units to the sample that contain high values for the parameter being measured will result in overestimation of the population mean (Thompson 1992). Various techniques are available for correcting for the bias introduced by adaptive sampling.
**Advantages of adaptive sampling are:**
+ It is an efficient method for sampling rare species or events.
+ It works well with populations that are naturally aggregated or clustered and does not require the exact nature of the aggregation to be known ahead of time.
**Disadvantages of adaptive sampling are:**
+ Population estimates must be corrected for bias.
+ Calculations for population parameter estimates and hypothesis tests are more complicated than for simpler sampling designs.
+ Estimation of sample size requirements is difficult.

[Table of Content](#0.1)
____
## 11.) Sampling Error

![14 09 2022_15 26 06_REC](https://user-images.githubusercontent.com/99672298/190981654-40f11b96-e4e7-458e-95bc-bc0cd55ec263.png)
![14 09 2022_13 01 17_REC](https://user-images.githubusercontent.com/99672298/190981679-575129e1-463f-4707-8be4-d51fd1369780.png)

[Table of Content](#0.1)
____
## 12.) Distribution

### 12.1) Normal Distribution
![16 09 2022_16 31 55_REC](https://user-images.githubusercontent.com/99672298/190981214-f7ef6fda-825d-43e1-9e8c-70c5d2f5fa0f.png)
![16 09 2022_18 56 19_REC](https://user-images.githubusercontent.com/99672298/190981163-f92e9d08-9ab8-463e-bd62-7a34558b3530.png)
![16 09 2022_18 55 14_REC](https://user-images.githubusercontent.com/99672298/190981186-296b8ac1-1fb8-4d19-8d75-fe04120328ae.png)
![common-distribution_huec055ab3bd48482f92384431f8f73d3e_75898_900x500_fit_box_2](https://user-images.githubusercontent.com/99672298/190981750-bf877d2f-816a-49bc-bc43-371d54b9ec40.png)
____
### 12.2) Properties of Normal Distribution

![16 09 2022_19 52 31_REC](https://user-images.githubusercontent.com/99672298/190980913-dc1a9cf4-9469-4ae7-83fe-15f0f366f263.png)
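As a small, optional check of the familiar 68-95-99.7 property of the normal distribution (a sketch using `scipy.stats.norm`, not taken from the original notebook):

```python
from scipy.stats import norm

# Probability mass within 1, 2 and 3 standard deviations of the mean
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)
    print(f"P(|Z| <= {k}) = {p:.4f}")
# Prints roughly 0.6827, 0.9545 and 0.9973
```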
### 12.3) Binomial Distribution
#### What is Binomial Distribution?
A Binomial Distribution describes the probability of an event that has only 2 possible outcomes, for example heads or tails.

It can also be used to describe the probability of a series of independent events that each have only 2 possible outcomes. For example: flipping a coin 10 times and having it land on heads exactly 5 times, or flipping a coin 10 times and having at least 5 land on heads.

More formally, the number of successes in n independent trials, each with the same probability of success, follows a binomial distribution.
#### Visualizing Binomial Distribution
The Binomial Distribution looks like so when graphed:

![image](https://user-images.githubusercontent.com/99672298/200475879-46687281-8210-4c5d-a4ca-7c53604be6cb.png)
#### When Would You Use Binomial Distribution?
**Requirements and Conditions for a Binomial Distribution**
+ Must be a fixed number of trials.
+ Continuous data are not binomial.
+ Probability of success must be the same (constant) on every trial.
+ Two state. Two possible outcomes.
+ (true or false, hot or cold, success or failure, defective or not defective.)
+ Independent trials – trials are statistically independent.
+ (For example: The flip of one coin means nothing to the results of the next coin flip.)
+ Use the Binomial Distribution when you are sampling with replacement.

**You cannot use the Binomial Distribution:**
+ When the probability of success is not constant for an event.
+ Ex. The probability of it snowing or not snowing in NYC would not fit the criteria for a Binomial Distribution because the probability of success is not constant. The chance of snow on winter days is higher than summer days.
+ When you sample WITHOUT replacement.
+ In that case, use the Hypergeometric distribution.

#### Examples of Binomial Distribution Problems
##### Non Cumulative Binomial Probability
##### Non-Cumulative Calculation Example 1: “Contains Exactly”
If a coin is tossed 10 times, what is the probability of obtaining exactly four heads? (In other words: if an honest coin is tossed 10 times, what is the probability of 4 heads?)

With a coin toss there are only 2 possible outcomes, heads or tails, so you should be on alert for using the binomial distribution.
![image](https://user-images.githubusercontent.com/99672298/200476375-aae51d38-247b-48da-b54a-33dc34c13dd0.png)
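The same result can be verified with `scipy.stats.binom` (a sketch assuming a fair coin, p = 0.5):

```python
from scipy.stats import binom

# Probability of exactly 4 heads in 10 tosses of a fair coin
p_exactly_4_heads = binom.pmf(4, 10, 0.5)
print(round(p_exactly_4_heads, 4))   # ~0.2051, i.e. about 20.5%
```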
##### Non-Cumulative Calculation Example 2: “Contains Exactly”
A manufacturing process creates 3.4% defective parts. A sample of 10 parts from the production process is selected. What is the probability that the sample contains exactly 3 defective parts?

As soon as you see the word defective, you should be alert to using the Binomial equation. Since defective in this sense means that a part is in a binary state, either functioning or defective, it meets our criteria.
![image](https://user-images.githubusercontent.com/99672298/200477556-2914ed09-487f-44ca-907c-39a39e266a3c.png)
##### Non-Cumulative Chart Example:
You can see that the same percentage was found on the chart. (In this case, x is the probability of the event happening instead of p.)

#### Examples of Cumulative Binomial Probability Questions
##### Cumulative Calculation Example: “At least” or “Fewer Than” or “At Most”
79% of the students of a large class passed the final exam. A random sample of 4 students is selected to be analyzed by the school. What is the probability that the sample contains fewer than 2 students that passed the exam?

Since we are examining data that only has a binary state, pass or fail, you should be on alert to use the binomial equation.
Also, since you’re asked for ‘x or fewer’, you have to calculate the probability of ALL POSSIBLE events.
In this example we must calculate the odds of EXACTLY one pass PLUS the odds of EXACTLY no pass in the sample.
![image](https://user-images.githubusercontent.com/99672298/200481750-3cbe4ed2-9be7-478e-82c0-13363c23c13c.png)
![image](https://user-images.githubusercontent.com/99672298/200481964-d1644626-099a-4cc0-a4ca-8365677508ef.png)
![image](https://user-images.githubusercontent.com/99672298/200482343-d3e8b0b9-c950-46a1-814e-0a41b7cfbd8d.png)
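A sketch of the same cumulative calculation with `scipy.stats.binom`; “fewer than 2” means 0 or 1 students passed, i.e. the CDF evaluated at k = 1:

```python
from scipy.stats import binom

n, p = 4, 0.79                         # 4 sampled students, each passes with probability 0.79
p_fewer_than_2 = binom.cdf(1, n, p)    # P(X <= 1) = P(X = 0) + P(X = 1)
check = binom.pmf(0, n, p) + binom.pmf(1, n, p)
print(round(p_fewer_than_2, 4), round(check, 4))   # both ~0.0312
```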
[Table of Content](#0.1)
____
## 13.) Outliers

### 13.1) What are Outliers? 🤔
We have all heard of the idiom ‘odd one out’, which means something unusual in comparison to the others in a group.

Similarly, an outlier is an observation in a given dataset that lies far from the rest of the observations. That means an outlier is vastly larger or smaller than the remaining values in the set.
### 13.2) Why do they occur?
An outlier may occur due to the variability in the data, or due to experimental error/human error.

They may indicate an experimental error or heavy skewness in the data (heavy-tailed distribution).
### 13.3) How do they affect?
In statistics, we have three measures of central tendency, namely the Mean, Median, and Mode. They help us describe the data.

+ Mean is an accurate measure to describe the data when we do not have any outliers present.
+ Median is used if there is an outlier in the dataset.
+ Mode is used if there is an outlier AND about ½ or more of the data is the same.

Of the three, the Mean is the measure of central tendency most affected by outliers, which in turn impacts the Standard deviation.
#### 13.4) Detecting Outliers
If our dataset is small, we can detect the outliers by just looking at the dataset. But what if we have a huge dataset, how do we identify the outliers then? We need to use visualization and mathematical techniques.

Below are some of the techniques for detecting outliers:
+ Boxplots
+ Z-score
+ Inter Quantile Range (IQR)

![24 03 2022_11 04 54_REC](https://user-images.githubusercontent.com/99672298/190962879-0b5cda86-cd05-4638-8be0-9ee64f732117.png)
#### Detecting outliers using the Z-scores
Criteria: any data point whose Z-score falls outside the 3rd standard deviation is an outlier.

![image](https://user-images.githubusercontent.com/99672298/191570868-7c0d6af8-47bc-4abb-96f5-9ab7f12fd4e3.png)
+ Steps:
  + Loop through all the data points and compute the Z-score using the formula (Xi - mean)/std.
  + Define a threshold value of 3 and mark the data points whose absolute Z-score is greater than the threshold as outliers.

```python
import numpy as np

outliers = []

def detect_outliers_zscore(data):
    thres = 3
    mean = np.mean(data)
    std = np.std(data)
    for i in data:
        z_score = (i - mean) / std
        if np.abs(z_score) > thres:
            outliers.append(i)
    return outliers

# Driver code ('sample' is assumed to be a list or NumPy array of observations defined earlier)
sample_outliers = detect_outliers_zscore(sample)
print("Outliers from Z-scores method: ", sample_outliers)
```
**Detecting outliers using the Inter Quantile Range (IQR)**

Criteria: data points that lie below Q1 - 1.5*IQR or above Q3 + 1.5*IQR are treated as outliers, where Q1 and Q3 are the 25th and 75th percentiles and IQR = Q3 - Q1.
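A minimal sketch of the IQR rule with NumPy (`sample` is assumed to be the same array of observations used in the Z-score example above):

```python
import numpy as np

def detect_outliers_iqr(data):
    data = np.asarray(data, dtype=float)
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lower_bound = q1 - 1.5 * iqr
    upper_bound = q3 + 1.5 * iqr
    return [x for x in data if x < lower_bound or x > upper_bound]

# Driver code ('sample' is assumed to be defined earlier in the notebook)
# sample_outliers = detect_outliers_iqr(sample)
# print("Outliers from IQR method: ", sample_outliers)
```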
#### 13.5) Handling Outliers

Till now we have learned about detecting the outliers. The main question is: WHAT do we do with the outliers?

Below are some of the methods of treating the outliers:
+ Trimming/removing the outlier
+ Quantile-based flooring and capping
+ Mean/Median imputation

**Trimming/removing the outliers**

In this technique, we remove the outliers from the dataset, although it is not a good practice to follow.

**Quantile-based flooring and capping**
In this technique, the outlier is capped at the 90th percentile value or floored at the 10th percentile value. That is, data points lower than the 10th percentile are replaced with the 10th percentile value, and data points greater than the 90th percentile are replaced with the 90th percentile value.

Applied to the sample data used in this notebook, the technique produced: New array: [15, 20.7, 18, 7.2, 13, 16, 11, 20.7, 7.2, 15, 10, 9]
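A minimal sketch of percentile-based flooring and capping with NumPy (assuming `sample` is a numeric array of observations):

```python
import numpy as np

def floor_and_cap(data, lower_pct=10, upper_pct=90):
    data = np.asarray(data, dtype=float)
    lower, upper = np.percentile(data, [lower_pct, upper_pct])
    # Values below the 10th percentile are floored to it; values above the 90th are capped to it
    return np.clip(data, lower, upper)

# new_array = floor_and_cap(sample)
# print("New array: ", new_array)
```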
**Mean/Median imputation**
As the mean value is highly influenced by the outliers, it is advised to replace the outliers with the median value.

[Table of Content](#0.1)
____
## 14.) Q-Q Plot
A Normal Q-Q plot is a kind of scatter plot that is created from two sets of quantiles. It is used to check whether the data follows a normal distribution or not.

![17 09 2022_20 22 04_REC](https://user-images.githubusercontent.com/99672298/192698296-6453ee67-5dff-43be-a28b-19a6d202318c.png)
![image](https://user-images.githubusercontent.com/99672298/191167218-be64fb1e-8afe-447c-b8eb-3217076a8181.png)
![17 09 2022_20 19 25_REC](https://user-images.githubusercontent.com/99672298/192698301-d204be17-d344-48dc-b5d7-61816f1daea5.png)

On the x-axis, we have the Z-scores and on the y-axis, we have the actual sample quantiles. If the scatter plot forms a straight line, the data is said to be normal.
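A sketch of drawing a normal Q-Q plot with `scipy.stats.probplot` (the data here is hypothetical and roughly normal; matplotlib is assumed to be available):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=5, size=200)   # hypothetical, roughly normal sample

# Theoretical quantiles on the x-axis vs. ordered sample values on the y-axis
stats.probplot(data, dist="norm", plot=plt)
plt.title("Normal Q-Q plot")
plt.show()
```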
[Table of Content](#0.1)
____
## 15.) Distribution Function

Statisticians have observed that data frequently occur in familiar patterns and so have sought to understand and define them. Frequently seen patterns include the normal distribution, uniform distribution, binomial distribution, etc.
### 15.1) The Cumulative Distribution Function (CDF)
The cumulative distribution function (CDF) is the probability that a random variable, say X, will take a value equal to or less than x.

For example, if you roll a die, the probability of obtaining a 1 or 2 or 3 or 4 or 5 or 6 is 16.667% (=1/6) individually. The cumulative distribution function (CDF) of 1 is the probability that the next roll will take a value less than or equal to 1 and is 16.667%. There is only one possible way to get a 1. The cumulative distribution function (CDF) of 2 is the probability that the next roll will take a value less than or equal to 2. The cumulative distribution function (CDF) of 2 is 33.33% as there are two possible ways to get a 2 or below (the roll giving a 1 or 2).
![image](https://user-images.githubusercontent.com/99672298/191513349-6e27091a-e125-4c1a-bc04-d6d44dd88dd8.png)
Cumulative Distribution Function for x=2 in the case of a roll of a dice.
The cumulative distribution function (CDF) of 6 is 100%. It is the probability that the next roll will take a value less than or equal to 6, and is equal to 100% as all possible results will be less than or equal to 6.

### 15.2) Probability Density Function (PDF)
![18 09 2022_12 21 11_REC](https://user-images.githubusercontent.com/99672298/192697992-36123796-260e-411f-8ceb-2d75b0c441ef.png)
The probability density function (PDF) is the probability that a random variable, say X, will take a value exactly equal to x. Note the difference between the cumulative distribution function (CDF) and the probability density function (PDF) – here the focus is on one specific value, whereas for the cumulative distribution function we are interested in the probability of taking on a value equal to or less than the specified value. For a discrete random variable this function is usually called the probability mass function (PMF), so do not get perturbed if you encounter the term probability mass function.
![image](https://user-images.githubusercontent.com/99672298/191513554-317fa96d-452a-40fa-b71f-7fe669234183.png)
For example, if you roll a die, the probability of obtaining 1, 2, 3, 4, 5, or 6 is 16.667% (=1/6). The probability density function (PDF) or the probability that you will get exactly 2 will be 16.667%. Whereas, the cumulative distribution function (CDF) of 2 is 33.33% as described above.
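The die example can be reproduced with `scipy.stats.randint`, the discrete uniform distribution (a sketch; note that the upper bound is exclusive):

```python
from scipy.stats import randint

die = randint(1, 7)        # discrete uniform on {1, 2, 3, 4, 5, 6}
print(die.pmf(2))          # P(X = 2)  = 1/6  ~ 0.1667
print(die.cdf(2))          # P(X <= 2) = 2/6  ~ 0.3333
print(die.cdf(6))          # P(X <= 6) = 1.0
```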
#### 15.3) Probability Density Function (PDF) vs Cumulative Distribution Function (CDF)
The CDF is the probability that a random variable takes a value less than or equal to x, whereas the PDF (or PMF, for a discrete variable) is the probability that a random variable, say X, will take a value exactly equal to x.

![image](https://user-images.githubusercontent.com/99672298/191513665-b72a23d0-73ee-4f5a-82bf-dfcbd6f36c80.png)
#### Probability Mass Function vs Cumulative Distribution Function for Continuous Distributions and Discrete Distributions
We have seen above that, for a discrete distribution (the roll of a die), the probability of an exact value is given directly by the probability mass function. Why does this not carry over to continuous distributions?

There is an infinite number of values between the min and max in the case of continuous distributions, so the probability of any one specific value is 1/infinity, or practically zero. For continuous distributions we therefore work with the probability density function and obtain probabilities by integrating the density over an interval (equivalently, by using the CDF), rather than by evaluating the probability at a single point.
[Table of Content](#0.1)
____
## 16.) Covariance

![07 05 2022_13 13 53_REC](https://user-images.githubusercontent.com/99672298/190981837-3684c83b-3a03-494a-b1e2-c4cec3dbd611.png)
![18 09 2022_19 58 07_REC](https://user-images.githubusercontent.com/99672298/192697712-b2a6684a-673e-4772-82cd-85a2d97c05f7.png)
![04 05 2022_20 05 57_REC](https://user-images.githubusercontent.com/99672298/190981878-f7ef77f3-6ea8-49b1-97c9-909c5baf0ada.png)
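As a small illustration (not from the original notebook), sample covariance can be computed with `numpy.cov`; the data below is hypothetical:

```python
import numpy as np

x = np.array([2, 4, 6, 8, 10])
y = np.array([1, 3, 7, 9, 12])

# np.cov returns the 2x2 covariance matrix; the off-diagonal entry is cov(x, y)
cov_matrix = np.cov(x, y)
print(cov_matrix[0, 1])    # 14.0 for this data (sample covariance, divisor n - 1)
```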
![Covariance-Formula](https://user-images.githubusercontent.com/99672298/190981973-c91ce53d-c933-48e7-bab1-8ef22905c33f.jpg)

[Table of Content](#0.1)
___
## 17.) Correlation

![18 09 2022_20 33 18_REC](https://user-images.githubusercontent.com/99672298/192697640-3c0eef2a-f2a2-4c17-8a44-4c4a9edf4d67.png)
Correlation coefficients take the values between minus one and plus one. The positive correlation signifies that the ranks of both the variables are increasing. On the other hand, the negative correlation signifies that as the rank of one variable is increased, the rank of the other variable is decreased.
Correlation analyses can be used to test for associations in hypothesis testing. The null hypothesis is that there is no association between the variables under study. Thus, the purpose is to investigate the possible association in the underlying variables. It would be incorrect to write the null hypothesis as having no rank correlation between the variables.
A Pearson correlation is used when assessing the relationship between two continuous variables. The non-parametric equivalent to the Pearson correlation is the Spearman correlation (ρ), which is appropriate when at least one of the variables is measured on an ordinal scale.
#### Inferential statistical procedures generally fall into two possible categorizations: parametric and non-parametric. Depending on the level of the data you plan to examine (e.g., nominal, ordinal, continuous), a particular statistical approach should be followed.
#### Parametric tests rely on the assumption that the data you are testing resembles a particular distribution (often a normal or “bell-shaped” distribution).
#### Non-parametric tests are frequently referred to as distribution-free tests because there are not strict assumptions to check in regards to the distribution of the data.
As a general rule of thumb, when the dependent variable’s level of measurement is nominal (categorical) or ordinal, then a non-parametric test should be selected. When the dependent variable is measured on a continuous scale, then a parametric test should typically be selected. Fortunately, the most frequently used parametric analyses have non-parametric counterparts. This can be useful when the assumptions of a parametric test are violated because you can choose the non-parametric alternative as a backup analysis.
Kendall's rank correlation provides a distribution free test of independence and a measure of the strength of dependence between two variables.
Spearman's rank correlation is satisfactory for testing a null hypothesis of independence between two variables but it is difficult to interpret when the null hypothesis is rejected. Kendall's rank correlation improves upon this by reflecting the strength of the dependence between the variables being compared.
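A minimal sketch comparing the three coefficients on hypothetical, roughly linear data with `scipy.stats`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2 * x + rng.normal(scale=0.5, size=100)   # hypothetical, roughly linear relationship

pearson_r, _ = stats.pearsonr(x, y)
spearman_rho, _ = stats.spearmanr(x, y)
kendall_tau, _ = stats.kendalltau(x, y)
print(pearson_r, spearman_rho, kendall_tau)
```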
#### 17.1) Kendall’s Tau and Spearman’s Rank Correlation Coefficient
There are two accepted measures of non-parametric rank correlation: Kendall’s tau and Spearman’s (rho) rank correlation coefficient.

Correlation analyses measure the strength of the relationship between two variables.
Kendall’s Tau and Spearman’s rank correlation coefficient assess statistical associations based on the ranks of the data. Ranking data is carried out on the variables that are separately put in order and are numbered.
Kendall’s Tau: usually smaller values than Spearman’s rho correlation. Calculations based on concordant and discordant pairs. Insensitive to error. P values are more accurate with smaller sample sizes.
Spearman’s rho: usually have larger values than Kendall’s Tau. Calculations based on deviations. Much more sensitive to error and discrepancies in data.
The main advantages of using Kendall’s tau are as follows:
+ The distribution of Kendall’s tau has better statistical properties.
+ The interpretation of Kendall’s tau in terms of the probabilities of observing the agreeable (concordant) and non-agreeable (discordant) pairs is very direct.
+ In most situations, the interpretations of Kendall’s tau and Spearman’s rank correlation coefficient are very similar and thus invariably lead to the same inferences.

#### What is Kendall’s Tau?
Kendall’s Tau is a non-parametric measure of relationships between columns of ranked data. The Tau correlation coefficient returns a value of 0 to 1, where:
+ 0 is no relationship,
+ 1 is a perfect relationship.
A quirk of this test is that it can also produce negative values (i.e. from -1 to 0). Unlike a linear graph, a negative relationship doesn’t mean much with ranked columns (other than you perhaps switched the columns around), so just remove the negative sign when you’re interpreting Tau.

Several versions of Tau exist:
Tau-A and Tau-B are usually used for square tables (with equal columns and rows). Tau-B will adjust for tied ranks.
Tau-C is usually used for rectangular tables. For square tables, Tau-B and Tau-C are essentially the same.
Most statistical packages have Tau-B built in, but you can use the following formula to calculate it by hand:

Kendall’s Tau = (C – D) / (C + D)

where C is the number of concordant pairs and D is the number of discordant pairs.

**Example Problem**
![image](https://user-images.githubusercontent.com/99672298/191551514-9c390703-e03a-48f7-bec0-2799861fc094.png)
Sample Question: Two interviewers ranked 12 candidates (A through L) for a position. The results from most preferred to least preferred are:
Interviewer 1: ABCDEFGHIJKL.
Interviewer 2: ABDCFEHGJILK.
Calculate the Kendall Tau correlation.

Step 1: Make a table of rankings. The first column, “Candidate”, is optional and for reference only. The rankings for Interviewer 1 should be in ascending order (from least to greatest).
![image](https://user-images.githubusercontent.com/99672298/191536745-9f751406-a683-4104-8102-79904f63cc34.png)
![image](https://user-images.githubusercontent.com/99672298/191536859-c7adba85-035b-41b3-b1f1-95e069fe8eae.png)
![image](https://user-images.githubusercontent.com/99672298/191536997-dae41f6b-fcba-4c92-9bca-aedd0d3adba5.png)

#### What are Concordant Pairs and Discordant Pairs?
Concordant pairs and discordant pairs refer to comparing two pairs of data points to see if they “match.” The meaning is slightly different depending on if you are finding these pairs from various coefficients (like Kendall’s Tau) or if you are performing experimental studies and clinical trials.

**1. In Coefficient Calculations**
Concordant pairs and discordant pairs are used in Kendall’s Tau, for Goodman and Kruskal’s Gamma and in Logistic Regression. They are calculated for ordinal (ordered) variables and tell you if there is agreement (or disagreement) between scores. To calculate concordance or discordance, your data must be ordered and placed into pairs.

**Example of Tied, Concordant and Discordant Pairs**
Let’s say you had two interviewers rate a group of twelve job applicants:

![image](https://user-images.githubusercontent.com/99672298/191552206-eb8bf8f9-f933-4376-9090-76a55afae41d.png)
#### Spearman’s rank correlation coefficient is the more widely used rank correlation coefficient.
Symbolically, Spearman’s rank correlation coefficient is denoted by rs. It is given by the following formula:

rs = 1 - (6 ∑di²) / (n(n² - 1))

Here di represents the difference between the two ranks assigned to each item of the data, and n is the number of items.

This formula applies when there are no tied ranks. However, when there are only a few tied ranks, it still provides a sufficiently good approximation of Spearman’s rank correlation coefficient.
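A minimal sketch of this no-ties formula in plain Python (the ranks are hypothetical):

```python
# Spearman's rs = 1 - (6 * sum(d_i^2)) / (n * (n^2 - 1)), valid when there are no tied ranks
rank_x = [1, 2, 3, 4, 5, 6]
rank_y = [2, 1, 4, 3, 6, 5]

n = len(rank_x)
d_squared = sum((rx - ry) ** 2 for rx, ry in zip(rank_x, rank_y))
rs = 1 - (6 * d_squared) / (n * (n ** 2 - 1))
print(rs)   # ~0.829 for these ranks
```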
+ **Key terms:**
**Non-parametric test:** it does not depend upon the assumptions of various underlying distributions; this means that it is distribution free.
**Concordant pairs:** if both members of one observation are larger than their respective members of the other observations
**Discordant pairs:** if the two numbers in one observation differ in opposite directions
#### Spearman's Rank-Order Correlation
This guide will tell you when you should use Spearman's rank-order correlation to analyse your data, what assumptions you have to satisfy, how to calculate it, and how to report it. If you want to know how to run a Spearman correlation in SPSS Statistics, see the Spearman's correlation in SPSS Statistics guide.

**When should you use the Spearman's rank-order correlation?**
The Spearman's rank-order correlation is the nonparametric version of the Pearson product-moment correlation. Spearman's correlation coefficient (ρ, also signified by rs) measures the strength and direction of association between two ranked variables.

**What are the assumptions of the test?**
You need two variables that are either ordinal, interval or ratio (see our Types of Variable guide if you need clarification). Although you would normally hope to use a Pearson product-moment correlation on interval or ratio data, the Spearman correlation can be used when the assumptions of the Pearson correlation are markedly violated. However, Spearman's correlation determines the strength and direction of the monotonic relationship between your two variables rather than the strength and direction of the linear relationship between your two variables, which is what Pearson's correlation determines.

**What is a monotonic relationship?**
A monotonic relationship is a relationship that does one of the following: (1) as the value of one variable increases, so does the value of the other variable; or (2) as the value of one variable increases, the other variable value decreases. Examples of monotonic and non-monotonic relationships are presented in the diagram below:

![image](https://user-images.githubusercontent.com/99672298/191515036-cfc7f20c-3734-40b4-85fe-50a2d72e8d5d.png)
**Why is a monotonic relationship important to Spearman's correlation?**
Spearman's correlation measures the strength and direction of monotonic association between two variables. Monotonicity is "less restrictive" than that of a linear relationship. For example, the middle image above shows a relationship that is monotonic, but not linear.

A monotonic relationship is not strictly an assumption of Spearman's correlation. That is, you can run a Spearman's correlation on a non-monotonic relationship to determine if there is a monotonic component to the association. However, you would normally pick a measure of association, such as Spearman's correlation, that fits the pattern of the observed data. That is, if a scatterplot shows that the relationship between your two variables looks monotonic you would run a Spearman's correlation because this will then measure the strength and direction of this monotonic relationship. On the other hand if, for example, the relationship appears linear (assessed via scatterplot) you would run a Pearson's correlation because this will measure the strength and direction of any linear relationship. You will not always be able to visually check whether you have a monotonic relationship, so in this case, you might run a Spearman's correlation anyway.
**How to rank data?**
In some cases your data might already be ranked, but often you will find that you need to rank the data yourself (or use SPSS Statistics to do it for you). Thankfully, ranking data is not a difficult task and is easily achieved by working through your data in a table. Let us consider the following example data regarding the marks achieved in a maths and English exam:

![image](https://user-images.githubusercontent.com/99672298/191515171-569dad2e-2cc3-427a-b8fb-c7de11fd46f0.png)
You need to rank the scores for maths and English separately. The score with the highest value should be labelled "1" and the lowest score should be labelled "10" (if your data set has more than 10 cases then the lowest score will be how many cases you have). Look carefully at the two individuals that scored 61 in the English exam (highlighted in bold). Notice their joint rank of 6.5. This is because when you have two identical values in the data (called a "tie"), you need to take the average of the ranks that they would have otherwise occupied. We do this because, in this example, we have no way of knowing which score should be put in rank 6 and which score should be ranked 7. Therefore, you will notice that the ranks of 6 and 7 do not exist for English. These two ranks have been averaged ((6 + 7)/2 = 6.5) and assigned to each of these "tied" scores.
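The tie-averaging described above is what `pandas.Series.rank` does by default. A sketch with hypothetical English scores (note the two scores of 61 share the rank 6.5):

```python
import pandas as pd

english = pd.Series([56, 75, 45, 71, 61, 64, 58, 80, 76, 61])   # hypothetical exam scores

# Highest score gets rank 1; tied scores receive the average of the ranks they would occupy
ranks = english.rank(ascending=False, method="average")
print(pd.DataFrame({"score": english, "rank": ranks}))
```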
**What is the definition of Spearman's rank-order correlation?**
There are two methods to calculate Spearman's correlation depending on whether: (1) your data does not have tied ranks or (2) your data has tied ranks. The formula for when there are no tied ranks is:

![image](https://user-images.githubusercontent.com/99672298/191515267-2ddebc4b-41e2-42e5-abdd-1ab835411b64.png)
**What values can the Spearman correlation coefficient, rs, take?**
The Spearman correlation coefficient, rs, can take values from +1 to -1. An rs of +1 indicates a perfect association of ranks, an rs of zero indicates no association between ranks and an rs of -1 indicates a perfect negative association of ranks. The closer rs is to zero, the weaker the association between the ranks.

**An example of calculating Spearman's correlation**
To calculate a Spearman rank-order correlation on data without any ties we will use the following data:

![image](https://user-images.githubusercontent.com/99672298/191515724-5180d659-73ab-49a6-bb32-b9a58797599d.png)
![image](https://user-images.githubusercontent.com/99672298/191515822-6e2df8f4-f7fe-4214-afc7-ce289c7f8d68.png)

### Pearson's correlation
It is a measure of the linear relationship between two continuous random variables. It does not assume normality although it does assume finite variances and finite covariance. When the variables are bivariate normal, Pearson's correlation provides a complete description of the association.

**Spearman's correlation applies to ranks and so provides a measure of a monotonic relationship between two continuous random variables. It is also useful with ordinal data and is robust to outliers (unlike Pearson's correlation).**
**The distribution of either correlation coefficient will depend on the underlying distribution, although both are asymptotically normal because of the central limit theorem.**
![31 03 2022_12 27 48_REC](https://user-images.githubusercontent.com/99672298/190981901-f8fd0acf-458d-47d1-ba48-ebcaef9abf37.png)
[Table of Content](#0.1)
## 18.) Permutation and Combination

Permutation and combination form the principles of counting and they are applied in various situations. A permutation is a count of the different arrangements which can be made from a given set of things. In permutation the details matter, as the order or sequence is important. Writing the names of three countries {USA, Brazil, Australia} or {Australia, USA, Brazil} or {Brazil, Australia, USA} is different, and this sequence in which the names of the countries are written is important. In combinations, the names of the three countries form just a single group, and the sequence or order does not matter. Let us learn more about permutation and combination in the content below.
#### What are Permutation and Combination?
Permutation and combination are the methods employed in counting how many outcomes are possible in various situations. Permutations are understood as arrangements and combinations are understood as selections. As per the fundamental principle of counting, there are the sum rule and the product rule to make counting easy.

Suppose there are 14 boys and 9 girls. If a boy or a girl has to be selected to be the monitor of the class, the teacher can select 1 out of 14 boys or 1 out of 9 girls. She can do it in 14 + 9 = 23 ways (using the sum rule of counting). Let us look at another scenario. Suppose Sam usually takes one main course and a drink. Today he has the choice of burger, pizza, hot dog, watermelon juice, and orange juice. What are all the possible combinations that he can try? There are 3 main course choices and 2 drink choices. We multiply to find the combinations: 3 × 2 = 6. Thus Sam can try 6 combinations using the product rule of counting. This can be shown using tree diagrams as illustrated below.
![image](https://user-images.githubusercontent.com/99672298/191480581-57bc71b0-8cfc-4a06-bc10-83b63b24911f.png)
In order to understand permutation and combination, the concept of factorials has to be recalled. The product of the first n natural numbers is n! The number of ways of arranging n unlike objects is n!.
#### Permutations
A permutation is an arrangement, in a definite order, of a number of objects taken some or all at a time. Let us take the 10 digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. The number of different 4-digit PINs with no repeated digits which can be formed using these 10 digits is 5040: P(10, 4) = 5040. This is a simple example of permutations. The number of permutations of 4 digits taken from 10 digits equals the factorial of 10 divided by the factorial of the difference of 10 and 4. The permutations are easily calculated using:

![image](https://user-images.githubusercontent.com/99672298/191480993-3be05cf7-8380-4128-a09e-dd0c1dce7976.png)
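This count can be checked with Python's standard library; `math.perm(n, r)` computes n!/(n - r)! (available in Python 3.8+):

```python
import math

# Number of 4-digit PINs with no repeated digits, chosen from the 10 digits 0-9
print(math.perm(10, 4))                                 # 5040
print(math.factorial(10) // math.factorial(10 - 4))     # same result: 10! / (10 - 4)!
```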
#### Combinations
A combination is all about grouping. The number of different groups which can be formed from the available things can be calculated using combinations. Let us try to understand this with a simple example. A team of 2 is formed from 5 students (William, James, Noah, Logan, and Oliver). The combination of 'r' persons from the available 'n' persons is given as ![image](https://user-images.githubusercontent.com/99672298/191481059-8084f738-c063-4279-b832-74c9595745f4.png). The team of 2 can be formed in the following 10 ways:

+ William James
+ William Noah
+ William Logan
+ William Oliver
+ James Noah
+ James Logan
+ James Oliver
+ Logan Noah
+ Logan Oliver
+ Oliver Noah
This is a simple example of combinations: C(5, 2) = 10.

![image](https://user-images.githubusercontent.com/99672298/191480015-a22f7808-d8fa-47be-8ba4-483ee8bc2cbb.png)
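The same count can be checked with `math.comb` (Python 3.8+), and the pairs themselves can be listed with `itertools.combinations`:

```python
import math
from itertools import combinations

# Number of ways to choose a team of 2 from 5 students
print(math.comb(5, 2))    # 10

students = ["William", "James", "Noah", "Logan", "Oliver"]
print(list(combinations(students, 2)))   # the 10 possible teams
```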
___
## Author
[Table of Content](#0.1)
**♡ Thank you for taking the time ♡**