Friday, 26 May 2017

6 FEARS THAT CAN DESTROY YOU

6 FEARS THAT ARE DANGEROUS FOR US :

1) FEAR OF POVERTY

2) FEAR OF CRITICISM

3) FEAR OF ILL HEALTH

4) FEAR OF LOSS OF LOVE

5) FEAR OF OLD AGE

6) FEAR OF DEATH
For more detail, watch the video by clicking on the link below.
Link: FEARS THAT CAN DESTROY YOU

Thursday, 25 May 2017

How to Concentrate On Studies

How to concentrate on studies is a very important issue.
Why? One big reason is the distraction caused by mobile phones, which makes concentrating very challenging.
Tips are easy to find; the important thing is to implement them.
1. Make rules, then obey them.
2. After completing a task, reward yourself.
3. Switch off your mobile or deactivate your Facebook account.
For more, click on the link and watch the video.
Link: How to Concentrate On Studies

Daily Routine of Successful People

There are some habits that successful people follow to become successful and to invest in their future:
read 10 pages of a good book daily, cut 125 calories daily, walk daily, and drink 2 litres of water daily.
After 31 months you can change your life and your future as well. For more details,
watch the video by clicking on the link.
Link: Daily Routine of Successful People - Motivational Video in Hindi - The Compound Effect summary

How to show Youtube Adsense Ads on Website Or Blogger

If you are a blogger and you are not earning from your blog, don't worry.
We are here to advise you on how to connect your blog with an AdSense account. Once connected, ads will be shown on your blog and you can start earning dollars.
For the video, click on the link.
Link: How to show Youtube Adsense Ads on Website Or Blogger

SECRETS OF MILLIONAIRE MIND

SECRETS OF MILLIONAIRE MIND - 
INSPIRATION FOR THOUSANDS OF PEOPLE.
Yes, it is a very inspirational book. If you want to listen to its summary in Urdu | Hindi, you can click on the link and watch the video.
Link: SECRETS OF MILLIONAIRE MIND

Thursday, 11 May 2017

QTIA report


    
    

    

    
    






Course Facilitator: Sir ARSHIAN SHARIF


 Prepared by:

·      Rooshan





Table of contents:
Chapter 1: How to create a data file?          

Chapter 2: OUTLIERS
·         Definition
·         Types of Outliers
            Uni-Variate Outliers
·         Definition
·         Steps
·         How to identify?
·         How to remove?
·         Output and Interpretation
Multi-Variate Outliers
·         Definition
·         Steps
·         How to identify?
·         How to remove?
·         Output and Interpretation

Chapter 3: RELIABILITY ANALYSIS
·         Definition
·         Scale to check Reliability
·         Requirements
·         How to test Reliability
·         Output and Interpretation

Chapter 4: RANDOMNESS ANALYSIS
·         Definition
·         Hypothesis
·         Level of significance
·         Steps to perform Randomness Analysis
·         Output and Interpretation

Chapter 5: NORMALITY ANALYSIS
·         Definition
·         Assumptions
·         Methods to check Normality
·         Hypothesis
·         Steps to perform Normality Analysis
·         Results and Interpretation

Chapter 6: CORRELATION
·         Definition
·         Types of Correlation
·         Coefficient Of Correlation
·         Bi-Variate Correlation
·         Hypothesis
·         Range to check Correlation
·         Steps to perform Correlation 

Chapter 7: FACTOR ANALYSIS
·         Definition
·         Explanation
·         Illustration and Benefits
·         Types of Factor Analysis       
Exploratory Factor Analysis:
·         Definition
·         Explanation
·         Differences
·         Assumptions
·         Fitnesses
·         Steps to perform
·         Interpretation
·         Ways to remove Cross Loadings
·         Result
                       


Chapter 8: REGRESSION ANALYSIS
·         Definition
·         Explanation
·         Assumptions
·         Regression Model
·         How to perform Regression Analysis
·         Result
·         Multicollinearity
·         Pop up question
·         Equations

Chapter 9: CONFIRMATORY FACTOR ANALYSIS
·         Definition
·         Differences
·         Fitnesses
·         Introduction to AMOS
·         How to perform CFA

Chapter 10: STRUCTURAL EQUATION MODELLING
·         Definition
·         Goals
·         Steps to perform SEM
·         Hypothesis
·         Interpretation
                                                                                                                       
Chapter 11: PATH ANALYSIS
·         Definition
·         Steps to perform Path Analysis
·         Differences



Acknowledgement:

First of all, we would like to thank ALLAH ALMIGHTY (the most gracious, the most powerful of all) for giving us the ability to learn and to apply that learning for the benefit of humankind. We are highly indebted to, and want to express our sincere respect and gratitude towards, our course mentor SIR ARSHIAN SHARIF, who gave his continuous support, supervision, motivation and guidance with attention and care from time to time. He truly remained the driving spirit of this course, and his knowledge of the subject made it easier for us to write this report. He always helped us clarify abstruse concepts. His simple and friendly way of teaching made it easy for us to understand each and every concept, and gave us the confidence to implement those concepts in this report.
Last but never the least, we would like to thank all our group members, who worked hard and cooperated with each other in writing this report.
A teacher is a compass that activates the magnets of curiosity, knowledge, and wisdom in the pupils. - Ever Garrison
“You are like that”.












CHAPTER: 1       HOW TO CREATE A DATA FILE
How to create a data file?

Click the Variable View tab.


Type the names of your variables under the Name column and enter the other information about each variable. Then set the Values and choose the Measure (scale, ordinal or nominal) according to your data.










Step 1:
Step 2:

Step 3:
Now click the Data View tab.
Variable names that you entered in variable view will now be showing in the columns of data view.
Like this:


Now you can enter values in each case.
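If you prefer working in code rather than the SPSS data editor, a minimal sketch of building the same kind of data file with Python/pandas is shown below. The item names (SR_NO, BP1, ...) mirror our questionnaire, but the file name and the values shown are made-up examples, not our real responses.

```python
# A minimal sketch (not part of the SPSS workflow) of creating a data file
# with pandas. Column names mirror our questionnaire items; the values and
# the file name "data_file.csv" are illustrative assumptions.
import pandas as pd

data = pd.DataFrame({
    "SR_NO": [1, 2, 3],   # serial number of each respondent
    "BP1": [4, 5, 3],     # Brand Performance item 1 (Likert 1-5)
    "BP2": [4, 4, 2],
    "BP3": [5, 5, 3],
})

data.to_csv("data_file.csv", index=False)   # save the data file
print(data.dtypes)                          # check that every variable is numeric
```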

This is the variable view of our data file that we collected for our project and report.


This is the data view of our data file.



CHAPTER: 2                    OUTLIERS
What are Outliers?

Definition:

 An outlier is an observation point that is distant from other observations.
OR
An outlier is an observation that lies at an abnormal distance from other observations. We remove outliers because we want homogenous data.

Types of Outliers:

There are two types of Outliers.
1. Uni-Variate Outlier
2. Multi-Variate Outlier

Uni-Variate Outliers:

A Uni-Variate outlier is a data point that consists of an extreme value on one variable. It can be identified by a box plot, which is a useful graphical display for describing the behaviour of the data.

STEPS:
Graph → Legacy Dialogs → Boxplot.
Uni-Variate outlier is further divided into two types.

1.      Mild Outlier (represented by о)

2.      Sure Outlier (represented by *)


How to identify Uni-Variate Outliers?
Graph → Legacy Dialogs → Boxplot.

Check mark on summaries of separate variables.
Send selected variable (BP1) to Boxes represent and click OK.
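For readers without SPSS, the same box-plot logic can be sketched in Python. The 1.5×IQR rule corresponds to mild outliers (the о markers in SPSS) and the 3×IQR rule to sure/extreme outliers (the * markers). The file name and the BP1 column are assumptions carried over from the earlier sketch.

```python
# A hedged sketch of the box-plot rule behind SPSS's о and * markers.
# Assumes a numeric column BP1 in data_file.csv; thresholds follow the usual
# 1.5*IQR (mild) and 3*IQR (extreme/"sure") conventions.
import pandas as pd

data = pd.read_csv("data_file.csv")
q1, q3 = data["BP1"].quantile([0.25, 0.75])
iqr = q3 - q1

mild = data[(data["BP1"] < q1 - 1.5 * iqr) | (data["BP1"] > q3 + 1.5 * iqr)]
sure = data[(data["BP1"] < q1 - 3.0 * iqr) | (data["BP1"] > q3 + 3.0 * iqr)]

print("Mild outlier cases:", list(mild.index))
print("Sure outlier cases:", list(sure.index))
```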


OUTLIERS DETECTED:




How to remove Outliers?

When we ran the test the first time, we identified these outliers:
158, 122, 120, 119.
Select the outliers and clear/delete them from the data.
The second time, we identified these outliers:
118, 117, 114, 113.
Select the outliers and clear/delete them from the data.
When we ran the test the third time, we identified these outliers:
112, 92, 86.
Select the outliers and clear/delete them from the data.
The fourth time, we identified these outliers:
96, 76, 59, 51.
Select the outliers and clear/delete them from the data.

On the final run we identified these outliers:
36, 24.
Select the outliers and clear/delete them from the data as well.

Output:
Outliers removed.

Interpretation:
17 Uni-Variate Outliers were identified and dropped from the data set.

Multi-Variate Outliers:
A multivariate outlier is a combination of unusual scores on at least two variables. It can be calculated by Mahalanobis Distance.

STEPS:
Analyze → Regression → Linear.
Formula: M_OUT = 1-CDF.CHISQ (MAH_1, No. of items)

How to identify Multi-Variate Outliers?
Create SR_NO column in Variable View.
SR_NO column created.
Analyze → Regression → Linear.

Send all the items to Independent Variable and SR_NO to Dependent Variable.
Click save and tick on Mahalanobis. Then OK.
Go to Variable View. MAH_1 variable executed.
Then go to Transform → Compute Variable.
Write M_OUT in Target Variable and put formula 1-CDF.CHISQ (?,?) in Numeric Expression.
Note: Transfer MAH_1 variable in Numeric Expression and write No. of items.
M_OUT = 1-CDF.CHISQ (MAH_1,17)

Go to Variable View, M_OUT Variable created.
Go to Data View.
Now we will find the values in the M_OUT column which are less than 0.001. Those cases are considered Multi-Variate Outliers.
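The same screening can be sketched in Python. The snippet below reproduces the logic of M_OUT = 1 - CDF.CHISQ(MAH_1, 17) using scipy's chi-squared distribution, assuming the 17 questionnaire items are the non-SR_NO columns of the (hypothetical) data file used in the earlier sketches.

```python
# A sketch of the Mahalanobis-distance screening described above, assuming the
# questionnaire items are all columns except SR_NO. It mirrors
# M_OUT = 1 - CDF.CHISQ(MAH_1, number of items) from SPSS.
import numpy as np
import pandas as pd
from scipy.stats import chi2

data = pd.read_csv("data_file.csv")
items = [c for c in data.columns if c != "SR_NO"]   # the 17 item columns
X = data[items].to_numpy(dtype=float)

diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
mah_sq = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)   # squared Mahalanobis distance

m_out = 1 - chi2.cdf(mah_sq, df=len(items))
multivariate_outliers = data.index[m_out < 0.001]
print("Multi-variate outlier cases:", list(multivariate_outliers))
```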













How to remove?

Clear all Multi-Variate Outliers.
Interpretation:

For the detection of Multi-Variate Outliers, the Mahalanobis distance with the critical chi-square value at p < 0.001 is used. The Mahalanobis distance procedure identified 8 Multi-Variate Outliers.
After removing a total of 25 invalid responses, the final sample size of the present research is 149.









CHAPTER: 3                RELIABILITY ANALYSIS
Definition:

Reliability Analysis measures the overall consistency of the items that are used to define a scale.
Measuring reliability is especially important for primary data, because if the data is not reliable then the end results and forecasts will not be reliable either. We collected primary data, so it is our responsibility to check its quality.
Reliability can be assessed with Cronbach's Alpha, and its minimum acceptable value is 0.5 (50%).
Scale to check the Reliability:

0.5 ─ Acceptable
0.6 ─ Fair
0.7 ─ Good
0.8 ─ Excellent
0.9 And above ─ Super
Requirements for the Reliability Analysis:

1.      There must be at least 2 variables.
2.      Data must be numeric.







How to test Reliability?
STEPS:
Go to Analyze → Scale → Reliability Analysis.

Move BP1, BP2, BP3, BP4, and BP5 to the items column.
Click on Statistics, Tick mark on Scale if item deleted and then Continue.
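Cronbach's Alpha can also be computed directly from the item data with the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the total score). The sketch below assumes the BP1..BP5 columns from the earlier (hypothetical) data file.

```python
# A minimal sketch of Cronbach's alpha for the five Brand Performance items,
# using alpha = k/(k-1) * (1 - sum(item variances)/variance(total score)).
# Assumes columns BP1..BP5 exist in the data file.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

data = pd.read_csv("data_file.csv")
print(round(cronbach_alpha(data[["BP1", "BP2", "BP3", "BP4", "BP5"]]), 3))
```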
Output:



Interpretation:
The above picture shows the output. The second table reports the overall reliability.
In our result the value of Cronbach's Alpha is 0.855, which is excellent according to our scale and well above 0.5. So we can conclude that our data is reliable.










CHAPTER: 4               RANDOMNESS ANALYSIS
What is Randomness?
Definition:
In statistics, “random implies that all possible outcomes are known, but not which outcome will occur”.
A randomness analysis is performed to determine whether the collected data are random. If the data are not random, then the sample will not be representative of the whole population.
It is tested on the basis of Medians and number of runs in the data by “Runs Test”.
Run test of Randomness:
The Runs Test is a non-parametric method that is used in cases where a parametric test cannot be applied. This test checks whether or not the number of runs is appropriate for a randomly generated series.
If a variable has a p-value higher than α, it is considered random and a representative part of the target population, and the results generated from this variable will be equally applicable to the whole population under the hypothesis.
Hypothesis:
Hо = Data is random.
Hᴀ = Data is not random.
If the data is random → you can generalize the policies.
If the data is not random → you have to make specific policies, because the results cannot be generalized.
Level of significance: (Sig.value/ P-value/ Prob. All are same)
Decide benchmark (Limit the confidence interval 90% and chances of error 10%)
Chances of error:
1% → 5% → 10%
STEPS:
Analyze → Non-Parametric Tests → Legacy Dialogs → Runs.
Now move BL1, BL2, BL3, BL4, BL5, BL6 to Test Variable List and Click OK.
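The Runs Test itself is simple enough to sketch in Python. The version below uses the median cut point and the normal approximation; SPSS's exact handling of ties may differ slightly, so p-values will be close but not necessarily identical. The BL1 column and file name are carried over from the earlier hypothetical sketches.

```python
# A hedged sketch of the Runs Test (median cut point, normal approximation),
# applied to one item such as BL1.
import numpy as np
import pandas as pd
from scipy.stats import norm

def runs_test(x: np.ndarray) -> float:
    cut = np.median(x)
    signs = x >= cut                      # split the series around the median
    n1, n2 = signs.sum(), (~signs).sum()
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])   # observed number of runs
    expected = 2 * n1 * n2 / (n1 + n2) + 1
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - expected) / np.sqrt(variance)
    return 2 * (1 - norm.cdf(abs(z)))     # two-sided p-value

data = pd.read_csv("data_file.csv")
print("BL1 p-value:", round(runs_test(data["BL1"].to_numpy(float)), 3))
```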

Output:
Hypothesis and Interpretations:
Hypothesis of BL1:
Hо = BL1 is random.
Hᴀ = BL1 is not random.
Interpretation:
Based on the Sig. value, which is less than 0.1 (0.017 < 0.1), we reject the null hypothesis and accept the alternate hypothesis.
Hᴀ = BL1 is not random.
Our data is not random, so the sample is not representative of the whole population.
Hypothesis of BL2:
Hо = BL2 is random.
Hᴀ = BL2 is not random.
Interpretation:
Based on the Sig. value, which is greater than 0.1 (0.234 > 0.1), we accept the null hypothesis and reject the alternate hypothesis.
Hо = BL2 is random.
Our data is random, so the sample is representative of the whole population.
Hypothesis of BL3:
Hо = BL3 is random.
Hᴀ = BL3 is not random.
Interpretation:
Based on the Sig. value, which is greater than 0.1 (0.280 > 0.1), we accept the null hypothesis and reject the alternate hypothesis.
Hо = BL3 is random.
Our data is random, so the sample is representative of the whole population.
Hypothesis of BL4:
Hо = BL4 is random.
Hᴀ = BL4 is not random.
Interpretation:
Based on the Sig. value, which is greater than 0.1 (0.247 > 0.1), we accept the null hypothesis and reject the alternate hypothesis.
Hо = BL4 is random.
Our data is random, so the sample is representative of the whole population.
Hypothesis of BL5:
Hо = BL5 is random.
Hᴀ = BL5 is not random.
Interpretation:
Based on the Sig. value, which is greater than 0.1 (0.418 > 0.1), we accept the null hypothesis and reject the alternate hypothesis.
Hо = BL5 is random.
Our data is random, so the sample is representative of the whole population.
Hypothesis of BL6:
Hо = BL6 is random.
Hᴀ = BL6 is not random.
Interpretation:
Based on the Sig. value, which is less than 0.1 (0.061 < 0.1), we reject the null hypothesis and accept the alternate hypothesis.
Hᴀ = BL6 is not random.
Our data is not random, so the sample is not representative of the whole population.

























CHAPTER: 5               NORMALITY ANALYSIS
What is Normality?
Definition:
A normality test is used to determine whether sample data has been drawn from a normally distributed population or not. The assumption of normality is just the supposition that the underlying random variable of interest is distributed normally.
Assumptions:
If the data is normally distributed then it will be helpful in forecasting and it means that we can predict the whole population from the given sample.
If the data is not normally distributed then it will not be helpful in forecasting and it means that we cannot predict the whole population from the given sample.
How to check Normality?
There are two 2 ways or methods to check the Normality.
GRAPHICAL METHOD:
We can check normality by graphical method with the help of following.
·         Histogram
·         Stem and Leaf Plot
·         Box and Whisker Plot
NUMERICAL METHOD:
We can check normality by numerical method with the help of following.
·         Kolmogorov-Smirnov (K-S Test)
·         Shapiro-Wilk (S-W Test)
Level of significance: (Sig.value/ P-value/ Prob. All are same)
Decide benchmark (Limit the confidence interval 90% and chances of error 10%)
Chances of error:
1% → 5% → 10%
Hypothesis:
Hо = Data is normally distributed.
Hᴀ = Data is not normally distributed.
STEPS:
Go to Transform → Compute Variable.
Write BP in Target Variable and (BP1+BP2+BP3+BP4+BP5)/5 in Numeric Expression.
Variables Created.
Now go to Analyze → Descriptive Statistics → Explore.
Send BP to Dependent List.
Select Plots and check mark on Histogram, Normality Plots with tests.
Click Continue and then OK.
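The two numerical normality tests can also be run with scipy, applied to the composite BP score (the average of BP1..BP5, as computed above). Note that SPSS applies the Lilliefors correction to the K-S test, so the p-values will not match this sketch exactly; the data file and columns are again assumptions.

```python
# A sketch of the Kolmogorov-Smirnov and Shapiro-Wilk tests with scipy,
# applied to the composite BP score (average of BP1..BP5).
import pandas as pd
from scipy.stats import kstest, shapiro

data = pd.read_csv("data_file.csv")
bp = data[["BP1", "BP2", "BP3", "BP4", "BP5"]].mean(axis=1)

# K-S against a normal distribution with the sample mean and standard deviation
ks_stat, ks_p = kstest(bp, "norm", args=(bp.mean(), bp.std(ddof=1)))
sw_stat, sw_p = shapiro(bp)

print(f"K-S: statistic={ks_stat:.3f}, p={ks_p:.3f}")
print(f"S-W: statistic={sw_stat:.3f}, p={sw_p:.3f}")
```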
Output:
Results of Graphical Method:

Histogram:


Negatively Skewed.

Stem and Leaf Plot:


Negatively Skewed.

Box and Whisker Plot:


Negatively Skewed.

Results of Numerical Method:
Kolmogorov-Smirnov (K-S Test) and Shapiro-Wilk (S-W Test).

Tests of Normality

        Kolmogorov-Smirnov(a)            Shapiro-Wilk
        Statistic    df    Sig.          Statistic    df    Sig.
BP      .066         149   .200          .978         149   .019

Hypothesis:
Kolmogorov-Smirnov (K-S Test):
Hо = BP is normally distributed.
Hᴀ = BP is not normally distributed.
Interpretation:
As the Sig.value is greater than 0.1 (0.200>0.1), we reject alternate hypothesis and accept null hypothesis i-e, Hо = BP is normally distributed.
It explains that data is normally distributed so we can predict the whole population from the given sample.
Shapiro-Wilk (S-W Test):
Hо = BP is normally distributed.
Hᴀ = BP is not normally distributed.
Interpretation:
As the Sig.value is less than 0.1 (0.019<0.1), we reject null hypothesis and accept alternate hypothesis    i-e, Hᴀ = BP is not normally distributed.
It explains that data is not normally distributed so we cannot predict the whole population from the given sample.







CHAPTER: 6             CORRELATION ANALYSIS
What is Correlation?
Definition:
Correlation literally means co-relation, i.e., a relationship between variables.
We use this test to check the correlation between the variables. It is denoted by "r" for a sample and by "ρ" for a population.
OR
Correlation analysis is a method of statistical evaluation used to study the strength of a relationship between two, numerically measured, continuous variables.

Types of Correlation:
There are three types of correlation.
1.      Positive correlation. (Direct relationship b/w the variables)
2.      Negative correlation. (Indirect relationship b/w the variables)
3.      No correlation. (No impact on each other)

Coefficient Of Correlation:
The value of the coefficient of correlation lies between -1 and +1:
  +1 represents a perfect positive linear relationship (as X increases, Y increases).
  0 represents no linear relationship (X and Y have no pattern).
  -1 represents a perfect inverse relationship (as X increases, Y decreases).

Bi-variate/Pearson/Simple Correlation:
It tests the strength of the relationship between the variables.
A bi-variate analysis may show us a strong relationship between the variables, but in reality this strong relationship could be the result of some other extraneous factors.

Hypothesis:
Hо = There is no correlation b/w the variables.
Hᴀ = There is a correlation b/w the variables.

Level of significance; 10%

Range to check the correlation:
0.5 (50%) or above ─ strong correlation.
Below 0.5 (50%) ─ weak correlation.

STEPS:
Transform → Compute Variable







Merge all the items of all variables separately.

Write BP in Target Variable and (BP1+BP2+BP3+BP4+BP5)/5 in Numeric Expression.

Write BR in Target Variable and (BR1+BR2+BR3+BR4)/4 in Numeric Expression.

Write CS in Target Variable and (CS1+CS2)/2 in Numeric Expression.

Write BL in Target Variable and (BL1+BL2+BL3+BL4+BL5+BL6)/6 in Numeric Expression.
Click OK.

Variables created in variable view.
Variables created in Data view.

Now go to Analyze → Correlate → Bivariate.
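The same bi-variate (Pearson) correlations can be sketched in Python. The composites are built as item averages, exactly as in the SPSS steps above; the file name and item columns remain assumptions.

```python
# A sketch of the Pearson correlation matrix for the four composite variables.
import pandas as pd
from scipy.stats import pearsonr

data = pd.read_csv("data_file.csv")
data["BP"] = data[["BP1", "BP2", "BP3", "BP4", "BP5"]].mean(axis=1)
data["BR"] = data[["BR1", "BR2", "BR3", "BR4"]].mean(axis=1)
data["CS"] = data[["CS1", "CS2"]].mean(axis=1)
data["BL"] = data[["BL1", "BL2", "BL3", "BL4", "BL5", "BL6"]].mean(axis=1)

print(data[["BP", "BR", "CS", "BL"]].corr(method="pearson").round(2))

r, p = pearsonr(data["BP"], data["BR"])        # one pair with its p-value
print(f"BP-BR: r={r:.2f}, p={p:.3f}")
```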


Output:
Interpretation:
In this table we have four variables.
1.      Brand Performance.
2.      Brand Reputation.
3.      Customer Satisfaction.
4.      Brand loyalty.
There are 6 correlations in the above table.
BP & BR, BR & CS, CS & BL, BP & CS, BP & BL, BR & BL.
Brand Performance & Brand Reputation:
Hо = There is no correlation b/w BP & BR.
Hᴀ = There is a correlation b/w BP & BR.
Result:
As the sig value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = There is a correlation b/w BP & BR.
There is a +ve correlation b/w BP & BR and both the variables are 78% correlated.


Brand Reputation & Customer Satisfaction:
Hо = There is no correlation b/w BR & CS.
Hᴀ = There is a correlation b/w BR & CS.
Result:
As the sig value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = There is a correlation b/w BR & CS.
There is a +ve correlation b/w BR & CS and both the variables are 73% correlated.

Customer Satisfaction & Brand Loyalty:
Hо = There is no correlation b/w CS & BL.
Hᴀ = There is a correlation b/w CS & BL.
Result:
As the sig value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = There is a correlation b/w CS & BL.
There is a +ve correlation b/w CS & BL and both the variables are 70% correlated.

Brand Performance & Customer Satisfaction:
Hо = There is no correlation b/w BP & CS.
Hᴀ = There is a correlation b/w BP & CS.
Result:
As the sig value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = There is a correlation b/w BP & CS.
There is a +ve correlation b/w BP & CS and both the variables are 72% correlated.

Brand Performance & Brand Loyalty:
Hо = There is no correlation b/w BP & BL.
Hᴀ = There is a correlation b/w BP & BL.
Result:
As the sig value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = There is a correlation b/w BP & BL.
There is a +ve correlation b/w BP & BLand both the variables are 69% correlated.

Brand Reputation & Brand Loyalty:
Hо = There is no correlation b/w BR & BL.
Hᴀ = There is a correlation b/w BR & BL.
Result:
As the sig value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = There is a correlation b/w BR & BL.
There is a +ve correlation b/w BR & BL and both the variables are 69% correlated.








CHAPTER: 7               FACTOR ANALYSIS

Exploratory Factor Analysis

Regression Analysis

Path Analysis

Confirmatory Factor Analysis

Structural Equation Modelling
 









Definition:
It is a data reduction technique used to represent a wide range of items or attributes on a smaller number of dimensions.
Explanation:
It is an important data reduction technique on which other techniques, such as regression and structural equation modelling, depend. Items are reduced on the basis of similarity, where similarity is measured in terms of correlation.
For instance: the Job Satisfaction variable has 5 items, JS1, JS2, JS3, JS4 and JS5. This technique will form a single variable named "JS" which reflects all five items.
If only 3 of the five items are similar and the other 2 are not correlated, then this technique will form a single dimension from the 3 similar, correlated items, and the other two will be eliminated.
Illustration and benefits of Factor Analysis:
Say you are a retailer and want to increase customer footfalls through brand promotion and better understanding of customers buying behavior. You could effectively use factor analysis to provide you with better insight on customer demographics and buying behavior. This could help you target your market and achieve higher sales.
Factor analysis is an inexpensive and simple to use statistical tool and can be used in variety of ways.
It can be used to underline a lot of dormant factors that other tools may not be able to highlight.
The main benefits of factor analysis are that the analyst can focus their attention on the unique core elements instead of the redundant attributes, and as a data ‘pre-processor’ for regression models.
Types of Factor Analysis:
There are mainly two types of factor analysis that are used for different kinds of market research and analysis.
·         Exploratory Factor Analysis (EFA) is used to measure the underlying factors that affect the variables in a data structure without setting any predefined structure to the outcome.
·         Confirmatory Factor Analysis (CFA) on the other hand is used as tool in market research and analysis to reconfirm the effects and correlation of an existing set of predetermined factors and variables that affect these factors.
EXPLORATORY FACTOR ANALYSIS:
Definition:
Exploratory factor analysis is a statistical technique that is used to reduce data to a smaller set of summary variables and to explore the underlying theoretical structure of the phenomena. It is used to identify the structure of the relationship between the variables and the respondents.
Explanation:
EFA comes from the word explore. In this technique we do not know in advance whether correlation exists among all the items of a variable. So in exploratory factor analysis we build the factors, and factors can still be eliminated or added. In CFA, by contrast, once the factors are decided they cannot be changed.
Difference between CFA and EFA:
CFA is run after performing EFA. In EFA factors can be eliminated and added, whereas in CFA factors cannot be eliminated or added; the factors that were confirmed using EFA are used further in CFA. This matters because CFA is a pre-requisite for running the SEM technique.
Assumptions:
1.      Variables used should be metric either ratio or interval.
2.      Sample size: Sample size should be more than 200. 
3.      Homogeneous sample: A sample should be homogenous.  Violation of this assumption increases the sample size as the number of variables increases.  Reliability analysis is conducted to check the homogeneity between variables.
4.      In exploratory factor analysis, multivariate normality is not required.
5.      There should be no outliers in the data

FITNESS OF EFA:

The following fitness measures tell whether factor analysis can be performed or not.

·         KMO (Kaiser-Meyer-Olkin test): its value should be greater than or equal to 0.7, which means that factors can be formed.
·         Value of the determinant: it should be less than 0.0001, which means factor analysis can be performed.
·         Bartlett's test: for this test we form hypotheses.
Hо = Factor analysis cannot be performed.
Hᴀ = Factor analysis can be performed.

If the significance value is below 0.10 we accept the alternate hypothesis, whereas if it is above 0.10 we accept the null hypothesis and reject the alternate hypothesis.
Out of these three fitnesses, at least two should be fulfilled before we proceed further. In practice, KMO and Bartlett's test are usually the ones that are fulfilled.

How to perform EFA on SPSS ?

STEP 1: Open the required SPSS file.
STEP 2: Click Analyze → Dimension Reduction → Factor.



STEP 3: Select and transfer all items into the Variables box.


STEP 4: Click Descriptives → check mark KMO and Bartlett's test of sphericity and Determinant → Continue.


STEP 5: Click Extraction → check Fixed number of factors and write the number of variables in the "Factors to extract" box. Since we are using only 3 variables, we write 3 in the box.


Step 6: Click Rotation → check Varimax → Continue.

STEP 7: Click Options → check Suppress small coefficients and write 0.35 in Absolute value.

This is like a trial-and-error game. The suppress value is not fixed, but it is recommended to use 0.35. You can vary it according to your data, but it should not be less than 0.35.



STEP 8: Now run the test by clicking OK.
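The same fitness checks and extraction can be sketched in Python. SPSS uses principal components with varimax rotation; scikit-learn's FactorAnalysis with rotation="varimax" is a close (not identical) analogue, so the loadings will differ slightly from the SPSS output. The item list and file name are assumptions from the earlier sketches.

```python
# A hedged sketch of the EFA fitness checks (determinant, Bartlett's test) and
# a 3-factor varimax-rotated extraction.
import numpy as np
import pandas as pd
from scipy.stats import chi2
from sklearn.decomposition import FactorAnalysis

data = pd.read_csv("data_file.csv")
items = [c for c in data.columns if c.startswith(("BP", "BR", "BL"))]
X = data[items].to_numpy(float)
n, p = X.shape

# Fitness 1: determinant of the correlation matrix (should be < 0.0001)
R = np.corrcoef(X, rowvar=False)
print("Determinant:", np.linalg.det(R))

# Fitness 2: Bartlett's test of sphericity
chi_sq = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
print("Bartlett chi-square:", round(chi_sq, 3), "p-value:", chi2.sf(chi_sq, df))

# Extraction: 3 factors with varimax rotation, suppressing loadings below 0.35
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
loadings = pd.DataFrame(fa.components_.T, index=items, columns=["F1", "F2", "F3"])
print(loadings.where(loadings.abs() >= 0.35).round(3))
```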

How to interpret result?

FIRST STEP:
·         Value of the determinant: it should be less than 0.0001, which means factor analysis can be performed.

Our result shows 5.559E-005, which is less than 0.0001, hence factor analysis can be performed.
Correlation Matrixa

a. Determinant = 5.559E-005

SECOND STEP:
·         KMO AND BARTLETT'S TEST:
1)      For KMO the value should be greater than or equal to 0.7, which means that factors can be formed.
i)        In our case it is 0.921, which indicates that factors can be formed.

2)      For Bartlett's test we form hypotheses:
i)        Hо = Factor analysis cannot be performed.
ii)      Hᴀ = Factor analysis can be performed.
iii)    If the significance value is below 0.10 we accept the alternate hypothesis, whereas if it is above 0.10 we accept the null hypothesis and reject the alternate hypothesis.

In our case the test is significant, so on the basis of the Sig. value we accept the alternate hypothesis and reject the null hypothesis. Hence we conclude that factor analysis can be performed.


KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy.                .921
Bartlett's Test of Sphericity      Approx. Chi-Square       1392.887
                                   df                            105
                                   Sig.                         .000

THIRD STEP:
We have to look at two things in this table, which tells us how many factors can be formed.
If we are using the "based on Eigenvalue" method, we look at the first column, "Total", under "Initial Eigenvalues". In this column we consider only those values which are greater than 1. In our case the first two values are greater than 1, which means that if we follow the Eigenvalue method only two factors can be formed.
But if we are using the "fixed number of factors" method, we look at the last value of the last column, the "Cumulative %" column of "Rotation Sums of Squared Loadings". This value should be greater than or equal to 50%, because while reducing data, the data loses some of its explanatory power.
Looking at that cumulative column: the first value is 31.505%, which means that after forming 1 factor, this factor can explain 31.505% of the data; the second value is 53.806%, which means that after forming two factors, they can explain 53.806% of the data; and after forming the 3rd factor, all three factors together can explain about 66.543% of the data. This should be greater than 50%.
So if we use the Eigenvalue method, 2 factors can be formed, whereas if we use the fixed-number method, 3 factors can be formed easily.

Total Variance Explained

            Initial Eigenvalues              Extraction Sums of               Rotation Sums of
                                             Squared Loadings                 Squared Loadings
Component   Total    % of Var   Cum. %       Total    % of Var   Cum. %       Total    % of Var   Cum. %
1           7.920    52.800     52.800       7.920    52.800     52.800       4.726    31.505     31.505
2           1.167    7.777      60.577       1.167    7.777      60.577       3.345    22.300     53.806
3           .895     5.966      66.543       .895     5.966      66.543       1.911    12.737     66.543
4           .827     5.515      72.058
5           .650     4.333      76.390
6           .610     4.064      80.454
7           .512     3.415      83.869
8           .470     3.131      87.001
9           .411     2.742      89.743
10          .367     2.443      92.186
11          .357     2.380      94.566
12          .259     1.728      96.295
13          .218     1.454      97.749
14          .191     1.273      99.022
15          .147     .978       100.000

Extraction Method: Principal Component Analysis.

FOURTH STEP:
This is the most important and critical table to examine.
The values in this table are called "factor loadings". While observing this table, the following points should be kept in mind:
·         Factor loadings should be greater than 0.4.
·         There should be no cross loadings, i.e., an item should not have a value in more than one column. For instance, see Brand Performance 4: there are two values, the first in column 1 and the second in column 3.
·         The values of each item of a variable should lie in that variable's own column.
If we observe this table there are numerous cross loadings. We have to remove these cross loadings, which is the main game. We adjust these values by adjusting the suppress value and by eliminating or adding items.
Rotated Component Matrix(a)

                        Component
                        1        2        3
Brand Performance1      .794
Brand Performance2      .750
Brand Performance3      .785
Brand Performance4      .639              .417
Brand Performance5                        .791
Brand Reputation1       .654     .404
Brand Reputation2       .797     .371
Brand Reputation3       .648
Brand Reputation4       .582              .501
Brand Loyalty1                   .492     .474
Brand Loyalty2                   .637     .357
Brand Loyalty3                   .613     .471
Brand Loyalty4          .457     .752
Brand Loyalty5                   .740
Brand Loyalty6                   .738

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 7 iterations.

How to remove cross loadings?
As discussed earlier, in order to remove cross loadings we first increase the suppress value; if that does not work, then besides increasing the suppress value we may drop or add a few items. So let's see how this works:
·         First we increased the suppress value to 0.45, because by observing the Rotated Component Matrix we can see that suppressing all values below 0.45 should improve the result.
Rotated Component Matrix(a)

                        Component
                        1        2        3
Brand Performance1      .794
Brand Performance2      .750
Brand Performance3      .785
Brand Performance4      .639
Brand Performance5                        .791
Brand Reputation1       .654
Brand Reputation2       .797
Brand Reputation3       .648
Brand Reputation4       .582              .501
Brand Loyalty1                   .492     .474
Brand Loyalty2                   .637
Brand Loyalty3                   .613     .471
Brand Loyalty4          .457     .752
Brand Loyalty5                   .740
Brand Loyalty6                   .738

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 7 iterations.
Cross loadings are still present, and most of them have coefficients greater than 0.45, which was our suppress value. So we increase the suppress value to 0.50.





Rotated Component Matrix(a)

                        Component
                        1        2        3
Brand Performance1      .794
Brand Performance2      .750
Brand Performance3      .785
Brand Performance4      .639
Brand Performance5                        .791
Brand Reputation1       .654
Brand Reputation2       .797
Brand Reputation3       .648
Brand Reputation4       .582              .501
Brand Loyalty1
Brand Loyalty2                   .637
Brand Loyalty3                   .613
Brand Loyalty4                   .752
Brand Loyalty5                   .740
Brand Loyalty6                   .738

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 7 iterations.
·         Now the cross loadings have improved a lot. Next we start eliminating (or adding) items and observe the effect. In the table above, all values of Brand Performance lie in column 1 except Brand Performance 5, which lies in column 3. The same is the case with Brand Reputation 4. Furthermore, the value of Brand Loyalty 1 has disappeared because it is below our suppress value (0.50). So we are going to eliminate Brand Performance 5, Brand Reputation 4 and Brand Loyalty 1 and observe the effect (using Step 2 and Step 3).

·         The result will now be:

Rotated Component Matrix(a)

                        Component
                        1        2        3
Brand Performance1      .794
Brand Performance2      .765
Brand Performance3      .796
Brand Performance4      .701
Brand Reputation1       .643
Brand Reputation2       .807
Brand Reputation3       .705
Brand Loyalty2                            .594
Brand Loyalty3                            .885
Brand Loyalty4                   .752
Brand Loyalty5                   .866
Brand Loyalty6                   .729

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.

The values are still not all in their respective columns and cross loading remains. Now we remove Brand Loyalty 2 and 3, using the same Step 2 and Step 3, and observe the effect.









Rotated Component Matrix(a)

                        Component
                        1        2        3
Brand Performance1               .823
Brand Performance2               .855
Brand Performance3      .547     .620
Brand Performance4               .565
Brand Reputation1       .765
Brand Reputation2       .777
Brand Reputation3       .768
Brand Loyalty4                            .773
Brand Loyalty5                            .862
Brand Loyalty6                            .775

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 4 iterations.

The results are much better now. We have two options: either increase the suppress value to 0.55, which will remove the cross loading of Brand Performance 3, or simply eliminate Brand Performance 3. The answer will be the same. We choose to drop Brand Performance 3, so the results will be:











Rotated Component Matrix(a)

                        Component
                        1        2        3
Brand Performance1                        .823
Brand Performance2                        .844
Brand Performance4                        .609
Brand Reputation1                .773
Brand Reputation2                .776
Brand Reputation3                .770
Brand Loyalty4          .773
Brand Loyalty5          .862
Brand Loyalty6          .778

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.

This is the ideal result: every variable is in a separate column with no cross loadings, and all values are above 0.40. Here our factors are final. Once the factors are final, arrange them according to their size by:

Click Analyze → Dimension Reduction → Factor → Options → check Sorted by size → Continue → OK.

Now the result will appear in a presentable form:

Rotated Component Matrix(a)

                        Component
                        1        2        3
Brand Loyalty5          .862
Brand Loyalty6          .778
Brand Loyalty4          .773
Brand Reputation2                .776
Brand Reputation1                .773
Brand Reputation3                .770
Brand Performance2                        .844
Brand Performance1                        .823
Brand Performance4                        .609

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.

So this is how we performed factor analysis. Now we will move towards regression.
























CHAPTER: 8               REGRESSION ANALYSIS
Definition:
“A statistical tool used to find relationship between different sets of variable”
Explanation:
In other words, regression analysis measures the amount, or degree, to which one variable depends on another. Regression is basically an extended version of correlation, because it overcomes the limitations of correlation.
In regression analysis, we have two types of variables:
Dependent variable: the one we want to predict, also known as the outcome variable.
Independent variable: the one with the help of which we predict the dependent variable, also known as the predictor variable.
For instance: exam performance can be predicted by revision time, so exam performance is the dependent variable whereas revision time is the independent variable.
Assumptions:
Before starting regression, one must check three important assumptions:
·         There should be a linear relationship between the variables. Linear regression needs the relationship between the independent and dependent variables to be linear. It is also important to check for outliers, since linear regression is sensitive to outlier effects.
·         The variables should be normally distributed. This assumption is best checked using a histogram and a goodness-of-fit test, e.g. the Kolmogorov-Smirnov test.
·         There should not be any severe multicollinearity. When we have more than one independent variable in the model, these variables may start correlating with each other.
For example: GDP depends on imports, exports and trade, where trade is actually the sum of imports and exports, so in this model the issue of multicollinearity arises. Variables normally correlate with each other to some degree, but we have to make sure that this multicollinearity is not severe. This can be checked with the help of the Variance Inflation Factor (VIF).

VIF ≥ 10 means the variables are correlating severely.
VIF < 10 means correlation is there but not too much; this is acceptable.
If VIF ≥ 10, there are some procedures to deal with it:

·         Increase the sample size; if that is not possible,
·         Use a proxy (alternate) for the variable; if that is not possible,
·         Remove the variable; if that is not possible,
·         Do nothing and keep the model the same.

Regression Model:
Y = αo + β1X1 + e
Where,
Y = dependent variable
X1 = independent variable
β1 = coefficient of X1. It tells us about the change in Y. For example, assume X1 changes by 1 unit and β1 = 0.5;
then Y = β1X1 tells us that a 1 unit change in X1 creates a 0.5 unit change in Y.
e = residual, remaining part or error term: those factors or variables which are not part of the model but can affect it.
αo = constant. It captures, in terms of amount, the impact of everything not included in the model; the effect of all variables not under consideration is reflected in αo.
How to perform regression analysis on SPSS?
STEP 1: Take the average of Brand Performance, Brand Reputation, Brand Loyalty and Customer Satisfaction.
Transform → Compute Variable


A new window will open. Write BP in Target Variable (which indicates Brand Performance), then in Numeric Expression write (BP1+BP2+BP3+BP4+BP5)/5 and click OK.
A new variable named BP will be formed in Variable View. Repeat the same procedure for Brand Reputation (BR). These BP and BR variables fulfil the assumptions of regression: they are normal and do not have any multicollinearity.


STEP2:
Click Analyze → Regression → Linear

Transfer the independent variable BRAND REPUTATION (BR) into the Independent(s) box and the dependent variable BRAND PERFORMANCE (BP) into the Dependent box.
You can do this either by drag-and-dropping the variables or by using the appropriate buttons. You will end up with the following screen:
STEP 3:
Click Statistics → check Collinearity diagnostics → Continue → OK
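The same simple regression of BP on BR, including the VIF check for multicollinearity, can be sketched in Python with statsmodels. The composites are built as item averages, as in the SPSS steps above; the file name and columns are again assumptions carried over from the earlier sketches.

```python
# A sketch of the linear regression of BP on BR, plus the VIF check.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

data = pd.read_csv("data_file.csv")
data["BP"] = data[["BP1", "BP2", "BP3", "BP4", "BP5"]].mean(axis=1)
data["BR"] = data[["BR1", "BR2", "BR3", "BR4"]].mean(axis=1)

X = sm.add_constant(data[["BR"]])          # adds the constant term (alpha)
model = sm.OLS(data["BP"], X).fit()
print(model.summary())                     # R-square, F, t-values, coefficients

# VIF for each column of the design matrix (values >= 10 signal severe multicollinearity)
for i, name in enumerate(X.columns):
    print(name, round(variance_inflation_factor(X.values, i), 2))
```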


Output of Linear Regression Analysis:
SPSS Statistics will generate quite a few tables of output for a linear regression. In this section, we show you only the three main tables required to understand your results from the linear regression procedure, assuming that no assumptions have been violated.
The first table of interest is the Model Summary table, as shown below:

Model Summary

Model    R        R Square    Adjusted R Square    Std. Error of the Estimate
1        .786a    .618        .616                 2.19894
a. Predictors: (Constant), BR



This table provides the R and R² values. The R value represents the simple correlation and is 0.786 (the "R" column), which indicates a high degree of correlation. Its value should be greater than 0.40.
The R² value (the "R Square" column) indicates how much of the total variation in the dependent variable BP can be explained by the independent variable BR. In this case 61.8% can be explained, which is very large.
The next table is the ANOVA table, which reports how well the regression equation fits the data (i.e., predicts the dependent variable).

The point of discussion here is the F value, which tells us about the overall significance of the model. For this we set a level of significance, which can be 5% or 10% or any value referred by the supervisor, and we make two hypotheses, the null and the alternate:

Hо = The overall model is not significant.
Hᴀ = The overall model is significant.


ANOVA(a)

Model           Sum of Squares    df     Mean Square    F          Sig.
1  Regression   1151.768          1      1151.768       238.197    .000b
   Residual     710.796           147    4.835
   Total        1862.564          148
a. Dependent Variable: BP

Interpretation:

Based on the Sig. value, which is .000, we reject the null hypothesis and accept the alternate hypothesis, and hence conclude that the overall model is significant.
The Coefficients table provides us with the necessary information to predict Brand Performance from Brand Reputation, as well as to determine whether Brand Reputation contributes statistically significantly to the model (by looking at the "Sig." column).
Here the points of discussion are the t value and the unstandardized coefficient B.
t-stats: they tell us the individual significance of an independent variable for the dependent variable. For this, two hypotheses have to be made (same as for the F-stats):
Hо = There is no significant impact of the independent variable (BR/constant) on the dependent variable (BP).
Hᴀ = There is a significant impact of the independent variable (BR/constant) on the dependent variable (BP).

Based on our Sig. value, which is 0.000, we reject the null hypothesis and accept the alternate hypothesis. Hence we conclude that Brand Reputation (and the constant) has a significant impact on Brand Performance.


Coefficients(a)

Model           Unstandardized Coefficients    Standardized Coefficients    t         Sig.    Collinearity Statistics
                B         Std. Error           Beta                                           Tolerance    VIF
1  (Constant)   3.261     .784                                              4.161     .000
   BR           1.006     .065                 .786                         15.434    .000    1.000        1.000
a. Dependent Variable: BP
Multicollinearity:
In this model there is no issue of multicollinearity, because the VIF value of every independent variable is less than 10.

POP UP QUESTION:

What are standardized and unstandardized coefficients?
Unstandardized coefficients express the change in the original units of the variables. In our case, if BR increases by one unit, BP will increase by 1.006 units. Standardized coefficients, on the other hand, express the change in different (standardized) units, so the coefficients are no longer in the variables' original units.
It is advisable to use the unstandardized coefficients here, since the units of our variables are the same.
General equation:
Y = αo + β1X1 + e
Specific equation:
Brand performance= αo + β1brand reputation +e
Calculated/Estimated equation:
Brand performance = 3.261 + 1.006(Brand Reputation) + e
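A tiny worked example of the estimated equation: for a respondent with an (assumed, illustrative) Brand Reputation score of 4, the predicted Brand Performance would be computed as follows.

```python
# Worked example of the estimated equation BP = 3.261 + 1.006 * BR.
# The BR value of 4.0 is an illustrative assumption, not a case from our data.
def predict_bp(br: float) -> float:
    return 3.261 + 1.006 * br

print(round(predict_bp(4.0), 3))   # 3.261 + 1.006*4 = 7.285
```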














CHAPTER: 9        CONFIRMATORY FACTOR ANALYSIS
Definition:
Confirmatory factor analysis (CFA) is a multivariate statistical procedure that is used to test how well the measured variables represent the number of constructs. Confirmatory factor analysis (CFA) is a tool that is used to confirm or reject the measurement theory.
Difference between CFA and EFA?
An exploratory factor analysis aims at exploring the relationships among the variables and does not have an a priori fixed number of factors. You may have a general idea about what you think you will find, but you have not yet settled on a specific hypothesis. A confirmatory factor analysis, on the other hand, assumes that you enter the factor analysis with a firm idea about the number of factors you will encounter, and about which variables will most likely load onto each factor.
Fitnesses of CFA:
To assess the fitness of a model in confirmatory factor analysis we have following test:
·         CMIN/DF: Chi-square minimum / degrees of freedom
Benchmark: it should be less than 3.

·         GFI: Goodness of Fit Index
Benchmark: it should be greater than or equal to 0.85.
·         AGFI: Adjusted Goodness of Fit Index
Benchmark: it should be greater than or equal to 0.80.

·         NFI: Normed Fit Index (also called the Bentler-Bonett normed fit index)
Benchmark: it should be close to 1.

·         TLI: The Tucker-Lewis Index
Benchmark: it should be close to 1.

·         CFI: The Comparative Fit Index
Benchmark: it should be greater than or equal to 0.95.

·         RMSEA: Root Mean Square Error of Approximation
Benchmark: it should be less than or equal to 0.07.
PCLOSE: it should be insignificant.

·         SRMR: Standardized Root Mean square Residual
Benchmark: it should be less than or equal to 0.07.

Out of these 8 fitnesses, at least 4 must meet their benchmark, and out of those four, two should be CFI and RMSEA.
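AMOS reports all of these indices directly. For intuition, a small sketch of how two of the headline indices are computed from a model's chi-square is shown below; the chi-square, degrees of freedom and sample size used are placeholder numbers, not our model's actual output.

```python
# A small sketch of CMIN/DF and RMSEA computed from a model's chi-square.
# The inputs below are placeholders for illustration only.
import math

def cmin_df(chi_square: float, df: int) -> float:
    return chi_square / df                      # benchmark: < 3

def rmsea(chi_square: float, df: int, n: int) -> float:
    return math.sqrt(max(0.0, (chi_square - df) / (df * (n - 1))))   # benchmark: <= 0.07

chi_square, df, n = 120.0, 60, 149              # placeholder values
print("CMIN/DF:", round(cmin_df(chi_square, df), 2))
print("RMSEA:", round(rmsea(chi_square, df, n), 3))
```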

Confirmatory Factor Analysis Can Be Performed Using Amos.

INTRODUCTION TO AMOS:

AMOS is statistical software; the name stands for Analysis of Moment Structures. AMOS is an added SPSS module and is specially used for structural equation modelling, path analysis and confirmatory factor analysis. It is also known as analysis of covariance or causal modelling software.

Some icons to understand:

A rectangle represents an observed variable.
A circle or ellipse represents an unobserved variable.
A two-way arrow represents covariance or correlation.
A one-way arrow represents a unidirectional (causal) path.

How to perform CFA?

STEP 1: Open AMOS and link the data file.
To link the data, go to File → Data Files; the Data Files dialog box opens. Link the file and click OK.
STEP 2: Choose and draw an oval.
Now click on the same oval as many times as there are items; small circles will be drawn. Repeat the same procedure for the other variables and their items.





Set the diagram accordingly.
STEP 3: Now double click on the ovals and write the name of each variable.





STEP 4: View → Variables in Dataset.
The Variables in Dataset window opens.
It is now possible to click-and-drag each variable to its corresponding rectangle in the diagram.
STEP 5: Now name the small circles by: Plugins → Name Unobserved Variables.



STEP 6: Select all the variables (ovals) and then click Plugins → Draw Covariances.



STEP 7: Click View → Analysis Properties → Output → check mark Standardized Estimates, Modification Indices, Residual Moments and Minimization History.
STEP 8: Save this file.
STEP 9: Run this file by selecting the option above the floppy-disk icon.

The plates in the upper left corner will now become visible. Click the newly visible plates and some figures will appear on the diagram. These are called factor loadings, and they should be greater than 0.4.


Output:
Click on the "View Text" option next to the floppy disk. A new window will open; click on "Model Fit" and all the fitnesses will appear.
Result of fitnesses:
Our fitnesses are in line with the benchmarks, so we will not go towards the Modification Indices. If any of the fitnesses does not match its benchmark, then we go towards the Modification Indices.
Below is an example of the Modification Indices case.



CHAPTER: 10   STRUCTURAL EQUATION MODELLING
What is structural equation modelling (SEM)?
Definition:
Structural equation modelling or SEM is a very general statistical modelling technique. Factor analysis, path analysis and regression all represent special cases of SEM. SEM is a largely confirmatory rather than exploratory technique.
It’s a statistical tool which is used to test the hypothesis about potential interrelationships among the constructs as well as their relationships to the indicators or measures assessing them.
Goals of SEM:
To determine whether the theoretical model is supported by sample data or the model fits the data well.  It helps us understand the complex relationships among constructs.
Pre-requisite of SEM is CFA.
In SEM some variables become dependent and some become independent.
STEPS:
STEP 1: Save CFA newly as SEM in the same folder where data file and CFA is saved.
STEP 2: Remove Covariance by selecting this icon .


STEP 3: Set the diagram.
STEP 4: Select the single-headed path arrow and draw paths from the Independent Variable to the Dependent Variable, and then from that Dependent Variable to the next Dependent Variable.
STEP 5: Select "Add a unique variable to an existing variable" and click on the Dependent Variables.
You can change the size and position of residuals by these icons.


Here we have only 1 Independent Variable. If there will be more than 1 Independent Variable then we must draw Co-variance among Independent Variables.
STEP 6: Go to View → Analysis Properties → Output.
STEP 7: Tick mark on Squared Multiple Correlation (R²).
STEP 8:Save this file. By clicking  option.
STEP 9:Run this file by selecting option above floppy disk.

The plates in the upper left corner will now become visible. Click the newly visible plates and some figures will appear on the diagram. These are called factor loadings, and they should be greater than 0.4.



Click on the "View Text" option next to the floppy disk. A new window will open; click on "Model Fit" and all the fitnesses will appear.


Go to Estimates.
We will check first 2 lines of the first table that is Regression Weights.
Hypothesis:
Brand Reputation ← Brand Performance:
Hо = BP has no significant impact on BR.
Hᴀ = BP has a significant impact on BR.
Interpretation:
As the sig. value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = BP has a significant impact on BR. BP has a +ve impact on BR.
Hypothesis:
Brand Loyalty ← Brand Reputation:
Hо = BR has no significant impact on BL.
Hᴀ = BR has a significant impact on BL.
Interpretation:
As the sig. value is less than 0.1 (0.000<0.1), we reject null hypothesis and accept alternate hypothesis i-e, Hᴀ = BR has a significant impact on BL. BR has a +ve impact on BL.
Now go to table named Squared Multiple Correlation.
We just have to check the values of BR & BL.

Estimates
Brand Reputation
0.775
Brand Loyalty
0.555



Interpretation:
BP has 77% capability to predict BR.
BR has 55% capability to predict BL.

























CHAPTER: 11               PATH ANALYSIS
What is Path Analysis?
Definition:
Path analysis is a straightforward extension of multiple regression. Its aim is to provide   estimates of the magnitude and significance of hypothesised causal connections between sets of variables.
There is not much difference between SEM and path analysis; the results of both analyses will almost always be the same. We perform path analysis on computed (imputed) average variables.
STEPS:
STEP 1: Continue SEM file but save the file newly as PATH.
STEP 2: Go to Analyze → Data Imputation.
A window will open, just check the number of observations. It should be 10,000.
Click OK.



Click Impute.
Duplicate SPSS file created in the folder.


STEP 3: Open new AMOS file.
STEP 4: Link new duplicate file of SPSS with Amos.
Go to File → Data Files, link the duplicate file and click OK.
STEP 5: Draw model of variables with the help of this icon .
STEP 6: Select single headed arrow  and draw paths.
STEP 7: Then go to View → Variables in Dataset.
STEP 8: A new window will appear. Drag the newly created Variables to their respective rectangles.
STEP 9: Add residuals on Dependent Variable by selecting this icon .
STEP 10: Then go to Plugins → Name Unobserved Variables.

We have to draw co-variance on pure Independent Variables if the Independent Variables are more than 1. But here we just have 1 Independent Variable so there is no need to draw co-variance.

STEP 11: Go to View → Analysis Properties.
A window will open.
STEP 12: Go to Output and tick mark the following:
·         Minimization History
·         Standardized Estimates
·         Squared Multiple Correlation
·         Residual moments
·         Modification Indices

STEP 13: Save this file.
STEP 14: Run this file by selecting the option above the floppy-disk icon.

The plates in the upper left corner will now become visible. Click the newly visible plates and some figures will appear on the diagram. These are called factor loadings, and they should be greater than 0.4.



Click on the "View Text" option next to the floppy disk. A new window will open; click on Estimates and the results will appear.
Output:
   

Difference b/w SEM and PATH analysis:

SEM                                               PATH
In SEM we purely use the individual items.        Path analysis predicts through the averaged variables of the items.
In SEM we have the model fitnesses to check.      There are no fitnesses, because there are no individual items.




