
Principal component analysis

  • 05-09-2010 12:35pm
    #1
    Registered Users, Registered Users 2 Posts: 51 ✭✭GeckoOnTheWall


    Hi,

    In my survey for an MSc thesis I used a number of sub-scales for which I would like to demonstrate the level of internal consistency.

    I'm currently using SPSS, and I'm not entirely sure if I'm doing it correctly.

    I did Analyze > Dimension Reduction > Optimal Scaling, but everything after that is very confusing, and I'm having difficulty following the online tutorial.

    Also, how can I demonstrate correlation between the subscales?

    Any help or advice would be much appreciated.

    Thanks!


Comments

  • Closed Accounts Posts: 6,081 ✭✭✭LeixlipRed


    I think you'd have a better chance of an answer in the research forum - I can move it there if you want?


  • Registered Users, Registered Users 2 Posts: 51 ✭✭GeckoOnTheWall


    Yes, please!


  • Registered Users, Registered Users 2 Posts: 3,483 ✭✭✭Ostrom


    Hi,

    In my survey for an MSc thesis I used a number of sub-scales for which I would like to demonstrate the level of internal consistency.

    I'm currently using SPSS, and I'm not entirely sure if I'm doing it correctly.

    I did Analyze > Dimension Reduction > Optimal Scaling, but everything after that is very confusing, and I'm having difficulty following the online tutorial.

    Also, how can I demonstrate correlation between the subscales?

    Any help or advice would be much appreciated.

    Thanks!

    You should do it the opposite way!

    It matters more that you have sound theoretical justification for including certain items in scales before the analysis. SPSS will always find a solution for a set of items with PCA, but that doesn't necessarily make it theoretically sound. I would run a reliability check on the scales, as you originally designed them, before doing the PCA. To do this, use analyze---scale---reliability analysis. Throw a set of scale items into the selection box, tick Cronbach's Alpha, and look for a value above 0.70 - this is a generally accepted level of internal consistency.
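
    For anyone following along without SPSS, the same check can be sketched in Python with pandas. This is purely an illustration - the survey DataFrame and item names q1-q4 are hypothetical, not part of the SPSS procedure described above:

        import pandas as pd

        def cronbach_alpha(items: pd.DataFrame) -> float:
            """Cronbach's alpha for a DataFrame whose columns are the items of one subscale."""
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1)      # variance of each item
            total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
            return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

        # Hypothetical usage: one subscale made of columns q1..q4 in a survey DataFrame
        # survey = pd.read_csv("responses.csv")
        # print(cronbach_alpha(survey[["q1", "q2", "q3", "q4"]]))  # look for roughly 0.70 or above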

    If you wanted to work it the other way - which you may have to if you find your scales don't correlate as strongly as you had hoped - you can then run the PCA, which is essentially doing what you have just done, only in reverse.

    To demonstrate correlation between your scale items (although this is not by definition what you are doing with the PCA), your sets of variables should load onto a common factor. To run the PCA, you need to use analyze---dimension reduction---factor. (If you can get hold of it, Andy Field's Discovering Statistics Using SPSS has a great chapter on factor analysis.)

    SPSS may find more factors than you had designated as concepts at the design stage, or certain variables may load onto factors associated with different concepts - I wouldn't put all my faith in it though, unless the factor matrix is very non-specific (i.e. your variables don't load conclusively onto any particular factors). If the literature gave justification for your choice of scale items, and your alpha value is above or close to 0.70 using the first procedure above, that should be enough - but you might want to ask your supervisor what their preference is :)
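
    Again purely as an illustration outside SPSS: an unrotated PCA of a set of items boils down to an eigendecomposition of their correlation matrix. A rough numpy/pandas sketch, using the same hypothetical survey and item names as the alpha example above:

        import numpy as np
        import pandas as pd

        def pca_loadings(items: pd.DataFrame) -> pd.DataFrame:
            """Unrotated principal-component loadings from the items' correlation matrix."""
            corr = items.corr().to_numpy()
            eigenvalues, eigenvectors = np.linalg.eigh(corr)      # returned in ascending order
            order = np.argsort(eigenvalues)[::-1]                 # largest component first
            eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
            loadings = eigenvectors * np.sqrt(np.clip(eigenvalues, 0, None))
            return pd.DataFrame(loadings, index=items.columns,
                                columns=[f"PC{i + 1}" for i in range(items.shape[1])])

        # Hypothetical usage: items q1..q4 should load strongly on the first component
        # print(pca_loadings(survey[["q1", "q2", "q3", "q4"]]).round(2))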


  • Registered Users, Registered Users 2 Posts: 51 ✭✭GeckoOnTheWall


    Hi Efla,

    thanks for your reply.

    The subscales I have used have previously been shown to have a high level of internal consistency. I just want to check what consistency I get with my own sample. I will follow your instructions and see if I'm getting a good alpha. Thanks!


  • Registered Users, Registered Users 2 Posts: 51 ✭✭GeckoOnTheWall


    Hi Efla,

    so I did my Analyze > Scale > Reliability on my subscales, and I'm getting an extremely low alpha. The subscales I am using have been used successfully before and had strong alphas. My sample is quite big (n=397), so I don't know what I'm doing wrong...


  • Registered Users, Registered Users 2 Posts: 3,483 ✭✭✭Ostrom


    Hi Efla,

    so I did my Analyze > Scale > Reliability on my subscales, and I'm getting an extremely low alpha. The subscales I am using have been used successfully before and had strong alphas. My sample is quite big (n=397), so I don't know what I'm doing wrong...

    Are you putting just one set of scale items into the box at a time?

    Take a look at the correlation matrices for each set of items and see if they match individually first. Are all your response categories on the scales running in the same direction (positive to negative)? Are any of them reverse-phrased (i.e. does one question state a negative, with agreement implying dislike, whilst all the others do the opposite)? Do your scales have very few items? Alpha is sensitive to the number of items in a scale and can be deflated for short scales even though the individual items correlate well with each other (there are plenty of journal articles that point this out and justify alternative interpretations).
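
    To make the reverse-phrasing point concrete, here is a minimal pandas sketch with made-up data, assuming a 1-5 Likert scale (in SPSS this is just a RECODE on the offending item):

        import pandas as pd

        # Hypothetical subscale in which q3 is reverse-phrased, on a 1-5 Likert scale
        survey = pd.DataFrame({"q1": [4, 5, 3], "q2": [4, 4, 2], "q3": [2, 1, 4]})

        # Re-code so agreement points the same way on every item: 6 - response for a 1-5 scale
        survey["q3_rev"] = 6 - survey["q3"]
        print(survey[["q1", "q2", "q3_rev"]])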

    Beyond this, you're probably looking at a factor analysis.


    edit: Sorry, forgot one important point; when you're running the alpha check, click the statistics box and check 'scale if item deleted' - you get another box in the output with values of alpha if each individual item was removed. It is a quick way to identify problem variables that may be associated with different concepts. The values do seem a bit low however, so it may be best to build the scale from the factor analysis results.
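
    The 'scale if item deleted' column can also be reproduced by hand, which makes it clear what SPSS is reporting. A sketch under the same assumptions as the earlier alpha illustration (hypothetical item names):

        import pandas as pd

        def cronbach_alpha(items: pd.DataFrame) -> float:
            k = items.shape[1]
            return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

        def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
            """Alpha recomputed with each item dropped in turn - problem items stand out."""
            return pd.Series({col: cronbach_alpha(items.drop(columns=col))
                              for col in items.columns})

        # Hypothetical usage:
        # print(alpha_if_item_deleted(survey[["q1", "q2", "q3", "q4"]]))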


  • Registered Users, Registered Users 2 Posts: 51 ✭✭GeckoOnTheWall


    I spent most of the day trying to make sense of it all, and you were right. Some of the items in my sub-scales should have been reverse-scored. Once I fixed that, the alpha for all subscales was very good, above 0.8.
    I also ran PCA with Varimax rotation for each subscale, and for almost all of them all items loaded onto one factor.
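
    As an aside, purely for illustration: varimax rotation itself is only a few lines of numpy if it is ever needed outside SPSS. This applies the standard iterative algorithm to an unrotated loadings matrix such as the one in the PCA sketch above (names hypothetical):

        import numpy as np

        def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
            """Orthogonal varimax rotation: each item ends up loading on as few components as possible."""
            p, k = loadings.shape
            rotation = np.eye(k)
            criterion = 0.0
            for _ in range(max_iter):
                rotated = loadings @ rotation
                # Kaiser's criterion: maximise the variance of the squared loadings in each column
                target = rotated ** 3 - (rotated @ np.diag((rotated ** 2).sum(axis=0))) / p
                u, s, vt = np.linalg.svd(loadings.T @ target)
                rotation = u @ vt
                old_criterion, criterion = criterion, s.sum()
                if old_criterion != 0 and criterion / old_criterion < 1 + tol:
                    break
            return loadings @ rotation

        # Hypothetical usage with the unrotated loadings from the earlier PCA sketch:
        # rotated = varimax(pca_loadings(survey[["q1", "q2", "q3", "q4"]]).to_numpy())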

    The last thing which I need to do is to show if there is any intercorrelation between the subscales. That is probably the most important part of my work...

    Thanks so much Efla for your help! If it wasn't for you, I would be completely lost.


  • Registered Users, Registered Users 2 Posts: 51 ✭✭GeckoOnTheWall


    Hi Efla,

    could you also please tell me how I can measure intercorrelations between my subscales?

    Thanks!


  • Registered Users, Registered Users 2 Posts: 3,483 ✭✭✭Ostrom


    Hi Efla,

    could you also please tell me how I can measure intercorrelations between my subscales?

    Thanks!

    Examine the correlation matrix for all your collapsed subscales (analyze---correlate---bivariate). Use the compute command to collapse your scale items into subscales (assuming you used the same rating scheme for all the questions you put into the PCA, this is just the mean score of each separate set of subscale items) and examine those for moderate-to-high correlations.
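
    The compute-then-correlate step looks like this in pandas - made-up item names and ratings, just to show the shape of the calculation:

        import pandas as pd

        # Hypothetical survey: two subscales whose items share a 1-5 rating scheme
        survey = pd.DataFrame({
            "a1": [4, 5, 3, 2], "a2": [4, 4, 3, 1],   # subscale A items
            "b1": [2, 1, 4, 5], "b2": [3, 2, 4, 4],   # subscale B items
        })

        # Collapse each set of items to its mean score (the 'compute' step)
        subscales = pd.DataFrame({
            "subscale_a": survey[["a1", "a2"]].mean(axis=1),
            "subscale_b": survey[["b1", "b2"]].mean(axis=1),
        })

        # Bivariate (Pearson) correlation matrix between the collapsed subscales
        print(subscales.corr())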

    edit: to save time, tick the statistics box under analyze---scale---reliability analysis and check the 'correlations' box under the 'inter-item' section. It is the same as doing the above, only you get one set of output and it saves you producing the correlation matrix via another set of commands. I'm not sure why you need to do this though; the PCA should have confirmed the intercorrelation for you.


  • Registered Users, Registered Users 2 Posts: 51 ✭✭GeckoOnTheWall


    Efla,

    thanks a lot for your help! Eventually I managed to get to the end of my statistical gehenna :) I found significant correlations between the measures I used in my study, so I'm very happy with that. Thanks again!

