
Help with a stats problem?

  • 25-08-2008 5:22pm
    #1
    Closed Accounts Posts: 15,552 ✭✭✭✭


    Ok, lots of people would like a stats cheat sheet.

    What should go in it?

    I'll list some obvious questions, and if people want to post answers and other questions (keep them simple, this is just an initial guide) I'll compile everything into a sticky and credit the contributors.

    So starter questions.

    How do I know which analysis is best?

    When do I use a t-test?

    Paired or unpaired t-test?

    One-tailed or two-tailed tests?

    When do I use ANOVA?

    One-way or two-way ANOVA?

    What is post test analysis?


Comments

  • Registered Users Posts: 1,845 ✭✭✭2Scoops


    I think it would be better to help people out on a case-by-case basis; answering general questions won't do much to help, I suspect. Maybe something like a stats case-study workshop. It's also more interesting for the stats people to respond to specific problems than to answer general questions. Anywho...
    GuanYin wrote: »
    How do I know which analysis is best?
    This is determined by the study design.
    GuanYin wrote: »
    When do I use a t-test?
    When I have two different groups/conditions and I want to test them on the same variable.
    GuanYin wrote: »
    Paired or unpaired t-test?
    Paired if the measures are related (e.g. the same subjects tested repeatedly); unpaired if they are not.
    GuanYin wrote: »
    one-tailed or two tailed tests?
    Two-tailed nearly all the time, when you are hypothesizing a difference/relation between groups/conditions. One-tailed very rarely, when you have a directional hypothesis and a strong, i.e. evidence-based, reason to believe the difference will NOT be in the opposite direction. (There's a quick sketch of these choices at the end of this post.)
    GuanYin wrote: »
    When do I use ANOVA?
    When you have more than two groups/conditions or more than one variable to test.
    GuanYin wrote: »
    one-way or two-way ANOVA?
    One-way for one variable; two-way or greater for more than one variable.
    GuanYin wrote: »
    What is post test analysis?
    Further testing, performed after a significant omnibus test, to compare specific means.
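
    For illustration, a rough Python sketch of the paired/unpaired and one-/two-tailed choices above, assuming scipy is available (all data values are invented):

        # Paired vs unpaired t-tests in scipy; values invented for illustration.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Same 10 subjects measured before and after a treatment -> paired test.
        before = rng.normal(100, 10, size=10)
        after = before + rng.normal(5, 5, size=10)
        t_paired, p_paired = stats.ttest_rel(before, after)  # two-tailed by default

        # Two independent groups measured on the same variable -> unpaired test.
        group_a = rng.normal(100, 10, size=10)
        group_b = rng.normal(110, 10, size=10)
        t_unpaired, p_unpaired = stats.ttest_ind(group_a, group_b)

        # A one-tailed test (only defensible with a strong directional
        # hypothesis) via the `alternative` argument (scipy >= 1.6).
        t_one, p_one = stats.ttest_ind(group_a, group_b, alternative='less')

        print(p_paired, p_unpaired, p_one)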


  • Registered Users Posts: 5,888 ✭✭✭AtomicHorror


    It would probably be good to start such an FAQ with the real basics: explaining what significance values actually mean, for example. I always took the p-value from a t-test to mean "the probability that the difference is down to randomness", though I understand it's really more like "the probability of seeing a difference this large if both samples came from the same distribution", or some such.
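
    One way to make that concrete is a quick permutation simulation: shuffle the group labels many times and count how often a mean difference at least as extreme as the observed one turns up. A minimal sketch, with invented data:

        # Rough permutation sketch: the p-value is approximately the fraction
        # of label-shuffled datasets whose mean difference is at least as
        # extreme as the observed one (data invented for illustration).
        import numpy as np

        rng = np.random.default_rng(0)
        a = rng.normal(10.0, 2.0, size=12)
        b = rng.normal(11.5, 2.0, size=12)
        observed = abs(a.mean() - b.mean())

        pooled = np.concatenate([a, b])
        n_perm = 10_000
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            if abs(pooled[:12].mean() - pooled[12:].mean()) >= observed:
                hits += 1

        print(f"permutation p ~ {hits / n_perm:.4f}")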


  • Registered Users Posts: 5,888 ✭✭✭AtomicHorror


    I was going to ask this question in the stats thread but I gather that's more about finding out what should go in an FAQ... Anyway, any comments on this would be invaluable.

    The basic setup: we take white blood cells from a single source, then split them across, let's say, fifty test tubes. There's a negative control (just a tube with normal growth media) and a positive control (a tube lined with stuff that makes cells divide rapidly). Each of the other tubes contains a short purified protein sequence, derived from a protein from a microbe that we know the cells will respond to by proliferating. The sequence in each tube overlaps the previous sequence by about 50%, so with 48 sequences we've got the whole protein spread out over our tubes. The whole experiment is done in triplicate.

    After about three days, we measure the amount of cell proliferation, with the hope being that we can find out which parts of the protein are recognised by the white blood cells.

    The analysis: I test for "significant proliferation" with a paired, two-tailed t-test (my rationale being: the same group under differing conditions), comparing each protein-sequence triplicate to the negative-control triplicate. If the proliferation is elevated above the negative mean and the p-value is 0.05 or less, we have a statistically significant "hit".

    The problem: Due to the sequence overlap, we would expect any given "positive" sequence to be accompanied by an adjacent positive; this would tell us that an immunogenic sequence lies within the overlap. There are many cases where I get an increased proliferation at p=0.05 or less with an adjacent increase at around p=0.10. Basically, the assay is pretty "noisy". What I'm wondering is whether there is a statistically meaningful way to combine the two adjacent p-values, taking into account their 50% connection to each other. That way, I'd be able to identify hits that I might otherwise not spot.

    Or should I just loosen my criterion for significance by raising the cut-off to p=0.10 or some such? I imagine doing that will not be very publishable, though. Well, that was a bit of a mini essay. :pac: Any advice on my analysis methods is very welcome.
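
    For reference, one textbook way to pool p-values is Fisher's method, which scipy exposes as combine_pvalues. Caveat: it assumes the tests are independent, an assumption the 50% sequence overlap violates, so a pooled value here would be exploratory rather than publishable. A minimal sketch:

        # Fisher's method pools p-values from (nominally independent) tests;
        # the 50% sequence overlap violates that independence assumption,
        # so treat the combined value as exploratory only.
        from scipy import stats

        p_adjacent = [0.05, 0.10]  # a hit and its overlapping neighbour
        stat, p_combined = stats.combine_pvalues(p_adjacent, method='fisher')
        print(p_combined)  # smaller than either input when both lean the same way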


  • Registered Users Posts: 1,845 ✭✭✭2Scoops


    Did you collect 3 measures from each condition from 1 experiment, or repeat the entire experiment a total of 3 times? If you are using multiple values from the same experiment then you can't really set up a paired design unless you take the mean for each condition first. Typically, people measure in triplicate and do 3 independent experiments (9 values total per condition) and then use an unpaired test. Even then, unless you're dealing with fairly robust changes, you'll likely be underpowered. Also, if you are doing 48 separate paired t-tests you are almost certainly going to make several type I errors. As you say, it's noisy, so the right thing to do would be to repeat the experiment and increase your N.

    I'm not familiar with any way to simply combine p-values, but it would be easy to recode your independent variables to group effects (e.g. group 1 = sequences 1-2; group 2 = sequences 3-4, etc.; rough sketch at the end of this post).

    The best thing you can do with such a small dataset would be to simply graph it and see if it points in a particular direction, then focus your attention on that part of the sequence.
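
    A rough Python sketch of that recoding idea, assuming the triplicate readings sit in a NumPy array (the array names, shapes and values are all invented for illustration):

        # Sketch of recoding adjacent sequences into pairwise groups before
        # testing; array names, shapes and values are invented for illustration.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        seq_counts = rng.normal(1000, 200, size=(48, 3))  # 48 sequences x 3 replicates
        neg_control = rng.normal(950, 200, size=3)        # negative-control triplicate

        # Pool neighbouring sequences: group k = sequences 2k-1 and 2k,
        # giving 24 groups of 6 values each.
        groups = seq_counts.reshape(24, 6)

        for k, grp in enumerate(groups, start=1):
            t, p = stats.ttest_ind(grp, neg_control)
            if p < 0.05 and grp.mean() > neg_control.mean():
                print(f"group {k} (sequences {2*k - 1}-{2*k}): p = {p:.3f}")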


  • Registered Users Posts: 3,427 ✭✭✭Dr Strange


    Just wondering: if you have more than two groups, would an ANOVA or MANOVA not be a possibility?
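
    For what it's worth, a one-way ANOVA across several groups is a one-liner in scipy; a minimal sketch with invented data:

        # Minimal one-way ANOVA across three groups (data invented).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        g1 = rng.normal(10, 2, size=12)
        g2 = rng.normal(11, 2, size=12)
        g3 = rng.normal(13, 2, size=12)

        f, p = stats.f_oneway(g1, g2, g3)
        print(f, p)  # a significant omnibus F would then justify post-hoc tests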


  • Registered Users Posts: 5,888 ✭✭✭AtomicHorror


    2Scoops wrote: »
    Did you collect 3 measures from each condition from 1 experiment, or repeat the entire experiment a total of 3 times?

    Three measures from each condition. Then the experiment is repeated using cells sourced from other donors.
    2Scoops wrote: »
    If you are using multiple values from the same experiment then you can't really set up a paired design unless you mean for each condition first. Typically, people measure in triplicate and do 3 independent experiments (9 values total per condition) and then use an unpaired test.

    Ok... given, then, that I've repeated the experiment about a dozen times independently (using different cell donors), could I use that as the basis of an unpaired test?
    2Scoops wrote: »
    Even then, unless you're dealing with fairly robust changes, you'll likely be underpowered. Also, if you are doing 48 separate paired t-tests you are almost certainly going to make several type I errors. As you say it's noisy, so the right thing to do would be to repeat the experiment and increase your N.

    I think I understand... by not accounting for overall variability, my t-test is actually a misleading measure of significance?

    2Scoops wrote: »
    I'm not familiar with any way to simply combine p-values, but it would be easy to recode your independent variables to group effects (e.g. group 1 = sequence 1 through 2; group 2 = sequences 3-4... etc.).

    Or group 1 = sequences 1+2, group 2 = sequences 2+3? I'll take a look at that and see what it gives me. Thank you, 2Scoops, you've given me a fair bit to consider. It's a good thing I asked, since it looks like I'm rather off the mark in my analysis!


  • Registered Users Posts: 1,845 ✭✭✭2Scoops


    Ok... given then that I've repeated the experiment about a dozen times independently (using different cell donors), could I use that as the basis of an unpaired test?

    Yes, or you could use a paired design if you use the mean of the triplicate measures from each condition for each donor. I wasn't sure how you went about it at first; either way is acceptable, but generally the paired design will have better power. Try a quick power analysis both ways and see which needs the bigger N; a rough sketch follows.
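
    A minimal sketch using statsmodels' power calculators (the effect size of 0.8 is a placeholder; estimate the real one from pilot data):

        # Quick sample-size comparison for paired vs unpaired designs using
        # statsmodels; the effect size (Cohen's d = 0.8) is a placeholder.
        from statsmodels.stats.power import TTestPower, TTestIndPower

        effect, alpha, power = 0.8, 0.05, 0.8

        n_paired = TTestPower().solve_power(effect_size=effect, alpha=alpha, power=power)
        n_unpaired = TTestIndPower().solve_power(effect_size=effect, alpha=alpha, power=power)

        print(f"paired design: ~{n_paired:.0f} donors")
        print(f"unpaired design: ~{n_unpaired:.0f} per group")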
    I think I understand... by not accounting for overall variability, my t-test is actually a misleading measure of significance?

    Well, there are 2 different issues here and they're kind of the opposite to each other. If you are underpowered, then you can't draw firm conclusions from non-significant differences. Obviously, this won't be a problem for those differences that are already significant.

    Unfortunately, the second point is that if you perform 48 t-tests with no correction for multiple comparisons, at an alpha of 0.05 you would expect 2-3 significant results by chance alone (48 x 0.05 = 2.4), i.e. type I errors, not real differences. Trying to correct for this will probably wipe out your power. I've seen people get away with it before, but never with so many comparisons at once. A sketch of the standard corrections follows.
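
    A minimal illustration of multiple-comparison correction, assuming statsmodels (the p-values are invented):

        # Multiple-comparison correction over 48 tests; p-values invented.
        # Benjamini-Hochberg FDR is usually gentler on power than Bonferroni.
        import numpy as np
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)
        pvals = rng.uniform(0, 1, size=48)
        pvals[[5, 6]] = [0.001, 0.004]  # pretend two real, adjacent hits

        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
        print(np.nonzero(reject)[0], p_adj[reject])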
    since it looks like I'm rather off the mark in my analysis!

    Not really. Most simple comparisons eventually boil down to t-tests or some analogue thereof. ANOVA is great for more complicated designs, but it won't shed much more light on this particular problem. Just explore the data a bit more; it usually works out.

