13 Comments:
Could you explain #20? How do you find the values if the distribution were normally distributed? I found the cutoffs with the given information, so how do you figure out what the cutoffs would be if the distribution were normal?
For #22 the variable adjcomb is not in the data set. Are we supposed to come up with that variable on our own? If so how do we come up with it?
I am also a bit confused about the discussion questions. Plus, I can't find the data online where the textbook says to look.
Also, I had a few questions about the exam. Are we allowed to use simple calculators (not the graphing ones you can program)? Will a z-score chart be provided?
Can you explain #11 in class? I understand it, but not fully. It would help a lot...
I also couldn't find the adjcomb variable in the SAT data set. If we are supposed to come up with it on our own, how?
Why does Howell ask questions at the end of chapter three that you would have to look in chapter four to find the answers for? Should we be reading the next chapter before answering questions on the homework chapter? I just ask because he asked about one- and two-tailed tests in discussion question 3.20 and doesn't cover them until chapter 4.
For #11, consider what you do to normalize a data set. You know the process for “forcing” the data to have a mean of 0 and a standard deviation of 1. Using a very similar process, you can force any mean and standard deviation you like.
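If it helps to see the arithmetic, here is a minimal Python sketch of that two-step process. The data values and the target mean/SD are made up for illustration, not taken from the exercise:

```python
# Hypothetical example: "force" a data set to have any mean and SD.
# The raw scores and targets below are invented, not from Howell.
raw_scores = [52.0, 61.0, 47.0, 70.0, 58.0]

n = len(raw_scores)
mean = sum(raw_scores) / n
# Sample standard deviation (n - 1 in the denominator).
sd = (sum((x - mean) ** 2 for x in raw_scores) / (n - 1)) ** 0.5

# Step 1: z-scores -- the data now have mean 0 and SD 1.
z_scores = [(x - mean) / sd for x in raw_scores]

# Step 2: rescale to any mean and SD you like (say, 100 and 15).
target_mean, target_sd = 100.0, 15.0
rescaled = [target_mean + z * target_sd for z in z_scores]
```

The key point is that step 2 is just step 1 run in reverse with different numbers plugged in.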
For #20, use the table in Appendix Z to find the percentage of scores that would fall within a certain area, similar to what we did in class on Tuesday. You don’t need to find the actual scores at the cutoff points.
I think what Howell is getting at when he talks about one-tailed and two-tailed points of view is to consider not only (for example) the percentage above and below -1 sd and above and below +1 sd, but also the percentage between -1 and +1 sd. You shouldn’t need to use any tests yet, only the z-score table.
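As a sanity check on what you read off Appendix Z, those one- and two-tailed areas can also be computed from the normal CDF. This is only an illustration of where the table's numbers come from; for the homework the table itself is all you need:

```python
import math

# Phi(z): proportion of a normal distribution falling below z,
# computed from the error function (what the z-table tabulates).
def phi(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

below_minus1 = phi(-1.0)        # one tail, about 0.1587
above_plus1 = 1.0 - phi(1.0)    # the other tail, also about 0.1587
between = phi(1.0) - phi(-1.0)  # between -1 and +1 SD, about 0.6827

print(f"below -1 SD: {below_minus1:.4f}")
print(f"above +1 SD: {above_plus1:.4f}")
print(f"between +/-1 SD: {between:.4f}")
```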
For #22, as far as I can see, Howell does want you to create a new variable (I certainly can’t find it!). Here’s a hint… average SAT scores tend to be higher when a smaller percentage of the population take them (I’ll leave it as a thought exercise as to why…). Obviously, if we’re trying to say that students test better in one state over another, this tendency can be a problem. How can you (very roughly – don’t overthink this) control for that using the variables Howell gave you?
Howell moved his data files around on his site. Always check here first:
http://www.uvm.edu/~dhowell/methods/DataFiles/DataSets.html
When in doubt, you can always do a web search for “Howell” and the name of the file you are looking for (ex., sat.dat).
If I missed anyone’s question or if anyone needs further illumination, let me know!
Where is the test study guide that was supposed to be up this weekend? I can't find it.
I'm still confused about what #20 is asking for exactly... there are multiple questions, and I only know how to do the first one!
The basic question that #20 is asking you to think about is: “How realistic is it to assume a normal curve? What are the potential pitfalls?”
Let’s pretend that by counting, you found that 19% (a totally made-up number) of the data was more than one standard deviation above the mean. Using the Z-score table, you can see that 16% of the data would fall in that range in a normal curve. Are those two numbers close? How about the percentage of data that falls between the mean and two sd below the mean? Are the numbers closer for one part of the curve compared to another?
Try not to get too hung up on every detail that Howell asks for, especially if that will make you lose sight of the larger question he’s asking. The point of the exercise is to compare the normal curve that we pretend exists with an actual, skewed data set. Try to imagine how changing the data would change how different the shape would be from a normal curve.
You should have received the study guide by email.
--Dr. M
Can you post it on the webpage? Because I never received it.
If you send me an email I will make sure you are on my list and then resend it to you. Or you can come by my office and I can print a copy for you.