Is the Caliper Assessment Test Valid? (Read Below to Discover)
The Following Text is a Comment From Reddit Discussing the Video Above
Says it’s not a pass/fail test, then spends 25 minutes talking about how to “beat the test”.
Let’s assume that the Caliper Test is a validated inventory for the sake of this argument. (I haven’t dug too deep into it, but the site does not seem to offer any technical papers, nor can I find any information on the personality model, nor on “Taylor D. Wilson” – who developed the model – nor does the model seem parsimonious… which is odd, to say the least.)
Even if there is nothing wrong with the test itself, the advice given here will still add errors and biases into the assessment. And it’s hard to say that following this advice will improve one’s chances on this type of assessment.
- Right off the bat, she’s over-analyzing the wording of individual items (i.e., questions) and assuming the measurement domains. The scores are usually aggregates of multiple items, and can be calculated in different ways depending on how the response options are structured. She then assumes that respondents going for Role X should answer this way or that, by cherry-picking certain words within each option. Assessment items are not usually written with that granular a level of detail; they just need to tap the domain they are meant to measure, which can be statistically validated later on. This tactic is essentially wasting time and energy to make mountains out of molehills.
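To illustrate the aggregation point, here is a minimal sketch of how domain scores on inventories like this are typically computed. The item IDs, domain names, and keying are entirely invented for illustration – this is not Caliper’s actual scoring – but it shows why the surface wording of one item tells you little about the score it feeds into, especially when some items are reverse-keyed.

```python
# Hypothetical illustration: a domain score is an aggregate of several items,
# not a reading of any single question. All names and keys are invented.
responses = {  # item_id -> Likert response, 1 (Strongly Disagree) to 5 (Strongly Agree)
    "q1": 4, "q2": 2, "q3": 5, "q4": 1,
}

# Each domain pools multiple items; some are reverse-keyed, so the "obvious"
# reading of an item's wording can be the opposite of how it is scored.
domains = {
    "assertiveness": [("q1", False), ("q4", True)],   # (item_id, reverse_keyed)
    "thoroughness":  [("q2", True),  ("q3", False)],
}

def domain_score(items, responses, scale_max=5):
    """Mean of the domain's items, flipping reverse-keyed responses first."""
    total = 0
    for item, reverse in items:
        r = responses[item]
        total += (scale_max + 1 - r) if reverse else r
    return total / len(items)

for name, items in domains.items():
    print(name, domain_score(items, responses))
```

Because the score only exists at the domain level, second-guessing individual items one word at a time is exactly the mountains-out-of-molehills exercise described above.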
- Ironically, answering the whole test in a consistent manner shouldn’t be an active behavior. If you answer a personality assessment genuinely, you will, by nature, answer in a way that is consistent with your own trait characteristics. If you’re actively trying to keep track of how you answered, you are not answering naturally. The Caliper website does mention that they’ve built in measures to reduce applicant faking (“low fake-ability” – an odd way to word it, but it’s marketing material, so whatever). Acquiescence is a type of respondent bias and a feature of applicant faking. Again, all of this assumes that the employer is looking at applicants through a single investigative lens, or through whatever the wording of those items implies. This tip contradicts itself.
- This entire section seems to be based on how she answered the test, operating on the premise that since she “passed”, her way must be the best approach to win. That is attribution bias and confirmation bias.
- There is a lot to unpack in the “looking for patterns” section, and not just with her approach. Even counting the suggestion to look for patterns, there wasn’t anything revolutionary about how one could “beat” this section. It asks you to look for patterns by design, so the tip amounts to “take the test by taking the test”.
- She was really close on the Likert-type agreement scales, but we’re running into acquiescence bias again. There is nothing wrong with answering in the “Strongly” options – again, if they apply to you. If a respondent keeps answering in the extremes, the test will pick up on that bias; actively avoiding the extremes will do the same. The assessment will aggregate these answers anyway, and an employer who understands psychometrics is going to pick up on the bias right away during the analysis (some test outputs have graphic representations, and domains with acquiescent responses will stick out like a sore thumb). It is much easier to answer genuinely than to try to trick the test in a particular direction.
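To show why a uniform response style “sticks out like a sore thumb”, here is a crude sketch of a response-style check. The function name and thresholds are hypothetical – no claim is made about any vendor’s actual validity checks – but the mechanism is the same in spirit: mechanical patterns, whether always-extreme or always-avoiding-extremes, are trivially detectable in aggregate.

```python
# Hypothetical illustration: flag respondents whose answers are dominated by
# one response style. Threshold and logic are invented for demonstration only.
def response_style_flags(responses, scale_max=5, threshold=0.8):
    """Return crude flags for extreme responding and acquiescence."""
    n = len(responses)
    extreme_rate = sum(r in (1, scale_max) for r in responses) / n
    agree_rate = sum(r >= scale_max - 1 for r in responses) / n  # 4s and 5s
    return {
        "extreme_responding": extreme_rate >= threshold,
        "acquiescence": agree_rate >= threshold,
    }

# A respondent who almost always answers "Strongly Agree" is easy to spot:
print(response_style_flags([5, 5, 5, 5, 5, 5, 5, 5, 4, 5]))
# -> {'extreme_responding': True, 'acquiescence': True}
```

The same arithmetic run with `r in (2, 3, 4)` would just as easily flag someone deliberately hugging the midpoint, which is why actively avoiding the extremes backfires in the same way.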
I get where she’s coming from with this. She wants to help; she did well; so she wants to share. But things like this (in addition to the tips and tricks she was able to google), where people with no psychometric background make assumptions about the tool, can be really dangerous. As demonstrated in this video, where there are at least two or three ways to ruin YOUR responses on these tests, suggestions like this aren’t as beneficial as one would assume.