What’s up with the GMAT?

by Chuck on Friday, 22 February, 2013

I’ve been wondering for a while now what’s going on with the GMAT. I’ve been wondering because several students have done significantly worse on the real thing than on the practice software, which I always thought was a reasonable predictor. Not just 20 or 30 points worse, which is entirely normal, but 50, 60, 80 or 100 points worse. When students who have scored over 700 on the software get 600 or below in the test, or go from 600+ to 530, I’m left wondering how best to prepare them when the software is clearly no longer the reliable guide it once was, and when being familiar with the OG is not enough. Either that, or it’s just a coincidence that since June, when the test changed, strange things have been happening.

So I emailed GMAC. I asked about the re-marking of tests, prompted by so many surprising scores and by feedback from one student who was told that there had been many more requests for re-marking since June. I also asked for clarification on the changes made to the test in June. I hoped to get some useful insight into the scoring system, which has been (probably deliberately) shrouded in mystery for some time. GMAC’s response was sadly, and perhaps predictably, unhelpful, but I’ve reproduced it here so you can see what we’re dealing with (notice also the incorrect usage of tense :-P) ~

Dear Chuck,

I apologize for the delayed response. We have not experienced a surge in requests to “remark” scores. For quality assurance, we do score the GMAT test more than once for every test-taker before we even deliver the scores to the test-taker and schools. If a test-taker feels that there is an error in a particular question, s/he is welcome to submit a “question challenge.” A Test-taker may challenge a test question if he feels that there is something faulty about a passage, the question itself, or the answer choices (for example: no correct answer, more than one correct answer, etc.). We receive very few question challenges and when we do, only on very rare occasions does a test-taker correctly identify a problem. An extraordinarily high percentage of question challenges are erroneous and result in no change to scores.

Since June 5, we added [sic] the Integrated Reasoning (IR) section and dropped one of the two essays. We did not make any changes to the Quantitative and Verbal sections of the exam. Further, we did not change how we calculate the Total score (which is still based on Quantitative and Verbal performance). The score reports provide the scaled score and percentile ranking for each section of the exam and the Total score, but we do not provide any further details about performance within each section.

I remain more than a little confused, and I sympathise with those of you who think that $300 ought to get you more than a couple of numbers on a piece of paper, especially if those numbers aren’t the ones you wanted and you want to know how and where you can improve. I fear there’s not a lot we can do about this corporate monster, but there are a couple of rays of light.

One: some of the more enlightened schools, certainly Cass among them, seem to take a holistic approach to admissions and don’t give too much weight to the GMAT. And two: certain other schools (I forget which) are introducing their own tests. It may be that down the line the GMAT becomes obsolete, or at least falls out of favour, as schools realise that it’s not the best way to select the business minds of the future. The GMAC website makes the Integrated Reasoning section sound like the greatest thing since sliced bread; needless to say, I take much of what it says with a large pinch of salt. Maybe I should design my own transparent test…
