Today’s threecast

Dr. Weevil finds a recurring anomaly:

Four times in the last week, the National Weather Service has displayed a current temperature for my town higher than the expected high for the day. Surely if the current temperature is 68° F, the expected high cannot be 62°; it must be at least 68°. Is there any programming language in which that cannot be fixed with a single line of code?

Today the expected high was 49°, while the reported temperature around noon was 63°, which is what it felt like. A fourteen-degree discrepancy is impressive, even for government bureaucrats.
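The single line Dr. Weevil has in mind would presumably look something like this — a purely hypothetical sketch, since the names and the notion that the NWS display code works this way are my assumptions, not anything the Weather Service has published:

```python
# Hypothetical sanity check: the displayed "expected high" should never
# sit below a temperature that has already been observed today.
def reconcile_high(expected_high: float, current_temp: float) -> float:
    """Return an expected high no lower than the current reading."""
    return max(expected_high, current_temp)

print(reconcile_high(62, 68))  # the 68°-current / 62°-high case above
```

One line of logic, as requested; whether the forecast pipeline has anywhere to put it is another question.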

Regional phenomenon, I suggest: out here, the expected high has been too high three days running and was scaled back on the fly. (Yesterday, the forecast called for 48; by noon, with the predicted clearing not happening, they dropped it to 43; only briefly did it touch 40.)

I cannot, however, fully explain this:

Tomorrow’s expected high (or “hi”) is 34°, while tomorrow night’s expected low (or “lo”) is 35°. Is that mathematically possible? Surely a nightly low cannot be higher than the high in a directly adjacent day, either before or after? I don’t know when the official switchover from day to night is (sunset?), but if the temperature in the last minute of day is 34° or less, can it really be 35° or more in the first minute of night? If anything, we would expect a relatively sudden drop in temperature at sunset, but today’s forecast implies a sudden jump.

In my neck of the woods, anyway, we tend to expect the lowest temperature of the day right around sunrise, and forecasts for “tonight” often include the qualifier “after midnight” if significant events are anticipated at such hours. But we’ve had rising temperatures overnight many times; all it takes is a wind shift at the right moment. And it’s warmer now, half an hour before the sun, than it was at 8 pm last night.

(With apologies to Victor Borge.)

5 comments

  1. Gabrielle Dolly »

    7 January 2009 · 7:40 am

    And the records from this system form the data which are used to drive the whole global warming theory er hypothesis er conjecture.

    Sorta gives ya the warm-and-fuzzies, dunnit?

    GFD

  2. McGehee »

    7 January 2009 · 7:48 am

    I’ve sometimes noticed what Dr. Weevil points out, and suspected that some stations don’t report the actual highest or lowest temperature recorded during a given observing day as the “high” or “low.” Rather, it’s whatever happens to be the temp at the time of the standard “daytime” or “nighttime” observation. Since my wife (NWS employee) used to have a simple mercury thermometer that could record the highest and lowest temp since the last ob, that’s simply unacceptable.

    The more so, in this digital age.

  3. Old Grouch »

    7 January 2009 · 4:54 pm

    “that’s simply unacceptable”

    Except that if you change your data collection method, you run into problems when you try to make comparisons with the pre-change stuff.

    If you look at the Oklahoma City Observations page, you’ll note that they take their readings at xx:52. (So the “2:00 temperature” was actually read 8 minutes earlier.) [Probably dates back to the days when forecasters had to hand-key everything onto the TeleType; the information then needed to clear the wire in time to make the end of the top-of-the-hour newscasts.]

    The switchover from “today” to “tomorrow” is local midnight. And as near as I can tell, the separation between “day” and “night” is local sunrise/sunset. So one thing that would produce Dr. W’s scenario is a warm front coming through around sunset, remembering that the pre- and post-sunset readings will always be one hour apart.
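    The warm-front scenario works out arithmetically. A sketch with invented readings and an assumed sunset split (none of this is the NWS’s actual method):

```python
# Hypothetical xx:52 hourly readings (°F) around an assumed ~5:54 pm
# sunset, with a warm front arriving after dark: temperatures rise.
readings = {
    14.87: 34, 15.87: 34, 16.87: 33,   # daytime observations
    18.87: 36, 19.87: 37, 20.87: 38,   # nighttime, post-front
}
SUNSET = 17.9  # assumed day/night boundary, decimal hours

day_high = max(t for h, t in readings.items() if h < SUNSET)
night_low = min(t for h, t in readings.items() if h >= SUNSET)

print(day_high, night_low)  # the "high" really can sit below the "low"
```

    With a front timed like that, a 34° high and a 35°-or-better low are perfectly consistent.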

    Re: The “actual high” vs. highest hourly reading, I dunno. Weather is such a macro phenomenon that I doubt the additional info gained from absolute data would outweigh the problems of handling it. (IIRC the manned stations can and do take intermediate readings if unusual conditions warrant it.) And there’s the comparison problem again: if hourly readings tend to underreport daily highs by, say, 1/2°, then recording the actual maximums makes everything suddenly look 1/2° warmer. In this case consistency is more important than absolute accuracy.

    And as to the “predicted” vs. “actual” temperatures, one number is the forecast, the other is the actual data. Two different things, if you’re a weather geek. ;-)

  4. CGHill »

    7 January 2009 · 5:18 pm

    And it should be noted here that the switchover from today to tomorrow ignores DST: a summer rain at a quarter to one in the morning counts toward yesterday’s total.

    The local Weather Guys, on the Forecast Discussion, have occasionally allowed that “guidance” from one model or another just didn’t seem right, and therefore they’d tweaked their numbers to match their gut feelings.

  5. McGehee »

    7 January 2009 · 6:19 pm

    Except that if you change your data collection method, you run into problems when you try to make comparisons with the pre-change stuff.

    A good point, but for today’s public consumption that needn’t be a consideration.

    Besides, weather ob stations do get moved; the micro-climate in the new location will invariably be different from that of the original. And even if the station doesn’t move, artificial changes in the immediate surroundings (as small as the installation of a new air-conditioning unit, or the expansion of a parking lot next door) can take a longstanding location entirely out of the realm of reality for those it supposedly serves.

    Thus I think the idea of some kind of holy consistency for weather observations, comparable to those of the past, is overrated anyway.

RSS feed for comments on this post