Showing posts with label professionalism.

Wednesday, September 1, 2010

NJ's $400m 'Race to the Top' - adults misbehaving

Over the past two weeks, there's been much ado here in New Jersey about how an (ahem) 'error' resulted in the loss of $400M worth of Federal aid.

Naturally, the gears of Political Spin turned to try to avoid responsibility and deflect blame onto others: it is the classic "If it's good, I get the credit, but if it's bad, it must be someone else's fault" - - with exactly the underlying message on ethics that that entails. As usual, the misbehavior of our 'Leaders' sends the worst possible message to our children (and students), which is that it's okay to lie and cheat.

And specifically for NJ Governor Chris Christie, we can see that he has conspicuously failed to issue a clear (and equally loud) public apology to those in the Federal Government whom he had previously blamed for his own administration's error. It's too late to do so now - the train has left the station and his window of credible opportunity has passed - and Christie thus gets a failing grade in Ethics Class.

- - -

However, there is another interesting point that has been missed amid all of the politically generated spin-doctoring regarding the Race to the Top. People have forgotten the very basics: this was a competition, and not all entries were going to win (receive funding).

As such, let's apply one more "What If", centered on Ohio (which just beat NJ out for the last winning spot):


WHAT IF ... Ohio's entry had scored a few points better?

Answer: all of this teeth-gnashing and caterwauling over NJ's 4-point mistake would be utterly moot.


In life, there are winners and losers ... and it doesn't take long to learn that we won't always win. As such, we need to be honest with ourselves and accept losing graciously ... which includes accepting responsibility for our actions, win or lose. It doesn't matter how lofty one's position in life is, or becomes: the buck always stops.


-hh

Monday, February 16, 2009

4 x 5DII in Freezing cold, snow and wet

There was a recent report of a photo journey to Antarctica where several Canon 5Dmk2 dSLRs failed, while essentially none of the other cameras onboard did.

The above link is to DPreview.com, where a gentleman posted a "mine didn't fail" report. A long conversation resulted, with one poster pointing out (correctly) that there were a lot of open issues and that a report of a non-failure wasn't particularly insightful, especially when offered as a response to a field report of failures.

Unfortunately, while some of the exchange did get a bit heated, the moderators at DPreview decided to slash-and-burn their way through the thread, and in doing so caused collateral damage to posts that contained no conceivable violations or controversies - including both of mine. As such, I see that I can no longer trust DPreview to retain professionally based, objective work.

I'm not going to ask DPreview to consider undoing their moderation - that's their prerogative, and their actions reflect on their reputation only. Instead, I'll reiterate where it can't be removed:


Philip Harle wrote:
> Spent last week on the Light & Land photo trip to Glencoe. I was
> amongst 4 5DII users who managed to get their camera soaking wet and
> cold shooting for a whole day whilst it was constantly snowing. None
> of the cameras had the slightest problem.

I'm not about to re-write the long, objective statistical analysis that has apparently been removed by the Moderators for whatever reason. My editorial comment on the matter is simply to note that it has been removed.

To reiterate in much shorter form - - my apologies, but I'm not about to go into the same level of detail:

Part I:

- The above 'zero failures' report is a sample size of (n=4 x 1 day)
- The controversial Antarctica trip was a sample size of (n=26 x YY days)

That's at least a 60:1 ratio in the "power" of the respective statistical samples. As such, even if the suggested 20-25% failure rate is true, this report's sample size lacks sufficient sampling "power" to detect the failure(s) with reliably high confidence in the first place.
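
To illustrate that point, here's a quick back-of-the-envelope calculation in Python. It treats each of the 4 cameras as an independent trial and takes the suggested 20-25% figure purely as an assumption, to see how often a four-camera outing would report zero failures even if that rate were real:

# Assumption: each camera is an independent trial with per-trip failure probability p;
# the 20-25% range is the rate suggested for the Antarctica trip, not a measured constant.
for p in (0.20, 0.25):
    p_zero = (1 - p) ** 4   # chance the n=4 report sees no failures at all
    print(f"p = {p:.2f}: P(0 of 4 fail) = {p_zero:.2f}, P(>=1 fail) = {1 - p_zero:.2f}")

Even at a true 20-25% failure rate, a four-camera day would come back with 'no failures' roughly a third to two-fifths of the time - - hardly strong evidence against the higher rate.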


Part II:

- most people don't really understand Statistics.
- most people don't really understand Test & Evaluation (methods & standards)
- most people don't really understand "self selected" sampling bias

Nor do most people understand how these factors interact: a complicated device, used in uncontrolled settings and then subject to anecdotal reporting, variable judging and self-selection bias, is simply a mess to try to analyze professionally.

As such, all that can really be concluded is that the LL trip reported an 'alarmingly high' failure rate in one 50% of its sample, which, under a null hypothesis of 'All dSLRs are about the same', may then have been coupled with an 'alarmingly low' failure rate in the other 50%.
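
For those curious what a formal comparison of the two halves would even look like, here's a hedged sketch of a Fisher exact test on a 2x2 table (Python, with SciPy assumed available). The counts below are placeholders invented purely for illustration - - they are NOT the actual trip numbers, which were never reported in any controlled fashion:

from scipy.stats import fisher_exact

# Placeholder counts, for illustration only (not the real trip data):
table = [[6, 20],    # hypothetical: 6 failures among 26 5DII bodies
         [0, 26]]    # hypothetical: 0 failures among 26 other bodies
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact p-value = {p_value:.3f}")   # small value => the two rates likely differ

Without honest counts for both halves of the sample, no such test can actually be run - - which is precisely the problem.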

Part III:

It should be noted that even if the failures are eventually determined to have been caused by 'human error', there remains the niggling issue that said human errors were not randomly distributed, but clustered. To cut to the chase: something that significantly alters the probability of human-contributed errors ... implies a system design flaw.
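
As a rough sanity check on that 'clustered' observation, one can ask how likely such a cluster would be if failures really were independent events. The baseline rate and cluster size below are assumptions chosen only for illustration (Python, with SciPy assumed available):

from scipy.stats import binom

p_baseline = 0.05    # assumed 'ordinary' per-trip failure probability for any dSLR
n_cameras = 26       # 5DII bodies in the trip sample (from Part I above)
k_cluster = 5        # assumed size of the observed cluster of failures

# probability of seeing k_cluster or more failures purely by chance
p_chance = binom.sf(k_cluster - 1, n_cameras, p_baseline)
print(f"P(>= {k_cluster} failures by chance) = {p_chance:.4f}")

If that probability comes out vanishingly small under any plausible baseline, the clustering itself is telling us something about the system's design.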

Part IV:

How does one rig a 'waterproof' test so that even an exposed Kleenex can pass?
I previously only said that it could be done. Here are some concrete suggestions as to how it can be done:


Method A: low flow rate + atomized to mist + extremely dry chamber + high temperature + air make-up + good separation distance = weak humidifier

Method B: medium flow rate + spread + very dry chamber + extreme cold + good separation distance = dry snow machine, or possibly even just virga

Method C: high flow rate + no spread + aimed horizontally at target + distance + gravity + splash control = water misses the test coupon

Method D: "before/after" weighing scale not sensitive enough to measure the weight change from water, or use of a method that doesn't measure the relevant change (e.g., dimensions) - see the rough numerical sketch after this list.

Method E: handling of the sample after the test (e.g., a time delay that allows it to dry).
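
To show how Method D plays out in practice, here's a back-of-the-envelope sketch in Python; every number in it is an assumption picked for illustration, not a measurement from any actual test:

deposition_rate_g_per_min = 0.05   # assumed: very fine mist actually reaching the coupon
exposure_min = 10                  # assumed: duration of the 'waterproof' test
scale_resolution_g = 1.0           # assumed: smallest change the scale can register

water_absorbed_g = deposition_rate_g_per_min * exposure_min
print(f"Water absorbed: {water_absorbed_g:.2f} g; "
      f"detectable at {scale_resolution_g} g resolution: {water_absorbed_g >= scale_resolution_g}")

Half a gram of water on a scale that only reads to the nearest gram: the coupon 'passes' the test, regardless of how wet it actually got.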



-hh