Wednesday, July 21, 2004

IntraLearn "Encrption.txt (sic)"

IntraLearn users:

You have been f***ed.

Any doubt?

Below is the text of Encrption.txt, which can be found in the cgi-bin directory of any IntraLearn port. (It may have been removed in post-3.5 installs.)
Two files namely


has been encrypted prior to version 2.3 and the source code is not available for the same. When encryting Intralearn, make sure to remove these two files before running cfencode.

Syntax for cfencode

cfencode directorypath/*.cfm /r /v "1"

You see, Cold Fusion lets you encrypt your CFM files so your users can't view the source code. This prevents your clients from making unauthorized changes to the product.

IntraLearn lost the source code to part of their product, then left the above text file stating as much on their distribution CDs.

The funny thing (aside from the fact that they can't spell "encryption") is that the cfdecrypt utility had been around for a couple of years before IntraLearn hit version 2.6. They never bothered to do a Google search.

A quick note to anyone looking for cfdecrypt: as of this writing, the first hit is for a web interface to the utility. There is also a compiled, command-line Windows binary available.

I wish I'd noticed the file ages ago. I could have e-mailed them the decrypted files back when they would have been useful.

Let your mind wander over the implications of a company losing source code to files they continue to distribute. Feel free to take into account the detail that their QA didn't catch the fact that they left an admission of this error in their distribution files.

Let's all hope the last person to work on those files wasn't building any back doors.

Fortunately, the back doors I found after decrypting their source no longer functioned. I don't know if they explicitly removed them, or if other changes to the code happened to break them.

And if you're from IntraLearn, don't worry, I'm not going to post the user names and passwords you hard coded into your "product."

Friday, July 16, 2004

My Wife will like this one

Mac News: Commentary: Are Mac Users Smarter Than PC Users?

By Paul Murphy,
Part of the ECT News Network
07/15/04 7:45 AM PT

My wife has a Dilbert cartoon on her office door in which one of the characters says: "If you have any trouble sounding condescending, find a Unix user to show you how." She's a Mac user and they were worse even before they all became Unix users too.

Or maybe not. But finding out whether the average Mac user really is smarter than the rest of us isn't so easy. Part of the problem is that even if you matched the admissions test results for a graduate school with individual PC or Mac preferences to discover a strong positive correlation, people would argue that the Mac users are exceptional for other reasons, that the tests don't measure anything relevant, and that it's unethical to do this in the first place.
In fact, it's pretty clear that this topic is sufficiently emotionally loaded that you'd get shouted down by one side or another no matter how you did the research; and that's too bad because a clear answer one way or the other would be interesting.

I doubt it's possible to get a definitive answer, but as long as you don't take any of it too seriously you can have a lot of fun playing with proxies such as the average user's ability to read and write his or her native language. This isn't necessarily a reasonable measure of intelligence (mainly because intelligence has yet to be defined) but almost everyone agrees that a native English speaker's ability to write correct English correlates closely with that person's ability to think clearly.

Measuring Written English

In other words, if we knew that Mac users, as a group, were significantly better users of written English than PC users, then we'd have a presumptive basis for ranking the probable "smartness" of two people about whom we only know that one uses a Mac and the other a PC.
So how can we do that? As it happens, Unix has been useful for text processing and analysis virtually from the beginning. In fact, the very first Unics application offered text processing support for the patent application process at Bell Labs -- in 1971 on a PDP-11 with 8 KB of RAM and a 500-KB disk.

By coincidence, Interleaf, the first GUI-based document-processing package, was the first major commercial package available on Sun -- in 1983, well before Microsoft "invented" Windows and well ahead of the first significant third-party applications for the Apple Lisa.

During the 12 years between those two applications, text processing and related research became one of the hallmarks of academic Unix use. By the early eighties, therefore, most Unix releases, whether BSD- or AT&T-derived, came with the AT&T Writer's Workbench -- a collection of useful text-processing utilities.

One of those was a thing called style. Style is somewhat out of style these days, but it ships on many Linux "bonus" CDs and is downloadable as part of the diction package.

Style produces readability metrics on text. Forget for the moment what the ratings mean and look at the numbers. For comparison, here's what style says about the first 1,000 words of what is arguably the finest novel ever published in English, The Golden Bowl. Readability grades:

Kincaid: 18.2
ARI: 22.2
Coleman-Liau: 9.8
Flesch Index: 46.7
Fog Index: 21.7
Lix: 64.4 = higher than school year 11
SMOG-Grading: 13.5

Of course, that's Henry James at the top of his form.
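
For a sense of where those grades come from: the Kincaid and Flesch numbers are simple functions of average sentence length and syllables per word. Here's a rough Python sketch using the published Flesch Reading Ease and Flesch-Kincaid formulas; the syllable counter is a naive vowel-group heuristic (real tools are more careful), and the sample sentence is invented for illustration, not taken from James.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: strip a trailing silent 'e', then count groups of
    # consecutive vowels. Real analyzers use exception dictionaries.
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def readability(text: str) -> dict:
    # Published formulas:
    #   Flesch Reading Ease  = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    #   Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    wps = len(words) / len(sentences)                           # avg words per sentence
    spw = sum(count_syllables(w) for w in words) / len(words)   # avg syllables per word
    return {
        "Flesch": 206.835 - 1.015 * wps - 84.6 * spw,
        "Kincaid": 0.39 * wps + 11.8 * spw - 15.59,
    }

sample = ("It was not till many days had passed that the Prince began "
          "to speak of his visit. He spoke slowly and with care.")
print(readability(sample))
```

Short, plain sentences push the Flesch index up and the grade levels down, which is exactly the pattern in the tables that follow.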

Slashdot and Other Style

For a more realistic and interesting baseline, I collected about 2,800 lines of Slashdot discussion contributions and ran style against them to get the following ratings summary along with a lot of detail data omitted here:

Kincaid: 7.7
ARI: 8.0
Coleman-Liau: 9.7
Flesch Index: 72.4
Fog Index: 10.7
Lix: 37.1 = school year 5
SMOG-Grading: 9.8

Notice that these results apply to comments from Slashdotters, not to the text on which they're commenting. Look at the source articles and you get very different results because, of course, most are professionally written or edited -- although there is an interesting oddity in that ratings for files made up by pasting together stories posted by "Michael" are consistently at least one school year higher than comparable accumulations made from postings (other than press releases) by "Cowboyneal."

Comments posted in discussion groups aren't usually professional productions like news articles. You'd expect the articles to rate considerably higher; and they do. Here, for example, is the summary from running style against five articles taken from today's online edition of The Christian Science Monitor:

Kincaid: 10.4
ARI: 12.5
Coleman-Liau: 12.9
Flesch Index: 59.5
Fog Index: 13.3
Lix: 48.8 = school year 9
SMOG-Grading: 11.6

Lots of smart people have put effort into arguing that these readability scores are either meaningless or meaningful, a choice that apparently depends rather more on the writer's agenda than on research. Most of the more credible would probably agree, however, that higher rankings are mainly useful as a rough guide to the writer's expectations about his or her audience, but that lower rankings do correlate directly with the writer's education in English and indirectly with intelligence.

So what happens if we treat the Slashdotters, a mixed bunch if there ever was one, as a median and then compare the ratings shown above with results from "pure play" Mac and PC communities?

The PC Community

I tried running style against text collected from various PC sites. The very lowest ratings came from text collected from an MSN forum host, but I only got about 600 lines because the forums suffer from the Wintel design disease of requiring you to click for each new text contribution, and I get bored easily.

Kincaid: 2.9
ARI: 1.9
Coleman-Liau: 8.0
Flesch Index: 89.5
Fog Index: 6.0
Lix: 21.5 = below school year 5
SMOG-Grading: 7.1

The highest PC-oriented ratings came from a sample of about 2,500 lines taken from reader comments hosted by PC Magazine:

Kincaid: 5.9
ARI: 5.9
Coleman-Liau: 9.0
Flesch Index: 79.3
Fog Index: 9.0
Lix: 32.2 = below school year 5
SMOG-Grading: 8.8

Notice that both sets score well below the level of Slashdot's contributors.

And the Mac Users?

So do Mac users differ? You bet. Here's the ratings summary based on about 3,000 lines of text taken from reader comments hosted by the Macintouch site:

Kincaid: 8.9
ARI: 9.4
Coleman-Liau: 10.0
Flesch Index: 67.8
Fog Index: 12.0
Lix: 40.5 = school year 6
SMOG-Grading: 10.7

Not only were these ratings significantly higher than those given Slashdot's contributors, and thus better than those given text from the PC sites, but the vocabulary was larger too. Without collapsing words to their root forms, but after removing punctuation (including capitalization) and numbers, the Macintouch stuff had 870 unique words to only 517 for the combined PC sites.
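
That vocabulary count is easy to reproduce: lowercase the text, drop punctuation and digits, and count distinct tokens without stemming. A minimal Python sketch (the two sample strings are invented placeholders, not the actual Macintouch or PC-site text):

```python
import re

def unique_words(text: str) -> set:
    # Lowercase, keep only alphabetic runs (this drops digits and punctuation,
    # though it also splits contractions like "don't" in two), and do no
    # stemming -- "use" and "uses" count as two distinct words.
    return set(re.findall(r"[a-z]+", text.lower()))

mac_sample = "The installer worked flawlessly; restarting cleared the cache."
pc_sample = "It crashed. I rebooted 3 times and it crashed again."

print(len(unique_words(mac_sample)), len(unique_words(pc_sample)))
```

Run over the full corpora, this is the counting method that yields the 870-versus-517 comparison above.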

Overall, the results are pretty clear: Mac users might not actually be smarter than PC users, but they certainly use better English and a larger vocabulary to express more complex thinking.

Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.

Thursday, July 1, 2004

If you never hear from me again...

Well, things are pretty much par for the course here at work.

The ceiling is threatening to collapse.

I need to start keeping a digital camera at work for situations like this.

The IT department is in the basement. Above us is the filing department and customer service. Their ground level room used to be two separate areas with a rather sturdy wall between them.

A couple of years ago, the wall was taken down to create a single, large room for an open office plan. In place of the wall (which may have been load-bearing; the contractors who tore it down couldn't tell) they left a foot-wide strip of drywall, the jagged end of which was finished with some moulding. This meant a strip of what used to be wall ran along the ceiling from one end of the room to the other. No support beams were installed; they would have been expensive and would have broken up the room.

This 30-some-foot strip of wall remnant (anyone know the proper term for this?) is now sagging, threatening to buckle and collapse. It's also straining the wall sections it connects to. There are a number of cracks all along the plaster.

The sagging was first noticed today, and the center of this edging is now a good foot lower than the ends. The sag is distorting the hanging ceiling and raining drywall dust on the staff.

Now here is why I'm not happy with this.

My desk is in the basement, directly under the sagging section of wall. There is debate about the composition of this chunk of construction. If it's largely moulding and drywall, then it's theorized that a collapse would create a mess in the room above, perhaps leaving a hole in the exterior wall or damaging the stairwell. This does not distress me much, as I can get out the back door unless the collapse seriously damages the rest of the building.

If this chunk of building has some substantial planks of wood, then the collapse could create what amounts to a massive spear piercing the floor into the basement below. This would result in one or more shafts of wood and drywall plunging through the floor and into my work area.

Did I mention my desk is right under this bending and bowing part of the building?

Naturally, if I hear so much as a creak from above, I'm running out the back door as fast as I can.

On a side note, I would like to state that I prefer not to be kept on life support if there is no hope of recovery.

Which reminds me, I need to go over that organ donor stuff if I get home alive tonight. My eyes are too messed up to be of any use, but my other internal organs could be recycled.