Welcome to the QA Tech-Tips blog!

Some see things as they are, and ask "Why?"   I dream things that never were, and ask "Why Not".  
Robert F. Kennedy

“Impossible” is only found in the dictionary of a fool.  
Old Chinese Proverb

Saturday, November 5, 2011

The Problem is "Obvious"
QA's Sucker-Punch

Many years ago when I was in college studying Calculus, the professor had a favorite line:  "It is intuitively obvious that. . . ."

He'd fill the blackboard with mathematical hieroglyphics that would give Einstein headaches, say "It is intuitively obvious that [something or other]", turn back to the blackboard and perform some seemingly magical manipulation whereupon the entire mess would collapse on itself with the answer being something like three and a half.  Needless to say this left the rest of us looking at each other like something had just zoned-in from The Outer Limits, thinking "What the !!! was that?!"

Sherlock Holmes, in one of his stories (I forget which one), made a very pithy comment: "There is nothing more deceptive than an 'obvious' fact."

This is a statement that should be permanently tattooed onto the palm of every QA individual in the world.

If I were to write a "Ten Commandments" for QA, part of it would be those words that should strike fear in the hearts of QA engineers and testers all over the world:
  • Always
  • Never
  • Every
  • Only
  • Assume
  • Obvious
I am sure there are others, but I just don't remember them right now.

Why are these words a problem?  Because they are the flashing red strobe lights that warn us about assumptions baked into design, requirements, or test - and those assumptions may not be true all of the time.  They may mark a special case that is "assumed" to hold universally, or an assumption that whatever is obvious to us must be obvious to everyone else.

And THAT lies at the heart of QA's biggest failures.  It's like a politician looking you straight in the eye and saying "Trust me!"

In its purest essence, whenever you hear a broad blanket statement about something or other, it signals a potential weakness that should be given extra scrutiny during test or verification.  In other words, we should be continually vigilant toward the "obvious", be it in requirements, design, or verification of what we do.

What say ye?

Jim (JR)

Friday, November 4, 2011

You Can't Get There From Here!
Mapped drives are not visible from an
elevated command prompt

This is another one of those "What WERE they smoking?" type of tech-tips.

You are using either Vista (ye Gods!) or Win-7.
You have one or more mapped drives to external/networked resources.
You open an elevated command prompt and attempt to do something that requires access to the mapped drive.

Zzzzzzzzt!  "We're sorry, but thanks for playing!"

You will notice, rather rapidly, that "you can't get there from here" - that is, the command prompt window cannot see any of your mapped drive letters.  To be brutally honest about it, the command window CAN see the mapped letters. . . . it just won't let YOU have them!

(Note that there are reports from people saying that this won't work from a non-elevated command prompt too, but others claim that this is not an issue.  Your Mileage May Vary.)

If you search the Web you will see a mighty wailing and gnashing of teeth, with people waiting in line to beat Microsoft senseless over this issue.

There are two workarounds for this issue - though, in my opinion, these are UGLY hacks for a lack of functionality that should exist by default.

  • From an elevated command prompt, execute:
    net use [letter:] \\[system]\[share]
    Note that you cannot re-use the pre-existing mapped drive letter.  The elevated prompt won't let you use the existing mapping, yet complains that the letter is already taken if you try to map it again within the command window.  Go figure!
    (Ref: http://tinyurl.com/6xlk664 - near the bottom of the page)

  • You can edit the Registry:
    Create a DWORD value named EnableLinkedConnections and set it to "1".
    Though this works, sources at Microsoft claim that it opens potential security holes to cleverly written malware.
    (Ref: http://tinyurl.com/5sunjbb)

For those of you who are more paranoid about security, or if you want to write batch-files that are portable, I would suggest the first solution.  Otherwise, you can make the registry edit and take your chances.
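If you do go the registry route, the edit can be captured in a .reg file so you don't have to click around in Regedit.  The key path below is the one usually cited for EnableLinkedConnections - it is from memory rather than from the referenced article, so double-check it against that article before importing:

```reg
Windows Registry Editor Version 5.00

; Make mapped drives visible to elevated processes
; (UAC "linked connections" between the split tokens)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableLinkedConnections"=dword:00000001
```

A reboot is typically needed before the change takes effect.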

To read all about it, along with Microsoft's response, here's the referenced article that discusses this issue in detail:  http://tinyurl.com/5sunjbb

Sigh. . . .  You're damned if you do, and damned if you don't.

What say ye?

Jim (JR)

Wednesday, November 2, 2011

Don't Shoot Yourself In the Foot!
(Protecting 'Nix mount-points)

One of the big differences between Windows and the various 'Nix flavors is the way each handles mounting logical/physical drive volumes.

Windows uses "Drive Letters", (C:, D:, etc.), to distinguish between mounted drives.  Because of this, it's relatively easy to know where one drive or partition ends and another begins as they are shown as separate, distinct entities.

On the other hand, 'Nix uses "Mount Points", ("/mnt/foo", "/mnt/bar", etc.), to distinguish between mounted devices.  Because of this, devices, data-sources, or whatevers, appear as if they are a part of the local, physical hard drive - yet can be located on a different partition, different hard-drive, a different computer, or it could even be located in an entirely different part of the world.

The way this works is like this:
  • You create a physical directory where you want your data-source, (hard drive, partition, etc.), to appear; such as "mkdir /mnt/foo" where "foo" is now an empty directory located within "/mnt" (or wherever you want to put it).
  • You then actually put the physical device on top of the mount point by "mounting" it:
    Viz.:  "mount [something on] /mnt/foo"
And Voila!  Whatever data, device, or whatever exists at or within "something" magically appears at "/mnt/foo" replacing whatever was already there.

Are warning bells ringing yet?  They should be. . . . .

What this means is that - if you change directories to "/mnt/foo" - you have no way of knowing if your [something] is, or is not, mounted there by simply looking at the directory.  That is, unless you just happen to know what's supposed to be there. . . .  An assumption I'd really hesitate to make if I were you.  Especially if you are starting out with an empty "something".  Or if the errant user thinks he is starting out with an empty "something". . . . .

What this also means is that shell-scripts, (batch files for all you Windows aficionados), have no way of knowing what's there, or what's supposed to be there, without you telling them somehow.

(OK, OK!  There are special commands that you can run to find out what's there, or what's not there - but they are not always easy or intuitive, and it's really easy for an unknowing user to dump stuff into a mount-point that's not mounted yet.  Go ahead.  I dare you.  Ask me how I know. . . . .)

What Unix should do is make un-mounted mount-points un-writable in the same way that Windows/DOS doesn't allow you to use a drive letter that is not yet mounted.  But it doesn't.  Any Tom, Dick, or Harry can blithely write into an un-mounted mount-point, causing no end of confusion.
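A shell-script can at least protect itself by checking whether a directory is actually a mount-point before touching it.  Here's a minimal sketch - the function name is my own invention, and it assumes a Linux system where /proc/self/mounts is available:

```shell
#!/bin/bash
# Return success if the given directory is currently a mount point.
# Compares the canonical path against the mount targets listed in
# /proc/self/mounts (Linux-specific; field 2 is the mount target).
is_mounted() {
    local target
    target=$(readlink -f "$1") || return 1
    awk -v t="$target" '$2 == t { found=1 } END { exit !found }' /proc/self/mounts
}

if is_mounted /mnt/foo; then
    echo "/mnt/foo is mounted - safe to use"
else
    echo "/mnt/foo is NOT mounted - writing here hits the bare directory!" >&2
fi
```

The "mountpoint" and "findmnt" utilities from util-linux do the same job, but reading /proc/self/mounts directly keeps the script free of dependencies.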

Solving the Problem:

Obviously what is needed is some way to show when the directory you are using as a mount-point - isn't mounted.  And the hint on exactly how to solve this problem is given by the problem itself.

If you remember, when you mount something on top of a directory, (which, by the way, is the way it works), whatever was in the directory prior to being mounted disappears, replaced by whatever you mounted there.

The fix is to deliberately put something in the mount-point directory - prior to something being mounted there - warning everybody that whatever is supposed to be there, isn't there yet.

So. . . . this is my fix:

From a root terminal - or sudo root. . . .
  • I create the directory where I want to mount something.
  • I deliberately "touch" (create, with nothing in them) two bogus files with warning file names:
    touch 'Do Not Use!'
    touch 'Not Mounted Yet!'
  • I then "chmod" these two "files" to 644 - making them read only to everyone but root.
(Note that the single quote marks are not a mistake.  You need to use them to include the "!" character in the file's name - as normally the "!" is a "magic character" in 'Nix.)
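Put together, the whole fix can be scripted.  This sketch wraps the steps in a function; "/mnt/foo" in the usage note is a stand-in for your real mount-point, and you'd run it as root (or via sudo) so the guard files end up owned by root:

```shell
#!/bin/bash
# Seed a mount-point directory with "warning" guard files.
# If you later cd in and still see these names, nothing is mounted yet.
make_guarded_mountpoint() {
    local mp="$1"
    mkdir -p "$mp"
    # Quoting keeps an interactive shell from treating "!" as a
    # history "magic character".
    touch "$mp/Do Not Use!" "$mp/Not Mounted Yet!"
    # 644: the owner (root, when run via sudo) can write;
    # everyone else gets read-only.
    chmod 644 "$mp/Do Not Use!" "$mp/Not Mounted Yet!"
}

# Typical use (as root):  make_guarded_mountpoint /mnt/foo
```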

With this, there is no possibility for mistake.  Anyone who goes to that directory, expecting something to be there, instantly knows that - whatever it's supposed to be - it isn't there yet.  And depending on the system - and their relationship with it - they can either go "Oops!  Forgot to mount my. . . .", or put the Sysadmin wise that something isn't exactly kosher in Denmark.

What say ye?

Jim (JR)

Sunday, September 4, 2011

The Cost of Complacency

I spent most of today replacing the front brake pads on my wife's Lexus, which we bought in 2005 or thereabouts.  Not only were the fancy aluminum rims frozen fast to the steel of the disk-brake rotors due to galvanic corrosion between the dissimilar metals, (nothing having been done to prevent it), the calipers - and especially the caliper supports that hold the pads in place - were unbelievably rusted.  The rotors themselves were so badly rusted that the polished brake surface was actually peeling off, exposing the rusted and pitted metal below.  These parts had been replaced a year ago by the dealer.  One year later they needed replacing again.

Now we also have a 2002 Camry, and I've done brake work on it before - and yes, I've seen rusty brake parts there too.  But!  They were never as badly rusted as the parts I saw today - even after years and miles of use and abuse.

A friend of mine bought a Yaris for two reasons:  First, it was manufactured by Toyota, the Reigning Gods of Automobile Manufacturing.  Second, the price was right.  Of course, being manufactured by Toyota, it walks on water and talks to the angels.  Right?

In his case, the car has been one expensive repair after another and he has made a Holy Vow to never darken the door of a Toyota dealership ever again.  Especially since the Toyota people near him have been absolute models of diplomacy and tact.  (/sarcasm!!)

Of course, all three of these cars were manufactured and sold before the Toyota Recall Debacle.  The Camry was, and still is, one heck of a car.  It's within spitting distance of 200,000 miles on the clock with nary a burp to sully its pristine reputation.  The Lexus, manufactured two years later, has been a hole in our driveway into which we have been pouring money.  And my friend's Yaris, purchased even later than the other two, is rapidly on its way to being inducted into the "Five Gallons of Kerosene and a Match" Hall of Fame.

Then came the problem of "sudden unexplained acceleration."  And depending on who you talk to, it cost lives - people who died in crashes attributed to that fault.

Toyota was God, so Toyota had become complacent.

So, what happened to Toyota?  Nowadays many people are thinking a second, third, and maybe even a fourth time about trusting Toyota again, and for the first time in its long history Toyota has been posting sales losses rather than gains.

In the same vein, the exact same vein as a matter of fact, General Motors all but literally owned the automobile marketplace years ago.  They became complacent, and they reaped the rewards of their complacency.  General Motors all but vanished off the face of the earth - and would have vanished without a trace - but for the massive Government bail-out they received.

Digital Equipment Corporation - in its time - had virtually the entire mini-computer market in its back-pocket.  Being Gods in their industry, they became complacent.  Look where they are today.  Or rather look where they aren't today, having withered away to nothing long ago.

In the '60's, NASA was GOD when it came to technical innovation.  They had the world by the balls, and the sprinkles on top of the cherry, on top of the whipped cream, on top of the icing, on top of the cake was landing a man on the Moon.

When the Apollo 13 mission was rapidly going down the toilet, the three astronauts on that mission were in such deep trouble that, (in all probability), even Lloyd's of London would not have insured their lives.  Thanks to the incredible ingenuity of the people on the ground at NASA they made it home, in record time, with nary a scratch to show for their harrowing adventure.

Having gained the high ground, so to speak, they became complacent - embroiled in political turf-battles that sapped the energy and vitality out of that agency.

Where are they now?  On the peripheries of space technology; so far out of the picture that they depend on the Russians, French, and Chinese to get payloads into space.

The United States, once the Gold Standard for innovation, has become complacent since we - obviously - had the entire world by the Short Hairs.

So, what happened?  A recent article on the subject of innovation and its relationship to the economy quoted an independent analysis of the inventiveness of various countries - and guess where the Good 'Ole U-S-of-A ended up on that list?  In the highly prestigious position of being number eighty-one.

That's right, kiddies - in 81st place, right behind Dilbert's famous Country of Elbonia.

And it would not surprise me if position number 81 is even lower than Iraq and Afghanistan's position on that list.  Compared to China?  Fuggeddaboutit!  We're not even in the same Solar System they're in.  We have even dropped below our former Arch Rivals, the Russians, and are probably being outpaced by some third-world countries as well.

There's a saying:  "If you always do what you've always done, you will always get what you always got."

Unfortunately, that's not true anymore.  If you "always do what you've always done" what you "always get" is to be rapidly left behind by those companies, agencies, and governments who still have the wisdom to encourage, (and fund!), innovation.

Complacency cost Digital its entire company.

Complacency darn-near sent General Motors to the same fate.

Complacency cost Toyota dearly in that most precious of commodities - customer trust and loyalty.

Complacency has cost the United States not only its position in the world, but has wreaked more havoc on our economy than anything since the Great Depression, and has placed our National Debt squarely in the fists of the Chinese.  And God Himself help us should the Chinese decide to "call" even a fraction of the paper they hold.  We'd deflate faster than a pin-pricked balloon. . . .

Complacency costs.  Dearly.  Tragically.  Even Globally.

What say ye?


Saturday, May 7, 2011

Hot Smokin' Weapon! Award for May 2011:
EasyBCD by NeoSmart

  . . . . And a Big HELLO to all my friends out there in Television Land!

I have decided that it's time for another one of my famous (sort-of) "Hot Smokin' Weapon!" awards.

And the lucky winner is. . . . . . (Envelope please. . . .rrrrrrip, shuffle, shuffle - pregnant pause)
NeoSmart Technologies and their EasyBCD product!  (Enthusiastic canned applause. . . .)

Seriously now, EasyBCD is one of those cute little utilities that really should have been included with Windows Vista, 7, etc., because it's so darned handy and useful to have.  In a sense it's a lot like sex.  If you've never had it, you don't miss it - but once you've gotten it you wonder how you ever lived without it.  Not only is it so darned useful that, (IMHO), it should be standard equipment on modern computers, it is also absolutely free.  Yea, I know - I can hear the yawns already - however try not to fall asleep before I finish here, you'll be glad you stayed awake - Promise!

To really appreciate the magnitude of EasyBCD's contribution, we need to take yet another Stroll Down Memory Lane. . . .

The earliest versions of both DOS and Windows - up until about the time of Windows '98 - used a hard-coded boot "pointer" in the Master Boot Record, (MBR), of the hard drive to tell the system where - on that particular hard drive - the boot and start-up files were so that it could get the computer up and running.  The advantage of this system was that the boot process was a trivial exercise:  Follow the Yellow-Brick Road and you eventually get to Oz.

If you've been paying attention, you will see the big, glaring, disadvantage of this boot method.  It assumes that any bootable operating system is on the first hard drive in the logical chain - which, by the way, is the only one that would boot natively.

If you wanted to boot to a different partition on the first, (root), hard drive, you'd have to re-create a new MBR that points to the partition to be booted.  If you want to boot to an entirely different hard drive you'd have to, (somehow or other), change the logical order of the drives to make the new drive the root drive in the logical chain.  (And then muck around with the MBR!)  Both of which, (as many a user will tell you), are among the fastest ways to bork a system into utter oblivion unless you were particularly careful.  And lucky.  And had backups.

In order to make these additional partitions - or disks - bootable without having to hack the MBR, (Yow!), or fuss around with the logical sequence of the drives, (Say WHAT?!), whenever someone wanted to change the boot order, people were forced to purchase specialized applications, (like System Commander), that could show them a menu of bootable operating systems and let them pick the one they wanted.  It was a hack, but it was a hack that actually worked, and people gladly shelled out their hard-earned pesos to buy these utilities.  These utilities, because of what they did, were also fraught with danger.  I can personally testify that if you screw up just THIS MUCH. . . . Well, let me just say that it wasn't a pretty sight.

Starting with Windows NT, Microsoft worked out a way to fix this problem by providing a "boot.ini" file in the root of the first logical drive that was used to boot the system.  The function of the boot.ini file was to tell the system what operating systems were installed and on which disks and/or partitions they could be found.  Creating a multi-boot environment became as simple as editing the .ini file.  No hack required!
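For those who never saw one, a typical dual-boot boot.ini looked something like the sketch below.  The ARC paths and O/S names are made-up examples, not taken from any particular machine:

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP Professional" /fastdetect
multi(0)disk(0)rdisk(1)partition(1)\WINNT="Windows 2000 (second drive)" /fastdetect
```

Each multi(...)rdisk(...)partition(...) path points at a controller, disk, and partition - so adding another entry to your boot menu was literally one more line in a text file.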

Of course, making things that easy for the end user goes against Microsoft's Corporate Policy, so they fixed that with the release of Vista by scrapping the boot.ini file in favor of a special, hidden, "database" called the Boot Configuration Data (BCD) store.  Supposedly, this was done to support an entirely new boot paradigm called the Extensible Firmware Interface (EFI) - something that may well become significant when 3 terabyte (+) disks become more common.

The end result of this was to make changing the boot process virtually impossible without using a Microsoft Supplied Utility called BCDedit.  BCDedit is a pure command-line tool and is about as obtuse and cryptic as Egyptian Hieroglyphics, thus giving Microsoft the lead in the mad rush to make boot configurations as insanely difficult to maintain as possible.  (Though Grub2 is a close second, with FreeBSD and Solaris right behind them.)

Enter NeoSmart and EasyBCD.  EasyBCD transforms the management of the BCD store from alchemy and fervent prayer to something as easy as a point-and-click interface.  With this you can make a nice little menu for yourself with each bootable O/S listed - without having to jump through flaming hoops or do the high-dive into a cement filled bucket.

You can change the existing boot order around, add or remove operating system boot entries, change which O/S boots by default - along with tweaking all kinds of interesting parameters - like creating an "if all else fails, this should boot" entry in your O/S list.

Probably one of the best features of this utility, (IMHO), is that it allows you to make BACKUPS of your finely tuned boot sequence - just in case!

Even more useful is that the latest versions allow you to make a bootable "recovery" USB stick with EasyBCD on it, to help you in those cases where an operating system install has hijacked your carefully crafted boot process and stubbornly refuses to let you get it back.  Or even provide a workable substitute.

Now, admittedly, there are some things that EasyBCD cannot - and will not - help you with:
If you decide that you're crazy enough to mess around with, (or actually re-arrange), the logical volume GUID's, (Yikes!), or you're maddeningly insane enough to hand-edit the drive's MBR and partition tables to change the logical order of the partitions on the drive, (Double Yikes!!), EasyBCD can't help you.

For these tasks you need particular and specialized tools, a double-shot of fine Kentucky Sour-Mash Bourbon, And a straight-jacket!

For everything else related to booting, EasyBCD is the obvious winner.  So much so that, (surprise! surprise!), even the folks at Microsoft use it instead of their own tool.  Which should tell you something about both EasyBCD, and Microsoft's BCDedit utility.

P.S.  Just in case you missed the cleverly hidden hyperlink at the beginning of this article, you can go check out EasyBCD right here:  http://neosmart.net/dl.php?id=1   I'd do it if I were you.  You really will be glad you did.

What say ye?


Thursday, May 5, 2011

I've Got a Tiger in my Tank!

No, this is not an Esso/Exxon commercial.  (Of course, you do realize that I am severely dating myself with that reference!)

Neither is this a commercial for Mac's OS-X.

Instead, this article is about a little known - and probably even less often used - SATA drive mode called AHCI which stands for "Advanced Host Controller Interface".  There's even a nice Wikipedia article about it that goes into all the gory details if you're interested.

AHCI supports all kinds of fun features like the ones listed below.
(Taken from the AHCI Spec - Rev 1.3, available on Intel's web site.)

AHCI specifies the following features:
• Support for 32 ports
• 64-bit addressing
• Elimination of Master / Slave Handling
• Large LBA support
• Hot Plug
• Power Management
• HW Assisted Native Command Queuing
• Staggered Spin-up
• Cold device presence detect
• Serial ATA superset registers
• Activity LED generation
• Port Multiplier
The support for port multipliers is important, especially if you want to get a nice shiny new External SATA RAID box - as most of them require port-multiplier support nowadays.

The large LBA support is especially important because it allows you to connect HUGE drives to the system - and the staggered spin up helps avoid smoking your computer's power supply when you fire up that monster 32 drive array!  Though you would hope that any array that size would have its own dedicated power supply, right?

There are - as always - a couple of flies in the ointment:
  • Many self-booting utilities, (like Apricorn's hard drive backup/cloning software), haven't even thought of AHCI, let alone support it.
  • If you're running anything older than Vista or a Hot Smokin' Linux Kernel, fuggedaboutit!  Don't even try.
  • If you ARE running Vista or better, (trust me, anything you might be running is much better than Vista!), or a Hot Smokin' Linux Kernel - and didn't install with AHCI enabled at initial install time - when you change to AHCI and reboot, your computer is liable to look at you with a puzzled expression and ask "What's a Cubit?"
I have no idea how to mitigate this in Linux as I have neither tried it, nor have I researched it.  On the other hand, Microsoft has already released a Knowledge Base Article describing the registry hack you must do - before making the switch - to clue your computer in on what's about to happen.
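On the Windows side, the Knowledge Base fix boils down to re-enabling the AHCI miniport driver before you flip the switch in the BIOS.  On the Vista/Win-7 generation the driver service is named msahci, and the change amounts to something like this - verify the key name against the KB article for your exact OS version before importing:

```reg
Windows Registry Editor Version 5.00

; Set the AHCI miniport driver to load at boot (Start = 0) so Windows
; can still find its boot disk after you enable AHCI in the BIOS.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\msahci]
"Start"=dword:00000000
```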

All in all, especially as multi-petabyte RAID arrays become common attachments to the average X-BOX game console, AHCI is going to become increasingly important.

What say ye?


Sunday, April 17, 2011

OOPS! - When disaster strikes
System Restore (Part 1 of a series)

Usually, mostly, and as a general rule, your computer starts up and runs well.  You turn it on, enter your password if required, and proceed to have a, (hopefully), happy day on the other side of Alice's mirror in Computer Land.

Sometimes things don't work out so nicely.  You turn the computer on and things go awry in weird and bizarre ways.

Or there's that sickening, sinking feeling you get in the pit of your stomach when your computer starts up with a "no boot device available" message, or sits there - black-screened - with a little white underline cursor blinking at you in the upper left hand corner, as if to say "Ha! Ha! Ha!  You're thoroughly hosed now!!"

Since you are reading this, your computer is, (hopefully), not borked beyond all recognition and we can begin with a machine that is still in running order.

There are certain things that can be done right now, without specialized tools, that can mean the difference between disaster and a successfully recovered system.

The first is System Restore.

System Restore
(Note: Windows 7, Vista and XP users all benefit from this.)

This is a life-saving trick that really doesn't get the exposure it should.  Seriously!  I have done things that can only be classified as "abysmally stupid" in hindsight; and had my computer - though still bootable - in very deep sneakers, with no idea whatsoever as to how to dig myself out.  If it weren't for the System Restore feature - I'd still be in deep sneakers with no way out but a complete bare-metal re-install.

What System Restore does is save periodic snapshots of your system's critical configuration data, (called "restore points"): system files, registry entries, driver status, etc.  If, and when, your system gets corrupted by an install that went awry - or some other reason, (wink! wink!) - you can use System Restore to, in essence, return your computer to an earlier point in time when things were not damaged.

System Restore needs to be enabled and running.  I know, this is one of those "Ya Think?!" statements - but seriously, System Restore is often not running - or has been disabled for some reason - and the status of the System Restore service needs to be checked periodically.

Note:  You will find articles on the Internet advising you to turn System Restore off for one reason or another.  They allege it will speed up your system, (false), or that you will recover tons of disk space, (only partially true), or that it's undesirable for this or that reason, (absolute B.S.).  They actually have the gall to suggest that you should destroy your first line of defense against system destruction to gain some nebulous - and often minimal - improvement to your system.  Which, In My Humble Opinion, is absolute insanity!

Now I will admit, System Restore does reserve a chunk of your hard drive for system restore points.

Of course, Windows also reserves chunks of your hard drive's space for the system swap-file, the hibernation file, crash-dump files, Windows Update backups and so on.  If you are getting so tight on disk space that you are considering dumping your system's restore points - you really need to go buy a huge external drive to put stuff on.

It is my humble opinion that - if you need to dump something - dump all that worthless advice instead of System Restore.

To check, (and enable if necessary):
Open your START menu by clicking on the "Windows" icon in the lower left corner of your desktop - assuming you haven't moved the bottom task bar.

Point to the "Computer" menu item on the right hand side of the start menu and right-click.  A drop-down menu will open; click on "Properties" down at the very bottom.

When you do that, a Control Panel window opens up with important information about your system, its setup, and the operating system it is running.  You should remember how to get to this point, as you will very likely need this information in the future.

What you should take note of now is in the center of the screen under "System Information".  The "System Type" entry will say something about either a "32 bit" or "64 bit" operating system.  Write it down somewhere and remember it.  A sticky-label on the bottom of your computer is a good place to keep it.

Even though this has nothing to do with System Restore, it is essential information to have, and since you're there anyway, you should take note of it and record it somewhere.

After you have done that - in the upper left is a list of items one of which is "System Protection".  Click on it.

You will get a pop-up properties window that looks something like this:

The System Protection Property Sheet
Notice in this case System Restore for drive "C:" is turned on.  The "D:" drive is managed by a different operating system - with its own System Restore - so I have that turned off.

You should see your C: drive with System Restore turned on.  Click on the C: drive's entry so that it turns blue and follow the steps below.

If it is turned ON:
Click on the System Restore button.  What should open up is a window that discusses System Restore.  If you click next you should see a list of restore points that you can use.  This means you're all set and ready to roll back the clock if necessary.  Click "Cancel" three times to exit system restore and the system properties page and then close the Control Panel window.

If it is turned ON but there are no restore points available:
If, when you click on the System Restore button, you get a message saying that there are no restore points available, you should create one RIGHT NOW.  And I'm going to show you how to do it.

What has happened is either you just turned System Restore on, or (more likely), something has destroyed any previous restore points.

To create a system restore point, exit the status window and return to the system properties page.  There you will see a button that says "Create".  Click on this button and a small window will open asking you to type in a description of this restore point.  Type in some descriptive text - "Restore Point created after something clobbered them" - or something similar and then click on the button marked "Create".  A dialog will open saying that a restore point is being created; and once that finishes you will see another dialog saying that it was created successfully.

Exit the dialog and then try the test mentioned above under "If it is turned ON".  You should see a list of restore points, with only one in it - the one you just created.  Click "Cancel" several times to exit back to the control panel window, and then close it.

In both cases, you are now set for rolling back your computer if disaster strikes.  With System Restore turned on and working, Windows will - periodically - create "Automatic Restore Points" to create a trail of bread-crumbs you can use when disaster strikes.

If System Restore appears to be ON, but the dialog tells you it's OFF:
There is a rare, but real, possibility that when you go to the System Restore property sheet, select the drive that is "ON" and check for restore points by clicking on the "System Restore" button - a dialog will pop up telling you that System Restore is not enabled on this drive - leaving you scratching your head in wonder.

You can check whether it is really on by verifying that the correct drive is selected and clicking on "Configure".  When you get to the configuration page, it may even show that protection is turned on, because the little dot is in the circle next to "Restore system settings and previous versions of files".

What has happened is that the state of the System Restore service and the indications in the System Protection dialogs have gotten out of sync somehow.  This is rare, but it does sometimes happen.  In this case System Restore is actually and truly off - it only thinks it's on.

Here is how you re-sync it:  Click the "Disable" circle to put the dot there, then click the top circle again to turn protection back on.  Verify that the slider below shows some percentage of the disk reserved for restore points, (5% is a good starting value), and then click the "Apply" button, which should now be lit.

Hit "OK" to close that window and return to the System Protection property sheet and create a new restore point by clicking the "Create" button and following the steps given.

If System Restore is OFF:
The first thing we need to do is turn System Restore on.

To do that, go to the System Protection property sheet by following the steps outlined above - but this time, instead of checking whether System Restore is on, we're going to turn it on.

To do that, first look at the hard drive or drives listed in the System Restore window.  One of them should be your C: drive and we want to protect that.

Click on the C: drive entry in the window and notice that it turns blue.  Then click on "Configure".  When you open the configuration page, it should show "Do not protect this disk" selected; we want to set it to the top item, "Restore system settings and previous versions of files", by clicking the circle next to it.

Further down, you will see a slider that determines how much of the hard disk should be reserved for system restore points.  Mine is set to 5%, and that should be more than enough to hold the restore points you need to get out of trouble.

Click "Apply" and then "OK" to activate System Restore and return to the System Protection property sheet.  Back on the property sheet, you should see that your C: drive is set to "ON".

Now that it's turned on, go ahead and create your first restore point by clicking on "Create".  Give the restore point a descriptive name, and actually create it by clicking on the "Create" button.

At this point you should have System Restore turned on and at least one good restore point set.  Now that System Restore has "got your back" so to speak, it will continue to create restore points periodically - especially when new software is installed.

You've now taken your first step toward avoiding - and correcting if it should happen - system disasters.

What say ye?


Monday, April 11, 2011

Internet Explorer 9
Preventing Automatic Install

As many of you know - Internet Explorer 9 has been released by Microsoft. . . .  And true to their usual tricks - they have released it as an "Important Update" - virtually guaranteeing that it will be installed automagically - whether you want it or not.

Since it is so new - and it is likely that there are still a few rough edges - you may want to delay automatic installation of IE-9 via Windows Update.

There is a registry entry that will accomplish this - the same one set by Microsoft's own "IE9 Blocker Toolkit":
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Setup\9.0]
"DoNotAllowIE90"=dword:00000001
You can create a file, (xxxx.reg), that will automatically update the registry for you when double-clicked, like this:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Setup\9.0]
"DoNotAllowIE90"=dword:00000001
Of course, you will get all the usual warnings about modifying the registry causing Doom and Destruction - but you can (in this case) safely ignore them.

What say ye?


Saturday, March 19, 2011

Linux vs Windows?
Which to use and when

Over the years, I have messed with a wide variety of both Linux and Unix type systems.  I have also used virtually every Microsoft operating system - ranging from the venerable DOS and its kin, through the germinal "Windows" versions, all the way to Windows 7, Server 2008, et al.

The Linux vs Windows debate is, (IMHO), one of those pointless "religious wars" that have no real resolution.

Like the "Ford vs Chevy" or the "Mac vs Everybody Else" kind of debate - it's tantamount to arguing which is better, Vanilla or Chocolate.  There are legions of advocates and supporters on both sides, each convinced of the Ultimate Rightness of their respective opinion.

So?  Here goes. . . .

Which do *I* think is better?  Wanna be perfectly honest?  Neither one.

There are things that can be done in Linux with trivial ease that are virtually impossible to do in Windows; and vice-versa, you can do things in Windows that would tie Linux into knots.

Oh, yeah - I can hear it now:  Microsoft wears a black helmet and has amplified breathing, whereas Linux and the whole Free Software Movement is allied with the Good Side of The Force.  And when I hear someone spouting that - I have just two words for them:

Grow up!

With all due respect to Mr. Stallman and the Free Software Foundation, I personally believe that pedantic, polarized religious thinking is counterproductive.  Neither Windows nor Linux is going away in the reasonably foreseeable future and the continuing back-biting, blame-throwing and finger-pointing isn't doing anyone any good.  Except for those periodicals, web-sites, advertisers and other Yellow Journalists that have always profited from roiling up strife and discord.

Comparing Linux to Windows is like comparing a pumped up hot-rod to a Buick.  The hot-rod is a much more powerful and fun machine to drive, but that power and fun is purchased at the cost of greater administrative responsibility.  You have to do more "tinkering" with high-powered cars than you do with the Buick.  You also have to really pay attention when you are driving one; much more so than with the Buick.

By comparison the Buick is more comfortable and easy to use, but it sacrifices power and flexibility to achieve that ease of use.  This is not to say that a stroked-and-bored Street Beast is inherently better, or worse, than the Buick.  It's a matter of personal desire, taste, and need.

Aside from the fact that the Windows user interface is inarguably the most well known UI in the world - aside from the UI for automobiles -  (Sorry, Mac!), I believe that each system does things that - ultimately - complement the other.  To express it differently:  both Windows and Linux can, and should, coexist peacefully.

The one place where Windows absolutely excels is in large enterprise deployments, where global policies and granular permissions need to be propagated throughout the infrastructure in a seamless and efficient manner.  Within Windows it is possible to grant administrative authority for a very limited range of actions in a very specific and granular way.  Allowing specific users to administer a very limited and specific set of printers in their department is one example.  The ability to limit who can send faxes via the corporate fax-server is another.

By comparison, Linux is more of an all-or-nothing situation.  In its default configuration, "sudo" grants virtually unrestrained root access. Though you can create specific user-groups that have specific authorities, the grant of authority - even in a specific situation such as administering printers - is often uncontrollably wide and vast - or is unreasonably specific and limited.

Windows allows you to tailor authority over a certain very specific group of resources, the printers in a local department for example, without granting a broad and sweeping authority over printers in general.

Windows also supports the concept of group policies in general, and global group policies in particular.  With these policies, you can grant a very specific authority in a more generalized way.  For example, you can grant to QA or development departments - wherever they are in the organization, even if in remote locations - the authority to change the computer's system time, while forbidding everyone else.

Likewise you could set a policy that establishes a global standard workday - 08:00 to 17:00 local time with an hour's lunch from 12:00 to 13:00 - except in Dubai where there are also four or five ten-minute intervals blocked out for the required Islamic prayer-periods.

Permissions and policies - though set for regional needs or preferences - can be made portable.  This way the executive from Dubai who is in Chicago can have his machine automatically adjust for local time - while still maintaining the prayer-period time blocks he needs.

Windows allows a machine to be added to a particular group or class of users - a new employee hire, for example - and once that user is placed in a particular department or group, the appropriate enterprise permissions and restrictions are automagically applied to his system without further intervention.

Policies can be designed to apply to a particular class of computers regardless of where they might be at a particular point in time or who is using them.  Likewise permissions or restrictions can be applied on a user-by-user basis, regardless of what machine this particular person might be using.

Additionally, local groups or departments can be delegated specific authority to administer their own policies.  Stock traders or investment brokers within an organization may be subject to legal or administrative restrictions that would not normally apply to the average user.  Or vice-versa.

With Linux, you would have to manually propagate the policy from machine to machine, group to group, user to user and hope you didn't exclude someone who needs it, or include someone who doesn't.  Of course, you could automate that task with shell scripts, but with 'nix, every time a machine is added, removed, or changes assignment within the enterprise, the configuration process has to be done all over again.
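To make the bookkeeping concrete, here is the kind of throwaway propagation script a 'nix admin ends up writing - purely a sketch, with made-up hostnames and a made-up policy path, pushing one file over scp to a hand-maintained machine list:

```python
import subprocess

# Hypothetical inventory - this hand-maintained list is exactly what goes
# stale every time a machine is added, removed, or reassigned.
QA_MACHINES = ["qa-01.example.com", "qa-02.example.com"]
POLICY_FILE = "/etc/security/time-change.rules"  # made-up path

def build_push_commands(hosts, policy_file):
    """Return one scp command line per host, copying the policy file over."""
    return [["scp", policy_file, "root@{0}:{1}".format(host, policy_file)]
            for host in hosts]

def push_policy(hosts, policy_file):
    # Run each copy in turn; any unreachable machine stops the rollout.
    for cmd in build_push_commands(hosts, policy_file):
        subprocess.check_call(cmd)
```

Every hire, transfer, and decommission means editing QA_MACHINES and re-running the push - the per-machine bookkeeping that a directory service does for you automatically.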

In a nutshell, when it comes to the ability to granularise permissions and authority, Windows beats 'nix hands-down.

On the other hand Linux systems are remarkable for their flexibility and their ability to adapt to varied and varying roles with little or no cost.  Though Linux systems are making inroads into the desktop user-space, the place where Unix in general, (and Linux in particular), excels is in server-based applications.

Virtually any old piece of hardware you may have lying around can be adapted to a wide range of useful purposes using Linux.  I have built multi-terabyte file servers using systems that had reached the pinnacle of their capabilities with Windows '98.

And I am not talking about some crufty version of Linux back from the time of Wooden Ships and Iron Men; I am talking about modern distributions, fully updated with all the latest security patches.  Of course, they might be running a text console instead of a full-fledged GUI, but it can still be done.  By adding Samba, a minimal installation of Apache as well as SWAT, you can have a fully functional file server with a manifestly capable administrative web interface.

You can take, (and I have taken), ancient laptops that have long outlived their usefulness in the Windows world and adapt them for use as Linux boxes.

An excellent example would be to take an old - but quite capable - laptop, install Linux on it, and use it as a portable network test-set.  Tools like Wireshark, Netmon and a whole host of others that are free for the asking allow you to convert that ancient laptop into the equivalent of a multi-thousand-dollar portable network analysis tool.  And the cost is virtually de minimis; all it requires is a tiny bit of research and a few moments of your time.

Unlike the multi-thousand-dollar network analysis tool, your tool is adaptable and upgradable as your network or needs change - without investing additional thousands of dollars into upgrades of dubious merit.

In a similar vein, Vyatta provides an Open Source enterprise firewall/router/vpn/etc. system that is easily the equivalent of the best that Cisco has to offer.  Even on older hardware it Eats Cisco's Lunch and if you invest in a multi-core Beast System, Cisco is not even in the same solar-system.  (Though there is the risk of that Cisco box coming back in the distant future, surrounded by a huge electro-magnetic field, looking for its master. . .)

Another place where Linux excels is in the granularity of the installation.  In pretty much the same way that Windows excels in granularity of permissions, Linux allows you to install, or create, an installation environment tailored to exactly and precisely what you need.  With Windows any installation is, (virtually), an all-or-nothing situation.  Not only do you get the entire circus, you get the elephants thrown into the bargain.

Linux by comparison allows you to install, add, remove and otherwise tailor an installation to a specific need.  With regard to that multi-terabyte file-server, you can include 100% of precisely what you need with little to no extra fat.  This is one of the main reasons why Linux can be used so successfully on older systems; you can make that ancient laptop into a lean, mean, network munching machine; without the fat, cruft, bloat, and gobbledygook that other operating systems drag in their wake.

Just as important as Linux's inherent flexibility, is the extensibility of Linux.  If there is an application or use for Linux that you desperately need, it's a virtual lead-pipe-cinch that someone has already created it for you.  In the unlikely event that what you need doesn't exist, there are tools - again free for the asking - that allow you to create what you need for yourself.  These tools range from the simplest of shell-scripts to the most extensive development and version-control systems imaginable.

By comparison, the cost of development systems for Windows is often a significant portion of the total software development cost.

Should you need help, it's there waiting for you on the Internet.  Unlike the Windows assistance and training that is available, (for the mere pittance of multi-thousands of dollars, per person), most Linux help is available for the asking.  There are a multitude of fora, groups, blogs, local meetings, events, shows, and other things that - if not absolutely free - are available for a fraction of the cost of the corresponding Microsoft/Windows offerings.  Even those companies that provide payware solutions often provide free webinars, podcasts or RSS feeds to help keep you abreast of the latest developments.

What about interoperability?

Windows' ability to play nice with others in the sandbox - though limited - is slowly improving.  Microsoft, having come to the startling realization that - maybe, just maybe - they aren't the only fish in the pond, ( !! ), is beginning to make efforts to interoperate more effectively with other systems and platforms.

The biggest strides toward interoperability have been made - as one would expect - by the Linux and Open Source / Free Software community.

A shining example of this is the Samba software suite that allows Linux based systems to fully participate in Windows networks.  This is not restricted to just file services - though it does this remarkably well - it also includes participation as Active Directory capable member servers, domain controllers, enterprise level role masters and global enterprise repositories.  Implementing DNS, WINS, mail services, including "Exchange" capable mail services, is also doable within Linux.

In fact, because Microsoft server licensing is - shall we say - somewhat expensive, even larger enterprises with deeper pockets tend to place Windows servers in key locations within the architecture and fill in the rest with Linux machines participating in the Windows enterprise network.  Smaller organizations sometimes completely forgo the Windows servers altogether, using Linux equivalents to administer their Windows desktop machines.

In Summary:
Each platform has both strengths and weaknesses.  Each platform is like a specific tool - designed and useful for certain specific uses.  Which you use, and how you use it, is entirely up to you.

Tossing out Windows - as some advocate - simply because it's Windows is tantamount to throwing the baby out with the bathwater.  Likewise, avoiding Linux simply because it's NOT Windows is similarly narrow-minded.  You need to keep a varied and flexible set of tools in your tool-box to meet all your needs.

Both Windows and Linux deserve a place of honor in your tool-box, alongside all the rest of your tools.

What say ye?


Wednesday, March 16, 2011

An Update to my open letter to Ubuntu

The following was posted on Launchpad - Ubuntu's support and bug tracker - as question 149330


Ref: My blog post titled "An Open Letter to Canonical and the Ubuntu Team."
(Please read and comment)

Ubuntu's Claim to Fame - and what has lifted it to the top of the popularity list for Linux distributions - was its primary emphasis on usability instead of the Latest and Greatest whizz-bang features.

The Linux community is both broad and vast - there is a distribution for just about every taste imaginable - from the micro-Linux to the monolithic "everything but the kitchen-sink" monster distro's; from the most experimental "bleeding edge" distributions for the most daring Uber Geek, to those distributions that focus on usability.

I have tried many different Linux distributions for varying reasons over the years and I settled on Ubuntu for one simple reason:  I have a job to do - and it's often difficult enough to do what is needed without having to jump through the roadblocks and hoops imposed by those distributions who don't know better, or just don't care.

Until recently, Ubuntu has been my favorite distribution because "it just works".   Period.  In fact, I praised Ubuntu in a previous posting on my blog as the *ONLY* Linux distribution that I would be willing to install on my wife's computer - or even the computer run by my sainted mother of 70+ years.

And why?  Like I said before, it just works.  You didn't have to be an uber geek to use it.  Of course, if you wanted to get your hands dirty and poke around under the hood, that was available too.

Unfortunately, in their latest distributions Ubuntu has sadly fallen away from this high standard of excellence.  In fact, perusing the various blogs and posts, I have noticed an increasing disdain toward "dumbing down" Ubuntu.

There seems to be an increasing emphasis on moving toward a more "edgy" (bad pun !) distribution model, sacrificing the usability that has been Ubuntu's hallmark for years.

I have a number of beefs with Ubuntu, but I will place at the Ubuntu Community's feet the two that I think are the worst of the bunch:   Grub2, and the new GUI interface.

Note that I am referring to my own installed distribution - 10.04 LTS.


Back when Men were Men, and Linux was Linux, we had LILO as the primary boot-loader.  It was difficult, annoying, and a pain in the tush, but it was what we had; so we sucked-it-up and did the best we could with a bad situation.

Then, in a Stroke of Genius, someone came up with the Grub boot loader.  Not only was it a miracle of simplicity compared to the abomination that was LILO, it was a miracle of simplicity in its own right.  Edits and configuration changes were as simple as editing a few lines in the menu.lst file.

Its basic simplicity and ease-of-use resulted in virtually Every Distribution Known To Man immediately deprecating LILO and switching en masse to Grub.

In fact, over 99.9999999(. . . . .)99999% of the existing distributions *STILL* use Grub for just that reason.  Even the most experimental and Bleeding Edge distro's still use Grub.

Unique among all distributions, Ubuntu - and Ubuntu alone - has decided to switch to Grub2, despite the fact that Grub2 is probably one of the most difficult boot-loaders I have ever had the misfortune to come across.

It resurrects everything that was Universally Hated and Despised about LILO, and it does it with a vengeance!

Not only does one have to go edit obscure files located in remote parts of the file-system, one has to edit - or pay attention to - several different files located in different places, presumably doing different tasks in different ways.  And one cannot edit simple menu lists, one has to create entire shell scripts to add a single boot entry.  Even LILO wasn't that gawd-awful.

It is so bizarre that even Dedoimedo - author of the definitive Grub tutorial - mentions in his tutorial on Grub2:
Warning!  GRUB 2 is still beta software.  Although it already ships with Ubuntu flavors, it is not yet production quality per se.
When discussing the question of migrating to Grub2, he says:
Currently, GRUB legacy is doing fine and will continue for many more years.  Given the long-term support by companies like RedHat and Novell for their server distributions, GRUB legacy is going to remain the key player. . . . .
And to put the cherry on top of the icing, on top of the cake, he says:
Just remember that GRUB 2 is still beta. . . . so, you must exercise caution.  What's more, the contents and relevance of contents in this tutorial might yet change as GRUB 2 makes [it] into. . . production.
(Ref: http://www.dedoimedo.com/computers/grub-2.html )

This is oh, so true!  Even the existing Ubuntu tutorials on Grub2 don't match current, shipping configurations - which makes attempts to edit Grub2's boot configuration more difficult - even for seasoned pro's at configuration edits.

Why, oh why, Ubuntu had such an absolutely asinine brainstorm is totally beyond me.

The new GUI:

The ultimate goal of any Linux distribution - especially Ubuntu - is to encourage cross-over adoption by users of other - proprietary - operating systems.  And when we talk about cross-over adoption from other operating systems there are only two others of significance: Windows and Mac.

Mac users don't see their platform as a computer or an operating system; to them it is virtually a religion - with the rest of us being the poor, pitied, un-saved heathen that we are.  Expecting them to drop Salvation according to Jobs in favor of Linux is just silly.  Especially now that they can crow that they have their own 'nix O/S.

So, the best and most obvious choice for cross-over adoption are those users who use the various flavors of Windows.

Microsoft's licensing and activation paradigms have become so onerous and expensive that entire national governments, as well as several states here in the US, (Massachusetts, for one), have completely abandoned Windows in favor of Open Source solutions.

"It is intuitively obvious. . . .", (as my Calculus professor used to say), that Ubuntu should be in a position to garner the lion's share of these cross-over users, right?  And the obvious move to encourage this would be to make the target interface as friendly and familiar as possible.   Right?

So - what does Ubuntu do to encourage Windows user cross-over?  They have gone to great lengths to make their user interface as Mac-like as they possibly can, short of being sued by Apple!  As if Mac-izing the GUI will cause legions of Apple users to abandon The True Faith and jump on the Ubuntu bandwagon. . . . .

Brilliant move Ubuntu!  Encourage Windows cross-overs by plopping them into a completely alien user interface!

In Summary:

Ubuntu's original claim to fame was the attempt to de-mystify Linux and make it increasingly usable by heretofore non-Linux users.  The move by Canonical and Ubuntu's leadership away from these ideals is, in my humble opinion, a huge mistake with potentially disastrous consequences for both Ubuntu in particular and Linux as a whole.

What say ye?

Thursday, March 10, 2011

Terrabyte or not Terabyte
That is the question

In my last post, The 2000GB Gorilla, I discussed some of the issues surrounding the newer 2 terabyte hard drives.  Partition table types, allocation unit sizes and partition alignment all had to be taken into account.

I've been working with a pair of 2TB drives for the past week - and I've become increasingly frustrated.  They'd work wonderfully one minute - I could torture-test them into the ground without so much as a hiccup.  Next thing I know, they blow up leaving dead bodies all over the Peking Highway.

Yet, it's not absolutely reproducible.

The lack of reproducibility, and the fact that the largest number of the drive "failures" occur at the highest eSATA port number, leads me to believe that there is more to this than meets the eye - and I began looking at the controllers themselves.

Looking online I notice that SATA controllers come in three distinct "flavors":
  • 1.5 Gb/s
  • 3.0 Gb/s
  • 6.0 Gb/s
Along with two major versions:
  • 48 bit LBA (logical block address)
  • 64 bit LBA
Where the "64 bit LBA" cards claim to be able to handle the 2T+ drives.

Now WHY should 64 bits of Logical Block Address be required for drives larger than a single terabyte or so?

If I look at the addressing range for a 48 bit LBA, (2^48), I get 281 trillion bytes, (2.81 x 10^14, or 281 TB), if we assume that the smallest item addressed by the logical block address is a single byte.

However – these are logical BLOCK addresses, not byte addresses, and the smallest addressable unit is the sector, (or allocation unit), which is, (usually), 512 bytes.  So what you should have is 281 trillion sectors of 512 bytes each - about 144 petabytes, which is a pretty doggone big number.  Even if we ignore sectors and count just bytes, we still have more than 281 terabytes to play with.

Just for grins and giggles, if I assume that this is 281 (plus-or-minus) tera-BITS – we’d divide by 8, which still gives us well over 35.2 tera-BYTES of storage.

The only thing that comes even close to conventional numbers is dividing the 281T by 512 – which gives us right at 550 gigs.

Again, this does not make sense.

First:  There is no logical reason to divide by the sector byte count.

Second:  If that were true, there would be no 1TB drive on the planet that could possibly work.  They would all puke at 550 gigs due to wrap-around.

Looking at a 64 bit LBA, 2^64 equals about 18.4 exabytes, (1.84 x 10^19 bytes), which should be plenty enough bytes to last anyone for at least the next year or so until the googolplex-sized drives come out.

So, no matter how we slice it, there should be plenty of bytes to go around and I suspect that the 64 bit LBA is more of a marketing tour de force  rather than a real hardware requirement.
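The arithmetic above is easy to double-check; here is the whole back-of-the-envelope exercise in a few lines of Python:

```python
# Back-of-the-envelope check of the LBA address-space figures discussed above.
SECTOR = 512  # bytes per traditional sector

lba48 = 2 ** 48   # number of 48-bit logical block addresses
lba64 = 2 ** 64   # number of 64-bit logical block addresses

print("48-bit addresses:       %.2e" % lba48)                       # ~2.81e14 (281 trillion)
print("...as 512-byte sectors: %.1f PB" % (lba48 * SECTOR / 1e15))  # ~144.1 PB
print("...(mis)read as bits:   %.1f TB" % (lba48 / 8 / 1e12))       # ~35.2 TB
print("...divided by 512:      %.0f GB" % (lba48 / 512 / 1e9))      # ~550 GB
print("64-bit addresses:       %.2e bytes" % lba64)                 # ~1.84e19 (~18.4 EB)
```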

So. . . .  What is the real limiting factor?

My money is on “the controller card and its memory”.

The SATA controllers that talk to both the drives and the computer bus itself have to do a significant amount of data-translation – parallel to serial, as well as serial to parallel.  Addresses as well as data need to be assembled and dis-assembled somewhere, and the serial ATA controllers have to have registers large enough to handle the data widths.

My suspicion is that the controller card memory, which was plenty and more than plenty, when handling drives 1TB or smaller; becomes a critical resource when handling 2TB drives.

I also suspect that my specific controller card, (I am assuming it was spec’d for four 1TB drives max), depends on the fact that at no time will all four drives be sending data absolutely simultaneously as the controller can “control” (duty-cycle), the data streams to keep things in-bounds.

Two 2TB drives running at the same time is the equivalent of all four 1TB drives talking all at once – and that becomes a juggling act that the controller may have trouble keeping up with, since the controller cannot duty-cycle individual 1T data-streams.  And when a hard-drive controller starts dropping the balls, well. . . .  Let's just say that it’s not a pretty sight.

So – IMHO – the real limiting factor here is that the existing hardware SATA controllers have been outgrown by their respective drive sizes; requiring us to either limit the number of 2T drives, (or not use them at all), OR upgrade to a more modern controller that is equipped to handle the larger drive sizes.

What say ye?


Tuesday, March 8, 2011

The 2000 Gigabyte Gorilla

Here's the scenario:

You have a computer that supports SATA / eSATA - or an external drive enclosure that supports SATA - and you decide you want a huge drive to fill it.

You snoop around and find a really good price on 2+ terabyte hard drives, so you buy a couple-or-five, depending on your cash situation.

You bring them home, carry them lovingly to your computer, hook them up, and proceed to partition and format them in the way you usually do.

Unknown to you, there's a 2000GB Gorilla in the room with you.  And that's when the fun begins!

In my case, I wanted to hook them up to the Linux box I am using for my primary file store so that I could make space on my RAID array.  I was planning to move less critical files to a more "near line" storage device, so I needed a very large drive to accommodate them.

So, I did exactly that.  I plugged one in, partitioned and formatted it in the usual way and started copying almost a full terabyte of data over to it.

Unfortunately, about half way, (maybe two thirds of the way), through the copy, the drive errored out and remounted as read-only, causing the entire copy process to go straight to hell in a hand-basket.

I tried everything.  I changed interface adapters, I used a different power supply to power the drive, I even hooked it up directly to my computer's eSATA port.

No difference.  It would still error out about half way through the copy.

So I'm thinking:  "$^%#*&@!! - stinkin' hard drive's bad. . .!" and I get out the second one I bought. (I bought two, so I'd have a spare.)

I repeat the entire process and - sure enough - the drive fails about half way through the bulk copy.

I look on the Internet and I see a whole host of articles complaining that these drives, (from Western Digital), are pieces of GAGH!  Everybody's having issues with them and not a few unkind things were said to - or about - Western Digital.  Not to mention a whole host of other drive manufacturers who appear to be having the same issues.  Even my buddy, Ed, at Micro Center says they're all junk.

Hmmm. . . . .  Is EVERY two terabyte hard drive garbage?  This doesn't make sense to me.  Western Digital, Samsung, Hitachi, Seagate and all the rest of the hard drive manufacturers might be crazy, but one thing is absolutely certain:  They are NOT stupid.  I cannot believe that any reputable manufacturer would deliberately ship crates and crates of drives that are known garbage to an unsuspecting public.

Of course, the "conspiracy theorists" are having a field day:  It's all a conspiracy to get us to buy solid-state drives!

But it doesn't make sense to me.  Why would any reputable manufacturer risk his good name and reputation for the sake of a "conspiracy"?

I still couldn't see the 2000GB Gorilla, but I decided to dig a little bit deeper anyway.

Let's pause for a short trip down memory lane. . . .

Back at the Dawn of Time - when Men were Men, and Hard Drives were Hard Drives, (and starting one sounded like the jet engines on a B-52 winding up), hard drives used a very simple geometry known as "CHS" - Cylinders, Heads, and Sectors.  Any point on the drive could be addressed by specifying the cylinder, (the radial position of the heads), which of the many heads to use, and what sector on that particular platter is desired.

Once hard drives started to get fairly large - larger than about 512 megs - the old CHS scheme had troubles.  In order to address a particular sector on the drive, the number of cylinders and heads had become larger than the controllers could handle, so there were BIOS updates that allowed the drives to report a fictitious CHS geometry which would add up to the correct drive size.

Again, when hard drives became relatively huge, (around 8 gigs or so), there was another issue:  The CHS system could not keep up.  So hard drives, and the respective computer BIOS programs, addressed this issue by switching to Logical Block Addressing, (LBA), where each sector was numbered in ascending order.  And that kept people happy for a while. . . .  But not for long, because hard drives were getting bigger, and bigger, and bigger, and . . . . . .

Enter the 137 gig problem:  We've run out of bits to address all the logical blocks on a large drive.  So there was another hack: Extended LBA, (also known as LBA-48), that increased the bit-count even more.  This allowed the IDE/ATA interface to accommodate larger and larger drive capacities.
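Each of those historical barriers falls straight out of the bit-widths involved.  A quick Python check, assuming the conventional 512-byte sector:

```python
SECTOR = 512  # bytes per sector

# Original BIOS CHS interface: 1024 cylinders x 16 heads x 63 sectors/track
chs = 1024 * 16 * 63 * SECTOR
print("CHS limit:    %.0f MB" % (chs / 1e6))      # ~528 MB (504 MiB)

# Translated ("large") CHS tops out at 1024 x 255 x 63
echs = 1024 * 255 * 63 * SECTOR
print("E-CHS limit:  %.1f GB" % (echs / 1e9))     # ~8.4 GB

# 28-bit LBA - the famous ~137 GB barrier
lba28 = 2 ** 28 * SECTOR
print("LBA-28 limit: %.1f GB" % (lba28 / 1e9))    # ~137.4 GB

# 48-bit Extended LBA
lba48 = 2 ** 48 * SECTOR
print("LBA-48 limit: %.1f PB" % (lba48 / 1e15))   # ~144.1 PB
```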

At around 500-or-so gigs, the LBA addressing scheme, (as well as the entire ATA architecture), was straining at the seams.  There were architectural issues that could not be solved simply by throwing bits at them.

This time - instead of hacking what was rapidly becoming an old and crufty interface - they decided to go in an entirely different direction: SATA, (Serial ATA).  It was faster, it was neater to install because the cables were smaller, and it allowed, (theoretically), a virtually unlimited addressing range.

As a plus, because of the smaller cable arrangement with fewer pins to accommodate, drives could be added externally to the computer - hence eSATA.  Drives were still using LBA addresses, but now the addressing range was much greater.

And. . . .  just to make things even more interesting. . . . .

For the longest time hard drives, and their manufacturers, were leading a double-life.

In public they still supported both the CHS and LBA geometries, but secretly they were re-mapping the "public" geometry to a hidden geometry that had no real relationship to the public one.  And what a life it was - on the outside they had the stodgy, old and conservative wife, but secretly they had the young, sexy mistress making things nice for them.

In fact, this had been going on since the original 512 megabyte limit issue, when the drives started reporting fictitious geometries that would keep the BIOS happy.

"All good things must come to an end" and if you're living a double life you eventually get found out.  Which, by the way, is exactly what happened.

Fast forward to the present day as drives keep getting bigger and bigger.

Somewhere between the 1.5 TB and 2 TB drive sizes, the drive manufacturers reached a crisis.  Trying to keep up the "512 byte sector" facade was becoming more and more difficult.  Making things worse was the fact that almost every operating system had given up addressing things in "sectors" long ago.  Operating systems had started allocating space in terms of "clusters": groups of sectors that were treated as a single entity.  The result was that for every request to update a cluster, a multitude of sectors had to be read, potentially modified, and then written back - one by one.

Early attempts were made to solve this bottleneck by allowing read and write "bursting"; asking for more than one sector at a time and getting all of them read - or written - all at once.

Increasingly large amounts of cache memory on the hard drive were used to mitigate the issue by allowing the computer to make multiple requests of the drive without actually accessing the drive platters themselves.  Since, for a fairly large percentage of the individual drive requests, the O/S would be addressing the same or near-by locations, the drive's cache and internally delayed writes allowed the drives to keep up with the data-rate demands.

Later still, hard drives adopted "Native Command Queueing", a technique that allowed the drive - internally - to shuffle read and write requests so that the sequence of reads and writes made sense.  For example, if the computer read a block of data, made changes, wrote the changes, then made more changes and wrote them again; the hard drive could choose to skip the first write(s) since all the changes were within the same block of data.

Likewise, if multiple programs were using the disk, and each wanted to read or write specific pieces of data; the drive, (recognizing that all these requests were within a relatively short distance from each other), would read all the data needed by all the applications as one distinct read, (or write as the case may be), saving significant amounts of access time.

However. . . . .  There's still the 2000GB Gorilla.

When you get up into the multiples of terabytes, keeping track of all those sectors becomes hugely unwieldy.  Translation tables were becoming unreasonably large, performance was suffering and the cost of maintaining these huge tables, as well as the optimization software needed to make them work, was becoming excessive.  Both the cost of the embedded hard drive controller chip's capacity and speed, as well as the sheer manpower needed to keep it all working, had become a significant expense.
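A back-of-the-envelope calculation shows the scale of the bookkeeping problem - and hints at the fix.  The two-trillion-byte "2 TB" figure here is a round illustrative number, not any specific drive's capacity:

```shell
DRIVE_BYTES=2000000000000   # a round "2 TB" figure, for illustration only

# Things to keep track of with 512 byte sectors...
SECTORS_512=$(( DRIVE_BYTES / 512 ))
echo "$SECTORS_512"

# ...versus with 4 KB allocation units
SECTORS_4K=$(( DRIVE_BYTES / 4096 ))
echo "$SECTORS_4K"

# The bookkeeping shrinks by this factor
echo "$(( SECTORS_512 / SECTORS_4K ))"
```

Nearly four billion individual sectors to track, versus about 488 million allocation units - an eight-fold reduction just by using bigger sectors.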

What happened is what usually happens when manufacturing and engineering face a life-and-death crisis:  All the engineers got together, went to a resort somewhere, and got drunk . . . .

After they sobered up, they came up with a solution:  Drop the facade, and "come clean" with respect to drive geometry.  The result was the new Advanced Format Drive, (AFD), geometry that abandoned the idea of 512 byte sectors, organizing the drive into larger "sectors", (now called "allocation units" or "allocation blocks"), that are 4 KB - 4,096 bytes - in length.

And I am sure you can guess what happened next.  It's what usually happens when someone comes clean about a sexy young mistress - the stodgy old wives had a fit!

The BIOS writers were - and are - still using the "Interrupt 13", (Int-13), boot process - a fossilized legacy from the days of the XT, and maybe even earlier.  And this boot process requires certain things:
  1. The hard disk must report a "sane" CHS geometry at start up.
  2. The Int-13 bootstrap must see 512 byte sectors for the partition table, boot code, and possibly even the secondary boot loader.
. . . . and it's kind-of hard to square a 4k allocation unit size with a 512 byte sector.

So, to keep the stodgy old wives happy, the hard drive manufacturers did two things:
  1. They allow the first meg-or-so of the drive to be addressed natively as 512 byte sectors.  This provides enough room for the MBR, (Master Boot Record), and enough of the bootstrap loader so that the Int-13 boot process can get things going.
  2. The drives would still accept requests for data anywhere on the drive based on 512 byte sectors with two caveats:  There would be a huge performance penalty for doing so, and YOU had to do more of the work to keep track of the sector juggling act.  And God help you if you dropped the ball!
And this is exactly the crux of the problem:  Many operating systems, (surprisingly, later versions of Windows are a notable exception), depend on sharing the juggling act with the hard drive itself.  Even Linux's hard-drive kernel modules assume that the drive will shoulder some of the load when using the legacy msdos partition table format.

I am sure you can guess what happens when HE expects you to be shouldering the entire load, and YOU expect him to shoulder his share.

This, my friends, is the 2000GB Gorilla and if he's not happy, things get "interesting". . . .

So, how do you go about taming this beast?

Interestingly enough, there has been a solution to this all along.  It's only now - when capacities large enough to require the AFD drive geometry have appeared on single-unit drives - that things have come to a head.

The old "msdos" type of partition table makes assumptions about drive geometry that are no longer true.  Not to mention the fact that the msdos partition table simply can't handle exceptionally large drives - it tops out at 2 TiB - not without jumping through hoops or some really ugly hacks that we really don't want to think about.
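That ceiling is easy to verify for yourself: the msdos (MBR) partition table stores sector counts in 32-bit fields, and it assumes 512 byte sectors, so the most it can describe is:

```shell
# 32-bit sector count, times the assumed 512 byte sector size
MBR_MAX=$(( (1 << 32) * 512 ))
echo "$MBR_MAX"    # bytes - i.e. 2 TiB, right where 2 TB drives live
```

It is no coincidence that the trouble showed up at exactly the 2 TB mark.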

The solution is to just abandon the msdos partition type, as there are a host of other partition types that will work just as well.  One in particular, GPT, is especially designed to work with more advanced drive geometries.

You do it like this:
(I'm using GNU parted, so that you can actually see what's happening.)
# parted
(parted)  select /dev/[device]
(parted)  mklabel gpt
(parted) [. . . . .]

Presto!  A non-msdos partition table structure that is compatible with the newer drive geometries.

You have to make sure that the partition table's clusters, (allocation units), are set up so that the logical allocation units, (where the partition thinks the clusters are), and the actual, physical allocation units on the hard drive itself, are aligned properly.

If you fail to do this you could suffer the same massive performance penalty as if you were addressing 512 byte sectors; because for every allocation unit you read or write, multiple physical allocation units may have to be individually read, updated, and/or written.  Fortunately the Linux partitioner, parted, will complain bitterly if it notices that things aren't aligned properly.

The solution - when using parted - is to skip the first meg of the drive so that physical and logical allocation units align correctly.

Like this:
(parted)  mkpart primary ext4 1 -1
(parted) [. . . .]
(parted) quit

Here you make a primary partition, typed as ext4, starting at a 1 meg offset from the beginning of the drive and stopping at the very end, (-1).  Of course, you can set the partition type to ext2, ext3, or whatever - parted only records the type here; the actual file system gets created later.  I haven't heard of this being tried with xfs, Reiser, etc., so Your Mileage May Vary.
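The arithmetic behind that 1 meg offset is worth seeing once.  One MiB works out to 2048 of the 512 byte logical sectors, and since each 4 KB physical allocation unit holds exactly 8 logical sectors, any start point divisible by 8 lands on a physical boundary:

```shell
# A 1 MiB offset, expressed in 512 byte logical sectors
START=$(( 1048576 / 512 ))
echo "$START"

# One 4 KB physical allocation unit = 8 logical sectors,
# so "aligned" means the start sector is divisible by 8
if [ $(( START % 8 )) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned"
fi
```

Recent versions of GNU parted can also check this for you after the fact - "align-check optimal 1" reports whether partition 1 starts on a properly aligned boundary.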

By the way, this works with the Western Digital drives I purchased and, ideally, other manufacturers should map their drives the same way.  However if you get a warning that the partition is not aligned correctly - look on the web, try different offsets and keep plugging at it until you get the geometry lined up just right.

I finally had the chance to try this with a couple of 2TB Seagate drives and the partitioning scheme mentioned above worked like a champ with them as well.  So there's a really good chance that, whatever brand of 2TB hard-drive you buy, this fix will work just fine for you too.

If you are creating multiple partitions, you have to check alignment for each and every partition from beginning to end.  Fortunately, if the first partition is aligned properly, there's a good chance that subsequent partitions will align properly too.

Once you do that, you can use mke2fs, (or whatever), to create the actual file system in the normal manner.  And once that is done you should notice that the drive access times are MUCH faster than before and you don't get a mid-drive logical crash!

It may appear more complicated now, but I strongly suspect that those who work with these fundamental drive utilities and drivers will rapidly bring their software up-to-date so that this stuff is handled transparently in future releases.

There is, unfortunately, one caveat with all this:  You can kiss backwards compatibility with legacy versions of Linux goodbye - as well as compatibility with legacy non-Linux operating systems when you switch to any kind of advanced partitioning scheme.

Of course this is not news.

When drives switched from CHS to LBA, from LBA to LBA-48, or from parallel ATA to serial ATA, backward compatibility for the newer drives was also lost.  You could regain it if needed, but not without using some butt-ugly hacks or specialized hardware adapters.

And my money's on the almost certain possibility that - in a few years, when hundreds-of-terabyte or petabyte hard drives become mainstream - the AFD geometry will need a major update too.

What say ye?


Sunday, March 6, 2011

An Open Letter to Canonical and the Ubuntu Team

As Mark Twain once said:  "You shouldn't criticize where you, yourself, cannot stand perpendicular." (or something like that. . .)   Anyway, the message should be clear: take the 4x4 pressure-treated beam out of your own eye before trying to remove a splinter from someone else's.

So - I really hate to criticize someone else's work - especially if I'm not a "contributor" to that work.

However, noting the current trend in the Ubuntu development, I feel compelled to make my feelings and opinions known.

Dear Canonical,

When I switched to Ubuntu from Red-Hat / Fedora, I was especially attracted by the Ubuntu slogan: "It's all about giving the user choices."

And this is a great concept. If a particular user wants a system that is essentially an "appliance", it's there; a distribution that is simple to configure and easy to use.

Likewise, if a user wants to "get under the hood" and get his hands dirty, that's available too.

Some of the features are absolutely unprecedented in the world of 'nix operating systems, such as the automagic "apropos" feature where a mistaken or mis-typed command is rejected - and the errant user is supplied with "did you mean. . . .?" suggestions. Even to the relatively experienced sysadmin, this feature is both welcome and useful.

I also absolutely love the graceful way Ubuntu now handles device or mount errors in fstab - instead of puking its brains up with a kernel panic and dropping the user into a very limited shell - you tell the user "Such-and-so didn't mount or isn't ready yet," and you offer the user the choice of doing an immediate fix-up, or just continuing without that device.

The ability for me to say "Yes, I know about that, just keep on going." is invaluable.  And it is especially invaluable for people like myself who often work with multiple possible configurations at the same time. Even if there is a real problem, (Oops! I forgot to update the UUID!), this is much easier to take care of from within the GUI, than from within a severely limited shell environment.

Unfortunately, both on the forums and within the distributions themselves, there is an increasing disdain for "dumbing down" the distribution.

Folks, I hate to break the bad news to you, but it is exactly and precisely this; the "dumbing down" as it were, that makes Ubuntu such a popular distro - you don't have to be an uber-geek to use it.  In fact, I mentioned in an earlier article on this blog that "Ubuntu is the first Linux distro that I would seriously consider installing on my wife's computer, or even my mother's."

My wife is the quintessential anti-geek, and my mother thought that Windows 98 was the best thing that ever happened.  They don't want stacks of Hollerith punch cards, or lists of cryptic commands - they want a system they can turn on, use for something useful and be done with it.  Just like a toaster or microwave.  These are people for whom today's multi-button TV / Satellite / DVD / Home Entertainment remote control is beyond their technological grasp.

But! "You don't want to dumb-down Ubuntu". And that, in my humble opinion, is a great loss for both the distribution in particular and those people who might be convinced to use it.

Secondly: Ubuntu is, (supposedly), all about giving the user "choices". . . . .

I don't know about you, but my understanding of the word "choice", (as in "choices"), means that I get the option of choosing between more than one alternative; that I get an active say in what, and how, my system is organized and configured, when it is being organized and configured.

Unfortunately, that credo has - apparently - gone by the boards at Ubuntu.

A couple of cases in point:

Someone, somewhere, had the brilliant revelation that Ubuntu should switch to Grub2 from the venerable, stable, and well understood Grub boot loader.

Again, in my humble opinion, Grub2 represents a throwback to everything that was universally hated and despised in the LILO loader. It's difficult to configure because the user has to find - and edit - an obscure "template" file and then run a special command that makes the changes for him. This is what made LILO such a (ahem!) "popular" loader - you couldn't just go edit a config file somewhere, you had to jump through hoops and pray to the Blessed Virgin that you didn't inadvertently bork things up beyond all recognition.

It seems to be an incredible coincidence that Ubuntu is the only distribution that has embraced Grub2. Even the very experimental distributions that seek to be at the Bleeding Edge of the curve have stayed away from Grub2 in droves.

But the most disturbing aspect was this: I wasn't offered a choice. Nowhere in the installation or upgrade process was I asked "do you want to use Grub2 or Grub as your boot loader?" Grub2 is the default. And in my opinion, it's clearly "default" of whoever had that brilliant idea in the first place.

More recently someone had the amazing brain-storm that the default GUI should suddenly transition from the familiar Windows-like interface, to a much more Mac-ish design with tiny, difficult to see and use, Mac-like buttons all on the left hand side.

It is important to remember when designing a GUI, that not everyone is 20 years old and not everyone has 20/20 eyesight. Buttons, especially these fundamental control buttons, should be big and bright so that they are easy to find and easy to use.

Again, this was not something that I had the opportunity to select or not as I saw fit. Instead, it suddenly and magically appeared.

This change, more than any other change Ubuntu has foisted upon us, has me shaking my head in absolute wonder.

Grub2? If I make an incredible technological stretch of my imagination, I can - maybe - see some sense in supporting the then-new EFI, (Extensible Firmware Interface), boot protocol; despite the fact that the only PC architectures that required it were PCs based on the now-defunct Itanium processor.

However, the move to Mac-ize the GUI is absolutely beyond my comprehension, no matter how far I stretch my imagination. Does Ubuntu seriously believe that by this change in the GUI, that they can convert legions of Mac users to Ubuntu?

You forget two essential facts:
  • To the Apple product user, the Mac isn't a system; it's more like a religion - with the rest of us being the "unsaved heathens". Switching to any other operating system would be sacrilege of the highest order! Mac users may - under duress - use other operating systems at work because they are forced to; but they complain endlessly about it.
  • Over 95% of the personal computers in use today, (as well as a substantial percentage of the servers), use Windows of one flavor or another. The Windows GUI paradigm is, unarguably, the most popular and well known GUI on the planet. And I strongly suspect that should we ever venture to Mars or other planets in our Solar System, Windows will be in the vanguard of that venture.

So, if we assume that one of the main thrusts of the Linux community is to attempt to broaden the Linux user-base, where do you think these users are going to come from? The Mac? Who are you kidding?! They already crow about having their own 'nix based system - FreeBSD - with the Mac GUI pasted on top of it. A 'nix based Mac was inevitable, but by slapping the Mac GUI on it they keep their religious sensibilities and the purity of their beliefs.

No, the real market for cross-over users are those that use Windows. The Microsoft licensing model is becoming increasingly onerous. The real cost of implementation is becoming increasingly expensive to the point that entire governments, both state and national, have eschewed Windows in favor of Open Source solutions.

Changing the basic GUI paradigm from a familiar Windows-like paradigm to a much less popular and more difficult to use Mac type interface only serves to drive away users that might be tempted to make the switch. Linux is already different enough in many respects, why make it even more alien?

Allow me to offer the following suggestions:
  • Lose Grub2.
    I don't know of a single Sysadmin using Linux who would, willingly, get within a hundred yards of Grub2. And it's the Sysadmins' recommendations on what operating system to use that drive the implementation of Linux in general, and Ubuntu in particular.
  • Forget the "pseudo-Mac" interface.
    The easier you make it for the Windows user to make the transition, the more Windows users will actually want to make it. Again, it's the power-user that is at the forefront of evangelizing Linux. The more you frustrate them, the less likely it is that they will recommend the switch.
  • Keep It Simple, Stupid! (The "KISS" rule)
    "Dumbing-down" Ubuntu to make it more easily within reach of the average user is absolutely the primary key toward the goal of getting people to transition away from Windows.
  • Don't forget to actually give the user a CHOICE.
    Most distributions, prior to making a radical change, "deprecate" the original method for several releases before making the actual change itself.
    First of all, it puts people on notice that a fundamental change is in the works.
    Secondly, it gives users a chance to "try before they buy" - and weigh in on the proposed change. Does this change annoy 90+% of your user-base? Uh, maybe we should rethink it. . . .
  • Fork the distributions.
    Make the ".0x" distributions focus on what works, not on what can be changed. Avoid making radical changes in the design - unless absolutely, positively, inescapably necessary. This gives the user an important continuity of design that is essential in production environments. This continuity of design also eases the transition shock of those switching from other operating systems.

    Make the ".1x" distributions the "experimental" distributions - where new things are tried and eventually proven or discarded. Eventually, when a new feature or other change is sufficiently proven and useful, it can then be merged with the main-line ".0x" releases.

In essence, Ubuntu would actually consist of two, separate and distinct, release paths. The first for those users who want long-term stability and the other for those users who want to be on the Bleeding Edge.

It would also have the advantage of giving each prong of the fork a one-year release cycle, as opposed to the current release cycle of one every six months.

You could have two separate teams, each working with enough time to make their distributions the best there is. It would also give time for a few "Official Beta Releases" to test the waters, so to speak.

Please remember that it is ultimately the user-base itself that decides if a particular distribution sinks or swims. Right now Ubuntu is riding high - just don't forget that it's a long hard fall when you get toppled from your ivory pillar.

What say ye?


Tuesday, February 22, 2011

The Only Feature Linux Needs
Concepts in usability

A few days ago while searching for something else, I found an interesting article about Ubuntu in general, and the 10.04 release in particular, which you can find right here.

Actually, to be perfectly honest with you, this guy has a rather extensive web site, (located here), that I found most interesting.  This gentleman has a rather broad range of interests and opinions - and you could do far worse than exploring his site.

This particular article was titled: The Only Feature Ubuntu 10.04 Needs.  It was a discussion about what Canonical, (the enterprise supporting Ubuntu Linux), needed to do to make Ubuntu even better.

To make a long story even longer, (laughing!), I decided to toss my two centavos into the fray there.  And what a fray it was!

Apparently Tanner Helland, (the blog's author), touched a raw nerve with this article as there were more replies, covering a longer time-span, than any five articles I've seen just about anywhere else.

In reply to my posting, he sent a very interesting e-mail back to me:
My name is Tanner Helland (from www.tannerhelland.com).

First off, I apologize for contacting you via email. I almost never check email addresses used to submit comments to my site - but in this case, I felt it was okay to make an exception.
: )

I just wanted to let you know that your comment comparing Linux to appliances, tools, and toys was absolutely spot-on. Excellent assessment. Frankly, your comment was so good that I think it deserves an article of its own.

I see that you run your own blog - if you haven't already, might I suggest posting your comparison there? I worry that it won't get enough attention on my site's comment thread (there are a LOT of comments on that article...).

At any rate, excellent insight - thanks for sharing.
Ahhh!  Wonderful e-mail, except that I'm getting a sore shoulder from patting myself on the back. (grin!)

Re-reading the article convinced me that maybe he was right, so here it is.  Feel free to comment as you see fit.

This post is remarkably “after the fact” but I’ll put in my two kopecks.

First of all, I will say that I heartily endorse the basic premise of this thread – that, ideally, Ubuntu (or any other distro for that matter), should work “out of the box” without a herculean effort to get things to work with it.

My own basic premise about computers and the “mass market” is that they should become an “appliance.” Like your toaster, microwave, or whatever: you should be able to turn them on, use them, and turn them off again without having to even think about the internals.

The problem with this premise is that computers are, by definition, incredibly complex pieces of equipment. And, unfortunately, what people expect of their computers represents a very broad spectrum of possible uses. If your microwave was expected to meet such a disparate range of uses, it too would be as error-prone as computers are today.

I divide computer users up into three very broad categories:
  • Those that want computers to be an appliance.
  • Those that use the computer as a tool.
  • Those that use the computer as a “toy”.
The appliance users want exactly that. You turn it on, click on a few icons, it does certain things, and you’re finished with it.

The tool users understand that – as a tool – the computer will have its moments. Everyone who has used any tool more complicated than a hammer is used to having to change the bit or blade now and then to fit the task at hand. They understand that, sometimes, the bit or blade breaks and that’s that. And, occasionally, they pick up a bit or blade that doesn’t fit the chuck or mandrel of their tool and they need to either adapt it some way, or get a different tool for that task.

However, the tool users don’t care if the motor has brushes, is a synchronous AC motor, or is a variable speed DC motor using pulse-width, (duty cycling), as a way to control the motor’s speed. They want to be able to chuck-up the right bit and get to work. In essence they want it to be an appliance, but one that is more powerful and configurable in exchange for increased risk.

The “toy” users are the ones who really want to “get under the hood” as it were – the 21st century equivalent of the ’60′s motor-heads who spent most of their time, and darn near all their pay, tinkering with their car to make it the baddest machine on the strip.

These are the ones who want to experiment with different things, perhaps they do some coding or engage in test cycles. Or they get on blogs like this one and relate how things occurred to them.

Windows – of whatever version – is primarily targeted to the first and second type of users. It’s like a ‘fridge. You can’t adapt it very well without using expensive tools that most people cannot afford or do not know how to use.

Linux, on the other hand, is trying to embrace the entire spectrum of user models ranging from the appliance users to the users that like to tinker with it and get their hands dirty.

And, I believe that this is a good thing. It’s kind-of like brainstorming, it appears chaotic but eventually becomes something really useful.

IMHO, were I Canonical, I would strive to meet these ideals by doing certain things:
  • Make the x.0 releases benchmark releases that focus on long-term stability at the expense of new whiz-bang features.  (These releases would be of primary use to those in the first two categories.)
  • Make the x.1 releases the more “experimental” releases where new features are introduced, (after, hopefully, enough beta time to make them reasonably usable).
  • Periodically release a “stop-and-catch-our-breath” x.0 release – one which is not about new features at all, but about polishing up the stuff they already have.
I would make similar suggestions about the repository organization. It is frightening to go into a “common” repository and see packages that have warnings saying that it can, and will, casually destroy your system unless you are used to Category-5 clean-room practices.

For each of the broad categories of repositories, I would divide them up into three sections:
  • Those packages that are virtually harmless – the standard utilities that people – even the appliance users – might need. Word processing, web browsing, e-mail, things like Skype (or an equivalent Open Source app), and so on. If you *REALLY* wanted to bork your box with one of these, you could, but you’d really have to work at it.
  • The packages that are more specific to a particular task and might carry an increased risk of danger – like a buzz-saw carries an increased risk – but if intelligently used won’t hurt you. These would be the more “tool” like packages. Virtually all of the system-administration packages would fall into this category. With these there is a risk, and if the tool is used carelessly, it will hurt you, possibly badly. These tools can bork-up your box rather nastily if you are careless, but you’d have to be pretty damn careless to do it.
  • The very specialized packages that are for a very particular need – or are possibly experimental releases carrying a vastly increased risk if not used very carefully – like a power-driver that uses .22 or .30 caliber loads to fire pins through solid steel or heavy concrete. These packages would be the kind that, if you really don’t know what you’re doing, you shouldn’t be here. These would be potentially powerful tools that, unless used *VERY CAREFULLY* and with a certain amount of forethought, will almost certainly bork your box.
But, by the very same token, the “appliance” level user should have trouble even finding these packages.

The tool and toy users would have, perhaps, a little less difficulty finding them. And anyone going into that section of the repository would be warned that – unless carefully used – these tools *will* cause harm.

Right now, it is much too easy for an inexperienced user to – not knowing any better – download the .30-06 (magnum load) tool by mistake and end up with a box borked beyond any reasonable attempt at repair.

The big issue with Linux, and why it generates so much controversy, is because of what it is, and the very broad spectrum of users that it tries to embrace without fault or favor.

Windows can be sharply focused to a particular end, whereas Linux, almost by definition, cannot be. And that creates issues and responsibilities that Linux will just have to face.

What say ye?