The personal computer (PC) has rapidly evolved from the old familiar user-configurable, upgradeable open system into a short-life-cycle, factory-sealed consumer product. In a profit-driven global market this is inevitable. But there is no reason why amateurs should not recreate the flexibility we once enjoyed, and even try to improve on it.
The original PC was built to a sound engineering objective. It was based on a standard 19-inch rack cabinet containing a steel chassis which housed a standard size motherboard into which standard form option cards could be inserted. To add a new function, you just added an option card. To upgrade a function, you simply replaced an option card. The technology of each of the PC's functional areas could thus evolve independently.
This made the original PC not only engineer-friendly but also eco-friendly. Theoretically, you could buy an original PC and keep the steel chassis, case and fittings for life. There was no need to dump or recycle them each time there was a leap in technology. You just swapped the naked electronics for a newer, more advanced version. Moreover, you only had to swap the circuit card for the particular function you wanted to upgrade; cards which did not need upgrading stayed where they were. Thus each year, as more and more functionality was crammed into an ever smaller space, the PC could evolve and advance indefinitely within its original standard chassis.
Since I bought my first IBM PC XT in 1984, I have diligently tried to follow the noble principle of evolutionary upgrade with minimum throw-away. But that's not the way it turned out. Since 1984 I have been forced - by economic expedience and changing standards - to buy 6 completely new PCs, each time relegating my old one to the scrap heap.
I am convinced that this was forced by commercial expedience. A future safe flexible design is not the most profitable. Manufacturers wanted to convert the PC from an engineering innovation into a consumer product. To do this, they had to find a way to either:
Limit each machine's operational life-span to the statutory warranty period + as little extra time as the market would reasonably accept, or
Keep technology advancing rapidly enough to render each machine technologically obsolete in roughly the same amount of time.
PC technology is too near the cutting edge of scientific knowledge and engineering capability for the first option to be practicable. The second option is therefore the one manufacturers seem to have adopted.
The upshot is that manufacturers are building ever more of the PC's functionality into the motherboard. This means that when you want to upgrade a particular function, you must add an option card, yet leave the original functionality in place. This results in unnecessary power consumption and potential operational conflicts. Advances in only a few independent functional areas need take place before your only sensible option is to replace the motherboard. And if you do that, you might just as well replace the whole PC.
Many of the latest PCs and servers have now done away with the standard PC chassis, case and fittings. They are super-thin 1U (1¾" high) units with everything on the motherboard. The only way you can add options to these is externally via an appropriate bus. The PC is rapidly metamorphosing into one of those dreaded hermetically-sealed black-box consumer products labelled:
"Warranty void if seal broken. Contains no user-serviceable components."
For how much longer will the full specification of the commercially available PC be open and published in full and obtainable easily by anybody?
After using a standard-form PC for 20 years, I eventually bought myself a laptop computer with a 15-inch screen in 2004. I still have it in 2017 and it is still working well.
The laptop has some advantages over a normal PC. It is compact, light and portable. Having an internal battery, it is immune to sudden power failures. In such an event I am in no danger of losing data or having files corrupted if they are being written to at the instant the power fails. Nor do I lose downloads or uploads that are in progress at that instant. If the power failure is prolonged, I have time to allow short-term tasks to finish and gracefully shut down all applications and then the operating system without fault, corruption or damage.
The laptop also has certain disadvantages against the normal PC. Although I tried to persevere with its restricted keyboard and swipe pad in place of a mouse, I rapidly came to realize that these simply cannot be used with the same ease, rapidity and dexterity as a standard keyboard and mouse. Furthermore, the screen cannot be placed at an ergonomically acceptable height for viewing while leaving the keyboard and swipe pad in an ergonomically acceptable position for typing and clicking. Thus I had the choice of enduring either neck ache or arm ache. Inevitably, I had to sit my laptop on a pile of books with an external keyboard and mouse attached and placed in a position on the desk that offered comfort for typing. Compared to the conventional PC, the laptop is much less user-maintainable and much less flexible in that user-specific expansion cards cannot be installed. It is also doubtful whether a laptop would stand the stress of a continuous work cycle.
The portability of the laptop is a great advantage if one needs to work while travelling or at alternative locations, so long as that work does not involve the use of peripherals such as printer or scanner. Notwithstanding, the portability of the laptop is also a great disadvantage in that it is very easy to steal, which can result in not only the loss of the machine itself but, perhaps more importantly, the loss or compromise of valuable data held within its internal storage.
The laptop does not replace a conventional computer system. A complete system requires a printer and a scanner. These are peripherals which must be connected externally to the laptop, as with a normal PC. The small loudspeakers in a laptop are not suitable for serious listening to audio content, especially music. Consequently, for adequate quality, an external sound system (with several loudspeakers plus connecting cables and power supply) is also required.
My 2004 laptop had a built-in 56kbps modem, which was the state of the art at the time. With the advent of ADSL and cable services, the modems are now exclusively external devices supplied by the corporate entity that provides Internet access. Current laptops generally have wireless adapters built in. Notwithstanding, whether the laptop is connected by wireless or LAN cable, it still requires a cable or ADSL modem (together with its power supply and cables) to gain access to the Internet. Since most households and offices now have many computers using the same Internet access point, a router plus associated cord, power supply and signal cables are also required.
Thus a laptop requires the same peripherals as a conventional PC, which creates the same inevitable rat's nest of wires, cables, wall warts, mid-cord power supply units and signal cables. Unlike the standard PC, the laptop itself also has an external power supply, which is not a small item by any means. This is yet another contributor to the labyrinthine rat's nest under the desk, which must be separately lifted up and cleaned under every week.
I have never possessed or used an integrated computer. It is somewhat like a laptop with a detachable screen, except that the internal components of the computer are housed within the screen casing. Its conventional keyboard and mouse are separate. The main unit (the screen) can be wall mounted or fitted with a desk stand like a conventional monitor. Although it combines the screen and the system unit, the integrated computer needs an external wall wart power supply, which a conventional PC does not, so the number of separate items involved is the same.
Unlike the conventional PC, the integrated computer is unable to accommodate standard expansion cards. So, for instance, when the time comes to upgrade from normal high definition (HD) video to the ultra-high definition (4K) standard, you need a whole new computer. You cannot simply buy a new monitor and insert a 4K video card as in a conventional PC.
The cramped conditions inside the integrated case can also cause a problem with hardware updating, cooling and cleaning out accumulated dust. Some integrated computers have touch-sensitive screens so they can be used without a keyboard. A touch screen is definitely not practicable for writing a report, let alone an article, a computer program or a book. A full sized keyboard is needed for this kind of work. Besides, touching the screen leaves finger marks and results in a screen which becomes dirty very rapidly.
With the market for the PC and the laptop saturating, it seems that the IT industry is desperate to create new things to sell. Among these is the tablet computer, one version of which is shown on the right. Its neat style and compact form make it ideal for impulsive purchase as a "must have" personal accessory. I have been using one, on and off, for over a year.
I have to admit that I, personally, do not find it a whole lot of use. Its small 154 × 90 mm screen is very restrictive and cannot adequately display documents or web pages designed around the international standard A4 paper size. The print is far too small to read fast and comfortably. In fact, I find it very fatiguing to try to read documents on it. I also find the user interface non-intuitive and difficult to use. On top of this, the only application programs that seem to run on it are the very restricted range obtainable from the operating system vendor's on-line app store. The biggest killer is that the operating system deliberately disallows the user from writing files to the removable memory extension card. For me, this renders the memory card useless.
The operating system also contains arbitrary restrictions which have generally become ever more severe with each new update. One of these concerns the playing of video files. I have no interest in downloading films from the Internet. However, I did like the idea of being able to play, on the tablet, videos I shoot myself with my own camera. Notwithstanding, I quickly found that this was most effectively disallowed.
To test the idea I shot a short video of a friend running on a treadmill. I then transferred it via my main computer and local area network to the tablet. Of course, I could only store it in the tablet's restricted on-board memory. I was not allowed to store it on the much larger memory card which the vendor had sold with the tablet. I then tried to play the video which, of course, I had already played perfectly well on my computer. I tried to open the video with the native video player. In place of the video a message was displayed saying "You are not authorized to view this DivX protected video on this device". I was referred to the DivX website, where I was told I would be pointed to a website where I could purchase the video. Obviously, since I had shot it myself, in my own home with my own camera, it wasn't available for sale anywhere. I wrote a terse email to DivX. They replied saying that this simply should not happen. They asked me to send them the video, which I did. Despite a couple of reminder emails, I never heard back from them again and no solution was ever offered. Consequently, under its current Android operating system, the tablet cannot be used for displaying my videos. The only solution I was able to find was to erase the Android operating system and install Linux with the VLC multimedia player.
A tablet computer has no keyboard. As far as I am aware it has no means of connecting one (unless it is somehow possible to connect a keyboard via its ubiquitous sub-miniature USB socket, which I doubt). The only option is therefore the touch-screen keyboard, which is displayed when required. It also has a nasty habit of displaying when not required, which is a thorough nuisance. Typing with such a keyboard, even with the so-called pen, is, for me, very slow and prone to lots of errors. I usually have to try several times to hit the right key (or rather, key-image), neighbouring keys frequently being triggered in error. I couldn't write a book with such a "keyboard" in 100 years. I wouldn't even attempt to write an article. The sheer effort required to type on such a device would detract far too much from my concentration on the subject about which I was writing.
All in all, I see the tablet not as a work tool but simply as a gadget. It is for casually browsing certain web content, for use as a personal music player, for taking photographs and for exchanging messages and photographs via proprietary social media sites. Nothing much more. But it sells well to a gullible public. And that is what it's all about.
Oh yes, it still needs a wireless LAN connection to access the Internet, which means you still need your modem and router with their rat's nest of cords, power warts and signal cables. So no saving there.
In their eternal quest for novelty, the IT and telecommunications industries are packing ever more functionality into an ever smaller space. The result, at the current state of the art, is what is known as the smartphone, an example of which is shown on the left. The smartphone attempts to fulfil the composite role of a mobile phone and a personal computer. It has a screen with a graphical user interface, which includes a touch-screen keyboard that is supposed to appear as and when required. It is able to connect to the cellular radio towers for telephone calls and to a local Wi-Fi network for Internet access. I am also given to understand that it can connect to the Internet through its cellular radio service.
In a strict pedantic sense, the smartphone is all that it claims to be. It has much more than the necessary and sufficient functionality to be called a mobile telephone. It also has much more than the necessary and sufficient functionality for it to qualify as a personal computer. Notwithstanding, its usefulness and usability as either requires considerable qualification.
The device's tiny 43 × 57 mm screen is only really adequate for displaying short text messages of up to about 100 characters. That is, unless you want to restrict its user-base to children, adolescents and young adults. So it is good for a conventional short message service (SMS) and for short-message based social media sites. It is essential to remember that, when conversing by short message services, at least half of this tiny screen is occupied by the touch-keyboard. I remember once being intrigued while watching the intricate thumb-dance performed by a young female relative as she used her smartphone to converse with a friend via a social media site. Notwithstanding, as impressive as her thumb-dance was, the net throughput of characters was extremely slow compared with normal keyboard typing, and the individual bursts of activity were very short. Nobody could write a letter, report, article or book this way. In any case, my thumbs are far too big to hit a single key at a time on her phone's Lilliputian keyboard the way she did.
The device's tiny screen also has the collateral effect of making the graphical interface very complicated, both visually and procedurally, and hence difficult to use. To illustrate this, consider the task of making a phone call on a smartphone compared with a conventional telephone. With a conventional telephone I lift the handset, tap the number I wish to call on the number pad and wait. At the end of the call, I simply replace the handset. On my old cell phone I tap out the number on the key pad and press the green phone button. On the smartphone, however, the procedure is far more complicated.
† I often have to tap it 2 or 3 times with increasing force.
If I had not disabled the security function I would have to enter a password to unlock the screen every time the padlock appeared. What an irksome procedure to have to go through just to make a phone call. I will not even attempt to describe the horrendous and unacceptable complication of doing anything else on this kind of device. Suffice it to say that I never managed to type in successfully the rather long, strong password to my PC wireless network using the smartphone's touch keyboard. I do not use a smartphone any more. I still use my old conventional cellphone, which is, at least, usable.
Thus, I am afraid that, for me, the smartphone doesn't even rank as a gadget. It is no more than a super-miniaturized novelty. I was obviously never meant to be part of its market. I do not diminish the technical achievement of the engineers, programmers and designers who produced it. I do not mock the insight of the marketeers who saw its enormous potential for consumption by the world. However, as a tool for enhancing the human condition, I think it is extremely negative for the two following reasons.
Firstly, although the smartphone is absolutely loaded with useful functionality, its ability to make that functionality available to its human user is less than abysmal. And this is almost entirely to do with the smartphone's physical size. Or, more precisely, its size-to-functionality ratio. Engineers have been able to miniaturize IT and communications technology ever further. But they have been utterly unable to effect a corresponding miniaturization of the human user. So while devices shrink at an accelerating rate, the human user does not. Specifically, the size of the human hand and the definition of the human eye remain unchanged. Consequently, for the human user to be able to continue using these ever-shrinking devices, the interface between user and device has had to become ever more complicated. Indeed, this increasingly fidgety complication has pushed the older generations right out of the frame. They are no longer able to use such devices.
In contrast, the standard sized computer keyboard, an example of which is shown below, can be used easily and without complication by people of all age groups. This is because its layout is essentially intuitive and the keys are set in a 19 × 19 millimetre grid, which provides a key size and key spacing that is ideally suited to the size of the average human hand and arm reach. It is utterly impossible to design or construct a human-usable input device within the small screen of the smartphone (shown on the right at the same scale) capable of anything like the same bandwidth of information flow.
As a direct consequence of this, the commercially-induced establishment of the smartphone as the de facto personal device for Internet access has forced people to abandon the thoughtfully-written letter, report, article or document as means of communicating ideas and replace them with the impulsive one-liner message to which the smartphone's user interface better lends itself. Hence, a reduction in the capability of the physical communication channel has forced a reduction in the quality of the intellectual content conveyable through it. So from being the ultimate source of intellectual research and exchange, the Internet has, for most people, become nothing more than a place for exchanging pointless quips.
The smartphone came about, supposedly, from the IT industry's desperation to create a new saleable product in a dwindling PC market. Notwithstanding, I think the quest to kill off the conventional PC in favour of the modern compact personal device was orchestrated by a far more sinister interest. A conventional personal computer, with an open operating system, connected to the Internet, offered unregulated ordinary people enormous flexibility of association and independence of free thought. And this was seen by the established orders as far too dangerous to allow.
How could they stop it? Legislation and force would be far too obvious and might precipitate insurrection. A subtle, almost invisible way had to be found. Stealth was the order of the day. The problem with the PC is the enormous semantic bandwidth it provides to its user. This had to be choked right down to a width through which the exchange of intellectual thought became impractical, if not impossible. And the smartphone did this perfectly. With their minds redirected onto trivia, the people were once again tranquillized and contained.
A dangerous consequence is that more and more of the individual's interaction with society - especially corporate society - is being forced through the personal device [or smartphone]. For instance, more financial transactions now take place via the smartphone than by any other means. The natural outcome will be that anybody who does not have - or cannot use - a smartphone [such as myself] will be unable to buy or sell anything. It is as if the smartphone has become [or at least is becoming] the proverbial Mark of the Beast.
For instance, I have bank accounts with HSBC, Bradesco and Santander. I can still conduct all my banking necessities via Internet banking on my personal computer with HSBC and Bradesco. However, I can no longer do so with Santander. Santander's web site simply does not work on my PC with any available browser. I cannot log in. Presumably, to do Internet banking on my Santander account I would now have to use a smartphone, which I do not have and cannot use anyway. It would be extremely dangerous for me to attempt banking transactions via a smartphone. Typing in an amount of money would take me many attempts to get the figures entered correctly, and the possibility of my pressing or touching a sensitive button inadvertently would be extremely high. I could easily end up paying a king's ransom for a bottle of water or signing up to some service or contract without even being aware that I had done so. I must assume, therefore, that Santander no longer perceives people my age as part of its market. I expect the other banks will eventually follow suit.
The inevitable conclusion that one may draw from the foregoing is that the user interface for all manner of devices has undergone an overwhelming retrogression. The essence of this retrogression is that the procedure, which the user has to learn and repeat to achieve a given objective, has become relentlessly ever more complicated in relation to the inherent logic of the objective itself.
For instance, consider the simple task of switching a device on or off. In ancient times, one simply used an on-off switch as shown on the right. The objective of switching the device on or off is a single bit of two-state logic. The operation the user is required to do in order to achieve the objective is also an act of single bit two-state logic. It is just like switching a light on or off. Simple and obvious. And inherently understandable to anybody of any age: to the young, to the adolescent, to the adult, to the old.
In contrast, turning off my PC requires me to move the mouse so that its pointer on the screen hovers over the on-off switch symbol. [I use the XFCE user interface. I understand that with Microsoft Windows it is necessary to click the START button, which opens a Christmas tree of a menu in which one has to hunt for an on-off symbol. What logical sense that makes I don't know.] Then I must click the left mouse button, whereupon a dialogue box opens asking me if I am sure I want to shut down my computer. It then allows me a given amount of time to answer. I hate time-outs. They are intrinsically stress-inducing. The operating system then shuts down. However, I must still remember to switch off the mains supply, which I can only do by first switching off the surge protector and then pulling the plug out of the wall socket. After all, there could be a storm during which lightning could hit the electricity cables and burn out my surge protector and perhaps even my computer.
That is a long and complicated procedure when all I want to achieve is a simple objective which is inherently nothing more than an act of single-bit two-state logic. It is certainly not beyond the wit of man to have the operating system detect the absence of mains voltage and thereupon expedite a graceful shut-down — including the saving of all open files and current machine states — using adequate residual capacitance built into the computer's power supply. But, even now, after 40 years since the birth of the IBM PC, we still haven't got there.
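The detect-and-shut-down mechanism I describe is not hard to sketch in software, at least in outline. The following shell script is purely illustrative: it assumes a Linux machine that exposes its mains adaptor status under /sys/class/power_supply, and the device name ('AC'), the file path and the poll interval are all assumptions that vary from machine to machine.

```shell
#!/bin/sh
# Illustrative sketch only: watch the mains supply and expedite a graceful
# halt when it disappears. The /sys path below is an assumption; the actual
# device name varies between machines (AC, ACAD, ADP1, ...).

AC_FILE="${AC_FILE:-/sys/class/power_supply/AC/online}"

mains_present() {
    # The kernel reports "1" when the mains adaptor is online, "0" when not.
    [ "$(cat "$1" 2>/dev/null)" = "1" ]
}

watch_mains() {
    # Poll until the mains fails, then halt. A real implementation would
    # first sync disks and save all open files and machine states.
    while mains_present "$AC_FILE"; do
        sleep 5
    done
    shutdown -h now
}

# Uncomment to arm the watchdog (requires root):
# watch_mains
```

The residual capacitance in the power supply would, of course, have to be large enough to carry the machine through the few seconds such a shut-down takes.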
Of course, the inherent logic of other operations within a computing device is more complicated than that of the simple on-off operation. Some even encompass the notion of continuous variability between a minimum and a maximum. An example of this kind of operation is the volume control of a sound device such as an audio amplifier. The most ubiquitous physical device for expediting this operation is the self-evident volume control shown on the left, comprising a chicken-head knob with a graduated scale.
Its objective functionality corresponds exactly to the action required to use it. Using it is no more complicated than the objective achieved by using it. And its current state or setting is immediately obvious just from looking at it. It is an ideal control.
Now compare this with the volume control of a PC sound system. Some PC keyboards have built-in volume controls for sound. Notwithstanding, these are almost always buttons or, very rarely, a slider control. The slider control is functionally similar to a rotary control but, inevitably, it is not quite as easy to use, especially for large, clumsy old hands. The two-button volume control is much more difficult to use and offers no inherent visual indication of the current volume setting. However, my keyboard, like most, includes no form of physical volume control.
I must therefore rely on my GUI (graphical user interface) to supply a means of controlling the sound level of my PC. The volume control on my XFCE desktop is accessed via a small loudspeaker symbol on the services panel at the top right of the screen. To a regular computer user, the loudspeaker symbol may obviously be the means of accessing the volume control. But this is not inherently self-evident.
I doubt whether my 86-year-old mother-in-law would recognise it as such. Nevertheless, the old must increasingly be considered as computer users, especially since corporate, banking and government entities increasingly require them to access commercial and public services and to fulfil official obligations by this means, while making it increasingly difficult, expensive and obstructive for them to do these things the old-fashioned way.
I therefore have to move my mouse and guide the pointer onto the rather small loudspeaker symbol at the top of my screen. [I remember that my mother, when she was using her own PC, used to find guiding the mouse pointer onto such a small symbol extremely difficult.] I must then click the left mouse button to display the volume control dialogue box. There, I must click and hold the mouse button on the nub of the volume slider and move it with the mouse until the volume level is as I wish it to be. [I cannot imagine that a large proportion of old people would be sufficiently adept physically to be able to do this. I wonder if I will be thus able in a decade or so's time.] If I wish to change the input source of my sound or the destination of its output, I must click on a further link at the bottom of the dialogue box. This precipitates a larger tabbed dialogue box, the logic of which is far from self-explanatory.
This is all extremely and needlessly complicated just to achieve the objective of the simple chicken-head volume control illustrated above. The logic of the procedure the user must follow is vastly more complicated than the inherent logic of the desired objective. And the physical dexterity required of the user is also vastly higher — perhaps even prohibitively so for the old or inadept.
The procedures required of users to set sound equalisation and compression — so often necessary with the plethora of content sources now available — are mind-bogglingly complicated. Yet the inherent logic of each can be achieved with a single continuously-variable chicken-head rotary control, as shown below for an old fashioned amplifier.
The 'Source' control is a discrete selector switch. All the other controls are continuously variable. The 'Gain' control sets the gain of the amplifier according to whether the source signal be weak or strong. The volume control sets the output volume of the sound in the room, irrespective of whether the source signal be weak or strong. The 'Shape' control changes the shape of the frequency response profile of the amplifier from flat to favouring the narrow frequency range of human speech. The 'Compressor' gradually reduces the dynamic range of the amplifier's volume to compensate for cinema-style sound tracks where speech is too quiet and sound effects are too loud. It allows the user to set the extremes closer together or further apart according to a loudness/softness scale.
The functionality behind these controls can be implemented analogically or digitally as desired. Personally, I favour amplification, equalisation and compression implemented analogically using electronic valves (vacuum tubes). To my mind, valves are amplifiers; transistors are switches. Naturally, I prefer my sound system and its controls to be entirely separate from my PC, with controls that are logically no more complicated to use than the objective they achieve, that indicate their settings visibly at all times and that require nothing extraordinary in the way of physical dexterity.
When I was a young programmer, I welcomed the arrival of GUIs and windows with menus. One could quickly select an item from a menu. I thought menus and tabs constituted the ideal user interface for software.
Years before, I had experimented with a crude menu system on a console typewriter. My first menu was to select the type of radio-navigation station I needed to tune to on a flight simulator. The typewriter would type out the menu each time, and all I then needed to enter was the menu item number of the station type I wanted. Previously, the user had to enter a station type number looked up in a user manual. The menu system seemed so much simpler and did not require the user to consult any external document. What didn't spring to my young mind at the time was how much paper I was wasting.
But menus and tabs did not materialise until decades later. In the meantime the Unix terminal stepped in as the channel through which users interacted with computers. This required the user to learn the shell script language, the central component of which was the Unix command, an example of which is shown below.
navstart -f -pt -trace > output.txt
This command tells the computer to execute the program called 'navstart', which must get its input from local files (as opposed to an archive file or a remote web server), display the GUI window annotations in Portuguese, and store a transaction trace of what it is doing in a file called 'output.txt'. It is a language that needs learning. The user must therefore spend some time initially learning it. However, once learned, the language is an extremely fast, powerful and easy-to-use way of instructing a computer to expedite any kind of task, from the very simple to the exceedingly complex.
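For illustration, here is a minimal sketch of how a command like 'navstart' might decode its flags. The flag meanings are as described above; the parsing loop itself, with its default values, is purely hypothetical and not the program's actual code.

```shell
#!/bin/sh
# Sketch of how a command like 'navstart' might decode its flags.
# The flag meanings follow the description above; everything else,
# including the defaults, is an illustrative assumption.

parse_flags() {
    input=archive    # default: input from an archive file
    language=en      # default: English window annotations
    trace=off        # default: no transaction trace
    for arg in "$@"; do
        case "$arg" in
            -f)     input=local ;;   # take input from local files
            -pt)    language=pt ;;   # Portuguese window annotations
            -trace) trace=on ;;      # emit a transaction trace on stdout
            *)      echo "unknown flag: $arg" >&2; return 1 ;;
        esac
    done
    echo "input=$input language=$language trace=$trace"
}

parse_flags -f -pt -trace    # prints: input=local language=pt trace=on
```

The '> output.txt' in the example above is not a flag at all: it is the shell itself redirecting the program's standard output into the named file, which is precisely the kind of composability a menu cannot offer.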
Then along came GUIs, in which each program opened as a window with a set of menus below its header, through which the user instructed the program as to what he wanted it to do. This meant that users no longer had to learn a shell script language in order to use a computer. They could just glance through the menus to see what the program could do and select what they wanted it to do. The GUI thus opened up computing to a much wider range of people who were not adept at learning shell script languages, had not the time, or were simply too lazy.
But GUIs and windows were not quite the God-send they were at first thought to be. GUIs and windows require lots of program code and are voracious consumers of computing resources. Their greatest drawback, however, is that there is a rather low limit to the size and complexity a menu system can reach while remaining self-explanatory enough to give the user a cogent logical model of the program's functionality.
The result has been applications, such as word processors, with labyrinthine menus whose logical structures are too rudimentary for the user to find anything unless he has learned the whole menu system by rote beforehand. And having been a victim of both, I find the over-populated menu system far more complicated to learn and much harder to use than a well constructed shell script command. A menu system simply cannot incorporate the flexibility of a script. So, to my mind, learning 'Bash' is well worth the effort.
Of course I am not against the window menu, but only for simple applications such as text editors. Nor am I against command-line initiated programs spawning purpose-designed GUI panels for indication and user interaction. But for serious computing I prefer to use a terminal script.
Volume controls on some PC keyboards, TV remotes and smartphones are operated by pressing "+" or "−" buttons. Instead of being two discrete buttons, they may be combined into a single rocker switch with a "+" sign at one end and a "−" sign at the other. In either case, you press the "+" to increase the volume and the "−" to decrease it. If you press either button once and release it immediately, it will move a displayed slider to the right or to the left, respectively indicating an increase or decrease in volume. If you maintain a button in its pressed state, the slider continues to move in the respective direction at a prescribed rate, continuously increasing or decreasing the volume as it goes.
This is indisputably more difficult to use than the chicken-head rotary control shown earlier, for the following reasons. The rotary control visually shows the current volume setting at all times: the button and rocker controls do not reveal the current volume setting. On top of this, the button and rocker controls are very slow to respond to user action. If the invisible current volume setting is far too high, the "−" button has to be pressed and held, causing the volume to diminish slowly while the excessive sound blasts your ears; whereas the chicken-head knob can be whipped round to kill the excessive sound instantly. The logical objective of the control is to place the volume at a specific desired level. The chicken-head rotary control allows the user to do this with direct positive action; whereas the rocker or button controls exert only indirect positioning, requiring of the user the increased dexterity and dynamic judgement to release the control as its slider is passing the desired level upwards or downwards in arbitrary jumps. The rocker switch and button controls are thus much less precise and much more difficult and frustrating to use than the chicken-head rotary control.
The confusion is even greater with TV controls. I frequently have to go out late at night in a not-so-safe neighbourhood to rescue my mother-in-law from a TV crisis. Her TV itself is only equipped to receive analogue TV signals, which are no longer available. So she has a small box underneath, which is a digital converter. To receive signals from the converter, the TV's input selector must be set to HDMI1. Being old, she often, when switching her TV on or off, grips the remote controller in a way that inadvertently presses a button, causing the TV to do something like change channel. This sets the TV into analogue air mode. Getting the input selection back to HDMI1 is quite a chore, which she seems unable to learn. After all, bearing in mind the very simple objective she wishes to achieve, the over-complicated procedure required makes no logical sense at all. So I have to go over to her house and reset the TV back to the one and only channel she ever watches.
I first have to switch off the TV and switch it back on again. The screen displays noise (snow). I then need to feel along the top of her TV for the second button from the left. I press it. An input selector menu appears on the screen. But I must not tarry or the menu will disappear and I will have to start all over again. I must keep pressing the same button until the highlighting reaches the line with "HDMI1". Then just wait for the menu to disappear. My mother-in-law's favourite channel should then reappear. There is another option for doing this but it involves even more steps — also with time-outs — and involves pushing little plastic buttons that are so small that I need to use my thumb nail to do so. There is no visual indicator, like the position of a chicken-head knob, to indicate which input is currently selected. To achieve this objective, my 86-year-old mother-in-law with her ailing physical dexterity has no chance. Yet she is well able to cook nice meals, which shows that, under normal circumstances and with the normal controls on her cooker, she is not deficient in normal work-a-day dexterity.
Context-dependent controls are useful and simple. For example, I always used to provide a context-dependent help key, namely F1 [Function Key 1 on the PC keyboard], which caused a context-dependent help text to appear at any point in the program. But I am not here referring to this kind of context-dependent control. I am referring here to the way smartphones economise on the number of controls required to accommodate all the required functions.
For example, on my 5-inch LG smartphone, one of the two side buttons normally used to increase and decrease the sound level causes the phone to boot up into command terminal mode when it is held pressed while pressing the switch-on button on the back. I discovered this one day by accident because I was holding the phone by its sides, inadvertently pressing the volume "+" button while I was also pressing the back button to switch on the phone.
Despite all the above, what really takes the prize for incomprehensible complication is a Marathon runner's watch, which I bought a few years ago when I still ran mini-Marathons. It tells the time. It monitors heartbeat. It calculates my energy consumption while running. It can time my runs. And it has lots of other functions, the full extent of which I never really knew. But I don't use it any more. I managed to get it to monitor my heartbeat while running and beep if I exceeded my 160 bpm recommended limit. But then, despite its instruction manual, printed so microscopically that I need a magnifying glass to read it, I cannot even figure out the button combinations necessary to correct the time, let alone anything else. None of it makes any logical sense. The battery eventually ran down and so it is now left in the drawer, useless.
I now use a simple Mondaine watch that does nothing more than tell the time and has one single-function button for setting the time. It has a black face with clear white markings, white clock hands with a red seconds hand sporting Mondaine's hallmark big red blob on the end. It is the most useful watch I have ever had. The lesson for me is obvious. Piling ever more functionality into a single device makes it ever more complicated and difficult to use and hence ever less useful.
If these modern devices are so user-unfriendly, why do people buy them? Firstly, because there are no longer any simple user-friendly devices available. Or, at least, only very few, which are inevitably in short supply and difficult to obtain. They have all been pushed out by powerful commercial advertising tuned to expanding its market by selling innovation. Secondly, the features-benefits-incentives recipe of commercial advertising persuades people that small feature-packed devices are both desirable and user-friendly. It emphasises the benefits of the large diversity of functions built into a small package, without mentioning the negative aspects of the complexity and physical difficulty of the procedures the user must master to be able to access and use that diverse functionality.
The only demographic sector able to master such torturous procedures comprises adolescents and young adults. And it is this sector, which has most money with least commitment, at which the marketing is aimed. After all, a corporation is not in business to produce what is ideal for everybody but to make the highest possible profit for its shareholders. The adolescent/young adult market thereby succumbs to the innovation-oriented advertising and mindlessly buys these feature-intensive difficult-to-use products, implying by default that the more mature sector of the population simply hasn't got the whiz-kid mentality and physical dexterity necessary to involve themselves with such things.
Packing ever more functionality into ever smaller space, requiring ever more complex and difficult user procedures, is done by the manufacturer entirely for its own benefit. It is not for the benefit of the user, who indeed does not benefit. In fact, quite the opposite. The fact that the more mature sector of the population is excluded from technical innovation is irrelevant to the prime objective of maximising profit. If the world wants products that are specifically designed for easy use and for the benefit of everybody, it will have to wait for a future epoch, in which the profit motive takes second place, or ideally, no place at all.
In an ideal user interface, the primary rule is that the logic involved in operating any control must be no more complicated than the logic of its functional objective. In other words, the act of using the control must be no more complicated than what it does for me.
A control must have only one positive function and must indicate its current state entirely by its passive appearance. Examples of controls that meet this requirement are the two-state on-off switch and the chicken-head rotary control. The function of a control must not depend on the state of any other control, except for the fact that it will have no function at all if the device it controls is switched off. The operation of a control must be a direct analogue of its functional objective, having the same time-differential order as that objective. The chicken-head volume control meets this requirement because the change in volume that it effects is in direct proportion to how much it is rotated. The +/− button volume control doesn't meet this requirement because pressing it causes the volume to undergo a predetermined rate of change, which is one time derivative higher than the act of pressing or releasing it.
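The difference between the two orders of control can be made concrete with a little arithmetic. In the illustrative sketch below (the volume levels and the rate figure are invented for the example), the positional rotary control reaches any target level in a single action, whereas the rate-based button control needs a number of ticks proportional to the distance between the current and target levels.

```python
def rotary_actions(target: int) -> int:
    # Zeroth-order control: the knob position IS the volume, so one
    # physical action places the volume exactly at the target.
    return 1

def button_ticks(current: int, target: int, rate: int = 2) -> int:
    # First-order control: holding the button changes the volume at a
    # fixed rate per tick, so the time taken grows with the distance
    # between the current and target levels.
    ticks = 0
    while current != target:
        step = min(rate, abs(target - current))
        current += step if target > current else -step
        ticks += 1
    return ticks

print(rotary_actions(20))    # 1 action, regardless of distance
print(button_ticks(90, 20))  # 35 ticks of held-down button
```

This is why the excessive blast at level 90 can be killed instantly with the knob, while the button forces you to endure it for 35 ticks of the slider.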
My ideal PC has a positive-action double pole double throw on-off switch, which isolates the whole device from mains electricity when it is switched off. The switch has an off-state air-gap large enough to block passive power line transients and also protects my PC by having a smaller air-gap between each mains-side pole and an external ground (or earth) rail. The PC self-starts when mains power is applied by the on-off switch and also performs an automatic graceful shut-down when mains power is switched off. A second switch of the same type determines whether or not the PC should perform a graceful shut-down when mains power is withdrawn or transfer to an internal float-charged battery as its power source. There is also a separate manual re-boot/shut-down button, which operates when power is available from either mains or battery as appropriate.
For audio, my ideal PC uses the full sized standard quarter-inch jack for audio input and output. It has one jack on the front panel for headphones and one on the back panel for connecting an external amplifier or other audio devices.
Incidentally, my ideal headphones have a coiled cable so that, unlike the thin straggling cords of most modern headphones, they don't get pinched under my chair castors, get trapped in my desk drawer or trip me as I get up, whipping their flimsy jacks out of their tiny sockets and probably bending them in the process. All these shortcomings were well and truly resolved way back in the 1950s. But modern production cost-cutting and inappropriate miniaturisation have brought them all back to us with a vengeance.
A simple two-way switch on the front panel is used to select between the two audio channels. Headphone volume is set by a chicken-head rotary control, also on the front panel. This allows me to snatch the volume down instantly to a lower level to protect my ears after changing to a new louder sound source — a quick physical action, which is impossible with push-button or on-screen slider controls.
That the single positive-action controls I have described are far easier to use than the context-dependent multi-action push buttons and mouse-operated on-screen sliders is indisputable. But why is this? It is because the former have the enormous physical advantage of 3-dimensional space as their primary means of selecting and regulating functionality. On-screen controls are limited to selection and regulation in at most 2 dimensions, in a space which is only indirectly accessible to the user. Context-dependent buttons have, in effect, only zero to one dimension of physical selectivity.
I do not want a novelty gadget. I want a workhorse. I want a personal computer that provides me with a comfortable human interface, which is the right size for fast and efficient use by my hands, viewing by my eyes and hearing by my ears. I want it to be future-safe. This means that its various functional areas must be independently updatable and upgradable. I want it to be tidy. I do not want a rat's nest of cables and wall warts under my feet. It must therefore include power supply, surge protector, mains isolator switch and no-break battery backup inside its case. Throwing the isolator switch causes the operating system to shut down the computer gracefully under battery power.
I want its case to be large enough for me to be able to install or replace any single component without the need to remove or in any way interfere with any other components. I also want the case to be large enough to accommodate generous filtered air flow for cooling. For these reasons I decided on what is known as a full tower case. This is based on the 23-inch rack standard rather than on the 19-inch rack standard. The nominal dimensions of a full tower are: height 570 mm, width 230 mm, depth 550 mm. I would like a motherboard with a fairly fast CPU because I often create ray-traced images and animations, which are somewhat processor-intensive. On the other hand, I do not need the high performance demanded by video gaming, since I do not play computer games. I prefer just one 1 TB disk to minimise noise and vibration, and to avoid the annoying low harmonics generated by multiple disks in proximity.
The only cables emerging from the back of the case are the power cable, UHDMI and power cables for the monitor, a 100BaseT LAN cable, and a single USB cable to connect to the keyboard. The mouse is connected to a USB repeater socket on the keyboard. That is 6 cables: no wall or mid-cable power units. I do not want a webcam because I often work at my computer sparsely dressed during hot weather. The video monitor gets no-break power from the computer's power supply. Headset sound (including microphone input) is via a durable coiled cable from a single robust quarter-inch jack on the front panel. The front panel also has extra USB sockets and memory card slots.
As well as my standard PC described above, I would also like what I shall call my server version. This is the same as my standard PC but with certain additions. I wish to add, within the same case, an Internet access modem (cable and/or ADSL), a router card and a small RISC technology diskless computer to be used as a server. All three devices are powered from a single independent low-power battery-backed power supply.
These powering requirements mean that these devices cannot be of the kind installed in an expansion slot of the main PC motherboard. They need to be able to remain fully operational when the PC itself is switched off and unplugged from mains power. They must have a separate mains cable and low-power battery backed power supply that has no connection at all with the main PC housed in the same case.
The type of cable/ADSL modem depends on the type of Internet connection and is usually supplied by the Internet service provider (ISP). Somehow, I would like to mount the modem and its wall-wart within the computer case. The router card can be of a fairly standard kind with 8 LAN-side sockets and 1 WAN-side socket. It must include a firewall function and port-forwarding control via a web interface usable from the main PC installed in the same case. Its WAN-side socket is connected to the Internet modem via a short LAN cable.
The small diskless computer has only one connection to the outside world (apart from its power cord). This only connection is to the router card via a short LAN cable. The small diskless computer's operating system, application programs and servable content are accommodated in an on-board SD card, which is prepared and updated from an external SD card slot on the front panel of the main PC. The applications run on this small diskless computer are a POP/SMTP email server, a web server, an FTP server and servers for various other networks such as eDonkey, gnutella, G2 and Freenet. The small diskless computer is managed via telnet, which is restricted to access exclusively by the main PC housed in the same case.
The Internet is based on national infrastructures which are connected together by an international backbone. These are licensed by national governments and co-ordinated by international agreements. The cables and switching centres (links and routers) are owned and operated by an amalgam of large public and private corporations. The individual end-user is the infinitely inferior party in any contract for the provision of Internet service. In fact, any such "contract" is not really a contract: it is an ultimatum. The ISP simply makes its customer "an offer he can't refuse". The end-user either accepts it or is excluded from the Internet. This to me is very unhealthy, as is evinced by the insurmountable obstruction encountered whenever one attempts to terminate or change such a "contract" of service. This is to say nothing of the disruption caused by the incessant telephonic pestering to upgrade one's service.
An alternative to the established Internet infrastructure is starting to emerge, at least outside densely populated city and urban areas. It is based on a particular way of using wireless networks. Such networks are usually set up by groups of enthusiasts and other non-profit entities. Consequently, access to this potentially global network is free to participants. Each merely has to provide his own wireless router. If I were to become a participant, I would not make a server version of my ideal PC. Rather, I would put my local LAN switch, router and server in a separate full-tower computer case, together with a wireless unit.
This alternative to the Internet takes the form of a wide-area wireless network, which operates in what is called ad hoc mode. This means that each node is of equal status within the network: no node acts as a central administration unit or hub. Each node builds a list of all other nodes within its direct radio range. It designates 3 of the most distant of these (up to 50km away, depending on terrain) as its onward routers. These 3 are best spaced at geographic bearings 120° apart. It uses whichever of these 3 is in the most appropriate geographical direction to route IP packets to whichever node in the global network it needs to communicate with at any particular time.
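The onward-router selection described above can be sketched as a simple procedure. The following is only an illustration of the idea, not a real routing protocol: the neighbour data and the tie-breaking rule are invented for the example. The node takes its most distant neighbour as the first onward router, then fills the bearings 120° and 240° away from it.

```python
def bearing_diff(a: float, b: float) -> float:
    # Smallest absolute difference between two compass bearings.
    d = abs(a - b) % 360
    return min(d, 360 - d)

def pick_onward_routers(neighbours):
    # neighbours: list of (node_id, distance_km, bearing_deg) tuples.
    # Take the most distant neighbour first, then fill the bearings
    # 120 and 240 degrees away from it, preferring small angular
    # deviation and, among equals, greater distance.
    first = max(neighbours, key=lambda n: n[1])
    chosen = [first]
    for offset in (120, 240):
        target = (first[2] + offset) % 360
        rest = [n for n in neighbours if n not in chosen]
        best = min(rest, key=lambda n: (bearing_diff(n[2], target), -n[1]))
        chosen.append(best)
    return [n[0] for n in chosen]

nodes = [("A", 48, 10), ("B", 45, 130), ("C", 40, 250),
         ("D", 12, 15), ("E", 30, 200)]
print(pick_onward_routers(nodes))  # ['A', 'B', 'C']
```

A real implementation would also have to re-run the selection as nodes appear and disappear from radio range, but the geometric essence is as above.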
The topology of the wireless network is as shown on the left. The maximum radio range (distance) is depicted by the large greenish yellow background circle. The majority of the neighbouring nodes within direct radio range are shown in a magenta colour. These form the Community Area Network (CAN). The outer three cyan coloured circles represent my node's designated onward routers, which, from my point of view, form part of the Global Area Network (GAN).
A good plan would be to reserve the 2·4 GHz band for communication between a node and its 3 onward routers, reserving the 5 GHz band for communication with other local nodes (i.e. nodes within direct radio range). This way, the node's 2·4 GHz signal can be directed by high-gain radiators towards the node's onward routers, while the 5 GHz signal is left omni-directional. There is a maximum node density above which this kind of wireless network would be unable to provide a usable bandwidth to each node. I guess this threshold to be where the distance between adjacent nodes falls to 150 metres. In patches of higher node density, local switches and cable links may be needed to relieve the wireless network.
The logical essence of my nodal router is illustrated on the right. The 2·4 GHz wireless adapter card is connected through coaxial cable to three external (roof-mounted) directional radiators. The 5·0 GHz wireless card is connected to an external omni-directional antenna, also via coaxial cable. The 4-port LAN switching card connects the nodal router to up to three other normal PCs plus a LAN printer/scanner. A further 4 ports could be provided by a second LAN switch card. The motherboard is a low-consumption type incorporating a 32-bit RISC CPU. 4 GB of RAM should be sufficient to enable the servers to operate most of the time with the disk not spinning. The nodal router must be able to operate, on a continuous duty cycle, from 240 VAC, 115 VAC, 24 VDC or 12 VDC sources via an ATX power supply with built-in relay-based surge protection and battery backup. Its operating system is a headless version of either FreeBSD or Linux. This specification leaves plenty of space in its full tower case for efficient cooling, easy maintenance and future additions.
The job of the nodal router is fourfold. Firstly, it provides routing service to its three partners within the Global Area Network (GAN). Secondly, it provides PCs on my Local Area Network (LAN) with access both to the GAN and to neighbouring participants in the Community Area Network (CAN). Thirdly, it operates various types of servers, which are universally accessible from the LAN, CAN and GAN. These include: SMTP & POP servers, Web & FTP servers, plus specialised servers for the eDonkey, Gnutella, G2 and Freenet networks. For security and robustness, the Web server was written by me to have the minimum necessary and sufficient functionality to serve the content of my website. Fourthly, the nodal router runs a Network File System (NFS) server to provide common storage accessible to designated PCs on the Local Area Network.
Each of the two wireless networks is connected to the servers through its own separate dedicated software train. Each software train comprises an ad hoc mode network manager, a router for the particular network and a firewall. The two software trains operate in complete mutual isolation. The firewall in each train ensures that the only listening ports open are exclusively those required by the operating servers. The nodal router is managed exclusively via SSH. Permitted SSH connections to the nodal router are restricted to one or more designated PCs (IP addresses) on the LAN. The nodal router's SSH cannot in any way be accessed from the CAN or the GAN. Updating of served content is done via rsync over SSH.
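The allow-list principle of each train's firewall (only the ports required by the operating servers are left open, everything else denied) can be sketched as follows. The rule strings and the port table below are schematic illustrations of the idea, not the syntax of any particular firewall.

```python
# Standard well-known ports for some of the services named above;
# purely for illustration.
SERVICE_PORTS = {"smtp": 25, "pop3": 110, "http": 80, "ftp": 21}

def firewall_rules(services):
    # Build a default-deny rule list that opens only the listening
    # ports the given servers require; any port not explicitly
    # allowed is caught by the final deny-all rule.
    rules = [f"allow in tcp port {SERVICE_PORTS[s]}"
             for s in sorted(services)]
    rules.append("deny in tcp all")
    return rules

for rule in firewall_rules({"http", "smtp"}):
    print(rule)
# allow in tcp port 80
# allow in tcp port 25
# deny in tcp all
```

Because each software train generates its rule set only from the servers it actually runs, no listening port can be open by accident.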
The 'out and about' environment is, for me, not conducive to creative thought or productive work. The concentration and intensity with which I use a personal computer requires the quiet solitude of my dedicated workstation. When I am out and about, I want to be free from being hooked into the world of information and communication. I want to experience and enjoy where I am and the people I am with. This is why my ideal PC is the large static beast I have described.